\section{Introduction} The nature of dark matter remains an unsolved problem whose solution might reside at the intersection of cosmology and particle physics. The gravitational evidence for DM from cosmological observables is beyond doubt, but its particle nature is still hypothetical \cite{Feng:2010gw,Bertone:2004pz}. Weakly interacting massive particles (WIMPs) are among the most widely studied candidates for dark matter \cite{STEIGMAN1985375,Arcadi:2017kky,Bergstrom:2000pn}. The mass of the DM particle can be very light, as for axions \cite{Kawasaki:2013ae}, or it may emerge at the TeV scale \cite{PhysRevLett.64.615,Blum:2014dca}. The thermal production of DM in the early universe, known as the freeze-out process \cite{Steigman:2012nb}, is a natural paradigm in cosmology, resting on the same mechanism that makes very successful predictions for the light element abundances and the cosmic microwave background. The mass of the dark matter particle and its interaction type are key ingredients in the search for its direct detection (DD). Interactions with velocity-suppressed or momentum-suppressed DM-nucleon scattering cross sections are instances where DM may evade detection in direct and collider searches \cite{Nobile:2021qsn}. Direct searches for DM candidates with masses from around 10 GeV up to a few hundred GeV via DM-nucleon interactions have been a dedicated strategy in underground DD experiments \cite{Akerib:2016vxi,Aprile:2017iyp,Agnese:2017njq}. In fact, we do not know whether WIMPs must necessarily interact with atomic nuclei. At any rate, if DM interacts with nucleons, it might lie in a mass range that the current DD experiments cannot exclude. Either possibility is consistent with the absence of DM-nucleon elastic scattering signals in the DD experiments to date. One possible avenue in the direct search for dark matter is that DM might interact exclusively with the SM leptons, with suppressed interactions with nucleons. 
The focus here is on WIMP candidates with masses in the range $\lesssim 10$ GeV communicating with the SM leptons through the exchange of a light scalar mediator. This type of DM interaction receives stringent constraints from astrophysical and cosmological observations \cite{Cyburt:2015mya,Raffelt:1996wa,Ade:2015xua,Hinshaw:2012aka}. Additionally, searches beyond the SM in rare kaon decays \cite{CortinaGil:2021gga,Krnjaic:2019rsv}, $e^+e^-$ colliders \cite{TheBABAR:2016rlg,Adachi:2019otg,BaBar:2020jma}, beam-dump experiments \cite{Davier:1989wz,Bjorken:1988as,Gninenko:2014pea,Chen:2017awl,Marsicano:2018vin} and the muon anomalous magnetic moment (MAMM) \cite{Bennett:2006fi,Abi:2021gix} are highly motivated probes of light dark matter with a leptophilic scalar mediator. Moreover, we apply the newest results from Xenon1T \cite{Aprile:2019xxb}, which probe DM-electron scattering for DM masses in the range $(0.03-10)$ GeV. The SM prediction for the muon magnetic moment reads $a_\mu^{\text{SM}} = (116591810 \pm 43) \times 10^{-11}$, where contributions from QED \cite{Aoyama:2012wk,Aoyama:2019ryr}, QCD or lattice QCD \cite{Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019mqg,Keshavarzi:2019abf,Kurz:2014wya,Davier:2019can,Borsanyi:2020mff,Melnikov:2003xd,Masjuan:2017tvw,Hoferichter:2018kwz,Gerardin:2019vio,Pauk:2014rta,Danilkin:2016hnh,Roig:2019reh,Blum:2019ugy} and electroweak interactions \cite{Czarnecki:2002nt,Gnendiger:2013pva} are taken into account with the highest precision. The first measurement indicating a deviation from the SM prediction was made at Brookhaven National Laboratory (BNL), $a_\mu^{\text{BNL}} = (116592089 \pm 63) \times 10^{-11}$ \cite{Bennett:2006fi}. The newest measurement, which confirms the deviation with improved statistics, was announced by the Fermi National Accelerator Laboratory (FNAL), $a_\mu^{\text{FNAL}} = (116592040 \pm 54) \times 10^{-11}$ \cite{Abi:2021gix}. 
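The size and significance of these deviations follow directly from the numbers quoted above. The following Python sketch (an illustrative cross-check, not part of the original analysis) combines the uncorrelated uncertainties in quadrature:

```python
# Hedged numerical sketch: muon g-2 deviations from the values quoted in the
# text, all in units of 1e-11. Uncertainties combined in quadrature.
import math

a_sm,   err_sm   = 116591810, 43   # SM prediction
a_bnl,  err_bnl  = 116592089, 63   # BNL measurement
a_fnal, err_fnal = 116592040, 54   # FNAL measurement

def deviation(a_exp, err_exp):
    """Central value of a_exp - a_SM and its significance in sigma."""
    delta = a_exp - a_sm
    sigma = math.hypot(err_exp, err_sm)
    return delta, delta / sigma

delta_bnl,  nsig_bnl  = deviation(a_bnl,  err_bnl)
delta_fnal, nsig_fnal = deviation(a_fnal, err_fnal)
```

The FNAL value alone deviates by roughly $3.3\sigma$; the $4.2\sigma$ figure quoted later in the text refers to the combined experimental world average.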
In order to explain the deviation, a large number of investigations applying various models beyond the SM have been performed. Among them there are models introducing DM candidates interacting with the SM leptons via a leptophilic scalar \cite{Agrawal:2014ufa,Boehm:2020wbt,Garani:2019fpa,YaserAyazi:2019psw,Liu:2021mhn,Bai:2021bau,Ge:2021cjz,Horigome:2021qof, Chun:2021dwx,Borah:2021jzu,Yin:2021mls}, via a generic scalar mediator \cite{Athron:2021iuf,Arcadi:2021cwg,Zhu:2021vlz} and through a vector mediator \cite{Bell:2014tta,Ghorbani:2017cey,Athron:2017drj,Arcadi:2021yyr}, with emphasis on the MAMM. The present work examines a dark matter scenario in which the DM candidate is a vector gauge boson in an abelian scalar gauge theory. The gauge boson acquires mass when the symmetry is broken spontaneously. Thus the mass of the gauge boson is determined by the gauge coupling and the vacuum expectation value (vev) of the new scalar. On the other hand, the scalar mediates the force between DM and the SM charged leptons. In this work the scalar interaction with the SM leptons is induced by dimension-6 operators. Models with a scalar mediator motivated by an effective field theory with dimension-5 operators are studied in \cite{Batell:2017kty,Batell:2016ove,Chen:2015vqy}. The main purpose of this work is two-fold. First, we would like to see whether there are DM candidates and appropriate scalar mediators that explain the newest muon magnetic moment anomaly while at the same time satisfying other constraints from indirect searches. Second, we investigate whether the strongest upper limits on the DM-electron and DM-proton scattering cross sections from Xenon1T are sensitive to the remaining viable parameter space. The structure of the paper is as follows. In section \ref{model} the DM model is presented and the effective operators of dimension-6 are motivated by introducing a UV complete model. A discussion of the evaluation of the DM abundance is given in section \ref{WIMP-Planck}. 
Several terrestrial and astrophysical constraints are introduced in section \ref{various}. Our final results are shown in section \ref{final-results} after imposing the upper bounds from DD experiments. We finish with a conclusion. \section{Model} \label{model} The model we consider here contains a complex scalar field gauged under a $\text{U}^\prime$(1) symmetry with the Lagrangian \begin{equation} {\cal L}_{\text{DM}} = (D_\mu \phi)(D^\mu \phi)^* - m^2 \phi \phi^* - \frac{1}{4} F'^{\mu\nu} F'_{\mu\nu} \,, \end{equation} where $D_\mu = \partial_\mu - i g_\text{v} V_\mu$. The $\text{U}^\prime$(1) gauge symmetry is broken when the complex scalar field acquires a non-zero vacuum expectation value, $v_s$. The scalar field can be parameterized as $\phi = \frac{1}{\sqrt{2}}(s+v_s) \exp(-i\pi/v_s)$, where $s$ and $\pi$ are real scalar fields. The Goldstone boson is ``eaten'' by the longitudinal component of the gauge field, giving the gauge boson a mass $M_{V}= g_\text{v} v_s$. In addition, one may consider a low energy effective interaction for the complex scalar $\phi$ in the form of a dimension-6 operator, $\sim \frac{1}{\Lambda_l^2} |\phi|^2\bar{L} H l_R$. Here $H$ is the SM Higgs doublet, $L$ is the SM left-handed lepton doublet, $l_R$ is the right-handed SM lepton, and $\Lambda_l$ is an appropriate energy scale for lepton $l$. In principle, dimension-6 operators including the SM quarks, such as $\frac{1}{\Lambda_Q^2} |\phi|^2~\bar{Q} H^\dagger u_R$ and $\frac{1}{\Lambda_Q^2} |\phi|^2~\bar{Q} H d_R$, are allowed by the symmetry. These interactions induce a large contribution to the DM-nucleon elastic scattering, leading to the exclusion of the entire parameter space by the current direct detection bounds. Through a UV complete model we will motivate a {\it lepton-specific} scenario in which only the leptonic operator is important. Here, we discuss a possible UV-completion of the above-mentioned effective interactions. 
To this end, we introduce a heavy new Higgs doublet, $\Phi$, with appropriate quantum numbers. The new doublet can in general have interactions with all the SM fermions. In this work we are interested in the so-called lepton-specific models in which the new doublet interacts only with the SM leptons. This type of interaction for the new doublet is motivated in two-Higgs-doublet models \cite{Su:2009fz,Branco:2011iw,Marshall:2010qi}. We consider the following UV model \begin{equation} {\cal L}_{UV} = y_e \Phi \bar{L}_e e_R + y_\mu \Phi \bar{L}_\mu \mu_R + y_\tau \Phi \bar{L}_\tau \tau_R + \kappa \Phi^\dagger H |\phi|^2 + \text{h.c.}\,, \end{equation} where $H$ is the SM Higgs doublet. In the limit that the new doublet is heavy, integrating it out leads to the dimension-6 effective operator introduced earlier, i.e., $\sim \frac{1}{\Lambda_l^2} |\phi|^2~\bar{L} H l_R$. Therefore, if we assume that the new scalar interacts with the SM particles only through the leptonic operator, then it couples to the SM charged lepton currents effectively as \begin{equation} {\cal L}_{\text{eff}} = \alpha_l s l^+ l^- \,, \end{equation} where $l = e, \mu, \tau$, and $\alpha_l$ is the corresponding effective coupling constant. The effective couplings of the scalar to leptons are parameterized as mass-hierarchical couplings, $\alpha_l = \frac{m_l}{v_s} c_l$, which is phenomenologically intriguing. The cutoff scale $\Lambda_l$ is obtained as $\Lambda_l^2\sim v_h v_s/\alpha_l$, where $v_h$ is the vacuum expectation value of the SM Higgs. Following the same line of reasoning as in \cite{Batell:2017kty}, in the effective Lagrangian above we expect the two-loop contribution to the muon anomalous magnetic moment, $(\Delta a_\mu)^{\text{2loop}}$, and the one-loop contribution, $(\Delta a_\mu)^{\text{1loop}}$, to satisfy the relation $(\Delta a_\mu)^{\text{2loop}}/(\Delta a_\mu)^{\text{1loop}} \sim \Lambda_\mu^2/(8\pi^2v_h^2)$. 
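As a quick numerical illustration of the relation $\Lambda_l^2 \sim v_h v_s/\alpha_l$ with $\alpha_l = (m_l/v_s)c_l$, the following sketch evaluates the muon cutoff scale at a sample parameter point (the values of $v_s$ and $c_l$ are illustrative assumptions, not fit results):

```python
# Hedged sketch: cutoff scale Lambda_l^2 ~ v_h * v_s / alpha_l with
# alpha_l = (m_l / v_s) * c_l. Parameter point is illustrative only.
import math

V_H = 246.0       # SM Higgs vev [GeV]
M_MU = 0.10566    # muon mass [GeV]

def cutoff_scale(v_s, c_l, m_l=M_MU):
    alpha_l = (m_l / v_s) * c_l
    return math.sqrt(V_H * v_s / alpha_l)   # Lambda_l [GeV]

lam_mu = cutoff_scale(v_s=10.0, c_l=0.9)            # roughly 0.5 TeV
two_loop_bound = 2 * math.sqrt(2) * math.pi * V_H   # roughly 2.2 TeV
```

For this point the cutoff stays comfortably below the $2\sqrt{2}\,\pi v_h \sim 2$ TeV condition that keeps the two-loop contribution subdominant.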
In order to have a small two-loop contribution compared to the one-loop contribution, we should have $\Lambda_\mu < 2\sqrt2 \pi v_h \sim 2$ TeV. The interaction Lagrangian relevant to this work includes the terms \begin{equation} \label{int-Lag} {\cal L}_{\text{int}} = g_\text{v}^2 v_s s V_\mu V^\mu + \frac{1}{2} g_\text{v}^2 s^2 V_\mu V^\mu + \alpha_l s l^+ l^- + \frac{\alpha_l}{v_h} s h l^+ l^- \,. \end{equation} The dark gauge boson $V$ is identified as our vector dark matter candidate. In the rest of the paper we shall use $m_{V}$ and $m_{\text{DM}}$ interchangeably. Moreover, we may consider another interaction term in the Lagrangian, $\sim \frac{1}{4} \lambda |\phi|^{2} H^\dagger H$, which can arise from the potential part of the UV model. We will justify below that, in order to respect bounds from the invisible Higgs decay, the coupling $\lambda$ should be negligible. This interaction is interesting here because it causes mixing between the singlet scalar and the SM Higgs, which can then lead to invisible Higgs decay via the interaction $\sim (g_v^2 v_s \sin \theta) h V_\mu V^\mu$. The mixing angle, $\theta$, which diagonalizes the scalar mass matrix, satisfies the relation $\sin 2\theta = 2\lambda v_s v_h/(m_h^2 - m_s^2)$. The SM Higgs invisible decay width in the decay process $h \to VV$ is given by the formula \begin{equation} \Gamma_{\text{inv}} = \frac{g_v^2v_s^2 m_h^3 \sin^2 \theta }{16\pi m_V^4} (1-4x^2+12x^4)(1-4x^2)^{1/2} \,, \end{equation} where $x = m_V/m_h$. The observed upper limit at 95\% confidence level on the branching ratio of the invisible Higgs decay is $\sim 0.19$ \cite{CMS:2018yfx}. In the regions of the parameter space explored in this work, it is found that if the mixing angle satisfies $\theta \lesssim 8\times 10^{-4}$ then the corresponding regions evade the bounds from invisible Higgs decay. The last term in Eq.~\ref{int-Lag} opens up the possibility of a new decay channel for the SM Higgs. 
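The invisible width formula above is straightforward to evaluate. The sketch below checks a sample point against the 0.19 branching-ratio limit, taking a total Higgs width of about 3.2 MeV (the PDG value used in the text); the couplings and mixing angle are illustrative assumptions, not fitted values:

```python
# Hedged sketch of Gamma_inv(h -> VV) from the formula in the text.
# Parameter values are illustrative assumptions, not fit results.
import math

def gamma_inv(g_v, v_s, sin_theta, m_h=125.0):
    m_V = g_v * v_s                       # M_V = g_v * v_s [GeV]
    x = m_V / m_h
    phase = (1 - 4*x**2 + 12*x**4) * math.sqrt(1 - 4*x**2)
    return g_v**2 * v_s**2 * m_h**3 * sin_theta**2 / (16 * math.pi * m_V**4) * phase

gam_inv = gamma_inv(g_v=0.8, v_s=12.0, sin_theta=8e-4)  # [GeV]
br_inv = gam_inv / (gam_inv + 3.2e-3)   # vs. total Higgs width ~3.2 MeV
```

For this point the invisible branching ratio stays below the observed 0.19 limit, consistent with the $\theta \lesssim 8\times 10^{-4}$ statement above; note that at fixed $\theta$ the width scales as $1/(g_v^2 v_s^2)$, so lighter dark gauge bosons are more constrained.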
The Higgs particle can then decay to a scalar $s$ in the process $h \to s \bar{f} f$, where $f$ stands for the SM leptons. In the following we present some results for the decay width of $h \to s \tau^+ \tau^-$ in terms of the scalar mass. We picked this decay channel because it has the largest decay width among the lepton channels. The value $\alpha_\tau = 10^{-1}$ is chosen, which is large enough to give the maximal contribution to the Higgs total decay width. Since the decay width is proportional to $\alpha_\tau^2$, it is easy to estimate the decay width for other values of $\alpha_\tau$. To compute the decay width $\Gamma(h \to s \tau^+ \tau^-)$ numerically, the code CalcHEP \cite{Belyaev:2012qa} is employed. \begin{table}[h] \caption{The decay width $\Gamma(h \to s \tau^+ \tau^-)$ for the effective coupling $\alpha_\tau = 10^{-1}$.} \begin{center} \begin{tabular}{c rrrrrrr} \hline\hline $m_s$ [GeV] & $10^{-3}$ & 0.1 & 1 & 5 & 10 & 50 & 100 \\ \hline $\Gamma$~[$10^{-5}$GeV]&$6.68$&$5.76$&$5.16$&$4.5$&$3.90$&$0.675$&$0.00249$ \\ \hline \end{tabular} \end{center} \label{decay-width} \end{table} Our results for the decay width are presented in Table~\ref{decay-width}. We can estimate that the total decay width, $\Gamma(h \to s \bar{f} f)$, for the scalar masses of interest in this work is of order $\sim 10^{-5}$ GeV. The total decay width of the SM Higgs is $3.2^{+2.8}_{-2.2}$ MeV \cite{ParticleDataGroup:2020ssz}. In conclusion, the total decay width $\Gamma(h \to s \bar{f} f)$ is about two orders of magnitude smaller than the Higgs total decay width, and therefore the measured Higgs decay width does not constrain the relevant parameters. \section{Constraints from WMAP/Planck observation} \label{WIMP-Planck} In light of the lack of any evidence for $\sim 100$ GeV DM in direct detection searches so far, interest has shifted towards low mass or light DM, with $\sim$ GeV DM particles. 
In this work we adopt thermal production of light DM particles through the so-called freeze-out mechanism, which is natural and regarded as a standard mechanism for producing a thermal relic. During this thermal process, DM annihilation to the SM particles (the visible sector) or to other particles (the secluded sector), and the reverse processes, take place. The annihilation rate is in competition with the expansion rate of the early Universe. There is a special temperature, called the freeze-out temperature $T_{f}$ (or decoupling temperature), around which the DM particles drop out of equilibrium, and their density remains constant thereafter. The stronger the DM interaction with the SM particles, the longer it takes for DM particles to freeze out. The dark matter relic density is a function of the thermally averaged annihilation cross section, $\langle \sigma v \rangle$, as $\Omega h^2 \propto \langle \sigma v \rangle^{-1}$. The observed value of the dark matter density is $\Omega h^2 \approx 0.12$ \cite{Hinshaw:2012aka,Ade:2015xua}. The theoretical value for the DM relic density in the model parameter space is obtained by solving the relevant Boltzmann equation numerically with micrOMEGAs \cite{Belanger:2013oya}. Initially, we would like to find viable regions in the parameter space with DM masses in the range $10^{-3}~\text{GeV} < m_{\text{DM}} < 10~\text{GeV}$ which give rise to a relic abundance consistent with the observed value provided by WMAP \cite{Hinshaw:2012aka} and Planck \cite{Ade:2015xua}. When thermal WIMPs have s-wave $2\to2$ annihilation to visible final states, the observed DM density puts a lower limit on the WIMP mass, $m \gtrsim 20$ GeV \cite{Leane:2018kjk}. However, this is not the case when WIMPs also annihilate to a secluded dark sector. There are two possible annihilation channels for the DM particles in our model, namely, annihilation to a pair of dark scalars and annihilation to a pair of SM charged leptons. 
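The scaling $\Omega h^2 \propto \langle\sigma v\rangle^{-1}$ admits a quick back-of-the-envelope estimate of the required annihilation cross section. The sketch below uses the textbook normalization $\langle\sigma v\rangle \sim 3\times10^{-27}\,\text{cm}^3\,\text{s}^{-1}$ per unit of $\Omega h^2$ (a rough rule of thumb, not the micrOMEGAs Boltzmann-equation solution used in the analysis):

```python
# Back-of-the-envelope thermal relic estimate (textbook normalization,
# not the numerical Boltzmann-equation solution used in the paper).
SIGMA_V_REF = 3e-27    # cm^3/s; roughly yields Omega h^2 ~ 1
OMEGA_H2_OBS = 0.12    # observed DM density (WMAP/Planck)

# Thermally averaged cross section needed to match the observed abundance.
sigma_v_needed = SIGMA_V_REF / OMEGA_H2_OBS   # ~2.5e-26 cm^3/s
```

This recovers the canonical thermal-relic cross section of a few $\times 10^{-26}\,\text{cm}^3\,\text{s}^{-1}$.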
In fact, we stay in a region of parameter space where $2 \to 2$ annihilation processes are dominant. If $m_{\text{DM}} > m_s$, DM particles can annihilate as $VV \to ss$ via $t$- and $u$-channel exchange of a vector boson, and to a pair of dark scalars through a contact interaction. In addition, the s-channel DM annihilation $VV \to s \to e^+ e^-, \mu^+ \mu^-, \tau^+ \tau^-$ is accessible when kinematically allowed. When we include the scalar-Higgs mixing, another annihilation channel, $V V \to h \to f^+ f^-$, becomes possible. However, since its annihilation cross section is proportional to $\sin^2 \theta$ and $\theta$ is restricted to quite small values, this channel contributes very little to the DM relic density. The Feynman diagrams for the DM annihilation processes with dominant contributions are depicted in Fig.~\ref{feynman-anni}. The analytical formulas for the DM annihilation cross sections are given in the Appendix. The analytical results were confirmed after implementing our model in the code CalcHEP \cite{Belyaev:2012qa}. \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth,angle =0]{annihilation.eps} \end{center} \caption{Feynman diagrams for DM annihilation with dominant contributions are shown.} \label{feynman-anni} \end{figure} \begin{figure} \hspace{-.8cm} \begin{minipage}{0.37\textwidth} \includegraphics[width=\textwidth,angle =-90]{relic.eps} \end{minipage} \hspace{3cm} \begin{minipage}{0.37\textwidth} \includegraphics[width=\textwidth,angle =-90]{relic2.eps} \end{minipage} \begin{center} \includegraphics[width=0.37\textwidth,angle =-90]{relic-vs.eps} \end{center} \caption{In these plots we only applied the constraints on the relic density from WMAP/Planck. 
In the plane $m_\text{DM}-m_s$, viable ranges for the parameters $m_\text{DM}$, $m_s$, $c_l$, $g_\text{v}$ and $v_s$ are shown in three plots.} \label{relic} \end{figure} The parameter space we scan over lies in the following intervals: $10^{-3}~\text{GeV}<m_s<100~\text{GeV}$, $0<g_\text{v}<1$, $0< c_e = c_\mu = c_\tau <1$ and $1~\text{GeV}<v_s<300~\text{GeV}$. Let us recall that $m_{V} = m_{\text{DM}} = g_\text{v} v_s$ and $\alpha_l = (\frac{m_l}{v_s}) c_l$. In our scan the number of samples is $10^7$. We keep a sampled point only when the computed relic density is consistent with the observed DM relic density. After finding the viable values for the parameters $c_l$, $g_\text{v}$, $v_s$, $m_s$ and $m_\text{DM}$, we present in the plane $m_{\text{DM}}-m_s$ the resulting values for $c_l$, $g_\text{v}$ and $v_s$ in three plots, respectively, in Fig.~\ref{relic}. It is evident from the results shown in Fig.~\ref{relic} that a larger DM mass towards 10 GeV requires a larger mass for the scalar, up to about 100 GeV. \section{Various constraints on scalar-muon coupling} \label{various} In this section we discuss several types of constraints that might affect the viable parameter space. I) {\it Muon anomalous magnetic moment} The precise measurement of the muon anomalous magnetic moment, $a_{\mu}$, has been under intense scrutiny for a long time; for a review one may consult \cite{Aoyama:2020ynm}. This quantity is defined as $a_{\mu} = \frac{g_{\mu}-2}{2}$, where $g_{\mu}$ is the well-known gyromagnetic ratio of the muon. At tree level in perturbation theory, $g_{\mu} = 2$. The SM radiative corrections include loop contributions from QED, QCD and the weak interactions. The theoretical prediction of the muon magnetic moment in the SM suffers mainly from the uncertainties in the hadronic vacuum polarization and the hadronic light-by-light scattering. 
A sizable deviation, $\Delta a_{\mu}$, observed in the past experiments at Brookhaven National Laboratory (BNL) \cite{Bennett:2006fi} was considered a footprint of probable new physics, taking into account the controllable uncertainties on the theoretical side. The updated data from the muon g-2 experiment at the Fermi National Accelerator Laboratory (FNAL) not only supports the long-standing discrepancy but also provides results with improved statistics \cite{Abi:2021gix}. The new result has a significance of about 4.2$\sigma$ and indicates a positive excess over the SM prediction. The updated experimental world average gives $\Delta a_{\mu} = a_{\mu}^{exp}-a_{\mu}^{\text{SM}} = (2.51 \pm 0.59) \times 10^{-9}$. As a new physics effect, the scalar mediator in the present model contributes to the muon anomalous magnetic moment at loop level and leads to the correction \begin{equation} \begin{aligned} \Delta a_{\mu}^{\text{NP}} & = \frac{\alpha_\mu^2}{8\pi^2} \int_{0}^{1} \frac{(1-z)^2 (1+z)}{(1-z)^2+b^2 z} dz \\ & = \frac{\alpha_\mu^2}{8\pi^2} \Big[ \frac{1}{2} \left(-2 b^2+\left(b^2-3\right) b^2 \log \left(b^2\right)-2 \sqrt{b^2-4} \left(b^2-1\right) b \tanh ^{-1}\left(\frac{b^2-2}{b \sqrt{b^2-4}}\right)+3\right) \\ & + \frac{b \left(b^4-5 b^2+4\right) \tanh ^{-1}\left(\frac{b}{\sqrt{b^2-4}}\right)}{\sqrt{b^2-4}} \Big] \,, \end{aligned} \end{equation} where $b = \frac{m_s}{m_\mu}$. The new data on the muon magnetic moment deviation puts stringent constraints on the scalar-muon coupling and the scalar mass. \\ II) {\it $e^+ e^-$ annihilation in colliders} In $e^+ e^-$ colliders the production of the new scalar is possible through the process $e^+ e^- \to \mu^+ \mu^- s$. The scalar subsequently decays to $\mu^+ \mu^-$, so there are four muons in the final state. The BaBar experiment has searched in this channel and found the strongest upper limits on the effective coupling, $\alpha_\mu$, for $m_s > 2m_\mu$ \cite{TheBABAR:2016rlg}. 
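Returning to item I, the one-loop contribution $\Delta a_\mu^{\text{NP}}$ can be cross-checked by evaluating the integral representation numerically; the sketch below uses a simple midpoint rule (standard library only, as an independent check rather than the paper's own code):

```python
# Hedged numerical sketch: midpoint-rule evaluation of the one-loop
# integral for Delta a_mu^NP, with b = m_s / m_mu as in the text.
import math

def loop_integral(b, n=200_000):
    """Integral of (1-z)^2 (1+z) / ((1-z)^2 + b^2 z) over z in [0, 1]."""
    total = 0.0
    for i in range(n):
        z = (i + 0.5) / n
        total += (1 - z)**2 * (1 + z) / ((1 - z)**2 + b * b * z)
    return total / n

def delta_a_mu(alpha_mu, m_s, m_mu=0.10566):
    """One-loop scalar contribution to a_mu (masses in GeV)."""
    return alpha_mu**2 / (8 * math.pi**2) * loop_integral(m_s / m_mu)
```

In the light-scalar limit $b \to 0$ the integrand reduces to $(1+z)$, so the integral approaches $3/2$ and $\Delta a_\mu^{\text{NP}} \to 3\alpha_\mu^2/(16\pi^2)$; heavier scalars suppress the contribution.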
For scalar masses with $m_s < 2 m_\mu$, the Belle II experiment found constraints in a search for scalar production in the same channel but with the subsequent decay $s\to \text{invisible}$ \cite{Adachi:2019otg}. Moreover, the BaBar experiment has found constraints on a leptophilic scalar ($\Phi_L$) decaying predominantly to leptons \cite{BaBar:2020jma}. These limits constrain the scalar coupling for scalar masses up to $\sim 7$ GeV. \\ III) {\it Beam-dump experiments} Proton and electron beam-dump experiments are suitable probes in the search for new physics at low energy. In particular, a secondary muon beam originating from the primary beam may radiate a dark scalar, and the scalar can subsequently decay into SM leptons. Therefore, it is possible to search for the scalar-lepton coupling in these experiments. We apply exclusion limits on the scalar-muon coupling from two electron beam-dump experiments, Orsay \cite{Davier:1989wz} and E137 \cite{Bjorken:1988as}. IV) {\it Meson decays} Since the scalar mediator, $s$, in our model is leptophilic, meson decays such as $B\to K s$ and $K\to \pi s$ are not possible and there are no constraints in this regard.\\ V) {\it Supernova cooling} Stellar cooling processes such as supernova cooling are probes which are sensitive to the scalar-muon coupling for scalar masses below $\sim 1$ MeV \cite{Bollig:2020xdr}. In this work the interest is mainly in scalar masses $\gtrsim 1$ MeV.\\ VI) {\it BBN} BBN puts constraints on the effective number of relativistic degrees of freedom beyond the SM particles, with $\Delta N_{\text{eff}} \lesssim 0.2-0.6$ \cite{Mangano:2011ar}. If our new particles have masses $\gtrsim {\cal O}$(1) MeV, the parameter space is not sensitive to the BBN bounds \cite{Pospelov:2010hj}. 
\begin{figure} \begin{center} \includegraphics[width=0.6\textwidth,angle =0]{exclusion.eps} \end{center} \caption{The points in color show the viable space respecting the observed DM abundance in the plane $\alpha_\mu$-$m_s$. Constraints from $e^+e^-$ colliders and electron beam-dump experiments are imposed. The allowed region from the muon anomalous magnetic moment is also shown as a red band.} \label{exclusion} \end{figure} Taking into account all the constraints mentioned in this section together with the observed relic density, we show the viable region of the parameter space in Fig.~\ref{exclusion}. The supernova cooling constraints exclude the region allowed by the ($g_\mu-2$) anomaly for scalar masses smaller than $\sim$ 1 MeV, while the BaBar (in the process $e^+ e^- \to \mu^+ \mu^- (\mu^+ \mu^-)$) and Belle II upper limits do not overlap with the allowed region. However, the limits from BaBar (in the process $e^+ e^- \to \tau^+ \tau^- \Phi_L$) partially exclude the allowed region in the scalar mass range $\sim 1~\text{GeV}- 4~\text{GeV}$. It is also seen that in the remaining parameter space respecting the $g_\mu-2$ allowed region, the observed relic density and the beam-dump experiments, the DM mass varies in the range $\sim 0.1~\text{GeV}-10~\text{GeV}$ and the scalar mass in the range $\sim 0.07~\text{GeV}-20$ GeV. The strongest lower limits on the scalar mass come from the two electron beam-dump experiments, with scalar masses smaller than $\sim 0.07$ GeV excluded by the Orsay experiment. \section{Direct detection bounds} \label{final-results} In our model, spin-independent (SI) DM-nucleon interaction is present at tree level due to the scalar-Higgs mixing. In addition, DM-electron elastic scattering of the spin-independent type occurs at tree level. In the following we ignore the loop-suppressed DM-matter interactions. In Fig.~\ref{diagramsDD} the Feynman diagrams for DM-electron and DM-quark scattering at tree level are depicted. 
We obtain a reference DM-electron direct detection cross section, \begin{equation} \sigma^{e} \sim \frac{4}{3\pi} \alpha_e^2 g_{\text{v}}^2 \frac{\mu_{\text{ve}}^2}{(m_s^2+\alpha^2m_e^2)^2} \,, \end{equation} where $\mu_{\text{ve}}$ is the DM-electron reduced mass and the electron momentum transfer is typically set by $q\sim \alpha m_e$. The contribution of the diagram with the Higgs propagator to the DM-electron cross section is numerically negligible since the mixing angle, $\theta$, is very small. In the limit $m_s \gg \alpha m_e$, the DM form factor is $F_{\text{DM}}\sim 1$ \cite{Essig:2011nj}. So far, no evidence of DM-electron elastic scattering has been found in direct detection experiments. However, recent experimental results from Xenon10 \cite{Essig:2011nj} and DarkSide-50 \cite{Agnes:2018oej} set upper bounds on DM-electron scattering for masses below $\sim$ 1 GeV, and Xenon1T \cite{Aprile:2019xxb} provides stringent bounds on the DM-electron cross section for DM masses in the range $\sim 0.03- 10$ GeV. On the other hand, the neutrino floor sets the lowest limit on the scattering cross section of dark matter with visible matter that can be probed. We apply the latest result for the neutrino floor given in \cite{Billard:2021uyg}. \begin{figure} \begin{center} \includegraphics[width=0.35\textwidth,angle =0]{feyn-DD.eps} \caption{Feynman diagrams for DM-electron and DM-quark elastic scattering at tree level.} \end{center} \label{diagramsDD} \end{figure} \begin{figure} \hspace{-1.cm} \begin{minipage}{0.37\textwidth} \includegraphics[width=\textwidth,angle =-90]{direct-electron.eps} \end{minipage} \hspace{2.7cm} \begin{minipage}{0.37\textwidth} \includegraphics[width=\textwidth,angle =-90]{direct-electron2.eps} \end{minipage} \caption{We show regions in the parameter space which respect all the constraints discussed in the text and also points which are excluded by the electron beam-dump experiment, Orsay. 
All the points respect the region allowed by the muon $(g_\mu-2)$ anomaly. The upper bounds from direct detection experiments on the DM-electron elastic cross section are shown. In the left panel the scalar mass, $m_s$, and in the right panel the coupling, $\alpha_e$, are shown in the vertical color spectrum. The neutrino floor is shaded in gray.} \label{direct-electron} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth,angle =-90]{direct-nucleon.eps} \end{center} \caption{In this plot we show regions in the parameter space which respect all the constraints discussed in the text and also points which are excluded by the electron beam-dump experiment, Orsay. All the points respect the region allowed by the muon $(g_\mu-2)$ anomaly. The upper bounds from direct detection experiments on the DM-proton elastic cross section are shown. The viable values for the scalar mass, $m_s$, are also shown. The neutrino floor is shaded in gray.} \label{direct-nucleon} \end{figure} In this section we pick out points in the parameter space which respect all the relevant constraints discussed previously, including those from the beam-dump experiments and the observed relic density. We also confine the parameter space to the regions allowed by the muon $(g_\mu-2)$ anomaly. The regions in the parameter space that we scan over are: $0 < g_\text{v} < 1$, $1~\text{GeV} < v_s < 300~\text{GeV}$, $0 < c_l < 1$ and $10^{-3}~\text{GeV} < m_s < 10~\text{GeV}$. For the DM mass we have $m_V = g_\text{v} \times v_s$, and the relevant effective coupling here is $\alpha_e = \frac{m_e}{v_s} c_e$. The final results for the DM-electron elastic scattering cross section are shown in terms of the DM mass and the scalar mass, $m_s$, in the left panel of Fig.~\ref{direct-electron}, and in terms of the DM mass and the coupling, $\alpha_e$, in the right panel of Fig.~\ref{direct-electron}. 
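The reference cross section $\sigma^e$ given earlier in this section can be evaluated at a representative point and converted to cm$^2$ using $1~\text{GeV}^{-2} \simeq 3.894\times10^{-28}~\text{cm}^2$; the couplings below are illustrative assumptions, not points from the scan:

```python
# Hedged sketch of the reference DM-electron cross section from the text,
# evaluated at an illustrative (not fitted) parameter point.
import math

GEV2_TO_CM2 = 3.894e-28   # (hbar c)^2 conversion: 1 GeV^-2 in cm^2
ALPHA_EM = 1 / 137.036    # fine-structure constant
M_E = 0.511e-3            # electron mass [GeV]

def sigma_e(alpha_e, g_v, m_dm, m_s):
    mu_ve = m_dm * M_E / (m_dm + M_E)    # DM-electron reduced mass [GeV]
    q2 = (ALPHA_EM * M_E)**2             # typical momentum transfer squared
    sigma_gev = (4 / (3 * math.pi)) * alpha_e**2 * g_v**2 * mu_ve**2 \
                / (m_s**2 + q2)**2
    return sigma_gev * GEV2_TO_CM2       # cross section [cm^2]

sig = sigma_e(alpha_e=1e-4, g_v=0.5, m_dm=1.0, m_s=0.1)
```

For this point the cross section lands around $10^{-39}$ cm$^2$, in the general ballpark probed by the DM-electron DD limits discussed in the text; the strong $1/m_s^4$ dependence makes the scalar mass the decisive parameter.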
The results indicate that Xenon1T, which has the strongest limits among the DD experiments, is only sensitive to a region of scalar masses that is already excluded by the electron beam-dump experiment Orsay. However, there are regions with $m_s \gtrsim 0.07$ GeV and with dark matter masses in the range $1~\text{GeV} \lesssim m_{\text{DM}} \lesssim 10$ GeV which evade the current Xenon1T bounds and stand well above the neutrino floor. The Xenon1T \cite{Aprile:2019xxb} and DarkSide-50 \cite{Agnes:2018oej} collaborations provide bounds on the DM-nucleon cross section for DM masses below 10 GeV, as shown in Fig.~\ref{direct-nucleon}. We apply the package micrOMEGAs to compute the DM-proton SI cross section in the parameter space over the same ranges discussed for the DM-electron case. Concerning the mixing angle, $\theta$, we always pick values that respect the invisible Higgs decay bounds. We show our results in Fig.~\ref{direct-nucleon} for points which respect all the restrictions, together with points which are excluded by the Orsay beam-dump experiment. We find that there are viable DM candidates with masses $\sim 0.7- 10$ GeV and scalar masses $m_s \sim 0.1-10$ GeV with SI cross sections well above the neutrino floor and respecting the available DD bounds. \section{Conclusion} In light of the newest results for the muon magnetic moment anomaly, ($g_\mu-2$), and the DM-matter elastic scattering upper bounds from Xenon1T, we exemplified a vector DM model with a scalar mediator which couples to the SM charged leptons via dimension-6 operators. We introduced a UV complete model to motivate the types of dimension-6 operators used in our study. From a phenomenological point of view, we confined the dark matter mass to the range $ 10^{-3}~\text{GeV} < m_{\text{DM}} < 10$ GeV and the scalar mass to the range $ 10^{-3}~\text{GeV} < m_s < 100$ GeV. 
In the first part of the analysis we imposed constraints from the observed DM density, the muon anomalous magnetic moment, supernova cooling, $e^+e^-$ colliders and electron beam-dump experiments. The viable range for the scalar mass is then obtained as $ 0.07~\text{GeV} \lesssim m_{s} \lesssim 20$ GeV and for the DM mass as $0.1~\text{GeV} \lesssim m_{\text{DM}} \lesssim 10$ GeV. Next we computed the DM-electron elastic scattering cross section. We then applied the upper limits from the DD experiments Xenon100, DarkSide and Xenon1T, and found that the strongest bound, from Xenon1T, excludes scalar masses with $m_{s} \lesssim 3$ MeV for DM masses $0.1~\text{GeV} \lesssim m_{\text{DM}} \lesssim 10$ GeV. Since we had already found that the electron beam-dump experiment Orsay excludes scalar masses with $m_s < 0.07$ GeV, we conclude that the current DD experiments probing DM-electron interactions have an almost two orders of magnitude weaker sensitivity reach in the scalar mass than the electron beam-dump experiments. Although the neutrino floor rises in the region with DM masses smaller than 10 GeV, we are still able to find DM candidates of ${\cal O}(1)$ GeV with direct detection cross sections about two orders of magnitude above the neutrino floor. Moreover, considering the DM-nucleon interaction for DM masses below 10 GeV, viable regions are found that are not yet explored by the DD experiments, and further improvements of the experimental bounds in this mass range would be essential in order to further constrain or exclude such dark matter models. \section{Acknowledgment} The author would like to thank Dr. Parsa Ghorbani for useful discussions. \section{Appendix: Annihilation cross sections} \label{Apen} Here we present the formulas for the DM annihilation cross section times the relative velocity. 
First, the annihilation cross section for the $s$-channel annihilation process $V V \to l^+ l^-$ with $l = e, \mu, \tau$, is found to be \begin{equation} \sigma_{\text{anni}} v_{rel} (V V \to l^+ l^-) = \frac{2 \alpha_l^2 v_s^2 g_{\text{v}}^4}{9\pi^2} \frac{(1-4m_l^2/s)^{3/2}}{(s-m^2_s)^2} \,. \end{equation} Next, the DM annihilation cross section into a pair of singlet scalars reads \begin{equation} \begin{aligned} \sigma_{\text{anni}} v_{rel} (V V \to s s) & = \frac{\sqrt{1-4m^2_s/s}}{16\pi^2s} \int d\Omega \Big[ \frac{64}{9} v_s^4 g_{\text{v}}^8 \Big(\frac{1}{t-m_V^2} + \frac{1}{u-m_V^2}\Big)^2 \\ & -\frac{64}{9} v_s^2 g_{\text{v}}^6 \Big(\frac{1}{t-m_V^2}+\frac{1}{u-m_V^2}\Big) + \frac{8}{9} g_{\text{v}}^4 \Big] \,, \end{aligned} \end{equation} where in the formulas above, $s$, $t$ and $u$ are the relevant Mandelstam variables. The relative velocity of the incoming DM particles is denoted by $v_{rel}$.
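As a quick numerical sketch, the $s$-channel formula above can be evaluated directly; all parameter values below (the couplings $\alpha_l$, $v_s$, $g_{\text{v}}$ and the masses) are illustrative assumptions rather than benchmark points of the analysis.

```python
import math

def sigma_v_ll(s, m_l, m_s, alpha_l, v_s, g_v):
    """sigma * v_rel for V V -> l+ l- (the s-channel formula above).

    Natural units: masses and sqrt(s) in GeV, result in GeV^-2.
    Returns 0 below the lepton-pair threshold s = 4 m_l^2.
    """
    if s <= 4.0 * m_l**2:
        return 0.0
    phase_space = (1.0 - 4.0 * m_l**2 / s) ** 1.5
    return (2.0 * alpha_l**2 * v_s**2 * g_v**4 / (9.0 * math.pi**2)
            * phase_space / (s - m_s**2) ** 2)

# Illustrative (hypothetical) point: m_DM = 1 GeV, m_s = 0.1 GeV, muon channel,
# evaluated at threshold kinematics s = 4 m_DM^2 (v_rel -> 0 limit)
m_mu = 0.10566
print(sigma_v_ll(s=4.0, m_l=m_mu, m_s=0.1, alpha_l=1e-4, v_s=10.0, g_v=0.1))
```

The phase-space factor correctly switches the channel off below the lepton-pair threshold, as the formula requires.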
\section{Introduction} The soft gamma-ray repeaters (SGRs) showcase flux variability on many different timescales. A quiescent state with persistent X-ray emission ($L_X\sim10^{35}\mbox{ erg s}^{-1}$), punctuated by numerous sporadic short bursts of gamma-rays with peak luminosities up to $\sim 10^{42}\mbox{ erg s}^{-1}$ and typical durations in the range $\sim 0.01 - 1$ s, marks the defining characteristic of SGRs (see \citealt{Mereghetti2008} and \citealt{WoodsThompson2006} for a review). Out of the seven confirmed SGR sources, SGR 0525-66, SGR 1806-20, SGR 1900+14, SGR 1627-41, SGR 1550-5418, SGR 0418+5729, and SGR 0501+4516 (with the last three added recently to the SGR family; see \citealt{Kanekoetal2010,Horstetal2010,Kumaretal2010}), the first three have been known to emit giant flares. A rare phenomenon compared to the commonly occurring short bursts, the giant flares unleash a stupendous amount of energy ($\sim 10^{44}$ erg) in gamma-rays over a timescale of $\sim 0.2-0.5$ s in a fast-rising initial peak. The initial high energy burst is followed by a long ($\sim 200-400$ s), exponentially decaying pulsating tail of hard X-ray emission, the period of which coincides with that of the rotation of the neutron star (NS). Additionally, intermediate strength but rare outbursts lasting for a few tens of seconds have been observed in the case of SGR 1900+14. The first extremely energetic giant flare from a recurrent gamma-ray source, SGR 0525-66, was detected on March 5, 1979 by the gamma-ray burst detector aboard the Venera 11 \& 12 space probes and the nine interplanetary spacecraft of the burst sensor network \citep{Mazetsetal1979,HelfandLong1979}. The position of the source was found to be coincident with the supernova remnant N49 at a distance of $\sim55$ kpc in the Large Magellanic Cloud.
The flare consisted of a sharp rise ($\sim 15$ ms) to the peak gamma-ray luminosity, $L_{\gamma}\sim10^{44}\mbox{ erg s}^{-1}$, subsequently followed by an exponentially decaying tail with $L_{\gamma}\sim10^{42}\mbox{ erg s}^{-1}$. Remarkably, the initial burst only lasted for $\sim0.1$ s compared to the longer lasting ($\sim100$ s) tail that pulsated with a period of $\sim8$ s \citep{Terrelletal1980}. The total emitted energy during the initial peak and the decaying tail amounted to an astonishing $\sim10^{44}$ erg. An even more energetic flare was detected from SGR 1900+14 on August 27, 1998 by a multitude of space telescopes in the direction of the Galactic supernova remnant G42.8+0.6 \citep{Hurleyetal1999a}, making it the second exceptionally energetic event detected in the past century from a recurrent gamma-ray source. The burst had properties similar to those of the March 5 event, with a short ($<4$ ms) rise time to the main peak that lasted for $\sim 1$ s and then decayed into a pulsating tail with a period identical to the 5.16 s rotation period of the NS. The flare had a much harder energy spectrum compared to the March 5 event \citep{Ferocietal1999}, with peak luminosity in excess of $\sim4\times10^{44}\mbox{ erg s}^{-1}$, assuming a distance of $\sim10$ kpc to the source. The total energy unleashed in this outburst amounted to $\sim10^{44}$~erg in hard X-rays and gamma-rays \citep{Mazetsetal1999}. Finally, on December 27, 2004 the most energetic outburst ever to be detected came from SGR 1806-20 \citep{Hurleyetal2005}, a Galactic source that was found to have a possible association with a compact stellar cluster at a distance of $\sim15$ kpc \citep{CorbelEikenberry2004}. The initial spike had a much shorter rise time ($\leq1$ ms) to the peak luminosity of $\sim2\times10^{47}\mbox{ erg s}^{-1}$, which persisted for a mere $\sim0.2$ s.
Like other giant flares, a hard X-ray tail, of duration $\sim380$ s, followed the main spike, pulsating at a period of 7.56 s. The total energies emitted during the initial spike and the pulsating tail are $\sim4\times10^{46}$ erg and $\sim10^{44}$ erg, respectively. \subsection{The Precursor} \citet{Hurleyetal2005} reported the detection of a $\sim1$ s long precursor that was observed 142 s before the main flare of December 27. A similar event, lasting a mere 0.05 s \citep{Ibrahimetal2001}, was observed only 0.4 s prior to the August 27 giant flare \citep{Hurleyetal1999a,Ferocietal2001}, albeit at softer energies ($15-50$ keV, \citealt{Mazetsetal1999}); a non-detection at harder energies ($40-700$ keV) was reported in \citet{Ferocietal1999}. Such a precursor was not detected at all for the March 5 flare, for which the detectors at the time had no sensitivity below $\sim50$ keV, which suggests that a softer precursor, if there indeed was one, may have gone unnoticed. Unlike the August 27 precursor, which was short and weak and for which no spectrum could be obtained \citep{Ibrahimetal2001}, the relatively longer lasting December 27 precursor had a thermal blackbody spectrum with $kT\approx10.4$ keV \citep{Boggsetal2007}. In comparison to the common short SGR bursts, which typically last for $\sim$ 0.1 s and have sharply peaked pulse morphologies, the December 27 precursor was not only longer in duration but also had a nearly flat light curve. Nevertheless, the burst packed an energy of $\sim 3.8\times10^{41}$ erg, comparable to that of the short SGR bursts. The possible causal connection of the precursors to the giant flares in both cases indicates that they may have acted as a final trigger \citep{Hurleyetal1999a,Boggsetal2007}. A strong case for the causal connection of the precursor to the giant flare in both events can be established on statistical grounds.
For example, SGR 1900+14 emitted a total of 50 bursts during its reactivation between May 26 and August 27, 1998, following a long dormant phase lasting almost 6 years \citep{Hurleyetal1999b}. Here we are only interested in the burst history immediately prior to the Aug 27 event, as this time period is indicative of the heightened activity that concluded with the giant flare. From these burst statistics, the rate of short bursts of typical duration $\sim$0.1 s is $\sim6\times10^{-6}\mbox{ s}^{-1}$, which then yields a null hypothesis probability of $\sim2.4\times10^{-6}$ for the August 27 precursor. Additionally, we find a null hypothesis probability of $\sim8.6\times10^{-4}$ in the case of the December 27 precursor, assuming similar burst rates. Although the magnetar model (particularly the phenomenological models developed in \citealt{ThompsonDuncan1995}, hereafter TD95, and \citealt{ThompsonDuncan2001}, hereafter TD01), as we discuss below, offers plausible explanations for the occurrence of short bursts and giant flares, the connection between the precursor and the main flare has remained unknown. In the event the precursor indeed acted as a trigger to the main flare, it is of fundamental significance that the association between the two events is understood. As magnetars, SGRs are endowed with extremely large magnetic fields with $B\sim 10^2 B_{\rm QED}$, where $B_{\rm QED}=4.4\times10^{13}$ G is the quantum critical field, and all the energetic phenomena discussed above are ascribed to such high fields (TD95). In the TD95 model, the short bursts result from the sudden cracking of the crust as it fails to withstand the building stresses caused by the motion of the magnetic footpoints. The slippage of the crust, as a result, injects Alfv\'en waves into the external magnetic field lines, which subsequently damp to higher wavenumbers, and ultimately dissipate into a trapped thermal pair plasma.
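The null hypothesis estimates above follow from treating the short bursts as a Poisson process, so that the chance probability of an unrelated burst falling in the pre-flare window is approximately the burst rate times the window length. A minimal numerical check (the rate is the one quoted above; applying it to both sources is an assumption the text also makes):

```python
# Chance probability that an ordinary short burst falls in the observed
# pre-flare window, for a Poisson process with rate * window << 1:
# p ~ rate * window.
rate = 6e-6             # short-burst rate during the 1998 active episode, s^-1

p_aug27 = rate * 0.4    # precursor observed 0.4 s before the Aug 27 flare
p_dec27 = rate * 142.0  # precursor observed 142 s before the Dec 27 flare

print(f"Aug 27: p ~ {p_aug27:.1e}")  # ~2.4e-06
print(f"Dec 27: p ~ {p_dec27:.1e}")  # ~8.5e-04
```

The December 27 value agrees with the quoted $\sim8.6\times10^{-4}$ to rounding.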
Such a mechanism may not be invoked for the giant flares due to energy requirements. Alternatively, a large-scale interchange instability \citep{Moffatt1985}, driven by the diffusion of the internal magnetic field, in combination with a magnetic reconnection event can power the giant flares. The plausibility of these mechanisms is well supported by the observed energetics of the bursts and the associated timescales. Nevertheless, a clear description of the reconnection process, which indubitably serves as one of the most efficient mechanisms to convert magnetic energy into heat and particle acceleration, has not been forthcoming. Furthermore, an alternative mechanism, motivated by the coronal heating problem in the solar case, can be formulated to give a reasonable explanation for the association of the precursor and the main flare. In this paper, we propose two possible trigger mechanisms for the SGR giant flares - one internal and the other external to the NS. As we argue, either of the two trigger mechanisms can initiate the main hyperflare. In the following discussion, we calculate model parameters for the December 27 event; however, the analysis is similar for the other two events. We start with a discussion of the internal trigger in the next section, followed by that of the external trigger in Section 3. The discussion regarding some of the observed characteristics of the flares that our model can account for is presented in Section 4. \section{Internal trigger} In the magnetar model, the magnetic field in the interior of SGRs is considered to be strongly wound up, which generates a strong toroidal field component, possibly even larger than the poloidal component (TD01). The relative strengths of the poloidal and toroidal magnetic field components have been quantified by constructing relativistic models of NSs and testing the stability of axisymmetric fields by \citet{LanderJones2009} and \citet{Ciolfietal2009}.
Both studies arrive at the conclusion that the amplitudes of the two field components may be comparable, but the total magnetic energy is dominated by the poloidal component, as the toroidal component is non-vanishing only in the interior, with $E_{\rm B,tor}/E_B\leq10\%$. However, in another study \citet{Braithwaite2009} arrived at a somewhat different conclusion, finding that a significant enhancement in the toroidal component is needed to sustain a stable magnetic field configuration, with $0.20\leq E_{\rm B,tor}/E_B\leq0.95$. In the interior, the twisted flux bundle, composed of several flux tubes, can be envisioned to stretch from one magnetic pole to the other along the symmetry axis of the dipole field that is external to the star. It has been shown by \citet{Parker1983b,Parker1983a} that any tightly wound flux bundle is unstable to dynamical nonequilibrium, and will dissipate its torsional energy as heat due to internal neutral point reconnection. Although Parker had provided such a solution to the long-standing problem of coronal heating in the solar case, with a few exceptions, the same applies to the case of magnetars as the arguments are very general. In the case of the Sun, flux tubes are stochastically shuffled and wrapped around each other due to convective motions in the photosphere. Unlike in the Sun, where the flux tube footpoints are free to move in the photospheric layer, the footpoints are pinned to the rigid crust in NSs. Nevertheless, for exceptionally high magnetic fields ($B>10^{15}$ G) the crust responds plastically (TD01), and moderate footpoint motion can still occur. It is understood that this is only true to the point where the crustal stresses are below some threshold, which depends on the composition. Thus, as the imposed strain exceeds some critical value, the crust will yield abruptly \citep{HorowitzKadau2009}, but may not fracture \citep{Jones2003}.
Parker's solution is at best qualitative; however, it serves as a reasonably good starting point in the context of the present case. As we have noted earlier, the precursor may be causally connected to the main flare, and so can be argued to act as a trigger in the following manner. Immediately after the precursor the internal field evolves towards a new state of equilibrium. Since the crust has yielded to the built-up stresses, and may deform plastically under magnetic pressure, some of the footpoints can now move freely. Understandably, the turbulent dynamics of the internal fluid in response to the burst, owing to its high Reynolds number \citep{Peraltaetal2006}, translates into chaotic motion of the footpoints. As a result, the flux tubes are wrapped around each other in a random fashion. Current sheets then inevitably form, leading to reconnection followed by violent relaxation of the twisted flux bundle. The heat flux resulting from the dissipation of the torsional energy of the flux bundle is given by \citet{Parker1983b}, \begin{equation} P = \left(\frac{B^2v^2\tau}{4\pi L}\right) \label{eq:ParkerPower} \end{equation} where $B$ is the strength of the internal magnetic field, $v$ is the footpoint displacement velocity, and $L$ is the length scale of the flux tubes. Here $\tau$ is the timescale over which accumulation of energy by the random shuffling and wrapping of flux tubes occurs until some critical moment, after which neutral point reconnection becomes explosive.
Having knowledge of the burst energetics, equation (\ref{eq:ParkerPower}) can be solved for $\tau$ \begin{equation} \tau \sim 142\ B_{15}^{-2}L_6^{-1}E_{46}T_{0.125}^{-1}\left(\frac{v}{8.4\times10^3\mbox{ cm s}^{-1}}\right)^{-2}\mbox{ s} \label{eq:ParkerTime} \end{equation} where we have used the event of December 27 as an example, with the internal field strength measured in units of $10^{15}$ G, flux tube length scales in $10^6$ cm, the total energy of the flare in $10^{46}$ erg, and the timescale of the initial spike in 0.125 s (RHESSI PD time resolution). It is clear from equation (\ref{eq:ParkerTime}) that the preflare quiescent time scales linearly with the total energy emitted in the initial spike but is inversely proportional to the square of the internal magnetic field strength: $\tau \propto E_{\rm spike}B_{\rm in}^{-2}$. Following TD95, we have assumed that almost all of the energy of the flare was emitted in the initial transient phase during which the lightcurve rose to its maximum. Additionally, we find that the footpoints are displaced at a rate of a few tens of meters per second, which is a reasonable estimate considering that it is insignificant in comparison to the core Alfv\'en velocity $V_A\sim10^7\mbox{ cm s}^{-1}$. A noteworthy point is that, with regard to the burst energetics, there is nothing special about the precursor when compared to the common SGR bursts, other than that it occurs at the most opportune time, when the internal field undergoes a substantial reconfiguration. The mechanism outlined above is activated after every SGR burst after which significant footpoint motion ensues. However, whether the entanglement of flux tubes is sufficient to reach a critical state such that an explosive release of energy can occur depends on the evolution of the internal field configuration. Alternatively, the twisted flux bundle can become unstable to a resistive instability, such as the tearing mode.
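Equation (\ref{eq:ParkerTime}) is straightforward to evaluate numerically. The sketch below reproduces the fiducial 142 s and also evaluates an illustrative weaker-event point (the second set of parameter values is an assumption, chosen only to exhibit the $\tau \propto E_{\rm spike}B_{\rm in}^{-2}$ scaling):

```python
def tau_parker(B15=1.0, L6=1.0, E46=1.0, T=0.125, v=8.4e3):
    """Pre-flare energy-accumulation time of equation (2), in seconds.

    B15: internal field in 10^15 G; L6: flux-tube length in 10^6 cm;
    E46: spike energy in 10^46 erg; T: spike duration in s;
    v: footpoint displacement speed in cm/s.
    """
    return 142.0 * B15**-2 * L6**-1 * E46 * (T / 0.125)**-1 * (v / 8.4e3)**-2

print(tau_parker())                          # 142 s, Dec 27 fiducial values
print(tau_parker(E46=1e-2, T=1.0, v=5.6e3))  # ~0.4 s, illustrative weaker event
```

The second call shows how a spike energy a hundred times smaller, with a longer spike and a slower footpoint speed, compresses the pre-flare quiescence to a fraction of a second.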
The resistivity here is provided by the turbulent motion of the highly conductive fluid, which is in a state of nonequilibrium immediately after the precursor. The growth time of the tearing mode instability is given by the geometric mean of the Alfv\'en time, say in the core, and the resistive timescale \begin{eqnarray} \tau & = & (t_At_R)^{1/2} \\ & = & \left(\frac{4\pi\sigma L^3}{V_Ac^2}\right)^{1/2} \\ & \sim & 142\ L_6^{3/2}\left(\frac{V_A}{10^7\mbox{ cm s}^{-1}}\right)^{-1/2}\left(\frac{\sigma}{10^{13}\mbox{ s}^{-1}}\right)^{1/2}\mbox{ s} \end{eqnarray} where $\sigma$ is not the microscopic electrical conductivity but an effective conductivity set by the diffusivity of the turbulent fluid. In this case, the scaling for the preflare quiescent time becomes \begin{equation} \tau \propto E_{\rm spike}^{1/2}B_{\rm in}^{-3/2} \end{equation} where we have assumed that the twisted flux bundle occupies the entire internal region of the NS. \section{External trigger} \begin{figure*} \includegraphics{fig1.eps} \caption{This figure displays the setup of the different reconnecting current layers. The macroscopic Sweet-Parker layer with length $L\sim10^5$ cm and width $\delta\sim0.01$ cm is the largest of the three. This layer is then thinned down vertically as strong magnetic flux is convected into the dissipation region. The Hall reconnection layer, represented by the dark gray region, develops when $\delta$ becomes comparable to the ion-inertial length $d_i$. The system makes a transition from the slow to the impulsive reconnection and powers the main flare. The tiny region embedded inside the Sweet-Parker layer is the super-hot turbulent current layer, which aids in creating sufficient anomalous resistivity to facilitate the formation of the Sweet-Parker layer. The strongly accelerated plasma downstream of the reconnection layer is trapped inside magnetic flux lines and forms a plasmoid moving at some speed $V$.
This plasmoid is then finally ejected during the initial spike when the external field undergoes a sudden relaxation (After \citealt{Lyutikov2006}).} \label{fig:NSfig} \end{figure*} The notion that the giant flares are purely magnetospheric phenomena appears very promising and requires further development. A magnetospheric reconnection model has become the favourite of many for two main reasons. First, it can easily explain the millisecond rise times of the explosive giant flares in terms of the Alfv\'en time of the inner magnetosphere, which for exceptionally low values of the plasma beta parameter is very small; $\tau_A\sim R_{\star}/c\sim 30 \mu$s. Second, the SGR giant flares have much in common with the extensively studied solar flares, for which reconnection models explaining nonthermal particle creation, plasma bulk motions, and gas heating have been developed over the last few decades \citep{Lyutikov2002}. The most powerful solar flares release equally impressive amounts of energy, $\sim 10^{32}$ erg, which is mainly divided between heating the plasma and radiation in multiple wavebands, for example $\gamma$-rays, X-rays, and radio. The impulsive rise in the soft X-ray emission to peak luminosity occurs over a timespan of a few hundred seconds, which is then followed by a gradual decay lasting several hours \citep{PriestForbes2002}. In the magnetar model, because of the shearing of the magnetic footpoints caused by the unwinding of the internal field, a twist can be injected into the external magnetic field (TD95,TD01). Depending on how the crust responds to the stresses, either plastically or rigidly, the gradual or sudden (in the event of a crustal fracture) transport of current from the interior creates a non-potential region in the magnetosphere where a reconnecting current layer can develop \citep{TLK2002,MikicLinker1994}.
\citet{Lyutikov2003,Lyutikov2006} has explained the impulsive nature of the giant flares in terms of the tearing mode instability, which has a magnetospheric growth time of $\tau_{\rm tear}\sim 10$ ms. Impulsiveness of the underlying magnetic reconnection mechanism is a primary requirement for explaining the origin of giant flares. The tearing mode instability is quite befitting in that regard; however, it has not been shown to bear any dependence on the precursor, which, as we argue, triggered the main hyperflare. Hall reconnection, another impulsive reconnection mechanism, has been completely ignored on the basis that it is unable to operate in a mass-symmetric electron-positron pair plasma. Nevertheless, a mild baryon contamination may be enough to render it operational. Therefore, what is needed here is the synergy of two distinct mechanisms --- a slow reconnection process, like the Sweet-Parker solution, that dissipates magnetic energy over a much longer timescale \citep{Sweet1958,Parker1957}, and a fast process that is explosive, like Hall reconnection \citep{Bhattacharjeeetal1999}. To put this in the context of the December 27 event, we envision that immediately after the precursor a macroscopic current layer developed as a result of the sheared field lines. Then began the slow dissipation of magnetic field energy by Sweet-Parker reconnection, which continued throughout the quiescent state that followed the precursor. Finally, the transition to Hall reconnection resulted in the explosive release of energy (see figure~\ref{fig:NSfig}). We describe this process in more detail in the following section.
\subsection{Transition from Resistive to Collisionless Reconnection by Current Sheet Thinning} The steady state reconnection process of Sweet and Parker is severely limited by its sensitivity to the size of the macroscopic dissipation region, such that the plasma inflow velocity is regulated by the aspect ratio of the current layer \begin{equation} v_i = \frac{\delta}{L}v_A \label{eq:vin} \end{equation} where $\delta$ is the width and $L$ is the length of the dissipation region, with $\delta\ll L$ generally. The downstream plasma flow speed coincides with the Alfv\'en velocity, which in the magnetar case approaches the speed of light. The Sweet-Parker mechanism is a resistive reconnection process where the resistivity is either collisional or anomalous. It is understood that the electron-positron pair plasma pervading the inner magnetosphere is collisionless. Nevertheless, if enough ions are present in the dissipation region, as we show below, a source of anomalous resistivity can be established. We argue that the energy released during the precursor was enough to heat the crust to the point where a baryon layer was evaporated into the magnetosphere. TD95 provide an upper limit to the mass of the baryon layer ablated during a burst by comparing the thermal energy of the burst to the gravitational potential energy of the mass layer \begin{eqnarray} \Delta M & \sim & \frac{E_{\rm th}R_{\star}}{GM_{\star}} \\ & \sim & 10^{17}\left(\frac{E_{\rm th}}{10^{38}\mbox{ erg}}\right) \left(\frac{R_{\star}}{10^6\mbox{ cm}}\right)\left(\frac{M_{\star}} {1.4\mbox{M}_{\odot}}\right)^{-1}\mbox{ g} \end{eqnarray} where we have assumed a conservative estimate of $E_{\rm th}$.
Then, assuming that the mass $\Delta M$ of baryons, in the form of protons, was injected into the magnetospheric volume of $\sim R_{\star}^3$, the baryon number density is \begin{equation} n_b \sim 6\times10^{22}\left(\frac{E_{\rm th}}{10^{38}\mbox{ erg}}\right) \left(\frac{R_{\star}}{10^6\mbox{ cm}}\right)^{-2}\left(\frac{M_{\star}} {1.4\mbox{M}_{\odot}}\right)^{-1}\mbox{ cm}^{-3} \label{eq:numberdensity} \end{equation} Even with this large number of baryons, the magnetospheric plasma is still collisionless. The Spitzer resistivity for a quasi-neutral electron-ion plasma is a function of the electron temperature alone, $\propto T_e^{-3/2}$, which for electron temperatures as high as $\sim 10^8$ K yields a negligible resistivity. \subsubsection{Super-Hot Turbulent Current Layer} For plasma temperatures higher than $T>3\times10^7$ K, the reconnecting current layer turns into a super-hot turbulent current layer (SHTCL), for which the theory has been well developed and documented in (\citealt{Somov2006}, pp. 129-151). The anomalous resistivity in the current layer arises due to wave-particle interactions, where the ions interact with field fluctuations in the waves. As a result, the resistivity and other transport coefficients of the plasma are altered. The electrons are the current carriers and participate mainly in the heat conductive cooling of the SHTCL. The current layer is assumed to have been penetrated by a relatively weak transverse magnetic field component (transverse to the electric field in the current layer), where $B_\perp\ll B_0$, with $B_0$ the strength of the external dipole field. In the two-temperature model, where the electrons and ions are allowed to have dissimilar temperatures, the effective anomalous resistivity is generally a combination of two terms: one resulting from the ion-acoustic turbulence and the other from the ion-cyclotron turbulence.
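Both order-of-magnitude estimates can be verified in CGS units; following the text, the density estimate adopts $\Delta M\sim10^{17}$ g (the formula itself returns a few times $10^{17}$ g for the fiducial inputs):

```python
# CGS check of the ablated baryon mass and the resulting magnetospheric
# baryon density, following the two equations above.
G     = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
M_sun = 1.989e33     # g
m_p   = 1.673e-24    # proton mass, g

E_th   = 1e38        # erg, conservative precursor thermal energy
R_star = 1e6         # cm
M_star = 1.4 * M_sun

dM  = E_th * R_star / (G * M_star)   # ~5e17 g; the text rounds to ~1e17 g
n_b = 1e17 / (m_p * R_star**3)       # adopting dM ~ 1e17 g over ~R_star^3
print(f"dM ~ {dM:.0e} g, n_b ~ {n_b:.0e} cm^-3")  # n_b ~ 6e+22 cm^-3
```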
\begin{equation} \eta_{\rm eff} = \eta_{\rm ia} + \eta_{\rm ic} \end{equation} In addition, each turbulent instability has two separate regimes -- marginal and saturated. The former applies when the wave-particle interactions are described by quasilinear equations, and the latter becomes important in the case of strong electric fields when the nonlinear contributions can no longer be ignored (see, e.g., \citealt{Somov1992} pp. 115-217 for a detailed description). For an equal-temperature plasma ($T_e\sim T_i$), the saturated ion-cyclotron turbulent instability makes the dominant contribution to the effective resistivity. Thus, we ignore the terms corresponding to the ion-acoustic instability. Depending on the dimensionless temperature parameter $\theta\equiv T_e/T_i$, the effective resistivity in the present case is given as \citep{Somov2006} \begin{equation} \eta_{\rm eff} = \frac{2m_e^{1/2}\pi^{1/4}}{ec^{1/2}m_p^{1/4}} \left[\frac{(1+\theta^{-1})^{1/2}}{N^{1/4}(\theta)U_k(\theta)} \right]\frac{(B_\perp E_0)^{1/2}}{B_0^{1/2}n_b^{3/4}} \label{eq:effectiveEta} \end{equation} where \begin{eqnarray} N(\theta) & = & 1.75 + \frac{f(\theta)}{\sqrt{8}(1+\theta^{-1})^{3/2}} \\ f(\theta) & = & \frac{1}{4}\left(\frac{m_p}{m_e}\right)^{1/2}\mbox{ for } 1\leq\theta\leq8.1 \\ U_k(\theta) & \sim & \mathcal{O}(1)\mbox{ for }\theta\sim 1 \\ E_0 & = & \alpha B_0 \label{eq:frozenin} \end{eqnarray} Here $\alpha\equiv v_0/c$ is the effective reconnection rate determined by the inflow fluid velocity $v_0$ into the current layer, and the rest of the variables in equation (\ref{eq:effectiveEta}) retain their usual meaning. Equation (\ref{eq:frozenin}) conveys the frozen-in field condition.
Next, we write the magnetic diffusivity of the plasma due to the effective anomalous resistivity \begin{eqnarray} \eta_{\rm diff} & = & \frac{\eta_{\rm eff} c^2}{4\pi} \\ & \simeq & 6\times10^{16}(\alpha B_\perp)^{1/2}n_b^{-3/4} \end{eqnarray} To calculate the inflow plasma velocity, we assume that the SHTCL is embedded in a macroscopic Sweet-Parker current layer. The primary role of the SHTCL is to provide enough resistivity in a collisionless plasma so that the magnetic field lines can diffuse through it and ultimately undergo reconnection. From equation (\ref{eq:vin}) we know that for a Sweet-Parker current layer the inflow fluid velocity is regulated by the aspect ratio of the current layer. The outflow velocity is limited by the speed of light. By expressing the width of the Sweet-Parker current layer in terms of the magnetic diffusivity, we find that the inflow velocity has to be on the order of $v_0\sim10^3\mbox{ cm s}^{-1}$ (so that $\alpha\ll1$), with the width of the layer given as \begin{eqnarray} \delta & \sim & \sqrt{\frac{\eta_{\rm diff} L}{c}} \\ & \sim & 0.01\mbox{ cm }\left(\frac{v_0}{10^3\mbox{cm s}^{-1}}\right)^{1/4}\left(\frac{B_\perp} {10^{11}\mbox{ G}}\right)^{1/4} \\ & &\times\left(\frac{n_b}{6\times10^{22}\mbox{ cm}^{-3}}\right)^{-3/8} \left(\frac{L}{10^5\mbox{ cm}}\right)^{1/2} \nonumber \end{eqnarray} where the transverse magnetic field is $B_\perp\sim10^{-3}B_0$, and $L$ is the length of the current layer.
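These estimates can be checked by evaluating equation (\ref{eq:effectiveEta}) numerically in CGS units; the sketch assumes $\theta=1$ and $U_k=1$, as in the text:

```python
import math

# CGS constants
c, e = 3e10, 4.803e-10
m_e, m_p = 9.109e-28, 1.673e-24

# Equal-temperature plasma, theta = T_e/T_i = 1
theta = 1.0
f   = 0.25 * math.sqrt(m_p / m_e)                       # ~10.7
N   = 1.75 + f / (math.sqrt(8.0) * (1 + 1/theta)**1.5)  # ~3.1
U_k = 1.0

# Effective (saturated ion-cyclotron) resistivity of eq. (16), E_0 = alpha*B_0
prefac  = 2 * math.sqrt(m_e) * math.pi**0.25 / (e * math.sqrt(c) * m_p**0.25)
bracket = math.sqrt(1 + 1/theta) / (N**0.25 * U_k)

alpha  = 1e3 / c      # inflow speed v_0 ~ 1e3 cm/s
B_perp = 1e11         # G, transverse field ~1e-3 B_0
n_b    = 6e22         # cm^-3, baryon density from eq. (13)
L      = 1e5          # cm, current-layer length

eta_eff  = prefac * bracket * math.sqrt(alpha * B_perp) / n_b**0.75
eta_diff = eta_eff * c**2 / (4 * math.pi)   # magnetic diffusivity, cm^2 s^-1
delta    = math.sqrt(eta_diff * L / c)      # Sweet-Parker width
print(f"eta_diff ~ {eta_diff:.0f} cm^2/s, delta ~ {delta:.3f} cm")  # ~31, ~0.010
```

The resulting width reproduces the quoted $\delta\sim0.01$ cm.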
The size of the SHTCL can now be obtained from the following \begin{eqnarray} a & = & \frac{c}{e}\sqrt{\frac{m_e}{2\pi n_b}}\left[\sqrt{\frac{1+\theta^{-1}} {N(\theta)}}\frac{1}{U_k(\theta)}\right] \\ & \sim & 2.5\times10^{-6}\left(\frac{n_b}{6\times10^{22}\mbox{ cm}^{-3}} \right)^{-1/2}\mbox{ cm} \\ b & = & \frac{B_0}{h_0}\sqrt{\frac{2v_0}{B_\perp}}\left[\frac{\pi m_p n_b}{N(\theta)}\right]^{1/4} \\ & \sim & 80\mbox{ cm }\left(\frac{R_{\star}}{10^6\mbox{ cm}}\right)\left(\frac{v_0} {10^3\mbox{ cm s}^{-1}}\right)^{1/2} \\ & & \times\left(\frac{B_\perp}{10^{11}\mbox{ G}}\right)^{-1/2}\left( \frac{n_b}{6\times10^{22}\mbox{ cm}^{-3}}\right)^{1/4} \nonumber \end{eqnarray} where $a$ and $b$ are, respectively, the half-width and the half-length of the SHTCL, and $h_0\sim B_0/R_{\star}$ is the magnetic field gradient in the vicinity of the current layer. \subsubsection{Current Sheet Thinning} The main flare is triggered when the transition is made from the steady state, slow reconnection process to an impulsive one. In the present case, Sweet-Parker reconnection makes a transition to Hall reconnection when the width of the current layer $\delta$ drops below the ion-inertial length $d_i$, where \begin{eqnarray} d_i & = & \frac{c}{\omega_{p,i}} = \frac{c}{e}\sqrt{\frac{m_p}{4\pi n_b}} \\ & \sim & 10^{-4} \left(\frac{n_b}{6\times10^{22}\mbox{ cm}^{-3}} \right)^{-1/2}\mbox{ cm} \end{eqnarray} and $\omega_{p,i}$ is the non-relativistic ion plasma frequency. \citet{Cassaketal2005} show that for a given set of plasma parameters, the solution is bistable such that the slow Sweet-Parker solution can operate over long timescales, during which the system can accumulate energy, while the faster Hall solution starts to dominate as the resistivity is reduced below some critical value. Lowering the resistivity would naturally reduce the width of the current layer to the point where the system can access the Hall mechanism. 
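The SHTCL dimensions and the ion-inertial length follow from the same fiducial numbers (CGS sketch, again assuming $\theta=1$ and $U_k=1$):

```python
import math

c, e = 3e10, 4.803e-10
m_e, m_p = 9.109e-28, 1.673e-24

n_b, B_perp, v0 = 6e22, 1e11, 1e3   # cm^-3, G, cm/s
R_star = 1e6                        # cm, so that B_0/h_0 ~ R_star
theta = 1.0
N = 1.75 + 0.25 * math.sqrt(m_p / m_e) / (math.sqrt(8.0) * (1 + 1/theta)**1.5)

# SHTCL half-width a and half-length b (equations above, U_k = 1)
a = (c / e) * math.sqrt(m_e / (2 * math.pi * n_b)) \
    * math.sqrt(1 + 1/theta) / math.sqrt(N)
b = R_star * math.sqrt(2 * v0 / B_perp) * (math.pi * m_p * n_b / N) ** 0.25

# Ion-inertial length marking the Sweet-Parker -> Hall transition
d_i = (c / e) * math.sqrt(m_p / (4 * math.pi * n_b))
print(f"a ~ {a:.1e} cm, b ~ {b:.0f} cm, d_i ~ {d_i:.1e} cm")
# a ~ 2.5e-06 cm, b ~ 80 cm, d_i ~ 9.3e-05 cm
```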
Alternatively, as \citet{Cassaketal2006} argue, the same result can be achieved by thinning down the current layer by convecting in stronger magnetic fields into the dissipation region during Sweet-Parker reconnection. The critical field strength needed to thin the current layer is \begin{eqnarray} B_{c} & \sim & \sqrt{4\pi m_pn_b}\left(\frac{\eta_{\rm diff}}{d_i^2}L\right) \\ & \sim & 4\times10^{14}\mbox{ G }\left(\frac{n_b}{6\times10^{22} \mbox{ cm}^{-3}}\right)^{3/4} \\ & & \times\left(\frac{v_0}{10^3\mbox{ cm s}^{-1}} \right)^{1/2}\left(\frac{B_\perp}{10^{11}\mbox{ G}}\right)^{1/2} \left(\frac{L}{10^5\mbox{ cm}}\right) \nonumber \end{eqnarray} Due to flux pile up outside the current layer, it can be argued that the system is able to achieve such high field strengths. The timescale for thinning down the current sheet until its width is comparable to the ion-inertial length is given as \begin{eqnarray} \tau_{\rm thin} & \sim & 2W_s\sqrt{\frac{L}{\eta_{\rm diff}c}\left(\frac{B_c}{B_0} \right)} \\ & \sim & 130\mbox{ s }\left(\frac{W_s}{10^5\mbox{ cm}}\right)\left( \frac{L}{10^5\mbox{ cm}}\right)\left(\frac{B_0}{10^{14}\mbox{ G}} \right)^{-1/2} \\ & & \times\left(\frac{n_b}{6\times10^{22}\mbox{ cm}^{-3}}\right)^{3/4} \nonumber \end{eqnarray} where $W_s$ is the magnetic shear length, that is the length scale over which the field lines are severely sheared. What we find here is that the thinning down time $\tau_{\rm thin}$ of the current layer from the Sweet-Parker width to the ion-inertial length, where Hall reconnection dominates, is on the order of the preflare quiescent time of $\sim 142$ s for the December 27 event. 
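The critical field and thinning timescale can be reproduced from the fiducial values in the text; rather than re-deriving the anomalous resistivity, the sketch infers the diffusivity from the quoted Sweet-Parker width via $\eta_{\rm diff}=\delta^2 c/L$:

```python
import math

c, e, m_p = 3e10, 4.803e-10, 1.673e-24   # CGS constants

n_b   = 6e22    # cm^-3, baryon density from eq. (13)
L     = 1e5     # cm, current-layer length
W_s   = 1e5     # cm, magnetic shear length
B_0   = 1e14    # G, external dipole field
delta = 0.01    # cm, Sweet-Parker width quoted in the text

eta_diff = delta**2 * c / L                            # ~30 cm^2/s
d_i = (c / e) * math.sqrt(m_p / (4 * math.pi * n_b))   # ion-inertial length

# Critical convected field needed to thin the layer down to d_i
B_c = math.sqrt(4 * math.pi * m_p * n_b) * eta_diff * L / d_i**2

# Thinning-down timescale; compare with the 142 s pre-flare quiescence
tau_thin = 2 * W_s * math.sqrt(L / (eta_diff * c) * (B_c / B_0))
print(f"B_c ~ {B_c:.0e} G, tau_thin ~ {tau_thin:.0f} s")  # ~4e+14 G, ~130 s
```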
The scaling relation of the thinning down time in terms of the initial spike energy and the external magnetic field strength can be deduced to be \begin{equation} \tau_{\rm thin} \propto E_{\rm spike}^{2/3}B_0^{-11/6} \label{eq:scalingthin} \end{equation} Again, we emphasize that this same mechanism may operate after every SGR burst which is energetic enough to inject the requisite baryon number density, as calculated in equation (\ref{eq:numberdensity}), to facilitate the development of a Sweet-Parker current layer. However, this mechanism will fail if the twist injected into the magnetosphere by the unwinding of the internal field is not sufficient to create a tangential discontinuity in the first place. In that instance no current sheet will form. \subsubsection{Giant Flare Submillisecond Rise Times} In Hall reconnection, a multiscale dissipation region develops with characteristic spatial scales on the order of the ion and electron inertial lengths \citep{Shayetal2001}. Within a distance $d_i$ of the neutral X-line, the ions decouple from the electrons and are accelerated away at Alfv\'enic speeds in the direction perpendicular to that of the inflow. The electrons continue their motion towards the neutral line as they are frozen-in, and only decouple from the magnetic field when they are a distance $d_e$, the electron-inertial length, away from the neutral line. Within the ion-inertial region, the dynamics of the electrons are significantly influenced by the nonlinear whistler waves. Subsequently, the electrons too are accelerated away in an outflowing jet at Alfv\'enic speeds.
The timescale associated with Hall reconnection is then in good accord with the rise times of giant flares \citep{Schwartzetal2005}, that is \begin{equation} \tau_{\rm Hall} \sim \frac{R_{\star}}{0.1 c} \sim 0.3\mbox{ ms} \end{equation} \section{Discussion} In this paper, we present an internal and an external trigger mechanism for the SGR giant flares, where we strongly emphasize the causal connection of the precursor to the main flare. The quiescent state that follows the precursor has been argued, in our model, to be the time required for the particular instabilities to develop, along with the accumulation of energy just before the flare. The internal mechanism is based on the hypothesis that the poloidal field component in the interior of the NS is strongly wound up. The solution is motivated by Parker's reasoning that such a twisted field would inevitably develop tangential discontinuities and dissipate its torsional energy as heat. The timescale for the accumulation of energy that is to be released in the main flare is on the order of the duration of the preflare quiescent state. The external trigger mechanism makes use of the fact that a Sweet-Parker reconnection layer may develop between significantly sheared field lines if a source of resistivity is established. Such a source may be embedded inside the macroscopic Sweet-Parker layer in the form of a super-hot turbulent current layer. To make the reconnection process impulsive, we invoke the non-steady Hall reconnection, which is switched on as the width of the Sweet-Parker layer is thinned down to the ion-inertial length. Again, the timescale over which the layer is thinned down roughly coincides with the duration of the preflare quiescent state. We have shown detailed calculations of the timescales for the December 27 event in particular. However, a similar analysis can also be carried out for the August 27 event.
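The submillisecond Hall timescale quoted in the previous section is simple arithmetic to verify; a sketch (the 10 km stellar radius is our assumed fiducial value):

```python
c = 3e10        # speed of light in cm/s
R_star = 1e6    # neutron-star radius in cm (assumption: 10 km)

# Alfvenic outflow at ~0.1c across the stellar length scale:
tau_hall = R_star / (0.1 * c)
print(tau_hall)  # ~3.3e-4 s, i.e. ~0.3 ms
```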
For the internal mechanism, we find the timescale to be comparable to the observed preflare quiescent time with $\tau\sim0.4\ B_{15}^{-2}L_6^{-1}E_{44}T_1^{-1} \left(\frac{v}{5.6\times10^4\mbox{ cm s}^{-1}}\right)^{-2}$ s, where we have assumed the same internal magnetic field strength and length of flux tubes. For the external mechanism, assuming $\Delta M\sim10^{15}$ g, since the precursor was short and weak, $W_s\sim2\times10^4$ cm, and $L\sim5\times10^4$ cm, we find $\tau_{\rm thin}\sim0.4$ s, $\delta\sim0.04$ cm, $a\sim2.5\times10^{-5}$ cm, $b\sim25$ cm, and $d_i\sim10^{-3}$ cm. A significant nonthermal component, with an average power-law index of $\Gamma\sim2$ as in $E^{-\Gamma}$, was observed during the decaying phase of the flare in both the August 27 and December 27 events \citep{Ferocietal1999,Boggsetal2007}. In the magnetar model, the nonthermal emission originates much farther out from the star, almost at the light cylinder (TD01). At this distance, inverse Compton cooling by X-ray photons has been invoked to explain the nonthermal spectrum. Nonthermal particle generation is one of the readily identified features of magnetic reconnection, especially in the case of Hall reconnection where outflow velocities approach the Alfv\'en speed of the medium. Such acceleration of high energy particles due to meandering-like orbits in the presence of strong electric fields has also been seen in particle-in-cell simulations \citep{ZenitaniHoshino2001}. Therefore, the Hall reconnection process that gives rise to the main flare can easily explain the origin of nonthermal particles. \citet{Israeletal2005} and \citet{StrohmayerWatts2005} reported the detection of quasi-periodic oscillations (QPOs) in the burst spectra of the December 27 and August 27 events, respectively. QPOs in the December 27 event were detected at 92.5 Hz, and 18 and 30 Hz at, respectively, 170 s and $\sim200-300$ s after the initial spike.
\citet{WattsStrohmayer2006} confirmed the detection of the first two QPOs and reported the presence of two additional QPOs at 26 and 626.5 Hz. Similarly, in the August 27 event, QPOs at 84 Hz, 53.5 Hz, 155.1 Hz, and 28 Hz (with lower significance) were detected at about a minute after the onset of the flare. Torsional oscillation of the NS crust appeared to be the natural explanation for the QPOs. However, \citet{Levin2006} argues that purely crustal oscillations rapidly lose their energy to an Alfv\'en continuum in the core by resonant absorption. He demonstrates that steady low frequency oscillations can be associated with MHD continuum turning points in the core \citep{Levin2007}, while others have reproduced the QPOs in toy models governed by global MHD-elastic modes of the NS \citep{Glampedakisetal2006, Sotanietal2007}. Both trigger mechanisms presented in this paper can be linked to the initiation of such an oscillatory behavior, whether in the core or as a global mode of the star, by the realization that during the giant flare the global magnetic field of the star undergoes sudden magnetic relaxation (TD95). In the internal trigger, the sudden loss of helicity can be argued to be sufficient to launch Alfv\'en waves in the interior. On the other hand, although the external trigger is not directly tied to the crust, a sudden relaxation of the internal toroidal field in the sense of the \citet{FlowersRuderman1977} instability can still be realized. The loss of magnetic energy in the form of a plasmoid, which has been known to form during an eruptive flare \citep{MagaraShibata1997}, serves to relax both the sheared external dipole field and the twisted internal toroidal field. Since both fields thread through the entire star, a sudden relaxation during the initial spike can easily excite global elastic modes in the NS.
The ejection of a plasmoid also naturally explains the origin of the radio afterglow observed for both the August 27 and the December 27 events (\citealt{Frailetal1999,Gaensleretal2005}; also see \citealt{Lyutikov2006} for afterglow geometry and parameters). In our calculations, we have shown how the preflare quiescent time scales with the total energy released in the initial spike. In the internal trigger mechanism, we find that Parker's solution yields a linear dependence. As we have remarked earlier, this mechanism, although simple and elegant, is based on qualitative arguments. Nevertheless, it does reproduce the observed result, that is, the longer the preflare quiescent time, the more energetic the flare, if the internal magnetic field strengths for both NSs are assumed to be similar: $\tau_{\rm Aug}/\tau_{\rm Dec} \sim E_{\rm Aug}/E_{\rm Dec}$. On the other hand, for the scaling to reconcile with observations in the case of the tearing mode instability and the external trigger, either $E_{\rm Aug}/E_{\rm Dec} \ll 10^{-3}$ or $B_{1900+14}/B_{1806-20} \gg 10^{-1}$. Based on the measured $P$ and $\dot{P}$ values for SGR 1900+14 and SGR 1806-20, which suggest that $B_{1806-20}\sim3B_{1900+14}$, and with the revised distance estimate of $D\sim12-15$ kpc for SGR 1900+14 \citep{Vrbaetal2000}, neither condition may be satisfied. However, one should not ignore the fact that the external mechanism also depends on the size of the current layer and the field line shearing lengthscale. Therefore, the scaling relation may not be as simple as that argued in equation (\ref{eq:scalingthin}). In any case, with only two events it is premature to identify any trends regarding the preflare quiescent times and the burst energies. Future giant flares from SGRs will certainly improve our understanding of such correlations. \section*{Acknowledgements} We would like to thank the anonymous reviewer for his help in improving the quality of this paper. R.G.
is supported by an NSERC CGS-D3 scholarship. The Natural Sciences and Engineering Research Council of Canada, the Canadian Foundation for Innovation, and the British Columbia Knowledge Development Fund supported this work. Correspondence and requests for materials should be addressed to J.S.H. (heyl@phas.ubc.ca). This research has made use of NASA's Astrophysics Data System Bibliographic Services.
\section{Introduction}\label{sec1} Since the discovery of the accelerated expansion of our universe, dark energy has been one of the most active fields in modern cosmology~\cite{r1,r2,r3}. The simplest candidate for dark energy is a tiny positive cosmological constant. As an alternative to the cosmological constant, some dynamical field models have been proposed. These dynamical field models can be categorized into three major types: (complex) scalar field models (e.g. quintessence~\cite{r4,r5}, phantom~\cite{r6}, k-essence~\cite{r7}, quintom~\cite{r8,r9,r10}, hessence~\cite{r11,r12}), vector field models (e.g.~\cite{r13,r14,r15,r16}), and spinor field models. Of course, there are also other dark energy models which are not directly described by quantum fields, and so we do not mention them here. To our knowledge, there are relatively few works in the literature on dark energy models with spinor fields. In~\cite{r17}, the Bianchi type~I cosmology with Dirac spinor fields has been investigated. In~\cite{r18}, it is found that the Dirac spinor fields could be responsible for the cosmic acceleration. In~\cite{r19}, the massive non-linear dark spinors have been discussed. In~\cite{r20}, the spinor quintom has been studied. It is worth noting that all the spinors considered in the aforementioned works~\cite{r17,r18,r19,r20} are Dirac spinors. In fact, there is a different type of spinor in the literature, namely, the so-called Elko spinor (e.g.~\cite{r21,r29}), which is similar to the Majorana spinor. In the beginning, the Elko spinor was considered a candidate for dark matter~\cite{r21}. Subsequently, it has been used to drive inflation~\cite{r22,r23,r24,r25,r26}. Recently, the Elko spinor has been proposed to be a candidate of dark energy~\cite{r27}. In fact, this type of dark energy model described by the Elko spinor fields is the one we will discuss in the present work. Following~\cite{r22,r23,r24,r27}, here is a brief review of the so-called Elko spinor.
It is a spin one half field with mass dimension one~\cite{r21}. Unlike the standard fields which obey $(CPT)^2=1$, the Elko spinor is a non-standard spinor according to the Wigner classification~\cite{r28} and obeys the unusual property $(CPT)^2=-1$ instead. In fact, the Elko spinor fields (together with Majorana spinor fields) belong to a wider class of spinor fields, i.e., the so-called flagpole spinor fields, according to the Lounesto general classification of all spinor fields~\cite{r29,r36}. The Elko spinors are defined by~\cite{r21,r24,r25} \be{eq1} \lambda=\left( \begin{array}{c} \pm\sigma_2\phi^\ast_L \\ \phi_L\end{array} \right), \end{equation} where the subscript $L$ refers to a left-handed spinor; $\sigma_2$ denotes the second Pauli matrix; $\phi^\ast_L$ denotes the complex conjugate of $\phi_L$. Note that the helicities of $\phi_L$ and $\sigma_2\phi^\ast_L$ are opposite~\cite{r21}. Therefore, there are two distinct helicity configurations denoted by $\lambda_{\{-,+\}}$ and $\lambda_{\{+,-\}}$. The corresponding action is given by~\cite{r24,r25} \be{eq2} S=\frac{1}{2}\int\left[\,g^{\mu\nu}{\cal D}_{(\mu} \stackrel{\neg}{\lambda}{\cal D}_{\nu)}\lambda -V\left(\stackrel{\neg}{\lambda}\!\lambda\right)\right] \sqrt{-g}\,\,d^4 x\,, \end{equation} where $V$ is the potential; the round subscript brackets denote symmetrization; ${\cal D}_\mu$ is the covariant derivative and $\stackrel{\neg}{\lambda}$ is the Elko dual, which is different from that of the standard model spinors (see e.g.~\cite{r21,r25} for definitions). We consider a spatially flat Friedmann-Robertson-Walker (FRW) universe and assume that the spinor fields are homogeneous. Following~\cite{r22,r23,r24}, one can find that \be{eq3} \lambda_{\{-,+\}}=\phi(t)\frac{\xi}{\sqrt{2}}\,,~~~~~~~ \lambda_{\{+,-\}}=\phi(t)\frac{\zeta}{\sqrt{2}}\,, \end{equation} where $\phi$ is a homogeneous real scalar; $\xi$ and $\zeta$ are constant spinors satisfying $\stackrel{\neg}{\xi}\!\xi=\,\stackrel{\neg}{\zeta}\!\zeta=+2$.
In~\cite{r22,r24,r27}, the effective pressure and energy density of the Elko spinor field are found to be \begin{eqnarray} p_\phi=\frac{1}{2}\dot{\phi}^2-V(\phi) +\frac{1}{8}H^2\phi^2,\label{eq4}\\ \rho_\phi=\frac{1}{2}\dot{\phi}^2+V(\phi) -\frac{3}{8}H^2\phi^2,\label{eq5} \end{eqnarray} where $H\equiv\dot{a}/a$ is the Hubble parameter; $a=(1+z)^{-1}$ is the scale factor (we have set $a_0=1$); $z$ is the redshift; a dot denotes derivatives with respect to cosmic time $t$; the subscript ``0'' indicates the present value of the corresponding quantity; we use the units $\hbar=c=1$. Recently, in~\cite{r27} the Elko spinor field has been proposed to be a candidate of dark energy, and we will call it ``spinor dark energy'' in the present work. However, very recently, it has been found that the previous studies (e.g.~\cite{r22,r24,r27}) overlooked a part of the energy-momentum tensor which arises when the spin connection is varied appropriately with respect to the metric~\cite{r37,r38}. Therefore, the correct pressure and energy density of spinor dark energy should be~\cite{r38} \begin{eqnarray} && p_\phi=\frac{1}{2}\dot{\phi}^2-V(\phi) -\frac{3}{8}H^2\phi^2-\frac{1}{4}\dot{H}\phi^2- \frac{1}{2}H\phi\dot{\phi}\,,\label{eq6}\\ && \rho_\phi=\frac{1}{2}\dot{\phi}^2+V(\phi) +\frac{3}{8}H^2\phi^2.\label{eq7} \end{eqnarray} Correspondingly, the equation-of-state parameter (EoS) of spinor dark energy reads \be{eq8} w_\phi\equiv\frac{p_\phi}{\rho_\phi}= \frac{\frac{1}{2}\dot{\phi}^2-V(\phi)-\frac{3}{8}H^2\phi^2 -\frac{1}{4}\dot{H}\phi^2-\frac{1}{2}H\phi\dot{\phi}} {\frac{1}{2}\dot{\phi}^2+V(\phi)+\frac{3}{8}H^2\phi^2}\,. \end{equation} In this case, it is easy to see that $w_\phi\ge -1$ when $\dot{\phi}^2\ge(\dot{H}\phi^2)/4+H\phi\dot{\phi}/2$, whereas $w_\phi<-1$ when $\dot{\phi}^2<(\dot{H}\phi^2)/4+H\phi\dot{\phi}/2$. The EoS of spinor dark energy crosses the phantom divide $w_{de}=-1$ when $\dot{\phi}^2=(\dot{H}\phi^2)/4+H\phi\dot{\phi}/2$.
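The crossing condition can be checked directly against Eq.~(\ref{eq8}); a quick numerical sketch (all input values are arbitrary illustrative assumptions):

```python
def w_phi(phidot, phi, H, Hdot, V):
    """Equation-of-state parameter of Eq. (8)."""
    p = (0.5 * phidot**2 - V - (3.0 / 8.0) * H**2 * phi**2
         - 0.25 * Hdot * phi**2 - 0.5 * H * phi * phidot)
    rho = 0.5 * phidot**2 + V + (3.0 / 8.0) * H**2 * phi**2
    return p / rho

phi, phidot, H, V = 1.3, 0.7, 0.9, 2.0   # arbitrary illustrative values
# Choose Hdot so that phidot^2 = Hdot*phi^2/4 + H*phi*phidot/2 holds exactly:
Hdot = 4.0 * (phidot**2 - 0.5 * H * phi * phidot) / phi**2
print(w_phi(phidot, phi, H, Hdot, V))    # -1.0 up to rounding: phantom divide
```

With that choice of $\dot{H}$ the last two terms of the numerator cancel $\frac{1}{2}\dot{\phi}^2$, so $p_\phi=-\rho_\phi$ identically, which is exactly the crossing condition stated above.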
This note is organized as follows. In Sec.~\ref{sec2}, we discuss the cosmological coincidence problem in the spinor dark energy models by using the dynamical system method. A brief summary is given in Sec.~\ref{sec3}. It is worth noting that in the present work, we merely consider the spinor dark energy with the correct pressure and energy density given in Eqs.~(\ref{eq6}) and~(\ref{eq7}) \cite{r39}. \section{Spinor dark energy and cosmological coincidence problem}\label{sec2} The cosmological coincidence problem~\cite{r1,r2,r3} asks why we are living in an epoch in which the dark energy density and the matter energy density are comparable. Since their densities scale differently with the expansion of the universe, some fine-tuning is required. Most dark energy models are plagued by this coincidence problem. However, this problem can be alleviated in these models via the method of scaling solutions. If there is a possible interaction between dark energy and matter, their evolution equations could be rewritten as a dynamical system~\cite{r30} (see also e.g.~\cite{r5,r9,r12,r14,r31,r32,r33,r34,r35}). There might be scaling attractors in this dynamical system, at which the densities of both dark energy and matter are non-vanishing constants. The universe will eventually enter these scaling attractors regardless of the initial conditions, and hence the coincidence problem could be alleviated without fine-tuning. This method works fairly well in most dark energy models (especially in the scalar field models). To our knowledge, there has been no attempt to do this in the spinor dark energy model. Let us have a try. \subsection{Dynamical system}\label{sec2a} We consider a flat FRW universe containing both spinor dark energy and background matter.
The background matter is described by a perfect fluid with barotropic EoS, namely \be{eq9} p_m=w_m\rho_m\equiv (\gamma-1)\rho_m\,, \end{equation} where the so-called barotropic index $\gamma$ is a positive constant. In particular, $\gamma=1$ and $4/3$ correspond to dust matter and radiation, respectively. Of course, the Friedmann equation and Raychaudhuri equation are given by \begin{eqnarray} &&H^2=\frac{\kappa^2}{3}\rho_{tot}= \frac{\kappa^2}{3}\left(\rho_\phi+\rho_m\right),\label{eq10}\\ &&\dot{H}=-\frac{\kappa^2}{2}\left(\rho_{tot}+p_{tot}\right) =-\frac{\kappa^2}{2}\left(\rho_\phi+ \rho_m+p_\phi+p_m\right),\label{eq11} \end{eqnarray} where $\kappa^2\equiv 8\pi G=M_{pl}^{-2}$ and $M_{pl}$ is the reduced Planck mass. We assume that spinor dark energy and background matter interact through a coupling term $Q$, according to \begin{eqnarray} &&\dot{\rho}_\phi+3H\left(\rho_\phi +p_\phi\right)=-Q\,,\label{eq12}\\ &&\dot{\rho}_m+3H\left(\rho_m+p_m\right)=Q\,,\label{eq13} \end{eqnarray} which preserves the total energy conservation equation $\dot{\rho}_{tot}+3H\left(\rho_{tot}+p_{tot}\right)=0$. Obviously, $Q=0$ corresponds to {\em no} interaction between spinor dark energy and background matter. Following e.g.~\cite{r5,r9,r12,r14,r31,r32,r33,r34,r35}, we introduce the following dimensionless variables \be{eq14} x\equiv\frac{\kappa\dot{\phi}}{\sqrt{6}H}\,,~~~~~~~ y\equiv\frac{\kappa\sqrt{V}}{\sqrt{3}H}\,,~~~~~~~ u\equiv\frac{\kappa\phi}{2\sqrt{2}}\,,~~~~~~~ v\equiv\frac{\kappa\sqrt{\rho_m}}{\sqrt{3}H}\,. \end{equation} Then, we can recast the Friedmann equation~(\ref{eq10}) as \be{eq15} x^2+y^2+u^2+v^2=1\,. \end{equation} From Eqs.~(\ref{eq10}), (\ref{eq11}) and (\ref{eq6}), (\ref{eq7}), it is easy to find that \be{eq16} s\equiv -\frac{\dot{H}}{H^2}=3x^2+su^2-\sqrt{3}\,xu+ \frac{3}{2}\gamma v^2, \end{equation} in which $s$ appears on both sides. One can solve Eq.~(\ref{eq16}) and get \be{eq17} s=\left(3x^2-\sqrt{3}\,xu+\frac{3}{2}\gamma v^2\right) \left(1-u^2\right)^{-1}.
\end{equation} With the help of Eqs.~(\ref{eq10}), (\ref{eq11}) and (\ref{eq6}), (\ref{eq7}), the evolution equations (\ref{eq12}) and (\ref{eq13}) can be rewritten as a dynamical system, namely \begin{eqnarray} &&x^\prime=(s-3)x+\frac{\sqrt{3}}{2}u -\frac{\kappa V_{,\phi}}{\sqrt{6}H^2}-Q_1\,,\label{eq18}\\ &&y^\prime=sy+\frac{x}{\sqrt{2}H} \frac{V_{,\phi}}{\sqrt{V}}\,,\label{eq19}\\ &&u^\prime=\frac{\sqrt{3}}{2}x\,,\label{eq20}\\ &&v^\prime=\left(s-\frac{3}{2}\gamma\right)v+Q_2\,,\label{eq21} \end{eqnarray} where \be{eq22} Q_1\equiv\frac{\kappa Q}{\sqrt{6}H^2\dot{\phi}}\,,~~~~~~~ Q_2\equiv\frac{vQ}{\,2H\rho_m}\,, \end{equation} a prime and the subscript ``$,\phi$'' denote derivatives with respect to $N\equiv\ln a$ and $\phi$, respectively; we have used the universal relation $f^\prime=H^{-1}\dot{f}$ for any function $f$. On the other hand, the fractional energy densities $\Omega_i\equiv (\kappa^2\rho_i)/(3H^2)$ of spinor dark energy and background matter are given by \be{eq23} \Omega_\phi=x^2+y^2+u^2,~~~~~~~ \Omega_m=v^2. \end{equation} The EoS of spinor dark energy reads \be{eq24} w_\phi=\frac{p_\phi}{\rho_\phi}= \frac{x^2-y^2-u^2+\frac{2}{3}su^2-\frac{2}{\sqrt{3}}\,xu} {x^2+y^2+u^2}\,. \end{equation} Eqs.~(\ref{eq18})---(\ref{eq21}) become an autonomous system when the potential $V(\phi)$ and the interaction term $Q$ are chosen to have suitable forms. In fact, we will consider the model with a power-law or exponential potential in the next subsections. In each model with a different potential, we consider four cases with various interaction forms between spinor dark energy and background matter. The first case is the one without interaction, i.e., $Q=0$.
The other three cases are taken as the most familiar interaction terms extensively considered in the literature, namely \begin{eqnarray*} &{\rm Case~(I)} &Q=0\,,\\ &{\rm Case~(II)} &Q=\alpha\kappa\rho_m\dot{\phi}\,,\\ &{\rm Case~(III)} &Q=3\beta H\rho_{tot} =3\beta H\left(\rho_\phi+\rho_m\right),\\ &{\rm Case~(IV)} &Q=3\eta H\rho_m\,, \end{eqnarray*} where $\alpha$, $\beta$ and $\eta$ are dimensionless constants. The interaction form Case~(II) arises from, for instance, string theory or scalar-tensor theory (including Brans-Dicke theory)~\cite{r31,r32,r33}. The interaction forms Case~(III)~\cite{r34} and Case~(IV)~\cite{r35} are phenomenologically proposed to alleviate the coincidence problem in other dark energy models. \subsection{Spinor dark energy with a power-law potential}\label{sec2b} In this subsection, we consider the spinor dark energy model with a power-law potential \be{eq25} V(\phi)=V_0\left(\kappa\phi\right)^n, \end{equation} where $n$ is a dimensionless constant. Actually, in most of the models with Elko spinor field~\cite{r21,r22,r23,r24,r25,r26,r27}, the potential is usually chosen to be, for instance, $\frac{1}{2}m^2\phi^2$ or $\alpha\phi^4$. It is easy to see that they are special cases of the power-law potential considered here. In this case, Eqs.~(\ref{eq18})---(\ref{eq21}) become \begin{eqnarray} &&x^\prime=(s-3)x+\frac{\sqrt{3}}{2}u -\frac{\sqrt{3}}{4}ny^2u^{-1}-Q_1\,,\label{eq26}\\ &&y^\prime=sy+\frac{\sqrt{3}}{4}nxyu^{-1},\label{eq27}\\ &&u^\prime=\frac{\sqrt{3}}{2}x\,,\label{eq28}\\ &&v^\prime=\left(s-\frac{3}{2}\gamma\right)v+Q_2\,.\label{eq29} \end{eqnarray} If $Q$ is given, we can obtain the critical points $(\bar{x},\bar{y},\bar{u},\bar{v})$ of the autonomous system by imposing the conditions $\bar{x}^\prime=\bar{y}^\prime=\bar{u}^\prime=\bar{v}^\prime=0$. Of course, they are subject to the Friedmann constraint Eq.~(\ref{eq15}), i.e., $\bar{x}^2+\bar{y}^2+\bar{u}^2+\bar{v}^2=1$.
On the other hand, by the definitions in Eq.~(\ref{eq14}), $\bar{x}$, $\bar{y}$, $\bar{u}$, $\bar{v}$ should be real, and $\bar{y}\ge 0$, $\bar{v}\ge 0$ are required. Here we consider the interaction forms $Q$ given at the end of Sec.~\ref{sec2a} one by one. In Case~(I) $Q=0$, the corresponding $Q_1=Q_2=0$. There is only one critical point $\{\bar{x}=0,\ \bar{y}=\sqrt{\frac{2}{2+n}},\ \bar{u}= \pm\sqrt{\frac{n}{2+n}},\ \bar{v}=0\}$. This is not a scaling solution because its $\Omega_m=\bar{v}^2=0$. Therefore, the coincidence problem persists. In Case~(II) $Q=\alpha\kappa\rho_m\dot{\phi}$, the corresponding $Q_1=\sqrt{\frac{3}{2}}\alpha v^2$, and $Q_2=\sqrt{\frac{3}{2}}\alpha xv$. There are two critical points. The first one is given by \be{eq30} \left\{\bar{x}=0,\ \bar{y}=0, \ \bar{u}=\frac{-1+\sqrt{1+8\alpha^2}}{2\sqrt{2}\,\alpha}, \ \bar{v}=\frac{1}{2}\sqrt{\frac{-1+\sqrt{1+8\alpha^2}} {\alpha^2}}\,\right\}, \end{equation} which is a scaling solution. The second one is $\{\bar{x}=0,\ \bar{y}=\sqrt{\frac{2}{2+n}},\ \bar{u}= \pm\sqrt{\frac{n}{2+n}},\ \bar{v}=0\}$, which is not a scaling solution. Finally, we find that in both Cases~(III) $Q=3\beta H\rho_{tot}$ and (IV) $Q=3\eta H\rho_m$, there is {\em no} critical point and hence, of course, there is {\em no} attractor. Therefore, the cosmological evolution trajectory completely depends on the initial conditions, and the coincidence problem is inevitable. The fine-tuning of initial conditions is required. So, the only hope to alleviate the coincidence problem relies on the sole scaling solution given in Eq.~(\ref{eq30}) for Case~(II) $Q=\alpha\kappa\rho_m\dot{\phi}$. However, it must be stable in order to be an attractor, which is necessary to alleviate the coincidence problem.
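One can verify numerically that the point of Eq.~(\ref{eq30}) satisfies both the Friedmann constraint and the fixed-point conditions; a sketch for illustrative parameter values ($\alpha$, $\gamma$, $n$ are our arbitrary choices):

```python
import math

alpha, gamma, n = 0.5, 1.0, 2.0                  # illustrative choices
q = math.sqrt(1.0 + 8.0 * alpha**2)

# Critical point of Eq. (30):
x, y = 0.0, 0.0
u = (-1.0 + q) / (2.0 * math.sqrt(2.0) * alpha)
v = 0.5 * math.sqrt((-1.0 + q) / alpha**2)

constraint = x**2 + y**2 + u**2 + v**2           # Friedmann constraint, Eq. (15)

s = (3*x**2 - math.sqrt(3)*x*u + 1.5*gamma*v**2) / (1 - u**2)   # Eq. (17)
Q1 = math.sqrt(1.5) * alpha * v**2               # Case (II) coupling
xprime = (s - 3)*x + math.sqrt(3)/2*u - math.sqrt(3)/4*n*y**2/u - Q1  # Eq. (26)
vprime = (s - 1.5*gamma)*v + math.sqrt(1.5)*alpha*x*v                 # Eq. (29)
# y' and u' vanish trivially here since x = y = 0.

print(constraint, xprime, vprime)   # ~1, ~0, ~0: a genuine critical point
```

Note also that at this point $s=3\gamma/2$, since $1-\bar{u}^2=\bar{v}^2$ when $\bar{x}=\bar{y}=0$.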
To study the stability of the critical points for the autonomous system Eqs.~(\ref{eq26})---(\ref{eq29}), we substitute linear perturbations $x\to\bar{x}+\delta x$, $y\to\bar{y}+\delta y$, $u\to\bar{u}+\delta u$, and $v\to\bar{v}+\delta v$ about the critical point $(\bar{x},\bar{y},\bar{u},\bar{v})$ into the autonomous system Eqs.~(\ref{eq26})---(\ref{eq29}) and linearize them. Because of the Friedmann constraint~(\ref{eq15}), there are only three independent evolution equations, namely \begin{eqnarray} &&\delta x^\prime=(\bar{s}-3)\delta x+\bar{x}\delta s+ \frac{\sqrt{3}}{2}\,\delta u- \frac{\sqrt{3}}{4}n\left(2\bar{y}\bar{u}^{-1}\delta y- \bar{y}^2 \bar{u}^{-2}\delta u\right)-\delta Q_1\,,\label{eq31}\\ &&\delta y^\prime=\bar{s}\delta y+\bar{y}\delta s +\frac{\sqrt{3}}{4}n\left[\bar{u}^{-1}\left(\bar{x}\delta y +\bar{y}\delta x\right)-\bar{x}\bar{y}\bar{u}^{-2}\delta u \right],\label{eq32}\\ &&\delta u^\prime=\frac{\sqrt{3}}{2}\,\delta x\,,\label{eq33} \end{eqnarray} where \begin{eqnarray} &&\bar{s}=\left[3\bar{x}^2-\sqrt{3}\,\bar{x}\bar{u}+ \frac{3}{2}\gamma\left(1-\bar{x}^2-\bar{y}^2-\bar{u}^2\right) \right]\left(1-\bar{u}^2\right)^{-1},\label{eq34}\\ &&\delta s=\left[2\bar{u}\bar{s}\delta u+6\bar{x}\delta x- \sqrt{3}\left(\bar{x}\delta u+\bar{u}\delta x\right)- 3\gamma\left(\bar{x}\delta x+\bar{y}\delta y+\bar{u}\delta u \right)\right]\left(1-\bar{u}^2\right)^{-1},\label{eq35} \end{eqnarray} and $\delta Q_1$ is the linear perturbation coming from $Q_1$. The three eigenvalues of the coefficient matrix of Eqs.~(\ref{eq31})---(\ref{eq33}) determine the stability of the critical point. For Case~(II) $Q=\alpha\kappa\rho_m\dot{\phi}$, the corresponding $\delta Q_1=-\sqrt{6}\,\alpha\left(\bar{x}\delta x +\bar{y}\delta y+\bar{u}\delta u\right)$. 
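For illustration, the coefficient matrix can be evaluated explicitly at the critical point of Eq.~(\ref{eq30}), where $\bar{x}=\bar{y}=0$ simplifies Eqs.~(\ref{eq31})---(\ref{eq35}) considerably (all $\bar{x}\delta s$, $\bar{y}\delta s$ and $n\bar{y}$ terms drop, and $\bar{s}=3\gamma/2$); a sketch with our illustrative parameter choices:

```python
import math

alpha, gamma = 0.5, 1.0                            # illustrative choices
q = math.sqrt(1.0 + 8.0 * alpha**2)
u = (-1.0 + q) / (2.0 * math.sqrt(2.0) * alpha)    # u-bar from Eq. (30)
s_bar = 1.5 * gamma                                # Eq. (34) at x = y = 0

# At x = y = 0 the linearized system (31)-(33) with the Case (II)
# coupling reduces to the block-triangular form
#   dx' = (s-3) dx + (sqrt(3)/2 + sqrt(6) alpha u) du
#   dy' = s dy
#   du' = (sqrt(3)/2) dx
a = s_bar - 3.0
b = math.sqrt(3) / 2 + math.sqrt(6) * alpha * u
lam1 = s_bar                                       # decoupled dy-direction
disc = math.sqrt(a**2 + 4.0 * (math.sqrt(3) / 2) * b)
lam2 = 0.5 * (a - disc)
lam3 = 0.5 * (a + disc)
print(lam1, lam2, lam3)   # 1.5, ~-2.11, ~0.61: positive eigenvalues => unstable
```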
We find that the corresponding eigenvalues for the critical point~(\ref{eq30}) are given by \be{eq36} \hspace{-2.0mm} \left\{\frac{3\gamma}{2},\,\frac{1}{4}\left[3(\gamma-2)- \sqrt{12\sqrt{1+8\alpha^2}+\left[3(\gamma-2)\right]^2}\,\right], \,\frac{1}{4}\left[3(\gamma-2)+ \sqrt{12\sqrt{1+8\alpha^2}+\left[3(\gamma-2)\right]^2}\,\right] \right\}. \end{equation} Note that $3\gamma/2$ is positive, and the second and third eigenvalues are negative and positive, respectively. So, the critical point given in Eq.~(\ref{eq30}) is unstable, and hence it is {\em not} an attractor. Therefore, the hope to alleviate the coincidence problem is shattered. Given this failure for the models with a power-law potential, we turn to models with another potential. \subsection{Spinor dark energy with an exponential potential}\label{sec2c} In this subsection, we consider the spinor dark energy models with an exponential potential \be{eq37} V(\phi)=V_0\, e^{-\epsilon\kappa\phi}, \end{equation} where $\epsilon$ is a dimensionless constant. In this case, Eqs.~(\ref{eq18})---(\ref{eq21}) become \begin{eqnarray} &&x^\prime=(s-3)x+\frac{\sqrt{3}}{2}u +\sqrt{\frac{3}{2}}\,\epsilon y^2-Q_1\,,\label{eq38}\\ &&y^\prime=sy-\sqrt{\frac{3}{2}}\,\epsilon xy\,,\label{eq39}\\ &&u^\prime=\frac{\sqrt{3}}{2}x\,,\label{eq40}\\ &&v^\prime=\left(s-\frac{3}{2}\gamma\right)v+Q_2\,.\label{eq41} \end{eqnarray} If $Q$ is given, we can obtain the critical points $(\bar{x},\bar{y},\bar{u},\bar{v})$ of the autonomous system by imposing the conditions $\bar{x}^\prime=\bar{y}^\prime=\bar{u}^\prime=\bar{v}^\prime=0$. Of course, they are subject to the Friedmann constraint Eq.~(\ref{eq15}), i.e., $\bar{x}^2+\bar{y}^2+\bar{u}^2+\bar{v}^2=1$. On the other hand, by the definitions in Eq.~(\ref{eq14}), $\bar{x}$, $\bar{y}$, $\bar{u}$, $\bar{v}$ should be real, and $\bar{y}\ge 0$, $\bar{v}\ge 0$ are required. We consider the four cases of the interaction term $Q$ given at the end of Sec.~\ref{sec2a}.
Unfortunately, similar to the models with a power-law potential, we find that in both Cases~(III) and~(IV) there is {\em no} critical point and hence, of course, there is {\em no} attractor. So, the cosmological evolution trajectory completely depends on the initial conditions, and the coincidence problem is inevitable. The fine-tuning of initial conditions is required. In Case~(I), there are two critical points, i.e., $\{\bar{x}=0,\ \bar{y}=0,\ \bar{u}=0,\ \bar{v}=1\}$~and \be{eq42} \left\{\bar{x}=0,\ \bar{y}=\frac{1}{2} \sqrt{\frac{-1+\sqrt{1+8\epsilon^2}}{\epsilon^2}}, \ \bar{u}=\frac{1-\sqrt{1+8\epsilon^2}}{2\sqrt{2}\,\epsilon}, \ \bar{v}=0\right\}. \end{equation} Unfortunately, these two critical points are not scaling solutions. In Case~(II), there are also two critical points. The first one is the same as that given in Eq.~(\ref{eq30}), which is a scaling solution. The second one is the same as that given in Eq.~(\ref{eq42}), which is not a scaling solution. Again, the only hope to alleviate the coincidence problem relies on the sole scaling solution given in Eq.~(\ref{eq30}) for Case~(II) $Q=\alpha\kappa\rho_m\dot{\phi}$. To study the stability of the critical points for the autonomous system Eqs.~(\ref{eq38})---(\ref{eq41}), we substitute linear perturbations $x\to\bar{x}+\delta x$, $y\to\bar{y}+\delta y$, $u\to\bar{u}+\delta u$, and $v\to\bar{v}+\delta v$ about the critical point $(\bar{x},\bar{y},\bar{u},\bar{v})$ into the autonomous system Eqs.~(\ref{eq38})---(\ref{eq41}) and linearize them.
Because of the Friedmann constraint~(\ref{eq15}), there are only three independent evolution equations, namely \begin{eqnarray} &&\delta x^\prime=(\bar{s}-3)\delta x+\bar{x}\delta s+ \frac{\sqrt{3}}{2}\,\delta u+\sqrt{6}\,\epsilon\bar{y}\delta y -\delta Q_1\,,\label{eq43}\\ &&\delta y^\prime=\bar{s}\delta y+\bar{y}\delta s -\sqrt{\frac{3}{2}}\,\epsilon\left(\bar{x}\delta y+ \bar{y}\delta x\right),\label{eq44}\\ &&\delta u^\prime=\frac{\sqrt{3}}{2}\,\delta x\,,\label{eq45} \end{eqnarray} where $\bar{s}$ and $\delta s$ are given in Eqs.~(\ref{eq34}) and~(\ref{eq35}), respectively. $\delta Q_1$ is the linear perturbation coming from $Q_1$. The three eigenvalues of the coefficient matrix of Eqs.~(\ref{eq43})---(\ref{eq45}) determine the stability of the critical point. For Case~(II) $Q=\alpha\kappa\rho_m\dot{\phi}$, the corresponding $\delta Q_1=-\sqrt{6}\,\alpha\left(\bar{x}\delta x +\bar{y}\delta y+\bar{u}\delta u\right)$. We find that the corresponding eigenvalues for the critical point~(\ref{eq30}) are the same as those given in Eq.~(\ref{eq36}), in which the three eigenvalues are positive, negative, and positive, respectively. So, the critical point given in Eq.~(\ref{eq30}) is also unstable for the models with an exponential potential, and hence it is {\em not} an attractor. Therefore, the hope to alleviate the coincidence problem is shattered again. \section{Summary}\label{sec3} Recently, the so-called Elko spinor field has been proposed to be a candidate of dark energy. It is a non-standard spinor and has unusual properties. When the Elko spinor field is used in cosmology, its unusual properties could bring some interesting consequences. In this work, we discussed the cosmological coincidence problem in the spinor dark energy models by using the dynamical system method.
According to the results obtained in Sec.~\ref{sec2}, we should admit that in the spinor dark energy models with $p_\phi$ and $\rho_\phi$ given in Eqs.~(\ref{eq6}) and~(\ref{eq7}) coming from~\cite{r38}, it is a hard task to alleviate the coincidence problem~\cite{r39}. Nevertheless, it is still possible to find some suitable potentials $V(\phi)$ and interaction forms $Q$ to obtain the scaling attractors of the most general dynamical system~\cite{r40}, and hence the hope to alleviate the coincidence problem still exists, although this is a fairly hard task. Of course, there might be other smart methods different from the usual method used in most dark energy models to alleviate the coincidence problem. Anyway, our results obtained in the present work showed that the cosmological coincidence problem should be taken to heart in the investigations of spinor dark energy models. \section*{ACKNOWLEDGEMENTS} We thank the anonymous referees for quite useful suggestions, which helped us to improve this work. We are grateful to Professors Rong-Gen~Cai, Shuang~Nan~Zhang, Zong-Hong~Zhu and Jian-Min~Wang for helpful discussions. We also thank Minzi~Feng, as well as Hongsheng~Zhang, Xiao-Peng~Ma, Bo~Tang, for kind help and discussions. This work was supported in part by NSFC under Grant No.~10905005, the Excellent Young Scholars Research Fund of Beijing Institute of Technology, and the Fundamental Research Fund of Beijing Institute of Technology. \renewcommand{\baselinestretch}{1.1}
\section{\bf Introduction} The calculation of complex structures on homogeneous complex manifolds, and especially on Lie groups, is important from both the mathematical and the physical point of view. From the mathematical side, the classification of these manifolds is based on the determination of the possible complex structures. From the physical point of view, these structures play an important role in N=(2,2) supersymmetric sigma models \cite{G}. It is shown that N=(2,2) extended supersymmetry in the sigma model is equivalent to the existence of a biHermitian structure on the target manifold such that the complex structures are covariantly constant with respect to torsionful affine connections (see for example \cite{SL} and references therein). Furthermore, it is shown that the algebraic structures related to these biHermitian structures for N=(2,2) supersymmetric WZW models are the Manin triples \cite{Sp} \cite{L}. For these reasons, the calculation of complex and biHermitian structures on manifolds, especially on Lie groups, is important. Samelson \cite{Sa} showed that even dimensional compact Lie groups always admit an invariant complex structure. In the non-compact case, Morimoto \cite{Mo} showed that there always exist invariant complex structures on any even dimensional reductive Lie group. In \cite{S} and \cite{O}, complex structures on real four dimensional Lie algebras are classified. The method used in those articles was specific and seems not to be adequate for calculations in higher dimensions. Here we give a new method for this purpose, which can be applied to low dimensional Lie groups. In this method, by use of a non-coordinate basis, we first transform the Nijenhuis tensor on Lie groups into algebraic tensor relations on their Lie algebras. Then, by use of the adjoint representation, we rewrite these relations in matrix form. Finally, we solve these matrix relations by use of a Maple program. Here we perform this work for real four dimensional Lie algebras.
Our results for some algebras differ from, and complete, those of \cite{O}. Furthermore, the calculation of biHermitian structures for four dimensional Lie algebras is new. The paper is organized as follows.\\ In section 2, by use of a non-coordinate basis, we transform the Nijenhuis tensor relation into an algebraic tensor relation on Lie groups. Then, by use of the adjoint representation, we write these relations in matrix form. These relations can also be obtained from the definition of complex structures on Lie algebras as endomorphisms of them. Then, in section 3, by use of a Maple program, we solve these matrix relations to obtain the complex structures on real four dimensional Lie algebras. In this process, we apply the automorphism groups of real four dimensional Lie algebras to obtain non-equivalent complex structures (Table 1). We then compare our results with \cite{S} and \cite{O}. Note that here we use the Patera {\it et al.} \cite{P} classification of real four dimensional Lie algebras. The list of Lie algebras and their automorphism groups \cite{PaP} is given in the appendix. In section 4, we first transform the tensorial form of the biHermitian relations on Lie groups into algebraic tensorial relations on their Lie algebras. In this respect, we define the biHermitian structure on a Lie algebra independently and give a proposition for obtaining non-equivalent biHermitian structures. We then rewrite these relations in matrix form by use of the adjoint representation and solve them with a Maple program. Our results for biHermitian structures on real four dimensional Lie algebras are new. Some discussions are given in the conclusion. \section{\bf A brief review of complex structures on Lie groups} {\bf Definition 1:} {\it Let M be a differentiable manifold; the pair (M,J) is called an almost complex manifold if there exists a tensor field J of type (1,1) such that at each point p of M, $J_{p}^2=-1$.
Tensor field J is also called the almost complex structure.}\\ \hspace{-.7cm} {\bf Theorem:} {\it An almost complex structure J on a manifold M is integrable if and only if \begin{equation} N(X,Y)=0 \hspace{1cm}\forall X,Y \in \chi(M)\hspace{1mm}, \end{equation} where ${\chi(M)}$ is the set of vector fields on M and the Nijenhuis tensor N(X,Y) is given by \begin{equation} N(X,Y)=[X,Y]+J[JX,Y]+J[X,JY]-[JX,JY]\hspace{1mm}. \end{equation}} In the coordinate basis, i.e.\ the basis $\{{e_{\mu}=\frac{\partial}{\partial{x}^{\mu}}}\}$ and $\{{dx^{\mu}}\}$ for vectors and dual vectors (forms) respectively on M, the almost complex structure is expressed as $J=J_{\mu}\hspace{0cm}^{\nu} e_{\nu}\otimes dx^{\mu}$ and the integrability condition can be rewritten as follows: \begin{equation} N(X,Y)=X^{\kappa}Y^{\nu}[J_{\lambda}\hspace{0cm}^{\mu}(\partial_{\kappa}J_{\nu}\hspace{0cm}^{\lambda})+J_{\nu}\hspace{0cm}^{\lambda}(\partial_{\lambda}J_{\kappa}\hspace{0cm}^{\mu}) -J_{\lambda}\hspace{0cm}^{\mu}(\partial_{\nu}J_{\kappa}\hspace{0cm}^{\lambda})-J_{\kappa}\hspace{0cm}^{\lambda}(\partial_{\lambda}J_{\nu}\hspace{0cm}^{\mu})]e_{\mu}\hspace{1mm}=0. \end{equation} Meanwhile, the relation $J^2=-1$ can be rewritten as \begin{equation} J_{\lambda}\hspace{0cm}^{\mu}J_{\mu}\hspace{0cm}^{\nu}=-\delta_{\lambda}\hspace{0cm}^{\nu}. \end{equation} Furthermore, one can rewrite the above equations by use of the non-coordinate bases $\{{\hat{e}_{\alpha}}\}$ and $\{{\hat{\theta}^{\alpha}}\}$ for vectors and forms on M.
For these bases we have \begin{equation} \hat{e}_{\alpha}=e_{\alpha}\hspace{0cm}^{\mu}e_{\mu}\hspace{1cm}, \hspace{1cm}e_{\alpha}\hspace{0cm}^{\mu}\in GL(m,R)\hspace{1mm}, \end{equation} where, by requiring that $\{{\hat{e}_{\alpha}}\}$ be orthogonal, we have the following relation for the metric on M: \begin{equation} g_{\alpha\beta}=g(\hat{e}_{\alpha},\hat{e}_{\beta})=e_{\alpha}\hspace{0cm}^{\mu}\hspace{0cm}e_{\beta}\hspace{0cm}^{\nu}g_{\mu\nu}\hspace{1mm}, \end{equation} or \begin{equation} g_{\mu\nu}=e^{\alpha}\hspace{0cm}_{\mu}\hspace{0cm}e^{\beta}\hspace{0cm}_{\nu}g_{\alpha\beta}\hspace{1mm}, \end{equation} where $e^{\alpha}\hspace{0cm}_{\mu}$ is the inverse of the vielbein $e_{\alpha}\hspace{0cm}^{\mu}$ \begin{equation} e^{\alpha}\hspace{0cm}_{\mu}e_{\beta}\hspace{0cm}^{\mu}=\delta^{\alpha}\hspace{0cm}_{\beta}\hspace{2mm},\hspace{2mm} e^{\alpha}\hspace{0cm}_{\mu}e_{\alpha}\hspace{0cm}^{\nu}=\delta_{\mu}\hspace{0cm}^{\nu}\hspace{1mm}. \end{equation} The dual basis $\{{\hat{\theta}^{\alpha}}\}$ is defined by $<\hat{\theta}^{\alpha},\hat{e}_{\beta}>=\delta^{\alpha}\hspace{0cm}_{\beta}$, and we have $\hat{\theta}^{\alpha}=e^{\alpha}\hspace{0cm}_{\mu}dx^{\mu}$\hspace{1mm}. Furthermore, the vielbeins satisfy the following relation: \begin{equation} {f_{\alpha\beta}}^{\gamma}=e^{\gamma}\hspace{0cm}_{\nu}(e_{\alpha}\hspace{0cm}^{\mu} \partial_{\mu}e_{\beta}\hspace{0cm}^{\nu}-e_{\beta}\hspace{0cm}^{\mu}\partial_{\mu} e_{\alpha}\hspace{0cm}^{\nu})\hspace{1mm}, \end{equation} where, in the case that M is a Lie group manifold G, the $f_{\alpha\beta}\hspace{0cm}^{\gamma}$'s are the structure constants of the Lie algebra {\bf g} of G. Now, by use of these bases, the tensor $J=J_{\mu}\hspace{0cm}^{\nu} e_{\nu}\otimes dx^{\mu}$ can be rewritten as \begin{equation} J_{\mu}\hspace{0cm}^{\nu} =e^{\alpha}\hspace{0cm}_{\mu}J_{\alpha}\hspace{0cm}^{\beta}e_{\beta}\hspace{0cm}^{\nu}\hspace{1mm}, \end{equation} where $J_{\alpha}\hspace{0cm}^{\beta}$ is an endomorphism of {\bf g} i.e.
$J:{\bf g}\longrightarrow {\bf g}$\hspace{1mm}. Now, by applying this relation to $(4)$, we have the following matrix relation for the matrix $J_{\alpha}\hspace{0cm}^{\beta}$: \begin{equation} J^{2}=-I\hspace{1mm}. \end{equation} Furthermore, by applying relations $(9)$ and $(10)$ to the tensor equation $(3)$, using $(8)$, and assuming that $J_{\alpha}\hspace{0cm}^{\beta}$ and $g_{\alpha\beta}$ are independent of the coordinates of G, after some calculations we obtain the following algebraic form of $(3)$: \begin{equation} f_{\beta \alpha}\hspace{0cm}^{\gamma}+J_{\sigma}\hspace{0cm}^{\gamma}\hspace{1mm}J_{\alpha}\hspace{0cm}^{\delta}\hspace{1mm}f_{\beta \delta}\hspace{0cm}^{\sigma}-J_{\beta}\hspace{0cm}^{\sigma}\hspace{1mm}J_{\alpha}\hspace{0cm}^{\delta}\hspace{1mm}f_{\sigma \delta}\hspace{0cm}^{\gamma} +J_{\beta}\hspace{0cm}^{\sigma}\hspace{1mm}J_{\delta}\hspace{0cm}^{\gamma}\hspace{1mm}f_{\sigma \alpha}\hspace{0cm}^{\delta}=0. \end{equation} Finally, by use of the adjoint representations \begin{equation} f_{\beta\alpha}\hspace{0cm}^{\gamma}=-({\cal{Y}}^{\gamma})_{\beta\alpha}\hspace{4mm},\hspace{4mm}f_{\beta\alpha}\hspace{0cm}^{\gamma}=-(\chi_{\beta})_{\alpha}\hspace{0cm}^{\gamma}, \end{equation} these relations take the following matrix forms: \begin{equation} {\cal{Y}}^{\alpha}+J\hspace{1mm}{\cal{Y}}^{\beta}\hspace{1mm}J_{\beta}\hspace{0cm}^{\alpha}+J_{\beta}\hspace{0cm}^{\alpha}\hspace{1mm}{\cal{Y}}^{\beta}\hspace{1mm}J^{t}-J\hspace{1mm}{\cal{Y}}^{\alpha}\hspace{1mm}J^{t}=0 \hspace{1mm}, \end{equation} or \begin{equation} {\chi}_{\alpha}+J\hspace{1mm}{\chi}_{\alpha}\hspace{1mm}J+J_{\alpha}\hspace{0cm}^{\beta}\hspace{1mm}{\chi}_{\beta}\hspace{1mm}J-J_{\alpha}\hspace{0cm}^{\beta}\hspace{1mm}J\hspace{1mm}{\chi}_{\beta}=0 \hspace{1mm}.
\end{equation} Note that the above equation can also be obtained from the definition of a complex structure on the Lie algebra {\bf g} as follows.\\ \hspace{-8mm} {\bf Definition 2:} {\it An invariant complex structure on a real Lie algebra\hspace{1mm}{\bf g} is an endomorphism J of {\bf g} such that\\ \begin{equation} \hspace{-13.8cm}a) \hspace{1mm}J^{2}=-Id ,\\ \end{equation} \begin{equation} \hspace{-5.5cm}b) \hspace{1mm}[X,Y]+J[JX,Y]+J[X,JY]-[JX,JY]=0\hspace{1mm}, \hspace{1cm}\forall X,Y\in {\bf g}\hspace{1mm}. \end{equation}} Now, if we take $\{X_{\alpha}\}$ as a basis for the Lie algebra {\bf g} with the following structure constants: \begin{equation} [X_{\alpha},X_{\beta}]=f_{\alpha\beta}\hspace{0cm}^{\gamma} X_{\gamma}\hspace{1mm}, \end{equation} and use the following relation for J: \begin{equation} JX_{\alpha}=J_{\alpha}\hspace{0cm}^{\beta}X_{\beta}\hspace{1mm}, \end{equation} then relations $(16)$ and $(17)$ are rewritten as $(11),(14)$ or $(15)$. Now, to obtain the algebraic complex structures $J_{\alpha}\hspace{0cm}^{\beta}$ it is enough to solve equations $(11)$ and $(14)$ or $(15)$ simultaneously\hspace{1mm}. We perform this work for real four dimensional Lie algebras in the next section. \vspace{6mm} \section{\bf Calculation of complex structures on four dimensional real Lie algebras} Here we use the Patera {\it et al.} classification \cite{P} of four dimensional real Lie algebras. The commutation relations and the automorphism groups of these Lie algebras are given in the appendix{ \footnote{ Note that for the decomposable Lie algebras $(L_{3}\oplus R)$ we use the Bianchi classification for the real three dimensional Lie algebras $L_{3}$.}}. Now one can write the adjoint representation (i.e.\ ${\cal Y}$) for these Lie algebras and then solve the matrix relations $(11)$ and $(14)$ to obtain the complex structures. We perform this work by use of a Maple program.
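As a concrete illustration of this step, the following sketch (a minimal NumPy stand-in for the Maple computation; the function names and the choice of $II\oplus R$ as test algebra are ours) encodes the structure constants and checks that a candidate J from Table 1 satisfies both $(11)$ and the integrability condition $(17)$ on all basis pairs:

```python
import numpy as np

# Structure constants of II + R (Heisenberg + R): [e2, e3] = e1,
# i.e. f_{23}^1 = 1; stored 0-indexed as f[a, b, c] = f_{ab}^c.
f = np.zeros((4, 4, 4))
f[1, 2, 0], f[2, 1, 0] = 1.0, -1.0

# Candidate from Table 1: J e1 = -e4, J e2 = e3 (rows encode J_alpha^beta,
# completed by J e3 = -e2, J e4 = e1 so that J^2 = -I).
J = np.zeros((4, 4))
J[0, 3], J[3, 0], J[1, 2], J[2, 1] = -1.0, 1.0, 1.0, -1.0

assert np.allclose(J @ J, -np.eye(4))          # relation (11)

def bracket(x, y):
    """[x, y]^c = x^a y^b f_{ab}^c for component vectors x, y."""
    return np.einsum('a,b,abc->c', x, y, f)

def nijenhuis(x, y):
    """N(X,Y) = [X,Y] + J[JX,Y] + J[X,JY] - [JX,JY], relation (17);
    with rows of J as J_alpha^beta, J acts on components as J^T."""
    Jx, Jy = J.T @ x, J.T @ y
    return (bracket(x, y) + J.T @ bracket(Jx, y)
            + J.T @ bracket(x, Jy) - bracket(Jx, Jy))

E = np.eye(4)
assert all(np.allclose(nijenhuis(E[i], E[j]), 0)
           for i in range(4) for j in range(4))
print("J is an integrable complex structure on II + R")
```

Running the same routine over a symbolically parametrized J reproduces the kind of polynomial system that the Maple program solves.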
Note that in this process one can obtain equivalent complex structures; to avoid these and obtain the non-equivalent complex structures we use the following proposition: \bigskip {\it{\bf Proposition 1\cite{O}} : Two complex structures $J_{1}$ and $J_{2}$ of a Lie algebra {\bf g} are equivalent if there exists an element A of the automorphism group of the Lie algebra {\bf g} (Aut({\bf g})) such that:} \begin{equation} J_{2}=A\hspace{1mm} J_{1} \hspace{1mm} A^{-1}\hspace{1mm}. \end{equation} In this way, we perform this work and obtain all non-equivalent complex structures on four dimensional real Lie algebras. The results are classified in Table 1{ \footnote{Note that in this table the basis is denoted by $\{e_{\alpha}\}$ instead of $\{X_{\alpha}\}$.}}. We see that 21 out of 30 real four dimensional Lie algebras have complex structures. To compare these results with those of \cite{O}, we must first obtain the isomorphism relations between the four dimensional real Lie algebras presented in \cite{P} and those presented in \cite{O}.
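Before turning to this comparison, we note that Proposition 1 is also easy to test mechanically. The sketch below (illustrative only; the numerical automorphism is one concrete member of the $II\oplus R$ family listed in Table A, with parameter values of our own choosing) conjugates a complex structure from Table 1 and verifies that $A\hspace{1mm}J\hspace{1mm}A^{-1}$ again solves $(11)$ and the integrability condition:

```python
import numpy as np

# Structure constants of II + R ([e2, e3] = e1), 0-indexed: f[a, b, c] = f_{ab}^c
f = np.zeros((4, 4, 4))
f[1, 2, 0], f[2, 1, 0] = 1.0, -1.0

def bracket(x, y):
    return np.einsum('a,b,abc->c', x, y, f)

def is_complex_structure(J):
    """Check J^2 = -I (relation (11)) and N(X,Y) = 0 (relation (17))."""
    if not np.allclose(J @ J, -np.eye(4)):
        return False
    E = np.eye(4)
    for x in E:
        for y in E:
            Jx, Jy = J.T @ x, J.T @ y
            N = (bracket(x, y) + J.T @ bracket(Jx, y)
                 + J.T @ bracket(x, Jy) - bracket(Jx, Jy))
            if not np.allclose(N, 0):
                return False
    return True

# J from Table 1: J e1 = -e4, J e2 = e3 (rows are J_alpha^beta)
J1 = np.zeros((4, 4))
J1[0, 3], J1[3, 0], J1[1, 2], J1[2, 1] = -1.0, 1.0, 1.0, -1.0

# A concrete automorphism of II + R from the family in Table A
# (a2 = a7 = 1, a3 = a6 = 0, a1 = a5 = 2, a4 = a8 = a9 = 0, a10 = 3)
A = np.array([[1., 0., 0., 0.],
              [2., 1., 0., 0.],
              [2., 0., 1., 0.],
              [0., 0., 0., 3.]])

J2 = A @ J1 @ np.linalg.inv(A)   # Proposition 1: an equivalent structure
assert is_complex_structure(J1) and is_complex_structure(J2)
print("A J1 A^{-1} is again an integrable complex structure")
```

Such a check does not by itself prove two structures inequivalent, since that requires ranging over the whole parametrized automorphism group, which is what the symbolic computation does.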
According to the calculations \cite{O} and \cite{A} we have isomorphism relations as the following table: \begin{center} \small{ \begin{tabular}{|c|c|c|c|c|c|c| c|c|c|c|c|c|}\hline $\begin{array}{c} \vspace{-.3cm} \\ 4A_{1} \\ \vspace{-.3cm} \end{array}$ &$A_{4,1}$& ${A^a_{4,2}}$ & $A_{4,3}$ & $A _{4,4}$ & ${A^{a, b}_{4,5}}$ & ${A^{a, b}_{4,6}}$ & $A _{4,7}$ & $A _{4,8}$ &${A^b_{4,9}}$ & $A_{4,10}$ & ${A^a_{4,11}}$ & $A_{4,12}$ \\ \hline $\begin{array}{c} \vspace{-.3cm} \\ \mathfrak a_{4} \\ \vspace{-.3cm} \end{array}$ &$\mathfrak n_4$&$\mathfrak r_{4,a}$ & $\mathfrak r_{4,0}$ & $\mathfrak r_{4}$ & $\mathfrak r_{4,a,b}$ & $\mathfrak r_{4,a,b}'$ & $\mathfrak h_{4}$ & $\mathfrak d _4$& $\mathfrak d_{4,1/1+b}$ & $\mathfrak d_{4,0}'$ & $\mathfrak d_{4,a}'$ & $\mathfrak a\mathfrak f\mathfrak f(\mathbb{C})$\\ \hline \end{tabular} } \end{center} \begin{center} \small{ \begin{tabular}{|c|c|c|c|c|c|c| c|c|c|c|}\hline $\begin{array}{c} \vspace{-.3cm} \\A_{2} \oplus A_{2} \\ \vspace{-.3cm} \end{array}$ & $II\oplus R $ & $III\oplus R$ & $IV\oplus R$ & $V\oplus R$ & $VI_{0}\oplus R$ & $VI_{a}\oplus R$ & $VII_{0}\oplus R$ &$VII_{a}\oplus R$ \\ &&&&&&$(a\neq 0,1)$&&$(a\neq 0)$ \\ \hline $\begin{array}{c} \vspace{-.3cm} \\ \mathfrak r_2\mathfrak r_2 \\ \vspace{-.3cm} \end{array}$ & $\mathfrak r\mathfrak h_{3}$ & $\mathfrak r\mathfrak r_{3,0}$ & $\mathfrak r\mathfrak r_{3}$ & $\mathfrak r\mathfrak r_{3,1}$ & $\mathfrak r\mathfrak r_{3,-1}$ & $\mathfrak r\mathfrak r_{3,a}$ & $\mathfrak r\mathfrak r'_{3,0}$ & $\mathfrak r\mathfrak r'_{3,a}$ \\ \hline \end{tabular} } \end{center} In this respect one can see that in \cite{O} for Lie algebra $VII_{0}+R$ one complex structure is obtained but according to our calculation this Lie algebra has two non-equivalent complex structures.\\ For non solvable Lie algebras $VIII+R$ and $IX+R$ we obtain complex structures.\\In \cite{O}, for Lie algebra $A_{4,8}$, two complex structures are obtained but here we obtain four complex structures to this 
Lie algebra.\\ For Lie algebra $A_{4,10}$ two complex structures are obtained in \cite{O} but according to table 1,we obtain four complex structures for this Lie algebras.\\ In \cite{O}, for Lie algebra $A_{4,12}$, two complex structures are obtained but here we obtain three complex structures for this Lie algebra.\\ The results for other Lie algebras are the same as \cite{O}. \newpage \vspace{10mm} \begin{center} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\small{TABLE 1 : Complex structures on four dimensional real Lie algebras }}\\ \hline \hline Lie Algebra & Complex Structures \\ \hline \hline $III\oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{1}-e_{4} $ \\ \hline $A_{2}\oplus A_{2}$ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=-e_{4} $ \\ \hline $II \oplus R$ & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $ V \oplus R $ & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $VII_{0}\oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=-e_{4} $\\ \cline {2-2} & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=-e_{4} $ \\ \hline $VII_{a}\oplus R$ & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $\\ \cline {2-2} \small$(a\neq0)$ & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{3} $ \\ \hline $VIII\oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=pe_{3}-{(1+p^2)}e_{4} \hspace{10mm} (p\in\mathbb{R}) $\\ \hline $IX \oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=pe_{3}-{(1+p^2)}e_{4} \hspace{10mm} (p\in\mathbb{R}) $\\ \hline $A^1_{4,2}$ & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{4} $ \\ \hline $A^{a,a}_{4,5}$ \small($-1\leq a < 1, a\neq 0)$ &$Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $A^{a,1}_{4,5}$ \small$(-1\leq a < 1, a\neq 0)$ & $Je_{1}=e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=e_{4} $ \\ \hline $A^{1,1}_{4,5}$ & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $ \\ \hline $A^{a,b}_{4,6}$ & $Je_{1}=e_{4} \hspace{2mm},\hspace{2mm} 
Je_{2}=e_{3} $\\ \cline {2-2} \small$(a\neq0, b\geq0)$ & $Je_{1}=e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{3} $ \\ \hline $A_{4,7}$ & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=-e_{4} $ \\ \hline $A_{4,8}$ & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{4} $\\ \cline {2-2} & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $ \\ \cline {2-2} & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{2}+e_{4} $ \\ \cline {2-2} & $Je_{1}=-(e_{1}+2e_{3}) \hspace{2mm},\hspace{2mm} Je_{2}=-(e_{1}+e_{2})-2(e_{3}+e_{4}) $ \\ \hline $A^{b}_{4,9}$ & $Je_{1}=-be_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{4} $\\ \cline {2-2} \small$(0<\mid b \mid<1)$ & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $ \\ \hline $A^{1}_{4,9}$ & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $\\ \cline {2-2} & $Je_{1}=e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{3}$ \\ \cline {2-2} & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{3} $ \\ \hline $A^{0}_{4,9}$ & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $ \\ \hline $A_{4,10}$ & $Je_{2}=-e_{1}+e_{3}\hspace{2mm},\hspace{2mm} Je_{3}=-e_{1}-e_{2}+e_{3}+e_{4}$\\ \cline {2-2} & $ Je_{2}=e_{1}+e_{3}\hspace{2mm},\hspace{2mm} Je_{3}=-e_{1}-e_{2}-e_{3}+e_{4}$\\ \cline {2-2} & $Je_{2}=-e_{1}-e_{3}\hspace{2mm},\hspace{2mm} Je_{3}=e_{1}+e_{2}+e_{3}-e_{4}$\\ \cline {2-2} & $Je_{2}=-e_{1}-e_{3}\hspace{2mm},\hspace{2mm} Je_{3}=-e_{1}+e_{2}-e_{3}+e_{4}$\\ \hline $A^{a}_{4,11}$ & $Je_{1}=e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{3} $\\ \cline {2-2} \small$(a>0)$ & $Je_{1}=e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3}$ \\ \cline {2-2} & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{3} $ \\ \cline {2-2} & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $A_{4,12}$ & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{4} $\\ \cline {2-2} & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{4}$ \\ \cline {2-2} & $Je_{1}=e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline \end{tabular} \end{center} \newpage 
\section{\bf BiHermitian structures on four dimensional real Lie algebras} {\bf Definition 3 \cite{G},\cite{SL}:} {\it Let the manifold M have two complex structures $J_{\pm}$ such that the metric is Hermitian with respect to both of them, i.e. \begin{equation} J_{\pm}^{2}=-1\hspace{1mm}, \end{equation} \begin{equation} {N_{\mu\nu}}^{\kappa}(J_{\pm})=0\hspace{1mm}, \end{equation} \begin{equation} J_{\pm\mu}\hspace{0cm}^{\lambda}\hspace{1mm} g_{\lambda\eta}\hspace{1mm}J_{\pm\nu}\hspace{0cm}^{\eta}=g_{\mu\nu}, \end{equation} where $g_{\mu\nu}$ is the metric on M, and furthermore let these complex structures be covariantly constant with respect to certain connections $\Gamma^{\pm}$, respectively, \begin{equation} \bigtriangledown^{\pm}_{\mu}\hspace{2mm}J_{\pm\nu}\hspace{0cm}^{\lambda}\equiv{{J_{\pm\nu}\hspace{0cm}^{\lambda}}_{,\mu}} +{\Gamma^{\pm}_{\mu\rho}}^{\lambda}J_{\pm\nu}\hspace{0cm}^{\rho}-{\Gamma^{\pm}_{\mu\nu}}^{\rho}J_{\pm\rho}\hspace{0cm}^{\lambda}=0, \end{equation} such that \begin{equation} {\Gamma^{\pm}_{\mu\nu}}^{\lambda}={\Gamma_{\mu\nu}}^{\lambda} \pm {T_{\mu\nu}}^{\lambda}, \end{equation} where ${\Gamma_{\mu\nu}}^{\lambda}$ is the Christoffel connection for the metric g and the torsion is given by \begin{equation} {T_{\mu\nu}}^{\lambda}=H_{\mu\nu\eta}g^{\eta\lambda}, \end{equation} where $H_{\mu\nu\eta}$ is an antisymmetric tensor; then M is said to have a biHermitian structure, denoted by $(M,g,J_{\pm})$.} \vspace{.5cm}\\ Using $(24)$, the integrability condition $(22)$ may be rewritten in the following form \cite{SL}: \begin{equation} H_{\delta\nu\lambda}= J_{\pm\delta}\hspace{0cm}^{\sigma} J_{\pm\nu}\hspace{0cm}^{\rho}H_{\sigma\rho\lambda}+J_{\pm\delta}\hspace{0cm}^{\rho} J_{\pm\lambda}\hspace{0cm}^{\sigma}H_{\sigma\rho\nu}+J_{\pm\nu}\hspace{0cm}^{\sigma} J_{\pm\lambda}\hspace{0cm}^{\rho}H_{\sigma\rho\delta}\hspace{1mm}.
\end{equation} Furthermore, by introducing the K\"{a}hler forms \begin{equation} \omega_{\pm\mu\nu}\equiv g_{\mu\lambda}\hspace{1mm} J_{\pm\nu}\hspace{0cm}^{\lambda}, \end{equation} and by use of $(24)$ one can find \begin{equation} (d\omega_{\pm})_{\rho\mu\nu}=\pm(H_{\sigma\rho\mu}J^{\sigma}_{\pm\nu}+H_{\sigma\mu\nu}J^{\sigma}_{\pm\rho}+H_{\sigma\nu\rho}J^{\sigma}_{\pm\mu}), \end{equation} where \begin{equation} (d\omega_{\pm})_{\lambda\sigma\gamma}=\frac{1}{2}(\partial_{\lambda}\omega_{\pm\sigma\gamma}+\partial_{\sigma}\omega_{\pm\gamma\lambda}+\partial_{\gamma}\omega_{\pm\lambda\sigma}). \end{equation} Finally, by use of $(28)$ and $(29)$ one can find \begin{equation} H_{\mu\nu\rho}=-J^{\lambda}_{+\mu}J^{\sigma}_{+\nu}J^{\gamma}_{+\rho}(d\omega_{+})_{\lambda\sigma\gamma}=-J^{\lambda}_{-\mu}J^{\sigma}_{-\nu}J^{\gamma}_{-\rho}(d\omega_{-})_{\lambda\sigma\gamma}. \end{equation} In this respect, the target manifold $(M,g,J_{\pm})$ is said to have a biHermitian structure if the two Hermitian complex structures $J_{\pm}$ satisfy the relation $(31)$ (i.e.\ the relation between $(J_{+},\omega_{+})$ and $(J_{-},\omega_{-})$) defining the torsion H.
Now, for the case where M is a Lie group G, in a similar way to section 2 one can transform relations $(21)-(24)$ and $(27)$ into algebraic relations by using relations $(7)-(9)$ and the following relations: \begin{equation} H_{\mu\nu\rho}=\frac{1}{2} L^{\alpha}\hspace{0cm}_{\mu}L^{\beta}\hspace{0cm}_{\nu}L^{\gamma}\hspace{0cm}_{\rho}f_{\alpha\beta\gamma}=\frac{1}{2} R^{\alpha}\hspace{0cm}_{\mu}R^{\beta}\hspace{0cm}_{\nu}R^{\gamma}\hspace{0cm}_{\rho}f_{\alpha\beta\gamma}, \end{equation} \begin{equation} J_{+\mu}\hspace{0cm}^{\nu}=R^{\alpha}\hspace{0cm}_{\mu}{J_{\alpha}}\hspace{0cm}^{\beta}R_{\beta}\hspace{0cm}^{\nu}\hspace{1mm},\hspace{1mm}J_{-\mu}\hspace{0cm}^{\nu}=L^{\alpha}\hspace{0cm}_{\mu}{J_{\alpha}}\hspace{0cm}^{\beta}L_{\beta}\hspace{0cm}^{\nu}, \end{equation} where $L^{\alpha}\hspace{0cm}_{\mu}( R^{\alpha}\hspace{0cm}_{\mu})$ and $L_{\beta}\hspace{0cm}^{\nu}(R_{\beta}\hspace{0cm}^{\nu})$ are the left (right) invariant vielbeins and their inverses, respectively. Now, by using these relations, $(23)$ and $(27)$ transform into the following matrix relations: \begin{equation} J\hspace{1mm}g\hspace{1mm}J^{t}=g, \end{equation} \begin{equation} H_{\alpha}= J (H_{\beta} {J_{\alpha}}\hspace{0cm}^{\beta}) + JH_{\alpha}J^{t}+(H_{\beta} {J_{\alpha}}\hspace{0cm}^{\beta}) J^{t}, \end{equation} where $(H_{\alpha})_{\beta\gamma}={H_{\alpha}}\hspace{0cm}_{\beta\gamma}$. Furthermore, by using the following relations\footnote{Note that in these relations all algebraic (target) indices are lowered and raised by $g_{\alpha\beta}(g_{\mu\nu})$; furthermore, these indices transform into each other by $L_{\alpha}\hspace{0cm}^{\mu}(R_{\alpha}\hspace{0cm}^{\mu})$ or $L^{\alpha}\hspace{0cm}_{\mu}(R^{\alpha}\hspace{0cm}_{\mu})$. The symmetrization notation has the following form: $f_{\alpha}\hspace{0cm}^{(\rho\mu)}=f_{\alpha}\hspace{0cm}^{\rho\mu}+f_{\alpha}\hspace{0cm}^{\mu\rho}$.}: \begin{equation} \bigtriangledown^{\rho}{L_{\alpha}}\hspace{0cm}^{\mu}=-\frac{1}{2}(f_{\alpha}\hspace{0cm}^{(\rho\mu)}+{f^{\rho\mu}}\hspace{0cm}_{\alpha}+T_{\alpha}\hspace{0cm}^{(\rho\mu)}+{T^{\rho\mu}}\hspace{0cm}_{\alpha} +{L_{\beta}}\hspace{0cm}^{\rho}{L_{\gamma}}\hspace{0cm}^{\mu}{\bigtriangledown}_{\alpha}g^{\beta\gamma}+L^{\beta\rho}\bigtriangledown^{\mu}g_{\alpha\beta}-L^{\beta\mu}\bigtriangledown^{\rho}g_{\alpha\beta}), \end{equation} \begin{equation} \bigtriangledown^{\rho}{R_{\alpha}}\hspace{0cm}^{\mu}=-\frac{1}{2}(-f_{\alpha}\hspace{0cm}^{(\rho\mu)}-{f^{\rho\mu}}\hspace{0cm}_{\alpha}+T_{\alpha}\hspace{0cm}^{(\rho\mu)}+{T^{\rho\mu}}\hspace{0cm}_{\alpha} +{R_{\beta}}\hspace{0cm}^{\rho}{R_{\gamma}}\hspace{0cm}^{\mu}{\bigtriangledown}_{\alpha}g^{\beta\gamma}+R^{\beta\rho}\bigtriangledown^{\mu}g_{\alpha\beta}-R^{\beta\mu}\bigtriangledown^{\rho}g_{\alpha\beta}), \end{equation} and by assuming that $g_{\alpha\beta}$ is independent of the coordinates, the relation $(24)$ transforms into the following algebraic relation{\footnote{This relation can also be obtained from the algebraic form of $(31)$. }}: \begin{equation} J(H_{\alpha}-\chi_{\alpha}g) =(J(H_{\alpha}-\chi_{\alpha}g))^{t}. \end{equation} Note that the metric $g_{\alpha\beta}$ is an ad-invariant metric on the Lie algebra ${\bf g}$, i.e.\ we have \begin{equation} \langle{X_{\alpha},X_{\beta}}\rangle=g_{\alpha\beta}, \end{equation} \begin{equation} \langle{X_{\alpha},[X_{\beta},X_{\gamma}]}\rangle=\langle{[X_{\alpha},X_{\beta}],X_{\gamma}}\rangle, \end{equation} or, in matrix notation, \begin{equation} \chi_{\alpha}g=-(\chi_{\alpha}g)^{t}. \end{equation} Now, one can obtain the biHermitian structures on Lie algebras by solving relations $(11),(14),(34),(35),(38)$ and $(41)$ simultaneously.
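As a quick sanity check of the ad-invariance condition $(41)$ (again a NumPy sketch in place of the Maple computation; the encoding of the adjoint matrices follows relation $(13)$), one can verify that the metric $g=\beta I$ listed in Table 2 for $IX\oplus R$ indeed makes every $\chi_{\alpha}\hspace{1mm}g$ antisymmetric:

```python
import numpy as np

# Structure constants of IX + R (su(2) + R), from Table A:
# f_{23}^1 = f_{31}^2 = f_{12}^3 = 1, stored 0-indexed as f[a, b, c] = f_{ab}^c.
f = np.zeros((4, 4, 4))
for a, b, c in [(1, 2, 0), (2, 0, 1), (0, 1, 2)]:
    f[a, b, c], f[b, a, c] = 1.0, -1.0

# Adjoint matrices, relation (13): (chi_alpha)_beta^gamma = -f_{alpha beta}^gamma
chi = [-f[a] for a in range(4)]

beta = 2.5                    # any nonzero value, as in Table 2
g = beta * np.eye(4)          # candidate ad-invariant metric on IX + R

# Relation (41): chi_alpha g = -(chi_alpha g)^t for every alpha
assert all(np.allclose(c @ g, -(c @ g).T) for c in chi)
print("g = beta*I satisfies the ad-invariance condition (41) on IX + R")
```

The same test, with g left symbolic, yields the linear constraints on $g_{\alpha\beta}$ that enter the simultaneous system described above.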
These relations can be applied as a definition of an algebraic biHermitian structure on the Lie algebra {\bf g}:\\ {\bf Definition 4:} {\it If there exist an endomorphism $J:\bf g \rightarrow \bf g$ of a Lie algebra with ad-invariant metric g and an antisymmetric bilinear map $H:\bf g\otimes \bf g \rightarrow \bf g$ such that the relations $(11),(14),(34),(35),(38)$ and $(41)$ are satisfied, then we have a biHermitian structure $(J,g,H)$ on {\bf g}.}\\ Note that relation $(35)$ is equivalent to the matrix relation of the integrability condition, i.e.\ relation $(14)$. For this reason, it is better first to obtain the algebraic complex structures J, then solve relations $(34),(38)$ and $(41)$, and finally check them in $(35)$. We perform this work for real four dimensional Lie algebras by use of a Maple program. Note that, as in the case of complex structures, to obtain non-equivalent biHermitian structures we propose the following proposition: \\ {\it{\bf Proposition 2} : Two biHermitian structures $(J,g,H)$ and $(J^{'},g^{'},H^{'})$ of a Lie algebra {\bf g} are equivalent if there exists an element A of the automorphism group of the Lie algebra {\bf g} (Aut({\bf g})) such that: \begin{equation} J^{'}=A J A^{-1}, \end{equation} \begin{equation} g^{'}=A g A^{t}, \end{equation} \begin{equation} H^{'}_{\alpha}=A(H_{\beta} A_{\alpha}\hspace{0cm}^{\beta})A^{t}. \end{equation}} {\bf Proof}: {\it By use of $(14),(34)$ and $(35)$ one can see that if $(J,g,H)$ is a biHermitian structure, then $(J^{'},g^{'},H^{'})$ is also a biHermitian structure and satisfies those relations}.
\vspace{0.5cm}\\ Note that in the case that $f_{\beta\gamma}\hspace{0cm}^{\alpha}=H_{\delta\beta\gamma}g^{\delta\alpha}$, or $H$ is isomorphic with $f$, i.e.\ if there exists an isomorphism matrix $C$ such that \begin{equation} C Y^{\alpha} C^{t}= \tilde{Y}^{\beta } C_{\beta}\hspace{0cm}^{\alpha}, \end{equation} where $(Y^{\alpha})_{\beta\gamma}=-f_{\beta\gamma}\hspace{0cm}^{\alpha}$ and $({\tilde{Y}}^{\alpha})_{\beta\gamma}=-H_{\delta\beta\gamma}g^{\delta\alpha}$; then $(J,g,H)$ gives a Manin triple structure on ${\bf g}$ \cite{L}. In this way, the biHermitian structures on real four dimensional Lie algebras are classified in the following table. Note that, according to Table 2, for the Lie algebra $A_{4,8}$ we have two non-equivalent biHermitian structures $(J,g,H)$, where the second biHermitian structure gives the Manin triple structure of $A_{4,8}$ \cite{L} (i.e.\ $A_{4,8}$ is a Manin triple of two dimensional Lie bialgebras (type B and semiabelian) \cite{Sn}) for the following values of the parameters: $$ c_1=c_2=c_3=c_4=c_5=c_6=c_7=c_9=c_{10}=c_{11}=c_{13}=c_{14}=0\hspace{1mm},\hspace{1mm} c_{12}=c_{15}=-1. $$ For the Lie algebra $VIII \oplus R$ there is one biHermitian structure, where this structure for the values $$ d_1=d_2=d_4=d_5=d_7=d_8=d_{10}=d_{11}=d_{12}=d_{14}=d_{15}=d_{16}=0\hspace{1mm},\hspace{1mm} d_9\neq 0\hspace{1mm},\hspace{2mm}d_3=-d_6=\alpha, $$ is isomorphic with the two dimensional Lie bialgebra of type A \cite{Sn}. There exists one biHermitian structure for the Lie algebra $IX \oplus R$. The results are given in Table 2. Note that the isomorphism relation $(45)$ (i.e.\ whether a biHermitian structure gives a Manin triple) is independent of the choice of a particular biHermitian structure from an equivalence class of biHermitian structures.
In this way if relation $(45)$ holds ; then by use of $\tilde{Y}^{'\alpha}=-H^{'}_{\delta} g^{'\delta\alpha}$ and relations $(44)$ and $(45)$ one can show that \begin{equation} (AC) Y^{\gamma} (AC)^{t}=\tilde{Y}^{'\alpha} (AC)_{\alpha}\hspace{0cm}^{\gamma}. \end{equation} \begin{center} \begin{tabular}{|c|c|c|c|} \multicolumn{4}{c}{TABLE 2 : biHermitian structures on four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Complex Structures & $g$ & antisymmetric tensor \\ \hline \hline $A_{4,8} $&&& $H_{1}=\left( \begin{array}{cccc} 0 & b_{1} & 1 & -b_{2} \\ -b_{1} & 0 & b_{2} & 1 \\ -1 & -b_{2} & 0 & b_{3} \\ b_{2} & -1 & -b_{3} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ \end{array} \right)$ &$g=\left( \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array} \right)$& $H_{2}=\left( \begin{array}{cccc} 0 & b_{4} & b_{5} & -b_{6} \\ -b_{4} & 0 & b_{6} & b_{5} \\ -b_{5} & -b_{6} & 0 & b_{7} \\ b_{6} & -b_{5} & -b_{7} & 0 \\ \end{array} \right)$\\ &&&$H_{3}=\left( \begin{array}{cccc} 0 & b_{8} & b_{9} & -b_{10} \\ -b_{8} & 0 & b_{10} & 1+b_{9} \\ -b_{9} & -b_{10} & 0 & b_{11} \\ b_{10} & -1-b_{9} & -b_{11} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & b_{12} & b_{13} & -1-b_{14} \\ -b_{12} & 0 & b_{14} & b_{13} \\ -b_{13} & -b_{14} & 0 & b_{15} \\ 1+b_{14} & -b_{13} & -b_{15} & 0 \\ \end{array} \right)$\\ \cline {2-4} &&& $H_{1}=\left( \begin{array}{cccc} 0 & c_{1} & c_{2} & c_{3} \\ -c_{1} & 0 & c_{3} & c_{4} \\ -c_{2} & -c_{3} & 0 & c_{1} \\ -c_{3} & -c_{4} & -c_{1} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right)$&$g=\left( \begin{array}{cccc} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ \end{array} \right)$&$H_{2}=\left( \begin{array}{cccc} 0 & c_{5}-1 & c_{6} & c_{7} \\ 
-c_{5}+1 & 0 & c_{7} & c_{8} \\ -c_{6} & -c_{7} & 0 & c_{5} \\ -c_{7} & -c_{8} & -c_{5} & 0 \\ \end{array} \right)$\\ &&&$H_{3}=\left( \begin{array}{cccc} 0 & c_{9} & c_{10} & c_{11} \\ -c_{9} & 0 & c_{11} & c_{12} \\ -c_{10} & -c_{11} & 0 & c_{9} \\ -c_{11} & -c_{12} & -c_{9} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & c_{13} & c_{14} & c_{15} \\ -c_{13} & 0 & 1+c_{15} & c_{16} \\ -c_{14} & -1-c_{15} & 0 & c_{13} \\ -c_{15} & -c_{16} & -c_{13} & 0 \\ \end{array} \right)$\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|c|c|c|} \multicolumn{4}{c}{TABLE 2 : biHermitian structures on four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Complex Structures & $g$ & antisymmetric tensor \\ \hline \hline $VIII \oplus R$&&& $H_{1}=\left( \begin{array}{cccc} 0 & d_{1} & d_{2} & -d_{3}+\alpha \\ -d_{1} & 0 & d_{3} & d_{2} \\ -d_{2} & -d_{3} & 0 & d_{4} \\ d_{3}-\alpha & -d_{2} & -d_{4} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ \end{array} \right)$ &$g=\left( \begin{array}{cccc} -\alpha & 0 & 0 & 0 \\ 0 & -\alpha & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \alpha \\ \end{array} \right)$&$H_{2}=\left( \begin{array}{cccc} 0 & d_{5} & d_{6} & -d_{7} \\ -d_{5} & 0 & d_{7} & d_{6}+\alpha \\ -d_{6} & -d_{7} & 0 & d_{8} \\ d_{7} & -d_{6}-\alpha & -d_{8} & 0 \\ \end{array} \right)$\\ &&$\alpha \in R-\{0\}$&$H_{3}=\left( \begin{array}{cccc} 0 & d_{9} & d_{10} & -d_{11} \\ -d_{9} & 0 & d_{11} & d_{10} \\ -d_{10} & -d_{11} & 0 & d_{12} \\ d_{11} & -d_{10} & -d_{12} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & d_{13} & d_{14} & -d_{15} \\ -d_{13} & 0 & d_{15} & d_{14} \\ -d_{14} & -d_{15} & 0 & d_{16} \\ d_{15} & -d_{14} & -d_{16} & 0 \\ \end{array} \right)$\\ \hline $IX \oplus R$&&& $H_{1}=\left( \begin{array}{cccc} 0 & 0 & f_{1} & -f_{2}-\beta \\ 0 & 0 & f_{2} & f_{1} \\ -f_{1} & -f_{2} & 0 & f_{3} \\ f_{2}+\beta 
& -f_{1} & -f_{3} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ \end{array} \right)$ &$g=\left( \begin{array}{cccc} \beta & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \beta \\ \end{array} \right)$&$H_{2}=\left( \begin{array}{cccc} 0 & f_{4} & f_{5} & -f_{6} \\ -f_{4} & 0 & f_{6} & f_{5}-\beta \\ -f_{5} & -f_{6} & 0 & f_{7} \\ f_{6} & -f_{5}+\beta & -f_{7} & 0 \\ \end{array} \right)$\\ &&$\beta \in R-\{0\}$&$H_{3}=\left( \begin{array}{cccc} 0 & f_{8} & f_{9} & -f_{10} \\ -f_{8} & 0 & f_{10} & f_{9} \\ -f_{9} & -f_{10} & 0 & f_{11} \\ f_{10} & -f_{9} & -f_{11} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & f_{12} & f_{13} & -f_{14} \\ -f_{12} & 0 & f_{14} & f_{13} \\ -f_{13} & -f_{14} & 0 & f_{15} \\ f_{14} & -f_{13} & -f_{15} & 0 \\ \end{array} \right)$\\ \hline \end{tabular} \end{center} \vspace{1cm} \section{\bf Conclusion} We have given a new method for the calculation of complex and biHermitian structures on low dimensional Lie algebras. By use of this method we have obtained the complex and biHermitian structures on real four dimensional Lie algebras. In this manner one can also obtain these structures on Lie groups by use of vielbeins. Some biHermitian structures on real four dimensional Lie algebras are equivalent to the Manin triple structures that have been obtained previously \cite{Sn}. One can apply these methods to obtain complex and biHermitian structures on real six dimensional Lie algebras \cite{RS}. \vspace{1cm} \bigskip {\bf Acknowledgments} \vspace{3mm} We would like to thank F. Darabi and Z. Haghpanah for carefully reading the manuscript and for useful comments.
\vspace{5mm} \newpage \hspace{-6.5mm}{\bf{{Appendix\hspace{2mm}A}}} \vspace{3mm}\\ {\bf Real four dimensional Lie algebras and their automorphisms groups \cite{P},\cite{PaP} } \vspace{2mm} \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{TABLE A: Classifications of four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Non Vanishing & Automorphisms group \\&Structure Constants&\\ \hline \hline $4A_{1}$ & & $$\\ \hline \vspace{-4mm} $III\oplus R\cong\left(A_{2}\oplus 2A_{1}\right)$ & $f^{2}_{12}=-1,f^{3}_{12}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{5} & a_{4} & -a_{6} \\ 0 & a_{4} & a_{5} & a_{6} \\ 0 & a_{7} & -a_{7} & a_{8} \\ \end{array} \right)$ \\ & $,f^{2}_{31}=1,f^{3}_{31}=1$\\ \hline $2A_{2}$ & $f^{2}_{12}=1,f^{4}_{34}=1$ & $\left( \begin{array}{cccc} 1 & a_{1} & 0 & 0 \\ 0 & a_{2} & 0 & 0 \\ 0 & 0 & 1 & a_{3} \\ 0 & 0 & 0 & a_{4} \\ \end{array} \right)$\\ \hline $II\oplus R\cong\left(A_{3,1}\oplus A_{1}\right)$ & $f^{1}_{23}=1$ & $\left( \begin{array}{cccc} a_{2}a_{7}-a_{3}a_{6} & 0 & 0 & 0 \\ a_{1} & a_{2} & a_{3} & a_{4} \\ a_{5} & a_{6} & a_{7} & a_{8} \\ a_{9} & 0 & 0 & a_{10} \\ \end{array} \right)$ \\ \hline $IV\oplus R\cong\left(A_{3,2}\oplus A_{1}\right)$ & $f^{2}_{12}=-1,f^{3}_{12}=1,f^{3}_{13}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{4} & a_{5} & 0 \\ 0 & 0 & a_{4} & 0 \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$ \\ \hline $V\oplus R\cong\left(A_{3,3}\oplus A_{1}\right)$ & $f^{2}_{12}=-1,f^{3}_{13}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{4} & a_{5} & 0 \\ 0 & a_{6} & a_{7} & 0 \\ 0 & 0 & 0 & a_{8} \\ \end{array} \right)$ \\ \hline $VI_{0}\oplus R\cong\left(A_{3,4}\oplus A_{1}\right)$ & $f^{2}_{13}=1,f^{1}_{23}=1$ & $\left( \begin{array}{cccc} a_{2} & a_{1} & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ a_{3} & a_{4} & 1 & a_{5} \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$\\ \hline \vspace{-4mm} $VI_{a}\oplus R\cong\left(A^{a}_{3,5}\oplus 
A_{1}\right)$ & $f^{2}_{12}=-a,f^{3}_{12}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{5} & a_{4} & 0 \\ 0 & a_{4} & a_{5} & 0 \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$\\ & $,f^{2}_{31}=1,f^{3}_{31}=a$\\ \hline $VII_{0}\oplus R\cong\left(A_{3,6}\oplus A_{1}\right)$ & $f^{1}_{23}=1,f^{2}_{13}=-1$ & $\left( \begin{array}{cccc} a_{2} & -a_{1} & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ a_{3} & a_{4} & 1 & a_{5} \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$ \\ \hline \vspace{-4mm} $VII_{a}\oplus R\cong\left(A^{a}_{3,7}\oplus A_{1}\right)$ & $f^{2}_{31}=1,f^{3}_{31}=a$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{5} & -a_{4} & 0 \\ 0 & a_{4} & a_{5} & 0 \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$ \\ & $,f^{2}_{12}=-a,f^{3}_{12}=1$\\ \hline \end{tabular} \end{center} \newpage \vspace{10mm} \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{TABLE A: Classifications of four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Non Vanishing & Automorphisms group \\&Structure Constants&\\ \hline \hline $VIII\oplus R\cong\left(A_{3,8}\oplus A_{1}\right)$ & $f^{2}_{31}=1,f^{3}_{12}=-1,f^{1}_{23}=1$ & $\Lambda_{1}$ \\ \hline $IX\oplus R\cong\left(A_{3,9}\oplus A_{1}\right)$ & $f^{2}_{31}=1,f^{3}_{12}=1,f^{1}_{23}=1$ & $\Lambda_{2}$\\ \hline $A_{4,1}$ & $f^{1}_{24}=1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a^{2}_{7}a_{3} & 0 & 0 & 0 \\ a_{2}a_{7} & a_{3}a_{7} & 0 & 0 \\ a_{1} & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & a_{7} \\ \end{array} \right)$ \\ \hline \vspace{-4mm} $A^{a}_{4,2}$ & $f^{1}_{14}=a,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{3} & 0 & 0 \\ 0 & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$ \\ & $,f^{2}_{34}=1,f^{3}_{34}=1$\\ \hline \vspace{-4mm} $A^{1}_{4,2}$& $f^{1}_{14}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1} & a_{2} & 0 & 0 \\ 0 & a_{5} & 0 & 0 \\ a_{3} & a_{4} & a_{5} & 0 \\ a_{6} & a_{7} & a_{8} & 1 \\ \end{array} \right)$ \\ & 
$,f^{2}_{34}=1,f^{3}_{34}=1$\\ \hline $A_{4,3}$ & $f^{1}_{14}=1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{2} & 0 & 0 \\ 0 & a_{3} & a_{2} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$ \\ \hline \vspace{-4mm} $A_{4,4}$ & $f^{1}_{14}=1,f^{1}_{24}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{3} & 0 & 0 & 0 \\ a_{2} & a_{3} & 0 & 0 \\ a_{1} & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$\\ & $,f^{2}_{34}=1,f^{3}_{34}=1$\\ \hline \vspace{-4mm} $A^{a,b}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=a$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{2} & 0 & 0 \\ 0 & 0 & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$\\ & $,f^{3}_{34}=b$\\ \hline \vspace{-4mm} $A^{a,a}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=a$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{2} & a_{3} & 0 \\ 0 & a_{4} & a_{5} & 0 \\ a_{6} & a_{7} & a_{8} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=a$\\ \hline \vspace{-4mm} $A^{a,1}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=a$& $\left( \begin{array}{cccc} a_{1} & 0 & a_{2} & 0 \\ 0 & a_{3} & 0 & 0 \\ a_{4} & 0 & a_{5} & 0 \\ a_{6} & a_{7} & a_{8} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=1$\\ \hline \vspace{-4mm} $A^{1,1}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1} & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 0 \\ a_{7} & a_{8} & a_{9} & 0 \\ a_{10} & a_{11} & a_{12} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=1$\\ \hline \end{tabular} \end{center} \newpage \vspace{10mm} \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{TABLE A: Classifications of four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Non Vanishing & Automorphisms group \\&Structure Constants&\\ \hline \hline \vspace{-4mm} $A^{a,b}_{4,6}$ & $f^{1}_{14}=a,f^{2}_{24}=b,f^{3}_{24}=-1$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{3} & -a_{2} & 0 \\ 0 & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$ \\ & 
$,f^{2}_{34}=1,f^{3}_{34}=b$\\ \hline \vspace{-4mm} $A_{4,7}$ & $f^{1}_{14}=2,f^{2}_{24}=1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a^2_{2} & 0 & 0 & 0 \\ -a_{2}a_{5} & a_{2} & 0 & 0 \\ -a_{2}a_{5}+a_{2}a_{4}-a_{1}a_{5} & a_{1} & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$\\ & $,f^{3}_{34}=1,f^{1}_{23}=1$\\ \hline \vspace{-5mm} $A_{4,8}$& $f^{2}_{24}=1,f^{3}_{34}=-1$ & $\left( \begin{array}{cccc} a_{1}a_{2} & 0 & 0 & 0 \\ a_{1}a_{5} & a_{1} & 0 & 0 \\ a_{2}a_{4} & 0 & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$ \\ & $,f^{1}_{23}=1$\\ \hline \vspace{-5mm} $A^{b}_{4,9}$ & $f^{1}_{14}={1+b},f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1}a_{2} & 0 & 0 & 0 \\ -a_{1}a_{5}/b & a_{1} & 0 & 0 \\ a_{2}a_{4} & 0 & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=b,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A^{1}_{4,9}$ & $f^{1}_{14}=2,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1}a_{4}-a_{2}a_{3} & 0 & 0 & 0 \\ a_{2}a_{6}-a_{1}a_{7} & a_{1} & a_{2} & 0 \\ a_{4}a_{6}-a_{3}a_{7} & a_{3} & a_{4} & 0 \\ a_{5} & a_{6} & a_{7} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=1,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A^{0}_{4,9}$ &$f^{1}_{14}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{2}a_{3} & 0 & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ a_{3}a_{5} & 0 & a_{3} & 0 \\ a_{4} & a_{5} & 0 & 1 \\ \end{array} \right)$\\ & $,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A_{4,10}$ & $f^{3}_{24}=-1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a^2_{1}+a^2_{2} & 0 & 0 & 0 \\ -a_{1}a_{4}-a_{2}a_{5} & a_{1} & a_{2} & 0 \\ a_{2}a_{4}-a_{1}a_{5} & -a_{2} & a_{1} & 0 \\ a_{3}& a_{4} & a_{5} & 1 \\ \end{array} \right)$\\ & $,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A^{a}_{4,11}$ & $f^{1}_{14}=2a,f^{2}_{24}=a,f^{3}_{24}=-1$ & $\left( \begin{array}{cccc} a^{2}_{1}+ a^{2}_{2} & 0 & 0 & 0 \\ -\frac{a(a_{1}a_{4})+a(a_{2}a_{5})+a_{2}a_{4}-a_{1}a_{5}}{a^{2}+{1}} & a_{2} & -a_{1} & 0 \\ 
\frac{a(a_{2}a_{4})-a(a_{1}a_{5})-a_{1}a_{4}-a_{2}a_{5}}{a^{2}+{1}} & a_{1} & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$ \\ & $,f^{2}_{34}=1,f^{3}_{34}=a,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A_{4,12}$& $f^{2}_{14}=-1,f^{1}_{13}=1$ & $\left( \begin{array}{cccc} a_{2} & -a_{1} & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ -a_{4} & a_{3} & 1 & 0 \\ a_{3} & a_{4} & 0 & 1 \\ \end{array} \right)$ \\ & $,f^{1}_{24}=1,f^{2}_{23}=1$\\ \hline \end{tabular} \end{center} \newpage \begin{eqnarray} \Lambda_{1}=\textrm{Rotation}_{xy}\textrm{Boost}_{xz}\textrm{Boost}_{yz}C \end{eqnarray} where: \begin{eqnarray} \textrm{Rotation}_{xy}&=&\left( \begin{array}{cccc} \cos(a_{1}) & \sin(a_{1}) & 0 & 0 \\ -\sin(a_{1}) & \cos(a_{1}) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Boost}_{xz}&=&\left( \begin{array}{cccc} \cosh(a_{2}) & 0 & \sinh(a_{2}) & 0\\ 0 & 1 & 0 & 0 \\ \sinh(a_{2}) & 0 & \cosh(a_{2}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Boost}_{yz}&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & \cosh(a_{3}) & \sinh(a_{3}) & 0 \\ 0 & \sinh(a_{3}) & \cosh(a_{3}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ C&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a_{4} \end{array} \right)\\ \end{eqnarray} \begin{eqnarray} \Lambda_{2}=\textrm{Rotation}_{xy}\textrm{Rotation}_{xz}\textrm{Rotation}_{yz}C \end{eqnarray} where: \begin{eqnarray} \textrm{Rotation}_{xy}&=&\left( \begin{array}{cccc} \cos(a_{1}) & \sin(a_{1}) & 0 & 0 \\ -\sin(a_{1}) & \cos(a_{1}) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Rotation}_{xz}&=&\left( \begin{array}{cccc} \cos(a_{2}) & 0 & -\sin(a_{2}) & 0\\ 0 & 1 & 0 & 0 \\ \sin(a_{2}) & 0 & \cos(a_{2}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Rotation}_{yz}&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & \cos(a_{3}) & \sin(a_{3}) & 0 \\ 0 & -\sin(a_{3}) & \cos(a_{3}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ C&=&\left( \begin{array}{cccc} 1 & 0 
& 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a_{4} \end{array} \right) \end{eqnarray} \newpage \section{\bf Introduction} The calculation of complex structures on homogeneous complex manifolds, and especially on Lie groups, is important from both the mathematical and the physical point of view. Mathematically, the classification of these manifolds is based on the determination of the possible complex structures. Physically, these structures play an important role in N=(2,2) supersymmetric sigma models \cite{G}. It has been shown that N=(2,2) extended supersymmetry in a sigma model implies the existence of a biHermitian structure on the target manifold, such that the complex structures are covariantly constant with respect to torsionful affine connections (see for example \cite{SL} and references therein). Furthermore, it has been shown that the algebraic structures related to these biHermitian structures for N=(2,2) supersymmetric WZW models are Manin triples \cite{Sp},\cite{L}. For these reasons, the calculation of complex and biHermitian structures on manifolds, and especially on Lie groups, is important. Samelson \cite{Sa} showed that compact Lie groups always admit an invariant complex structure. As for the non-compact case, Morimoto \cite{Mo} proved that invariant complex structures always exist on any even dimensional reductive Lie group. In \cite{S} and \cite{O}, complex structures on real four dimensional Lie algebras are classified. The method used in these works is special to that setting and does not seem adequate for calculations in higher dimensions. In the present paper we give a new method for this purpose, applicable to low dimensional Lie groups. In this method, using a non-coordinate basis, we first transform the Nijenhuis tensor on a Lie group into algebraic tensor relations on its Lie algebra. Then, using the adjoint representation, we rewrite these relations in matrix form. Finally, we solve these matrix relations using Maple.
In this research we carry out this program for real four dimensional Lie algebras. The results for some algebras differ from, and complete, those of \cite{O}. Furthermore, the calculation of biHermitian structures for four dimensional Lie algebras is new.\\ The paper is organized as follows. In section 2, using a non-coordinate basis, we transform the Nijenhuis tensor relation on a Lie group into an algebraic tensor relation on its Lie algebra. Then, using the adjoint representation, we write these relations in matrix form. The relations can also be obtained from the definition of complex structures on Lie algebras as endomorphisms of them. In section 3, using Maple, we solve these matrix relations to obtain complex structures on real four dimensional Lie algebras. In this process, we apply the automorphism groups of real four dimensional Lie algebras in order to obtain non-equivalent complex structures (table 1). We then compare our results with \cite{S} and \cite{O}. Note that here we use the Patera {\it et al} \cite{P} classification of real four dimensional Lie algebras. The list of Lie algebras and their automorphism groups \cite{PaP} is given in the appendix. In section 4, we first transform the tensorial form of the biHermitian relations on Lie groups into algebraic tensorial relations on their Lie algebras. In this respect, we define the biHermitian structure on a Lie algebra independently and give an equivalence relation for obtaining non-equivalent biHermitian structures. Then, using the adjoint representation, we rewrite these relations in matrix form and solve them with Maple. Therefore, the present paper is a continuation of the discussion of biHermitian structures on real four dimensional Lie algebras. Some discussions are given in the conclusion section.
\section{\bf A brief review of complex structures on Lie groups} {\bf Definition 1:} {\it Let M be a differentiable manifold; the pair (M,J) is called an almost complex manifold if there exists a tensor field J of type (1,1) such that at each point p of M, $J_{p}^2=-1$; the tensor field J is also called an almost complex structure. Furthermore, if the Lie bracket of any vector fields of type $(1,0)$, $X,Y\in T_{p}{M^{+}}$, is again of the same type, then the almost complex structure J is said to be integrable, where $T_{p}{M^{+}} = \{Z \in T_{p}{M^{C}} \mid J_{p}Z=+iZ \}$.}\\ \hspace{-.7cm} {\bf Theorem (Newlander and Nirenberg \cite{NN}):} {\it An almost complex structure J on a manifold M is integrable if and only if \begin{equation} N(X,Y)=0 \hspace{1cm}\forall X,Y \in \chi(M)\hspace{1mm}, \end{equation} where ${\chi(M)}$ is the set of vector fields on M and the Nijenhuis tensor $N: \chi(M) \otimes \chi(M) \rightarrow \chi(M)$ is given by \begin{equation} N(X,Y)=[X,Y]+J[JX,Y]+J[X,JY]-[JX,JY]\hspace{1mm}. \end{equation}} In the coordinate basis, i.e. the basis $\{{e_{\mu}=\frac{\partial}{\partial{x}^{\mu}}}\}$ and $\{{dx^{\mu}}\}$ for vectors and dual vectors (forms) respectively on M, the almost complex structure and the Nijenhuis tensor are expressed as $J=J_{\mu}\hspace{0cm}^{\nu} e_{\nu}\otimes dx^{\mu}$ and \\ $N=N_{\mu\nu}\hspace{0cm}^{\lambda} dx^{\mu}\otimes dx^{\nu} \otimes e_{\lambda}$ respectively, and the integrability condition $(2)$ can be rewritten as follows: \begin{equation} N_{\kappa\nu}\hspace{0cm}^{\mu} =J_{\lambda}\hspace{0cm}^{\mu}(\partial_{\kappa}J_{\nu}\hspace{0cm}^{\lambda})+J_{\nu}\hspace{0cm}^{\lambda}(\partial_{\lambda}J_{\kappa}\hspace{0cm}^{\mu}) -J_{\lambda}\hspace{0cm}^{\mu}(\partial_{\nu}J_{\kappa}\hspace{0cm}^{\lambda})-J_{\kappa}\hspace{0cm}^{\lambda}(\partial_{\lambda}J_{\nu}\hspace{0cm}^{\mu})\hspace{1mm}=0.
\end{equation} Meanwhile, the relation $J^2=-1$ can be rewritten as \begin{equation} J_{\lambda}\hspace{0cm}^{\mu}J_{\mu}\hspace{0cm}^{\nu}=-\delta_{\lambda}\hspace{0cm}^{\nu}. \end{equation} Furthermore, one can rewrite the above equations using the non-coordinate bases $\{{\hat{e}_{\alpha}}\}$ and $\{{\hat{\theta}^{\alpha}}\}$ for vectors and forms on M. For these bases we have \begin{equation} \hat{e}_{\alpha}=e_{\alpha}\hspace{0cm}^{\mu}e_{\mu}\hspace{1cm}, \hspace{1cm}e_{\alpha}\hspace{0cm}^{\mu}\in GL(m,R)\hspace{1mm}, \end{equation} where for the vierbeins $e_{\alpha}\hspace{0cm}^{\mu}$ and their inverses $e^{\alpha}\hspace{0cm}_{\mu}$ we have \begin{equation} e^{\alpha}\hspace{0cm}_{\mu}e_{\beta}\hspace{0cm}^{\mu}=\delta^{\alpha}\hspace{0cm}_{\beta}\hspace{2mm},\hspace{2mm} e^{\alpha}\hspace{0cm}_{\mu}e_{\alpha}\hspace{0cm}^{\nu}=\delta_{\mu}\hspace{0cm}^{\nu}\hspace{1mm}. \end{equation} The dual basis $\{{\hat{\theta^{\alpha}}}\}$ is defined by $<\hat{\theta^{\alpha}},\hat{e}_{\beta}>=\delta^{\alpha}\hspace{0cm}_{\beta}$, and we have $\hat{\theta^{\alpha}}=e^{\alpha}\hspace{0cm}_{\mu}dx^{\mu}$\hspace{1mm}. Furthermore, the vierbeins satisfy the following relation: \begin{equation} {f_{\alpha\beta}}^{\gamma}=e^{\gamma}\hspace{0cm}_{\nu}(e_{\alpha}\hspace{0cm}^{\mu} \partial_{\mu}e_{\beta}\hspace{0cm}^{\nu}-e_{\beta}\hspace{0cm}^{\mu}\partial_{\mu} e_{\alpha}\hspace{0cm}^{\nu})\hspace{1mm}; \end{equation} if M is the manifold of a Lie group G, then the $f_{\alpha\beta}\hspace{0cm}^{\gamma}$'s are the structure constants of the Lie algebra {\bf g} of G. Now in these bases the tensor $J=J_{\mu}\hspace{0cm}^{\nu} e_{\nu}\otimes dx^{\mu}$ can be rewritten as \begin{equation} J_{\mu}\hspace{0cm}^{\nu} =e^{\alpha}\hspace{0cm}_{\mu}J_{\alpha}\hspace{0cm}^{\beta}e_{\beta}\hspace{0cm}^{\nu}\hspace{1mm}, \end{equation} where $J_{\alpha}\hspace{0cm}^{\beta}$ is an endomorphism of {\bf g}, i.e. $J:{\bf g}\longrightarrow {\bf g}$\hspace{1mm}.
Now, by applying this relation to $(4)$ we have the following matrix relation for the matrices $J_{\alpha}\hspace{0cm}^{\beta}$: \begin{equation} J^{2}=-I\hspace{1mm}. \end{equation} Furthermore, by applying relations $(7)$ and $(8)$ to the tensor equation $(3)$, using $(6)$ and assuming that $J_{\alpha}\hspace{0cm}^{\beta}$ and $g_{\alpha\beta}$ are independent of the coordinates of G, after some calculations we obtain the following algebraic relation for $(3)$: \begin{equation} f_{\beta \alpha}\hspace{0cm}^{\gamma}+J_{\sigma}\hspace{0cm}^{\gamma}\hspace{1mm}J_{\alpha}\hspace{0cm}^{\delta}\hspace{1mm}f_{\beta \delta}\hspace{0cm}^{\sigma}-J_{\beta}\hspace{0cm}^{\sigma}\hspace{1mm}J_{\alpha}\hspace{0cm}^{\delta}\hspace{1mm}f_{\sigma \delta}\hspace{1mm}^{\gamma} +J_{\beta}\hspace{0cm}^{\sigma}\hspace{1mm}J_{\delta}\hspace{0cm}^{\gamma}\hspace{1mm}f_{\sigma \alpha}\hspace{0cm}^{\delta}=0. \end{equation} Finally, using the adjoint representations \begin{equation} f_{\beta\alpha}\hspace{0cm}^{\gamma}=-({\cal{Y}}^{\gamma})_{\beta\alpha}\hspace{4mm},\hspace{4mm}f_{\beta\alpha}\hspace{0cm}^{\gamma}=-(\chi_{\beta})_{\alpha}\hspace{0cm}^{\gamma}, \end{equation} the relation $(10)$ takes the following matrix form: \begin{equation} {\cal{Y}}^{\alpha}+J\hspace{1mm}{\cal{Y}}^{\beta}\hspace{1mm}J_{\beta}\hspace{0cm}^{\alpha}+J_{\beta}\hspace{0cm}^{\alpha}\hspace{1mm}{\cal{Y}}^{\beta}\hspace{1mm}J^{t}-J\hspace{1mm}{\cal{Y}}^{\alpha}\hspace{1mm}J^{t}=0 \hspace{1mm}, \end{equation} or \begin{equation} {\chi}_{\alpha}+J\hspace{1mm}{\chi}_{\alpha}\hspace{1mm}J+J_{\alpha}\hspace{0cm}^{\beta}\hspace{1mm}{\chi}_{\beta}\hspace{1mm}J-J_{\alpha}\hspace{0cm}^{\beta}\hspace{1mm}J\hspace{1mm}{\chi}_{\beta}=0 \hspace{1mm}.
\end{equation} Note that the above equation can also be obtained from the definition of a complex structure on a Lie algebra {\bf g} as follows.\\ \hspace{-8mm} {\bf Definition 2:} {\it An integrable complex structure on a real Lie algebra\hspace{1mm}{\bf g} is an endomorphism J of {\bf g} such that\\ \begin{equation} \hspace{-13.8cm}a) \hspace{1mm}J^{2}=-Id ,\\ \end{equation} \begin{equation} \hspace{-5.5cm}b) \hspace{1mm}[X,Y]+J[JX,Y]+J[X,JY]-[JX,JY]=0\hspace{1mm}, \hspace{1cm}\forall X,Y\in {\bf g}\hspace{1mm}. \end{equation}} Now if we use $\{X_{\alpha}\}$ as a basis for the Lie algebra {\bf g} with the following structure constants: \begin{equation} [X_{\alpha},X_{\beta}]=f_{\alpha\beta}\hspace{0cm}^{\gamma} X_{\gamma}\hspace{1mm}, \end{equation} and use the following relation for J: \begin{equation} JX_{\alpha}=J_{\alpha}\hspace{0cm}^{\beta}X_{\beta}\hspace{1mm}, \end{equation} then relations $(14)$ and $(15)$ can be rewritten as $(9)$, $(12)$ or $(13)$. Now, in order to obtain the algebraic complex structures $J_{\alpha}\hspace{0cm}^{\beta}$, it is enough to solve equations $(9)$ and $(12)$ or $(13)$ simultaneously. We do this for real four dimensional Lie algebras in the next section. \vspace{6mm} \section{\bf Calculation of complex structures on four dimensional real Lie algebras} In this section we use the Patera {\it et al} classification \cite{P} of four dimensional real Lie algebras. The commutation relations and the automorphism groups of these Lie algebras are given in the appendix{ \footnote{ Note that for the decomposable Lie algebras $(L_{3}\oplus R)$ we use the Bianchi classification of real three dimensional Lie algebras $L_{3}$.}}. Now one can write the adjoint representations $({\cal Y})$ for these Lie algebras and then solve the matrix relations $(9)$ and $(12)$ to obtain the complex structures. We do this with Maple.
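As an independent sanity check of the results produced this way (the paper's computations were done with Maple; the sketch below is our Python/NumPy stand-in, with the $2A_{2}$ algebra of Table A and the complex structure later listed in table 1 as input):

```python
import numpy as np

# Structure constants f[a][b][c] = f_{ab}^c for 2A_2 (Table A):
# [e1,e2] = e2, [e3,e4] = e4  (basis indices shifted to 0..3)
f = np.zeros((4, 4, 4))
f[0, 1, 1], f[1, 0, 1] = 1.0, -1.0
f[2, 3, 3], f[3, 2, 3] = 1.0, -1.0

def bracket(X, Y):
    """[X,Y]^c = f_{ab}^c X^a Y^b."""
    return np.einsum('abc,a,b->c', f, X, Y)

# Candidate complex structure from table 1: J e1 = -e2, J e3 = -e4,
# hence J e2 = e1, J e4 = e3; columns are the images of the basis vectors.
J = np.array([[0., 1., 0., 0.],
              [-1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., -1., 0.]])

# relation (14): J^2 = -Id
assert np.allclose(J @ J, -np.eye(4))

# relation (15): the Nijenhuis condition on all pairs of basis vectors
E = np.eye(4)
for a in range(4):
    for b in range(4):
        X, Y = E[a], E[b]
        N = (bracket(X, Y) + J @ bracket(J @ X, Y)
             + J @ bracket(X, J @ Y) - bracket(J @ X, J @ Y))
        assert np.allclose(N, 0)
```

By bilinearity of the bracket and of N, checking (15) on basis pairs suffices.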
Note that in this process one can obtain equivalent complex structures; to avoid these and in order to obtain inequivalent complex structures we use the following equivalence relation: \bigskip {\it{\bf Definition 3\cite{O}} : Two complex structures $J_{1}$ and $J_{2}$ of a Lie algebra {\bf g} are equivalent if there exists an element A of the automorphism group of the Lie algebra {\bf g} (Aut({\bf g})) such that:} \begin{equation} J_{2}=A\hspace{1mm} J_{1} \hspace{1mm} A^{-1}\hspace{1mm}. \end{equation} Note that this relation is indeed an equivalence relation.\\ In this way we obtain all non-equivalent complex structures on four dimensional real Lie algebras. The results are classified in table 1 { \footnote{Note that in this table the bases are shown with $\{e_{\alpha}\}$ instead of $\{X_{\alpha}\}$}}. As the table shows, 21 out of 30 real four dimensional Lie algebras admit complex structures. To compare these results with those of \cite{O}, we must first obtain the isomorphism relations between the four dimensional real Lie algebras as presented in \cite{P} and as presented in \cite{O}.
According to the calculations in \cite{O} and \cite{A} we have isomorphism relations as summarized in the following table: \begin{center} \small{ \begin{tabular}{|c|c|c|c|c|c|c| c|c|c|c|c|c|}\hline $\begin{array}{c} \vspace{-.3cm} \\ 4A_{1} \\ \vspace{-.3cm} \end{array}$ &$A_{4,1}$& ${A^a_{4,2}}$ & $A_{4,3}$ & $A _{4,4}$ & ${A^{a, b}_{4,5}}$ & ${A^{a, b}_{4,6}}$ & $A _{4,7}$ & $A _{4,8}$ &${A^b_{4,9}}$ & $A_{4,10}$ & ${A^a_{4,11}}$ & $A_{4,12}$ \\ \hline $\begin{array}{c} \vspace{-.3cm} \\ \mathfrak a_{4} \\ \vspace{-.3cm} \end{array}$ &$\mathfrak n_4$&$\mathfrak r_{4,a}$ & $\mathfrak r_{4,0}$ & $\mathfrak r_{4}$ & $\mathfrak r_{4,a,b}$ & $\mathfrak r_{4,a,b}'$ & $\mathfrak h_{4}$ & $\mathfrak d _4$& $\mathfrak d_{4,1/1+b}$ & $\mathfrak d_{4,0}'$ & $\mathfrak d_{4,a}'$ & $\mathfrak a\mathfrak f\mathfrak f(\mathbb{C})$\\ \hline \end{tabular} } \end{center} \begin{center} \small{ \begin{tabular}{|c|c|c|c|c|c|c| c|c|c|c|}\hline $\begin{array}{c} \vspace{-.3cm} \\A_{2} \oplus A_{2} \\ \vspace{-.3cm} \end{array}$ & $II\oplus R $ & $III\oplus R$ & $IV\oplus R$ & $V\oplus R$ & $VI_{0}\oplus R$ & $VI_{a}\oplus R$ & $VII_{0}\oplus R$ &$VII_{a}\oplus R$ \\ &&&&&&$(a\neq 0,1)$&&$(a\neq 0)$ \\ \hline $\begin{array}{c} \vspace{-.3cm} \\ \mathfrak r_2\mathfrak r_2 \\ \vspace{-.3cm} \end{array}$ & $\mathfrak r\mathfrak h_{3}$ & $\mathfrak r\mathfrak r_{3,0}$ & $\mathfrak r\mathfrak r_{3}$ & $\mathfrak r\mathfrak r_{3,1}$ & $\mathfrak r\mathfrak r_{3,-1}$ & $\mathfrak r\mathfrak r_{3,a}$ & $\mathfrak r\mathfrak r'_{3,0}$ & $\mathfrak r\mathfrak r'_{3,a}$ \\ \hline \end{tabular} } \end{center} In this respect, one can see that in \cite{O} one complex structure is obtained for the Lie algebra $VII_{0}\oplus R$, but according to our calculation this Lie algebra has two non-equivalent complex structures. For the non-solvable Lie algebras $VIII\oplus R$ and $IX\oplus R$ we also obtain complex structures.
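The equivalence test of Definition 3 is easy to automate. The following sketch (Python/NumPy, our illustrative stand-in for the Maple computation; the particular automorphism parameters are our choice) conjugates the $2A_{2}$ (i.e. $A_{2}\oplus A_{2}$) structure of table 1 by an explicit bracket-preserving map $A$ and checks that $A J A^{-1}$, as in relation (18), is again an integrable complex structure:

```python
import numpy as np

# 2A_2 again: [e1,e2] = e2, [e3,e4] = e4
f = np.zeros((4, 4, 4))
f[0, 1, 1], f[1, 0, 1] = 1.0, -1.0
f[2, 3, 3], f[3, 2, 3] = 1.0, -1.0

def bracket(X, Y):
    return np.einsum('abc,a,b->c', f, X, Y)

def is_complex_structure(J):
    """Relations (14) and (15) checked on basis pairs."""
    if not np.allclose(J @ J, -np.eye(4)):
        return False
    E = np.eye(4)
    return all(np.allclose(bracket(E[a], E[b]) + J @ bracket(J @ E[a], E[b])
                           + J @ bracket(E[a], J @ E[b])
                           - bracket(J @ E[a], J @ E[b]), 0)
               for a in range(4) for b in range(4))

J = np.array([[0., 1., 0., 0.], [-1., 0., 0., 0.],
              [0., 0., 0., 1.], [0., 0., -1., 0.]])

# One member of the automorphism family of 2A_2 from Table A, written in
# the column convention used here (illustrative parameter values):
A = np.array([[1., 0., 0., 0.], [2., 3., 0., 0.],
              [0., 0., 1., 0.], [0., 0., -1., 2.]])
E = np.eye(4)
for a in range(4):
    for b in range(4):  # A is a Lie algebra automorphism:
        assert np.allclose(bracket(A @ E[a], A @ E[b]), A @ bracket(E[a], E[b]))

# relation (18): the conjugate is again an integrable complex structure
J2 = A @ J @ np.linalg.inv(A)
assert is_complex_structure(J) and is_complex_structure(J2)
```

This shows only that conjugation preserves the defining conditions; deciding *non*-equivalence, as in the comparisons with \cite{O}, requires ruling out every automorphism in the family.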
In \cite{O}, two complex structures are obtained for the Lie algebra $A_{4,8}$, but here we obtain four complex structures for this Lie algebra. Furthermore, two complex structures are obtained for the Lie algebra $A_{4,10}$ in \cite{O}, but according to table 1 we obtain four complex structures for this Lie algebra. Meanwhile, in \cite{O}, two complex structures are obtained for the Lie algebra $A_{4,12}$, but here we obtain three complex structures for this Lie algebra. The results for the other Lie algebras are the same as in \cite{O}. \newpage \vspace{10mm} \begin{center} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\small{TABLE 1 : Complex structures on four dimensional real Lie algebras }}\\ \hline \hline Lie Algebra & Complex Structures \\ \hline \hline $III\oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{1}-e_{4} $ \\ \hline $A_{2}\oplus A_{2}$ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=-e_{4} $ \\ \hline $II \oplus R$ & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $ V \oplus R $ & $Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $VII_{0}\oplus R $ & $J_{1}e_{1}=-e_{2} \hspace{2mm},\hspace{2mm} J_{1}e_{3}=-e_{4} $\\ & $J_{2}e_{1}=e_{2} \hspace{2mm},\hspace{2mm} J_{2}e_{3}=-e_{4} $ \\ \hline $VII_{a}\oplus R$ & $J_{1}e_{1}=-e_{4} \hspace{2mm},\hspace{2mm} J_{1}e_{2}=e_{3} $\\ \small$(a\neq0)$ & $J_{2}e_{1}=-e_{4} \hspace{2mm},\hspace{2mm} J_{2}e_{2}=-e_{3} $ \\ \hline $VIII\oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=pe_{3}-{(1+p^2)}e_{4} \hspace{10mm} (p\in\mathbb{R}) $\\ \hline $IX \oplus R $ & $Je_{1}=-e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=pe_{3}-{(1+p^2)}e_{4} \hspace{10mm} (p\in\mathbb{R}) $\\ \hline $A^1_{4,2}$ & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=e_{4} $ \\ \hline $A^{a,a}_{4,5}$ \small($-1\leq a < 1, a\neq 0)$ &$Je_{1}=-e_{4} \hspace{2mm},\hspace{2mm} Je_{2}=e_{3} $ \\ \hline $A^{a,1}_{4,5}$ \small$(-1\leq a < 1, a\neq 0)$ & $Je_{1}=e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=e_{4} $ \\
\hline $A^{1,1}_{4,5}$ & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $ \\ \hline $A^{a,b}_{4,6}$ & $J_{1}e_{1}=e_{4} \hspace{2mm},\hspace{2mm} J_{1}e_{2}=e_{3} $\\ \small$(a\neq0, b\geq0)$ & $J_{2}e_{1}=e_{4} \hspace{2mm},\hspace{2mm} J_{2}e_{2}=-e_{3} $ \\ \hline $A_{4,7}$ & $Je_{1}=e_{2} \hspace{2mm},\hspace{2mm} Je_{3}=-e_{4} $ \\ \hline $A_{4,8}$ & $J_{1}e_{1}=e_{2} \hspace{2mm},\hspace{2mm} J_{1}e_{3}=e_{4} $\\ & $J_{2}e_{1}=-e_{3} \hspace{2mm},\hspace{2mm} J_{2}e_{2}=-e_{4} $ \\ & $J_{3}e_{1}=e_{2} \hspace{2mm},\hspace{2mm} J_{3}e_{3}=e_{2}+e_{4} $ \\ & $J_{4}e_{1}=-(e_{1}+2e_{3}) \hspace{2mm},\hspace{2mm} J_{4}e_{2}=-(e_{1}+e_{2})-2(e_{3}+e_{4}) $ \\ \hline $A^{b}_{4,9}$ & $J_{1}e_{1}=-be_{2} \hspace{2mm},\hspace{2mm} J_{1}e_{3}=e_{4} $\\ \small$(0<\mid b \mid<1)$ & $J_{2}e_{1}=-e_{3} \hspace{2mm},\hspace{2mm} J_{2}e_{2}=-e_{4} $ \\ \hline $A^{1}_{4,9}$ & $J_{1}e_{1}=-e_{3} \hspace{2mm},\hspace{2mm} J_{1}e_{2}=-e_{4} $\\ & $J_{2}e_{1}=e_{4} \hspace{2mm},\hspace{2mm} J_{2}e_{2}=-e_{3}$ \\ & $J_{3}e_{1}=-e_{4} \hspace{2mm},\hspace{2mm} J_{3}e_{2}=-e_{3} $ \\ \hline $A^{0}_{4,9}$ & $Je_{1}=-e_{3} \hspace{2mm},\hspace{2mm} Je_{2}=-e_{4} $ \\ \hline $A_{4,10}$ & $J_{1}e_{2}=-e_{1}+e_{3}\hspace{2mm},\hspace{2mm} J_{1}e_{3}=-e_{1}-e_{2}+e_{3}+e_{4}$\\ & $ J_{2}e_{2}=e_{1}+e_{3}\hspace{2mm},\hspace{2mm} J_{2}e_{3}=-e_{1}-e_{2}-e_{3}+e_{4}$\\ & $J_{3}e_{2}=-e_{1}-e_{3}\hspace{2mm},\hspace{2mm} J_{3}e_{3}=e_{1}+e_{2}+e_{3}-e_{4}$\\ & $J_{4}e_{2}=-e_{1}-e_{3}\hspace{2mm},\hspace{2mm} J_{4}e_{3}=-e_{1}+e_{2}-e_{3}+e_{4}$\\ \hline $A^{a}_{4,11}$ & $J_{1}e_{1}=e_{4} \hspace{2mm},\hspace{2mm} J_{1}e_{2}=-e_{3} $\\ \small$(a>0)$ & $J_{2}e_{1}=e_{4} \hspace{2mm},\hspace{2mm} J_{2}e_{2}=e_{3}$ \\ & $J_{3}e_{1}=-e_{4} \hspace{2mm},\hspace{2mm} J_{3}e_{2}=-e_{3} $ \\ & $J_{4}e_{1}=-e_{4} \hspace{2mm},\hspace{2mm} J_{4}e_{2}=e_{3} $ \\ \hline $A_{4,12}$ & $J_{1}e_{1}=e_{2} \hspace{2mm},\hspace{2mm} J_{1}e_{3}=e_{4} $\\ & $J_{2}e_{1}=-e_{2} \hspace{2mm},\hspace{2mm} 
J_{2}e_{3}=e_{4}$ \\ & $J_{3}e_{1}=e_{4} \hspace{2mm},\hspace{2mm} J_{3}e_{2}=e_{3} $ \\ \hline \end{tabular} \end{center} \newpage \section{\bf BiHermitian structures on four dimensional real Lie algebras} {\bf Definition 4 \cite{G},\cite{SL}:}{\it Let the complex manifold M have two complex structures $J_{\pm}$ such that it is Hermitian with respect to both of them, i.e. \begin{equation} J_{\pm}^{2}=-1\hspace{1mm}, \end{equation} \begin{equation} {N_{\mu\nu}}^{\kappa}(J_{\pm})=0\hspace{1mm}, \end{equation} \begin{equation} J_{\pm\mu}\hspace{0cm}^{\lambda}\hspace{1mm} g_{\lambda\eta}\hspace{1mm}J_{\pm\nu}\hspace{0cm}^{\eta}=g_{\mu\nu}, \end{equation} and furthermore let these complex structures be covariantly constant with respect to certain connections $\Gamma^{\pm}$ \begin{equation} \bigtriangledown^{\pm}_{\mu}\hspace{2mm}J_{\pm\nu}\hspace{0cm}^{\lambda}\equiv{{J_{\pm\nu}\hspace{0cm}^{\lambda}}_{,\mu}} +{\Gamma^{\pm}_{\mu\rho}}^{\lambda}J_{\pm\nu}\hspace{0cm}^{\rho}-{\Gamma^{\pm}_{\mu\nu}}^{\rho}J_{\pm\rho}\hspace{0cm}^{\lambda}=0, \end{equation} with \begin{equation} {\Gamma^{\pm}_{\mu\nu}}^{\lambda}={\Gamma_{\mu\nu}}^{\lambda} \pm {T_{\mu\nu}}^{\lambda}\hspace{2mm},\hspace{2mm} {T_{\mu\nu}}^{\lambda}=H_{\mu\nu\eta}g^{\eta\lambda}; \end{equation} then $M$ is said to have a biHermitian structure, denoted by $(M,g,J_{\pm})$.} \vspace{.5cm}\\ In the above definition $g_{\mu\nu}$, ${\Gamma_{\mu\nu}}^{\lambda}$ and $H_{\mu\nu\eta}$ are the metric, the Christoffel connection and an antisymmetric tensor on M, respectively. Using $(22)$, the integrability condition $(20)$ may be rewritten in the following form \cite{SL}: \begin{equation} H_{\delta\nu\lambda}= J_{\pm\delta}\hspace{0cm}^{\sigma} J_{\pm\nu}\hspace{0cm}^{\rho}H_{\sigma\rho\lambda}+J_{\pm\delta}\hspace{0cm}^{\rho} J_{\pm\lambda}\hspace{0cm}^{\sigma}H_{\sigma\rho\nu}+J_{\pm\nu}\hspace{0cm}^{\sigma} J_{\pm\lambda}\hspace{0cm}^{\rho}H_{\sigma\rho\delta}\hspace{1mm}.
\end{equation} Furthermore, by introducing the K\"{a}hler forms \begin{equation} \omega_{\pm\mu\nu}\equiv g_{\mu\lambda}\hspace{1mm} J_{\pm\nu}\hspace{0cm}^{\lambda}, \end{equation} and by use of $(22)$ one can find \begin{equation} (d\omega_{\pm})_{\rho\mu\nu}=\pm(H_{\sigma\rho\mu}J^{\sigma}_{\pm\nu}+H_{\sigma\mu\nu}J^{\sigma}_{\pm\rho}+H_{\sigma\nu\rho}J^{\sigma}_{\pm\mu}), \end{equation} where \begin{equation} (d\omega_{\pm})_{\lambda\sigma\gamma}=\frac{1}{2}(\partial_{\lambda}\omega_{\pm\sigma\gamma}+\partial_{\sigma}\omega_{\pm\gamma\lambda}+\partial_{\gamma}\omega_{\pm\lambda\sigma}). \end{equation} Finally, using $(25)$ and $(26)$ one can find \begin{equation} H_{\mu\nu\rho}=-J^{\lambda}_{+\mu}J^{\sigma}_{+\nu}J^{\gamma}_{+\rho}(d\omega_{+})_{\lambda\sigma\gamma}=-J^{\lambda}_{-\mu}J^{\sigma}_{-\nu}J^{\gamma}_{-\rho}(d\omega_{-})_{\lambda\sigma\gamma}. \end{equation} In this respect, the target manifold $(M,g,J_{\pm})$ is said to have a biHermitian structure if the two Hermitian complex structures $J_{\pm}$ satisfy the relation $(28)$ (i.e. a relation between $(J_{+},\omega_{+})$ and $(J_{-},\omega_{-})$) which defines the torsion H.
Now, for the case where M is a Lie group G, similarly to the process presented in section 2 one can transform relations $(19)-(22)$ and $(24)$ into algebraic relations using relations $(6)$, $(7)$ and the following ones: \begin{equation} g_{\alpha\beta}=L_{\alpha}\hspace{0cm}^{\mu}L_{\beta}\hspace{0cm}^{\nu}g_{\mu\nu}=R_{\alpha}\hspace{0cm}^{\mu}R_{\beta}\hspace{0cm}^{\nu}g_{\mu\nu}\hspace{2mm},\hspace{2mm} g_{\mu\nu}=L^{\alpha}\hspace{0cm}_{\mu}L^{\beta}\hspace{0cm}_{\nu}g_{\alpha\beta}=R^{\alpha}\hspace{0cm}_{\mu}R^{\beta}\hspace{0cm}_{\nu}g_{\alpha\beta}, \end{equation} \begin{equation} H_{\mu\nu\rho}=\frac{1}{2} L^{\alpha}\hspace{0cm}_{\mu}L^{\beta}\hspace{0cm}_{\nu}L^{\gamma}\hspace{0cm}_{\rho}H_{\alpha\beta\gamma}=\frac{1}{2} R^{\alpha}\hspace{0cm}_{\mu}R^{\beta}\hspace{0cm}_{\nu}R^{\gamma}\hspace{0cm}_{\rho}H_{\alpha\beta\gamma}, \end{equation} \begin{equation} J_{+\mu}\hspace{0cm}^{\nu}=R^{\alpha}\hspace{0cm}_{\mu}{J_{\alpha}}\hspace{0cm}^{\beta}R_{\beta}\hspace{0cm}^{\nu}\hspace{1mm},\hspace{1mm}J_{-\mu}\hspace{0cm}^{\nu}=L^{\alpha}\hspace{0cm}_{\mu}{J_{\alpha}}\hspace{0cm}^{\beta}L_{\beta}\hspace{0cm}^{\nu}, \end{equation} where $L^{\alpha}\hspace{0cm}_{\mu}( R^{\alpha}\hspace{0cm}_{\mu})$ and $L_{\beta}\hspace{0cm}^{\nu}(R_{\beta}\hspace{0cm}^{\nu})$ are the left (right) invariant vierbeins and their inverses, respectively. Using these relations, $(21)$ and $(24)$ transform to the following matrix relations: \begin{equation} J\hspace{1mm}g\hspace{1mm}J^{t}=g, \end{equation} \begin{equation} H_{\alpha}= J (H_{\beta} {J_{\alpha}}\hspace{0cm}^{\beta}) + JH_{\alpha}J^{t}+(H_{\beta} {J_{\alpha}}\hspace{0cm}^{\beta}) J^{t}, \end{equation} \vspace{1cm}\\ where $(H_{\alpha})_{\beta\gamma}={H_{\alpha}}\hspace{0cm}_{\beta\gamma}$.
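The Hermiticity relation $(32)$ can be checked directly in examples. The sketch below (Python/NumPy, our illustrative stand-in for the Maple computation) takes $VIII\oplus R$ with the complex structure of table 1 at $p=0$ and the candidate ad invariant metric $g=\mathrm{diag}(1,1,-1,-1)$; this metric is our own illustrative choice, not taken from table 2:

```python
import numpy as np

# VIII + R (Table A): [e2,e3] = e1, [e3,e1] = e2, [e1,e2] = -e3, e4 central
f = np.zeros((4, 4, 4))
def setf(a, b, c, v):
    f[a-1, b-1, c-1], f[b-1, a-1, c-1] = v, -v
setf(2, 3, 1, 1.0); setf(3, 1, 2, 1.0); setf(1, 2, 3, -1.0)

# adjoint matrices chi_a, with chi_a e_b = [e_a, e_b]
chi = [np.array([[f[a, b, c] for b in range(4)] for c in range(4)])
       for a in range(4)]

# Complex structure of table 1 for VIII+R at p = 0:  J e1 = -e2, J e3 = -e4
J = np.array([[0., 1., 0., 0.], [-1., 0., 0., 0.],
              [0., 0., 0., 1.], [0., 0., -1., 0.]])
g = np.diag([1., 1., -1., -1.])  # proportional to Killing form on the
                                 # semisimple block, free sign on e4

assert np.allclose(J @ J, -np.eye(4))      # relation (14)
assert np.allclose(J @ g @ J.T, g)         # relation (32), Hermiticity
for X in chi:                              # ad invariance of g
    assert np.allclose(X.T @ g + g @ X, 0)
```

The last loop is the basis-free form of the antisymmetry condition on $\chi_{\alpha}g$ stated later in relation $(39)$.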
Furthermore, using the following relations \cite{LI}{\footnote{Note that in these relations all algebraic (target) indices are lowered and raised by $g_{\alpha\beta}(g_{\mu\nu})$; furthermore, these indices transform into each other by $L_{\alpha}\hspace{0cm}^{\mu}(R_{\alpha}\hspace{0cm}^{\mu})$ or $L^{\alpha}\hspace{0cm}_{\mu}(R^{\alpha}\hspace{0cm}_{\mu})$. The symmetrization notation has the following form: $f_{\alpha}\hspace{0cm}^{(\rho\mu)}=f_{\alpha}\hspace{0cm}^{\rho\mu}+f_{\alpha}\hspace{0cm}^{\mu\rho}$.}} \begin{equation} \bigtriangledown^{\rho}{L_{\alpha}}\hspace{0cm}^{\mu}=-\frac{1}{2}(f_{\alpha}\hspace{0cm}^{(\rho\mu)}+{f^{\rho\mu}}\hspace{0cm}_{\alpha}+T_{\alpha}\hspace{0cm}^{(\rho\mu)}+{T^{\rho\mu}}\hspace{0cm}_{\alpha} +{L_{\beta}}\hspace{0cm}^{\rho}{L_{\gamma}}\hspace{0cm}^{\mu}{\bigtriangledown}_{\alpha}g^{\beta\gamma}+L^{\beta\rho}\bigtriangledown^{\mu}g_{\alpha\beta}-L^{\beta\mu}\bigtriangledown^{\rho}g_{\alpha\beta}), \end{equation} \begin{equation} \bigtriangledown^{\rho}{R_{\alpha}}\hspace{0cm}^{\mu}=-\frac{1}{2}(-f_{\alpha}\hspace{0cm}^{(\rho\mu)}-{f^{\rho\mu}}\hspace{0cm}_{\alpha}+T_{\alpha}\hspace{0cm}^{(\rho\mu)}+{T^{\rho\mu}}\hspace{0cm}_{\alpha} +{R_{\beta}}\hspace{0cm}^{\rho}{R_{\gamma}}\hspace{0cm}^{\mu}{\bigtriangledown}_{\alpha}g^{\beta\gamma}+R^{\beta\rho}\bigtriangledown^{\mu}g_{\alpha\beta}-R^{\beta\mu}\bigtriangledown^{\rho}g_{\alpha\beta}), \end{equation} and assuming that $g_{\alpha\beta}$ is coordinate independent, the relation $(22)$ transforms to the following algebraic relation{\footnote{This relation can also be obtained from the algebraic form of $(28)$.}}: \begin{equation} J(H_{\alpha}-\chi_{\alpha}g) =(J(H_{\alpha}-\chi_{\alpha}g))^{t}.
\end{equation} Note that the metric $g_{\alpha\beta}$ is an ad invariant metric on the Lie algebra ${\bf g}$, i.e.\hspace{.5mm}we have \begin{equation} \langle{X_{\alpha},X_{\beta}}\rangle=g_{\alpha\beta}, \end{equation} \begin{equation} \langle{X_{\alpha},[X_{\beta},X_{\gamma}]}\rangle=\langle{[X_{\alpha},X_{\beta}],X_{\gamma}}\rangle, \end{equation} or in matrix notation we have {\footnote{For semisimple Lie algebras, the Killing form is one of the nondegenerate solutions of this equation, and other nondegenerate solutions may exist. For nonsemisimple Lie algebras the Killing tensor degenerates and there may exist nondegenerate solutions of $(39)$.} } \begin{equation} \chi_{\alpha}g=-(\chi_{\alpha}g)^{t}. \end{equation} Now, one can obtain biHermitian structures on Lie algebras by solving relations $(9),(12),(32),(33),(36)$ and $(39)$ simultaneously. These relations can be applied to the Lie algebra as a definition of an algebraic biHermitian structure on {\bf g}:\\ {\bf Definition 5:} {\it If there exists an endomorphism $J:\bf g \rightarrow \bf g$ of a Lie algebra with ad invariant metric g and an antisymmetric bilinear map $H:\bf g\otimes \bf g \rightarrow \bf g$ such that the relations $(9),(12),(32),(33),(36)$ and $(39)$ are satisfied, then we have a biHermitian structure $(J,g,H)$ on {\bf g}.}\\ Note that relation $(33)$ is equivalent to the matrix form of the integrability condition, i.e. relation $(12)$. For this reason it is better first to obtain the algebraic complex structures $J$, then solve relations $(32),(36)$ and $(39)$, and finally check them in $(33)$. We do this for the real four dimensional Lie algebras using Maple. Note that, similarly to the case of complex structures, in order to obtain non-equivalent biHermitian structures we suggest the following equivalence relations.
\\ {\it{\bf Definition 6} : Two biHermitian structures $(J,g,H)$ and $(J^{'},g^{'},H^{'})$ of a Lie algebra {\bf g} are equivalent if there exists an element A of the automorphism group of the Lie algebra {\bf g} (Auto {\bf g}) such that: \begin{equation} J^{'}=A J A^{-1}, \end{equation} \begin{equation} g^{'}=A g A^{t}, \end{equation} \begin{equation} H^{'}_{\alpha}=A(H_{\beta} A_{\alpha}\hspace{0cm}^{\beta})A^{t}. \end{equation}} These transformations define an equivalence relation on biHermitian structures. Note that if $f_{\beta\gamma}\hspace{0cm}^{\alpha}=H_{\delta\beta\gamma}g^{\delta\alpha}$, or $H$ is isomorphic with $f$, i.e. if there exists an isomorphism matrix $C$ such that \begin{equation} C Y^{\alpha} C^{t}= \tilde{Y}^{\beta } C_{\beta}\hspace{0cm}^{\alpha}, \end{equation} where $(Y^{\alpha})_{\beta\gamma}=-f_{\beta\gamma}\hspace{0cm}^{\alpha}$ and $({\tilde{Y}}^{\alpha})_{\beta\gamma}=-H_{\delta\beta\gamma}g^{\delta\alpha}$; then $(J,g,H)$ shows the Manin triple structure on ${\bf g}$ \cite{L}. In this way biHermitian structures on real four dimensional Lie algebras can be classified as in Table 2. Note that according to the table, for the Lie algebra $A_{4,8}$ we have two non-equivalent biHermitian structures $(J,g,H)$, where the second biHermitian structure shows the Manin triple structure of $A_{4,8}$ \cite{L} (i.e. $A_{4,8}$ is a Manin triple of two dimensional Lie bialgebras (type B and semiabelian) \cite{Sn}) for the following values of the parameters: $$ c_1=c_2=c_3=c_4=c_5=c_6=c_7=c_9=c_{10}=c_{11}=c_{13}=c_{14}=0\hspace{1mm},\hspace{1mm} c_{12}=c_{15}=-1. $$ For the Lie algebra $VIII \oplus R$ there is one biHermitian structure which, for the values $$ d_1=d_2=d_4=d_5=d_7=d_8=d_{10}=d_{11}=d_{12}=d_{14}=d_{15}=d_{16}=0\hspace{1mm},\hspace{1mm} d_9\neq 0\hspace{1mm},\hspace{2mm}d_3=-d_6=\alpha, $$ is isomorphic to the two dimensional Lie bialgebra of type A \cite{Sn}. There exists one biHermitian structure for the Lie algebra $IX \oplus R$.
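As a concrete sanity check, the algebraic compatibility relation $(32)$, $J\hspace{1mm}g\hspace{1mm}J^{t}=g$, together with $J^{2}=-1$, can be verified numerically for the first $(J,g)$ pair listed for $A_{4,8}$ in Table 2. The following minimal sketch (assuming numpy; this is only an illustrative check, not part of the Maple computation used for the classification) does exactly that:

```python
import numpy as np

# First complex structure J and ad invariant metric g of A_{4,8} from Table 2
J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]])
g = np.array([[0, 0, 0, 1],
              [0, 0, -1, 0],
              [0, -1, 0, 0],
              [1, 0, 0, 0]])

# Almost complex condition J^2 = -1
assert np.array_equal(J @ J, -np.eye(4))
# Hermitian compatibility, relation (32): J g J^t = g
assert np.array_equal(J @ g @ J.T, g)
```

The same two checks can be applied to any candidate $(J,g)$ pair before attempting the remaining relations $(33),(36)$ and $(39)$.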
The results are given in table 2 {\footnote{Note that results of table 1 are solutions of the equations $(9)$ and $(12)$. But the results of table 2 are solutions of $(9)$,$(12)$ and $(32)$,$(33)$,$(36)$,$(39)$ so the results of table 2 can also be solutions of these equations which must be consistent with H and $g$ .}}. Note that the isomorphism relation $(43)$(i.e. the biHermitian structures which show Manin triple) are independent of the choice of special biHermitian structures from equivalent class of biHermitian structures. In this way if relation $(43)$ holds ; then by $\tilde{Y}^{'\alpha}=-H^{'}_{\delta} g^{'\delta\alpha}$ and using relations $(42)$ and $(43)$ one can show that \begin{equation} (AC) Y^{\gamma} (AC)^{t}=\tilde{Y}^{'\alpha} (AC)_{\alpha}\hspace{0cm}^{\gamma}. \end{equation} \begin{center} \begin{tabular}{|c|c|c|c|} \multicolumn{4}{c}{TABLE 2 : biHermitian structures on four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Complex Structures & $g$ & antisymmetric tensor \\ \hline \hline $A_{4,8} $&&& $H_{1}=\left( \begin{array}{cccc} 0 & b_{1} & 1 & -b_{2} \\ -b_{1} & 0 & b_{2} & 1 \\ -1 & -b_{2} & 0 & b_{3} \\ b_{2} & -1 & -b_{3} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ \end{array} \right)$ &$g=\left( \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array} \right)$& $H_{2}=\left( \begin{array}{cccc} 0 & b_{4} & b_{5} & -b_{6} \\ -b_{4} & 0 & b_{6} & b_{5} \\ -b_{5} & -b_{6} & 0 & b_{7} \\ b_{6} & -b_{5} & -b_{7} & 0 \\ \end{array} \right)$\\ &&&$H_{3}=\left( \begin{array}{cccc} 0 & b_{8} & b_{9} & -b_{10} \\ -b_{8} & 0 & b_{10} & 1+b_{9} \\ -b_{9} & -b_{10} & 0 & b_{11} \\ b_{10} & -1-b_{9} & -b_{11} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & b_{12} & b_{13} & -1-b_{14} \\ -b_{12} & 0 & b_{14} & b_{13} \\ -b_{13} & -b_{14} & 0 & b_{15} \\ 1+b_{14} & -b_{13} & -b_{15} & 0 
\\ \end{array} \right)$\\ \cline {2-4} &&& $H_{1}=\left( \begin{array}{cccc} 0 & c_{1} & c_{2} & c_{3} \\ -c_{1} & 0 & c_{3} & c_{4} \\ -c_{2} & -c_{3} & 0 & c_{1} \\ -c_{3} & -c_{4} & -c_{1} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right)$&$g=\left( \begin{array}{cccc} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ \end{array} \right)$&$H_{2}=\left( \begin{array}{cccc} 0 & c_{5}-1 & c_{6} & c_{7} \\ -c_{5}+1 & 0 & c_{7} & c_{8} \\ -c_{6} & -c_{7} & 0 & c_{5} \\ -c_{7} & -c_{8} & -c_{5} & 0 \\ \end{array} \right)$\\ &&&$H_{3}=\left( \begin{array}{cccc} 0 & c_{9} & c_{10} & c_{11} \\ -c_{9} & 0 & c_{11} & c_{12} \\ -c_{10} & -c_{11} & 0 & c_{9} \\ -c_{11} & -c_{12} & -c_{9} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & c_{13} & c_{14} & c_{15} \\ -c_{13} & 0 & 1+c_{15} & c_{16} \\ -c_{14} & -1-c_{15} & 0 & c_{13} \\ -c_{15} & -c_{16} & -c_{13} & 0 \\ \end{array} \right)$\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|c|c|c|} \multicolumn{4}{c}{TABLE 2 : biHermitian structures on four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Complex Structures & $g$ & antisymmetric tensor \\ \hline \hline $VIII \oplus R$&&& $H_{1}=\left( \begin{array}{cccc} 0 & d_{1} & d_{2} & -d_{3}+\alpha \\ -d_{1} & 0 & d_{3} & d_{2} \\ -d_{2} & -d_{3} & 0 & d_{4} \\ d_{3}-\alpha & -d_{2} & -d_{4} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ \end{array} \right)$ &$g=\left( \begin{array}{cccc} -\alpha & 0 & 0 & 0 \\ 0 & -\alpha & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \alpha \\ \end{array} \right)$&$H_{2}=\left( \begin{array}{cccc} 0 & d_{5} & d_{6} & -d_{7} \\ -d_{5} & 0 & d_{7} & d_{6}+\alpha \\ -d_{6} & -d_{7} & 0 & d_{8} \\ d_{7} & -d_{6}-\alpha & -d_{8} & 0 \\ \end{array} \right)$\\ &&$\alpha \in 
R-\{0\}$&$H_{3}=\left( \begin{array}{cccc} 0 & d_{9} & d_{10} & -d_{11} \\ -d_{9} & 0 & d_{11} & d_{10} \\ -d_{10} & -d_{11} & 0 & d_{12} \\ d_{11} & -d_{10} & -d_{12} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & d_{13} & d_{14} & -d_{15} \\ -d_{13} & 0 & d_{15} & d_{14} \\ -d_{14} & -d_{15} & 0 & d_{16} \\ d_{15} & -d_{14} & -d_{16} & 0 \\ \end{array} \right)$\\ \hline $IX \oplus R$&&& $H_{1}=\left( \begin{array}{cccc} 0 & 0 & f_{1} & -f_{2}-\beta \\ 0 & 0 & f_{2} & f_{1} \\ -f_{1} & -f_{2} & 0 & f_{3} \\ f_{2}+\beta & -f_{1} & -f_{3} & 0 \\ \end{array} \right)$\\&$J=\left( \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ \end{array} \right)$ &$g=\left( \begin{array}{cccc} \beta & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \beta \\ \end{array} \right)$&$H_{2}=\left( \begin{array}{cccc} 0 & f_{4} & f_{5} & -f_{6} \\ -f_{4} & 0 & f_{6} & f_{5}-\beta \\ -f_{5} & -f_{6} & 0 & f_{7} \\ f_{6} & -f_{5}+\beta & -f_{7} & 0 \\ \end{array} \right)$\\ &&$\beta \in R-\{0\}$&$H_{3}=\left( \begin{array}{cccc} 0 & f_{8} & f_{9} & -f_{10} \\ -f_{8} & 0 & f_{10} & f_{9} \\ -f_{9} & -f_{10} & 0 & f_{11} \\ f_{10} & -f_{9} & -f_{11} & 0 \\ \end{array} \right)$\\&&&$H_{4}=\left( \begin{array}{cccc} 0 & f_{12} & f_{13} & -f_{14} \\ -f_{12} & 0 & f_{14} & f_{13} \\ -f_{13} & -f_{14} & 0 & f_{15} \\ f_{14} & -f_{13} & -f_{15} & 0 \\ \end{array} \right)$\\ \hline \end{tabular} \end{center} Note that $b_{i}, c_{i}, d_{i}, f_{i}$ are all real parameters. \vspace{1cm} \section{\bf Conclusion} We offered a new method for calculation of complex and biHermitian structures on low dimensional Lie algebras. By this method, we obtain complex and biHermitian structures on real four dimensional Lie algebras. In this manner, one can obtain these structures on Lie groups using vierbeins . 
Some biHermitian structures on real four dimensional Lie algebras are equivalent to Manin triple structure obtained in \cite{Sn}. One can use these methods for obtaining complex and biHermitian structures on real six dimensional Lie algebras \cite{RS}. We also apply this method for calculation of generalized complex structures on four dimensional real Lie algebras \cite{SRS}. \vspace{3mm}\\ {\bf Acknowledgments} \vspace{3mm} We would like to thank F. Darabi, Sh. Mogadassi and Z. Haghpanah for carefully reading the manuscript and useful comments. \vspace{5mm} \newpage \hspace{-6.5mm}{\bf{{Appendix\hspace{2mm}}}} \vspace{3mm}\\ {\bf Real four dimensional Lie algebras and their automorphisms groups \cite{P},\cite{PaP} } \vspace{2mm} \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{TABLE A: Classifications of four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Non Vanishing & Automorphisms group \\&Structure Constants&\\ \hline \hline $4A_{1}$ & & $$\\ \hline \vspace{-4mm} $III\oplus R\cong\left(A_{2}\oplus 2A_{1}\right)$ & $f^{2}_{12}=-1,f^{3}_{12}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{5} & a_{4} & -a_{6} \\ 0 & a_{4} & a_{5} & a_{6} \\ 0 & a_{7} & -a_{7} & a_{8} \\ \end{array} \right)$ \\ & $,f^{2}_{31}=1,f^{3}_{31}=1$\\ \hline $2A_{2}$ & $f^{2}_{12}=1,f^{4}_{34}=1$ & $\left( \begin{array}{cccc} 1 & a_{1} & 0 & 0 \\ 0 & a_{2} & 0 & 0 \\ 0 & 0 & 1 & a_{3} \\ 0 & 0 & 0 & a_{4} \\ \end{array} \right)$\\ \hline $II\oplus R\cong\left(A_{3,1}\oplus A_{1}\right)$ & $f^{1}_{23}=1$ & $\left( \begin{array}{cccc} a_{2}a_{7}-a_{3}a_{6} & 0 & 0 & 0 \\ a_{1} & a_{2} & a_{3} & a_{4} \\ a_{5} & a_{6} & a_{7} & a_{8} \\ a_{9} & 0 & 0 & a_{10} \\ \end{array} \right)$ \\ \hline $IV\oplus R\cong\left(A_{3,2}\oplus A_{1}\right)$ & $f^{2}_{12}=-1,f^{3}_{12}=1,f^{3}_{13}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{4} & a_{5} & 0 \\ 0 & 0 & a_{4} & 0 \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$ \\ \hline 
$V\oplus R\cong\left(A_{3,3}\oplus A_{1}\right)$ & $f^{2}_{12}=-1,f^{3}_{13}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{4} & a_{5} & 0 \\ 0 & a_{6} & a_{7} & 0 \\ 0 & 0 & 0 & a_{8} \\ \end{array} \right)$ \\ \hline $VI_{0}\oplus R\cong\left(A_{3,4}\oplus A_{1}\right)$ & $f^{2}_{13}=1,f^{1}_{23}=1$ & $\left( \begin{array}{cccc} a_{2} & a_{1} & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ a_{3} & a_{4} & 1 & a_{5} \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$\\ \hline \vspace{-4mm} $VI_{a}\oplus R\cong\left(A^{a}_{3,5}\oplus A_{1}\right)$ & $f^{2}_{12}=-a,f^{3}_{12}=-1$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{5} & a_{4} & 0 \\ 0 & a_{4} & a_{5} & 0 \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$\\ & $,f^{2}_{31}=1,f^{3}_{31}=a$\\ \hline $VII_{0}\oplus R\cong\left(A_{3,6}\oplus A_{1}\right)$ & $f^{1}_{23}=1,f^{2}_{13}=-1$ & $\left( \begin{array}{cccc} a_{2} & -a_{1} & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ a_{3} & a_{4} & 1 & a_{5} \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$ \\ \hline \vspace{-4mm} $VII_{a}\oplus R\cong\left(A^{a}_{3,7}\oplus A_{1}\right)$ & $f^{2}_{31}=1,f^{3}_{31}=a$ & $\left( \begin{array}{cccc} 1 & a_{1} & a_{2} & a_{3} \\ 0 & a_{5} & -a_{4} & 0 \\ 0 & a_{4} & a_{5} & 0 \\ 0 & 0 & 0 & a_{6} \\ \end{array} \right)$ \\ & $,f^{2}_{12}=-a,f^{3}_{12}=1$\\ \hline \end{tabular} \end{center} \newpage \vspace{10mm} \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{TABLE A: Classifications of four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Non Vanishing & Automorphisms group \\&Structure Constants&\\ \hline \hline $VIII\oplus R\cong\left(A_{3,8}\oplus A_{1}\right)$ & $f^{2}_{31}=1,f^{3}_{12}=-1,f^{1}_{23}=1$ & $\Lambda_{1}$ \\ \hline $IX\oplus R\cong\left(A_{3,9}\oplus A_{1}\right)$ & $f^{2}_{31}=1,f^{3}_{12}=1,f^{1}_{23}=1$ & $\Lambda_{2}$\\ \hline $A_{4,1}$ & $f^{1}_{24}=1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a^{2}_{7}a_{3} & 0 & 0 & 0 \\ a_{2}a_{7} & a_{3}a_{7} & 0 & 0 \\ a_{1} & a_{2} 
& a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & a_{7} \\ \end{array} \right)$ \\ \hline \vspace{-4mm} $A^{a}_{4,2}$ & $f^{1}_{14}=a,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{3} & 0 & 0 \\ 0 & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$ \\ & $,f^{2}_{34}=1,f^{3}_{34}=1$\\ \hline \vspace{-4mm} $A^{1}_{4,2}$& $f^{1}_{14}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1} & a_{2} & 0 & 0 \\ 0 & a_{5} & 0 & 0 \\ a_{3} & a_{4} & a_{5} & 0 \\ a_{6} & a_{7} & a_{8} & 1 \\ \end{array} \right)$ \\ & $,f^{2}_{34}=1,f^{3}_{34}=1$\\ \hline $A_{4,3}$ & $f^{1}_{14}=1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{2} & 0 & 0 \\ 0 & a_{3} & a_{2} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$ \\ \hline \vspace{-4mm} $A_{4,4}$ & $f^{1}_{14}=1,f^{1}_{24}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{3} & 0 & 0 & 0 \\ a_{2} & a_{3} & 0 & 0 \\ a_{1} & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$\\ & $,f^{2}_{34}=1,f^{3}_{34}=1$\\ \hline \vspace{-4mm} $A^{a,b}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=a$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{2} & 0 & 0 \\ 0 & 0 & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$\\ & $,f^{3}_{34}=b$\\ \hline \vspace{-4mm} $A^{a,a}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=a$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{2} & a_{3} & 0 \\ 0 & a_{4} & a_{5} & 0 \\ a_{6} & a_{7} & a_{8} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=a$\\ \hline \vspace{-4mm} $A^{a,1}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=a$& $\left( \begin{array}{cccc} a_{1} & 0 & a_{2} & 0 \\ 0 & a_{3} & 0 & 0 \\ a_{4} & 0 & a_{5} & 0 \\ a_{6} & a_{7} & a_{8} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=1$\\ \hline \vspace{-4mm} $A^{1,1}_{4,5}$ & $f^{1}_{14}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1} & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 0 \\ a_{7} & a_{8} & a_{9} & 0 \\ a_{10} & a_{11} & a_{12} & 1 \\ \end{array} \right)$ \\ & 
$,f^{3}_{34}=1$\\ \hline \end{tabular} \end{center} \newpage \vspace{10mm} \begin{center} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{TABLE A: Classifications of four dimensional real Lie algebras }\\ \hline \hline Lie Algebra & Non Vanishing & Automorphisms group \\&Structure Constants&\\ \hline \hline \vspace{-4mm} $A^{a,b}_{4,6}$ & $f^{1}_{14}=a,f^{2}_{24}=b,f^{3}_{24}=-1$ & $\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{3} & -a_{2} & 0 \\ 0 & a_{2} & a_{3} & 0 \\ a_{4} & a_{5} & a_{6} & 1 \\ \end{array} \right)$ \\ & $,f^{2}_{34}=1,f^{3}_{34}=b$\\ \hline \vspace{-4mm} $A_{4,7}$ & $f^{1}_{14}=2,f^{2}_{24}=1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a^2_{2} & 0 & 0 & 0 \\ -a_{2}a_{5} & a_{2} & 0 & 0 \\ -a_{2}a_{5}+a_{2}a_{4}-a_{1}a_{5} & a_{1} & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$\\ & $,f^{3}_{34}=1,f^{1}_{23}=1$\\ \hline \vspace{-5mm} $A_{4,8}$& $f^{2}_{24}=1,f^{3}_{34}=-1$ & $\left( \begin{array}{cccc} a_{1}a_{2} & 0 & 0 & 0 \\ a_{1}a_{5} & a_{1} & 0 & 0 \\ a_{2}a_{4} & 0 & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$ \\ & $,f^{1}_{23}=1$\\ \hline \vspace{-5mm} $A^{b}_{4,9}$ & $f^{1}_{14}={1+b},f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1}a_{2} & 0 & 0 & 0 \\ -a_{1}a_{5}/b & a_{1} & 0 & 0 \\ a_{2}a_{4} & 0 & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=b,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A^{1}_{4,9}$ & $f^{1}_{14}=2,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{1}a_{4}-a_{2}a_{3} & 0 & 0 & 0 \\ a_{2}a_{6}-a_{1}a_{7} & a_{1} & a_{2} & 0 \\ a_{4}a_{6}-a_{3}a_{7} & a_{3} & a_{4} & 0 \\ a_{5} & a_{6} & a_{7} & 1 \\ \end{array} \right)$ \\ & $,f^{3}_{34}=1,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A^{0}_{4,9}$ &$f^{1}_{14}=1,f^{2}_{24}=1$ & $\left( \begin{array}{cccc} a_{2}a_{3} & 0 & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ a_{3}a_{5} & 0 & a_{3} & 0 \\ a_{4} & a_{5} & 0 & 1 \\ \end{array} \right)$\\ & $,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A_{4,10}$ & 
$f^{3}_{24}=-1,f^{2}_{34}=1$ & $\left( \begin{array}{cccc} a^2_{1}+a^2_{2} & 0 & 0 & 0 \\ -a_{1}a_{4}-a_{2}a_{5} & a_{1} & a_{2} & 0 \\ a_{2}a_{4}-a_{1}a_{5} & -a_{2} & a_{1} & 0 \\ a_{3}& a_{4} & a_{5} & 1 \\ \end{array} \right)$\\ & $,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A^{a}_{4,11}$ & $f^{1}_{14}=2a,f^{2}_{24}=a,f^{3}_{24}=-1$ & $\left( \begin{array}{cccc} a^{2}_{1}+ a^{2}_{2} & 0 & 0 & 0 \\ -\frac{a(a_{1}a_{4})+a(a_{2}a_{5})+a_{2}a_{4}-a_{1}a_{5}}{a^{2}+{1}} & a_{2} & -a_{1} & 0 \\ \frac{a(a_{2}a_{4})-a(a_{1}a_{5})-a_{1}a_{4}-a_{2}a_{5}}{a^{2}+{1}} & a_{1} & a_{2} & 0 \\ a_{3} & a_{4} & a_{5} & 1 \\ \end{array} \right)$ \\ & $,f^{2}_{34}=1,f^{3}_{34}=a,f^{1}_{23}=1$\\ \hline \vspace{-4mm} $A_{4,12}$& $f^{2}_{14}=-1,f^{1}_{13}=1$ & $\left( \begin{array}{cccc} a_{2} & -a_{1} & 0 & 0 \\ a_{1} & a_{2} & 0 & 0 \\ -a_{4} & a_{3} & 1 & 0 \\ a_{3} & a_{4} & 0 & 1 \\ \end{array} \right)$ \\ & $,f^{1}_{24}=1,f^{2}_{23}=1$\\ \hline \end{tabular} \end{center} \newpage \begin{eqnarray} \Lambda_{1}=\textrm{Rotation}_{xy}\textrm{Boost}_{xz}\textrm{Boost}_{yz}C \end{eqnarray} where: \begin{eqnarray} \textrm{Rotation}_{xy}&=&\left( \begin{array}{cccc} \cos(a_{1}) & \sin(a_{1}) & 0 & 0 \\ -\sin(a_{1}) & \cos(a_{1}) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Boost}_{xz}&=&\left( \begin{array}{cccc} \cosh(a_{2}) & 0 & \sinh(a_{2}) & 0\\ 0 & 1 & 0 & 0 \\ \sinh(a_{2}) & 0 & \cosh(a_{2}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Boost}_{yz}&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & \cosh(a_{3}) & \sinh(a_{3}) & 0 \\ 0 & \sinh(a_{3}) & \cosh(a_{3}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ C&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a_{4} \end{array} \right)\\ \end{eqnarray} \begin{eqnarray} \Lambda_{2}=\textrm{Rotation}_{xy}\textrm{Rotation}_{xz}\textrm{Rotation}_{yz}C \end{eqnarray} where: \begin{eqnarray} \textrm{Rotation}_{xy}&=&\left( \begin{array}{cccc} \cos(a_{1}) & \sin(a_{1}) & 0 
& 0 \\ -\sin(a_{1}) & \cos(a_{1}) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Rotation}_{xz}&=&\left( \begin{array}{cccc} \cos(a_{2}) & 0 & -\sin(a_{2}) & 0\\ 0 & 1 & 0 & 0 \\ \sin(a_{2}) & 0 & \cos(a_{2}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ \textrm{Rotation}_{yz}&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & \cos(a_{3}) & \sin(a_{3}) & 0 \\ 0 & -\sin(a_{3}) & \cos(a_{3}) & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)\\ C&=&\left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & a_{4} \end{array} \right) \end{eqnarray} \newpage
\section{Introduction} Recent years have seen a proliferation of simulations of star and cluster formation involving a range of theoretical assumptions and physical ingredients \citep[e.g. ][]{bonnell-etal2003,schmeja+klessen2004,bonnell-etal2004,jappsen-etal2005,bate+bonnell2005,dale-etal2005,bonnell-etal2006b,dale+bonnell2008,bate2009b}. Whereas the choice of model ingredients is set by a mixture of theoretical prejudice and numerical feasibility, it has already proved useful to undertake detailed comparisons between the output of such simulations and observational data. For example, the over-production of brown dwarfs in the original simulations of \citet{bate-etal2002} pointed to shortcomings in the treatment of gas thermodynamics which appears to have been largely remedied in subsequent simulations incorporating radiative transfer \citep{bate2009b}. The simulations of choice for the analysis of the larger scale clustering properties of stars are however those of \citet{bonnell-etal2003} and \citet{bonnell-etal2008} which, at the expense of being able to resolve the formation of the smallest objects, are able to follow the formation of hundreds of stars and track the hierarchical assembly of stellar clusters. Qualitatively, these simulations demonstrated how clusters grow through a combination of merging, the formation of new stars through fragmentation and the accretion of gas onto existing stars during cluster merging. \citet{bonnell-etal2003,bonnell-etal2004} were thus able to use these simulations in order to take a first look at how the mass of the most massive star in a cluster changes as the cluster grows through successive merger events.
In this paper we return to these simulations and their successors in order to analyse the properties of the resulting clusters and to take a more detailed look at issues such as the relationship between maximum stellar mass and cluster growth (the $m_\mathrm{max}-n_\mathrm{tot}$ relation; \citealp{weidner+kroupa2004,weidner+kroupa2006,weidner-etal2009}; \citealp{maschberger+clarke2008}), the degree of mass segregation (primordial vs. dynamical, \citealp[cf.][]{bonnell+davies1998,mcmillan-etal2007,allison-etal2009b}) and other cluster diagnostics such as fractal dimension, ellipticity and slope of the upper IMF. We here have the luxury of simulations which produce large numbers of stars: in particular, the large scale simulation discussed here produces thousands of stars, with individual clusters that contain up to hundreds of members. It thus becomes possible to analyse the {\it statistical} properties of the resulting ensemble. Apart from the superior statistics offered by the large scale simulation, the main difference between our analysis and the preliminary description given in \citet{bonnell-etal2004} is that we here identify subclusters through use of a minimum spanning tree technique, in contrast to \citet{bonnell-etal2004} who instead employed the ad hoc device of identifying a cluster as being all the stars within 0.1 pc of a massive star. The obvious advantage of our present analysis is that the clusters in the simulations are identified in precisely the same way as observers would extract clusters from maps of star forming regions and thus allows a much more direct comparison with observations (indeed, parameters such as cluster morphology, mass segregation and the cluster membership number, $n$, can only be explored if one has a generalised algorithm for defining clusters). 
This exercise is particularly timely given the accumulating survey data on stellar distributions in star forming regions (see the two substantial volumes on star forming regions edited by \citealp{reipurth-handbook-1,reipurth-handbook-2} or the recent survey by \citealp{gutermuth-etal2009}); in particular, the use of X-ray observations (for example of the ONC, \citealp{getman-etal2005c,prisinzano-etal2008}, or NGC 6334, \citealp{feigelson-etal2009}, and further regions mentioned in \citealp{feigelson-etal2009}) allows one to distinguish young stars from foreground/background sources and will thus provide a good census of the clustering properties of stars at birth. The structure of the paper is as follows. In Section \ref{sec_calculations} we recapitulate the main features of the simulations to be analysed and in Section \ref{sec_clusteridentification} describe the algorithm used for cluster extraction. In the following we describe the results for the cluster assembly history (Sec. \ref{sec_clusterassembly}), for the structure and morphology of subclusters (Sec. \ref{sec_morphology}), for the locations of newly formed stars and for the initial mass segregation (Sec. \ref{sec_inistar}) and finally for the initial mass function (Sec. \ref{sec_IMF}). \section{Calculations}\label{sec_calculations} We analyse the data of two SPH simulations, the $10^3\ \Msun$\ simulation by \citet{bonnell-etal2003} and the $10^4\ \Msun$\ simulation by \citet{bonnell-etal2008}. The initial condition for the $10^3\ \Msun$\ simulation \citep{bonnell-etal2003} is a uniform-density sphere containing 1000 \ensuremath{\rmn{M}_\odot}\ of gas in a diameter of 1 pc at a temperature of 10 K, using $5\times 10^5$ SPH particles. Supersonic turbulent motions are modelled by including an initial divergence-free, random Gaussian velocity field with a power spectrum $P(k) \propto k^{-4}$.
The velocities are normalised such that the cloud is marginally unbound, and the thermal energy is initially 1\% of the kinetic energy. Protostars are replaced by sink particles \citep{bate-etal1995} if the densest gas particle and its $\approx 50$ neighbours are a self-gravitating system (exceeding the critical density of $1.5\times 10^{-15}\ \mathrm{g}\ \mathrm{cm}^{-3}$), sub-virial and occupy a region smaller than the sink radius of 200 \ensuremath{\rmn{AU}}. Accretion onto the sink particles occurs i) for gas particles moving within the sink radius (200 \ensuremath{\rmn{AU}}) which are gravitationally bound, or ii) for all gas particles moving within the accretion radius of 40 \ensuremath{\rmn{AU}}. The mass resolution for sink particles is $\approx 0.1\ \ensuremath{\rmn{M}_\odot}$. Gravitational forces between stars are smoothed at 160 \ensuremath{\rmn{AU}}. For the $10^4\ \Msun$\ calculation \citep{bonnell-etal2008} $10^4 \ \ensuremath{\rmn{M}_\odot}$ of gas are initially distributed in a cylinder of 10 pc length and 3 pc diameter, with a linear density gradient along the main axis, reaching a maximum of 33\% higher than the average density at one end, and 33\% lower at the other. For computational reasons a particle-splitting method was employed \citep{kitsionas+whitworth2002,kitsionas+whitworth2007}, which gives an equivalent of $4.5\times 10^{7}$ SPH particles for the calculation, and a mass resolution of 0.0167 \ensuremath{\rmn{M}_\odot}. Turbulence is modelled using an initial velocity field with power spectrum $P(k) \propto k^{-4}$. For the whole cloud the kinetic energy equals the gravitational energy, which results in one end of the cloud being bound and the other unbound.
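The two accretion criteria described above can be summarised in a short sketch (Python; the unit system AU/$\mathrm{M}_\odot$/yr, the two-body energy bookkeeping and the function itself are illustrative assumptions of ours, not the actual SPH implementation):

```python
import numpy as np

G = 4.0 * np.pi**2  # gravitational constant in AU^3 Msun^-1 yr^-2

def is_accreted(r_au, v_rel, m_sink, r_sink=200.0, r_acc=40.0):
    """Illustrative version of the two accretion criteria:
    (i) gravitationally bound gas particles inside the sink radius,
    (ii) any gas particle inside the accretion radius."""
    if r_au < r_acc:
        return True
    if r_au < r_sink:
        # specific orbital energy of the gas particle w.r.t. the sink
        e = 0.5 * v_rel**2 - G * m_sink / r_au
        return e < 0.0
    return False
```

For instance, a gas particle at rest 100 \ensuremath{\rmn{AU}}\ from a 1 \ensuremath{\rmn{M}_\odot}\ sink is bound and hence accreted, whereas a particle outside 200 \ensuremath{\rmn{AU}}\ is never accreted by either test.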
The gas follows a barotropic equation of state of the form \< P &=& k \rho^{\gamma}\> where \< \begin{array}{r@{\ }c@{\ }l@{}lr@{\ }c@{\ }c@{\ }c@{\ }l} \gamma &= & 0.&75; & & & \rho & \leq & \rho_1 \\ \gamma &= & 1.&0; & \rho_1 & \leq & \rho & \leq & \rho_2 \\ \gamma &= & 1.&4; & \rho_2 & \leq & \rho & \leq & \rho_3 \\ \gamma &= & 1.&0; & & & \rho & \geq & \rho_3 \\ \end{array} \> and $\rho_1 = 5.5 \times 10^{-19}\ \mathrm{g}\ \mathrm{cm}^{-3}$, $\rho_2 = 5.5 \times 10^{-15}\ \mathrm{g}\ \mathrm{cm}^{-3}$ and $\rho_3 = 2 \times 10^{-13}\ \mathrm{g}\ \mathrm{cm}^{-3}$. Again, star formation is modelled via sink particles, with a critical density of $6.8 \times 10^{-14}\ \mathrm{g}\ \mathrm{cm}^{-3}$, a sink radius of 200 \ensuremath{\rmn{AU}}\ and an accretion radius of 40 \ensuremath{\rmn{AU}}. The smoothing radius for gravitational interactions is 40 \ensuremath{\rmn{AU}}, a quarter of that for the $10^3\ \Msun$\ calculation. \section{Cluster identification}\label{sec_clusteridentification} \begin{figure} \begin{center} \includegraphics[width=0.62\fullcolumn]{fig01} \end{center} \caption{\label{snapshot_dbreak} Influence of $d_\mathrm{break}$ on the detected subclusters (large dots); sink particles not in a subcluster are shown as small dots. With $d_\mathrm{break}=0.025$ pc the five detected subclusters have properties similar to a detection by eye. A too small $d_\mathrm{break}$ (0.01 pc) cuts off the lower-density outer regions. With a too large $d_\mathrm{break}$ (0.05 pc) only 2 subclusters are detected, with the larger one being highly substructured. } \end{figure} For the identification of subclusters we employ a minimum spanning tree.
The minimum spanning tree is a network of connections between points, not containing any closed loops, with the minimum possible total length of the connections \citep[for the relation between the minimum spanning tree and clustering identification, the properties of the minimum spanning tree in general and algorithms for its construction see e.g. ][]{zahn1971}. The minimum spanning tree and its properties have previously been used to determine the level of substructure in a star cluster, e.g. the $Q$ measure of structure by \citet{cartwright+whitworth2004} or the $\Lambda$ measure of mass segregation by \citet{allison-etal2009a}. A minimum spanning tree does not only characterise the degree of substructure, but can also be used to identify the subclusters themselves. A clustering algorithm based on the minimum spanning tree has the advantage that the subclusters can have arbitrary shapes, that small-$n$ subclusters can be found, and that only one parameter, the break distance $d_\mathrm{break}$, needs to be specified. Once the minimum spanning tree containing all sinks has been constructed, subclusters can be identified by splitting the global minimum spanning tree into sub-trees by removing all edges which have a length larger than $d_\mathrm{break}$. The break distance can be related to a minimum surface density of points which is required for groups to remain connected. The remaining sub-trees are then identified as subclusters if they contain more than $n_\mathrm{min}=12$ sink particles. Sinks of sub-trees with a smaller $n$ are attributed to the ``field''. To each subcluster we assign an identification number which is unique to the most massive sink particle in it. Sometimes it can occur that another sink particle in the same physical subcluster accretes so much that it takes over the position of the most massive particle. In this case we assign a new identification number to the cluster.
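The splitting procedure just described can be sketched in a few lines of Python (assuming scipy; the function name and toy interface are ours, not the code actually used for the analysis):

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def find_subclusters(xy, d_break=0.025, n_min=12):
    """Split the minimum spanning tree of the (projected) sink positions
    at edges longer than d_break; sub-trees with at least n_min members
    are returned as subclusters, all other sinks count as 'field'."""
    mst = minimum_spanning_tree(distance_matrix(xy, xy)).toarray()
    mst[mst > d_break] = 0.0                    # remove the long edges
    n_comp, label = connected_components(mst, directed=False)
    members = [np.flatnonzero(label == k) for k in range(n_comp)]
    return [m for m in members if len(m) >= n_min]
```

Applied to, say, two compact clumps of 20 points each plus a few isolated sinks, the function returns exactly the two clumps and leaves the isolated sinks as field objects.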
The clustering algorithm using the minimum spanning tree is not scale-free, as a particular length scale, $d_\mathrm{break}$, is needed. The choice of $d_\mathrm{break}$ is somewhat arbitrary, as experiments following ideas by \citet{zahn1971} to determine a reasonable $d_\mathrm{break}$ self-consistently from e.g. the edge length distribution gave no robust scale-independent criteria. Thus we chose $d_\mathrm{break}$ such that the subclusters found by the clustering algorithm have properties similar to subclusters which are selected by eye. To assess the effects of various $d_\mathrm{break}$ we analysed the $10^3\ \Msun$\ data set with $d_\mathrm{break}=$ 0.01 pc, 0.025 pc and 0.05 pc and show a snapshot made at $3 \times 10^5$ yr in Fig. \ref{snapshot_dbreak}. Clearly one sees that too large a value of $d_\mathrm{break}$ (0.05 pc) identifies objects as subclusters that themselves contain considerable substructure. On the other hand, a very small $d_\mathrm{break}$ cuts off low-density regions of the actual subclusters. We found that $d_\mathrm{break}$ = 0.025 pc gives the best results, as all reasonably rich subclusters are detected with a sufficient quantity of their low-density outskirts, and mergers do not occur prematurely. We however emphasise that the main utility of this approach is that it allows one to make comparisons with observations that are analysed with the same value of $d_\mathrm{break}$. As observations only provide a {\it projection}, we also project the simulation data onto two dimensions, for both calculations in the $x$-$y$ plane. To exclude projection effects causing artefacts in the results we carried out all our analyses in the other projections as well ($x$-$z$ and $y$-$z$). Different choices of the plane of projection do not affect our results qualitatively, and only insignificantly quantitatively.
\section{Cluster assembly history}\label{sec_clusterassembly} \begin{figure*} \includegraphics[width=2.2\fullcolumn]{fig02_top} \includegraphics[width=2.2\fullcolumn]{fig02_bottom} \caption{\label{snapshot_times}\label{snapshot_large} Time-evolution of the projected spatial distribution of the sink particles, with large dots representing sinks in subclusters (whose labels correspond to the identification numbers in Fig. \ref{merginghistory}) and small dots for ``field'' sinks. The snapshots start at different global times of the two calculations, but at stages of similar structure. The top row shows the central $0.6 \times 0.6$ pc of the $10^3\ \Msun$\ calculation, the middle row the corresponding section in the $10^4\ \Msun$\ calculation (large ticks = 0.1 pc). In the bottom row displaying the whole area of the $10^4\ \Msun$\ calculation ($6 \times 6$ pc, large ticks = 1 pc) a box marks the location of the detail section. } \end{figure*} We start our analysis of the simulations by constructing the merging history of the subclusters and by examining the general properties of the simulations, such as the evolution of the total number of sinks and their total mass. The overall evolution of the two simulations is illustrated by Figure \ref{snapshot_times}, showing the projected distributions of the sink particles at different times. The small scale simulation (top row) simply demonstrates a history of hierarchical merging, with the final outcome being the creation of a single merged entity and a smaller population of sinks that are identified as `field stars' by our clustering algorithm. The bottom row shows the global evolution of the large scale simulation: as is consistent with the globally unbound state of this simulation, one sees that merging does not go to completion and that there are instead regions of local merging and a pronounced field population in between.
On the other hand, when one homes in on a dense region of this large scale simulation (the box shown in the lower panels) we see (middle row) an evolutionary sequence that is very similar to that shown in the small scale simulation (top row). In general terms we will find in all our subsequent analysis that significant differences between the two simulations all relate to parameters that take into account the dispersed population and the survival of multiple clusters in the larger (unbound) simulation. \subsection{Merging history} \begin{figure*} \parbox[c]{\fullcolumn}{\includegraphics[width=\fullcolumn]{fig03_left_top}\\\vspace{.7cm}\\\hspace{0.07\fullcolumn}\includegraphics[width=0.885\fullcolumn]{fig03_left_bottom}} \hspace{\fullcolumnspace} \parbox[c]{\fullcolumn}{\includegraphics[width=\fullcolumn]{fig03_right}} \caption{\label{merginghistory} Merging history of the subclusters, left for the $10^3\ \Msun$\ and right for the $10^4\ \Msun$\ calculation. Each dot marks the detection of a subcluster, with the size of the dot scaling with the richness of the subcluster (only subclusters which have been detected more than 5 times are shown). Arrows at the end of a lifeline mark mergers of subclusters, or, if they point to the beginning of a new lifeline, a change of the most massive sink particle as the most massive sink is overtaken by another. } \end{figure*} Figure \ref{merginghistory} depicts merger trees for cluster assembly. Each subcluster is denoted by the identification number of its most massive sink particle, with a symbol size corresponding to the number of sinks in the subcluster. The arrows at the end of a lifeline correspond to merger events where the merged subcluster is given the identification number of the subcluster that had previously contained the most massive member of the new combined entity.
The upward pointing arrows connecting the end of one lifeline with the start of a new one correspond to cases where the identity of the most massive sink particle changes (i.e. one sink overtakes another in mass as a result of accretion). The subcluster is then assigned a new identification number (and lifeline), but this is only a re-labelling. Subclusters that are registered as subclusters on fewer than five occasions do not appear on this plot. We also see occasional gaps in the lifelines of particular subclusters: these are usually small or low density subclusters where the relatively modest rearrangement of their members due to few body dynamical effects changes whether or not the grouping is classified as a subcluster. Depending on the size of the subclusters involved, it can take up to $\approx 5 \times 10^4$ yr for a merger to produce a single, stable new structure, as can for example be seen from the sporadic detections of subcluster \# 4 during its merger with \# 2 in the $10^3\ \Msun$\ simulation. \citet{fellhauer-etal2009} investigated the time scales for mergers of a spherically symmetric distribution of subclusters embedded in a background potential (typically more than $\approx 5 \times 10^5$ yr for systems comparable to ours). The merger time scales we find are perhaps somewhat shorter than theirs, as the subclusters are not distributed isotropically but along filaments, which also direct their motion. Overall, Figure \ref{merginghistory} describes a situation of hierarchical merging; in the small simulation the system evolves towards a single merged entity whereas in the large simulation (which is globally unbound) the system is tending to several merged structures which (from inspection of the simulation) are unlikely to undergo further merging. We note that the change of identity of the most massive sink particle in a cluster occurs relatively frequently.
This is rather surprising for a power-law mass distribution, where the expected spacings in mass between sinks are relatively large, and it is not expected that differential accretion would cause one sink to overtake another. In fact, we shall see later that the masses of the most massive sink particles in a cluster are rather well correlated, so that relatively minor changes in accretion history can change the identity of the most massive member. \subsection{Cluster population}\label{composite_population} \begin{figure*} \includegraphics[width=\fullcolumn]{fig04_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig04_right}\\ \caption{\label{ninclusterhistogram} Histogram of the subclusters of the composite population (i.e. all subclusters of all time steps) by their number of sinks, for the $10^3\ \Msun$\ (left) and the $10^4\ \Msun$\ (right) calculation. The peak at $n=10^{2.5}$ in the left hand panel corresponds to the formation of a long-lived central cluster in the $10^3\ \Msun$\ simulation, similar to the large-$n$ peaks of the $10^4\ \Msun$\ calculation. Note that these distributions do not correspond to what would be seen in a single snapshot in time; for this see Fig. \ref{ninclusterhistogram_end}. } \end{figure*} In later Sections we will look at various properties of the subclusters, such as their shape, mass segregation, etc. In an individual time step the number of detected subclusters is not very large; therefore we sometimes use the subclusters from {\it all time steps together} for the analysis, which we term the {\it `composite population'}. As they can be at different stages of evolution one has to be careful when interpreting the results. Figure \ref{ninclusterhistogram} shows a histogram of subclusters in the composite population by their number of sink particles.
The composite population is dominated by rather small clusters ($n < 30$--$50$) which are usually very young subclusters ($< 10^5$ yr since their first detection), or subclusters which have never merged (compare with the merging history, Fig. \ref{merginghistory}, where the symbols' sizes reflect the number of sinks). The large-$n$ peak in the $10^3\ \Msun$\ histogram is produced by the formation of a cluster of $\approx 300$ sinks which, being long-lived, appears in many time steps. We emphasise that the distributions in Fig. \ref{ninclusterhistogram} are provided in order to interpret results based on the composite population and should {\it not} be interpreted as spectra of cluster richness at a given time. \begin{figure} \includegraphics[width=\fullcolumn]{fig05} \caption{\label{ninclusterhistogram_end} Number spectrum of subclusters at the end of the $10^4\ \Msun$\ simulation. We show for comparison a line corresponding to a number spectrum $\propto n^{-2}$. } \end{figure} In order to get an idea of the latter we plot in Fig. \ref{ninclusterhistogram_end} a histogram of the cluster number spectrum at the end of the $10^4\ \Msun$\ simulation. For comparison we show the $n^{-2}$ spectrum found by \citet{lada+lada2003} for the embedded star clusters in the Milky Way (with $m_\mathrm{cluster}$ between 50 and 1000\ \ensuremath{\rmn{M}_\odot}). \subsection{Build-up of stellar number and mass} \begin{figure*} \includegraphics[width=\fullcolumn]{fig06_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig06_right} \caption{\label{clusterassembly} Assembly by number (solid) and mass (dotted) for the whole system (thick symbols) and for all sinks in subclusters (thin symbols), respectively, normalised to the total number/total mass of all sinks at the end of the simulation. The left panel is for the $10^3\ \Msun$, the right one for the $10^4\ \Msun$\ calculation.
} \end{figure*} \begin{figure*} \includegraphics[width=\fullcolumn]{fig07_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig07_right} \caption{\label{starfractions} Evolution of the fractions of sinks in subclusters at a given time by number (solid) and mass (dotted), for the $10^3\ \Msun$\ calculation (left) and the $10^4\ \Msun$\ calculation (right). The fraction in subclusters increases by a mixture of sink formation within the subclusters and the accretion of isolated sinks or small groupings onto the subclusters. The modest decrease in the fraction of sinks in subclusters at late times in the $10^3\ \Msun$\ calculation results from the formation of one large cluster with a low-density halo. } \end{figure*} Figure \ref{clusterassembly} shows that the fraction of all sinks formed by a given time rises more steeply by number (solid curves) than by mass (dotted curves, both normalised to the total number or mass at the end of the simulation). Later on, fewer new sinks are formed but all accrete mass so that the mean stellar mass increases during the simulation (and hence, by implication, the mass function evolves during the simulation). The thin curves in Figure \ref{clusterassembly} refer to the sinks that are classified as being in subclusters at any time (also normalised to the total number or mass at the end of the simulation): they start to increase later than the thick curves (for all sinks), which shows that the classification of clusters is delayed with respect to the formation of the first sinks. This can be seen more directly in Figure \ref{starfractions}, which shows that, after an initial delay, the fraction of sinks in clusters rises to $60-80\%$ (note that the fraction of sinks in clusters is higher in the bound simulation, as expected).
The initial delay is understandable, since we imposed a minimum cluster membership number of $12$; the first sinks form in small-$n$ clusters that do not register as clusters until they have acquired enough members by cluster merging. In the $10^3\ \Msun$\ simulation the fraction of sinks in subclusters reaches a maximum and then decreases slightly, which is caused by dynamical evolution. \section{Cluster structure and morphology}\label{sec_morphology} \subsection{Structure}\label{sec_structure} \begin{figure*} \includegraphics[width=\fullcolumn]{fig08_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig08_right} \caption{\label{cartwrightq} Time-evolution of the $Q$ parameter, see Section \ref{sec_structure}. The horizontal line marks $Q=0.8$, which corresponds to a uniform distribution (radial exponent = 0, $D0.0$, or fractal dimension = 3, $F3.0$). Fractally subclustered systems have $Q<0.8$ and radially concentrated systems $Q>0.8$. The fractal dimension ($F$) and the radial exponent ($D$) can be read off the right axis. The whole system (big dots) starts fractal and evolves towards a centrally concentrated system in the bound $10^3\ \Msun$\ calculation and stays fractal in the unbound $10^4\ \Msun$\ calculation. The subclusters (lines) evolve in both calculations towards concentrated systems when they are not disturbed. Mergers lead to the more or less pronounced jumps towards smaller $Q$. The thick line is for the richest subcluster that is formed in each of the calculations. } \end{figure*} Figure \ref{cartwrightq} illustrates the effect of the cluster merging history on a structural parameter of the stellar distribution. Here we use the $Q$ parameter, introduced by \citet{cartwright+whitworth2004}, which is defined as the ratio of the mean edge length in the minimum spanning tree to the correlation length of the stellar distribution. Fig.
\ref{cartwrightq} shows the time-evolution of the $Q$ parameter for the whole simulation (big dots) and for individual subclusters containing a minimum number of 48 sinks (lines). As discussed by \citet{cartwright+whitworth2004} small values of this parameter ($< 0.8$) correspond to fractally distributed points (the small value reflecting the fact that the existence of multiple nuclei tends to increase the correlation length more than the mean edge length). On the other hand, higher $Q$ values correspond to centrally concentrated distributions, with the $Q$ value rising with the degree of central concentration.% \footnote{ We do not correct our interpretation of Figure \ref{cartwrightq} for the fact that our stellar distributions are not spherically symmetric, since \citet{cartwright+whitworth2008} found that such corrections were negligible for aspect ratios less than $\approx 3$; we show below that extreme ellipticities are rare in our data. A geometrical factor is implicitly contained in the normalisation through the choice of a circle as the boundary of the uniform distribution, as in \citet{cartwright+whitworth2004} (\citealp{schmeja+klessen2006} instead use the convex hull of the data set). } Figure \ref{cartwrightq} shows that in the small simulation, the total stellar distribution is characterised by monotonically increasing $Q$ values, indicating the formation of a single centrally concentrated cluster through hierarchical merging. The recovery from a substructured subcluster to a radially concentrated system occurs over about $0.5$--$1.0\times 10^5$ yr, which can be seen as the time for a merger. The large simulation remains in the fractal regime throughout, since (being globally unbound) it retains a multiply clustered structure.
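A schematic implementation of $Q$ follows directly from the definition, assuming the circular normalisation of \citet{cartwright+whitworth2004}: the mean MST edge length is normalised by $\sqrt{N A}/(N-1)$, with $A$ the area of the circle with the cluster radius, and the mean pairwise separation by the cluster radius. The sketch below (not our analysis code, which may differ in details) illustrates this:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def q_parameter(positions):
    """Q = normalised mean MST edge length / normalised mean separation."""
    n = len(positions)
    seps = pdist(positions)
    mst = minimum_spanning_tree(squareform(seps)).toarray()
    edges = mst[mst > 0]
    # cluster radius: largest distance from the geometrical centre
    r_clus = np.linalg.norm(positions - positions.mean(axis=0), axis=1).max()
    area = np.pi * r_clus**2
    m_bar = edges.mean() / (np.sqrt(n * area) / (n - 1))  # normalised MST edge
    s_bar = seps.mean() / r_clus                          # correlation length
    return m_bar / s_bar
```

A centrally concentrated distribution yields a larger $Q$ than a clumpy one with the same number of points, reproducing the qualitative behaviour described above: clumping inflates the correlation length while leaving most MST edges short.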
In both simulations, the $Q$ values of individual clusters fluctuate, exhibiting periods of increase (as isolated clusters become more centrally concentrated as a result of two body relaxation) followed by abrupt reductions of $Q$ into the fractal regime during episodes of cluster mergers. The range of $Q$ values that we recover from our whole simulations is similar to that found in observations by \citet{cartwright+whitworth2004} and \citet{schmeja+klessen2006}, where fractal dimensions as low as 1.5 ($Q=0.47$) are found for Taurus and radial concentrations following $r^{-2.2}$ ($Q=0.98$) for IC 348. The Orion Nebula Cluster has $Q=0.82$ \citep[considering only stars; ][]{kumar+schmeja2007}. \citet{schmeja-etal2008,schmeja-etal2009} derived $Q$ in subclusters identified within larger regions (Perseus, Serpens, Ophiuchus and NGC346) and obtained values of $0.59\leq Q \leq 0.93$. \begin{figure} \includegraphics[width=\fullcolumn]{fig09} \caption{\label{radialdensity} Double-logarithmic plot of the complementary cumulative radial density, $1 - P(r)$, against distance measured from the geometrical cluster centre containing all sinks at the end of the $10^3\ \Msun$\ calculation. Power-law distributed data follow straight lines in this kind of plot. } \end{figure} In the $10^3\ \Msun$\ simulation the $Q$ parameter reaches values of $\approx 1.4$ at the end of the calculation, which implies a very steep radial density following $r^{-3}$, but the central subcluster appears to have a uniform density. In order to resolve this apparent contradiction we investigate the density profile of the whole system at the end of the simulation. For power-law distributed data the cumulative distribution function provides a convenient way of visually assessing all available data without the need to group them as in a histogram.
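This visual check can be sketched as follows, assuming a pure power law: synthetic data with exponent $\alpha$ drawn by inverse-transform sampling give an empirical $\log(1-P)$ versus $\log x$ relation that is a straight line of slope $1-\alpha$ (the variable names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, l = 1.9, 0.1
# inverse-transform sample of a power law with exponent alpha, lower limit l
x = l * rng.uniform(size=20000) ** (1.0 / (1.0 - alpha))
xs = np.sort(x)
ccdf = 1.0 - np.arange(1, xs.size + 1) / xs.size   # empirical 1 - P(x)
keep = ccdf > 1e-3                                 # avoid log(0) at the top end
# least-squares slope in log-log space; ~ 1 - alpha for power-law data
slope = np.polyfit(np.log(xs[keep]), np.log(ccdf[keep]), 1)[0]
```

A break in the slope of such a plot then signals a change of the local power-law exponent, which is exactly how the three segments in Fig. \ref{radialdensity} are read off.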
The probability density of a power law distribution from $l$ to $\infty$ is given by $p(x)= - \frac{1-\alpha}{l^{1-\alpha}}x^{-\alpha}$, and the cumulative density is $P(x)= 1 - \frac{x^{1-\alpha}}{l^{1-\alpha}}$. Therefore, a plot of $\log \left(1- P(x)\right)$ (the logarithm of the complementary cumulative density) vs. $\log x$ should be a straight line. We show such a plot for the data in Figure \ref{radialdensity}. The radial density distribution does not follow a straight line but falls into three segments: a flat/uniform central region, a main region from 0.1 pc to 1 pc proportional to $r^{-1.9}$, and an outer halo falling as $r^{-3.5}$ or even more steeply. The halo is formed by low-mass sinks which have left the main region due to dynamical interactions (an effect which is also responsible for the decreasing fraction of sinks in subclusters in Fig. \ref{starfractions}). Most of the mass in this merged cluster is contained in a region whose density profile is close to the isothermal $\rho \propto r^{-2}$ profile. We thus see that the $Q$ parameter method of estimating the radial exponent is unduly influenced by the steeper distribution in the halo. \subsection{Morphology} \begin{figure*} \includegraphics[width=\fullcolumn]{fig10_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig10_right} \caption{\label{ellipticityhistogram} Histogram of the ellipticities of the subclusters (derived from fitting a 2D Gaussian distribution), using the composite population of subclusters (from all times). The left panel shows the result for the $10^3\ \Msun$\ calculation and the right panel for the $10^4\ \Msun$\ calculation. } \end{figure*} In Figure \ref{ellipticityhistogram} we plot a histogram of the ratio of the projected major axis to projected minor axis for our clusters. This quantity has been derived by fitting a two-dimensional normal distribution to the projected number density distribution.
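Such a fit reduces, in essence, to the eigenvalues of the covariance matrix of the projected positions; a minimal sketch (the function name is ours):

```python
import numpy as np

def axis_ratio(xy):
    """Projected major/minor axis ratio from the eigenvalues of the
    position covariance matrix, equivalent to fitting a 2D Gaussian."""
    eigvals = np.linalg.eigvalsh(np.cov(xy, rowvar=False))  # ascending order
    return np.sqrt(eigvals[-1] / eigvals[0])
```

The square roots of the eigenvalues are the standard deviations along the principal axes, so their ratio is the ellipticity of the fitted Gaussian contours.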
The eigenvalues of the covariance matrix then give an elliptical contour of equal values of probability density containing $\approx 30 \%$ of the sink particles.% \footnote{ Note that - in contrast to some previous algorithms for deriving cluster shapes - we are not unduly sensitive to the locations of the outermost points in the dataset \citep[cf.][]{schmeja+klessen2006,cartwright+whitworth2008}. This can be particularly problematic since the definition of clusters through splitting a minimum spanning tree can lead to `hairs' at the end of the cluster (sub-trees that reach out of the cluster body and have no branches) and so it is important to avoid an algorithm that gives undue importance to these outlying protrusions. } We see in Figure \ref{ellipticityhistogram} that most clusters are mildly elongated: the distribution peaks at $1.5$ and most clusters have an axis ratio of less than $2$. Subclusters form in dense nodes along the filaments of gas, as dense small-$n$ systems that attain a spherical shape shortly after their formation, giving the peak in Fig. \ref{ellipticityhistogram}. One filament can contain several subclusters, so that the distribution of subclusters is elongated, but not the subclusters themselves, as visible in the snapshots in Fig. \ref{snapshot_times}. During a merging event the resulting object is naturally elongated, leading to the tail of large ellipticities in Fig. \ref{ellipticityhistogram}. An example is cluster \# 5 in the $10^4\ \Msun$\ simulation at $6\times 10^{5} \mathrm{yr}$, which has an ellipticity of 3.86 and $Q=0.46$ (see the middle right panel in Fig. \ref{snapshot_times} for the projection) and is currently merging with cluster \# 20. \section{Formation sites of stars and (primordial?)
mass segregation}\label{sec_inistar} \subsection{Formation sites of stars} \begin{figure*} \includegraphics[width=\fullcolumn]{fig11_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig11_right} \caption{\label{newsinks} Histogram of the fractional radial ranking of newly formed sinks in the subcluster to which they are assigned, measured at the time of formation (left $10^3\ \Msun$, right $10^4\ \Msun$\ calculation). Sinks which are born in the field ($\approx$ 30--40\% of all sinks) are not included. } \end{figure*} It has already been noted by \citet{bonnell-etal2004} that sinks do not necessarily form close to the centres of existing clusters (with the centre defined using the most massive sink particle, an assumption we test below). With our definition of a subcluster we find that only 50--60\% of all sinks form within a subcluster. Within the subclusters the distribution of the formation sites follows the same distribution as existing sinks in the subclusters (with only a very mild concentration towards the inner region), as is visible in the histogram of the radial ranking (Figure \ref{newsinks}). The sinks forming outside of subclusters form either in the immediate neighbourhood of a subcluster or as the centres of new subclusters. Significantly, we find that the most massive sink particles avoid formation within existing subclusters: indeed virtually no sinks which end up with masses $>1\ \ensuremath{\rmn{M}_\odot}$ form within the half-number radius of an existing cluster. It is thereby more correct to say that {\it clusters form around (seeds of) massive stars} than that massive stars form in clusters. \subsection{Development of mass segregation} We now turn to the question of where stars of various masses end up within the subclusters (as opposed to where they form).
We emphasise that since the entirety of the simulations corresponds to the deeply embedded phase (age $<$ 0.5 Myr), even the {\it final} state of the simulations can be used to assess what is usually termed primordial mass segregation. We have looked at a variety of mass segregation diagnostics and find that mass segregation usually applies to the ten to fifty most massive sinks. For example, cumulative radial distributions within clusters for stars in different mass bins rarely reveal consistent evidence for mass segregation, apart from in some clusters which are spherically symmetric. \citet{bate2009a} finds no mass segregation in his data using cumulative distributions whereas \citet{moeckel+bonnell2009b} using their (non-parametric) technique find mass segregation in the same data. We use the $\Lambda$ measure of \citet{allison-etal2009a}, which is based on the minimum spanning tree and allows one to detect mass segregation even if only a few stars are involved. For the $i$th most massive star it is defined as \< \Lambda_{(i)} = \frac{\bar{l_i}}{l_{(i)}} \pm \frac{\bar{\sigma_i}}{l_{(i)}}. \label{allison_lambda}\> $\bar{l_i}$ and $\bar{\sigma_i}$ are the mean length and its standard deviation of a minimum spanning tree constructed from a sample of $i$ stars which are randomly drawn from the total sample of stars in the subcluster. $l_{(i)}$ is the length of the minimum spanning tree containing the $i$ most massive stars. $\Lambda_{(i)}=1$ means that the $i$ most massive stars are distributed as the other stars and there is no mass segregation. Mass segregation is detected if $\Lambda$ is significantly larger than unity (in terms of standard deviations), and the absolute value of $\Lambda$ reflects the degree of spatial concentration (i.e. the larger $\Lambda$ the more spatially concentrated). $\Lambda$ has the great advantage of being non-parametric, i.e. knowledge about the shape or density profile is not necessary.
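Equation (\ref{allison_lambda}) can be sketched directly from its definition; the number of random samples and the function names below are our own choices:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(pos):
    """Total edge length of the minimum spanning tree of a point set."""
    return minimum_spanning_tree(squareform(pdist(pos))).sum()

def lambda_msr(positions, masses, i, n_random=50, seed=0):
    """Lambda_(i): mean MST length of random i-samples divided by the
    MST length of the i most massive stars (Allison et al. 2009)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(masses)[::-1]            # most massive first
    l_massive = mst_length(positions[order[:i]])
    l_rand = np.array(
        [mst_length(positions[rng.choice(len(masses), i, replace=False)])
         for _ in range(n_random)])
    return l_rand.mean() / l_massive, l_rand.std() / l_massive
```

If the $i$ most massive stars are centrally concentrated their MST is short, so $\Lambda_{(i)}$ rises well above unity; for a random mass-position assignment it stays near one.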
\begin{figure*} \includegraphics[width=1.6\fullcolumn]{fig12} \caption{\label{masssegregation_merger} Evolution of mass segregation for a particular subcluster during a merging event (\# 2 in the $10^3\ \Msun$\ calculation). The left panels show the projected distribution of the sink particles at each snapshot, with the analysed subcluster marked with black dots. The right panels display the $\Lambda$ measure (eq. \ref{allison_lambda}, \citealp{allison-etal2009a}) for the 100 most massive sinks. The index of the sinks can be read off the top axis of the uppermost panel and is the same throughout. As the subcluster grows in number, the bottom axis of each panel gives the massive sinks' rank as a percentage of the total number. Before the merger (top row) the subcluster is already mass segregated, $\Lambda$ is larger than unity for the $\approx$ 60 most massive sinks (40\%). During the merger (middle row) the $\approx$ 15 most massive sink particles are not mass segregated as they are still in the centres of the merging subclusters, but not randomly distributed ($\Lambda$ exceeds unity). After the merger (bottom row) the $\approx$ 10 most massive sinks quickly reach a state of strong central concentration (large $\Lambda$) and general mass segregation is at a 10\% level. } \end{figure*} The typical states of mass segregation in a rich subcluster are shown in Fig. \ref{masssegregation_merger}, which follows the time evolution of mass segregation in an individual cluster during a merging event. The left panel shows the projected spatial distribution and the right panel $\Lambda$. A subcluster that has never undergone a merging event or had a merging event a long time ago (top panel) shows a monotonic decrease of $\Lambda$ extending over a large fraction of the massive sinks: in our example about 40 per cent of all sinks (by number) are significantly segregated. The snapshot is taken just before a number of subclusters will merge into the analysed subcluster.
During the merger (middle panel) the merging clusters are gradually dissolved and incorporated in the merger product, so that for some time the detected subcluster actually has multiple centres. These centres still hold the massive sinks, so that they are spatially more widely distributed than a random sample of sinks, which will contain mostly sinks from the richest previous cluster. However, as soon as the random sample becomes large enough that sinks are also drawn from the other centres, the massive sinks show a concentration within these centres. This explains the typical behaviour of $\Lambda$ during a merger, which increases from unity for the $\approx$ 10 most massive sinks until it reaches a maximum, in our example at 5\% of the sinks, and then gradually decreases again. The total percentage of sinks that are mass segregated is smaller compared to before the merger. When the merged subcluster has settled down to a system with a single centre (bottom panel), the $\approx$ 10 most massive sinks quickly form a close, concentrated system in the centre, leading to large values of $\Lambda$. The less massive sinks are more randomly distributed so that in total a smaller fraction of the sinks is mass segregated ($\approx$ 10\%). This quick development of mass segregation after a merger has already been found in $N$-body simulations of merging subclusters by \citet{mcmillan-etal2007} and \citet{allison-etal2009b}. The pattern of mass segregation (i.e. that it involves of the order of ten stars shortly after a merger) is the same as that found by \citet{moeckel+bonnell2009b} in the simulation of \citet{bate2009a}. \citet{allison-etal2009b} analysed the evolution of mass segregation in a cluster evolving from fractal initial conditions to a centrally concentrated system, but without mass segregation of the subclusters. At an age of $\approx$ 500 000 yr they find values of $\Lambda$ for the whole cluster which are comparable to the values we derived.
For the Orion Nebula Cluster (analysed by \citealp{allison-etal2009a}) only the nine most massive stars are mass segregated, which is comparable to the post-merger state we find. \begin{figure*} \includegraphics[width=2\fullcolumn]{fig13} \caption{\label{dmaxhistogram} Histogram of the fractional radial ranking of the most massive (top), second most massive (middle) and third most massive (bottom) sink particle in its associated subcluster, split up by the number of sinks in the subcluster. The composite population of the $10^4\ \Msun$\ calculation is used to make the histograms. In the absence of mass segregation the histogram would be flat: the peak at small values shows that the massive sinks are preferentially found near the cluster centre. The second peak with a ranking of $\approx 1$, especially for the second and third most massive sink, is due to mergers, where two centres are still present. } \end{figure*} In the previous paragraph we gave examples of rich subclusters that are mass segregated if they have not undergone a merger recently. In order to establish what is the observational norm we turn to the composite population of the $10^4\ \Msun$\ calculation (Sec. \ref{composite_population} and Fig. \ref{ninclusterhistogram}) and have split our sample of subclusters according to their richness ($n \leq 30$, $30< n \leq 50$, $50 < n \leq 100$ and $n > 100$). As subclusters gain new sinks during their evolution this sequence of increasing richness can also be seen as a sequence in time. In Figure \ref{dmaxhistogram} we plot histograms of the fractional radial rankings of the most massive, second and third most massive sinks. In the absence of mass segregation these histograms should be flat, which is roughly the case for the very small clusters ($n<30$), although even for these a weak trend of central concentration is present. These systems already contain the seeds of massive sinks (they have a large average stellar mass, see Fig.
\ref{averagemassplot}), and will become the central parts of richer subclusters. For the larger clusters there is clear evidence that the most massive sink particle is concentrated towards small radii, being rarely located beyond the inner 25\% of sinks (we emphasise that this radial ranking is based on distance from the geometrical cluster centre, rather than centre of mass). The second (and also third) most massive sink particle is also frequently found in the inner regions of populous subclusters, but there is a second peak in the upper quartile, corresponding to the case where the second most massive sink is located in the nucleus of a subcluster that is in the process of merging. Overall, therefore, we conclude that the most massive sinks are indeed segregated towards the centres of populous ($n\geq30$) subclusters. We will also see that the most massive sinks are preferentially located in subclusters as opposed to the field, as evidenced by the steeper slope of the upper tail of the IMF for the entire population as opposed to the total population contained in subclusters (see Fig. \ref{exponent_time}). \section{Evolution of the sink particle mass function}\label{sec_IMF} The mass distribution of the stars describes the end product of the star formation process. In this Section we analyse the sink particle mass distribution as proxy for the stellar mass function at each time step, with a focus on the high-mass tail of the mass distribution. We would like to stress that the results presented in this Section are not directly comparable to the observed stellar IMF, as a complete modelling of the star formation process is not computationally possible at present. Thus the {\it actual} mass of a star formed is not the sink particle mass, but lower, owing to simplifications in the computations.
Firstly, star formation is modelled by sink particles with radii larger than the proto-stellar radii, so that fragmentation could also occur within the sinks (formation of close binaries). The sink particle mass function is closer to being a system mass function since the observed distribution of binary separations implies that most binary companions would be located within the sink radius (200 au). \citet{weidner-etal2009a} found that the system and individual mass function have only slightly different exponents (difference $< 0.2$). Furthermore, feedback by stellar winds or radiation is not included in the model, so that accretion is not hindered or stopped. These (zero-feedback) calculations thus overestimate system masses. Also, the gravitational force between sink particles is softened on a scale of a few sink radii, so that close encounters and binary formation are suppressed, which could influence the accretion history of the sink particles involved. Thus, the actual mass function of individual stars will have a smaller upper mass limit. Our reference hypothesis for the stellar mass distribution to compare with the sink particle mass function is the two-part power law parametrisation of the mass function by \citet{kroupa2001,kroupa2002}, \< \xi (m) &\propto& \begin{cases} m^{-\alpha_\mathrm{body}}; &\alpha_\mathrm{body}=1.3;\ \quad 0.08 \le m/\ensuremath{\rmn{M}_\odot} < 0.5 \\ m^{-\alpha_\mathrm{tail}}; &\ \ \alpha_\mathrm{tail}=2.35; \quad \ 0.5 \le m/\ensuremath{\rmn{M}_\odot} < 150. \end{cases} \label{stdIMF} \> As upper limit or truncation mass for the IMF, valid for all clusters unless otherwise estimated, we adopt the physical upper limit for stellar masses, above which stars do not appear to exist ($m_u = 150 \ensuremath{\rmn{M}_\odot}$, \citealp{weidner+kroupa2004,oey+clarke2005,koen2006}). We use the stellar mass function as a probability density, i.e. normalised such that $\int_{m_l}^{m_u} \xi(m) d m = 1$.
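For reference, eq. (\ref{stdIMF}) can be sampled segment-wise by inverse transform; the sketch below follows the limits, break mass and exponents of the equation, with the upper segment weighted so that $\xi(m)$ is continuous at the break:

```python
import numpy as np

def sample_kroupa(n, m_l=0.08, m_b=0.5, m_u=150.0, a1=1.3, a2=2.35, seed=0):
    """Draw n masses from the two-part power law by inverse-transform
    sampling; the upper segment carries a weight m_b**(a2 - a1) so
    that the density is continuous at the break mass m_b."""
    rng = np.random.default_rng(seed)
    seg = lambda a, lo, hi: (hi**(1 - a) - lo**(1 - a)) / (1 - a)
    w1 = seg(a1, m_l, m_b)                       # probability mass, lower part
    w2 = m_b**(a2 - a1) * seg(a2, m_b, m_u)      # probability mass, upper part
    low = rng.uniform(size=n) < w1 / (w1 + w2)   # choose the segment
    v = rng.uniform(size=n)
    m = np.empty(n)
    m[low] = (m_l**(1 - a1)
              + v[low] * (m_b**(1 - a1) - m_l**(1 - a1)))**(1 / (1 - a1))
    m[~low] = (m_b**(1 - a2)
               + v[~low] * (m_u**(1 - a2) - m_b**(1 - a2)))**(1 / (1 - a2))
    return m
```

With these parameters roughly three quarters of the stars by number fall below the break mass, while the tail above $0.5\ \ensuremath{\rmn{M}_\odot}$ follows the Salpeter exponent.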
The choice of methods for the analysis of the mass function depends on the number of data points. If the dataset contains a sufficiently large number of data ($n \ga 100$) direct methods can be applied, i.e. parameters can be estimated and goodness-of-fit tests can be carried out. For meagre datasets one has to rely on indirect methods, which are usually comparisons of quantities derived using the data with expectations derived using a hypothesis for the distribution, fully specified with all parameters. The most detailed information about the high-mass tail of the stellar mass distribution can be obtained at the end of the calculation, when the dataset has the largest number of data points. Thus we start at this point with our analysis of the mass function and proceed then to the time-evolution, which due to the small sample size can only be studied via more indirect methods. The findings from the final state will facilitate the interpretation of the time evolution. \subsection{Final mass function} \begin{table} \begin{tabular}{lrrrrrr} & $n$ & $n_\mathrm{tail}$ & $\hat{\alpha}_\mathrm{tail}$ & $\hat{m}_u$ & $m_{(n)} $\\ \multicolumn{6}{l}{$10^3\ \Msun$\ calculation, richest subcluster:}\\ & 372 & 110 & 1.67$\pm$0.10 & 23$\pm$2\ \ensuremath{\rmn{M}_\odot} & 21\ \ensuremath{\rmn{M}_\odot} \\ \multicolumn{6}{l}{$10^4\ \Msun$\ calculation, richest subcluster:}\\ & 476 & 98 & 1.93$\pm$0.11 & 39$\pm$8\ \ensuremath{\rmn{M}_\odot} & 30\ \ensuremath{\rmn{M}_\odot} \\ \multicolumn{6}{l}{$10^4\ \Msun$\ calculation, second richest subcluster:}\\ & 174 & 31 & 1.69$\pm$0.36 & 19$\pm$6\ \ensuremath{\rmn{M}_\odot} & 15\ \ensuremath{\rmn{M}_\odot} \\[1em] \multicolumn{6}{l}{$10^3\ \Msun$\ calculation, all sinks:}\\ & 563 & 148 & 1.79$\pm$0.11& 24$\pm$2\ \ensuremath{\rmn{M}_\odot} & 21\ \ensuremath{\rmn{M}_\odot} \\ \multicolumn{6}{l}{$10^4\ \Msun$\ calculation, all sinks:}\\ &1945 & 459 & 2.18$\pm$0.08 & 33$\pm$4\ \ensuremath{\rmn{M}_\odot} & 30\ \ensuremath{\rmn{M}_\odot} 
\\[1em] \multicolumn{6}{l}{$10^4\ \Msun$\ calculation, all sinks in all subclusters:}\\ &1645 & 267 & 1.92$\pm$0.07 & 34$\pm$4\ \ensuremath{\rmn{M}_\odot} & 30\ \ensuremath{\rmn{M}_\odot} \\ \multicolumn{6}{l}{$10^4\ \Msun$\ calculation, all sinks not in subclusters (``field stars''):}\\ & 890 & 202 & 2.55$\pm$0.14 & 9$\pm$1\ \ensuremath{\rmn{M}_\odot} & 8\ \ensuremath{\rmn{M}_\odot} \\ \end{tabular} \caption{\label{estimates} Estimated parameters of the mass functions for sinks in the high mass tail. $n$ is the total number of sinks in the object, $n_\mathrm{tail}$ the number with $m>0.8 \ensuremath{\rmn{M}_\odot}$. $\hat{\alpha}_\mathrm{tail}$ and $\hat{m}_u$ are the estimated exponent and truncation mass, respectively. $m_{(n)}$ is the mass of the most massive sink particle, given for comparison. The estimates were derived at the end of the simulations, with $\tau = 2.5\ t_\mathrm{ff}$ and $\tau = 1.0\ t_\mathrm{ff}$ for the $10^3\ \Msun$\ and $10^4\ \Msun$\ calculation, respectively. } \end{table} For the analysis of the final mass distribution we assume only that the mass distribution in the high-mass tail ($m> 0.8\ \ensuremath{\rmn{M}_\odot}$) follows a power law truncated at some value, imposing no assumption about the exponent or the truncation mass. To estimate the exponent, $\hat{\alpha}_\mathrm{tail}$, and truncation mass, $\hat{m}_u$, we use the bias-corrected maximum likelihood method of \citet{maschberger+kroupa2009}. The results are given in Table \ref{estimates}. In the most populous subclusters we find $\hat{\alpha}_\mathrm{tail}$ in the range from $\approx 1.7$--$1.9$. These are much smaller values than the Salpeter value, $\alpha_\mathrm{tail}=2.35$, which can be explained by the preference of massive sinks to be in subclusters. With only three estimates and considering the size of the error bars it is not unreasonable to assume a universal exponent $\alpha_\mathrm{tail} \approx 1.8$, valid within the dense subclusters.
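The estimation step can be illustrated with a deliberately simplified stand-in for the bias-corrected method of \citet{maschberger+kroupa2009}: in the sketch below the truncation mass is simply the sample maximum and the exponent comes from a grid search over the log-likelihood, with no bias correction applied (all names are illustrative):

```python
import numpy as np

def sample_trunc_powerlaw(n, alpha, m_l, m_u, rng):
    """Draw n masses from a power law m^-alpha truncated to [m_l, m_u]
    by inversion of the cumulative distribution function."""
    k = 1.0 - alpha
    u = rng.random(n)
    return (m_l**k + u * (m_u**k - m_l**k))**(1.0 / k)

def fit_trunc_powerlaw(m, m_l):
    """Naive maximum-likelihood fit of (alpha, m_u): m_u is taken as
    the sample maximum and alpha maximises the truncated power-law
    log-likelihood on a grid.  No bias correction is applied."""
    m = np.asarray(m, dtype=float)
    m_u_hat = m.max()
    log_sum = np.log(m).sum()
    alphas = np.linspace(1.01, 4.0, 1500)
    k = 1.0 - alphas
    log_norm = np.log(k / (m_u_hat**k - m_l**k))  # log of the pdf prefactor
    loglike = len(m) * log_norm - alphas * log_sum
    return alphas[np.argmax(loglike)], m_u_hat
```

On synthetic samples of a few thousand masses this recovers the input exponent to within a few per cent; the bias correction of the published method matters most for the small samples in Table \ref{estimates}.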
There is no apparent dependence of the exponent on the number of sinks in the tail. The estimated truncation masses are only marginally higher (up to $\approx 10\ \ensuremath{\rmn{M}_\odot}$) than the most massive sink particles in the clusters (15--30\ \ensuremath{\rmn{M}_\odot}), see also Table \ref{estimates}. $\hat{m}_u$ increases with increasing (total) number of sinks in the subcluster. This could indicate that the truncation mass of the mass function increases as the number of sinks increases. The truncation mass of a power-law distribution is difficult to estimate, and it is possible that despite the bias correction the ``true'' truncation mass can be underestimated by up to 50\%. Using a graphical goodness-of-fit technique, the SPP plot (stabilised probability-probability plot) described in \citet{maschberger+kroupa2009}, it can be assessed whether the data could be consistent with alternative hypotheses of a larger exponent or a larger truncation mass, and also if the data are obeying the assumed null hypothesis (in our case the power law with the estimated exponent and estimated truncation mass). The SPP plot is constructed by first sorting the data ascending in mass and then calculating for each data point the empirical cumulative density and the hypothetical cumulative density. The empirical cumulative density is given by $P_\mathrm{E} ( m_{(i)}) = \frac{i-0.5}{n}$, where $i$ is the rank of the data point in the ordered sample and $n$ the sample size. The cumulative density for the null hypothesis, $P_\mathrm{H0} (m_{(i)})$, is in our case simply the cumulative density of a truncated power law, where the estimated values are used for the parameters. For a data set perfectly obeying the null hypothesis the pairs $\{P_\mathrm{H0}(m_{(i)}),P_\mathrm{E} (m_{(i)})\}$ would in a plot exactly lie on the $\{0,0\}-\{1,1\}$ diagonal. 
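The construction of these pairs can be sketched as follows (the stabilising transformation and the KS acceptance band are omitted for brevity; names are ours):

```python
import numpy as np

def pp_pairs(m, alpha, m_l, m_u):
    """Empirical and hypothetical cumulative densities for each data
    point, as used to build a P-P plot against a truncated power law
    null hypothesis.  Data obeying the hypothesis scatter around the
    {0,0}-{1,1} diagonal."""
    m = np.sort(np.asarray(m, dtype=float))
    n = len(m)
    p_emp = (np.arange(1, n + 1) - 0.5) / n        # (i - 0.5) / n
    k = 1.0 - alpha
    p_hyp = (m**k - m_l**k) / (m_u**k - m_l**k)    # truncated power-law CDF
    return p_hyp, p_emp
```

Plotting `p_emp` against `p_hyp` (after the stabilising transformation) gives the SPP plots shown below.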
An additional bonus of this plot is that the Kolmogorov-Smirnov test has a direct graphical interpretation: its acceptance region is bounded by parallels to the diagonal, whose distance from the diagonal depends on the KS probability. However, a direct plot of $P_\mathrm{H0}$ and $P_\mathrm{E}$ is not the best display of the data, because the main emphasis lies in the middle region of the plot. But if the cumulative densities are transformed using the stabilising transformation of \citet{maschberger+kroupa2009}, this disadvantage can be overcome, and a transformed version of the KS test can be overplotted. This significantly reduces the likelihood of wrongly classifying data stemming from an alternative hypothesis as being from the null hypothesis. \begin{figure} \includegraphics[width=0.79\fullcolumn]{fig14_top}\\ \hspace{\fullcolumnspace}\\ \includegraphics[width=0.79\fullcolumn]{fig14_middle}\\ \hspace{\fullcolumnspace}\\ \includegraphics[width=0.79\fullcolumn]{fig14_bottom} \caption{\label{sppplots_clusters}\ SPP-plots for the massive clusters at the end of the calculation (top panel: richest subcluster of the $10^3\ \Msun$\ calculation; middle panel and bottom panel: richest and second richest subcluster of the $10^4\ \Msun$\ calculation). The plots are constructed with a truncated power law as the null hypothesis (diagonal) using the estimated exponent and upper limit; data corresponding to this hypothesis should lie on the diagonal. The parallels to the diagonal confine the 95\% acceptance region. Also shown are the alternative hypotheses of a power law with the estimated exponent and a truncation at 150\ \ensuremath{\rmn{M}_\odot}\ (solid curve), as well as a curve for the ``standard'' Salpeter parameters (dotted, $\alpha=2.35$ and $m_u=150\ \ensuremath{\rmn{M}_\odot}$). } \end{figure} The SPP plots, using a truncated power law as the null hypothesis (= diagonal in the plot), are shown for the most massive subclusters in Fig. \ref{sppplots_clusters}, using the estimated exponent and truncation mass.
For all massive clusters the data follow the diagonal and show no systematic trends. They do not exceed the 95\% acceptance region of the stabilised version of the KS test, so that indeed a truncated power law describes the data well. As the truncation mass could be underestimated, we also show the alternative hypothesis of a power law with the same, estimated exponent, but with a truncation mass of $150\ \ensuremath{\rmn{M}_\odot}$ (solid line). The data show no trend to bend in the same direction, so that an underestimate of the truncation mass is not likely; instead the mass distribution is indeed truncated only slightly above the most massive sink particle. A power law with $\alpha_{\mathrm{tail}}=2.35$ and $m_u = 150 \ \ensuremath{\rmn{M}_\odot}$ gives the dotted line in Fig. \ref{sppplots_clusters}, which has a curvature completely in disagreement with the data. The standard parameters (eq. \ref{stdIMF}) can therefore be excluded for our data. \begin{figure*} \includegraphics[width=\fullcolumn]{fig15_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig15_right} \caption{\label{sppplots_system}\ SPP plots as in Fig. \ref{sppplots_clusters} for all sinks at the end of the simulations ($10^3\ \Msun$\ left and $10^4\ \Msun$\ right), with a truncated power law as null hypothesis (diagonal) using the estimated exponent and truncation mass. Also shown are the alternative hypotheses of a power law with the estimated exponent and a truncation at 150\ \ensuremath{\rmn{M}_\odot}\ (solid curve) and with the ``standard'' Salpeter parameters (dotted, $\alpha=2.35$ and $m_u=150\ \ensuremath{\rmn{M}_\odot}$). } \end{figure*} The SPP plots for the whole systems are shown in Fig. \ref{sppplots_system}. For the $10^3\ \Msun$\ calculation the estimated parameters are $\hat{\alpha}_\mathrm{tail}=1.79\pm0.11$ and $\hat{m}_u=23.5\pm2.1\ \ensuremath{\rmn{M}_\odot}$.
\citet{bonnell-etal2003}, analysing the same simulation, already mention that the tail of the mass distribution could be fitted with either an overall exponent of $\alpha_\mathrm{tail}=2.0$, or with a smaller slope in the intermediate-mass range and a steeper slope in the high-mass range. A strong truncation of the mass function can, in a histogram, mimic a two-part power-law behaviour of the data. From Fig. \ref{sppplots_system} we find that a single power law fits the data well and signs of a two-part power law are not present. Compared to the largest central subcluster the exponent of all sinks is somewhat larger, which means that the sinks in the ``field'' and the other subclusters (containing $<$ 12 sinks) contribute mostly to the low-mass end of the tail and the massive sinks are preferentially found in the central region. Thus the steeper slope of the mass function for the whole system is a sign of mass segregation. In the $10^4\ \Msun$\ calculation we estimated for all sinks $\hat{\alpha}_\mathrm{tail}=2.18\pm0.08$ and $\hat{m}_u=33.0\pm3.7$, which is again steeper than for the subclusters. Here the data deviate from the assumed truncated power law in a sense that implies a gradual steepening of the mass function at the high mass end. We shall discuss this behaviour in Section \ref{sec-igimf} as a possible manifestation of the IGIMF effect. Finally, we draw attention to the fact that all our IMFs are too flat compared to observed distributions, i.e. high-mass sinks ($m>0.8\ \ensuremath{\rmn{M}_\odot}$) are over-abundant. Internal fragmentation within the sink particles will decrease the number of massive sinks and increase the number of lower-mass sinks. Also, feedback from a massive sink could diminish the amount of accretion of sinks in its surroundings, so reducing the relative masses of massive sinks. Both fragmentation and feedback can lead to a steeper exponent, so that agreement with the Salpeter exponent can be reached.
Those effects do not alter our conclusion that a strong truncation is needed, as internal fragmentation and feedback will push the truncation masses even lower. They also do not affect our finding that the mass function is steeper in the $10^4\ \Msun$ simulation, in which regions of the initial gas are unbound. This change of initial conditions prevents cluster merging from going to completion and prevents the over-production of massive sinks. \subsection{Time-evolution of the exponent} \begin{figure*} \includegraphics[width=\fullcolumn]{fig16_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig16_right} \caption{\label{exponent_time} Time evolution of the exponent in the tail ($m>0.8 \ensuremath{\rmn{M}_\odot}$), estimated when more than 24 sinks are in the sample ($10^3\ \Msun$\ calculation left, $10^4\ \Msun$\ calculation right). The exponent was estimated for the whole systems (subclusters and field, big filled dots) and the individual subclusters (big open dots). The small filled symbols show the exponent estimated from all sinks in subclusters together (without the field). } \end{figure*} After the detailed discussion of the mass function at the end of the simulations we now turn to the dependence of the mass function on time and the number of sinks. We first look at the time-evolution of the exponent starting with the larger clusters, which allow us to estimate the parameters, shown in Fig. \ref{exponent_time}. The estimates are made if more than 24 sinks with $m>0.8\ \ensuremath{\rmn{M}_\odot}$ are present. The big filled symbols are for the entire sample and the big open points for the individual subclusters. For the whole system the exponent is initially relatively large ($\alpha_\mathrm{tail}>2.5$), consistent with the lack of time available for sinks to grow much by accretion.
As sinks gain mass by accretion, the slope rapidly declines over about $5 \times 10^4$ years, and then stabilises at about $1.8$ in the small simulation and $2.2$ in the large simulation. The subclusters only appear when the stable part of the evolution is reached, and their $\alpha_\mathrm{tail}$ stays roughly constant with similar values in both simulations. The small symbols denote the values of $\alpha_\mathrm{tail}$ for the whole population of sinks in subclusters together. The fact that these values are smaller (i.e. a flatter IMF) than for the whole population, including the field, is a sign of mass segregation. In addition we note that in the $10^4\ \Msun$\ simulation the values of $\alpha_\mathrm{tail}$ for individual clusters lie below that for the aggregate cluster population. We however emphasise that the open symbols in Fig. \ref{exponent_time} are not independent data points and actually only correspond to one ($10^3\ \Msun$\ simulation) and up to three ($10^4\ \Msun$\ simulation) clusters. Thus whereas the fact that they lie below the solid symbols is interestingly suggestive of a flatter IMF within individual clusters, the result is compromised by small number statistics. We return to this in our discussion of possible IGIMF effects in Section \ref{sec-igimf} below. \begin{figure*} \includegraphics[width=\fullcolumn]{fig17_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig17_right} \caption{\label{averagemassplot} Mean mass of sinks in the calculations ($10^3\ \Msun$\ calculation left, $10^4\ \Msun$\ calculation right), derived for the whole system (thick line), all sinks in subclusters (dashed line) and the individual subclusters (dots). The thin lines are the expected mean from the reference IMF (eq. \ref{stdIMF}), and the $1/6$th and $5/6$th quantiles for random sampling. } \end{figure*} When the number of sinks does not suffice to estimate the exponent, the mean stellar mass, $\bar{m}$, can be used.
In Figure \ref{averagemassplot} we show $\bar{m}$ as a function of $n$. The value derived from the reference mass function is the horizontal thin line with the expected scatter for random sampling (thin lines at the 1/6th and 5/6th quantiles). For the total stellar sample (thick line) the mean mass increases (by a factor 2--3) over the duration of the simulation as a result of accretion, already deduced from Fig. \ref{clusterassembly}. It only falls out of the 1/6-5/6 region for larger $n$, interestingly at the point in the simulations at which the maximum stellar mass is around 10 \ensuremath{\rmn{M}_\odot}. It is again tempting to speculate that stellar feedback associated with the steep increase in the ultraviolet output of stars at around 10 \ensuremath{\rmn{M}_\odot}\ could remedy this situation. An increasing $\bar{m}$ is also compatible with the decreasing exponent of the tail that is found in Fig. \ref{exponent_time} for the larger-$n$ subclusters. A similar trend of an increasing heaviness of the tail is present if only all sinks in subclusters are considered, shown as the dashed line. The mean mass of the total population in clusters is generally higher than for the whole system, which is a consequence of mass segregation. The data points are instantaneous values for individual clusters and demonstrate that the mean values are not at all consistent with the expectations of random sampling from an invariant reference mass function. Even the smallest clusters can often have large mean stellar masses, as would be expected in a scenario where subclusters form around massive sinks.
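The random-sampling band can be reproduced in outline by Monte Carlo; for brevity the sketch below uses a single truncated power law in place of the full two-part reference IMF (names and the particular parameters are ours):

```python
import numpy as np

def mean_mass_quantiles(n, alpha, m_l, m_u, rng, n_trials=2000):
    """Monte-Carlo 1/6th and 5/6th quantiles of the mean mass of n
    stars drawn at random from a truncated power law (a stand-in for
    the two-part reference IMF)."""
    k = 1.0 - alpha
    u = rng.random((n_trials, n))
    m = (m_l**k + u * (m_u**k - m_l**k))**(1.0 / k)  # inverse-CDF draws
    means = m.mean(axis=1)
    return np.quantile(means, [1.0 / 6.0, 5.0 / 6.0])
```

A cluster whose mean mass falls outside this band at its value of $n$ is inconsistent with random sampling from the reference distribution.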
\subsection{Evolution of the truncation mass} \begin{figure*} \includegraphics[width=\fullcolumn]{fig18_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig18_right} \caption{\label{upperlimit} Estimated truncation mass as a function of the number of sinks (left $10^3\ \Msun$\ calculation, right $10^4\ \Msun$\ calculation), on a cluster by cluster basis (large open dots) and for the whole population (large filled dots). The small symbols are the corresponding values of the actual maximum stellar mass in each cluster (small open) or in the population as a whole (small filled). The line is an estimate of the total stellar mass as a function of $n$ (i.e. $m=\bar{m}n$, with the mean stellar mass $\bar{m}=0.54 \ensuremath{\rmn{M}_\odot}$ as implied by the reference IMF eq. \ref{stdIMF}). The number of sinks can serve as a proxy for time. } \end{figure*} The analysis of the three subclusters at the end of the simulations already gave a tentative indication that the truncation mass of the mass function depends on the number of sinks in the subcluster. Figure \ref{upperlimit} illustrates this further by showing the estimated truncation mass as a function of the cluster richness, again with the solid dots for the whole systems and open symbols for the subclusters. The actual most massive sink particle in each cluster is also plotted. As above, the deduced truncation mass is always only marginally larger than the largest datapoint, so that a much larger truncation mass is not likely. The points for the whole system are shifted to the right, as it contains many more sinks. We see clear evidence that the truncation mass is a systematic function of the cluster membership number.
\begin{figure*} \includegraphics[width=\fullcolumn]{fig19_left} \hspace{\fullcolumnspace} \includegraphics[width=\fullcolumn]{fig19_right}\\ \caption{\label{mmaxntotplot} Evolution of the mass of the most massive, second and third most massive sink particle (top to bottom) as a function of the total number of sinks. The thick solid line shows the evolution of the total system (all sinks), the thick dashed line is the track for all sinks in subclusters and the individual subclusters are represented by dots. The solid and dashed lines represent the predicted expectation value of the mass of the $n$th ranked sink along with the $1/6$th and $5/6$th quantiles for random sampling from the IMF given in eq. \ref{stdIMF}. Note that the simulation data sits progressively higher with respect to the predicted quantiles, as one proceeds from first to second to third most massive sink particle down the page. } \end{figure*} We can further test whether the mass functions within individual subclusters are truncated, by examining the distribution of the most massive, second most massive and third most massive sink particle within each subcluster. These three quantities are plotted in the three panels of Figure \ref{mmaxntotplot} as a function of cluster membership number. These data show the qualitative trend (increasing maximum stellar mass with cluster richness, together with a large scatter in maximum stellar mass at a given cluster $n$) that is seen in observational data (\citealp{weidner+kroupa2004,weidner+kroupa2006}, \citealp{maschberger+clarke2008}, \citealp{weidner-etal2009}) and which is predicted by the statistics of random drawing. The solid and dotted lines on the plot correspond to the mean and $1/6$th and $5/6$th contours in the cumulative distribution that is predicted by random sampling from the reference IMF, eq. \ref{stdIMF}.
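The random-sampling expectations for the ranked masses can likewise be approximated by direct Monte Carlo (again with a single truncated power law standing in for the reference IMF; names are ours):

```python
import numpy as np

def expected_top_masses(n, alpha, m_l, m_u, rng, n_trials=500, ranks=3):
    """Monte-Carlo expectation values of the most massive, second and
    third most massive star in a cluster of n stars drawn at random
    from a truncated power law."""
    k = 1.0 - alpha
    u = rng.random((n_trials, n))
    m = (m_l**k + u * (m_u**k - m_l**k))**(1.0 / k)  # inverse-CDF draws
    m.sort(axis=1)
    top = m[:, -ranks:][:, ::-1]   # columns: 1st, 2nd, 3rd most massive
    return top.mean(axis=0)
```

Comparing the spacing of these three expectation values with the measured ranked masses is the diagnostic used in Fig. \ref{mmaxntotplot}.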
\begin{figure} \includegraphics[width=\fullcolumn]{fig20} \caption{\label{quantile_loc} Location of the mean mass for the most massive, second and third most massive star for different parameters of the mass function (from top to bottom in each group of lines). The solid lines use $\alpha_\mathrm{tail}=2.35$ and $m_u=150\ \ensuremath{\rmn{M}_\odot}$, the dotted lines $\alpha_\mathrm{tail}=1.8$ and $m_u=150\ \ensuremath{\rmn{M}_\odot}$. For the dashed lines again $\alpha_\mathrm{tail}=1.8$ is used, but the truncation mass is a function of the number of stars, eq. \ref{f_truncation}. } \end{figure} We see that the simulation data lie progressively higher with respect to the theoretical quantiles as one proceeds from most massive to second and third most massive members: in other words, the masses of the three most massive sink particles are more bunched together than one expects from the models. We illustrate how the form of the IMF affects the {\it relative} distributions of the most massive three cluster members in Figure \ref{quantile_loc}, where we plot the expectation values of the mass of the three most massive members in the case of three `toy' IMF models. The solid and dotted lines correspond, respectively, to power law distributions with slopes of 1.8 and 2.35 which are truncated at a mass of 150 \ensuremath{\rmn{M}_\odot}. As expected, the flatter power law implies higher means of all three quantities at a given $n$, but the relative spacing between the most massive and the second and third most massive members is not very different in the two cases. In both cases, the three lines would start to converge only for much richer clusters where the expected masses of the three most massive sink particles approached the cut-off at 150 \ensuremath{\rmn{M}_\odot}.
The dashed curves, which are provided for purely illustrative purposes, correspond to an input distribution with a slope of 1.8 but where the upper limit is a function of cluster richness, \< m_u (n) &=& \frac{1}{5} n \ensuremath{\rmn{M}_\odot}. \label{f_truncation}\> In this case, truncation is important in all clusters and the effect of this is to make the three dashed lines much closer together than for the other (fixed truncation) cases. We therefore deduce, at a qualitative level, that the effect seen in Figure \ref{mmaxntotplot} (whereby the difference in mass between the three most massive sinks is unexpectedly small) may be a hint that the mass functions are truncated even in the lower-$n$ subclusters. \subsection{An IGIMF effect?}\label{sec-igimf} The IGIMF (integrated galactic IMF) is a concept introduced by \citet{kroupa+weidner2003} and further developed in \citet{weidner+kroupa2005} \citep[a similar notion is already present in ][]{vanbeveren1982,vanbeveren1983}. If the truncation mass of the IMF in star forming regions (i.e. star clusters) depends on the richness of the region (by number or mass), then the IMFs in the regions are not completely identical any more, and thus the stars of all star forming regions in a galaxy together can have a distribution function, the IGIMF, that differs from the IMF within individual clusters. The IMF (here defined as IMF within an individual star forming region) and IGIMF disagree only in the high-mass tail. For example, if there are 1000\ \ensuremath{\rmn{M}_\odot}\ in stars of many small star forming regions, with a truncation mass of, say, 10\ \ensuremath{\rmn{M}_\odot}, and 1000\ \ensuremath{\rmn{M}_\odot}\ from star forming regions with $m_u=100\ \ensuremath{\rmn{M}_\odot}$, then the combined sample of 2000 \ensuremath{\rmn{M}_\odot}\ will have a deficiency of stars between 10--100\ \ensuremath{\rmn{M}_\odot}, compared to a sample of 2000\ensuremath{\rmn{M}_\odot}\ with $m_u=100\ \ensuremath{\rmn{M}_\odot}$. 
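The toy example above can be made quantitative with a small counting exercise (a pure-Python sketch; the exponent and lower limit are illustrative choices, not values from the paper):

```python
def n_per_msun(m_lo, m_hi, alpha, m_l, m_u):
    """Expected number of stars with masses in [m_lo, m_hi] per solar
    mass of stars, for a power law m^-alpha truncated to [m_l, m_u]."""
    k = 1.0 - alpha
    lo, hi = max(m_lo, m_l), min(m_hi, m_u)
    if lo >= hi:
        return 0.0
    norm = m_u**k - m_l**k
    number_fraction = (hi**k - lo**k) / norm
    mean_mass = (k / norm) * (m_u**(k + 1) - m_l**(k + 1)) / (k + 1.0)
    return number_fraction / mean_mass

# 1000 Msun formed with m_u = 10 Msun plus 1000 Msun with m_u = 100 Msun,
# versus a single 2000 Msun population with m_u = 100 Msun:
a = n_per_msun(10.0, 100.0, 2.35, 0.08, 100.0)
b = n_per_msun(10.0, 100.0, 2.35, 0.08, 10.0)   # = 0: no stars above 10 Msun
combined = 1000.0 * a + 1000.0 * b
single = 2000.0 * a
```

The combined sample holds only half as many 10--100 \ensuremath{\rmn{M}_\odot}\ stars as a single population of the same total mass with $m_u=100\ \ensuremath{\rmn{M}_\odot}$, which is exactly the deficiency described above.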
For a more realistic case the general trend is that the IGIMF is steeper than the IMF in the high mass tail ($\alpha_\mathrm{IGIMF} > \alpha_\mathrm{IMF}$), where the exact relationship depends on the spectrum of cluster masses. This effect can influence, for example, the relation between the star formation rate and the H$\alpha$ flux of galaxies \citep{pflamm-altenburg-etal2007,pflamm-altenburg+kroupa2008,pflamm-altenburg-etal2009} and the metallicity of a galaxy \citep{koeppen-etal2007}. \begin{figure} \includegraphics[width=\fullcolumn]{fig21}\\ \caption{\label{sppplot_all_clusters}\ SPP plots using all sinks in subclusters together at the end of the $10^4\ \Msun$\ simulation. Also shown are the alternative hypotheses of a power law with the estimated exponent and no upper truncation (dashed curve) and a truncation at 150 \ensuremath{\rmn{M}_\odot}\ (solid curve), as well as a curve for the ``standard'' Salpeter parameters (dotted, $\alpha=2.35$ and $m_u=150 \ensuremath{\rmn{M}_\odot}$). The data show a curvature that implies a suppression of high masses. } \end{figure} The $10^4\ \Msun$\ calculation covers a region that is sufficiently large and massive that it produces not just a single cluster but a population of objects which may evolve into individual star clusters. To our surprise we found that the mass function of this calculation shows signs of the IGIMF effect, as already mentioned in the Sections above. In the SPP plot containing all sinks of the simulation (Fig. \ref{sppplots_system}) the high-mass end of the data bends upwards away from the diagonal, implying a steepening of the mass function. This effect is not only due to the fact that the entire population contains extra (field) sinks that are not included in the clusters and which (due to mass segregation) are of lower mass. Fig.
\ref{sppplot_all_clusters} is an SPP plot for the aggregate population of sinks in subclusters and here again the upward curvature is a hallmark of a progressive steepening of the IMF. As noted above we expect to see this effect since we have already seen evidence that the IMFs in individual clusters are truncated. Although the observational reality of such IGIMF effects is controversial (e.g. \citealp{elmegreen2006b} and \citealp{parker+goodwin2007} on the theoretical side, or \citealp{parker-etal1998,chandar-etal2005,hoversten+glazebrook2008,meurer-etal2009}, further discussed in \citealp{elmegreen2009}), it is interesting that the large simulation indeed appears to manifest this behaviour. Although we stress that the process by which the stellar mass function is built up cannot be seen {\it physically} as a random drawing experiment, the net effect of the cluster assembly process is to produce clusters that are {\it mathematically} describable as follows: random drawing from a mass function with an upper cut-off that depends on cluster richness. In this sense, the simulations show a behaviour that is qualitatively similar to the Monte-Carlo simulations of \citet{weidner+kroupa2006}, who constructed model clusters under a similar assumption. The reason, in the case of the simulations, that the upper truncation increases with cluster richness is because the first sinks to form not only tend to attain the largest masses but also have the greatest opportunities to undergo cluster mergers and hence end up in the largest clusters. \section{Discussion} It is often stated that the majority of stars form in clusters and indeed in the simulations we find that by an age of half a Myr $60$--$80\%$ of sinks are located in clusters% \footnote{ Note that in common with observers we here define clusters in terms of association on the sky and do not imply by this that such clusters are necessarily bound or long lived.
Obviously the fraction in clusters depends on the choice of $d_\mathrm{break}$. }% . We also find that by this stage the clusters are strongly mass segregated, that more massive sinks are, in a statistical sense, associated with richer clusters and that massive sinks are under-represented in field regions compared with clusters. We find that in the simulations a sink `forms' (i.e. the mass of bound gas within a radius of $\approx 200\ \ensuremath{\rmn{AU}}$\ increases as a result of infall from the environment) over a variable period which can be as long as the duration of the simulation (of order half a Myr). Given the ambient gas densities, this period is of order a free fall time and sinks can thus move significant distances (several tenths of a parsec) over this period and experience considerable evolution in clustering properties in the process. Indeed, around half of sinks of all masses do not start to form in the central regions of populous clusters, but in their outskirts or in separate small groupings (with $ n < 12$). The more massive sink particles are however those that start to form earlier and are more likely to have undergone mergers into successively larger entities. A consequence of this cluster formation pattern is that sinks that form together in a small-$n$ group tend to stay together and experience similar accretion histories as they merge into larger entities. This is particularly true of sinks that form early and thus acquire a headstart in mass acquisition, since these tend to end up in the cluster core during cluster merging. Thus the mass distribution of sinks in a given cluster often contains a group of massive sinks of similar mass (see Figure \ref{mmaxntotplot}). In terms of a mathematical description of the resulting IMF on a cluster by cluster basis, this is best represented by a power law upper IMF which is truncated at a stellar mass that depends on the cluster richness. 
As pointed out by \citet{weidner+kroupa2006}, a consequence of such behaviour is that in the integrated IMF (i.e. the IGIMF, being that composed of the summed total of a sample of clusters) the massive sinks are underrepresented, which leads to a steeper slope in the power law for a large sample. The $10^4\ \Msun$\ simulation produces several clusters, and indeed when the sinks of all of them are combined the mass function deviates from a power law. Because of the small number of clusters we do not find a general steepening of the slope but a lack of massive sinks at the high-mass end, which is the IGIMF effect for a small sample of clusters. While a lot of our analysis has been devoted to understanding the reason that the simulations produce particular observational characteristics, observers can also of course simply use these results as an empirical test of the correctness of the physical ingredients in the simulations. It is of course important for proper comparison that clusters are extracted from spatial distributions on the sky through use of a minimal spanning tree, as here, and that parameters (such as ellipticity) are also derived in the same way. Apart from the issue of IMF slope described above, we here draw attention to two properties that are particularly suitable for observational comparison. First of all, the ellipticity histogram (Figure \ref{ellipticityhistogram}) demonstrates that the clusters are somewhat flattened, typically with an axis ratio of $< 2:1$; this moderate flattening is a combined consequence of the filamentary morphology of the gas and the effects of relaxation that tend to sphericalise the inner regions. It is an easy matter to compare the ellipticity distribution of an ensemble of clusters and decide whether this is statistically consistent with the distribution shown in Figure \ref{ellipticityhistogram}.
Secondly, one may readily compare the degree of mass segregation in an observed cluster ensemble with these simulations through construction of a diagram like Figure \ref{dmaxhistogram}. This diagram involves only a scale-free quantity and makes no assumption about the radial density profile or cluster morphology: all that is required in order to construct such a diagram is that one can count sources on the sky and can identify the most massive star in the cluster. We note that upcoming X-ray surveys, which offer the potential to identify large numbers of low-mass pre-main-sequence stars in regions that are heavily embedded, offer an excellent opportunity to test the diagnostics presented in this paper. \section{Acknowledgements} We thank Simon Goodwin, the referee, for valuable comments that have clarified and improved this article. Th. M. acknowledges funding by {\sc constellation}, a European Commission FP6 Marie Curie Research Training Network. \bibliographystyle{mn2e}
\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}} \newcommand{\subsectiono}[1]{\subsection{#1}\setcounter{equation}{0}} \def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}} \def{\hbox{ 0\kern-1.5mm 0}}{{\hbox{ 0\kern-1.5mm 0}}} \def{\wh a}{{\widehat a}} \def{\wh b}{{\widehat b}} \def{\wh c}{{\widehat c}} \def{\wh d}{{\widehat d}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \begin{document} \baselineskip 24pt \begin{center} {\Large \bf Discrete Information from CHL Black Holes} \end{center} \vskip .6cm \medskip \vspace*{4.0ex} \baselineskip=18pt \centerline{\large \rm Ashoke Sen } \vspace*{4.0ex} \centerline{\large \it Harish-Chandra Research Institute} \centerline{\large \it Chhatnag Road, Jhusi, Allahabad 211019, India} \centerline{and} \centerline{\large \it LPTHE, Universite Pierre et Marie Curie, Paris 6} \centerline{\large \it 4 Place Jussieu, 75252 Paris Cedex 05, France} \vspace*{1.0ex} \centerline{E-mail: sen@mri.ernet.in, ashokesen1999@gmail.com} \vspace*{5.0ex} \centerline{\bf Abstract} \bigskip $AdS_2/CFT_1$ correspondence predicts that the logarithm of a ${\hbox{ Z\kern-1.6mm Z}}_N$ twisted index over states carrying a fixed set of charges grows as $1/N$ times the entropy of the black hole carrying the same set of charges. In this paper we verify this explicitly by calculating the microscopic ${\hbox{ Z\kern-1.6mm Z}}_N$ twisted index for a class of states in the CHL models. This demonstrates that black holes carry more information about the microstates than just the total degeneracy. \vfill \eject \baselineskip=18pt \tableofcontents \sectiono{Introduction and Summary} \label{sint} CHL models\cite{9505054,9506048} in four dimensions with ${\cal N}=4$ supersymmetry have proved to be a rich arena for studying the physics of black holes\cite{0510147,0602254,0603066,0605210,0607155,0609109, 0612011}. On the one hand they have as much supersymmetry and hence as much control as the toroidally compactified heterotic string theory.
On the other hand they have different effective actions beyond the supergravity approximation and hence make different predictions for the entropy of BPS black holes beyond the leading order result of \cite{9507090,9512031}. Thus they provide us with more data points at which we can compare the macroscopic and microscopic predictions for the black hole entropy. This comparison has been remarkably successful at the level of four derivative corrections to the effective action, reproducing complicated non-trivial functional dependence of the entropy on the charges on both sides.\footnote{We should add a note of caution that this comparison requires us to make assumption of certain non-renormalization results which have not been proven. In particular it assumes that at the level of four derivative terms the Gauss-Bonnet terms (or their supersymmetric completion given in \cite{9602060,9603191,9812082,0007195}) in the action are sufficient to calculate the correction to the black hole entropy. The analysis in this paper does not require us to make any such assumption.} Indeed, most of the results on black holes in heterotic string theory on $T^6$\cite{9607026,0412287,0505094,0506249,0508174,0605210,0705.1433, 0802.0544,0802.1556,0803.2692} have now been generalized to the case of CHL models. In this paper we shall make use of the CHL model to explore another aspect of black holes. 
Based on $AdS_2/CFT_1$ correspondence\cite{0809.3304,0903.1477} it was argued in \cite{0911.1563} that if a theory has a ${\hbox{ Z\kern-1.6mm Z}}_N$ symmetry that cannot be regarded as part of a $U(1)$ gauge transformation, and if we pick a black hole carrying $U(1)$ charges which are invariant under this ${\hbox{ Z\kern-1.6mm Z}}_N$ transformation, then the logarithm of the trace of the ${\hbox{ Z\kern-1.6mm Z}}_N$ generator over the microstates of the black hole grows as $1/N$ times the entropy of the black hole.\footnote{For a ${\hbox{ Z\kern-1.6mm Z}}_N$ group that can be regarded as a subgroup of a spontaneously broken $U(1)$ gauge group, the possibility of hair modes containing information about the ${\hbox{ Z\kern-1.6mm Z}}_N$ quantum numbers was explored in \cite{wil1,wil2}. In contrast the ${\hbox{ Z\kern-1.6mm Z}}_N$ groups we discuss here cannot be regarded as a subgroup of a spontaneously broken $U(1)$ symmetry. Also our goal here is quite different from the one of \cite{wil1,wil2}.} This can be made more concrete for BPS black holes in supersymmetric string theories by working with protected helicity trace index. In the context of ${\cal N}=4$ supersymmetric string theories in four dimensions the relevant twisted index is the 6th helicity trace index\cite{9708062,9708130,0911.1563}: \begin{equation} \label{ehe1} B^g_6(\vec q) = {1\over 6!}\, Tr_{\vec q} \left\{(-1)^{2h} (2h)^6\, g\right\}\, , \end{equation} where the trace is taken over all states carrying a fixed set of charges $\vec q$, $h$ is the third component of the angular momentum of the state in its rest frame, and $g$ is the generator of a ${\hbox{ Z\kern-1.6mm Z}}_N$ symmetry which leaves $\vec q$ invariant. This index receives contribution from $1/4$ BPS states in this theory which describe large black holes with near horizon $AdS_2\times S^2$ geometry. 
In this case the analysis of \cite{0911.1563} applies and tells us that \begin{equation} \label{emacpred} \left|B^g_6(\vec q)\right|\sim \exp[S_{BH}(\vec q)/N]\, , \end{equation} where $S_{BH}(\vec q)$ is the entropy of an extremal black hole carrying charge $\vec q$. We shall not review the arguments of \cite{0911.1563} here; but the central idea is that in computing the contribution to \refb{emacpred} from the horizon of the black hole the leading saddle point corresponding to the $AdS_2\times S^2$ near horizon geometry does not contribute. However a ${\hbox{ Z\kern-1.6mm Z}}_N$ orbifold of $AdS_2\times S^2$\cite{0810.3472,0903.1477,0904.4253}, whose asymptotic geometry coincides with that of the original near horizon geometry of the black hole, contributes and gives the answer $\exp[S_{BH}/N]$ in the semiclassical limit. While \refb{emacpred} follows almost trivially from the $AdS_2/CFT_1$ correspondence, it is quite striking from the point of view of the microscopic theory. For large black holes the right hand side of \refb{emacpred} is much smaller than the untwisted helicity trace index carrying the same charges, since the latter is given by $\exp[S_{BH}(\vec q)]$. What this tells us is that in a given charge sector the microstates of different $g$ eigenvalues come in almost equal numbers so that the sum weighted by $g$ is much smaller than the total number of states. This was explicitly verified in \cite{0911.1563} by deriving the microscopic formula for this twisted index in toroidally compactified heterotic and type II string theories and then studying their asymptotic behaviour.\footnote{Even though type II string theory on $T^6$ has ${\cal N}=8$ supersymmetry, only an ${\cal N}=4$ subgroup of this commutes with $g$. Thus effectively we can analyze it in the same way as in an ${\cal N}=4$ supersymmetric theory.} Given the unusual nature of this macroscopic prediction, it is important to test this in as many examples as possible. 
In this paper we shall verify this in the context of CHL models. The construction of the CHL models that we shall analyze proceeds as follows. We begin with type IIB string theory on ${\cal M}\times S^1\times \widetilde S^1$ where ${\cal M}$ is either K3 or $T^4$ and go to a special subspace of the moduli space of ${\cal M}$ where the theory has a geometric ${\hbox{ Z\kern-1.6mm Z}}_M\times {\hbox{ Z\kern-1.6mm Z}}_N$ symmetry that commutes with 16 supersymmetry generators of the theory. An extensive list of possible symmetries of this type can be found in \cite{9508144,9508154}. Note also that a $Z_{MN}$ group with $M$ and $N$ relatively prime can be considered as a $Z_M\times Z_N$ group for the purpose of our analysis. Let us denote by $g_M$ and $g_N$ the generators of ${\hbox{ Z\kern-1.6mm Z}}_M$ and ${\hbox{ Z\kern-1.6mm Z}}_N$ respectively. We now take an orbifold of this theory by a symmetry that involves $1/M$ unit of translation along the circle $S^1$ accompanied by the transformation $g_M$. This gives a theory with ${\cal N}=4$ supersymmetry in four dimensions and the ${\hbox{ Z\kern-1.6mm Z}}_N$ group generated by $g_N$ is a symmetry of this theory. We now consider a $g_N$ invariant charge vector $\vec q$ in this theory and define the index \begin{equation} \label{ek1} d(\vec q) = -{1\over 6!}\, Tr_{\vec q} \left(e^{2\pi i h} (2h)^6 g_N\right)\, , \end{equation} where the trace is taken over all states carrying the charge $\vec q$. Eq.\refb{emacpred} now translates to \begin{equation} \label{ek2} \left|d(\vec q)\right|\sim \exp[S_{BH}(\vec q)/N]\, , \end{equation} for large charges. Our goal will be to verify this by explicit computation of $d(\vec q)$ in the microscopic theory. Since the explicit counting of states involves technical details, we shall take this opportunity to summarize the results of our analysis. 
We use a convention in which the coordinate radius of the original circle $S^1$ before orbifolding is $2\pi M$ so that the orbifold action involves translation by $2\pi$ along $S^1$ accompanied by $g_M$. In this convention the minimum amount of momentum along $S^1$ is $1/M$. We focus on states carrying one unit of KK monopole charge associated with the circle $\widetilde S^1$, one unit of D5-brane charge wrapped on ${\cal M}\times S^1$, $Q_1$ units of D1-brane charge wrapped on $S^1$, left-moving momentum $n/M$ along $S^1$ and $J$ units of momentum along $\widetilde S^1$, and define \begin{equation} \label{ek3} Q^2 = 2n/M, \qquad P^2 = 2Q_1, \qquad Q.P=J\, . \end{equation} In this case our result of $d(\vec q)$ is given by \begin{equation} \label{ek4} d(\vec q) = {1\over M} (-1)^{Q.P+1} \int_{{\cal C}} \, d\rho d\sigma d v e^{-\pi i(M\rho Q^2 + \sigma P^2/M + 2 v Q.P)} {1\over \widetilde\Phi(\rho,\sigma,v)}\, . \end{equation} Here ${\cal C}$ is a three real dimensional subspace of the three complex dimensional space labelled by $(\rho=\rho_1+i \rho_2, \sigma=\sigma_1+i\sigma_2, v=v_1+iv_2)$ given by \begin{eqnarray}\displaystyle \label{ek5} && \rho_2=M_1, \qquad \sigma_2 = M_2, \qquad v_2 = M_3, \nonumber \\ && 0\le\rho_1 \le 1, \qquad 0\le\sigma_1\le M, \qquad 0\le v_1 \le 1\, , \end{eqnarray} $M_1$, $M_2$, $M_3$ being large but fixed positive numbers satisfying \begin{equation} \label{ek6} M_1 M_2 > M_3^2\, . 
\end{equation} The function $\widetilde\Phi(\rho,\sigma,v)$ is a modular form of a subgroup of $Sp(2,{\hbox{ Z\kern-1.6mm Z}})$, given by \begin{eqnarray}\displaystyle \label{ek7} \widetilde\Phi(\rho,\sigma,v) &=& e^{2\pi i (\widetilde\alpha \rho +\widetilde\gamma\sigma + \widetilde\beta v)} \, \prod_{b=0}^1 \prod_{r=0}^{N-1}\prod_{r'=0}^{M-1} \prod_{k\in{\hbox{z\kern-1mm z}}+{r'\over M}, l\in{\hbox{z\kern-1mm z}}, j\in 2{\hbox{z\kern-1mm z}}+b\atop k,l\ge 0, j<0 \, for\, k=l=0} \left[ 1 - e^{2\pi i r/N} \, e^{2\pi i (k\sigma +l\rho + j v)} \right]^a\nonumber \\ a &\equiv& \sum_{s=0}^{N-1} \sum_{s'=0}^{M-1} e^{-2 \pi i (s'l / M + rs/N)} c_b^{(0,s;r',s')}(4kl - j^2)\, , \end{eqnarray} where the coefficients $c_b^{(r,s;r',s')}$ are defined via the equation: \begin{eqnarray}\displaystyle \label{ek8} &&\sum_{b=0}^1 \sum_{j\in 2{\hbox{z\kern-1mm z}}+b, n\in {\hbox{z\kern-1mm z}}/MN} c_b^{(r,s;r',s')} (4n - j^2) e^{2\pi i (n\tau + j z)} \nonumber \\ &=& {1\over MN} Tr_{RR;g_M^{r'} g_N^r} \left (g_M^{s'} g_N^{s} (-1)^{J_L+J_R} e^{2\pi i (\tau L_0-\bar\tau \bar L_0)} e^{2\pi i J_L z} \right)\, . \end{eqnarray} The trace is taken over all the $g_M^{r'} g_N^{r}$ twisted RR sector states in the (4,4) superconformal CFT$_2$ with target space ${\cal M}$. $L_0$ and $\bar L_0$ are the left and right-moving Virasoro generators and $J_L/2$ and $J_R/2$ are the generators of the $U(1)_L\times U(1)_R$ subgroup of the $SU(2)_L\times SU(2)_R$ R-symmetry group of this CFT$_2$. An algorithm for explicitly computing the right hand side of \refb{ek8} has been outlined in appendix \ref{sb}. 
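The degeneracies encoded in $\widetilde\Phi$ are read off as coefficients of the product expansion \refb{ek7}. As a one-variable toy of the same mechanics (not the actual three-variable expansion of $1/\widetilde\Phi$, and assuming positive integer exponents only), the snippet below expands $\prod_n (1-q^n)^{-c(n)}$; with $c(n)=1$ for all $n$ it reproduces the integer partition numbers:

```python
def product_coeffs(exponents, nmax):
    """Series coefficients of prod_n (1 - q^n)^(-exponents[n]) up to q^nmax.
    One-variable toy of extracting degeneracies from a product form;
    assumes all exponents are positive integers."""
    coeffs = [0] * (nmax + 1)
    coeffs[0] = 1
    for n, c in exponents.items():
        for _ in range(c):
            # multiply the series by 1/(1 - q^n) = 1 + q^n + q^{2n} + ...
            for k in range(n, nmax + 1):
                coeffs[k] += coeffs[k - n]
    return coeffs

# with c(n) = 1 for all n this generates the partition numbers p(k)
print(product_coeffs({n: 1 for n in range(1, 11)}, 10))
# -> [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```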
The coefficients $\widetilde\alpha$, $\widetilde\beta$, $\widetilde\gamma$ are given by \begin{eqnarray}\displaystyle \label{ek9} \widetilde\alpha &=& {1\over 24M} Q_{0,0} -{1\over 2M} \sum_{s'=1}^{M-1} Q_{0,s'} {e^{-2\pi i s'/M} \over (1- e^{-2\pi i s'/M} )^2}\, , \nonumber \\ \widetilde\beta &=& 1\nonumber \\ \widetilde\gamma &=& {1\over 24 M}\, \chi({\cal M}) = {1\over 24 M}\, Q_{0,0}\, , \end{eqnarray} where \begin{equation} \label{edefqrs} Q_{r',s'} = M N \left( c_0^{(0,0;r',s')}(0) + 2 c_1^{(0,0;r',s')}(-1) \right)\, . \end{equation} Eqs. \refb{ek8}, \refb{ek9} define all the quantities which appear in the definition of $\widetilde\Phi$. The only ambiguity that remains in computing the right hand side of \refb{ek4} is the choice of the integration contour encoded in the choice of $(M_1, M_2, M_3)$. As is well known by now, this ambiguity is related to the phenomenon of wall crossing\cite{0702141,0702150,0705.3874,0706.2363}. Different choices of $M_1$, $M_2$ and $M_3$ give the value of $d(\vec q)$ for different values of the asymptotic moduli. However the ambiguity in the value of $d(\vec q)$ that it introduces is sufficiently small so as not to affect our analysis, and hence we shall ignore it in our subsequent discussion. Given the result \refb{ek4} for $d(\vec q)$ we can find its behaviour for large $Q^2$, $P^2$ and $Q.P$ by standard method\cite{9607026,0412287,0510147,0605210}. The result is that $d(\vec q)$ behaves as \begin{equation} \label{ek10} d(\vec q)\sim \exp\left[ \pi\sqrt{Q^2 P^2 - (Q.P)^2}/N\right]\, . \end{equation} Since in this limit a black hole of charge $\vec q$ has entropy\cite{9507090,9512031} \begin{equation} \label{ek11} S_{BH}(\vec q) \simeq \pi\sqrt{Q^2 P^2 - (Q.P)^2}\, , \end{equation} we see that the microscopic result \refb{ek10} is in perfect agreement with the macroscopic prediction \refb{ek2}. 
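Numerically, the comparison between \refb{ek10} and \refb{ek11} is simply the statement that the logarithm of the twisted index is $1/N$ times the leading entropy. A small helper encoding only these leading-order formulas (the charge values in any call are illustrative):

```python
import math

def bh_entropy(Q2, P2, QP):
    """Leading 1/4-BPS black hole entropy S_BH = pi * sqrt(Q^2 P^2 - (Q.P)^2)."""
    disc = Q2 * P2 - QP**2
    if disc <= 0:
        raise ValueError("charges do not describe a large black hole")
    return math.pi * math.sqrt(disc)

def log_twisted_index(Q2, P2, QP, N):
    """Leading growth of the Z_N twisted index: log|d(q)| ~ S_BH(q)/N."""
    return bh_entropy(Q2, P2, QP) / N
```

By construction `N * log_twisted_index(...)` equals `bh_entropy(...)`, which is the content of the macroscopic prediction in this limit.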
Finally we would like to remark that even though we have presented our analysis for the index $Tr((-1)^{2h} (2h)^6 g_N)$, we can repeat the analysis with $g_N$ replaced by $(g_N)^b$ for any integer $b$. In this case the role of $N$ is played by the order of $(g_N)^b$, and in all the formul\ae\ we simply have to replace $g_N$ by $(g_N)^b$. This in turn allows us to compute the index $Tr((-1)^{2h}(2h)^6)$ for states carrying a definite $g_N$ eigenvalue $e^{2\pi i a/N}$ using the combination \begin{equation} \label{ecomb} {1\over N} \, \sum_{b=0}^{N-1} e^{-2\pi i ab/N} \, Tr((-1)^{2h} (2h)^6 (g_N)^b)\, . \end{equation} Thus our result can also be interpreted as the agreement between the macroscopic and the microscopic results for the helicity trace index over states carrying a definite $g_N$ charge. \sectiono{The counting} \label{scount} The counting of states of the D1-D5-KK monopole system proceeds as in \cite{0605210,0607155,0609109,0708.1270}. We take the circle $S^1$ to be large compared to the size of ${\cal M}$ and regard the world-volume theory as a 1+1 dimensional field theory living on $S^1$. Denoting by $d(Q_1,n,J)$ the twisted index \refb{ek1} of states carrying charge labeled by $(Q_1,n,J)$ in the convention of \S\ref{sint}, we define \begin{equation} \label{edefz} Z(\rho,\sigma, v) = \sum_{Q_1,n,J} e^{2\pi i (Q_1\sigma/M + n\rho + vJ)} (-1)^J d(Q_1,n,J)\, . \end{equation} Proving \refb{ek4} is now equivalent to proving that $Z=-1/\widetilde\Phi$. The twisted partition function $Z$ is given by the product of three twisted partition functions, -- that of the excitations living on the KK monopole, that of the dynamics describing the overall motion of the D1-D5 system in the background of the KK monopole and that of the motion of the D1-brane along the D5-brane. 
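The projection \refb{ecomb} onto a fixed $g_N$ eigenvalue is an inverse discrete Fourier transform over the twisted traces. A toy check with made-up ${\hbox{ Z\kern-1.6mm Z}}_N$ charges, where each state simply contributes $e^{2\pi i q b/N}$ to the trace of $(g_N)^b$ (the helicity weighting is ignored in this sketch):

```python
import cmath

def project_counts(charges, N):
    """Recover the number of states in each Z_N charge sector from the
    twisted traces Tr(g^b), illustrating the character projection of
    eq. (ecomb) as an inverse discrete Fourier transform."""
    # twisted traces Tr(g^b) for b = 0 .. N-1
    traces = [sum(cmath.exp(2j * cmath.pi * q * b / N) for q in charges)
              for b in range(N)]
    # projection: d_a = (1/N) sum_b e^{-2 pi i a b / N} Tr(g^b)
    counts = []
    for a in range(N):
        d_a = sum(cmath.exp(-2j * cmath.pi * a * b / N) * traces[b]
                  for b in range(N)) / N
        counts.append(round(d_a.real))
    return counts

print(project_counts([0, 1, 1, 2], 3))  # -> [1, 2, 1]
```

The sector counts sum to the untwisted trace, illustrating why the twisted index is parametrically smaller than the total degeneracy when the charges are nearly equidistributed.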
For each system we must keep the right-movers in their ground state and excite the left-movers in order to preserve supersymmetry.\footnote{Here and elsewhere left-moving modes will refer to modes carrying momentum along the negative $S^1$ direction. Thus left-moving momentum $n$ will indicate momentum $-n$ along $S^1$.} Since the fermion zero modes associated with the broken supersymmetries are automatically removed while computing the helicity trace (which is $B_6$ in this case since the system breaks 12 of the 16 supersymmetries), we shall ignore their contribution during the rest of our analysis. We begin by analyzing the partition function of the KK monopole. The massless bosonic modes on the world-volume of the KK monopole arise from the motion along the three transverse directions and the components of the $p$-form fields along the product of the harmonic $(p-2)$-forms of ${\cal M}$ and the harmonic 2-form of the Taub-NUT space. The massless fermions are the goldstinos associated with the supersymmetries broken by the Kaluza-Klein monopole. In general one can show that the left-moving bosons and fermions are in one to one correspondence with the even and odd degree harmonic forms on ${\cal M}$\cite{0708.1270}. Furthermore their $(g_M,g_N)$ quantum numbers are also given by the $(g_M,g_N)$ eigenvalues of the harmonic forms on ${\cal M}$. 
Since the harmonic $(p,q)$ forms on ${\cal M}$ are in one to one correspondence with the RR sector ground states in the supersymmetric $\sigma$-model with target space ${\cal M}$ carrying $L_0=\bar L_0=0$, $J_L=(p-1)$, $J_R=(q-1)$, it follows from \refb{ek8} that the number of left-moving bosons minus the number of left-moving fermions on the KK monopole world-volume, carrying $g_N$ quantum number $e^{2\pi i r/N}$ and $g_M$ quantum number $e^{2\pi i k'/M}$, is given by\cite{0609109,0708.1270} \begin{eqnarray}\displaystyle \label{ea1} && {1\over MN}\, \sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i r s/N} e^{-2\pi i k's'/M} Tr_{RR;I} \left[ (-1)^{J_L+J_R} \, g_M^{s'} g_N^s\, \delta_{L_0,0}\, \delta_{\bar L_0,0}\right]\nonumber \\ &=& \sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i r s/N} e^{-2\pi i k's'/M} \left(c_0^{(0,s;0,s')}(0) + 2 c_1^{(0,s; 0,s')}(-1) \right)\, . \end{eqnarray} In arriving at \refb{ea1} we have used the fact that $c_b^{(r,s;r',s')}(u)=0$ for $u<-1$. Now consider a mode carrying $g_M$ eigenvalue $e^{2\pi i k'/M}$ and left-moving momentum $l/M$ along $S^1$. The requirement of invariance under the simultaneous action of $g_M$ and $2\pi$ translation along $S^1$ gives us the requirement that $l=k'$ mod $M$. Furthermore the contribution to the twisted index from these states will be weighted by the $g_N$ eigenvalue $e^{2\pi i r/N}$. Thus the net contribution to the partition function from these modes is given by \begin{equation} \label{ea2} Z_{KK} = e^{-2\pi i\widetilde\alpha\rho}\, \prod_{r=0}^{N-1} \prod_{l=1}^\infty \left(1 - e^{2\pi i r/N} e^{2\pi i l\rho}\right)^{-\sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i r s/N} e^{-2\pi i ls'/M} \left(c_0^{(0,s;0,s')}(0) + 2 c_1^{(0,s; 0,s')}(-1) \right)}\, . \end{equation} Here the term $e^{-2\pi i\widetilde\alpha\rho}$ reflects the effect of the momentum carried by the ground state of the Kaluza-Klein monopole. 
The analysis of \cite{0609109,0708.1270} gives \begin{eqnarray}\displaystyle \label{ea3} \widetilde\alpha &=& {1\over 24M} Q_{0,0} -{1\over 2M} \sum_{s'=1}^{M-1} Q_{0,s'} {e^{-2\pi i s'/M}\over (1- e^{-2\pi i s'/M} )^2}\, , \nonumber \\ && Q_{r',s'} \equiv M N \left( c_0^{(0,0;r',s')}(0) + 2 c_1^{(0,0;r',s')}(-1) \right)\, . \end{eqnarray} Next we turn to the dynamics of the overall motion of the D1-D5 system in the KK monopole background. The dynamics in the transverse direction is independent of whether we are working with $K3$ or $T^4$. Furthermore these modes do not carry any $g_N$ or $g_M$ quantum numbers; thus the contribution from these modes to the partition function is universal. The result is\cite{0609109,0708.1270} \begin{eqnarray}\displaystyle \label{ere1} && - e^{-2\pi i v} \left( 1 - e^{-2\pi i v}\right)^{-2}\nonumber \\ &&\prod_{l\in M{\hbox{z\kern-1mm z}}\atop l>0} \{ (1-e^{2\pi i l\rho})^4 (1-e^{2\pi i l\rho+ 2\pi i v})^{-2} (1-e^{2\pi i l\rho- 2\pi i v})^{-2}\}\, . \end{eqnarray} The first line represents the contribution from the zero mode dynamics that binds the D1-D5 system to the KK monopole\cite{pope,9912082,0605210}, and the second line represents the contribution from the oscillators. The last two terms in the second line of \refb{ere1} represent the contribution from the four left-moving bosonic modes representing transverse oscillation of the D1-D5 system whereas the first factor in the same line represents the contribution from the left-moving fermionic modes.\footnote{These left-moving bosonic and fermionic modes, as well as those which contribute to \refb{etw1}, are paired by the unbroken supersymmetry transformations on the D1-D5 world volume in flat space-time which commute with ${\hbox{ Z\kern-1.6mm Z}}_M\times {\hbox{ Z\kern-1.6mm Z}}_N$, are charged under the $SU(2)_L$ subgroup of the transverse rotation group, and act on the left-movers.
Eventually when we place this system in the background of KK monopole this supersymmetry is broken since in the full system there is no supersymmetry acting on the left-movers. However this is still useful for determining the quantum numbers of the fermions from the known quantum numbers of the bosonic modes\cite{0609109,0708.1270}. \label{f1}} In arriving at \refb{ere1} one needs to use the fact that in the presence of the KK monopole background, the momentum along $\widetilde S^1$ appears as the angular momentum $2J_L$ for the D1-D5 system where $J_L$ is the generator of the $U(1)_L\subset SU(2)_L$ subgroup of the rotation group in transverse space\cite{0503217}. The $v$ dependence of \refb{ere1} then follows from the fact that the bosonic modes, transforming as a vector of the transverse rotation group $SU(2)_L\times SU(2)_R$, carry $J_L=\pm 1$ while the fermionic modes are neutral under $U(1)_L$ as a consequence of footnote \ref{f1}. For ${\cal M}=T^4$ we also have four additional bosonic modes arising from the Wilson lines on the D5-brane along $T^4$ and four additional fermionic modes. In order to find the contribution to the partition function from these modes we need to know the action of $g_M$ and $g_N$ on these modes. If $z_1$ and $z_2$ denote the complex coordinates on $T^4$ then in order to preserve supersymmetry both $g_M$ and $g_N$ must act as equal and opposite rotation of $z_1$ and $z_2$, possibly accompanied by shifts. We shall assume for definiteness that $g_M$ and $g_N$ induce respectively $2\pi/M$ and $2\pi/N$ rotations on these coordinates: \begin{eqnarray}\displaystyle \label{e2pi} g_M &:& (dz_1, dz_2)\to \left( e^{2\pi i/M}dz_1, e^{-2\pi i/M}dz_2\right) \, , \nonumber \\ g_N &:& (dz_1, dz_2)\to \left( e^{2\pi i/N}dz_1, e^{-2\pi i/N}dz_2\right)\, . \end{eqnarray} \refb{e2pi} represents the action of $g_M$ and $g_N$ on the Wilson line variables. 
Furthermore the Wilson lines are neutral under the rotation group in the transverse space and hence carry $J_L=0$. The result of footnote \ref{f1} now tells us that the additional fermionic modes on the D1-D5 system, which arise for ${\cal M}=T^4$, transform in the same way under $g_M$ and $g_N$, and carry $J_L=\pm 1$ uncorrelated with their $(g_M,g_N)$ quantum numbers\cite{0609109,0708.1270}. Thus the contribution from these additional modes to the twisted partition function is given by \begin{eqnarray}\displaystyle \label{etw1} && \prod_{l\in M{\hbox{z\kern-1mm z}}+1\atop l>0} \left(1 - e^{2\pi i/N} e^{2\pi i l\rho} \right)^{-2}\prod_{l\in M{\hbox{z\kern-1mm z}}-1\atop l>0} \left(1 - e^{-2\pi i/N} e^{2\pi i l\rho} \right)^{-2}\prod_{l\in M{\hbox{z\kern-1mm z}}+1\atop l>0} \left(1 - e^{2\pi i/N} e^{2\pi i l\rho + 2\pi i v} \right)\nonumber \\ &&\prod_{l\in M{\hbox{z\kern-1mm z}}+1\atop l>0} \left(1 - e^{2\pi i/N} e^{2\pi i l\rho - 2\pi i v} \right)\prod_{l\in M{\hbox{z\kern-1mm z}}-1\atop l>0} \left(1 - e^{-2\pi i/N} e^{2\pi i l\rho+2\pi i v} \right) \prod_{l\in M{\hbox{z\kern-1mm z}}-1\atop l>0} \left(1 - e^{-2\pi i/N} e^{2\pi i l\rho-2\pi i v} \right)\, . \nonumber \\ \end{eqnarray} The first two factors come from the bosonic modes and the last four factors arise from the fermionic modes whose contributions have not already been included in \refb{ere1}. The only new ingredient in this formula compared to that in \cite{0609109,0708.1270} is the insertion of the factors of $e^{\pm 2\pi i/N}$, -- these arise from the insertion of $g_N$ into the trace. The product of \refb{ere1} and \refb{etw1} can be written in a compact form using the coefficients $c_1^{(0,s;0,s')}(-1)$.
It follows from its definition, and the identification of the RR sector ground states in the SCFT with target space ${\cal M}$ carrying $(J_L,J_R)=(p-1,q-1)$ with the harmonic $(p,q)$ forms on ${\cal M}$, that $MNc_1^{(0,s;0,s')}(-1)$ represents trace over the $(0,q)$ forms on ${\cal M}$ weighted by $(-1)^{q} g_N^s g_M^{s'}$\cite{0609109,0708.1270}. On $K3$ the only $(0,q)$ forms are $(0,0)$ forms and $(0,2)$ forms both of which are neutral under $g_N$ and $g_M$, while on $T^4$ we also have a pair of $(0,1)$ forms $dz_1$ and $dz_2$ which we have chosen to carry $(g_N, g_M)$ eigenvalues $(e^{\pm 2\pi i/N}, e^{\pm 2\pi i/M})$. This gives \begin{eqnarray}\displaystyle \label{esh5} c_1^{(0,s;0,s')}(-1) &=& {2\over MN} \quad \hbox{for ${\cal M}=K3$} \, , \nonumber \\ &=& {1\over MN} \left(2 - e^{2\pi i s/N} e^{2\pi i s'/M} - e^{-2\pi i s/N} e^{-2\pi i s'/M} \right) \quad \hbox{for ${\cal M}=T^4$}\, . \end{eqnarray} Using this we can express the total contribution to the partition function from the overall motion of the D1-D5 system in the Taub-NUT space, given by \refb{ere1} for ${\cal M}=K3$ and the product of \refb{ere1} and \refb{etw1} for ${\cal M}=T^4$, as \begin{eqnarray}\displaystyle \label{esg1} Z_{CM} &=& - \, e^{-2\pi i v} \, \prod_{l=1}^\infty \prod_{r=0}^{N-1} (1-e^{2\pi i r/N}\, e^{2\pi i l\rho})^{2\sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i ls'/M} e^{-2\pi i rs/N} c_1^{(0,s;0,s')}(-1)} \nonumber \\ &&\prod_{l=1}^\infty \prod_{r=0}^{N-1} (1-e^{2\pi i r/N}\, e^{2\pi i l\rho+2\pi i v})^{-\sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i ls'/M} e^{-2\pi i rs/N} c_1^{(0,s;0,s')}(-1)} \nonumber \\ &&\prod_{l=0}^\infty \prod_{r=0}^{N-1} (1-e^{2\pi i r/N}\, e^{2\pi i l\rho-2\pi i v})^{-\sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i ls'/M} e^{-2\pi i rs/N} c_1^{(0,s;0,s')}(-1)}\, . \end{eqnarray} Note that the $(1-e^{-2\pi i v})^{-2}$ factor has been absorbed into the $l=0$ term in the last term. 
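Eq.\refb{esh5} is simple enough to transcribe directly; the helper below (an illustrative transcription, with the $T^4$ case rewritten as $2-2\cos\theta$) makes it easy to check, for instance, that the $K3$ value is independent of $(s,s')$ while the $T^4$ value vanishes at $s=s'=0$:

```python
import math

def c1(s, sp, M, N, target):
    """c_1^{(0,s;0,s')}(-1) as given in eq. (esh5): trace over the (0,q)
    forms on the target ('K3' or 'T4') weighted by (-1)^q g_N^s g_M^{s'}."""
    if target == 'K3':
        return 2.0 / (M * N)
    # T^4: 2 - e^{i theta} - e^{-i theta} = 2 - 2 cos(theta)
    theta = 2.0 * math.pi * (s / N + sp / M)
    return (2.0 - 2.0 * math.cos(theta)) / (M * N)
```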
Finally let us turn to the contribution to the partition function from the motion of the D1-branes along the D5-branes. First we consider a single D1-brane wrapped $w$ times along $S^1$, and count the number of states $n(w,j,l;r,k')$ of the system carrying left-moving momentum $l/M$ along $S^1$, $g_M$ eigenvalue $e^{2\pi i k'/M}$, $g_N$ eigenvalue $e^{2\pi i r/N}$ and $\widetilde S^1$ momentum $j$. Since the boundary condition on various fields are twisted by $g_M$ under $2\pi$ translation along $S^1$, the CFT on a D1-brane wrapped $w$ times along $S^1$ satisfies boundary condition twisted by $(g_M)^w$. Furthermore since the effective length of the D1-brane is now $2\pi w$, a momentum $l/M$ along $S^1$ will appear as $lw/M$ units of momentum in the CFT living on the D1-brane. It now follows from \refb{ek8} that\cite{0609109,0708.1270} \begin{eqnarray}\displaystyle \label{enu1} n(w,j,l;r,k') &=& \sum_{s=0}^{N-1}\sum_{s'=0}^{M-1} e^{-2\pi i rs/N} e^{-2\pi i k's'/M} c_b^{(0,s;r',s')}( 4lw/M - j^2)\, , \nonumber \\ && b=\hbox{$j$ mod 2}, \quad r'=\hbox{$w$ mod $M$}\, . \end{eqnarray} The requirement that we only keep the modes which are invariant under the transformation $g_M$ accompanied by $2\pi$ translation along $S^1$ forces the constraint $k'=l$ mod $M$. It is now straightforward to evaluate the contribution to the $g_N$ twisted partition function from multiple states of this type, carrying different $(w,l,j)$\cite{9608096}: \begin{equation} \label{erel} Z_{D1D5} =e^{-2\pi i \widetilde\gamma\sigma}\, \prod_{r=0}^{N-1} \prod_{b=0}^1 \prod_{w\in {\hbox{z\kern-1mm z}},l\in{\hbox{z\kern-1mm z}},j\in 2{\hbox{z\kern-1mm z}}+b \atop w>0, l\ge 0} \left( 1 - e^{2\pi i r/N} e^{2\pi i (\sigma w /M + \rho l + vj)}\right)^{-n(w,j,l;r,l)}\, . \end{equation} where \begin{equation} \label{egamdef} \widetilde\gamma = \cases{ \hbox{${1/ M}$ for ${\cal M}=K3$}\cr \hbox{0 for ${\cal M}=T^4$}}\, . 
\end{equation} The $e^{-2\pi i \widetilde\gamma\sigma}$ in \refb{erel} accounts for the fact that the actual number of D1-branes required to produce a total D1-brane charge $Q_1$ in the background of a D5-brane is given by $Q_1+1$ for ${\cal M}=K3$ and $Q_1$ for ${\cal M}=T^4$. Multiplying \refb{ea2}, \refb{esg1} and \refb{erel} we get the total partition function of the system: \begin{equation} \label{etot} Z(\rho,\sigma,v) = Z_{KK} \, Z_{CM} \, Z_{D1D5}= -1 / \widetilde\Phi(\rho,\sigma, v)\, , \end{equation} where \begin{eqnarray}\displaystyle \label{ephi} \widetilde\Phi(\rho,\sigma, v) &=& e^{2\pi i (\widetilde\alpha \rho +\widetilde\gamma\sigma + \widetilde\beta v)} \, \prod_{b=0}^1 \prod_{r=0}^{N-1}\prod_{r'=0}^{M-1} \prod_{k\in{\hbox{z\kern-1mm z}}+{r'\over M}, l\in{\hbox{z\kern-1mm z}}, j\in 2{\hbox{z\kern-1mm z}}+b\atop k,l\ge 0, j<0 \, for\, k=l=0} \left[ 1 - e^{2\pi i r/N} \, e^{2\pi i (k\sigma +l\rho + j v)} \right]^a\nonumber \\ a &\equiv& \sum_{s=0}^{N-1} \sum_{s'=0}^{M-1} e^{-2 \pi i (s'l / M + rs/N)} c_b^{(0,s;r',s')}(4kl - j^2)\, , \end{eqnarray} with $\widetilde\alpha$, $\widetilde\beta$, $\widetilde\gamma$ defined in \refb{ek9}. Note that the $k=0$ term in this product gives the result for $Z_{KK} Z_{CM}$. \sectiono{Asymptotic Growth} \label{sasymp} We now study the growth of the index for large $Q^2$, $P^2$ and $Q.P$. This can be done by standard procedure described in \cite{9607026,0412287,0510147,0605210}. We deform the three dimensional contour of integration over $(\rho,\sigma,v)$ to small imaginary values of $(\rho,\sigma,v)$. During this deformation we pick up contribution from the residues at various poles, given by the zeroes of $\widetilde\Phi$, which give the leading contribution to the index, -- the contribution from the final contour can be shown to be subleading compared to the contribution from the residues at the poles\cite{0708.1270}. Thus we need to first determine the location of the zeroes of $\widetilde\Phi$. 
This has been done in appendix \ref{szero} where it is shown that $\widetilde\Phi$ has double zeroes on the subspaces: \begin{equation} \label{ediv1a} n_2(\rho\sigma-v^2) -m_1\rho + n_1 \sigma + m_2 + jv = 0\, , \end{equation} for values of $(m_1,n_1,m_2,n_2,j)$ satisfying \begin{eqnarray}\displaystyle \label{esh3a} && m_1 n_1 +m_2 n_2 +{j^2\over 4} = {1\over 4}\, , \nonumber \\ && m_1\in M{\hbox{ Z\kern-1.6mm Z}}, \quad m_2\in {\hbox{ Z\kern-1.6mm Z}}, \quad n_2 \in N{\hbox{ Z\kern-1.6mm Z}}, \quad n_1\in{\hbox{ Z\kern-1.6mm Z}}, \quad j\in 2{\hbox{ Z\kern-1.6mm Z}}+1\, . \end{eqnarray} Now the analysis of \cite{9607026,0412287,0510147,0605210} tells us that for large $Q^2$, $P^2$, $Q.P$ the contribution from the residue at the pole \refb{ediv1a} of $1/\widetilde\Phi$ grows as \begin{equation} \label{egrow} \exp\left(\pi\sqrt{Q^2P^2-(Q.P)^2}/|n_2|\right) \qquad \hbox{for $ |n_2|>0$}\, . \end{equation} On the other hand the poles at $n_2=0$ are responsible for wall crossing and their contribution grows much slower than \refb{egrow}\cite{0702141,0702150,0705.3874,0706.2363}. Thus the leading contribution comes from the pole at \refb{ediv1a} for the minimum non-zero value of $|n_2|$. Eq.\refb{esh3a} shows that this is $N$. Thus the index grows as \begin{equation} \label{eindgr} \exp\left(\pi\sqrt{Q^2P^2-(Q.P)^2}/N\right)\, . \end{equation} Since for this charge the black hole entropy $S_{BH}$ is given by $\pi\sqrt{Q^2P^2-(Q.P)^2}$\cite{9507090,9512031}, \refb{eindgr} is in precise agreement with the macroscopic prediction \refb{emacpred}. \sectiono{Conclusion} \label{scon} It is widely believed that since string theory provides us with a consistent quantum theory of gravity, black holes in string theory do not lead to a loss of information. If so, the black hole must represent an ensemble of microstates and the black hole entropy must have an interpretation as the logarithm of the degeneracy of microstates. 
Furthermore, quantum string theory around a black hole background must contain all possible information about the microstates. It is therefore important to learn how we can extract information about the black hole microstates by studying quantum string theory around the black hole background. The results of \cite{0911.1563} and this paper provide a small step in this direction. In these papers we discuss how to extract information about one specific feature of the black hole microstates, namely the distribution of the ${\hbox{ Z\kern-1.6mm Z}}_N$ charges among the microstates. Quantum string theory around the near-horizon background leads to a specific algorithm for extracting this information. Our analysis shows that in the limit of large charges the results of the macroscopic analysis are in exact agreement with the microscopic results in a wide class of models where the latter are computable. While, using the rules of the $AdS_2/CFT_1$ correspondence, we can in principle compute the ensemble average of more general operators on the black hole side, in the absence of non-renormalization results it is not clear how we might compare this with the microscopic results. \medskip \noindent {\bf Acknowledgment:} I wish to thank Nabamita Banerjee, Atish Dabholkar, Joao Gomes and Sameer Murthy for useful discussions. This work was supported in part by the JC Bose fellowship of the Department of Science and Technology, India and by the Blaise Pascal Chair, France. \medskip \noindent {\bf Note added:} I have been informed by Suresh Govindarajan that the modular forms of subgroups of $Sp(2,{\hbox{ Z\kern-1.6mm Z}})$ which appear here have also been constructed independently in \cite{suresh1,suresh2}.
\section{Introduction} Because of the enhanced dynamo action in stars with thick, turbulent outer-convection zones, rapidly rotating cool stars, both evolved and young, exhibit significantly stronger magnetic activity than is seen in the Sun. As a result of this activity, the starspots are also much larger than the spots observed on the Sun. The largest starspot recovered with Doppler imaging is on the active RS~CVn-type binary HD~12545, which, in January 1998, had a spot that extended approximately 12$\times$20 solar radii (Strassmeier \cite{str99}). The lifetime of large starspots/spot groups can also be much longer than that of sunspots: years instead of weeks (e.g., Rice \& Strassmeier \cite{rice_str}; Hussain \cite{hussain}). The most typical dynamo signature is the presence of an activity cycle. Cyclic changes in the level of magnetic activity are well documented for the Sun, as well as for many solar-type stars (see, e.g., Ol{\'a}h et al.~\cite{cycles}). It is also interesting that, according to theoretical calculations, cyclic variations in the stellar magnetic activity can only be produced when differential rotation is present (R{\"u}diger et al.~\cite{ruediger}). In this work \object{$\zeta$~Andromedae}, a long-period (17.8 day), single-lined spectroscopic RS~CVn binary (Campbell~\cite{campbell}; Cannon~\cite{cannon}), is investigated in detail. In this system the primary is of spectral type K1~III, and the unseen companion possibly of type F (Strassmeier et al.\,\cite{strass93}). The primary fills approximately 80\% of its Roche lobe, so it has a non-spherical shape. The estimated ellipticity gives a $\sim$4\% difference between the long and short axes of the ellipsoid (K{\H o}v{\'a}ri et al.\,\cite{kovari07}, from here on Paper~1). The mean angular diameter of $\zeta$\,And has been derived as $2.72 \pm 0.036$\,mas using spectro-photometry (Cohen et al.
\cite{cohen}). An earlier detailed Doppler imaging study (Paper~1) revealed that the spots on the surface of $\zeta$~And have a temperature contrast of approximately 1000~K and that they occur over a wide latitude range from the equator to an asymmetric polar cap. The strength of the features changed with time, with the polar cap dominating the beginning of the two-month observing period in 1996/97, while the activity during the second half was dominated by medium-to-high latitude features. Also, the investigation revealed a weak solar-type differential rotation. Here, results from Doppler imaging, optical interferometry, and long-term photometry of $\zeta$~And are presented. We discuss the reduction of the interferometric data and the obtained fundamental stellar parameters. The high-resolution spectra are used with Doppler-imaging techniques to obtain a surface temperature map. This surface map is compared with the earlier published temperature maps and also with the chromospheric activity based on observations of the H$\alpha$ line. Finally, the long-term broad-band photometry is used to study the temporal evolution of the spottedness, hence the possible spot cycles. \section{Observations} Simultaneous observations were carried out at the European Southern Observatory with UVES (UV-Visual Echelle Spectrograph; Dekker et al. \cite{UVES}) mounted on the 8-m Kueyen telescope of the VLT, and the AMBER (Astronomical Multi BEam combineR; Petrov et al. \cite{AMBER}) instrument of the VLT Interferometer (VLTI). Additionally, broad- and intermediate-band photometry in the $V$, $I_C$, and $y$ bands was obtained with the automatic photoelectric telescopes Wolfgang and Amadeus in Arizona, USA (Strassmeier et al. \cite{kgs_APT}; Granzer et al. \cite{granz_APT}). For all the photometric observations, HD~5516 was used as the comparison star.
All the observations were phased using the same ephemeris as in Paper~1, \begin{displaymath} {\rm HJD} = 2~449~997.223\pm0.017 + 17.769426\pm0.000040\times E, \end{displaymath} referring to the time of the conjunction. \subsection{Optical interferometry} \onltab{1}{ \begin{table*} \caption{Log of the VLTI/AMBER observations.} \label{tab:obsvlti} \begin{tabular}{lllllllrrrr} \hline\hline Date & Time & Target & Purpose & FINITO & AM & Seeing & $\tau_0$\\ Sep. & UTC & & & & & [$\arcsec$]&[msec]\\\hline 15 & 4:24-4:30 & $\theta$\,Psc & Calibrator & on & 1.17 & 0.66 & 3.1 \\ 15 & 4:47-4:51 & $\theta$\,Psc & Calibrator & off & 1.17 & 0.66 & 3.0 \\ 15 & 5:04-5:07 & $\mu$\,Peg & Check star & on & 1.63 & 0.65 & 3.1 \\ 15 & 5:12-5:16 & $\mu$\,Peg & Check star & off & 1.65 & 0.62 & 3.2 \\ 15 & 5:24-5:27 & 41\,Psc & Calibrator & on & 1.19 & 0.67 & 2.9 \\ 15 & 5:32-5:35 & 41\,Psc & Calibrator & off & 1.19 & 0.69 & 2.8 \\ 15 & 5:45-5:49 & $\mu$\,Peg & Check star & on & 1.79 & 0.62 & 3.1 \\ 15 & 5:55-5:58 & $\mu$\,Peg & Check star & off & 1.85 & 0.54 & 3.6 \\ 15 & 6:03-6:06 & $\mu$\,Peg & Check star & on & 1.90 & 0.69 & 2.8 \\ 15 & 6:14-6:17 & $\theta$\,Psc & Calibrator & on & 1.30 & 0.71 & 2.7 \\ 15 & 6:22-6:25 & $\theta$\,Psc & Calibrator & off & 1.33 & 0.62 & 3.1 \\ 15 & 6:46-6:49 & $\zeta$\,And & Science target & on & 1.58 & 0.71 & 2.9 \\ 15 & 6:53-6:57 & $\zeta$\,And & Science target & off & 1.60 & 0.79 & 2.6 \\ 15 & 7:02-7:06 & 41\,Psc & Calibrator & on & 1.32 & 0.71 & 2.9 \\ 15 & 7:13-7:22 & $\zeta$\,And & Science target & on & 1.66 & 0.65 & 3.2 \\ 15 & 7:29-7:33 & HD\,7087 & Calibrator & on & 1.53 & 0.80 & 3.2 \\ 15 & 7:38-7:41 & HD\,7087 & Calibrator & off & 1.55 & 0.82 & 3.2 \\ 15 & 7:53-7:56 & $\zeta$\,And & Science target & on & 1.84 & 0.97 & 2.2 \\ 15 & 8:05-8:08 & HD\,15694 & Calibrator & on & 1.13 & 0.73 & 2.9 \\ \\ 17 & 5:11-5:14 & $\theta$\,Psc & Calibrator & on & 1.19 & 0.91 & 2.4 \\ 17 & 5:42-5:46 & $\mu$\,Peg & Check star & on & 1.82 & 1.02 & 2.1 \\ 17 & 
6:08-6:15 & 41\,Psc & Calibrator & on & 1.22 & 1.09 & 2.0 \\ 17 & 6:36-6:40 & $\mu$\,Peg & Check star & on & 2.30 & 0.97 & 2.5 \\ 17 & 6:56-6:57 & $\zeta$\,And & Science target & on & 1.62 & 0.85 & 2.6 \\ 17 & 7:09-7:13 & HD\,7087 & Calibrator & on & 1.50 & 0.86 & 2.5 \\ 17 & 7:25-7:27 & $\zeta$\,And & Science target & on & 1.74 & 0.86 & 2.5 \\ 17 & 7:41-7:44 & HD\,15694 & Calibrator & on & 1.12 & 0.91 & 2.4 \\ 17 & 7:57-8:00 & $\zeta$\,And & Science target & on & 1.92 & 0.73 & 2.9 \\ \\ 19 & 3:47-3:53 & $\theta$\,Psc & Calibrator & on & 1.18 & 0.74 & 1.9 \\ 19 & 5:13-5:16 & $\mu$\,Peg & Check star & on & 1.72 & 0.78 & 1.8 \\ 19 & 5:28-5:31 & $\theta$\,Psc & Calibrator & on & 1.23 & 0.75 & 1.8 \\ 19 & 5:46-5:49 & $\mu$\,Peg & Check star & on & 1.90 & 1.06 & 1.3 \\ 19 & 6:01-6:04 & 41\,Psc & Calibrator & on & 1.22 & 0.97 & 1.4 \\ 19 & 6:19-6:21 & $\mu$\,Peg & Check star & on & 2.18 & 0.89 & 1.3 \\ 19 & 6:54-6:57 & $\zeta$\,And & Science target & on & 1.64 & 0.81 & 1.7 \\ 19 & 7:19-7:22 & HD\,7087 & Calibrator & on & 1.54 & 1.18 & 1.2 \\ 19 & 7:52-7:55 & $\zeta$\,And & Science target & on & 1.95 & 1.55 & 0.9 \\ 19 & 8:18-8:22 & HD\,15694 & Calibrator & on & 1.16 & 1.64 & 0.9 \\ \hline \end{tabular} \end{table*}} The AMBER observations were obtained during the second part of the nights starting on September 14, 16, and 18, 2008, corresponding to orbital phases $\phi=0.05$ (secondary in front), $\phi=0.15$ (intermediate case), and $\phi=0.27$ (secondary to the side), respectively. The details of the observations are listed in Table~\ref{tab:obsvlti}, only available online. During the night starting September 18 the coherence time was very short, so the data quality is lower than during the other half nights. For all the observations, AMBER was used in the low-resolution mode at $J$, $H$, and $K$ passbands, giving a resolving power ($\lambda/\Delta\lambda$) of $\sim 35$ and recording data between about 1.1-2.5\,$\mu$m. 
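Folding an observation time with the ephemeris quoted above is a one-line computation; the sketch below (not code from the paper) reproduces the phase column of Table~\ref{table_UVES}.

```python
# Sketch: phase-folding with the ephemeris quoted in the text
# (T0 = HJD 2449997.223, P = 17.769426 d, zero phase at conjunction).
T0 = 2449997.223
P = 17.769426

def phase(hjd):
    """Rotational/orbital phase in [0, 1) for a heliocentric Julian date."""
    return ((hjd - T0) / P) % 1.0

# Two entries of the UVES observing log (Table 2) as a consistency check:
print(f"{phase(2454724.70463):.3f}")  # 0.046 (15.09.2008)
print(f"{phase(2454722.81672):.3f}")  # 0.940 (13.09.2008)
```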
Only the $H$ and $K$ band data ($\sim$\,1.5-2.5\,$\mu$m) were used for the data analysis. The $J$ band data were of poor quality owing to vanishing detected flux. The fringe tracker FINITO (Le Bouquin et al. \cite{FINITO}) was used for most observations. During the night starting September 14, data were also taken without the use of FINITO in order to confirm the calibration of the visibility. The Auxiliary Telescopes (ATs) were placed at the stations A0, K0, and G1, giving ground-baseline lengths of 128m (A0-K0) and 90m (A0-G1 and K0-G1). The A0-G1 and K0-G1 baselines have the same ground length, but differ in position angle by 90\,$\deg$. In addition to $\zeta$~And, a circular check star was observed every night. For this \object{$\mu$~Peg} was chosen because it is at a similar position on the sky as $\zeta$~And and it is expected to have a similar angular diameter ($\Theta_\mathrm{LD}$=2.50\,$\pm$\,0.04\,mas; Nordgren et al. \cite{nordgren}; Mozurkewich et al. \cite{mozurkewich}). Observations of $\zeta$~And and $\mu$~Peg were interleaved with observations of the interferometric calibration stars $\theta$\,Psc (K1\,III, $K$=1.86, $\Theta_\mathrm{LD}$=2.00 $\pm$ 0.02\,mas), 41\,Psc (K3\,III, $K$=2.43, $\Theta_\mathrm{LD}$=1.81 $\pm$ 0.02\,mas), HD\,7087 (G9\,III, $K$=2.48, $\Theta_\mathrm{LD}$=1.59 $\pm$ 0.02\,mas), and HD\,15694 (K3\,III, $K$=2.48, $\Theta_\mathrm{LD}$=1.77 $\pm$ 0.02\,mas). The angular diameters for $\theta$\,Psc and 41\,Psc are from Bord{\'e} et al. (\cite{borde}) and those for HD\,7087 and HD\,15694 are from M{\'e}rand et al. (\cite{merand}). \subsection{Spectroscopy} The UVES observations of $\zeta$~And were carried out during 10 nights between September 13, 2008 and October 1, 2009. The red arm in the standard wavelength setting of 600~nm was used with the imageslicer~\#3. This instrument setup gives a spectral resolution ($\lambda/\Delta\lambda$) of 110\,000 and a wavelength coverage of 5000-7000 {\AA}. 
Each observation consists of three exposures of 8 seconds that were later combined to one very high signal-to-noise ratio (S/N) spectrum. The S/N of combined observations was between 586 and 914 around 6400\,{\AA}. The data were reduced using the UVES pipeline. A summary of the spectroscopic observations is given in the on-line Table~\ref{table_UVES}. \onltab{2}{ \begin{table*} \caption{The high-resolution spectroscopy with UVES at VLT.} \label{table_UVES} \centering \begin{tabular}{c c c c} \hline\hline Date & HJD & phase & S/N \\ & 2450000+ & & \\ \hline 13.09.2008 & 4722.81672 & 0.940 & 866 \\ 15.09.2008 & 4724.70463 & 0.046 & 650 \\ 17.09.2008 & 4726.71012 & 0.159 & 586 \\ 18.09.2008 & 4727.68700 & 0.214 & 628 \\ 19.09.2008 & 4728.73866 & 0.273 & 762 \\ 21.09.2008 & 4730.72025 & 0.384 & 693 \\ 23.09.2008 & 4732.72873 & 0.497 & 914 \\ 25.09.2008 & 4734.65166 & 0.606 & 785 \\ 27.09.2008 & 4736.69360 & 0.721 & 612 \\ 01.10.2008 & 4740.75333 & 0.949 & 802 \\ \hline \end{tabular} \end{table*} } \section{Reduction and analysis of the interferometric data} \subsection{Data reduction} Raw visibility and closure phase values were computed using the latest version of the {\tt amdlib} data reduction package (version 2.2) and the {\tt yorick} interface, both provided by the Jean-Marie Mariotti Center (JMMC). The data reduction principles are described in Tatulli et al. (\cite{tatulli}). Absolute wavelength calibration was performed by correlating the raw spectra with a model of the atmospheric transmission, resulting in a correction of $\Delta\lambda/\lambda=-0.043$ in the K-band with respect to the original wavelength table (cf. Wittkowski et al. \cite{witt08}). For each observation mentioned in Table~\ref{tab:obsvlti}, only some of the individual frames were selected for further analysis. Only those frames were used that had a flux ratio under 3 between the telescopes of the concerned baseline and that had an estimated absolute piston of less than 4\,$\mu$m. 
Finally, out of these only the 30\% of the frames with the highest fringe signal-to-noise ratio were kept. The selected frames were averaged. The resulting differential phase and visibility values were significantly affected by chromatic piston effects caused by the dispersion of the air (cf. Millour et al. \cite{millour}; Le Bouquin et al. \cite{lebouquin}). This effect was relatively strong for our data because of the combination of long baselines and large airmasses. We used the measured differential phase to estimate the amount of chromatic piston $\delta$ using $\delta=1/2\pi\,\, d\phi/d\sigma$, where $\phi$ is the differential phase and $\sigma$ the wavenumber. The loss of the squared visibility amplitude $\rho$ was estimated using formula (1) of Millour et al. (\cite{millour}). The averaged visibility data were compensated using the estimated $\rho$. Millour et al. note that $\delta$ is the absolute piston value relative to the white light fringe. The absolute piston also includes the frame-by-frame piston that is estimated by the regular AMBER data reduction. This quantity is determined with respect to the pixel-to-visibility matrix (P2VM) reference, which can have an offset to the white light fringe. We selected frames with an estimated piston of less than 4\,$\mu$m, and verified that the piston of our P2VM measurements is less than 2\,$\mu$m in the $H$ band and less than 4\,$\mu$m in the $K$ band. In total, we assumed an error of the piston estimate of 5\,$\mu$m and propagated it to the final visibility amplitude. We also used an alternative compensation of the loss of the squared visibility amplitude $\rho$ that was based on a parametrization of the calibrator star data as a function of optical path difference, i.e., an estimate that does not depend on the measured differential phase of the science target. We obtained results well within the adopted error. 
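The piston estimate described above amounts to fitting the slope of the differential phase against wavenumber. A minimal sketch with synthetic data (the numbers are illustrative, not AMBER measurements):

```python
import numpy as np

# Sketch of the chromatic-piston estimate delta = (1/2pi) * dphi/dsigma.
# The differential phase here is synthetic (an injected 3 um piston),
# standing in for real AMBER data.
lam = np.linspace(1.5e-6, 2.5e-6, 20)     # H+K wavelengths [m]
sigma = 1.0 / lam                          # wavenumber [1/m]
delta_true = 3.0e-6                        # injected piston [m]
phi = 2 * np.pi * delta_true * sigma       # differential phase [rad]

slope = np.polyfit(sigma, phi, 1)[0]       # dphi/dsigma from a linear fit
delta_est = slope / (2 * np.pi)
print(f"recovered piston: {delta_est * 1e6:.2f} um")  # 3.00 um
```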
As a final data reduction step, the squared visibility amplitudes were calibrated for the interferometric transfer function, which was estimated using an average of the computed transfer functions based on the closest calibration star measurement before that of each science target and the closest thereafter. The final error of the calibrated data includes the statistical error of the frames, the error in the correction for chromatic piston, and the standard deviation of the two transfer function measurements. \begin{figure} \centering \includegraphics[width=8cm]{13736fg1a.ps} \includegraphics[width=8cm]{13736fg1b.ps} \caption{VLTI/AMBER visibility data of $\zeta$~And (top) and of the check star $\mu$~Peg (bottom), compared to uniform disk models.} \label{fig:vlti} \end{figure} \subsection{Analysis of interferometric data} \begin{table} \caption{Uniform disk fit results for the VLTI/AMBER data.} \label{tab:resvlti} \centering \begin{tabular}{ll|rr} \hline\hline Day & FINITO & $\zeta$\,And & $\mu$\,Peg \\\hline 14 & ON & 2.48 $\pm$ 0.06 mas & 2.43 $\pm$ 0.07 mas \\ 14 & OFF & 2.40 $\pm$ 0.21 mas & 2.43 $\pm$ 0.06 mas \\ 16 & ON & 2.53 $\pm$ 0.11 mas & 2.42 $\pm$ 0.19 mas \\ 18 & ON & 2.58 $\pm$ 0.12 mas & 2.62 $\pm$ 0.16 mas \\\hline $\theta_\mathrm{UD}$ & all data & 2.48 $\pm$ 0.09 mas & 2.43 $\pm$ 0.11 mas \\[1ex] $\theta_\mathrm{LD}$ & all data & 2.55 $\pm$ 0.09 mas & 2.49 $\pm$ 0.11 mas \\ \hline \end{tabular} \end{table} Figure~\ref{fig:vlti} shows the resulting visibility data of $\zeta$~And and of the check star $\mu$~Peg obtained from all three observing nights and compared to models of a uniform disk (UD). Because of the relatively large errors in the observations, no differences were seen in the measurements from different baselines. Thus, all the baselines were used together in the analysis.
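The uniform-disk model underlying these fits is the standard Airy-pattern visibility $V(B)=2J_1(x)/x$ with $x=\pi\theta B/\lambda$. A generic sketch (not the paper's fitting code):

```python
import numpy as np

# Generic uniform-disk (UD) visibility model, V(B) = 2 J1(x)/x with
# x = pi * theta * B / lambda -- a sketch of the model behind Table 3,
# not the actual fitting code.
MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def bessel_j1(x, n=20000):
    """J1 from its integral representation (midpoint rule, no scipy needed)."""
    t = (np.arange(n) + 0.5) * np.pi / n
    return np.mean(np.cos(t - x * np.sin(t)))

def v2_uniform_disk(baseline_m, lam_m, theta_mas):
    x = np.pi * theta_mas * MAS_TO_RAD * baseline_m / lam_m
    return (2.0 * bessel_j1(x) / x) ** 2

# A 2.48 mas disk on the 128 m A0-K0 baseline at 2.2 um is well resolved
# (squared visibility far below unity):
print(f"V^2 = {v2_uniform_disk(128.0, 2.2e-6, 2.48):.3f}")
```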
Table~\ref{tab:resvlti} lists the resulting uniform disk diameters of $\zeta$~And and $\mu$~Peg for each of the nights separately, as well as for all three observing nights. When using the data from all the nights together, the diameter is estimated from the data obtained during the nights starting on September 14 and 16, but the error is from all the data, i.e., including the data from the night starting on September 18. This is done because the data quality is significantly lower on the night starting on September 18 than during the two other observing nights. During the night of September 14, we obtained data with and without the use of the fringe tracker FINITO. The results for these two data sets agree well within the errors for both targets, and we do not see any systematic calibration effects that are caused by the use of FINITO. Deviations between observed visibility values and the UD model are mostly caused by residuals of the compensation of the chromatic piston effect, which was most noticeable on the baseline A0-G1, and by systematic calibration uncertainties due to varying atmospheric conditions. Within the obtained errors of the UD diameter of about 4\%, we do not see indications of any elliptical intensity distribution of $\zeta$~And. However, the ellipticity of $\sim$4\% expected for the night of September 18 is consistent with our data. Correction factors between UD diameter and limb-darkened (LD) disk diameters were computed using ATLAS\,9 model atmospheres (Kurucz \cite{kurucz}). For the spectral types of our target stars $\zeta$~And and $\mu$~Peg and the wavelength range used for our observations, we obtain values for $\theta_\mathrm{UD}/\theta_\mathrm{LD}$ of 0.974 and 0.976, respectively. The resulting LD diameters are $\theta_\mathrm{LD} =2.55 \pm 0.09$\,mas and $\theta_\mathrm{LD} =2.49 \pm 0.11$\,mas, respectively. Cohen et al. (\cite{cohen}) give a diameter of $2.72\pm 0.036$\,mas for $\zeta$\,And, based on spectro-photometry. 
This diameter is significantly larger, though with a smaller error, than the one obtained in this work. Still, the spectro-photometric observations could be affected by the significant magnetic activity exhibited by $\zeta$~And. The LD diameter of $\mu$~Peg obtained here is consistent with the earlier interferometric measurements of $\theta_\mathrm{LD} =2.53 \pm 0.09$\,mas obtained with the NPOI and $\theta_\mathrm{LD} =2.49 \pm 0.04$\,mas obtained with the Mark\,III interferometers (Nordgren et al. \cite{nordgren} and Mozurkewich et al. \cite{mozurkewich}), increasing the confidence in the results presented here. \section{Fundamental parameters} \subsection{Radius} The limb-darkened diameter of $\zeta$~And, obtained from the interferometric observations, is $2.55\pm 0.09$ mas. Together with the Hipparcos parallax of $17.24\pm 0.26$ mas (van Leeuwen \cite{hip}), this can be used to determine the stellar radius with the following formula: $R = \Theta_{\rm LD}\frac{C}{2\pi_{\rm p}}$, where $\Theta_{\rm LD}$ is the limb-darkened angular diameter in radians, $\pi_{\rm p}$ the parallax in arcseconds, and $C$ the conversion from parsecs to meters. For $\zeta$~And this gives a stellar radius of $R=15.9\pm 0.8 {\rm R}_{\odot}$, which is consistent with the 16.0~R$_{\odot}$ estimated in Paper\,1. \begin{figure*} \centering \includegraphics[width=6cm]{13736fg2a.ps} \includegraphics[width=6cm]{13736fg2b.ps} \includegraphics[width=6cm]{13736fg2c.ps} \includegraphics[width=6cm]{13736fg2d.ps} \includegraphics[width=6cm]{13736fg2e.ps} \includegraphics[width=6cm]{13736fg2f.ps} \caption{Doppler imaging results of $\zeta$~And obtained with the {\sc TempMap}$_\epsilon$ code. In each plot the temperature map for an individual spectral line is shown with the observed spectra and photometry, including the corresponding fits to the data.
In the middle panels, under the continuous line fits, the tiny vertical dashes are the 1-$\sigma$ error bars of the spectroscopic observations.} \label{doppler_maps} \end{figure*} \subsection{Effective temperature} The effective temperature of a star can be calculated from the interferometric diameter determination when combined with a bolometric flux measurement using the formula \begin{equation} T_{\rm eff}=\left(\frac{4f_{\rm bol}}{\sigma\Theta_{\rm LD}^{2}}\right)^{1/4}, \label{eq:Teff} \end{equation} where $f_{\rm bol}$ is the bolometric flux and $\sigma$ is the Stefan-Boltzmann constant. The bolometric flux of $\zeta$~And was estimated using measurements in all the available photometric passbands and the {\tt getCal} tool of the NASA Exoplanet Science Institute's interferometric observation planning tool suite. A bolometric flux of $f_{\rm bol} = (9.863 \pm 0.54) \times 10^{-10}$~W/m$^2$ was obtained. Inserting this value into Eq.~\ref{eq:Teff}, together with the limb-darkened angular diameter, gives $T_{\rm eff}$ of $4665\pm 140$~K. This value is very close to, and consistent within the errors with, the $T_{\rm eff}\approx 4600$\,K used for Doppler imaging in Paper\,1 and the current work. \section{Doppler imaging} For Doppler imaging we used the code {\sc TempMap}, which was originally written by Rice et al. (\cite{rice89}). The code performs a full LTE spectrum synthesis by solving the equation of transfer through a set of ATLAS-9 (Kurucz \cite{kurucz}) model atmospheres at all aspect angles and for a given set of chemical abundances. Simultaneous inversions of the spectral lines, as well as of the two photometric bandpasses, are then carried out using a maximum-entropy regularization. For the non-spherical $\zeta$\,And, a new version of the code was applied: {\sc TempMap}$_\epsilon$ (see Paper~1 and the references therein) takes the distorted geometry of the evolved component in a close binary into account through the distortion parameter $\epsilon$.
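The radius and effective-temperature determinations above can be cross-checked numerically from the quoted inputs ($\Theta_{\rm LD}=2.55$\,mas, $\pi_{\rm p}=17.24$\,mas, $f_{\rm bol}=9.863\times10^{-10}$\,W/m$^2$); the small offset in $T_{\rm eff}$ relative to the quoted $4665$\,K comes from rounding of the inputs. This is a sketch, not part of the published analysis:

```python
import math

# Numeric cross-check of R = Theta_LD * C / (2 * pi_p) and Eq. (1),
# using the values quoted in the text.
MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)
PC = 3.0857e16        # parsec in metres (the conversion constant C)
R_SUN = 6.957e8       # solar radius [m]
SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

theta = 2.55 * MAS_TO_RAD            # limb-darkened angular diameter [rad]
d = PC / 17.24e-3                    # distance from the Hipparcos parallax [m]
R = theta * d / 2.0                  # linear radius [m]

f_bol = 9.863e-10                    # bolometric flux [W/m^2]
T_eff = (4.0 * f_bol / (SIGMA_SB * theta**2)) ** 0.25

print(f"R = {R / R_SUN:.1f} R_sun, T_eff = {T_eff:.0f} K")  # R = 15.9 R_sun
```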
The elliptical distortion is approximated by a rotation ellipsoid, elongated towards the secondary star: $\epsilon = \sqrt{1 - \left(\frac{b}{a}\right)^2}$, where $a$ and $b$ are the long and the short axes of the ellipsoid, respectively. The appropriate value of $\epsilon$ for $\zeta$\,And, 0.27, as well as the overall system and stellar parameters, were adopted from Paper~1 (Table~2 therein). The 30 available UVES spectra (three exposures per night) covered 18 days, i.e., one full rotation cycle, thus allowing one Doppler reconstruction. The three nightly observations were averaged, since they were taken within approximately 120\,sec. Thus, for further investigation we used the ten averaged spectra with an enhanced S/N value of $\sim$600 or more (see Table~\ref{table_UVES} for more details on observations). Doppler imaging was performed for the well-known mapping lines within the 6392--6440~\AA\ spectral range. Doppler maps for Fe\,{\sc i}~6393, 6400, 6411, 6421, 6430, and Ca\,{\sc i}~6439 are shown in Fig.~\ref{doppler_maps}. The individual maps revealed similar spot distributions, i.e., mainly cool spots at low latitudes with temperature contrasts of 600--900\,K with respect to the unspotted surface of 4600\,K. Cool polar features are also recovered, however with significantly weaker contrasts, ranging from $\sim$100\,K (Fe\,{\sc i}~6400) to a maximum of $\sim$700\,K (cf. the Fe\,{\sc i}~6430 map). Numerous bright features also appear in the iron maps; however, as they occur near dominant cool spots they can be artifacts, so-called ``rebound'' features (see, e.g., Rice \cite{rice}). Despite the small differences between the temperature contrasts of the respective maps, and the spurious bright features, the resulting six Doppler maps are in very good agreement. This similarity is more conspicuous in Fig.~\ref{averagemap}, where the average of the six individual maps is plotted. Averaging did not blur the overall structure.
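Inverting the distortion parameter for the adopted $\epsilon=0.27$ recovers the axis ratio, consistent with the $\sim$4\% flattening quoted in the Introduction (a trivial check, not part of the published analysis):

```python
import math

# Inverting eps = sqrt(1 - (b/a)^2) for the adopted eps = 0.27.
eps = 0.27
axis_ratio = math.sqrt(1.0 - eps**2)          # b/a
flattening = 1.0 - axis_ratio                 # relative axis difference
print(f"b/a = {axis_ratio:.3f} ({100*flattening:.1f}% flattening)")
```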
The most prominent feature is the belt of cool spots in the equatorial region, with the strongest concentration of spots located at phase $\phi=0.75$, and another cool region ranging between phases $\phi=0.0$--$0.4$. A weak polar feature can also be detected. This result is reminiscent of that in Paper~1, where dominant low-latitude features also tended to concentrate at quadrature positions of opposite hemispheres for both observing seasons. \begin{figure} \centering \includegraphics[width=8cm]{13736fg3.ps} \caption{Average map produced from all the maps shown in Fig.~\ref{doppler_maps}. } \label{averagemap} \end{figure} \section{Discussion} \subsection{Comparison between spherical and ellipsoidal surface geometry in the inversion} Another Doppler imaging code, INVERS7PD, which was written by N.\ Piskunov (see, e.g., Piskunov et al.~\cite{pisk90}) and modified by T.\ Hackman (Hackman et al.~\cite{hack}), was also used to obtain a temperature map of $\zeta$~And. In this inversion, spherical geometry was used, with only the Fe\,{\sc i}~6400~{\AA} line. The observations are compared to a grid of local line profiles calculated with the SPECTRUM spectral synthesis code (Gray \& Corbally \cite{SPECTRUM}) and Kurucz model atmospheres (Kurucz \cite{kurucz}). In the calculations, 10 limb angles and nine temperatures between 3500~K and 5500~K were used. Photometry was not used as a constraint in this inversion as the ellipticity effect seen in the light curves cannot be properly taken into account when using spherical geometry. The result of this inversion is shown in Fig.~\ref{INVERS7}. The main spot structures and the temperature range are similar to the ones in the map obtained with {\sc TempMap}$_\epsilon$ from the Fe\,{\sc i} 6400~{\AA} line. The cool spots concentrate in the equatorial region, especially at the quadrature points.
The main spot is seen at the phase $\phi$=0.75 in the equatorial region, and there is also a prominent spot at the phase $\phi$=0.25 at higher latitudes. This is missing from the map obtained using {\sc TempMap}$_\epsilon$, so it is most likely an artifact caused by using spherical geometry on an ellipsoidal star (cf. Fig.3 in Paper 1). Furthermore, at the phase $\phi$=0.25, the equatorial region has a temperature close to that of the unspotted surface, unlike in the map obtained using ellipsoidal geometry. Also, the whole temperature scale is shifted by 100~K towards the cooler temperatures. On the whole, the temperature map obtained with the spherical geometry is very similar to the one obtained using ellipsoidal geometry and {\sc TempMap}$_\epsilon$. As expected, the main differences occur at the quadrature points and especially at the phase $\phi$=0.25. Also, one has to keep in mind that the tests with {\sc TempMap}$_\epsilon$ show that neglecting the ellipticity in the Doppler imaging reconstruction yields $\sim$50--240\% higher $\chi^{2}$ values in comparison to using the correct surface geometry. \begin{figure} \centering \includegraphics[width=8cm]{13736fg4.eps} \caption{Temperature map of $\zeta$~And obtained with INVERS7PD inversion code using spherical geometry and Fe\,{\sc i} 6400~{\AA} line.} \label{INVERS7} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{13736fg5.eps} \caption{Observed $V$ magnitude of $\zeta$~And (crosses) compared to the one calculated from the temperature map obtained using INVERS7PD (line).} \label{INVERS7_phot} \end{figure} The main differences between the results from the spherical and ellipsoidal codes can be seen in the photometry. Figure~\ref{INVERS7_phot} shows the normalised $V$ observations of $\zeta$~And compared to the ones calculated from the temperature map obtained with the code using spherical geometry. 
Only around phases 0.4--0.6 do the two light curves show similar behaviour; the photometry calculated from the INVERS7PD temperature map behaves completely differently from the observed one, especially around the quadrature points. \subsection{Chromospheric activity} Chromospheric activity of $\zeta$~And was investigated using the H$\alpha$ line profiles, which appeared in absorption during the observations, similarly to the other Balmer lines. Variations in the H$\alpha$ line through the rotation cycle are shown in Fig.~\ref{Ha_prof}a. Both the red and the blue wings show strong variation, each at a single but different phase. Also, most line profiles clearly show variable behaviour between velocities -100~km/s and +100~km/s. These variations are clearly seen even in the spectra that have not been normalised to the continuum level. All the spectra show identical continuum shapes, except approximately $\pm$5{\AA} from the H$\alpha$ line, corresponding to the variation also seen in the normalised spectra used in the following analysis. \begin{figure} \centering \includegraphics[width=8cm]{13736fg6.eps} \caption{Variations in the H$\alpha$ profiles of $\zeta$~And. a) The continuum normalised H$\alpha$ profiles. b) Residual profiles after the mean profile has been subtracted. The x-axis gives the velocity in comparison to the mass centre of the giant component.} \label{Ha_prof} \end{figure} To investigate the line-profile variations in more detail, the average profile was subtracted from all the profiles, thus creating the residual profiles shown in Fig.~\ref{Ha_prof}b. The temporal variations are clearly seen in these profiles. The most prominent features are the two strong absorption features seen at the velocities -350~-- 0~km/s and 0~-- 100~km/s. A dynamic spectrum, shown in Fig.~\ref{Ha_hjd}, was also created from the difference profiles. Brighter colours in the plot correspond to enhanced emission and darker colours to enhanced absorption.
The heliocentric Julian dates of the observations are shown with crosses in the plot. The data for the times where there are no observations are interpolations between the closest timepoints with data. Plotting against the heliocentric Julian date instead of the rotational phase was chosen because some events are short-lived, and the observations in any case cover only slightly more than one rotation. The observational phases are given on the right side of the plot. \begin{figure} \centering \includegraphics[width=8cm]{13736fg7.eps} \caption{Dynamic spectrum of the H$\alpha$ line based on the residual profiles after the subtraction of the average line profile from the individual profiles. The x-axis gives the velocity and the y-axis the heliocentric Julian date on the left and phase on the right side. The crosses on the left-hand side of the plot give the heliocentric Julian dates of the observations and the dotted line the zero velocity. The brighter the colour, the more emission is observed.} \label{Ha_hjd} \end{figure} The most noticeable feature in the dynamical spectrum is the strong enhanced absorption around the phase 1.6 at the velocity +100~km/s. If the velocity seen in this absorption system were caused by the stellar rotation, the feature would lie outside the stellar disk and would not be seen in absorption. Thus it must be a cloud of cool gas in the stellar atmosphere that is falling into the star. This could be the final stages of a flare event. No evidence of such an event is seen in the earlier observations, but it could have occurred during the one-night gap in the observations. More enhanced absorption is seen at the first observation (phase 0.94) extending to the very blue, to -300~km/s and beyond. This could be caused by a mass ejection event with a strong line-of-sight component. In the following observation, more enhanced absorption is seen spanning the velocities -100~-- +100~km/s.
Enhanced emission occurs at three main locations: around the phases 1.15--1.30 at the velocities -40~-- -70~km/s, around the phases 1.2--1.4 at the velocities +60~-- 100~km/s and at the phases 1.7--1.8 at the velocities +40~-- 70~km/s. These features have velocities that place them slightly outside the stellar disk, and thus they could be caused by prominences seen at the stellar limb. The prominence seen at the blue edge around phases 1.1--1.3 is most likely the same one seen at the red edge 0.5 later in phase (i.e., at phases 1.7--1.8). Also, a weak enhanced absorption feature is seen at phase 1.4 around the velocity -20~km/s, which could be caused by the prominence starting to cross the stellar disk. This prominence could be centred around phase 1.5, which in the binary reference frame is the phase pointing away from the secondary. The enhanced emission features seen in the red around phases 1.2--1.4 are, based on their velocities, also most likely caused by prominences. However, they have to be short-lived, as no evidence of them is seen in the observations before or after. These prominences would be at the disk centre approximately at phases 1.0 and 1.1, which places them on the side facing the secondary. They also coincide with the weaker cool region seen around the phases 0.0--0.4 in the Doppler image. \subsection{Long-term magnetic activity of $\zeta$~And} The long-term activity in $\zeta$~And is investigated based on the photometric $V$ and $y$ band observations obtained with the Wolfgang and Amadeus automatic photometric telescopes. The observations between December 1996 and October 2002 were already used in Paper~1. Here, we also use observations obtained between June 27, 2003 and October 25, 2008, in total 211 new $V$ magnitudes. When all the instrumental differential magnitudes are plotted against the phase (see Fig.~\ref{phot_ph}), the variation caused by the ellipticity effect is clearly seen.
Still, the observations show much larger scatter around the ellipticity curve than is expected from the measurement error of 0.01--0.02 magnitudes. This indicates that there are also significant variations due to starspots. Evidence of changes in the activity level is also seen when all the observations are plotted against the Julian Date in Fig.~\ref{phot_hjd}. In this plot the small crosses give the individual observations and the large crosses the mean of that time period. No mean is given for some time periods, as there are too few measurements, or they are grouped such that the full light curve was not sampled. \begin{figure} \centering \includegraphics[width=8cm]{13736fg8.eps} \caption{All the differential $V$ and $y$ magnitudes of $\zeta$~And in the instrumental system plotted against the rotational phase. } \label{phot_ph} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{13736fg9.eps} \caption{All the differential $V$ and $y$ magnitudes plotted against the heliocentric Julian date. The large crosses show the mean magnitudes for that time. } \label{phot_hjd} \end{figure} Changes in the mean magnitudes could be interpreted as a solar-like activity cycle. A spectral analysis of the mean measurements was carried out using the Lomb method (Press et al.~\cite{press}). The result indicates the presence of a cycle with a length of $5.9\pm 0.7$ years, but the false alarm probability is 0.36. Thus the period may be spurious, and more measurements are needed to confirm it. As can be seen from Fig.~\ref{phot_hjd}, $\zeta$~And is on average brighter by 0.014 magnitudes during the VLT observations presented here than during the December 1997 -- January 1998 KPNO observations presented in Paper~1. This implies that the spot coverage, and/or the spot temperature, is different between the two epochs.
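The Lomb period search applied above to the mean magnitudes can be sketched as follows. This is only a minimal illustration, not the routine of Press et al.: the unevenly sampled series below is synthetic (a hypothetical 5.9-year cycle with an amplitude of 0.02~mag and a 12-year baseline), and all variable names and grid choices are illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled stand-in for the seasonal mean magnitudes:
# a hypothetical 5.9-year cycle of 0.02 mag amplitude plus small noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 12.0, 60))            # observation epochs [yr]
y = 0.02 * np.sin(2.0 * np.pi * t / 5.9)           # cyclic variation [mag]
y += rng.normal(0.0, 0.002, t.size)                # measurement noise

periods = np.linspace(2.0, 10.0, 2000)             # trial cycle lengths [yr]
# lombscargle expects angular frequencies; normalize=True gives power in [0, 1]
power = lombscargle(t, y - y.mean(), 2.0 * np.pi / periods, normalize=True)

best = periods[np.argmax(power)]
print(f"best-fit cycle length: {best:.2f} yr, peak power: {power.max():.2f}")
```

As in the text, the significance of such a peak still has to be judged through a false alarm probability: with only about two cycles covered, a formally well-defined peak can be spurious.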
To study this further, the temperature maps obtained with the Ca\,{\sc i} 6439~{\AA} line were investigated for both the VLT and KPNO observations. The hottest temperature is basically the same in both maps, 4600~K for VLT and 4620~K for KPNO. Still, the coolest temperatures are very different. For the VLT observations, the coolest spots are 3710~K and for the KPNO map 3480~K. Also, the number of surface elements having a temperature of 4000~K or less in the map obtained from the VLT is 75\% of the surface elements with those temperatures in the KPNO map. This investigation supports the existence of an activity cycle, which is also indicated by the long-term photometry of $\zeta$~And. Still, one must keep in mind that the temperatures in the Doppler images are very sensitive to the data quality, and the VLT data are superior to the KPNO ones. \section{Conclusions} The following conclusions can be drawn from the optical interferometry, high-resolution spectroscopy and broad band photometry presented in this work. \begin{enumerate} \item Optical interferometry gives an apparent diameter of $2.55\pm 0.09$\,mas for $\zeta$\,And. Using the Hipparcos parallax, this translates into a stellar radius of $R=15.9\pm 0.8 {\rm R}_{\odot}$, which is in line with the earlier radius determinations. \item Combining the interferometrically determined diameter and bolometric flux gives an effective temperature of $T_{\rm eff} = 4665\pm 140$~K, which is consistent with the values determined through Doppler imaging. \item The expected ellipsoidal stellar geometry with $\sim$4\% difference between the long and short axes cannot be confirmed with the current interferometric observations, which have errors of about 4\% in the diameter measurement. However, the highest ellipticity expected for the night of September 18 is consistent with the data. \item The Doppler images reveal cool spots on the surface of the primary of the $\zeta$~And binary.
The spots are located in the equatorial region, and the main concentration of spots is seen around phase 0.75, i.e., 0.25 in phase from the secondary. Another weaker cool region spans the phases 0.0--0.4, again around the equator. There are also indications of a cool polar cap. On the whole, this spot configuration is very similar to the one seen in the earlier published 1997/1998 data. \item Long-term photometric observations indicate an activity cycle, but more measurements are needed to confirm this and its period. The investigation of the Doppler maps obtained in January 1998 and September 2008 also hints at an activity cycle. \item The chromospheric activity, investigated from the H$\alpha$ line, shows evidence of both prominences and cool clouds. The prominences do not seem to show any strong evidence of occurring at certain locations in the binary reference frame, nor are they associated with the coolest spot seen on the surface. On the other hand, one of the detected prominences seems to be related to the group of weaker cool spots located at phases 0.0--0.4. \end{enumerate} \begin{acknowledgements} ZsK is a grantee of the Bolyai J\'anos Fellowship of the Hungarian Academy of Sciences. We also thank the ESO Scientific Visitor Programme for enabling ZsK to visit Garching during the preparation of this paper. This work has made use of the Smithsonian/NASA Astrophysics Data System (ADS) and of the Centre de Donn\'ees astronomiques de Strasbourg (CDS), and the services from the NASA Exoplanet Science Institute, California Institute of Technology, http://nexsci.caltech.edu. \end{acknowledgements}
\section{Introduction and statement of the main results}\label{intro} We aim to describe the asymptotic behavior near the singularity of solutions to backward evolution equations with inverse square singular potentials of the form \begin{equation}\label{prob} u_t+\Delta u+\dfrac{a(x/|x|)}{|x|^2}\,u+f(x,t, u(x,t))=0, \end{equation} in ${\mathbb R}^N\times (0,T)$, where $T>0$, $N\geq 3$, $a\in L^{\infty}({\mathbb S}^{N-1})$ and $f:{\mathbb R}^N\times(0,T)\times{\mathbb R}\to {\mathbb R}$. Inverse square potentials are related to the classical Hardy inequality \begin{equation*} \int_{{\mathbb R}^N}\,|\nabla u(x)|^{2}\,dx\geq \bigg(\frac{N-2}{2}\bigg)^{\!\!2}\int_{{\mathbb R}^N} \frac{u^{2}(x)}{|x|^{2}}\,dx, \quad\text{for all }u\in\mathcal{C}_0^\infty({\mathbb R}^N),\quad N\ge 3, \end{equation*} see e.g. \cite{GP,HLP}. Parabolic problems with singular inverse square Hardy potentials arise in the linearization of standard combustion models, see \cite{PV}. The properties of the heat operator are strongly affected by the presence of the singular inverse square potential, which, having the same order of homogeneity as the Laplacian and failing to belong to the Kato class, cannot be regarded as a lower order term. Hence, singular problems with inverse square potentials represent a borderline case with respect to the classical theory of parabolic equations. Such a criticality makes parabolic equations of type (\ref{prob}) and their elliptic versions quite challenging from the mathematical point of view, thus motivating a large literature which, starting from the pioneering paper \cite{BaGo}, has been devoted to their analysis, see e.g. \cite{GP,vazquez_zuazua} for the parabolic case and \cite{AFP,smets,terracini} for the elliptic counterpart.
In particular, the influence of the Hardy potential in semilinear parabolic problems has been studied in \cite{APP}, in the case $f(x,t,s)=s^p$, $p>1$, and for $a(x/|x|)=\lambda$, $\lambda>0$; the analysis carried out in \cite{APP} highlighted a deep difference with respect to the classical heat equation ($\lambda=0$), showing that, if $\lambda>0$, there exists a critical exponent $p_+(\lambda)$ such that, for $p\ge p_+(\lambda)$, there is no solution even in the weakest sense for any nontrivial initial datum. The present paper addresses the problem of describing the behavior of solutions along the directions $(\lambda x,\lambda^2 t)$ naturally related to the heat operator. Indeed, the unperturbed operator $u_t+\Delta u+\frac{a(x/|x|)}{|x|^2}\,u$ is invariant under the action $(x,t)\mapsto(\lambda x,\lambda^2 t)$. Then we are interested in evaluating the asymptotics of $$ u(\sqrt{t} x,t)\quad\text{as }t\to 0^+ $$ for solutions to (\ref{prob}). Our analysis will show that $u(\sqrt{t} x,t)$ behaves as a singular self-similar eigenfunction of the Ornstein-Uhlenbeck operator with inverse square potential, multiplied by a power of $t$ related to the corresponding eigenvalue, which can be selected by the limit of a frequency type function associated to the problem. We consider both linear and subcritical semilinear parabolic equations of type (\ref{prob}).
More precisely, we deal with the case $f(x,t,s)= h(x,t)s$ corresponding to the linear problem \begin{equation}\label{prob1} u_t+\Delta u+\dfrac{a(x/|x|)}{|x|^2}\,u+h(x,t)u=0,\quad \text{in }{\mathbb R}^N\times (0,T), \end{equation} with a perturbing potential $h$ satisfying \begin{equation}\label{eq:der} h,h_t\in L^{r}\big((0,T),L^{{N}/{2}}({\mathbb R}^N)\big) \quad\text{for some }r>1, \quad h_t\in L^{\infty}_{\rm loc}\big((0,T),L^{{N}/{2}}({\mathbb R}^N)\big), \end{equation} and negligible with respect to the inverse square potential $|x|^{-2}$ near the singularity in the sense that there exists $C_h>0$ such that \begin{equation}\label{eq:h} |h(x,t)|\leq C_h(1+|x|^{-2+\varepsilon}) \quad \text{for all }t\in (0,T),\text{ a.e. }x\in{\mathbb R}^N, \text{ and for some }\varepsilon\in(0,2). \end{equation} We also treat the semilinear case $f(x,t,s)= \varphi (x, t, s)$, with a nonlinearity $\varphi\in C^1({\mathbb R}^N\times(0,T)\times{\mathbb R})$ satisfying the following growth condition \begin{equation}\label{eq:fi} \begin{cases} \dfrac{|\varphi (x,t,s)|+|x\cdot\nabla_x \varphi (x,t,s)| +|t\frac{\partial \varphi}{\partial t}(x,t,s)|}{|s|}\leq C_\varphi (1+|s|^{p-1})\\[10pt] \big|\varphi (x,t,s)-s{\textstyle{\frac{\partial \varphi}{\partial s}}}(x,t,s)\big| \leq C_\varphi|s|^q \end{cases} \end{equation} for all $(x,t,s)\in {\mathbb R}^N\times(0,T)\times{\mathbb R}$ and some $1< p<2^*-1$ and $2\leq q<p+1$, where $2^*=\frac{2N}{N-2}$ is the critical exponent for Sobolev's embedding and $C_\varphi>0$ is independent of $x\in{\mathbb R}^N$, $t\in (0,T)$, and $s\in{\mathbb R}$.
In particular, we are going to classify the behavior of solutions to the semilinear parabolic problem \begin{equation}\label{prob2} u_t+\Delta u+\dfrac{a(x/|x|)}{|x|^2}\,u+\varphi (x, t, u(x,t))=0, \quad \text{in }{\mathbb R}^N\times (0,T), \end{equation} satisfying \begin{equation}\label{eq:u1} u\in L^{\infty}(0,T, L^{p+1}({\mathbb R}^N)) \end{equation} and \begin{equation}\label{eq:u2} t u_t\in L^{\infty}(0,T, L^{\frac{p+1}{p+1-q}}({\mathbb R}^N)) \mbox { and } \sup\limits_{t\in (0,T)}t^{N/2}\int_{{\mathbb R}^N} |x|^{\frac{2(p+1)}{p-1}} |u(\sqrt t x,t)|^{p+1}\,dx<\infty. \end{equation} In order to introduce a suitable notion of solution to (\ref{prob}), for every $t>0$ let us define the space ${\mathcal H}_t$ as the completion of $C^{\infty}_{\rm c}({\mathbb R}^N)$ with respect to $$ \|u\|_{{\mathcal H}_t}=\bigg(\int_{{\mathbb R}^N}\big(t|\nabla u(x)|^2+|u(x)|^2\big) G(x,t)\,dx\bigg)^{\!\!1/2}, $$ where \begin{equation*} G(x,t)=t^{-N/2}\exp\Big(-\frac{|x|^2}{4t}\Big) \end{equation*} is the heat kernel satisfying \begin{equation}\label{eq:heatker} G_t-\Delta G=0\quad\text{and}\quad \nabla G(x,t)=-\frac{x}{2t}\,G(x,t) \quad\text{in }{\mathbb R}^N\times (0,+\infty). \end{equation} We denote as $\big({\mathcal H}_t\big)^\star$ the dual space of ${\mathcal H}_t$ and by ${}_{({\mathcal H}_t)^\star}\langle \cdot,\cdot\rangle_{{\mathcal H}_t}$ the corresponding duality product. For every $t>0$, we also define the space ${\mathcal L}_t$ as the completion of $C^{\infty}_{\rm c}({\mathbb R}^N)$ with respect to $$ \|u\|_{{\mathcal L}_t}=\bigg(\int_{{\mathbb R}^N}|u(x)|^2 G(x,t)\,dx\bigg)^{\!\!1/2}.
$$ \begin{Definition}\label{def:solution} We say that $u\in L^1_{\rm loc }({\mathbb R}^N\times(0,T))$ is a weak solution to (\ref{prob}) in ${\mathbb R}^N\times(0,T)$ if \begin{align} \label{eq:defsol1}&\int_\tau^T\|u(\cdot,t)\|^2_{{\mathcal H}_t}\,dt<+\infty,\quad\int_\tau^T\Big\|u_t+\frac{\nabla u\cdot x}{2t}\Big\|^2_{({\mathcal H}_t)^\star}dt<+\infty \text{ for all }\tau\in (0,T),\\ \label{eq:defsol2}&{\phantom{\bigg\langle}}_{{\mathcal H}_t^\star}\bigg\langle u_t+\frac{\nabla u\cdot x}{2t},\phi \bigg\rangle_{{\mathcal H}_t}\\ &\notag\qquad= \int_{{\mathbb R}^N}\bigg(\nabla u(x,t)\cdot \nabla \phi(x)- \dfrac{a(x/|x|)}{|x|^2}\,u(x,t)\phi(x)-f(x,t,u(x,t))\phi(x)\bigg)G(x,t)\,dx \end{align} for a.e. $t\in (0,T)$ and for each $\phi\in {\mathcal H}_t$. \end{Definition} It will be clear from the parabolic Hardy type inequality of Lemma \ref{Hardytemp} and the Sobolev weighted inequality of Corollary \ref{cor:ineqSob}, that the integral $\int_{{\mathbb R}^N}f(x,t,u(x,t))\phi(x)G(x,t)dx$ in the above definition is finite for a.e. $t\in(0,T)$, both in the linear case $f(x,t,s)= h(x,t)s$ under assumptions (\ref{eq:der}--\ref{eq:h}) and in the semilinear case $f(x,t,s)= \varphi (x, t, s)$ under condition (\ref{eq:fi}) and for $u$ satisfying (\ref{eq:u1}). \begin{remark}\label{rem:uv} If $u\in L^1_{\rm loc }({\mathbb R}^N\times(0,T))$ satisfies (\ref{eq:defsol1}), then the function $$ v(x,t):=u(\sqrt{t}x,t) $$ satisfies \begin{equation}\label{eq:4} v\in L^2(\tau,T;\mathcal H)\quad\text{and}\quad v_t\in L^2(\tau,T;(\mathcal H)^\star) \quad\text{for all }\tau\in (0,T), \end{equation} where we have set $$ \mathcal H:={\mathcal H}_1, $$ i.e. ${\mathcal H}$ is the completion of $C^{\infty}_{\rm c}({\mathbb R}^N)$ with respect to $$ \|v\|_{{\mathcal H}}=\bigg(\int_{{\mathbb R}^N}\big(|\nabla v(x)|^2+|v(x)|^2\big) e^{-|x|^2/4}\,dx\bigg)^{\!\!1/2}. $$ We notice that from (\ref{eq:4}) it follows that $$ v\in C^0([\tau,T],{\mathcal L}), $$ see e.g. 
\cite[Theorem 1.2]{SH}, where $\mathcal L:={\mathcal L}_1$ is the completion of $C^{\infty}_{\rm c}({\mathbb R}^N)$ with respect to the norm $\|v\|_{{\mathcal L}}=\big(\int_{{\mathbb R}^N}|v(x)|^2e^{-|x|^2/4}\,dx\big)^{1/2}$. Moreover the function $$ t\in[\tau,T]\mapsto \|v(t)\|^2_{{\mathcal L}}=\int_{{\mathbb R}^N}u^2(x,t)G(x,t)\,dx $$ is absolutely continuous and $$ \frac12\frac{d}{dt} \int_{{\mathbb R}^N}u^2(x,t)G(x,t)\,dx= \frac12\frac{d}{dt}\|v(t)\|_{{\mathcal L}}^2= {}_{{\mathcal H}^\star}\langle v_t(\cdot,t),v(\cdot,t) \rangle_{{\mathcal H}} ={\phantom{\bigg\langle}}_{{\mathcal H}_t^\star}\bigg\langle u_t+\frac{\nabla u\cdot x}{2t},u(\cdot,t) \bigg\rangle_{{\mathcal H}_t} $$ for a.e. $t\in(0,T)$. \end{remark} \begin{remark}\label{rem:v2} If $u$ is a weak solution to (\ref{prob}) in the sense of Definition \ref{def:solution}, then the function $v(x,t):=u(\sqrt{t}x,t)$ defined in Remark \ref{rem:uv} is a weak solution to \begin{equation*} v_t+\frac1t\bigg(\Delta v-\frac x2\cdot \nabla v+ \dfrac{a(x/|x|)}{|x|^2}\,v+tf(\sqrt tx,t,v(x,t))\bigg)=0, \end{equation*} in the sense that, for every $\phi\in{\mathcal H}$, \begin{multline}\label{eq:24} {\phantom{\big\langle}}_{{\mathcal H}^\star}\!\big\langle v_t,\phi \big\rangle_{{\mathcal H}}\\= \frac1t\int_{{\mathbb R}^N}\!\!\bigg(\!\nabla v(x,t)\!\cdot \!\nabla \phi(x)- \dfrac{a\big(\frac{x}{|x|}\big)}{|x|^2}\,v(x,t)\phi(x) -t\,f(\sqrt tx,t,v(x,t))\phi(x)\!\bigg)G(x,1)\,dx. \end{multline} In particular, if $u$ is a weak solution to (\ref{prob1}), then $v(x,t):=u(\sqrt{t}x,t)$ weakly solves \begin{equation*} v_t+\frac1t\bigg(\Delta v-\frac x2\cdot \nabla v+ \dfrac{a(x/|x|)}{|x|^2}\,v+t h(\sqrt tx,t)v\bigg)=0, \end{equation*} whereas, if $u$ is a weak solution to (\ref{prob2}), then $v(x,t):=u(\sqrt{t}x,t)$ weakly solves \begin{equation*} v_t+\frac1t\bigg(\Delta v-\frac x2\cdot \nabla v+ \dfrac{a(x/|x|)}{|x|^2}\,v+t\varphi(\sqrt tx,t,v)\bigg)=0.
\end{equation*} \end{remark} We give a precise description of the asymptotic behavior at the singularity of solutions to (\ref{prob1}) and (\ref{prob2}) in terms of the eigenvalues and eigenfunctions of the Ornstein-Uhlenbeck operator with singular inverse square potential \begin{equation}\label{eq:13} L:{\mathcal H}\to({\mathcal H})^\star,\quad L=-\Delta+\frac{x}2\cdot\nabla-\frac{a(x/|x|)}{|x|^2}, \end{equation} acting as $$ {}_{{\mathcal H}^\star}\langle Lv,w \rangle_{{\mathcal H}} = \int_{{\mathbb R}^N}\bigg(\nabla v(x)\cdot\nabla w(x)-\frac{a(x/|x|)}{|x|^2}\,v(x)w(x)\bigg)G(x,1)\,dx, \quad\text{for all }v,w\in{\mathcal H}. $$ In order to describe the spectrum of $L$, we consider the operator $-\Delta_{\mathbb S^{N-1}}-a(\theta)$ on the unit $(N-1)$-dimensional sphere $\mathbb S^{N-1}$. For any $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$, $-\Delta_{\mathbb S^{N-1}}-a(\theta)$ admits a diverging sequence of eigenvalues $$ \mu_1(a)<\mu_2(a)\leq\cdots\leq\mu_k(a)\leq \cdots, $$ the first of which is simple and can be characterized as \begin{equation}\label{eq:67} \mu_1(a)=\min_{\psi\in H^1(\mathbb S^{N-1})\setminus\{0\}}\frac{\int_{\mathbb S^{N-1}}|\nabla_{\mathbb S^{N-1}}\psi(\theta)|^2\,dS(\theta)-\int_{\mathbb S^{N-1}}a(\theta) \psi^2(\theta)\,dS(\theta)}{\int_{\mathbb S^{N-1}}\psi^2(\theta)\,dS(\theta)}, \end{equation} see \cite{FMT2}. Moreover the quadratic form associated to $-\Delta -\frac{a(x/|x|)}{|x|^2}$ is positive definite if and only if \begin{equation}\label{eq:posde} \mu_1(a)>-\frac{(N-2)^2}4, \end{equation} see \cite[Lemma 2.5]{FMT2}. To each $k\in{\mathbb N}$, $k\geq 1$, we associate a $L^{2}\big({\mathbb S}^{N-1}\big)$-normalized eigenfunction $\psi_k$ of the operator $-\Delta_{\mathbb S^{N-1}}-a(\theta)$ corresponding to the $k$-th eigenvalue $\mu_{k}(a)$, i.e. 
satisfying \begin{equation}\label{eq:2rad} \begin{cases} -\Delta_{\mathbb S^{N-1}}\psi_k(\theta)-a(\theta)\psi_k(\theta) =\mu_k(a)\,\psi_k(\theta),&\text{in }{\mathbb S}^{N-1},\\[3pt] \int_{{\mathbb S}^{N-1}}|\psi_k(\theta)|^2\,dS(\theta)=1. \end{cases} \end{equation} In the enumeration $\mu_1(a)<\mu_2(a)\leq\cdots\leq\mu_k(a)\leq \cdots$ we repeat each eigenvalue as many times as its multiplicity; thus exactly one eigenfunction $\psi_k$ corresponds to each index $k\in{\mathbb N}$. We can choose the functions $\psi_k$ in such a way that they form an orthonormal basis of $L^2({\mathbb S}^{N-1})$. The following proposition completely describes the spectrum of the operator $L$, thus extending to the anisotropic case the spectral analysis performed in \cite[\S 9.3]{vazquez_zuazua} in the isotropic case $a(\theta)\equiv \lambda$; see also \cite[\S 4.2]{CH} and \cite[\S 2]{ES} for the non-singular case. \begin{Proposition}\label{p:explicit_spectrum} The set of the eigenvalues of the operator $L$ is $$ \big\{ \gamma_{m,k}: k,m\in{\mathbb N}, k\geq 1\big\} $$ where \begin{equation}\label{eq:65} \gamma_{m,k}=m-\frac{\alpha_k}2, \quad \alpha_k=\frac{N-2}{2}-\sqrt{\bigg(\frac{N-2}{2}\bigg)^{\!\!2}+\mu_k(a)}, \end{equation} and $\mu_k(a)$ is the $k$-th eigenvalue of the operator $-\Delta_{\mathbb S^{N-1}}-a(\theta)$ on the sphere $\mathbb S^{N-1}$.
Each eigenvalue $\gamma_{m,k}$ has finite multiplicity equal to $$ \#\bigg\{j\in{\mathbb N},j\geq 1: \gamma_{m,k}+\frac{\alpha_j}2\in{\mathbb N}\bigg\} $$ and a basis of the corresponding eigenspace is $$ \left\{V_{n,j}: j,n\in{\mathbb N},j\geq 1,\gamma_{m,k}=n-\frac{\alpha_j}2 \right\}, $$ where \begin{equation}\label{eq:66} V_{n,j}(x)= |x|^{-\alpha_j}P_{j,n}\bigg(\frac{|x|^2}{4}\bigg) \psi_j\Big(\frac{x}{|x|}\Big), \end{equation} $\psi_j$ is an eigenfunction of the operator $-\Delta_{\mathbb S^{N-1}}-a(\theta)$ on the sphere $\mathbb S^{N-1}$ associated to the $j$-th eigenvalue $\mu_{j}(a)$ as in (\ref{eq:2rad}), and $P_{j,n}$ is the polynomial of degree $n$ given by $$ P_{j,n}(t)=\sum_{i=0}^n \frac{(-n)_i}{\big(\frac{N}2-\alpha_j\big)_i}\,\frac{t^i}{i!}, $$ denoting as $(s)_i$, for all $s\in{\mathbb R}$, the Pochhammer symbol $(s)_i=\prod_{j=0}^{i-1}(s+j)$, $(s)_0=1$. \end{Proposition} The following theorems provide a classification of the rate of the singularity of any solution $u$ to (\ref{prob}), based on the limit as $t\to 0^+$ of the {\it Almgren type frequency function} (see \cite{almgren,poon}), \begin{equation}\label{eq:64} {\mathcal N}_{f,u}(t)=\frac{t \int_{{\mathbb R}^N}\big(|\nabla u(x,t)|^2- \frac{a(x/|x|)}{|x|^2}u^2(x,t)-f(x,t, u(x,t))u(x,t)\big)G(x,t)\,dx} {\int_{{\mathbb R}^N}u^2(x,t)\, G(x,t)\,dx}. \end{equation} In the linear case $f(x,t,u)=h(x,t)u$, the behavior of weak solutions to (\ref{prob1}) is described by the following theorem. \begin{Theorem}\label{asym1} Let $u\not\equiv 0$ be a weak solution to (\ref{prob1}) in the sense of Definition \ref{def:solution}, with $h$ satisfying (\ref{eq:der}) and (\ref{eq:h}) and $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfying (\ref{eq:posde}).
Then there exist $m_0,k_0\in{\mathbb N}$, $k_0\geq 1$, such that \begin{equation}\label{eq:671} \lim_{t\to 0^+}{\mathcal N}_{hu, u}(t)=\gamma_{m_0,k_0}, \end{equation} where ${\mathcal N}_{hu,u}$ is defined in (\ref{eq:64}) and $\gamma_{m_0,k_0}$ is as in (\ref{eq:65}). Furthermore, denoting as $J_0$ the finite set of indices \begin{equation}\label{eq:79} J_0=\{(m,k)\in{\mathbb N}\times({\mathbb N}\setminus\{0\}):m-\frac{\alpha_{k}}2=\gamma_{m_0,k_0}\}, \end{equation} for all $\tau \in(0,1)$ there holds \begin{equation}\label{eq:80} \lim_{\lambda\to0^+}\int_\tau^1 \bigg\|\lambda^{-2\gamma_{m_0,k_0}}u(\lambda x,\lambda^2t) -t^{\gamma_{m_0,k_0}}\sum_{(m,k)\in J_0}\beta_{m,k}\widetilde V_{m,k}(x/\sqrt t) \bigg\|_{{\mathcal H}_t}^2dt=0 \end{equation} and \begin{equation}\label{eq:81} \lim_{\lambda\to0^+}\sup_{t\in[\tau,1]} \bigg\|\lambda^{-2\gamma_{m_0,k_0}}u(\lambda x,\lambda^2t) -t^{\gamma_{m_0,k_0}}\sum_{(m,k)\in J_0}\beta_{m,k}\widetilde V_{m,k}(x/\sqrt t) \bigg\|_{{\mathcal L}_t}=0, \end{equation} where $\widetilde V_{m,k}= V_{m,k}/\|V_{m,k}\|_{\mathcal L}$, $V_{m,k}$ are as in (\ref{eq:66}), \begin{multline}\label{eq:beta1} \beta_{m,k}=\Lambda^{-2\gamma_{m_0,k_0}} \int_{{\mathbb R}^N}u(\Lambda x,\Lambda^2)\widetilde V_{m,k}(x)G(x,1)\,dx\\ +2\int_0^{\Lambda}s^{1-2\gamma_{m_0,k_0}} \bigg(\int_{{\mathbb R}^N}h(s x, s^2) u(s x,s^2)\widetilde V_{m,k}(x)G(x,1)\,dx\bigg) ds \end{multline} for all $\Lambda\in(0,\Lambda_0)$ and for some $\Lambda_0\in(0,\sqrt{T})$, and $\beta_{m,k}\neq0$ for some $(m,k)\in J_0$. \end{Theorem} An analogous result holds in the semilinear case for solutions to (\ref{prob2}) satisfying the further conditions (\ref{eq:u1}) and (\ref{eq:u2}). \begin{Theorem}\label{asym2} Let $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfy (\ref{eq:posde}) and $\varphi\in C^1({\mathbb R}^N\times(0,T)\times{\mathbb R})$ such that (\ref{eq:fi}) holds. 
If $u\not\equiv 0$ satisfies (\ref{eq:u1}--\ref{eq:u2}) and is a weak solution to (\ref{prob2}) in the sense of Definition \ref{def:solution}, then there exist $m_0,k_0\in{\mathbb N}$, $k_0\geq 1$, such that \begin{equation}\label{eq:672} \lim_{t\to 0^+}{\mathcal N}_{\varphi,u}(t)=\gamma_{m_0,k_0}, \end{equation} where ${\mathcal N}_{\varphi,u}$ is defined in (\ref{eq:64}) and $\gamma_{m_0,k_0}$ is as in (\ref{eq:65}). Furthermore, letting $J_0$ be the finite set of indices defined in (\ref{eq:79}), for all $\tau \in(0,1)$ convergences (\ref{eq:80}) and (\ref{eq:81}) hold with \begin{multline}\label{eq:beta2} \beta_{m,k}=\Lambda^{-2\gamma_{m_0,k_0}} \int_{{\mathbb R}^N}u(\Lambda x,\Lambda^2)\widetilde V_{m,k}(x)G(x,1)\,dx\\ +2\int_0^{\Lambda}s^{1-2\gamma_{m_0,k_0}} \bigg(\int_{{\mathbb R}^N}\varphi(s x, s^2, u(s x,s^2))\widetilde V_{m,k}(x)G(x,1)\,dx\bigg) ds \end{multline} for all $\Lambda\in(0,\Lambda_0)$ and for some $\Lambda_0\in(0,\sqrt{T})$, and $\beta_{m,k}\neq0$ for some $(m,k)\in J_0$. \end{Theorem} Formulas \eqref{eq:beta1} and \eqref{eq:beta2} can be seen as Cauchy integral type formulas for solutions to problems \eqref{prob1} and \eqref{prob2}, since they allow reconstructing, up to the perturbation, the solution at the singularity from the values it takes at any positive time. The proofs of Theorems \ref{asym1} and \ref{asym2} are based on a parabolic Almgren type monotonicity formula combined with blow-up methods. Almgren type frequency functions associated to parabolic equations were first introduced by C.-C. Poon in \cite{poon}, where unique continuation properties are derived by proving a monotonicity result which is the parabolic counterpart of the monotonicity formula introduced by Almgren in \cite{almgren} and extended by Garofalo and Lin in \cite{GL} to elliptic operators with variable coefficients. A further development in the use of Almgren monotonicity methods to study regularity of solutions to parabolic problems is due to the recent paper \cite{C}.
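As a concrete illustration of Proposition \ref{p:explicit_spectrum}, the quantities $\alpha_k$, $\gamma_{m,k}$ in (\ref{eq:65}) and the polynomials $P_{j,n}$ can be evaluated numerically. The sketch below uses the isotropic choice $a\equiv\lambda$, for which the angular eigenvalues are $l(l+N-2)-\lambda$, $l\in{\mathbb N}$; the dimension $N=5$ and $\lambda=1$ are hypothetical illustration values, and all function names are ours. Since $(-n)_i$ vanishes for $i>n$, the terminating sum defining $P_{j,n}$ coincides with Kummer's confluent hypergeometric function $M(-n,\frac N2-\alpha_j,t)$, which the sketch also checks.

```python
import math
from scipy.special import hyp1f1  # Kummer's function M(a, b, z)

def poch(s, i):
    """Pochhammer symbol (s)_i = prod_{j=0}^{i-1} (s + j), with (s)_0 = 1."""
    out = 1.0
    for j in range(i):
        out *= s + j
    return out

def alpha(mu, N):
    """alpha_k of (eq:65); requires mu > -((N-2)/2)^2, i.e. condition (eq:posde)."""
    return (N - 2) / 2 - math.sqrt(((N - 2) / 2) ** 2 + mu)

def gamma(m, mu, N):
    """Eigenvalue gamma_{m,k} = m - alpha_k / 2 of the operator L."""
    return m - alpha(mu, N) / 2

def P(n, b, t):
    """Terminating sum P_{j,n}(t) with b standing for N/2 - alpha_j."""
    return sum(poch(-n, i) / poch(b, i) * t**i / math.factorial(i)
               for i in range(n + 1))

# Illustrative isotropic case a(theta) = lam (hypothetical values):
# the angular eigenvalues are mu = l(l + N - 2) - lam, l = 0, 1, 2, ...
N, lam = 5, 1.0
mus = [l * (l + N - 2) - lam for l in range(4)]
assert mus[0] > -((N - 2) / 2) ** 2          # positivity condition (eq:posde)

# a few eigenvalues gamma_{m,k} of L, in increasing order
gammas = sorted(gamma(m, mu, N) for m in range(3) for mu in mus)

# P_{j,n} agrees with the terminating Kummer series M(-n, N/2 - alpha_j, t)
b = N / 2 - alpha(mus[1], N)
print(gammas[:3], abs(P(3, b, 1.7) - hyp1f1(-3, b, 1.7)))
```

In particular, for $\mu=0$ one gets $\alpha=0$ and $\gamma_{m,k}=m$, recovering the integer spectrum of the Ornstein-Uhlenbeck operator without potential.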
We also mention that an Almgren type monotonicity method combined with blow-up was used in \cite{FFT} in an elliptic context to study the behavior of solutions to stationary Schr\"odinger equations with singular electromagnetic potentials. Theorem \ref{asym1} and Theorem \ref{asym2} imply {\it a strong unique continuation property} at the singularity, as the following corollary states. \begin{Corollary}\label{cor:uniq_cont} Suppose that either $u$ is a weak solution to (\ref{prob1}) under the assumptions of Theorem \ref{asym1} or $u$ satisfies (\ref{eq:u1}--\ref{eq:u2}) and weakly solves (\ref{prob2}) under the assumptions of Theorem \ref{asym2}. If \begin{equation}\label{eq:uniq_cont} u(x,t)=O\big((|x|^2+t)^k\big)\quad\text{as }(x,t)\to(0,0) \quad\text{for all }k\in{\mathbb N}, \end{equation} then $u\equiv 0$ in ${\mathbb R}^N\times(0,T)$. \end{Corollary} As a byproduct of the proof of Theorems \ref{asym1} and \ref{asym2}, we also obtain the following result, which can be regarded as a {\it unique continuation property} with respect to time. \begin{Proposition}\label{p:uniq_cont} Suppose that either $u$ is a weak solution to (\ref{prob1}) under the assumptions of Theorem \ref{asym1} or $u$ satisfies (\ref{eq:u1}--\ref{eq:u2}) and weakly solves (\ref{prob2}) under the assumptions of Theorem \ref{asym2}. If there exists $t_0\in(0,T)$ such that $$ u(x,t_0)=0\quad\text{for a.e. }x\in{\mathbb R}^N, $$ then $u\equiv 0$ in ${\mathbb R}^N\times(0,T)$. \end{Proposition} There exists a large literature dealing with strong unique continuation properties in the parabolic setting.
\cite{LI1} (see also \cite{LI2}) studies parabolic operators with $L^{\frac{N+1}{2}}$ time-independent coefficients, obtaining a unique continuation property at a fixed time $t_0$: the technique used relies on a representation formula for solutions of parabolic equations in terms of eigenvalues of the corresponding elliptic operator and cannot be applied to more general equations with time-dependent coefficients. \cite{SS} and \cite{SO} use parabolic variants of the Carleman weighted inequalities to obtain a unique continuation property at a fixed time $t_0$ for parabolic operators with time-dependent coefficients. In this direction, it is worth mentioning the work of Chen \cite{CH}, which contains not only a unique continuation result but also some local asymptotic analysis of solutions to parabolic inequalities with bounded coefficients; the approach is based on recasting the equations in terms of parabolic self-similar variables. We also quote \cite{AV,E,EF,EKPV,EV,FE} for unique continuation results for parabolic equations with time-dependent potentials by Carleman inequalities and monotonicity methods. The present paper is organized as follows. In section \ref{sec:parabolic-hardy-type}, we state some parabolic Hardy type inequalities and weighted Sobolev embeddings related to equations (\ref{prob1}) and (\ref{prob2}). In section \ref{sec:spectr-analys-self}, we completely describe the spectrum of the operator $L$ defined in (\ref{eq:13}) and prove Proposition \ref{p:explicit_spectrum}. Section \ref{sec:almgren} contains an Almgren parabolic monotonicity formula which provides the unique continuation principle stated in Proposition \ref{p:uniq_cont} and is used in section \ref{sec:blow-up-analysis}, together with a blow-up method, to prove Theorems \ref{asym1} and \ref{asym2}. \medskip \noindent {\bf Notation.
} We list below some notation used throughout the paper.\par \begin{itemize} \item[-] ${\rm const}$ denotes some positive constant which may vary from formula to formula. \item[-] $dS$ denotes the volume element on the unit $(N-1)$-dimensional sphere ${\mathbb S}^{N-1}$. \item[-] $\omega_{N-1}$ denotes the volume of ${\mathbb S}^{N-1}$, i.e. $\omega_{N-1}=\int_{{\mathbb S}^{N-1}}dS(\theta)$. \item[-] For all $s\in{\mathbb R}$, $(s)_i$ denotes the Pochhammer symbol $(s)_i=\prod_{j=0}^{i-1}(s+j)$, $(s)_0=1$. \end{itemize} \section{Parabolic Hardy type inequalities and weighted Sobolev embeddings}\label{sec:parabolic-hardy-type} The following lemma provides a Hardy type inequality for parabolic operators. We refer to \cite[Proposition 3.1]{poon} for a proof. \begin{Lemma} \label{Hardytemp} For every $t>0$ and $u\in {\mathcal H}_t$ there holds $$ \int_{{\mathbb R}^N}\dfrac{u^2(x)}{|x|^2}\,G(x,t)\,dx\leq \dfrac{1}{(N-2)t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx+ \dfrac{4}{(N-2)^2}\int_{{\mathbb R}^N}|\nabla u(x)|^2\,G(x,t)\,dx. $$ \end{Lemma} \noindent In the anisotropic version of the above inequality, a crucial role is played by the first eigenvalue $\mu_1(a)$ of the angular operator $-\Delta_{\mathbb S^{N-1}}-a(\theta)$ on the unit sphere $\mathbb S^{N-1}$ defined in (\ref{eq:67}). \begin{Lemma}\label{Hardy_aniso} For every $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$, $t>0$, and $u\in {\mathcal H}_t$, there holds \begin{multline*} \int_{{\mathbb R}^N}\bigg(|\nabla u(x)|^2-\frac{a(x/|x|)}{|x|^2}\,u^2(x)\bigg) \,G(x,t)\,dx+\frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\\ \geq \bigg(\mu_1(a)+\frac{(N-2)^2}4\bigg) \int_{{\mathbb R}^N}\dfrac{u^2(x)}{|x|^2}\,G(x,t)\,dx. \end{multline*} \end{Lemma} \begin{pf} Let $u\in C^\infty_{\rm c}({\mathbb R}^N\setminus\{0\})$.
The gradient of $u$ can be written in polar coordinates as \begin{displaymath} \nabla u(x)=\big(\partial_ru(r,\theta)\big)\theta+ \frac1r\nabla_{{\mathbb S}^{N-1}}u(r,\theta), \quad r=|x|,\quad \theta=\frac{x}{|x|}, \end{displaymath} hence \begin{displaymath} |\nabla u(x)|^2 =\big|\partial_ru(r,\theta)\big|^2+ \frac1{r^2}\big|\nabla_{{\mathbb S}^{N-1}}u(r,\theta) \big|^2 \end{displaymath} and \begin{multline}\label{eq:1} \int_{{\mathbb R}^N}\bigg(|\nabla u(x)|^2-\frac{a(x/|x|)}{|x|^2}\,u^2(x)\bigg) \,G(x,t)\,dx\\= t^{-\frac N2}\int_{{\mathbb S}^{N-1}}\bigg( \int_0^{+\infty}r^{N-1}e^{-\frac{r^2}{4t}} |\partial_ru(r,\theta)|^2\,dr\bigg)\,dS(\theta)\\ +t^{-\frac N2} \int_0^{+\infty}\frac{r^{N-1}e^{-\frac{r^2}{4t}}}{r^2}\bigg(\int_{{\mathbb S}^{N-1}}\left[|\nabla_{{\mathbb S}^{N-1}}u(r,\theta)|^2-a(\theta) |u(r,\theta)|^2\right]\,dS(\theta)\bigg)\,dr. \end{multline} For all $\theta\in{\mathbb S}^{N-1}$, let $\varphi_{\theta}\in C^\infty_{\rm c}((0,+\infty))$ be defined by $\varphi_{\theta}(r)=u(r,\theta)$, and $\widetilde\varphi_{\theta}\in C^\infty_{\rm c}({\mathbb R}^N\setminus\{0\})$ be the radially symmetric function given by $\widetilde\varphi_{\theta}(x)=\varphi_{\theta}(|x|)$. 
From Lemma \ref{Hardytemp}, it follows that \begin{align}\label{eq:2} t^{-\frac N2} \int_{{\mathbb S}^{N-1}}&\bigg( \int_0^{+\infty}r^{N-1}e^{-\frac{r^2}{4t}} |\partial_ru(r,\theta)|^2\,dr\bigg)\,dS(\theta)\\ \notag& = t^{-\frac N2}\int_{{\mathbb S}^{N-1}}\bigg( \int_0^{+\infty}r^{N-1} e^{-\frac{r^2}{4t}}|\varphi_{\theta}'(r)|^2\,dr\bigg)\,dS(\theta)\\ \notag& =\frac1{\omega_{N-1}} \int_{{\mathbb S}^{N-1}}\bigg( \int_{{\mathbb R}^N}|\nabla\widetilde \varphi_{\theta}(x)|^2G(x,t)\,dx\bigg)\,dS(\theta) \\ &\notag\geq \frac1{\omega_{N-1}} \frac{(N-2)^2}{4} \int_{{\mathbb S}^{N-1}}\bigg(\int_{{\mathbb R}^N}\frac{|\widetilde \varphi_{\theta}(x)|^2}{|x|^2}G(x,t)\,dx\bigg)\,dS(\theta)\\ &\notag \quad-\frac1{\omega_{N-1}}\frac{N-2}{4t}\int_{{\mathbb S}^{N-1}}\bigg(\int_{{\mathbb R}^N}|\widetilde \varphi_{\theta}(x)|^2G(x,t)\,dx\bigg)\,dS(\theta) \\ \notag&=t^{-\frac N2}\frac{(N-2)^2}4 \int_{{\mathbb S}^{N-1}}\bigg( \int_0^{+\infty}\frac{r^{N-1}e^{-\frac{r^2}{4t}}}{r^2}|u(r,\theta)|^2\,dr\bigg) \,dS(\theta)\\ \notag&-t^{-\frac N2}\frac{N-2}{4t}\int_{{\mathbb S}^{N-1}}\bigg( \int_0^{+\infty}r^{N-1}e^{-\frac{r^2}{4t}}|u(r,\theta)|^2\,dr\bigg)\,dS(\theta)\\ \notag& = \frac{(N-2)^2}4 \int_{{\mathbb R}^N}\dfrac{u^2(x)}{|x|^2}\,G(x,t)\,dx- \frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx, \end{align} where $\omega_{N-1}$ denotes the volume of the unit sphere ${\mathbb S}^{N-1}$, i.e. $\omega_{N-1}=\int_{{\mathbb S}^{N-1}}dS(\theta)$. On the other hand, from the definition of $\mu_1(a)$ it follows that \begin{equation} \label{eq:3} \int_{{\mathbb S}^{N-1}}\!\!\left[|\nabla_{{\mathbb S}^{N-1}}u(r,\theta)|^2\!- a(\theta)|u(r,\theta)|^2\right]dS(\theta) \geq \mu_1(a)\int_{{\mathbb S}^{N-1}}\!|u(r,\theta)|^2dS(\theta). 
\end{equation} From (\ref{eq:1}), (\ref{eq:2}), and (\ref{eq:3}), we deduce that \begin{multline*} \int_{{\mathbb R}^N}\bigg(|\nabla u(x)|^2-\frac{a(x/|x|)}{|x|^2}\,u^2(x)\bigg) \,G(x,t)\,dx+\frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\\ \geq \bigg(\mu_1(a)+\frac{(N-2)^2}4\bigg) \int_{{\mathbb R}^N}\dfrac{u^2(x)}{|x|^2}\,G(x,t)\,dx, \end{multline*} for all $u\in C^\infty_{\rm c}({\mathbb R}^N\setminus\{0\})$, thus yielding the required inequality by density of $C^\infty_{\rm c}({\mathbb R}^N\setminus\{0\})$ in ${\mathcal H}_t$.~\end{pf} The following corollary provides a norm in ${\mathcal H}_t$ equivalent to $\|\cdot\|_{{\mathcal H}_t}$ and naturally related to the heat operator with the Hardy potential of equation (\ref{prob}). \begin{Corollary}\label{c:pos_def} Let $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfying (\ref{eq:posde}). Then, for every $t>0$, \begin{align*} &\inf_{u\in {\mathcal H}_t\setminus\{0\}}\frac{\int_{{\mathbb R}^N}\big(|\nabla u(x)|^2-\frac{a(x/|x|)}{|x|^2}\,u^2(x)\big) \,G(x,t)\,dx+\frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx} {\int_{{\mathbb R}^N}|\nabla u(x)|^2 \,G(x,t)\,dx+\frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx}\\[5pt] &\quad=\inf_{v\in {\mathcal H}\setminus\{0\}}\frac{\int_{{\mathbb R}^N}\big(|\nabla v(x)|^2-\frac{a(x/|x|)}{|x|^2}\,v^2(x)\big) \,G(x,1)\,dx+\frac{N-2}{4}\int_{{\mathbb R}^N}v^2(x)G(x,1)\,dx} {\int_{{\mathbb R}^N}|\nabla v(x)|^2 \,G(x,1)\,dx+\frac{N-2}{4}\int_{{\mathbb R}^N}v^2(x)G(x,1)\,dx}>0. \end{align*} \end{Corollary} \begin{pf} The equality of the two infimum levels follows by the change of variables $u(x)=v(x/\sqrt t)$. 
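In detail, setting $v(y)=u(\sqrt t\,y)$, the substitution $y=x/\sqrt t$ gives, as a brief sketch (using $G(x,t)=t^{-N/2}e^{-|x|^2/(4t)}$, the normalization underlying the factorization in (\ref{eq:1})):

```latex
% substitution y = x/\sqrt{t}, dx = t^{N/2}\,dy, \nabla u(x) = t^{-1/2}(\nabla v)(x/\sqrt{t}):
\int_{{\mathbb R}^N} u^2(x)\,G(x,t)\,dx
  = \int_{{\mathbb R}^N} v^2(y)\,G(y,1)\,dy,
\qquad
\int_{{\mathbb R}^N} |\nabla u(x)|^2\,G(x,t)\,dx
  = \frac{1}{t}\int_{{\mathbb R}^N} |\nabla v(y)|^2\,G(y,1)\,dy,
```

and the same computation gives $\int_{{\mathbb R}^N}\frac{a(x/|x|)}{|x|^2}\,u^2(x)G(x,t)\,dx=\frac1t\int_{{\mathbb R}^N}\frac{a(y/|y|)}{|y|^2}\,v^2(y)G(y,1)\,dy$; hence numerator and denominator of the quotient at time $t$ are both $t^{-1}$ times the corresponding quantities at time $1$, and this common factor cancels in the ratio.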
To prove that they are strictly positive, we argue by contradiction and assume that for every $\e>0$ there exists $v_\e\in {\mathcal H}\setminus\{0\}$ such that \begin{multline*} \int_{{\mathbb R}^N}\bigg(|\nabla v_\e(x)|^2-\frac{a(x/|x|)}{|x|^2}\,v_\e^2(x)\bigg) \,G(x,1)\,dx+\frac{N-2}{4}\int_{{\mathbb R}^N}v_\e^2(x)G(x,1)\,dx\\ <\e \bigg(\int_{{\mathbb R}^N}|\nabla v_\e(x)|^2\,G(x,1)\,dx+\frac{N-2}{4}\int_{{\mathbb R}^N}v_\e^2(x)G(x,1)\,dx\bigg), \end{multline*} which, by Lemma \ref{Hardy_aniso}, implies that \begin{multline*} \bigg(\mu_1\bigg(\frac{a}{1-\e}\bigg)+\frac{(N-2)^2}4\bigg) \int_{{\mathbb R}^N}\dfrac{v_\e^2(x)}{|x|^2}\,G(x,1)\,dx\\ \leq \int_{{\mathbb R}^N}\bigg(|\nabla v_\e(x)|^2-\frac{a(x/|x|)}{(1-\e)|x|^2}\,v_\e^2(x)\bigg) \,G(x,1)\,dx+\frac{N-2}{4}\int_{{\mathbb R}^N}v_\e^2(x)G(x,1)\,dx<0 \end{multline*} and consequently $$ \mu_1\bigg(\frac{a}{1-\e}\bigg)+\frac{(N-2)^2}4<0. $$ By continuity of the map $a\mapsto \mu_1(a)$ with respect to the $L^{\infty}\big({\mathbb S}^{N-1}\big)$-norm, letting $\e\to0$ in the above inequality yields $\mu_1(a)+\frac{(N-2)^2}4\leq0$, thus contradicting (\ref{eq:posde}). \end{pf} The above results, combined with the negligibility assumption \eqref{eq:h} on $h$, allow estimating the quadratic form associated to the linearly perturbed equation (\ref{prob1}) for small times as follows. \begin{Corollary}\label{c:pos_per} Let $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfy (\ref{eq:posde}) and $h\in L^\infty_{{\rm loc}}\big(({\mathbb R}^N\setminus \{0\})\times (0,T)\big)$ satisfy (\ref{eq:h}).
Then there exist $C_1',C_2>0$ and $\overline{T}_1>0$ such that for every $t\in(0,\overline{T}_1)$, $s\in(0,T)$, and $u\in {\mathcal H}_t$ there holds \begin{align*} \int_{{\mathbb R}^N}\bigg(|\nabla u(x)|^2&-\frac{a(x/|x|)}{|x|^2}\,u^2(x)-h(x,s)u^2(x)\bigg) \,G(x,t)\,dx\\ &\geq C_1'\int_{{\mathbb R}^N}\frac{u^2(x)}{|x|^2}\,G(x,t)\,dx -\frac{C_2}t\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\\ \int_{{\mathbb R}^N}\bigg(|\nabla u(x)|^2&-\frac{a(x/|x|)}{|x|^2}\,u^2(x)-h(x,s)u^2(x)\bigg) \,G(x,t)\,dx +\frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\\ &\geq C_1'\bigg(\int_{{\mathbb R}^N}|\nabla u(x)|^2\,G(x,t)\,dx+\frac1t\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\bigg). \end{align*} \end{Corollary} \begin{pf} From (\ref{eq:h}), we have that, for every $u\in {\mathcal H}_t$, there holds \begin{align}\label{eq:11} & \left| \int_{{\mathbb R}^N}h(x,s)u^2(x)G(x,t)\,dx\right| \leq C_h\bigg( \int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx+\int_{{\mathbb R}^N}|x|^{-2+\e}u^2(x)G(x,t)\,dx\bigg)\\ \notag &\leq C_h\bigg( \int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx+t^{\e/2}\int\limits_{|x|\leq\sqrt t }\frac{u^2(x)}{|x|^2}G(x,t)\,dx +t^{-1+\e/2}\int\limits_{|x|\geq\sqrt t }u^2(x)G(x,t)\,dx \bigg)\\ \notag&=\frac{C_h}{t}(t+t^{\e/2})\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx+ C_ht^{\e/2}\int_{{\mathbb R}^N}\frac{u^2(x)}{|x|^2}G(x,t)\,dx. \end{align} The stated inequalities follow from (\ref{eq:11}), Lemma \ref{Hardytemp}, Corollary \ref{c:pos_def}, and assumption (\ref{eq:posde}). \end{pf} In order to estimate the quadratic form associated to the nonlinearly perturbed equation (\ref{prob2}), we derive a Sobolev type embedding in spaces ${\mathcal H}_t$. To this purpose, we need the following inequality, whose proof can be found in \cite[Lemma 3]{EFV}. \begin{Lemma}\label{l:ineqx2} For every $u\in {\mathcal H}$, $|x|u\in{\mathcal L}$ and $$ \frac{1}{16} \int_{{\mathbb R}^N}|x|^2u^2(x)G(x,1)\,dx\leq \int_{{\mathbb R}^{N}}|\nabla u(x)|^2G(x,1)\,dx+\frac{N}{4}\int_{{\mathbb R}^N}u^2(x)G(x,1)\,dx. 
$$ \end{Lemma} The change of variables $u(x)=v(x/\sqrt t)$ in Lemma \ref{l:ineqx2} yields the following inequality in ${\mathcal H}_t$. \begin{Corollary}\label{cor:ineq} For every $u\in {\mathcal H}_t$, there holds $$ \frac{1}{16t^2} \int_{{\mathbb R}^N}|x|^2u^2(x)G(x,t)\,dx\leq \int_{{\mathbb R}^{N}}|\nabla u(x)|^2G(x,t)\,dx+\frac{N}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx. $$ \end{Corollary} From Lemma \ref{l:ineqx2} and classical Sobolev embeddings, we can easily deduce the following weighted Sobolev inequality (see also \cite{ES}). \begin{Lemma}\label{l:sob} For all $u\in {\mathcal H}$ and $s\in[2,2^*]$, there holds $u\sqrt{G(\cdot,1)}\in L^{s}({\mathbb R}^N)$. Moreover, for every $s\in[2,2^*]$ there exists $C_s>0$ such that $$ \bigg( \int_{{\mathbb R}^N}|u(x)|^sG^{\frac{s}{2}}(x,1)\,dx\bigg)^{\!\!\frac{2}{s}}\leq C_s\bigg( \int_{{\mathbb R}^{N}}\big(|\nabla u(x)|^2+u^2(x)\big)G(x,1)\,dx\bigg) $$ for all $u\in {\mathcal H}$. \end{Lemma} \begin{pf} From Lemma \ref{l:ineqx2}, it follows that, if $u\in{\mathcal H}$, then $u\sqrt{G(\cdot,1)}\in H^1({\mathbb R}^N)$; hence, by classical Sobolev embeddings, $u\sqrt{G(\cdot,1)}\in L^{s}({\mathbb R}^N)$ for all $s\in[2,2^*]$. The stated inequality follows from classical Sobolev inequalities and Lemma \ref{l:ineqx2}. \end{pf} The change of variables $u(x)=v(x/\sqrt t)$ in Lemma \ref{l:sob} yields the following inequality in ${\mathcal H}_t$. \begin{Corollary}\label{cor:ineqSob} For every $t>0$, $u\in {\mathcal H}_t$, and $2\leq s\leq 2^{*}$, there holds $$ \Big( \int_{{\mathbb R}^N}|u(x)|^{s}G^{\frac{s}{2}}(x,t)\,dx\Big)^{\!\!\frac{2}{s}}\leq C_s t^{-\frac{N}{s}\left(\frac{s-2}{2}\right)}\|u\|^{2}_{{\mathcal H}_t}. $$ \end{Corollary} The above Sobolev estimate allows proving the nonlinear counterpart of Corollary \ref{c:pos_per}.
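As an illustrative aside (no part of the argument relies on it), the two weighted inequalities used above — Lemma \ref{Hardytemp} at $t=1$ and Lemma \ref{l:ineqx2} — can be checked numerically on a radial function, for which all integrals reduce to one-dimensional quadratures. The following Python sketch is purely illustrative: the test function $u(r)=re^{-r}$, the dimension $N=5$, and the quadrature parameters are arbitrary choices, and $G(x,1)=e^{-|x|^2/4}$ is the normalization used in the proof of Lemma \ref{Hardy_aniso}.

```python
# Illustrative numerical check of Lemma "Hardytemp" at t = 1 and of the
# weighted inequality of Lemma "l:ineqx2", for the radial sample
# u(|x|) = |x| e^{-|x|} in dimension N = 5. For radial u, each integral
# against G(x,1) = e^{-|x|^2/4} reduces to
#   omega_{N-1} * int_0^infty r^power e^{-r^2/4} (...) dr,
# and the common factor omega_{N-1} cancels in the inequalities.
import math

N = 5

def u(r):  return r * math.exp(-r)
def du(r): return (1.0 - r) * math.exp(-r)       # u'(r)

def weighted_integral(f, power):
    # Simpson's rule for int_0^R r^power e^{-r^2/4} f(r) dr; R = 50 is far
    # beyond the region that matters for this rapidly decaying integrand.
    R, n = 50.0, 10000                           # n must be even
    h = R / n
    def g(r): return (r ** power) * math.exp(-r * r / 4.0) * f(r)
    s = g(0.0) + g(R)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

u2  = lambda r: u(r) ** 2
du2 = lambda r: du(r) ** 2

# Lemma "Hardytemp" (t = 1):
#   int u^2/|x|^2 G <= (1/(N-2)) int u^2 G + (4/(N-2)^2) int |grad u|^2 G
lhs_hardy = weighted_integral(u2, N - 3)
rhs_hardy = weighted_integral(u2, N - 1) / (N - 2) \
            + 4.0 / (N - 2) ** 2 * weighted_integral(du2, N - 1)

# Lemma "l:ineqx2":
#   (1/16) int |x|^2 u^2 G <= int |grad u|^2 G + (N/4) int u^2 G
lhs_x2 = weighted_integral(u2, N + 1) / 16.0
rhs_x2 = weighted_integral(du2, N - 1) + (N / 4.0) * weighted_integral(u2, N - 1)

assert lhs_hardy < rhs_hardy
assert lhs_x2 < rhs_x2
```

Both inequalities hold with a comfortable margin for this sample, as the lemmas guarantee for every admissible $u$.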
\begin{Corollary}\label{c:pos_per_nonlin} Let $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfy (\ref{eq:posde}) and $\varphi\in C^1({\mathbb R}^N\times(0,T)\times{\mathbb R})$ satisfy (\ref{eq:fi}) for some $1\leq p<2^{*}-1$. Then there exist $C_1''>0$ and a function $\overline{T}_2:(0,+\infty)\to{\mathbb R}$ such that, for every $R>0$, $t\in (0,\overline{T}_2(R))$, $s\in(0,T)$, and $u\in \{v\in {\mathcal H}_t\cap L^{p+1}({\mathbb R}^N): \|v\|_{L^{p+1}({\mathbb R}^N)}\leq R\}$, there holds \begin{align*} \int_{{\mathbb R}^N}\bigg(|\nabla u(x)|^2&-\frac{a(x/|x|)}{|x|^2}\,u^2(x)-\varphi(x,s,u(x))u(x)\bigg) \,G(x,t)\,dx +\frac{N-2}{4t}\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\\ &\geq C_1''\bigg(\int_{{\mathbb R}^N}|\nabla u(x)|^2\,G(x,t)\,dx+\frac1t\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\bigg). \end{align*} \end{Corollary} \begin{pf} From (\ref{eq:fi}), H\"{o}lder's inequality, and Corollary \ref{cor:ineqSob}, we have that, for all $u\in {\mathcal H}_t\cap L^{p+1}({\mathbb R}^N)$, there holds \begin{align}\label{eq:varphi} \bigg| \int_{{\mathbb R}^N}&\varphi(x,s,u(x))u(x)G(x,t)\,dx\bigg| \\ &\notag\leq C_\varphi\bigg( \int_{{\mathbb R}^N}\!\!\!u^{2}(x)G(x,t)\,dx+\int_{{\mathbb R}^N}\!\!\! u^{2}(x)|u(x)|^{p-1}G(x,t)\,dx\bigg)\\ \notag &\leq C_{\varphi}\bigg( \int_{{\mathbb R}^N}\!\!\!u^2(x)G(x,t)\,dx+ \bigg(\int_{{\mathbb R}^N}|u(x)|^{p+1}G^{\frac{p+1}{2}}(x,t)\,dx \bigg)^{\!\!\frac{2}{p+1}}\|u\|^{p-1}_{L^{p+1}({\mathbb R}^N)}\bigg)\\ \notag &\leq C_{\varphi}\bigg( C_{p+1} t^{\frac{(N+2)-p(N-2)}{2(p+1)}}\|u\|^{p-1}_{L^{p+1}({\mathbb R}^N)} \int_{{\mathbb R}^N}|\nabla u(x)|^2\,G(x,t)\,dx\\ \notag&\hskip2cm + \Big(t+C_{p+1}t^{\frac{(N+2)-p(N-2)}{2(p+1)}}\|u\|^{p-1}_{L^{p+1}({\mathbb R}^N)}\Big) \frac1t\int_{{\mathbb R}^N}u^2(x)G(x,t)\,dx\bigg) \end{align} with $C_{p+1}$ as in Corollary \ref{cor:ineqSob}. The stated inequality follows from Corollary \ref{c:pos_def} and (\ref{eq:varphi}) by choosing $t$ sufficiently small depending on $\|u\|_{L^{p+1}({\mathbb R}^N)}$. 
\end{pf} \section{Spectrum of Ornstein-Uhlenbeck type operators with inverse square potentials}\label{sec:spectr-analys-self} In this section we describe the spectral properties of the operator $L$ defined in (\ref{eq:13}), extending to anisotropic singular potentials the analysis carried out in \cite{vazquez_zuazua} for $a\equiv\lambda$ constant. Following \cite{ES}, we first prove the following compact embedding. \goodbreak \begin{Lemma}\label{l:compact} The space ${\mathcal H}$ is compactly embedded in ${\mathcal L}$. \end{Lemma} \begin{pf} Let us assume that $u_{k}\rightharpoonup u$ weakly in ${\mathcal H}$. By Rellich's theorem, $u_{k}\rightarrow u$ in $L^{2}_{\rm loc}({\mathbb R}^N)$. For every $R>0$ and $k\in{\mathbb N}$, we have \begin{equation}\label{eq:6} \dyle\int_{{\mathbb R}^N}|u_{k}-u|^2G(x,1)\,dx= A_k(R)+B_k(R) \end{equation} where \begin{equation}\label{eq:7} A_k(R)=\int_{\{|x|\leq R\}}|u_{k}(x)-u(x)|^2e^{-|x|^2/{4}}\,dx \to 0\quad\text{as }k\to+\infty,\quad \text{for every }R>0 \end{equation} and $$ B_k(R)= \int_{\{|x|>R\}}|u_{k}(x)-u(x)|^2G(x,1)\,dx. $$ From Lemma \ref{l:ineqx2} and the boundedness of $u_k$ in ${\mathcal H}$, we deduce that \begin{align}\label{eq:8} &B_{k}(R)\leq R^{-2}\dyle\int_{\{|x|>R\}}|x|^2|u_{k}(x)-u(x)|^2G(x,1)\,dx\\ \notag&\leq \frac{1}{R^2}\bigg(16\int_{{\mathbb R}^N}|\nabla (u_{k}-u)(x)|^2G(x,1)\,dx+4N\int_{{\mathbb R}^N}|u_{k}(x)-u(x)|^2G(x,1)\,dx\bigg) \leq \frac{\rm const}{R^2}. \end{align} Combining (\ref{eq:6}), (\ref{eq:7}), and (\ref{eq:8}), we obtain that $u_{k}\rightarrow u$ strongly in ${\mathcal L}$. \end{pf} From classical spectral theory we deduce the following abstract description of the spectrum of~$L$. \begin{Lemma}\label{l:hilbasis} Let $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ be such that (\ref{eq:posde}) holds. Then the spectrum of the operator $L$ defined in (\ref{eq:13}) consists of a diverging sequence of real eigenvalues with finite multiplicity.
Moreover, there exists an orthonormal basis of ${\mathcal L}$ whose elements belong to ${\mathcal H}$ and are eigenfunctions of $L$. \end{Lemma} \begin{pf} By Corollary \ref{c:pos_def} and the Lax-Milgram Theorem, the bounded linear self-adjoint operator $$ T:{\mathcal L}\to{\mathcal L},\quad T=\bigg(L+\frac{N-2}{4}\,{\rm Id}\bigg)^{-1} $$ is well defined. Moreover, by Lemma \ref{l:compact}, $T$ is compact. The result then follows from the Spectral Theorem. \end{pf} Let us now compute explicitly the eigenvalues of $L$ with the corresponding multiplicities and eigenfunctions by proving Proposition \ref{p:explicit_spectrum}. \begin{pfn}{Proposition \ref{p:explicit_spectrum}} Assume that $\gamma$ is an eigenvalue of $L$ and $g\in{\mathcal H}\setminus\{0\}$ is a corresponding eigenfunction, so that \begin{equation}\label{eq:50} -\Delta g(x)+ \frac{\nabla g(x)\cdot x}{2}-\frac{a(x/|x|)}{|x|^2}\,g(x)=\gamma\, g(x) \end{equation} in a weak ${\mathcal H}$-sense. From classical regularity theory for elliptic equations, $g\in C^{1,\alpha}_{\rm loc}({\mathbb R}^N\setminus\{0\})$. Hence $g$ can be expanded as \begin{equation*} g(x)=g(r\theta)=\sum_{k=1}^\infty\phi_k(r)\psi_k(\theta) \quad \text{in }L^2({\mathbb S}^{N-1}), \end{equation*} where $r=|x|\in(0,+\infty)$, $\theta=x/|x|\in{{\mathbb S}^{N-1}}$, and \begin{equation*} \phi_k(r)=\int_{{\mathbb S}^{N-1}}g(r\theta) \psi_k(\theta)\,dS(\theta). \end{equation*} Equations (\ref{eq:2rad}) and (\ref{eq:50}) imply that, for every $k$, \begin{equation}\label{eq:51} \phi''_{k}+\left(\dfrac{N-1}{r}-\dfrac{r}{2}\right) \phi'_{k}+\left(\gamma-\dfrac{\mu_k}{r^2}\right)\phi_{k}=0 \quad\text{in }(0,+\infty). 
\end{equation} Since $g\in {\mathcal H}$, we have that \begin{equation}\label{condition1} \infty>\int_{{\mathbb R}^N}g^2(x)G(x,1)\,dx=\int_{0}^{\infty} \!\bigg(\int_{{\mathbb S}^{N-1}}g^{2}(r\theta)\,dS(\theta)\bigg) r^{N-1}e^{-\frac{r^2}{4}}\,dr\geq \int_{0}^{\infty}r^{N-1}e^{-\frac{r^2}{4}}\phi_{k}^{2}(r)\,dr \end{equation} and, by the Hardy type inequality of Lemma \ref{Hardytemp}, \begin{equation}\label{condition2} \infty>\int_{{\mathbb R}^N}\dfrac{g^2(x)}{|x|^2} G(x,1)\,dx\geq\int_{0}^{\infty}r^{N-3}e^{-\frac{r^2}{4}}\phi_{k}^2(r)\,dr. \end{equation} For all $k=1,2,\dots$ and $t>0$, we define $w_{k}(t)=(4t)^{\frac{\alpha_k}{2}}\phi_k(\sqrt{4t})$, with $\alpha_{k}=\frac{N-2}{2}-\sqrt{\big(\frac{N-2}2\big)^{\!2}+\mu_{k}(a)}$. From (\ref{eq:51}), $w_k$ satisfies \begin{equation*} t w_{k}''(t)+\left(\frac{N}{2}-\alpha_k-t\right)w'_{k}(t)+ \left(\frac{\alpha_k}{2}+\gamma\right)w_{k}(t)=0\quad\text{in }(0,+\infty). \end{equation*} Therefore, $w_{k}$ is a solution of the well-known Kummer Confluent Hypergeometric Equation (see \cite{Abramowitz_Stegun} and \cite{macdonald}). Then there exist $A_k,B_k\in{\mathbb R}$ such that $$ w_k(t)=A_k M\Big(-\frac{\alpha_k}{2}-\gamma,\frac N2-\alpha_k,t\Big) +B_k U\Big(-\frac{\alpha_k}{2}-\gamma,\frac N2-\alpha_k,t\Big), \quad t\in (0,+\infty). $$ Here $M(c,b,t)$ and, respectively, $U(c,b,t)$ denote the Kummer function (or confluent hypergeometric function) and, respectively, the Tricomi function (or confluent hypergeometric function of the second kind); $M(c,b,t)$ and $U(c,b,t)$ are two linearly independent solutions to the Kummer Confluent Hypergeometric Equation $$ tw''(t)+(b-t)w'(t)-cw(t)=0,\quad t\in (0,+\infty). $$ Since $\big(\frac N2-\alpha_k\big)>1$, from the well-known asymptotics of $U$ at $0$ (see e.g.
\cite{Abramowitz_Stegun}), we have that $$ U\Big(-\frac{\alpha_k}{2}-\gamma,\frac N2-\alpha_k,t\Big) \sim \text{\rm const}\,t^{1-\frac{N}{2}+\alpha_k} \quad\text{as }t\to 0^+, $$ for some $\text{\rm const}\neq 0$ depending only on $N,\gamma$, and $\alpha_k$. On the other hand, $M$ is the sum of the series $$ M(c,b,t)=\sum_{n=0}^\infty \frac{(c)_n}{(b)_n}\,\frac{t^n}{n!}. $$ We notice that $M$ has a finite limit at $0^+$, while its behavior at $\infty$ is singular and depends on the value $-c=\frac{\alpha_k}{2}+\gamma$. If $\frac{\alpha_k}{2}+\gamma=m\in {\mathbb N}=\{0,1,2,\cdots\}$, then $ M\big(-\frac{\alpha_k}{2}-\gamma,\frac N2-\alpha_k,t\big)$ is a polynomial of degree $m$ in $t$, which we will denote as $P_{k,m}$, i.e., $$ P_{k,m}(t)=M\Big(-m,{\textstyle{\frac N2}}-\alpha_k,t\Big)= \sum_{n=0}^m \frac{(-m)_n}{\big(\frac{N}2-\alpha_k\big)_n}\,\frac{t^n}{n!}. $$ If $\big(\frac{\alpha_k}{2}+\gamma\big)\not\in {\mathbb N}$, then from the well-known asymptotics of $M$ at $\infty$ (see e.g. \cite{Abramowitz_Stegun}) we have that $$ M\Big(-\frac{\alpha_k}{2}-\gamma,\frac N2-\alpha_k,t\Big) \sim \text{\rm const}\,e^tt^{-\frac{N}{2}+\frac{\alpha_k}2-\gamma} \quad\text{as }t\to +\infty, $$ for some $\text{\rm const}\neq 0$ depending only on $N,\gamma$, and $\alpha_k$. Now, let us fix $k\in{\mathbb N}$, $k\geq 1$. From the above description, we have that $$ w_k(t)\sim {\rm const\,}B_k t^{1-\frac{N}{2}+\alpha_k} \quad\text{as }t\to 0^+, $$ for some $\text{\rm const}\neq 0$, and hence $$ \phi_k(r)=r^{-\alpha_k}w_k\Big(\frac{r^2}{4}\Big)\sim {\rm const\,}B_k r^{2-N+\alpha_k} \quad\text{as }r\to 0^+, $$ for some $\text{\rm const}\neq 0$. Therefore, condition (\ref{condition2}) can be satisfied only for $B_k=0$. 
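The series termination just used is elementary to illustrate numerically: for $c=-m$ with $m\in{\mathbb N}$ the Pochhammer factor $(-m)_n$ vanishes for $n>m$, and the resulting degree-$m$ polynomial solves Kummer's equation. A minimal Python sketch (the sample values $m=4$, $b=7/2$ are arbitrary and not tied to the spectral data of the problem):

```python
# Illustration: for c = -m with m a nonnegative integer, the Pochhammer
# factor (-m)_n vanishes for n > m, so the Kummer series M(-m, b, t)
# truncates to a degree-m polynomial, which solves Kummer's equation
#   t w'' + (b - t) w' - c w = 0.
from math import factorial

def pochhammer(s, i):
    # (s)_i = s (s+1) ... (s+i-1), with (s)_0 = 1
    out = 1.0
    for j in range(i):
        out *= s + j
    return out

def kummer_poly(m, b):
    # coefficients of M(-m, b, t) = sum_{n=0}^{m} (-m)_n / ((b)_n n!) t^n
    return [pochhammer(-m, n) / (pochhammer(b, n) * factorial(n))
            for n in range(m + 1)]

def kummer_residual(m, b, t):
    c = kummer_poly(m, b)
    w  = sum(cn * t ** n for n, cn in enumerate(c))
    w1 = sum(n * cn * t ** (n - 1) for n, cn in enumerate(c) if n >= 1)
    w2 = sum(n * (n - 1) * cn * t ** (n - 2) for n, cn in enumerate(c) if n >= 2)
    return t * w2 + (b - t) * w1 + m * w      # -c w = +m w since c = -m

# the series terminates: (-m)_n = 0 once n exceeds m
assert pochhammer(-4, 5) == 0.0
# the truncated polynomial solves Kummer's equation (up to roundoff)
assert abs(kummer_residual(4, 3.5, 1.7)) < 1e-10
assert abs(kummer_residual(4, 3.5, 0.3)) < 1e-10
```
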
If $\frac{\alpha_k}{2}+\gamma\not\in {\mathbb N}$, then $$ w_k(t)\sim {\rm const\,}A_ke^t t^{-\frac{N}{2}+\frac{\alpha_k}2-\gamma} \quad\text{as }t\to +\infty, $$ for some $\text{\rm const}\neq 0$, and hence $$ \phi_k(r)=r^{-\alpha_k}w_k\Big(\frac{r^2}{4}\Big)\sim {\rm const\,}A_k r^{-N-2\gamma}e^{r^2/4} \quad\text{as }r\to +\infty, $$ for some $\text{\rm const}\neq 0$. Therefore, condition (\ref{condition1}) can be satisfied only for $A_k=0$. If $\frac{\alpha_k}{2}+\gamma=m\in {\mathbb N}$, then $r^{-\alpha_k}P_{k,m}\big(\frac{r^2}{4}\big)$ solves (\ref{eq:51}); moreover the function $$ |x|^{-\alpha_k}P_{k,m}\Big(\frac{|x|^2}{4}\Big) \psi_k\Big(\frac{x}{|x|}\Big) $$ belongs to ${\mathcal H}$, thus providing an eigenfunction of $L$. We can conclude from the above discussion that if $\frac{\alpha_k}{2}+\gamma\not\in {\mathbb N}$ for all $k\in{\mathbb N}$, $k\geq 1$, then $\gamma$ is not an eigenvalue of $L$. On the other hand, if there exist $k_0,m_0\in{\mathbb N}$, $k_0\geq1$, such that $$ \gamma=\gamma_{m_0,k_0}=m_0-\frac{\alpha_{k_0}}{2} $$ then $\gamma$ is an eigenvalue of $L$ with multiplicity \begin{equation}\label{eq:cardinal} m(\gamma)=m(\gamma_{m_0,k_0})=\#\bigg\{j\in{\mathbb N},j\geq 1: \gamma_{m_0,k_0}+\frac{\alpha_j}2\in{\mathbb N}\bigg\}<+\infty \end{equation} and a basis of the corresponding eigenspace is $$ \left\{ |x|^{-\alpha_j}P_{j,\gamma_{m_0,k_0}+{\alpha_j}/2}\bigg(\frac{|x|^2}{4}\bigg) \psi_j\Big(\frac{x}{|x|}\Big): j\in{\mathbb N},j\geq 1,\gamma_{m_0,k_0}+\frac{\alpha_j}2\in{\mathbb N} \right\}. $$ The proof is thereby complete. \end{pfn} \begin{remark}\label{rem:chen} If $a(\theta)\equiv0$, then $\mu_k(0)=k(N+k-2)$, so that $\alpha_k=\frac{(N-2)}{2}-\sqrt{\big(\frac{N-2}{2}+k\big)^2}=-k$, and $\gamma_{m,k}=\frac{k}{2}+m$. Hence, in this case we recover the well known fact (see e.g. \cite{CH} and \cite{ES}) that the eigenvalues of the Ornstein-Uhlenbeck operator $-\Delta+\frac{x}2\cdot\nabla$ are the positive half-integer numbers. 
\end{remark} \begin{remark}\label{rem:ortho} Due to the orthogonality of the eigenfunctions $\{\psi_k\}_k$ in $L^2({\mathbb S}^{N-1})$, it is easy to verify that $$ \text{if }(m_1,k_1)\neq(m_2,k_2)\quad\text{then}\quad V_{m_1,k_1}\text{ and } V_{m_2,k_2}\text{ are orthogonal in }{\mathcal L}. $$ By Lemma \ref{l:hilbasis}, it follows that $$ \left\{ \widetilde V_{n,j}= \frac{V_{n,j}}{\|V_{n,j}\|_{\mathcal L}}: j,n\in{\mathbb N},j\geq 1\right\} $$ is an orthonormal basis of ${\mathcal L}$. \end{remark} \section{The parabolic Almgren monotonicity formula}\label{sec:almgren} Throughout this section, we will assume that $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfies (\ref{eq:posde}) and \emph{either} \begin{align}\label{eq:82}\tag{${\bf I}$} \text{$u$ is a weak solution to (\ref{prob1}) with $h$ satisfying (\ref{eq:der}) and (\ref{eq:h})} \end{align} \emph{or} \begin{align}\label{eq:83}\tag{${\bf II}$} \text{$u$ satisfies (\ref{eq:u1}--\ref{eq:u2}) and weakly solves (\ref{prob2}) for some $\varphi\in C^1({\mathbb R}^N\times(0,T)\times{\mathbb R})$ satisfying (\ref{eq:fi})}. \end{align} We define $$ f(x,t, s)= \begin{cases} h(x,t)s,&\text{ in case {\bf (I)}},\\ \varphi(x,t, s),&\text{ in case {\bf (II)}}, \end{cases} $$ so that, in both cases, $u$ is a weak solution to (\ref{prob}) in ${\mathbb R}^N\times(0,T)$ in the sense of Definition \ref{def:solution}. Let \begin{equation}\label{eq:5} \overline T= \begin{cases} \overline{T}_1,&\text{ in case {\bf (I)}},\\ \overline{T}_2(R_0),&\text{ in case {\bf (II)}}, \end{cases} \quad\text{and}\quad C_1= \begin{cases} C_1',&\text{ in case {\bf (I)}},\\ C_1'',&\text{ in case {\bf (II)}}, \end{cases} \end{equation} where $C_1',\overline{T}_1$ are as in Corollary \ref{c:pos_per} and $C_1'',\overline{T}_2(R_0)$ are as in Corollary \ref{c:pos_per_nonlin}, with $$ R_0=\sup_{t\in(0,T)}\|u(\cdot,t)\|_{L^{p+1}({\mathbb R}^N)} $$ (notice that $R_0$ is finite by assumption (\ref{eq:u1})).
We denote \begin{equation*} \alpha=\frac{T}{2\big(\big\lfloor{T}/{\overline T}\big\rfloor+1\big)}, \end{equation*} where $\lfloor \cdot\rfloor$ denotes the floor function, i.e. $\lfloor x\rfloor:=\max\{j\in{\mathbb Z}:\ j\leq x\}$. Then $$ (0,T)=\bigcup_{i=1}^k(a_i,b_i) $$ where $$ k=2\big(\big\lfloor{T}/{\overline T}\big\rfloor+1\big)-1, \quad a_i=(i-1)\alpha,\quad\text{and}\quad b_i=(i+1)\alpha. $$ We notice that $0<2\alpha<{\overline T}$ and $(a_i,b_i)\cap(a_{i+1},b_{i+1})= (i\alpha,(i+1)\alpha)\not =\emptyset$. For every $i$, $1\leq i\leq k$, we define \begin{equation}\label{eq:76} u_{i}(x,t)=u(x, t+a_i),\quad x\in{\mathbb R}^N,\ t\in(0,2\alpha). \end{equation} \begin{Lemma}\label{l:u_i} For every $i=1,\dots,k$, the function $u_i$ defined in (\ref{eq:76}) is a weak solution to \begin{equation}\label{prob_i} (u_i)_t+\Delta u_i+\dfrac{a(x/|x|)}{|x|^2}\,u_i+f(x,t+a_i, u_i(x,t))=0 \end{equation} in ${\mathbb R}^N\times (0,2\alpha)$ in the sense of Definition \ref{def:solution}. Furthermore, the function $v_i(x,t):=u_i(\sqrt{t}x,t)$ is a weak solution to \begin{equation}\label{eq:eqforv_i} (v_i)_t+\frac1t\bigg(\Delta v_i-\frac x2\cdot \nabla v_i+ \dfrac{a(x/|x|)}{|x|^2}\,v_i+tf\big(\sqrt tx,t+a_i,v_i(x,t)\big)\bigg)=0 \end{equation} in ${\mathbb R}^N\times(0,2\alpha)$ in the sense of Remark \ref{rem:v2}. \end{Lemma} \begin{pf} If $i=1$, then $a_1=0$, $u_1(x,t)=u (x,t)$ in ${\mathbb R}^N\times (0,2\alpha)$, and we immediately conclude. 
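The interval-splitting of $(0,T)$ introduced before Lemma \ref{l:u_i} can be checked directly; a minimal Python sketch (the sample values $T=1$ and $\overline T=0.3$ are arbitrary choices with $0<\overline T<T$):

```python
# Sanity check of the covering of (0, T): with
#   alpha = T / (2 (floor(T / Tbar) + 1)),   k = 2 (floor(T / Tbar) + 1) - 1,
#   a_i = (i - 1) alpha,   b_i = (i + 1) alpha,
# the intervals (a_i, b_i), i = 1, ..., k, cover (0, T), each has length
# 2 alpha < Tbar, and consecutive intervals overlap on (i alpha, (i+1) alpha).
import math

T, Tbar = 1.0, 0.3          # arbitrary sample values with 0 < Tbar < T

q = math.floor(T / Tbar)
alpha = T / (2 * (q + 1))
k = 2 * (q + 1) - 1
intervals = [((i - 1) * alpha, (i + 1) * alpha) for i in range(1, k + 1)]

assert 0 < 2 * alpha < Tbar                       # each interval shorter than Tbar
assert intervals[0][0] == 0.0                     # the covering starts at 0
assert abs(intervals[-1][1] - T) < 1e-12          # ... and ends at T
assert all(intervals[i + 1][0] < intervals[i][1]  # consecutive intervals overlap
           for i in range(k - 1))
```

Note that $2\alpha=T/(\lfloor T/\overline T\rfloor+1)<\overline T$ holds strictly because $\lfloor T/\overline T\rfloor+1>T/\overline T$.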
For every $1<i\leq k$ we have $a_i\neq 0$ and, with $G(x,t)$ as in (\ref{eq:heatker}), the following properties hold for all $t\in(a_i,b_i)$: \begin{align*} {\rm (i)}\quad &G\big(x,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big)G(x,t)= \big({\textstyle{\frac{t^2}{a_i}}}\big)^{-N/2}G(x,t-a_i);\\ {\rm (ii)}\quad & \text{if $\phi\in {\mathcal H}_{t-a_i}$, then $\phi\,G\big(\cdot,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big) \in {\mathcal H}_{t}$};\\ {\rm (iii)}\quad & \text{if $\psi\in ({\mathcal H}_t)^\star$, then $\psi\in ({\mathcal H}_{t-a_{i}})^\star$ and}\\ &{\phantom{\big\langle}}_{{\mathcal H}_{t-a_i}^\star}\big\langle \psi,\phi \big\rangle_{{\mathcal H}_{t-a_i}}=\bigg(\dfrac{t^2}{a_i}\bigg)^{\!\!\frac{N}{2}} \!\!\!\!\!{\phantom{\big\langle}}_{{\mathcal H}_{t}^\star}\Big\langle \psi, \phi\,G\big(\cdot,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big) \Big\rangle_{{\mathcal H}_{t}},\quad \text{for all $\phi\in {\mathcal H}_{t-a_i}$}. \end{align*} Let $1<i\leq k$ and $\phi\in {\mathcal H}_{t-a_i}$. Due to (ii), $\phi\,G\big(\cdot,\frac{t(t-a_i)}{a_i}\big)\in {\mathcal H}_{t}$ and then, since $u$ is a solution to (\ref{prob}) in the sense of Definition \ref{def:solution}, for a.e. $t\in(a_i,b_i)$ we have \begin{multline} {\phantom{\bigg\langle}}_{{\mathcal H}_t^\star}\bigg\langle u_t+\frac{\nabla u\cdot x}{2t},\phi\,G\big(x,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big) \bigg\rangle_{{\mathcal H}_t}\\= \int_{{\mathbb R}^N}\nabla u(x,t)\cdot \nabla \phi(x)\, G\big(x,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big)G(x,t)\,dx - \int_{{\mathbb R}^N}\phi(x) \dfrac{a_{i}x\cdot \nabla u(x,t)}{2(t-a_{i})t} G\big(x,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big)G(x,t)\,dx\\- \int_{{\mathbb R}^N}\frac{a(x/|x|)}{|x|^2}\,u(x,t)\phi(x) G\big(x,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big)G(x,t)\,dx - \int_{{\mathbb R}^N}f(x,t,u(x,t))\phi(x) G\big(x,{\textstyle{\frac{t(t-a_i)}{a_i}}}\big)G(x,t)\,dx.
\end{multline} Therefore, thanks to (i) and (iii), we obtain \begin{align*} {\phantom{\bigg\langle}}_{{\mathcal H}_{t-a_{i}}^\star}\bigg\langle u_t+\frac{\nabla u(x,t)\cdot x}{2(t-a_{i})} , \phi \bigg\rangle_{{\mathcal H}_{t-a_{i}}}=& \int_{{\mathbb R}^N}\bigg( \nabla u(x,t)\cdot \nabla \phi(x) -\frac{a(x/|x|)}{|x|^2}\,u(x,t)\phi(x)\bigg)\,G(x,t-a_i)\,dx\\ &- \int_{{\mathbb R}^N}f(x,t,u(x,t))\phi(x) G(x,t-a_i)\,dx. \end{align*} By the change of variables $s=t-a_{i}$, we conclude that $u_{i}(x,t)=u(x, t+a_i)$ is a weak solution to (\ref{prob_i}) in ${\mathbb R}^N\times (0,2\alpha)$ in the sense of Definition \ref{def:solution}. By a further change of variables, we easily obtain that $v_i(x,t):=u_i(\sqrt{t}x,t)$ is a weak solution to (\ref{eq:eqforv_i}) in ${\mathbb R}^N\times(0,2\alpha)$ in the sense of Remark~\ref{rem:v2}.~\end{pf} \noindent For every $i=1,\dots,k$, we define \begin{equation}\label{eq:Hi(t)} H_i(t)=\int_{{\mathbb R}^N}u_{i}^2(x,t)\, G(x,t)\,dx, \quad\text{for every }t\in (0,2\alpha), \end{equation} and \begin{equation}\label{eq:Di(t)} D_i(t)=\!\!\int_{{\mathbb R}^N}\!\!\bigg(|\nabla u_i(x,t)|^2- \dfrac{a\big(\frac{x}{|x|}\big)}{|x|^2}u_i^2(x,t)- f(x,t+a_i,u_i(x,t))u_{i}(x,t)\bigg)G(x,t)\,dx \end{equation} for a.e. $t\in (0,2\alpha)$. \begin{Lemma}\label{l:Hprime} For every $1\leq i\leq k$, $H_i\in W^{1,1}_{\rm loc}(0,2\alpha)$ and \begin{equation}\label{eq:10i} H'_i(t)=2\!\!\!\!{\phantom{\bigg\langle}}_{{\mathcal H}_t^\star}\bigg\langle (u_i)_t+\frac{\nabla u_i\cdot x}{2t},u_i(\cdot,t) \bigg\rangle_{{\mathcal H}_t}=2D_i(t)\quad\text{for a.e. }t\in(0,2\alpha). \end{equation} \end{Lemma} \begin{pf} It follows from Lemma \ref{l:u_i} and Remark \ref{rem:uv}. \end{pf} \noindent \begin{Lemma}\label{l:Hcreas} If $C_1$ is as in (\ref{eq:5}), then, for every $i=1,\dots,k$, the function $$ t\mapsto t^{-2C_1+\frac{N-2}{2}}H_i(t)$$ is nondecreasing in $(0,2\alpha)$. 
\end{Lemma} \begin{pf} From Lemma \ref{l:Hprime} and Corollaries \ref{c:pos_per} and \ref{c:pos_per_nonlin}, taking into account that $2\alpha<{\overline T}$, we have that, for all $t\in(0,2\alpha)$, $$ H'_{i}(t)\geq \frac1t\bigg(2C_1-\frac{N-2}2\bigg)H_{i}(t), $$ which implies $$ \frac{d}{dt}\bigg(t^{-2C_1+\frac{N-2}{2}}H_{i}(t)\bigg)\geq 0. $$ Hence the function $t\mapsto t^{-2C_1+\frac{N-2}{2}}H_i(t)$ is nondecreasing in $(0,2\alpha)$. \end{pf} \noindent \begin{Lemma}\label{l:Hpos} If $1\leq i\leq k$ and $H_i(\bar t)=0$ for some $\bar t\in(0,2\alpha)$, then $H_{i}(t)=0$ for all $t\in (0,\bar t\,]$. \end{Lemma} \begin{pf} From Lemma \ref{l:Hcreas}, the function $t\mapsto t^{-2C_1+\frac{N-2}{2}}H_{i}(t)$ is nondecreasing in $(0,2\alpha)$, nonnegative, and vanishing at $\bar t$. It follows that $H_i(t)=0$ for all $t\in (0,\bar t]$. \end{pf} \noindent The regularity of $D_i$ in $(0,2\alpha)$ is analyzed in the following lemma. \begin{Lemma}\label{l:Dprime} If $1\leq i\leq k$ and $T_i\in(0,2\alpha)$ is such that $u_i(\cdot,T_i)\in{\mathcal H}_{T_i}$, then \begin{itemize} \item[(i)] $\dyle\int_\tau^{T_i}\dyle\int_{{\mathbb R}^N}\bigg(\left|(u_i)_t(x,t)+\frac{\nabla u_i(x,t)\cdot x}{2t}\right|^2G(x,t)\,dx\bigg)\,dt<+\infty \quad\text{for all }\tau\in (0,T_i)$;\\[5pt] \item[(ii)] the function \begin{equation*} t\mapsto t D_i(t) \end{equation*} belongs to $W^{1,1}_{\rm loc}(0,T_i)$ and its weak derivative is, for a.e. 
$t\in(0,T_i)$, as follows: \end{itemize} \smallskip\noindent {in case \bf (I)} \begin{multline*} \frac{d}{dt}\, \big(tD_i(t)\big)=2t\int_{{\mathbb R}^N}\left|(u_i)_t(x,t)+\frac{\nabla u_i(x,t)\cdot x}{2t}\right|^2G(x,t)\,dx\\ + \int_{{\mathbb R}^N}h(x,t+a_i)\left(\frac{N-2}{2}u_i^2(x,t)+(\nabla u_i(x,t)\cdot x)u_i(x,t)-\frac{|x|^2}{4t}u_i^2(x,t)\right) \,G(x,t)\,dx\\ -t\int_{{\mathbb R}^N}h_{t}(x,t+a_i)u_i^2(x,t)G(x,t)\,dx; \end{multline*} {in case \bf (II)} \begin{align*} \frac{d}{dt}&\, \big(tD_i(t)\big)= 2t \int_{{\mathbb R}^N}\left|(u_i)_t(x,t)+\frac{\nabla u_i(x,t)\cdot x}{2t}\right|^2G(x,t)\,dx\\ & +t\int_{{\mathbb R}^N} \bigg( \varphi(x,t+a_i,u_i(x,t))-\frac{\partial\varphi}{\partial u_i} (x,t+a_i,u_i(x,t))u_i(x,t)\bigg)(u_i)_t(x,t)G(x,t)\,dx\\ &+\int_{{\mathbb R}^N} \bigg(\frac{N-2}2\varphi(x,t+a_i,u_i(x,t))u_i(x,t)-t \frac{\partial\varphi}{\partial t}(x,t+a_i,u_i(x,t)) u_i(x,t)\\ &\hskip2cm-N\Phi(x,t+a_i,u_i(x,t))-\nabla_x\Phi(x,t+a_i,u_i(x,t))\cdot x\bigg)G(x,t)\,dx\\ &+\int_{{\mathbb R}^N}\frac{|x|^2}{4t} \bigg(2\Phi(x,t+a_i,u_i(x,t))- \varphi(x,t+a_i,u_i(x,t))u_i(x,t)\bigg)G(x,t)\,dx \end{align*} where $$ \Phi(x,t,s)=\int_0^s\varphi(x,t,\xi)\,d\xi. $$ \end{Lemma} \begin{pf} Let us first consider case {\bf (I)}, i.e. $f (x,t, u)=h(x,t)u$, with $h(x,t)$ under conditions (\ref{eq:der}--\ref{eq:h}). We test equation (\ref{eq:eqforv_i}) with $(v_i)_t$; we notice that this is not an admissible test function for equation (\ref{eq:eqforv_i}) since a priori $(v_i)_t$ does not take values in ${\mathcal H}$. However the formal testing procedure can be made rigorous by a suitable approximation. 
Such a test combined with Corollary \ref{c:pos_per} yields, for all $t\in(0,T_i)$, \begin{align*} & \int_t^{T_i}s\bigg(\int_{{\mathbb R}^N}(v_i)_t^2(x,s)G(x,1)\,dx\bigg)\,ds\leq{\rm const\,}\bigg( \|u_i(\sqrt {T_i}\,\cdot,T_i)\|^2_{\mathcal H}+ \int_{{\mathbb R}^N}v_i^2(x,t)G(x,1)\,dx\\ & +\int_t^{T_i}\!\!\bigg(\int_{{\mathbb R}^N}\!\!h(\sqrt s x, s+a_i)\bigg( \frac{|x|^2}8v_i^2(x,s)- \frac{\nabla v_i(x,s)\cdot x }{2}v_i(x,s) -\frac{N-2}{4}v_i^2(x,s)\bigg)G(x,1)\,dx\bigg)\,ds\\ &+\frac{1}{2}\int_t^{T_i}\!\!s\bigg(\int_{{\mathbb R}^N}h_{s}(\sqrt s x, s+a_i)v_i^2(x,s)G(x,1)\,dx\bigg)\,ds\bigg). \end{align*} Since, in view of (\ref{eq:der}--\ref{eq:h}) and Lemmas \ref{Hardytemp} and \ref{l:ineqx2}, the integrals in the last two lines of the previous formula are finite for every $t\in(0,T_i)$, we conclude that $$ (v_i)_t\in L^2(\tau,T_i;{\mathcal L})\quad\text{for all }\tau\in (0,T_i). $$ Testing (\ref{eq:eqforv_i}) with $(v_i)_t$ also yields \begin{align*} &\int_t^{T_i}s\bigg(\int_{{\mathbb R}^N}(v_i)_t^2(x,s)G(x,1)\,dx\bigg)\,ds\\ &+ \frac12 \int_{{\mathbb R}^N}\bigg(|\nabla v_i(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\,v_i^2(x,t)-th(\sqrt t x, t+a_i)v_i^2(x,t)\bigg) G(x,1)\,dx\\ &=\frac12 \int_{{\mathbb R}^N}\bigg(|\nabla v_{0,i}(x)|^2-\frac{a(x/|x|)}{|x|^2}\,v^2_{0,i}(x)- T_i h(\sqrt { T_i} x, T_i+a_i)v_{0,i}^2(x)\bigg) G(x,1)\,dx \\ &+\int_t^{T_i}\!\!\bigg(\int_{{\mathbb R}^N}h(\sqrt s x, s+a_i)\bigg( \frac{|x|^2}8v_i^2(x,s)- \frac{\nabla v_i(x,s)\cdot x }{2}v_i(x,s) -\frac{N-2}{4}v_i^2(x,s)\bigg)G(x,1)\,dx\bigg)\,ds\\ &+\frac{1}{2}\int_t^{T_i}s\bigg(\int_{{\mathbb R}^N}h_{s}(\sqrt s x, s+a_i)v_i^2(x,s)G(x,1)\,dx\bigg)\,ds, \end{align*} for all $t\in (0, T_i)$, where $v_{0,i}(x):=u_i(\sqrt { T_i} x, T_i) \in{\mathcal H}$. 
Therefore the function $$ t\mapsto \int_{{\mathbb R}^N}\bigg(|\nabla v_i(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\,v_i^2(x,t)-th(\sqrt t x, t+a_i)v_i^2(x,t)\bigg) G(x,1)\,dx $$ is absolutely continuous in $(\tau,T_i)$ for all $\tau\in(0,T_i)$ and \begin{align*} \frac{d}{dt}&\int_{{\mathbb R}^N}\bigg(|\nabla v_i(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\,v_i^2(x,t)-th(\sqrt t x, t+a_i)v_i^2(x,t)\bigg) G(x,1)\,dx\\ &=2t \int_{{\mathbb R}^N}(v_i)_t^2(x,t)G(x,1)\,dx\\ &\quad-\int_{{\mathbb R}^N}h(\sqrt t x, t+a_i)\bigg( \frac{|x|^2}4v_i^2(x,t)- (\nabla v_i(x,t)\cdot x )v_i(x,t) -\frac{N-2}{2}v_i^2(x,t)\bigg)G(x,1)\,dx\\ &-t\int_{{\mathbb R}^N}h_{t}(\sqrt t x, t+a_i)v_i^2(x,t)G(x,1)\,dx. \end{align*} The change of variables $u_i(x,t)=v_i(x/\sqrt t,t)$ leads to the conclusion in case {\bf (I)}. Let us now consider case {\bf (II)}, i.e. $f (x,t, u)=\varphi (x,t, u)$ with $\varphi$ satisfying \eqref{eq:fi} and $u$ satisfying (\ref{eq:u1}--\ref{eq:u2}). We test equation (\ref{eq:eqforv_i}) with $(v_i)_t$ (passing through a suitable approximation) and, by Corollary \ref{c:pos_per_nonlin}, we obtain, for all $t\in(0,T_i)$, \begin{align*} \int_t^{T_i}s&\bigg(\int_{{\mathbb R}^N}(v_i)_t^2(x,s)G(x,1)\,dx\bigg)\,ds\leq{\rm const\,}\bigg( \|u_i(\sqrt {T_i}\,\cdot,T_i)\|^2_{\mathcal H}+ \int_{{\mathbb R}^N}v_i^2(x,t)G(x,1)\,dx\bigg)\\ &\quad -\int_t^{T_i}s\bigg(\int_{{\mathbb R}^N}\varphi(\sqrt s x, s+a_i, v_i(x,s))(v_i)_{t}(x,s)G(x,1)\,dx\bigg)\,ds\\ &\quad+\frac{1}{2}\int_t^{T_i}\frac{d}{ds}\bigg(s\int_{{\mathbb R}^N}\varphi(\sqrt s x, s+a_i,v_i(x,s))v_i(x,s)G(x,1)\,dx\bigg)\,ds. \end{align*} Since, in view of hypothesis \eqref{eq:fi} on $\varphi$, conditions \eqref{eq:u1} and \eqref{eq:u2} on $u$, and Lemma \ref{l:sob}, the integrals on the right hand side of the previous formula are finite for every $t\in(0,T_i)$, we conclude that $$ (v_i)_t\in L^2(\tau,T_i;{\mathcal L})\quad\text{for all }\tau\in (0,T_i).
$$
Testing (\ref{eq:eqforv_i}) with $(v_i)_t$ also yields
\begin{align*}
\int_t^{T_i}s&\bigg(\int_{{\mathbb R}^N}(v_i)_t^2(x,s)G(x,1)\,dx\bigg)\,ds\\
&\quad+ \frac12 \int_{{\mathbb R}^N}\bigg(|\nabla v_i(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\,v_i^2(x,t)-t\varphi(\sqrt t x, t+a_i, v_i(x,t))v_i(x,t)\bigg) G(x,1)\,dx\\
&=\frac12 \int_{{\mathbb R}^N}\bigg(|\nabla v_{0,i}(x)|^2-\frac{a(x/|x|)}{|x|^2}\,v^2_{0,i}(x)- T_i \varphi(\sqrt { T_i} x, T_i+a_i, v_{0,i}(x)) v_{0,i}(x)\bigg) G(x,1)\,dx \\
&\quad -\int_t^{T_i}s\bigg(\int_{{\mathbb R}^N} \varphi(\sqrt s x, s+a_i, v_i(x,s))(v_i)_{t}(x,s)G(x,1)\,dx\bigg)\,ds\\
&\quad+\frac{1}{2}\int_t^{T_i}\frac{d}{ds}\bigg(s\int_{{\mathbb R}^N}\varphi(\sqrt s x, s+a_i,v_i(x,s))v_i(x,s)G(x,1)\,dx\bigg)\,ds,
\end{align*}
for a.e. $t\in (0, T_i)$, where $v_{0,i}(x):=u_i(\sqrt { T_i} x, T_i) \in{\mathcal H}$. Therefore the function
$$
t\mapsto \int_{{\mathbb R}^N}\bigg(|\nabla v_i(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\,v_i^2(x,t)-t\varphi(\sqrt t x, t+a_i, v_i(x,t))v_i(x,t)\bigg) G(x,1)\,dx
$$
is absolutely continuous in $(\tau,T_i)$ for all $\tau\in(0,T_i)$ and
\begin{align*}
\frac{d}{dt}&\int_{{\mathbb R}^N}\bigg(|\nabla v_i(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\,v_i^2(x,t)-t\varphi(\sqrt t x, t+a_i, v_i(x,t))v_i (x,t)\bigg) G(x,1)\,dx\\
&=2t \int_{{\mathbb R}^N}(v_i)_t^2(x,t)G(x,1)\,dx +2t\int_{{\mathbb R}^N} \varphi(\sqrt t x, t+a_i, v_i(x,t))(v_i)_{t}(x,t)G(x,1)\,dx\\
&\quad-\frac{d}{dt}\bigg(t\int_{{\mathbb R}^N}\varphi(\sqrt t x, t+a_i,v_i(x,t))v_i(x,t)G(x,1)\,dx\bigg).
\end{align*} The change of variables $u_i(x,t)=v_i(x/\sqrt t,t)$ leads to \begin{gather*} \frac{d}{dt}(t D_i(t))=2t \int_{{\mathbb R}^N}\left|(u_i)_t(x,t)+\frac{\nabla u_i(x,t)\cdot x}{2t}\right|^2G(x,t)\,dx\\ +2t\int_{{\mathbb R}^N} \varphi(x, t+a_i, u_i(x,t)) (u_i)_t(x,t)G(x,t)\,dx+\int_{{\mathbb R}^N} \varphi(x, t+a_i, u_i(x,t)) \nabla u_i(x,t)\cdot x G(x,t)\,dx\\ -\frac{d}{dt}\bigg(t\int_{{\mathbb R}^N}\varphi( x, t+a_i,u_i(x,t))u_i(x,t)G(x,t)\,dx\bigg) \end{gather*} and hence \begin{gather*} \frac{d}{dt}(t D_i(t))=2t \int_{{\mathbb R}^N}\left|(u_i)_t(x,t)+\frac{\nabla u_i(x,t)\cdot x}{2t}\right|^2G(x,t)\,dx\\ +2t\int_{{\mathbb R}^N} \varphi(x, t+a_i, u_i(x,t)) (u_i)_t(x,t)G(x,t)\,dx+\int_{{\mathbb R}^N} \varphi(x, t+a_i, u_i(x,t)) \nabla u_i(x,t)\cdot x\, G(x,t)\,dx\\ +\frac{N-2}2\int_{{\mathbb R}^N}\varphi( x, t+a_i,u_i(x,t))u_i(x,t)G(x,t)\,dx -t \int_{{\mathbb R}^N}\frac{\partial \varphi}{\partial t}( x, t+a_i,u_i(x,t))u_i(x,t)G(x,t)\,dx\\ -t \int_{{\mathbb R}^N}\bigg(\frac{\partial \varphi}{\partial u_i}( x, t+a_i,u_i(x,t))u_i(x,t)+\varphi( x, t+a_i,u_i(x,t))\bigg) (u_i)_t(x,t)G(x,t)\,dx\\ -\int_{{\mathbb R}^N}\frac{|x|^2}{4t}\varphi( x, t+a_i,u_i(x,t))u_i(x,t)G(x,t)\,dx. \end{gather*} Integration by parts yields (these formal computations can be made rigorous through a suitable approximation) \begin{gather*} \int_{{\mathbb R}^N} \varphi(x, t+a_i, u_i(x,t)) \nabla u_i(x,t)\cdot x\, G(x,t)\,dx=-N\int_{{\mathbb R}^N} \Phi(x,t+a_i, u_i(x,t)) G(x,t)\,dx\\ + \int_{{\mathbb R}^N} \frac{|x|^2}{2t}\Phi(x,t+a_i, u_i(x,t)) G(x,t)\,dx- \int_{{\mathbb R}^N} \nabla_x\Phi(x,t+a_i, u_i(x,t))\cdot x G(x,t)\,dx \end{gather*} thus yielding the conclusion in case {\bf (II)}. \end{pf} For all $i=1,\dots,k$, let us introduce the \emph{Almgren type frequency function} associated to $u_i$ \begin{equation}\label{eq:9} N_i:(0,2\alpha)\to{\mathbb R}\cup\{-\infty,+\infty\}, \quad N_i(t):=\frac{tD_i(t)}{H_i(t)}. 
\end{equation}
Frequency functions associated to unperturbed parabolic equations of type (\ref{prob}) (i.e. in the case $f(x,t,s)\equiv 0$) were first studied by C.-C. Poon in \cite{poon}, where unique continuation properties are derived by proving monotonicity of the quotient in (\ref{eq:9}). Due to the presence of the perturbing function $f(x,t+a_i,u(x,t))$, the functions $N_i$ will not be nondecreasing as in the case treated by Poon; however, in both cases {\bf (I)} and {\bf (II)}, we can prove that their derivatives are integrable perturbations of nonnegative functions wherever the $N_i$'s are finite. Moreover, our analysis will show that the $N_i$'s are in fact finite throughout $(0,2\alpha)$.
\begin{Lemma}\label{l:Nprime}
Let $i\in \{1,\dots,k\}$. If there exist $\beta_i,T_i\in (0,2\alpha)$ such that
\begin{equation}\label{eq:10}
\beta_i<T_i, \quad H_i(t)>0 \text{ for all $t\in (\beta_i,T_i)$},\quad \text{and}\quad u_i(\cdot,T_i)\in {\mathcal H}_{T_i},
\end{equation}
then the function $N_i$ defined in \eqref{eq:9} belongs to $W^{1,1}_{\rm loc}(\beta_i,T_i)$ and
\begin{align*}
N'_i(t)={\nu}_{1i}(t)+{\nu}_{2i}(t)
\end{align*}
in a distributional sense and a.e.
in $(\beta_i,T_i)$ where
\begin{align*}
{\nu}_{1i}(t)&=\frac{2t}{H_{i}^2(t)}{{ \Bigg(\bigg(\int_{{\mathbb R}^N}\bigg|(u_i)_t(x,t)+\frac{\nabla u_i(x,t)\cdot x}{2t}\bigg|^2G(x,t)\,dx\bigg) \bigg(\int_{{\mathbb R}^N}u_{i}^2(x,t)\, G(x,t)\,dx\bigg)}}\\
&\hskip4cm{{-\bigg(\int_{{\mathbb R}^N}\Big((u_i)_t(x,t)+\frac{\nabla u_{i}(x,t)\cdot x}{2t}\Big)u_{i}(x,t)G(x,t)\,dx \bigg)^{\!2}\Bigg)}}
\end{align*}
and ${\nu}_{2i}$ is as follows:

\smallskip\noindent
{in case \bf (I)}
\begin{align*}
{\nu}_{2i}(t)&={{\dfrac1{H_{i}(t)} \int_{{\mathbb R}^N}h(x,t+a_i)\left(\frac{N-2}{2}u_{i}^2(x,t)+(\nabla u_{i}(x,t)\cdot x)u_i(x,t)-\frac{|x|^2}{4t}u_{i}^2(x,t)\right) G(x,t)\,dx}}\\
&\quad{{-\dfrac t{H_{i}(t)}\bigg(\int_{{\mathbb R}^N}h_{t}(x,t+a_i)u_{i}^2(x,t)G(x,t)\,dx\bigg)}},
\end{align*}
{in case \bf (II)}
\begin{align*}
{\nu}_{2i}(t)=\,&{\dfrac1{H_{i}(t)}} \bigg( t\int_{{\mathbb R}^N} \Big( \varphi(x,t+a_i,u_i(x,t))-\frac{\partial\varphi}{\partial u_i} (x,t+a_i,u_i(x,t))u_i(x,t)\Big)(u_i)_t(x,t)G(x,t)\,dx\\
&\hskip1cm+\int_{{\mathbb R}^N} \Big(\frac{N-2}2\varphi(x,t+a_i,u_i(x,t))u_i(x,t)-t \frac{\partial\varphi}{\partial t}(x,t+a_i,u_i(x,t)) u_i(x,t)\\
&\hskip3cm-N\Phi(x,t+a_i,u_i(x,t))-\nabla_x\Phi(x,t+a_i,u_i(x,t))\cdot x\Big)G(x,t)\,dx\\
&\hskip1cm+\int_{{\mathbb R}^N}\frac{|x|^2}{4t} \Big(2\Phi(x,t+a_i,u_i(x,t))- \varphi(x,t+a_i,u_i(x,t))u_i(x,t)\Big)G(x,t)\,dx\bigg).
\end{align*}
\end{Lemma}
\begin{pf}
From Lemmas \ref{l:Hprime} and \ref{l:Dprime}, it follows that $N_{i}\in W^{1,1}_{\rm loc}(\beta_i,T_i)$. From (\ref{eq:10i}) we deduce that
$$
N'_{i}(t)=\frac{(tD_{i}(t))'H_{i}(t)-tD_{i}(t)H'_{i}(t)}{H_{i}^2(t)}= \frac{(tD_{i}(t))'H_{i}(t)-2tD_{i}^2(t)}{H_{i}^2(t)},
$$
which yields the conclusion in view of (\ref{eq:Hi(t)}), (\ref{eq:Di(t)}), and Lemma \ref{l:Dprime}.
\end{pf}

\noindent The term ${\nu_{2i}}$ can be estimated as follows.
\begin{Lemma}\label{l:est_N_2} There exists $C_3>0$ such that, if $i\in \{1,\dots,k\}$ and $\beta_i,T_i\in (0,2\alpha)$ satisfy (\ref{eq:10}), then \begin{equation*} \Big|{\nu_{2i}}(t)\Big|\leq \begin{cases} C_3\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big) \big(t^{-1+\e/2}+\|h_t(\cdot,t+a_i)\|_{L^{N/2}({\mathbb R}^N)}\big), &\text{in case {\bf (I)}},\\[5pt] C_3\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big) \,t^{-1+\frac {N+2-p(N-2)}{2(p+1)}}, &\text{in case {\bf (II)} if $i=1$},\\[5pt] C_3\beta_i^{-1}\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big) \,t^{-1+\frac {N+2-p(N-2)}{2(p+1)}}, &\text{in case {\bf (II)} if $i>1$}, \end{cases} \end{equation*} for a.e. $t\in(\beta_i,T_i)$, where ${\nu_{2i}}$ is as in Lemma \ref{l:Nprime}. \end{Lemma} \begin{pf} Let us first consider case {\bf (I)}, i.e. $f (x,t, u)=h(x,t)u$, with $h(x,t)$ under conditions (\ref{eq:der}--\ref{eq:h}). In order to estimate ${\nu}_{2i}$ we observe that, from (\ref{eq:h}), \begin{align}\label{eq:15} &\bigg|\int_{{\mathbb R}^N}h(x,t+a_i)(\nabla u_{i}(x,t)\cdot x)u_{i}(x,t) G(x,t)\,dx\bigg|\\ &\notag\leq C_h\int_{{\mathbb R}^N}(1+|x|^{-2+\e})|\nabla u_{i}(x,t)||x||u_{i}(x,t) |G(x,t)\,dx\\ \notag &\leq C_ht \int_{{\mathbb R}^N}|\nabla u_{i}(x,t)|\frac{|x|}{t}|u_{i}(x,t)| G(x,t)\,dx +C_ht^{\e/2} \int_{\{|x|\leq \sqrt t\}}|\nabla u_{i}(x,t)|\frac{|u_{i}(x,t)|}{|x|} G(x,t)\,dx\\ \notag &\hskip4cm +C_ht^{\e/2}\int_{\{|x|\geq \sqrt t\}}|\nabla u_{i}(x,t)|\frac{|x|}{t}|u_{i}(x,t)| G(x,t)\,dx \\ \notag & \leq \frac12C_h(t+t^{\e/2})\int_{{\mathbb R}^N}|\nabla u_{i}(x,t)|^2G(x,t)\,dx+ \frac12C_h(t+t^{\e/2})\int_{{\mathbb R}^N}\frac{|x|^2}{t^2}u_{i}^2(x,t)G(x,t)\,dx\\ \notag &\hskip2cm + \frac12C_ht^{\e/2}\int_{{\mathbb R}^N}|\nabla u_{i}(x,t)|^2G(x,t)\,dx + \frac12C_ht^{\e/2}\int_{{\mathbb R}^N}\frac{u_{i}^2(x,t)}{|x|^2}G(x,t)\,dx\\ \notag&\leq \frac12C_ht^{\e/2}(2+{\overline T}^{1-\e/2}) \int_{{\mathbb R}^N}|\nabla u_{i}(x,t)|^2G(x,t)\,dx\\ \notag &\quad + \frac12C_ht^{\e/2}(1+{\overline T}^{1-\e/2})\int_{{\mathbb 
R}^N} \frac{|x|^2}{t^2}u_{i}^2(x,t)G(x,t)\,dx + \frac12C_ht^{\e/2}\int_{{\mathbb R}^N}\frac{u_{i}^2(x,t)}{|x|^2}G(x,t)\,dx , \end{align} and \begin{align}\label{eq:14} &\int_{{\mathbb R}^N}|h(x,t+a_i)||x|^2u_{i}^2(x,t) G(x,t)\,dx\leq C_h \int_{{\mathbb R}^N}|x|^2u_{i}^2(x,t) G(x,t)\,dx\\ \notag&\hskip7cm+C_h \int_{{\mathbb R}^N}|x|^{-2+\e}|x|^2u_{i}^2(x,t) G(x,t)\,dx\\ \notag &\leq C_h \int_{{\mathbb R}^N}|x|^2u_{i}^2(x,t) G(x,t)\,dx+C_h t^{\e/2} \int_{\{|x|\leq \sqrt t\}}u_{i}^2(x,t) G(x,t)\,dx\\ \notag & \hskip7cm+C_ht^{-1+\e/2} \int_{\{|x|\geq \sqrt t \}}|x|^2u_{i}^2(x,t) G(x,t)\,dx\\ \notag &\leq C_ht^{-1+\e/2}(1+{\overline T}^{1-\e/2}) \int_{{\mathbb R}^N}|x|^2u_{i}^2(x,t) G(x,t)\,dx+C_h t^{\e/2} \int_{{\mathbb R}^N}u_{i}^2(x,t) G(x,t)\,dx, \end{align} for a.e. $t\in (\beta_i,T_i)$. Moreover, by H\"older's inequality and Corollary \ref{cor:ineqSob}, \begin{align}\label{eq:hip} &\int_{{\mathbb R}^N}|h_t(x,t+a_i)|u_{i}^2(x,t) G(x,t)\,dx\leq C_{2^*}t^{-1} \|u_i\|^2_{{\mathcal H}_t}\|h_t(\cdot,t+a_i)\|_{L^{{N}/{2}}({\mathbb R}^N)} \end{align} for a.e. $t\in (\beta_i,T_i)$. Collecting (\ref{eq:11}), (\ref{eq:15}), (\ref{eq:14}) and (\ref{eq:hip}), we obtain that \begin{multline}\label{eq:16} \Big|{\nu_{2i}}(t)\Big|\leq \frac{{\rm const}\, t^{\e/2}}{H_{i}(t)} \bigg(\frac1t\int_{{\mathbb R}^N}u_{i}^2(x,t) G(x,t)\,dx+ \int_{{\mathbb R}^N}\frac{u_{i}^2(x,t)}{|x|^2}G(x,t)\,dx\\ +\int_{{\mathbb R}^N}|\nabla u_{i}(x,t)|^2G(x,t)\,dx+ \frac1{t^2}\int_{{\mathbb R}^N}|x|^2u_{i}^2(x,t) G(x,t)\,dx\bigg)\\ +\frac{C_{2^*}}{H_{i}(t)} \|u_i\|^2_{{\mathcal H}_t}\|h_t(\cdot,t+a_i)\|_{L^{{N}/{2}}({\mathbb R}^N)}. \end{multline} From inequality (\ref{eq:16}), Lemma \ref{Hardytemp}, Corollary \ref{c:pos_per}, and Corollary \ref{cor:ineq}, we deduce that there exists $C_3>0$ depending only on $C_h$, ${\overline T}$, and $N$, such that, for a.e. 
$t\in(\beta_i,T_i)$, \begin{align*} \Big|{\nu_{2i}}(t)\Big|&\leq \frac{C_3}{H_{i}(t)}\Big(tD_i(t)+{\textstyle{\frac{N-2}{4}}}H_i(t)\Big)\Big( t^{-1+\e/2}+\|h_t(\cdot,t+a_i)\|_{L^{{N}/{2}}({\mathbb R}^N)}\Big)\\ &=C_3\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big)\Big( t^{-1+\e/2}+\|h_t(\cdot,t+a_i)\|_{L^{{N}/{2}}({\mathbb R}^N)}\Big) \end{align*} thus completing the proof in case {\bf (I)}. Let us now consider case {\bf (II)}, i.e. $f (x,t, s)=\varphi (x,t, s)$ with $\varphi$ under condition \eqref{eq:fi} and $u$ satisfying \eqref{eq:u1} and \eqref{eq:u2}. From (\ref{eq:fi}), we have that \begin{gather}\label{eq:84} \Big|{\nu_{2i}}(t)\Big|\leq\frac{\rm const}{H_i(t)} \bigg(t\int_{{\mathbb R}^N}|u_i(x,t)|^q|(u_i)_t(x,t)|G(x,t)\,dx\\ \notag\quad+\int_{{\mathbb R}^N}\big(|u_i(x,t)|^2+|u_i(x,t)|^{p+1}\big)G(x,t)\,dx +\int_{{\mathbb R}^N}\frac{|x|^2}{t}\big(|u_i(x,t)|^2+|u_i(x,t)|^{p+1}\big)G(x,t)\,dx \bigg). \end{gather} From H\"older's inequality, Corollary \ref{cor:ineqSob}, and assumptions (\ref{eq:u1}--\ref{eq:u2}), it follows that \begin{align}\label{eq:85} t&\int_{{\mathbb R}^N}|u_i(x,t)|^q|(u_i)_t(x,t)|G(x,t)\,dx\\ \notag&\leq t \bigg(\int_{{\mathbb R}^N}|u_i(x,t)|^{p+1}G^{\frac{p+1}{2}}(x,t)\,dx \bigg)^{\!\!\frac{2}{p+1}}\|u(\cdot,t+a_i)\|^{q-2}_{L^{p+1}({\mathbb R}^N)} \|u_t(\cdot,t+a_i)\|_{L^{\frac{p+1}{p+1-q}}({\mathbb R}^N)}\\ \notag&\leq {\rm const\,}t^{-\frac N{p+1}\frac{p-1}2}\|u_i\|^2_{{\mathcal H}_t} \end{align} and, taking into account also Corollary \ref{cor:ineq}, \begin{align}\label{eq:86} \int_{{\mathbb R}^N}&\frac{|x|^2}{t}\big(|u_i(x,t)|^2+|u_i(x,t)|^{p+1}\big)G(x,t)\,dx \leq \int_{{\mathbb R}^N}\frac{|x|^2}{t}|u_i(x,t)|^2 G(x,t)\,dx \\ \notag &+ \frac{t+a_i}{t} \bigg(\int_{{\mathbb R}^N}|u_i(x,t)|^{p+1}G^{\frac{p+1}{2}}(x,t)\,dx \bigg)^{\!\!\frac{2}{p+1}} \bigg(\int_{{\mathbb R}^N}\bigg(\frac{|x|^2}{t+a_i} \bigg)^{\!\!\frac{p+1}{p-1}}|u(x,t+a_i)|^{p+1}\,dx \bigg)^{\frac{p-1}{p+1}}\\[8pt] \notag&\hskip3cm\leq \begin{cases} {\rm const\,}t^{-\frac 
N{p+1}\frac{p-1}2}\|u_i\|^2_{{\mathcal H}_t},&\text{if }i=1,\\[8pt]
{\rm const\,}b_i\beta_i^{-1}t^{-\frac N{p+1}\frac{p-1}2}\|u_i\|^2_{{\mathcal H}_t},&\text{if }i>1.
\end{cases}
\end{align}
As in (\ref{eq:varphi}) we can estimate
\begin{align}\label{eq:87}
\int_{{\mathbb R}^N}\big(|u_i(x,t)|^2+|u_i(x,t)|^{p+1}\big)G(x,t)\,dx \leq {\rm const\,}t^{-\frac N{p+1}\frac{p-1}2}\|u_i\|^2_{{\mathcal H}_t}.
\end{align}
Collecting (\ref{eq:84}), (\ref{eq:85}), (\ref{eq:86}), and (\ref{eq:87}), and using Corollary \ref{c:pos_per_nonlin}, we obtain that there exists some positive constant $C_3$ such that, for a.e. $t\in(\beta_i,T_i)$,
\begin{align*}
\Big|{\nu_{2i}}(t)\Big|\leq
\begin{cases}
\frac{C_3}{H_{i}(t)}\,t^{-\frac N{p+1}\frac{p-1}2}\big(tD_i(t) +{\textstyle{\frac{N-2}{4}}}H_i(t)\big)=C_3\big(N_i(t)+ {\textstyle{\frac{N-2}{4}}}\big)\,t^{-1+\frac {N+2-p(N-2)}{2(p+1)}},&\!\!\!\text{if }i=1,\\[8pt]
\frac{C_3\beta_i^{-1}}{H_{i}(t)}\,t^{-\frac N{p+1}\frac{p-1}2}\big(tD_i(t)+{\textstyle{\frac{N-2}{4}}} H_i(t)\big)=C_3\beta_i^{-1} \big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big)\,t^{-1+\frac {N+2-p(N-2)}{2(p+1)}}\!\!,&\!\!\!\text{if }i>1,
\end{cases}
\end{align*}
thus completing the proof in case {\bf (II)}.
\end{pf}

\begin{Lemma}\label{l:Nabove}
There exists $C_4>0$ such that, if $i\in \{1,\dots,k\}$ and $\beta_i,T_i\in (0,2\alpha)$ satisfy (\ref{eq:10}), then, for every $t\in(\beta_i,T_i)$,
\begin{equation*}
N_i(t)\leq
\begin{cases}
-\frac{N-2}{4}+C_{4}\big(N_i(T_i)+\frac{N-2}4\big), &\text{in case {\bf (I)} and in case {\bf (II)} if $i=1$},\\[8pt]
-\frac{N-2}{4}+C_{4}^{1/\beta_i}\big(N_i(T_i)+\frac{N-2}4\big), &\text{in case {\bf (II)} if $i>1$}.
\end{cases}
\end{equation*}
\end{Lemma}
\begin{pf}
Let ${\nu}_{1i}$ and ${\nu}_{2i}$ be as in Lemma \ref{l:Nprime}. By Schwarz's inequality,
\begin{equation}\label{eq:17}
{\nu}_{1i}\geq 0 \quad\text{a.e. in }(\beta_i,T_i).
\end{equation}
From Lemma \ref{l:Nprime}, (\ref{eq:17}), and Lemma \ref{l:est_N_2}, we deduce that
$$
\frac{d}{dt}N_{i}(t)\geq
\begin{cases}
- C_3\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big) \Big(t^{-1+\e/2}+\|h_t(\cdot,t+a_i)\|_{L^{N/2}({\mathbb R}^N)}\Big), &\text{in case {\bf (I)}},\\[8pt]
-C_3\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big) \,t^{-1+\frac {N+2-p(N-2)}{2(p+1)}}, &\text{in case {\bf (II)} if $i=1$},\\[8pt]
-C_3\beta_i^{-1}\big(N_i(t)+{\textstyle{\frac{N-2}{4}}}\big) \,t^{-1+\frac {N+2-p(N-2)}{2(p+1)}}, &\text{in case {\bf (II)} if $i>1$},
\end{cases}
$$
for a.e. $t\in(\beta_i,T_i)$. Since $N_i(t)+\frac{N-2}{4}>0$ in $(\beta_i,T_i)$ by Corollaries \ref{c:pos_per} and \ref{c:pos_per_nonlin}, dividing by $N_i(t)+\frac{N-2}{4}$ and integrating between $t$ and $T_i$, it follows that
\begin{multline*}
N_{i}(t)\\
\leq
\begin{cases}
-\frac{N-2}{4} +\Big( N_{i}(T_{i})+\frac{N-2}{4}\Big)\exp\Big({\frac{2C_3}{\e}T_{i}^{\e/2}+C_3 \|h_t\|_{L^{1}((0,T),L^{{N}/{2}}({\mathbb R}^N))}}\Big), &\text{in case {\bf (I)}},\\[8pt]
-\frac{N-2}{4} +\Big( N_{i}(T_{i})+\frac{N-2}{4}\Big) \exp\Big(\frac{2(p+1)C_3}{N+2-p(N-2)}T_{i}^{\frac{N+2-p(N-2)}{2(p+1)}}\Big), &\text{in case {\bf (II)}, $i=1$},\\[8pt]
-\frac{N-2}{4} +\Big( N_{i}(T_{i})+\frac{N-2}{4}\Big) \exp\Big(\frac{2(p+1)C_3\beta_i^{-1}}{N+2-p(N-2)}T_{i}^{\frac{N+2-p(N-2)}{2(p+1)}}\Big), &\text{in case {\bf (II)}, $i>1$},
\end{cases}
\end{multline*}
for any $t\in (\beta_i,T_i)$, thus yielding the conclusion.
\end{pf}

\begin{Lemma}\label{l:t_0=0}
Let $i\in\{1,\dots,k\}$. If $H_i\not\equiv 0$, then
$$
H_i(t)>0 \quad\text{for all }t\in(0,2\alpha).
$$
\end{Lemma}
\begin{pf}
From continuity of $H_i$, the assumption $H_i\not\equiv0$, and the fact that $u_i(\cdot,t)\in {\mathcal H}_t$ for a.e. $t\in (0,2\alpha)$, we deduce that there exists $T_i\in (0,2\alpha)$ such that
\begin{equation}\label{eq:78}
H_i(T_i)>0\quad\text{and} \quad u_i(\cdot,T_i)\in {\mathcal H}_{T_i}.
\end{equation}
Lemma \ref{l:Hpos} implies that $H_i(t)>0$ for all $t\in [T_i,2\alpha)$. We consider
\begin{equation*}
t_i:=\inf\{s\in (0,T_i):H_i(t)>0\text{ for all }t\in(s,2\alpha)\}.
\end{equation*}
Due to Lemma \ref{l:Hpos}, either
\begin{equation}\label{eq:40}
t_i=0 \text{ and } H_i(t)>0\text{ for all }t\in (0,2\alpha)
\end{equation}
or
\begin{equation}\label{eq:39}
0<t_i<T_i\text{ and }
\begin{cases}
H_i(t)=0&\text{if }t\in(0,t_i]\\
H_i(t)>0&\text{if }t\in(t_i,2\alpha)
\end{cases}.
\end{equation}
The argument below will exclude alternative \eqref{eq:39}. Assume by contradiction that \eqref{eq:39} holds. From Lemma \ref{l:Nabove} and (\ref{eq:10i}), it follows that
$$
\frac t2H'_{i}(t)\leq c_i H_{i}(t)
$$
where
$$
c_i=
\begin{cases}
-\frac{N-2}{4}+C_{4}\big(N_i(T_i)+\frac{N-2}4\big) &\text{in case {\bf (I)} and in case {\bf (II)} if $i=1$},\\[8pt]
-\frac{N-2}{4}+C_{4}^{1/t_i}\big(N_i(T_i)+\frac{N-2}4\big) &\text{in case {\bf (II)} if $i>1$},
\end{cases}
$$
for a.e. $t\in (t_i,T_i)$. By integration, it follows that
\begin{equation}\label{eq:18}
H_{i}(t)\geq \frac{H_{i}(T_i)}{T_i^{2c_i}}\,t^{2c_i} \quad\text{for all }t\in[t_i,T_i).
\end{equation}
By (\ref{eq:39}), $H_{i}(t_i)=0$, which contradicts \eqref{eq:18}, since $H_{i}(T_i)>0$ by (\ref{eq:78}). Therefore, we exclude (\ref{eq:39}) and conclude that (\ref{eq:40}) holds.
\end{pf}

\begin{Lemma}\label{l:H_{i+1}}
Let $i\in\{1,\dots,k\}$. Then
$$
H_i(t)\equiv 0\text{ in $(0,2\alpha)$ if and only if } H_{i+1}(t)\equiv 0 \text{ in $(0,2\alpha)$}.
$$
\end{Lemma}
\begin{pf}
First, we prove that $H_i(t)\equiv 0$ in $(0,2\alpha)$ implies $H_{i+1}(t)\equiv 0$ in $(0,2\alpha)$. Let us suppose, by contradiction, that $H_{i+1}(t)\not\equiv 0$. By Lemma \ref{l:t_0=0}, we conclude that $H_{i+1}(t)>0$ for all $t\in(0,2\alpha)$. It follows that $u_{i+1}(\cdot, t)\not\equiv 0$ for all $t\in(0,2\alpha)$ and $u(\cdot, t)\not\equiv 0$ for all $t\in(i\alpha,(i+1)\alpha)$. Hence, $u_{i}(\cdot, t)\not\equiv 0$ for all $t\in (\alpha, 2\alpha)$ and thus $H_{i}\not\equiv 0$ in $(0,2\alpha)$, a contradiction.
Let us now prove that $H_{i+1}(t)\equiv 0$ in $(0,2\alpha)$ implies $H_{i}(t)\equiv 0$ in $(0,2\alpha)$. Let us suppose, by contradiction, that $H_{i}(t)\not\equiv 0$; then, by Lemma \ref{l:Hpos}, $H_{i}(t)>0$ in $(\overline{t},2\alpha)$ for some $\overline{t}\in (\alpha,2\alpha)$. Hence, $u_{i}(\cdot, t)\not\equiv 0$ in $(\overline{t},2\alpha)$ and then $u_{i+1}(\cdot, t)\not\equiv 0$ in $(\overline{t}-\alpha,\alpha)$, thus implying $H_{i+1}(t)\not\equiv 0$, a contradiction.
\end{pf}

\begin{Corollary}\label{c:ucontinuation}
If $u\not\equiv0$ in ${\mathbb R}^N\times(0,T)$, then
$$
H_i(t)>0
$$
for all $t\in(0,2\alpha)$ and $i=1,\dots,k$. In particular,
\begin{equation}\label{eq:88}
\int_{{\mathbb R}^N}u^2(x,t)G(x,t)\,dx>0\quad\text{for all }t\in(0,T).
\end{equation}
\end{Corollary}
\begin{pf}
If $u\not\equiv0$, then there exists some $i_0\in\{1,\dots,k\}$ such that $u_{i_0}\not\equiv 0$ in $(0,2\alpha)$. Hence, $H_{i_0}(t)\not\equiv 0$ in $(0,2\alpha)$ and, thanks to Lemma \ref{l:H_{i+1}}, $H_{i}(t)\not\equiv 0$ in $(0,2\alpha)$ for all $i=1,\dots,k$. Applying Lemma \ref{l:t_0=0}, we conclude that, for all $i=1,\dots,k$, $H_{i}(t)>0$ in $(0,2\alpha)$, thus implying (\ref{eq:88}).
\end{pf}

\begin{pfn}{Proposition \ref{p:uniq_cont}}
It follows immediately from Corollary \ref{c:ucontinuation}.
\end{pfn}

\noindent Henceforward, we assume $u\not\equiv 0$ and we denote, for all $t\in(0,2\alpha)$,
\begin{align*}
&H(t)=H_{1}(t)=\int_{{\mathbb R}^N}u^2(x,t)\, G(x,t)\,dx,\\
& D(t)=D_{1}(t)=\!\!\int_{{\mathbb R}^N}\!\!\bigg(|\nabla u(x,t)|^2- \dfrac{a\big(\frac{x}{|x|}\big)}{|x|^2}u^2(x,t)- f(x,t,u(x,t))u(x,t)\bigg)G(x,t)\,dx.
\end{align*}
Corollary \ref{c:ucontinuation} ensures that, if $u\not\equiv 0$ in ${\mathbb R}^N\times (0,T)$, then $H(t)>0$ for all $t\in(0,2\alpha)$ and hence the \emph{Almgren type frequency function}
$$
{\mathcal N}(t)={\mathcal N}_{f,u}(t)=N_1(t)=\frac{tD(t)}{H(t)}
$$
is well defined on the whole interval $(0,2\alpha)$.
Moreover, by Lemma \ref{l:Nprime}, ${\mathcal N}\in W^{1,1}_{\rm loc}(0,2\alpha)$ and
$$
{\mathcal N}'(t)={\nu}_1(t)+{\nu}_2(t) \quad\text{for a.e. }t\in(0,2\alpha),
$$
where
\begin{equation}\label{eq:nu1nu2}
{\nu}_1(t)={\nu}_{11}(t) \quad\text{and}\quad {\nu}_2(t)={\nu}_{21}(t),
\end{equation}
with ${\nu}_{11},{\nu}_{21}$ as in Lemma \ref{l:Nprime}. Since, by (\ref{eq:defsol1}), $u(\cdot,t)\in {\mathcal H}_t$ for a.e. $t\in (0,T)$, we can fix $T_0$ such that
\begin{equation}\label{eq:T_0}
T_0\in (0,2\alpha) \quad\text{and}\quad u(\cdot,T_0)\in {\mathcal H}_{T_0}.
\end{equation}
The following result clarifies the behavior of ${\mathcal N}(t)$ as $t\to 0^+$.
\begin{Lemma}\label{l:limit}
The limit
$$
\gamma:=\lim_{t\to 0^+}{\mathcal N}(t)
$$
exists and is finite.
\end{Lemma}
\begin{pf}
We first observe that ${\mathcal N}(t)$ is bounded from below in $(0,2\alpha)$. Indeed, from Corollaries \ref{c:pos_per} and \ref{c:pos_per_nonlin}, we obtain that, for all $t\in(0,2\alpha)$,
$$
tD(t)\geq \bigg(C_1-\frac{N-2}4\bigg)H(t),
$$
and hence
\begin{equation}\label{cota}
{\mathcal N}(t)\geq C_1-\frac{N-2}4.
\end{equation}
Let $T_0$ be as in (\ref{eq:T_0}). By Schwarz's inequality, ${\nu}_1(t)\geq 0$ for a.e. $t\in (0,T_0)$. Furthermore, from Lemmas \ref{l:est_N_2} and \ref{l:Nabove}, ${\nu}_2$ belongs to $L^{1}(0,T_0)$. In particular, ${\mathcal N}'(t)$ turns out to be the sum of a nonnegative function and of an $L^1$ function over $(0,T_0)$. Therefore,
$$
{\mathcal N}(t)={\mathcal N}(T_0)-\dyle\int_{t}^{T_0}{\mathcal N}'(s)\,ds
$$
admits a limit as $t\rightarrow 0^{+}$, which is finite in view of \eqref{cota} and Lemma \ref{l:Nabove}.
\end{pf}

\begin{Lemma}\label{stimaH}
Let $\gamma:=\lim_{t\rightarrow 0^+} {\mathcal N}(t)$ be as in Lemma \ref{l:limit}. Then there exists a constant $K_1>0$ such that
\begin{equation}\label{eq:52}
H(t)\leq K_1 t^{2\gamma} \quad \text{for all } t\in (0,T_0).
\end{equation}
Furthermore, for any $\sigma>0$, there exists a constant $K_2(\sigma)>0$ depending on $\sigma$ such that
\begin{equation}\label{eq:53}
H(t)\geq K_2(\sigma)\, t^{2\gamma+\sigma}\quad \text{for all } t\in (0,T_0).
\end{equation}
\end{Lemma}
\begin{pf}
From Lemma \ref{l:Nprime}, \eqref{eq:17}, Lemma \ref{l:est_N_2}, and Lemma \ref{l:Nabove}, we infer that
\begin{align*}
{\mathcal N}(t)-\gamma&=\int_0^t ({\nu}_1(s)+{\nu}_2(s))ds \geq \int_0^t {\nu}_2(s)ds\\[5pt]
&\geq
\begin{cases}
- C_3C_4\big({\mathcal N}(T_0)+{\textstyle{\frac{N-2}{4}}}\big) \int_0^t \Big(s^{-1+\e/2}+\|h_t(\cdot,s)\|_{L^{N/2}({\mathbb R}^N)}\Big)\,ds, &\text{in case {\bf (I)}},\\[5pt]
- C_3C_4\big({\mathcal N}(T_0)+{\textstyle{\frac{N-2}{4}}}\big) \int_0^t s^{-1+\frac {N+2-p(N-2)}{2(p+1)}}\,ds, &\text{in case {\bf (II)}},
\end{cases}\\[5pt]
&\geq
\begin{cases}
- C_3C_4\big({\mathcal N}(T_0)+{\textstyle{\frac{N-2}{4}}}\big) \Big(\frac2\e t^{\e/2}+\|h_t\|_{L^{r}((0,T),L^{{N}/{2}}({\mathbb R}^N))}t^{1-1/r}\Big), &\text{in case {\bf (I)}},\\[5pt]
- C_3C_4\big({\mathcal N}(T_0)+{\textstyle{\frac{N-2}{4}}}\big)\frac{2(p+1)}{N+2-p(N-2)} t^{\frac {N+2-p(N-2)}{2(p+1)}}, &\text{in case {\bf (II)}}
\end{cases} \\[5pt]
&\geq - C_5t^\delta
\end{align*}
with
\begin{equation}\label{eq:delta}
\delta=
\begin{cases}
\min\{\e/2,1-1/r\}, &\text{in case {\bf (I)}},\\[5pt]
\frac {N+2-p(N-2)}{2(p+1)}, &\text{in case {\bf (II)}},
\end{cases}
\end{equation}
for some constant $C_5>0$ and for all $t\in(0,T_0)$. From the above and \eqref{eq:10i}, we deduce that
$$
( \log H(t))'=\frac{H'(t)}{H(t)}=\frac{2}{t}{\mathcal N}(t)\geq \frac2t\gamma-2 C_5t^{-1+\delta}.
$$
Integrating over $(t, T_0)$ we obtain
$$
H(t)\leq \frac{H(T_0)}{T_0^{2\gamma}}\,e^{\frac{2 C_5}{\delta}T_0^{\delta}}\,t^{2\gamma}
$$
for all $t\in(0,T_0)$, thus proving \eqref{eq:52}. Let us prove \eqref{eq:53}.
Since $\gamma=\lim_{t\rightarrow 0^+} {\mathcal N}(t)$, for any $\sigma>0$ there exists $t_\sigma>0$ such that ${\mathcal N}(t)<\gamma+\sigma/2$ for any $t\in (0,t_\sigma)$ and hence $$ \frac{H'(t)}{H(t)}=\frac{2\,{\mathcal N}(t)}{t}<\frac{2\gamma+\sigma}{t}. $$ Integrating over the interval $(t,t_\sigma)$ and by continuity of $H$ outside $0$, we obtain \eqref{eq:53} for some constant $K_2(\sigma)$ depending on $\sigma$. \end{pf} \section{The blow-up analysis}\label{sec:blow-up-analysis} If $u$ is a weak solution to (\ref{prob}) in the sense of Definition \ref{def:solution}, then, for every $\lambda>0$, the function \begin{equation*} u_{\lambda}(x,t)=u(\lambda x,\lambda^2t) \end{equation*} is a weak solution to \begin{equation}\label{lambda} (u_{\lambda})_t+\Delta u_{\lambda}+\dfrac{a(x/|x|)}{|x|^2}u_{\lambda} +\lambda^2f(\lambda x, \lambda^2 t, u_{\lambda})=0\quad\text{in }{\mathbb R}^N\times (0,T/\lambda^2), \end{equation} in the sense that \begin{align*} &\int_\tau^{\frac{T}{\lambda^2}}\|u_{\lambda}(\cdot,t)\|^2_{{\mathcal H}_t}\,dt<+\infty,\quad\int_\tau^{\frac{T}{\lambda^2}} \Big\|(u_{\lambda})_t+\frac{\nabla u_{\lambda}\cdot x}{2t}\Big\|^2_{({\mathcal H}_t)^\star}\!\!<+\infty \text{ for all }\tau\in \Big(0,{\frac{T}{\lambda^2}}\Big),\\ &{\phantom{\bigg\langle}}_{{\mathcal H}_t^\star}\bigg\langle (u_{\lambda})_t+\frac{\nabla u_{\lambda}\cdot x}{2t},w \bigg\rangle_{{\mathcal H}_t}\\ &\notag\qquad= \int_{{\mathbb R}^N}\bigg(\nabla u_{\lambda}(x,t)\cdot \nabla w(x)- \dfrac{a(x/|x|)}{|x|^2}\,u_{\lambda}(x,t)w(x)-\lambda^2 f(\lambda x, \lambda^2 t, u_{\lambda}(x,t))w(x)\bigg)G(x,t)\,dx \end{align*} for a.e. $t\in \big(0,{\frac{T}{\lambda^2}}\big)$ and for each $w\in {\mathcal H}_t$. 
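At least formally, \eqref{lambda} follows from (\ref{prob}) by the chain rule: since, for every $\lambda>0$,
\begin{align*}
(u_{\lambda})_t(x,t)&=\lambda^2\, u_t(\lambda x,\lambda^2 t),\qquad \Delta u_{\lambda}(x,t)=\lambda^2\,(\Delta u)(\lambda x,\lambda^2 t),\\
\frac{a(x/|x|)}{|x|^2}\,u_{\lambda}(x,t)&=\lambda^2\,\frac{a\big(\frac{\lambda x}{|\lambda x|}\big)}{|\lambda x|^2}\,u(\lambda x,\lambda^2 t),
\end{align*}
evaluating (\ref{prob}) at $(\lambda x,\lambda^2 t)$ and multiplying by $\lambda^2$ yields \eqref{lambda}.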
The frequency function associated to the scaled equation (\ref{lambda}) is \begin{equation}\label{eq:34} {\mathcal N}_\lambda(t)=\frac{t\,D_\lambda(t)}{H_\lambda(t)}, \end{equation} where \begin{align*} D_{\lambda}(t)&=\int_{{\mathbb R}^N}\bigg(|\nabla u_{\lambda}(x,t)|^2- \frac{a(x/|x|)}{|x|^2}u_{\lambda}^2(x,t)-\lambda^2 f(\lambda x, \lambda^2 t, u_{\lambda}(x,t))u_{\lambda}(x,t)\bigg)G(x, t)\,dx,\\ H_{\lambda}(t)&=\int_{{\mathbb R}^N}u_{\lambda}^2(x,t) G(x,t)\,dx. \end{align*} The scaling properties of the operator combined with a suitable change of variables easily imply that \begin{align}\label{eq:19} D_\lambda(t)=\lambda^2D(\lambda^2t)\quad\text{and} \quad H_\lambda(t)=H(\lambda^2t), \end{align} and consequently \begin{align}\label{eq:scale_for_N} {\mathcal N}_\lambda(t)={\mathcal N}(\lambda^2t)\quad \text{for all }t\in\Big(0,{\frac{2\alpha}{\lambda^2}}\Big). \end{align} \begin{Lemma}\label{l:blow_up} Let $a\in L^{\infty}\big({\mathbb S}^{N-1}\big)$ satisfy (\ref{eq:posde}) and $u\not\equiv 0$ be, in the sense of Definition \ref{def:solution}, either a weak solution to (\ref{prob1}), with $h$ satisfying (\ref{eq:der}) and (\ref{eq:h}), or a weak solution to (\ref{prob2}) satisfying (\ref{eq:u1}--\ref{eq:u2}) with $\varphi\in C^1({\mathbb R}^N\times(0,T)\times{\mathbb R})$ under assumption (\ref{eq:fi}). Let $\gamma:=\lim_{t\to 0^+}{\mathcal N}(t)$ as in Lemma \ref{l:limit}. 
Then \begin{itemize} \item[(i)] $\gamma$ is an eigenvalue of the operator $L$ defined in (\ref{eq:13}); \item[(ii)] for every sequence $\lambda_n\to 0^+$, there exists a subsequence $\{\lambda_{n_k}\}_{k\in{\mathbb N}}$ and an eigenfunction $g$ of the operator $L$ associated to $\gamma$ such that, for all $\tau\in (0,1)$, \begin{equation*} \lim_{k\to+\infty}\int_\tau^1 \bigg\|\frac{u(\lambda_{n_k}x,\lambda_{n_k}^2t)}{\sqrt{H(\lambda_{n_k}^2)}} -t^{\gamma}g(x/\sqrt t)\bigg\|_{{\mathcal H}_t}^2dt=0 \end{equation*} and $$ \lim_{k\to+\infty}\sup_{t\in[\tau,1]} \bigg\|\frac{u(\lambda_{n_k}x,\lambda_{n_k}^2t)}{\sqrt{H(\lambda_{n_k}^2)}} -t^{\gamma}g(x/\sqrt t)\bigg\|_{{\mathcal L}_t}=0. $$ \end{itemize} \end{Lemma} \begin{pf} Let \begin{align}\label{eq:33} w_{\lambda}(x,t):=\frac{u_\lambda(x,t)}{\sqrt{H(\lambda^2)}}, \end{align} with $\lambda\in(0,\sqrt T_0)$, so that $1<T_0/\lambda^2$. From Lemma \ref{l:Hcreas} we obtain that, for all $t\in(0,1)$, \begin{equation}\label{eq:21} \int_{{\mathbb R}^N}w_{\lambda}^2(x,t)G(x,t)\,dx =\frac{H(\lambda^2t)}{H(\lambda^2)}\leq t^{2C_1-\frac{N-2}{2}}, \end{equation} with $C_1$ as in (\ref{eq:5}). Lemma \ref{l:Nabove}, Corollaries \ref{c:pos_per} and \ref{c:pos_per_nonlin}, and \eqref{eq:19} imply that \begin{multline*} \frac1t \bigg(-\frac{N-2}4+{C_4}\bigg({\mathcal N}(T_0)+\frac{N-2}4\bigg)\bigg) H_\lambda(t)\geq \lambda^2D(\lambda^2 t)\\ \geq \frac1t\bigg(C_1-\frac{N-2}{4}\bigg)H_\lambda(t)+C_1 \int_{{\mathbb R}^N}|\nabla u_{\lambda}(x,t)|^2G(x,t)\,dx \end{multline*} and hence, in view of \eqref{eq:21}, \begin{align}\label{eq:20} t\int_{{\mathbb R}^N}|\nabla w_{\lambda}(x,t)|^2G(x,t)\,dx&\leq C_1^{-1} \big({\textstyle{C_4\big({\mathcal N}(T_0)+\frac{N-2}4\big)-C_1}}\big) \int_{{\mathbb R}^N}w_{\lambda}^2(x,t)G(x,t)\,dx\\ &\notag \leq C_1^{-1} \big({\textstyle{C_4\big({\mathcal N}(T_0)+\frac{N-2}4\big)-C_1}}\big)t^{2C_1-\frac{N-2}{2}}, \end{align} for a.e. $t\in (0,1)$. 
Let us consider the family of functions $$ \widetilde{w}_{\lambda}(x,t)=w_{\lambda}(\sqrt{t}x,t)=\dfrac{u (\lambda \sqrt{t}x, \lambda^2t)}{\sqrt{H(\lambda^2)}}, $$ which, by scaling, satisfy \begin{equation}\label{eq:22} \int_{{\mathbb R}^N} \widetilde{w}_{\lambda}^{2}(x,t)G(x,1)\,dx =\int_{{\mathbb R}^N} w_{\lambda}^{2}(x,t)G(x,t)\,dx \end{equation} and \begin{equation}\label{eq:23} \int_{{\mathbb R}^N} |\nabla\widetilde{w}_{\lambda}(x,t)|^{2}G(x,1)\,dx =t\int_{{\mathbb R}^N}|\nabla w_{\lambda}(x,t)|^{2}G(x,t)\,dx. \end{equation} From \eqref{eq:21}, \eqref{eq:20}, \eqref{eq:22}, and \eqref{eq:23}, we deduce that, for all $\tau \in (0,1)$, \begin{equation}\label{eq:27} \big\{\widetilde{w}_{\lambda}\big\}_{\lambda\in(0,\sqrt T_0)} \text{ is bounded in } L^\infty(\tau,1;{\mathcal H}) \end{equation} uniformly with respect to $\lambda\in(0,\sqrt T_0)$. Since $$ \widetilde{w}_{\lambda}(x,t)=\frac{v(x,\lambda^2t)}{\sqrt{H(\lambda^2)}} \quad\text{and}\quad (\widetilde{w}_{\lambda})_t(x,t)=\frac{\lambda^2}{\sqrt{H(\lambda^2)}}\, v_t(x,\lambda^2t) $$ with $v$ as in Remark \ref{rem:uv}, from (\ref{eq:24}) we deduce that, for all $\phi\in{\mathcal H}$, \begin{multline}\label{eq:25} {\phantom{\big\langle}}_{{\mathcal H}^\star}\big\langle (\widetilde{w}_{\lambda})_t,\phi \big\rangle_{{\mathcal H}} =\frac1t\int_{{\mathbb R}^N}\bigg(\nabla \widetilde{w}_{\lambda}(x,t) \cdot \nabla \phi(x)- \dfrac{a\big(\frac{x}{|x|}\big)}{|x|^2}\,\widetilde{w}_{\lambda}(x,t) \phi(x)\\ -\frac{\lambda^2t}{\sqrt{H(\lambda^2)}}f\Big(\lambda\sqrt tx, \lambda^2 t, \sqrt{H(\lambda^2)}\widetilde{w}_{\lambda}(x,t)\Big)\phi(x)\bigg)G(x,1)\,dx. 
\end{multline} In case {\bf (I)}, from (\ref{eq:h}) and Lemma \ref{Hardytemp}, we can estimate the last term in the above integral as \begin{align}\label{eq:26} &\lambda^2\left|\int_{{\mathbb R}^N}h(\lambda\sqrt tx, \lambda^2 t)\widetilde{w}_{\lambda}(x,t)\phi(x)G(x,1)\,dx\right|\\ \notag&\leq C_h\lambda^2\int_{{\mathbb R}^N}|\widetilde{w}_{\lambda}(x,t)||\phi(x)|G(x,1)\,dx +C_h\frac{\lambda^\e}{t}\int_{{\mathbb R}^N}|x|^{-2+\e} |\widetilde{w}_{\lambda}(x,t)||\phi(x)|G(x,1)\,dx\\ \notag& \leq C_h\lambda^2\|\widetilde{w}_{\lambda}(\cdot,t)\|_{\mathcal H} \|\phi\|_{\mathcal H}+ C_h\frac{\lambda^\e}{t}\int_{|x|\leq1} \frac{ |\widetilde{w}_{\lambda}(x,t)||\phi(x)|}{|x|^2}G(x,1)\,dx\\ \notag& \hskip6cm +C_h\frac{\lambda^\e}{t}\int_{|x|\geq1} |\widetilde{w}_{\lambda}(x,t)||\phi(x)|G(x,1)\,dx\\ \notag&\leq C_h\frac{\lambda^\e}{t}\bigg(t\,\lambda^{2-\e}+\frac{\max\{4,N-2\}}{(N-2)^2}+1 \bigg) \|\widetilde{w}_{\lambda}(\cdot,t)\|_{\mathcal H} \|\phi\|_{\mathcal H} \end{align} for all $\lambda\in (0,\sqrt T_0)$ and a.e. $t\in(0,1)$. From (\ref{eq:25}), (\ref{eq:26}), and Lemma \ref{Hardytemp} it follows that, for all $\lambda\in (0,\sqrt T_0)$ and a.e. $t\in(0,1)$, \begin{multline*} \big| \!\!{\phantom{\big\langle}}_{{\mathcal H}^\star}\big\langle (\widetilde{w}_{\lambda})_t,\phi \big\rangle_{{\mathcal H}}\big|\\\leq \Big({\textstyle{1+\frac{\max\{4,N-2\}}{(N-2)^2}\|a\|_{L^{\infty}({\mathbb S}^{N-1})} +C_hT_0^{\e/2}\Big(T_0^{1-\e/2}+\frac{\max\{4,N-2\}}{(N-2)^2}+1 \Big)}}\Big)\frac{\|\widetilde{w}_{\lambda}(\cdot,t)\|_{\mathcal H} \|\phi\|_{\mathcal H}}{t} \end{multline*} and hence \begin{equation}\label{eq:89} \|(\widetilde{w}_{\lambda})_t(\cdot,t)\|_{{\mathcal H}^\star}\leq \frac{{\rm const}}{t}\|\widetilde{w}_{\lambda}(\cdot,t)\|_{\mathcal H}. 
\end{equation} In case {\bf (II)}, from (\ref{eq:fi}), H\"older's inequality, and Lemma \ref{l:sob}, we obtain \begin{multline}\label{eq:90} \bigg|\frac{\lambda^2}{\sqrt{H(\lambda^2)}}\int_{{\mathbb R}^N}\varphi\Big(\lambda\sqrt tx, \lambda^2 t, \sqrt{H(\lambda^2)}\widetilde{w}_{\lambda}(x,t)\Big)\phi(x)G(x,1)\,dx \bigg|\\ \leq C_\varphi\frac{\lambda^2}{\sqrt{H(\lambda^2)}} \int_{{\mathbb R}^N}\Big(\sqrt{H(\lambda^2)}|\widetilde{w}_{\lambda}(x,t)|+ (\sqrt{H(\lambda^2)})^p|\widetilde{w}_{\lambda}(x,t)|^p\Big) |\phi(x)|G(x,1)\,dx\\ \leq C_\varphi \lambda^2\int_{{\mathbb R}^N}|\widetilde{w}_{\lambda}(x,t)||\phi(x)|G(x,1)\,dx+ C_\varphi \lambda^2(H(\lambda^2))^{\frac{p-1}{2}}\int_{{\mathbb R}^N}|\widetilde{w}_{\lambda}(x,t)|^p |\phi(x)|G(x,1)\,dx\\ \leq \|\widetilde{w}_{\lambda}(\cdot,t)\|_{\mathcal H} \|\phi\|_{\mathcal H} \frac{\lambda^{\frac{N+2-p(N-2)}{p+1}}}{t} \bigg(C_\varphi t \lambda^{\frac{N(p-1)}{p+1}}+ C_\varphi C_{p+1} t^{\frac{N+2-p(N-2)}{2(p+1)}} \bigg(\int_{{\mathbb R}^N}|u(x,\lambda^2t)|^{p+1}\,dx\bigg)^{\!\!\frac{p-1}{p+1}}\bigg). \end{multline} From (\ref{eq:25}), (\ref{eq:90}), Lemma \ref{Hardytemp}, the fact that $p<2^*-1$, and (\ref{eq:u1}), it follows that, for all $\lambda\in (0,\sqrt T_0)$ and a.e. $t\in(0,1)$, estimate (\ref{eq:89}) holds also in case {\bf (II)}. Then, in view of (\ref{eq:27}), estimate (\ref{eq:89}) yields, for all $\tau \in (0,1)$, \begin{equation}\label{eq:28} \big\{(\widetilde{w}_{\lambda})_t\big\}_{\lambda\in(0,\sqrt T_0)} \text{ is bounded in } L^\infty(\tau,1;{\mathcal H}^\star) \end{equation} uniformly with respect to $\lambda\in(0,\sqrt T_0)$. From (\ref{eq:27}), (\ref{eq:28}), and \cite[Corollary 8]{S}, we deduce that $\big\{\widetilde{w}_{\lambda}\big\}_{\lambda\in(0,\sqrt T_0)}$ is relatively compact in $C^0([\tau,1],{\mathcal L})$ for all $\tau\in (0,1)$. 
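As a numerical sanity check of the scaling identities \eqref{eq:22}--\eqref{eq:23} used above, one can test them in dimension $N=1$ for an explicit profile, assuming the standard heat-kernel weight $G(x,t)=t^{-N/2}e^{-|x|^2/(4t)}$ (any multiplicative normalization of $G$ cancels from both sides of the identities); the profile `w`, the time `t`, and the grid below are arbitrary choices for illustration:

```python
import numpy as np

# Numerical check of the scaling identities (22)-(23) in dimension N = 1,
# assuming the Gaussian weight G(x, t) = t^(-1/2) * exp(-x^2 / (4 t));
# any multiplicative normalization of G cancels from both sides.
def G(x, t):
    return t ** (-0.5) * np.exp(-x ** 2 / (4.0 * t))

# An arbitrary smooth, rapidly decaying profile playing the role of w(., t).
def w(x):
    return np.exp(-x ** 2) * (1.0 + x)

def w_prime(x):
    return np.exp(-x ** 2) * (1.0 - 2.0 * x * (1.0 + x))

t = 0.3
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]

# w~(x, t) = w(sqrt(t) x, t), so that d/dx w~ = sqrt(t) * w'(sqrt(t) x).
w_tilde = w(np.sqrt(t) * x)
w_tilde_prime = np.sqrt(t) * w_prime(np.sqrt(t) * x)

# (22): int w~^2 G(x, 1) dx = int w^2 G(x, t) dx.
lhs_22 = np.sum(w_tilde ** 2 * G(x, 1.0)) * dx
rhs_22 = np.sum(w(x) ** 2 * G(x, t)) * dx

# (23): int |d/dx w~|^2 G(x, 1) dx = t * int |w'|^2 G(x, t) dx.
lhs_23 = np.sum(w_tilde_prime ** 2 * G(x, 1.0)) * dx
rhs_23 = t * np.sum(w_prime(x) ** 2 * G(x, t)) * dx

print(abs(lhs_22 - rhs_22), abs(lhs_23 - rhs_23))
```

Both differences vanish up to quadrature error, reflecting the change of variables $y=\sqrt t\,x$ behind \eqref{eq:22}--\eqref{eq:23}.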
Therefore, for any given sequence $\lambda_n\to 0^+$, there exists a subsequence $\lambda_{n_k}\to0^+$ such that \begin{equation}\label{eq:29} \widetilde{w}_{\lambda_{n_k}}\to\widetilde{w} \quad\text{in} \quad C^0([\tau,1],{\mathcal L}) \end{equation} for all $\tau\in(0,1)$ and for some $\widetilde{w}\in \bigcap_{\tau\in(0,1)}C^0([\tau,1],{\mathcal L})$. We notice that a diagonal procedure allows extracting a subsequence which does not depend on $\tau$. Since $$ 1=\|\widetilde{w}_{\lambda_{n_k}}(\cdot,1)\|_{\mathcal L}, $$ the convergence (\ref{eq:29}) ensures that \begin{equation}\label{eq:41} \|\widetilde{w}(\cdot,1)\|_{\mathcal L}=1. \end{equation} In particular $\widetilde{w}$ is nontrivial. Furthermore, by (\ref{eq:27}) and (\ref{eq:28}), the subsequence can be chosen in such a way that also \begin{equation}\label{eq:30} \widetilde{w}_{\lambda_{n_k}}\rightharpoonup\widetilde{w} \quad\text{weakly in } L^2(\tau,1;{\mathcal H}) \quad\text{and}\quad (\widetilde{w}_{\lambda_{n_k}})_t\rightharpoonup\widetilde{w}_t \quad\text{weakly in } L^2(\tau,1;{\mathcal H}^\star) \end{equation} for all $\tau\in(0,1)$; in particular $\widetilde{w}\in \bigcap_{\tau\in(0,1)}L^2(\tau,1;{\mathcal H})$ and $\widetilde{w}_t\in \bigcap_{\tau\in(0,1)}L^2(\tau,1;{\mathcal H}^\star)$. We now claim that \begin{equation}\label{eq:31} \widetilde{w}_{\lambda_{n_k}}\to\widetilde{w} \quad\text{strongly in} \quad L^2(\tau,1;{\mathcal H})\quad\text{for all }\tau\in(0,1). \end{equation} To prove the claim, we notice that (\ref{eq:30}) allows passing to the limit in (\ref{eq:25}).
Therefore, in view of (\ref{eq:26}) and (\ref{eq:89}), which ensure that the perturbation term vanishes in the limit, \begin{equation}\label{eq:32} {\phantom{\big\langle}}_{{\mathcal H}^\star}\big\langle \widetilde{w}_t,\phi \big\rangle_{{\mathcal H}}=\frac1t\int_{{\mathbb R}^N}\bigg(\nabla \widetilde{w}(x,t) \cdot \nabla \phi(x)- \dfrac{a\big(\frac{x}{|x|}\big)}{|x|^2}\,\widetilde{w}(x,t) \phi(x)\bigg)G(x,1)\,dx \end{equation} for all $\phi\in{\mathcal H}$ and a.e. $t \in (0,1)$, i.e. $\widetilde w$ is a weak solution to \begin{equation*} \widetilde{w}_t+\frac1t\bigg(\Delta \widetilde{w}- \frac x2\cdot \nabla \widetilde{w}+ \dfrac{a(x/|x|)}{|x|^2}\,\widetilde{w}\bigg)=0. \end{equation*} Testing the difference between (\ref{eq:25}) and (\ref{eq:32}) with $(\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})$ and integrating with respect to $t$ between $\tau$ and $1$, we obtain \begin{multline*} \int_\tau^1\bigg(\int_{{\mathbb R}^N}\bigg(|\nabla (\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\, |(\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})(x,t)|^2\bigg) \,G(x,1)\,dx\bigg)dt\\ = \frac{1}2\|\widetilde{w}_{\lambda_{n_k}}(1)- \widetilde{w}(1)\|^2_{\mathcal L}- \frac{\tau}2\|\widetilde{w}_{\lambda_{n_k}}(\tau)- \widetilde{w}(\tau)\|^2_{\mathcal L}-\int_\tau^1\bigg(\int_{{\mathbb R}^N} |(\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})(x,t)|^2 \,G(x,1)\,dx\bigg)dt\\ +\frac{\lambda_{n_k}^2}{\sqrt{H(\lambda_{n_k}^2)}}\int_\tau^1\bigg(\int_{{\mathbb R}^N} t f\Big(\lambda_{n_k}\sqrt tx, \lambda_{n_k}^2 t, \sqrt{H(\lambda_{n_k}^2)}\widetilde{w}_{\lambda_{n_k}}(x,t)\Big) (\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})(x,t) G(x,1)\,dx \bigg)dt.
\end{multline*} Then, from (\ref{eq:26}), (\ref{eq:90}), and (\ref{eq:29}), we obtain that, for all $\tau\in (0,1)$, $$ \lim_{k\to+\infty} \int_\tau^1\bigg(\int_{{\mathbb R}^N}\bigg(|\nabla (\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})(x,t)|^2-\frac{a(x/|x|)}{|x|^2}\, |(\widetilde{w}_{\lambda_{n_k}}- \widetilde{w})(x,t)|^2\bigg) \,G(x,1)\,dx\bigg)dt=0, $$ which, by Corollary \ref{c:pos_def} and (\ref{eq:29}), implies the convergence claimed in (\ref{eq:31}). Thus, we have obtained that, for all $\tau\in(0,1)$, \begin{equation}\label{eq:35} \lim_{k\to+\infty}\int_\tau^1 \|{w}_{\lambda_{n_k}}(\cdot,t)-w(\cdot,t)\|_{{\mathcal H}_t}^2dt=0 \end{equation} and $$ \lim_{k\to+\infty}\sup_{t\in[\tau,1]} \|{w}_{\lambda_{n_k}}(\cdot,t)-w(\cdot,t)\|_{{\mathcal L}_t}=0, $$ where $$ w(x,t):=\widetilde{w}\Big(\frac{x}{\sqrt t },t\Big) $$ is a weak solution (in the sense of Definition \ref{def:solution}) of \begin{equation}\label{eq:limit_equation} w_t+\Delta w+\dfrac{a(x/|x|)}{|x|^2}\,w=0. \end{equation} We notice that, by (\ref{eq:34}) and (\ref{eq:33}), \begin{align*} &{\mathcal N}_\lambda(t)\\ &=\frac{ t\int_{{\mathbb R}^N}\Big(|\nabla w_{\lambda}(x,t)|^2- \frac{a(x/|x|)}{|x|^2}w_{\lambda}^2(x,t)-\frac{\lambda^2}{\sqrt{H(\lambda^2)}} f\big(\lambda x,\lambda^2 t, \sqrt{H(\lambda^2)}w_{\lambda}(x,t)\big) w_{\lambda}(x,t)\Big)G(x, t)\,dx}{\int_{{\mathbb R}^N}w_{\lambda}^2(x,t) G(x,t)\,dx} \end{align*} for all $t\in(0,1)$. Since, by \eqref{eq:35}, ${w}_{\lambda_{n_k}}(\cdot,t)\to w(\cdot,t)$ in ${\mathcal H}_t$ for a.e. $t\in (0,1)$, and, by \eqref{eq:26} and~(\ref{eq:90}), $$ \frac{t\lambda^2_{n_k}}{\sqrt{H(\lambda^2_{n_k})}}\int_{{\mathbb R}^N} f\Big(\lambda_{n_k} x,\lambda_{n_k}^2 t, \sqrt{H(\lambda_{n_k}^2)} w_{\lambda_{n_k}}(x,t)\Big) w_{\lambda_{n_k}}(x,t)G(x, t)\,dx \to 0 $$ for a.e. 
$t\in (0,1)$, we obtain that \begin{multline}\label{eq:43} \int_{{\mathbb R}^N}\bigg(|\nabla w_{\lambda_{n_k}}(x,t)|^2- \frac{a(x/|x|)}{|x|^2}w_{\lambda_{n_k}}^2(x,t)-\\ \frac{\lambda^2_{n_k}}{\sqrt{H(\lambda^2_{n_k})}} f\Big(\lambda_{n_k} x,\lambda^2_{n_k} t, \sqrt{H(\lambda^2_{n_k})}w_{\lambda_{n_k}}(x,t)\Big) w_{\lambda_{n_k}}(x,t)\bigg)G(x, t)\,dx \to D_w(t) \end{multline} and \begin{equation}\label{eq:44} \int_{{\mathbb R}^N}w_{\lambda_{n_k}}^2(x,t) G(x,t)\,dx\to H_w(t) \end{equation} for a.e. $t\in (0,1)$, where $$ D_w(t)=\int_{{\mathbb R}^N}\bigg(|\nabla w(x,t)|^2- \frac{a(x/|x|)}{|x|^2}w^2(x,t)\bigg)G(x, t)\,dx \ \text{ and }\ H_w(t)=\int_{{\mathbb R}^N}w^2(x,t) G(x,t)\,dx. $$ We point out that \begin{equation}\label{eq:42} H_w(t)>0 \quad\text{for all}\quad t\in(0,1); \end{equation} indeed, (\ref{eq:41}) yields \begin{equation}\label{eq:wl1} \int_{{\mathbb R}^N}w^2(x,1)G(x,1)\,dx=1, \end{equation} which, arguing as in Lemma \ref{l:t_0=0} or applying directly the Unique Continuation Principle proved in \cite[Theorem 1.2]{poon} to equation (\ref{eq:limit_equation}), implies that $\int_{{\mathbb R}^N}w^2(x,t) G(x,t)\,dx>0$ for all $t\in(0,1)$. From (\ref{eq:43}) and (\ref{eq:44}), it follows that \begin{equation}\label{eq:36} {\mathcal N}_{\lambda_{n_k}}(t)\to {\mathcal N}_w(t) \quad\text{for a.e. }t\in(0,1), \end{equation} where ${\mathcal N}_w$ is the frequency function associated to the limit equation \eqref{eq:limit_equation}, i.e. \begin{equation}\label{eq:Nw} {\mathcal N}_w(t)=\frac{ tD_w(t)} {H_w(t)}, \end{equation} which is well defined on $(0,1)$ by (\ref{eq:42}). On the other hand, \eqref{eq:scale_for_N} implies that ${\mathcal N}_{\lambda_{n_k}}(t)={\mathcal N}(\lambda^2_{n_k}t)$ for all $t\in (0,1)$ and $k\in{\mathbb N}$. Fixing $t\in (0,1)$ and passing to the limit as $k\to+\infty$, from Lemma \ref{l:limit} we obtain \begin{equation}\label{eq:37} {\mathcal N}_{\lambda_{n_k}}(t)\to \gamma \quad\text{for all }t\in(0,1).
\end{equation} Combining \eqref{eq:36} and \eqref{eq:37}, we deduce that \begin{equation}\label{eq:Nw-gamma} {\mathcal N}_w(t)=\gamma\quad \text{for all }t\in(0,1). \end{equation} Therefore ${\mathcal N}_w$ is constant in $(0,1)$ and hence ${\mathcal N}_w'(t)=0$ for any $t\in (0,1)$. By \eqref{eq:limit_equation} and Lemma~\ref{l:Nprime} with $f\equiv 0$, we obtain \begin{multline*} \bigg(\int_{{\mathbb R}^N}\Big|w_t(x,t)+\frac{\nabla w(x,t)\cdot x}{2t}\Big|^2G(x,t)\,dx\bigg) \bigg(\int_{{\mathbb R}^N}w^2(x,t)\, G(x,t)\,dx\bigg)\\ -\bigg(\int_{{\mathbb R}^N}\Big(w_t(x,t)+\frac{\nabla w(x,t)\cdot x}{2t}\Big)w(x,t)G(x,t)\,dx \bigg)^{\!2}=0 \quad \text{for all } t\in (0,1), \end{multline*} i.e. $$ \left(w_t(\cdot,t)+\frac{\nabla w(\cdot,t)\cdot x}{2t},w(\cdot,t)\right)^2_{{\mathcal L}_t} =\left\|w_t(\cdot,t)+\frac{\nabla w(\cdot,t)\cdot x}{2t}\right\|^2_{{\mathcal L}_t} \|w(\cdot,t)\|^2_{{\mathcal L}_t}, $$ where $(\cdot,\cdot)_{{\mathcal L}_t}$ denotes the scalar product in ${\mathcal L}_t$. This shows that, for all $t\in(0,1)$, $w_t(\cdot,t)+\frac{\nabla w(\cdot,t)\cdot x}{2t}$ and $w(\cdot,t)$ have the same direction as vectors in ${\mathcal L}_t$ and hence there exists a function $\beta:(0,1)\to{\mathbb R}$ such that \begin{equation}\label{eq:38} w_t(x,t)+\frac{\nabla w(x,t)\cdot x}{2t} =\beta(t)w(x,t)\quad\text{for a.e. }t\in(0,1)\text{ and a.e. }x\in{\mathbb R}^N. \end{equation} Testing \eqref{eq:limit_equation} with $\phi=w(\cdot,t)$ in the sense of \eqref{eq:defsol2} and taking into account \eqref{eq:38}, we find that \begin{align*} D_w(t)= {\phantom{\bigg\langle}}_{{\mathcal H}_t^\star}\bigg\langle w_t(\cdot,t)+\frac{\nabla w(\cdot,t)\cdot x}{2t},w(\cdot,t) \bigg\rangle_{{\mathcal H}_t}=\beta(t)H_w(t), \end{align*} which, by (\ref{eq:Nw}) and (\ref{eq:Nw-gamma}), implies that $$ \beta(t)=\frac{\gamma}{t}\quad\text{for a.e. }t\in(0,1). 
$$ Hence (\ref{eq:38}) becomes \begin{equation}\label{eq:45} w_t(x,t)+\frac{\nabla w(x,t)\cdot x}{2t} =\frac{\gamma}{t}\,w(x,t)\text{ for a.e. }(x,t)\in{\mathbb R}^N\times (0,1)\text{ and in a distributional sense}. \end{equation} Combining (\ref{eq:45}) with (\ref{eq:limit_equation}), we obtain \begin{equation}\label{eq:46} \Delta w+\dfrac{a(x/|x|)}{|x|^2}\,w -\frac{\nabla w(x,t)\cdot x}{2t}+ \frac{\gamma}{t}\,w(x,t)=0 \end{equation} for a.e. $(x,t)\in{\mathbb R}^N\times (0,1)$ and in a weak sense. From (\ref{eq:45}), it follows that, letting, for all $\eta>0$ and a.e. $(x,t)\in{\mathbb R}^N\times(0,1)$, $w^\eta(x,t):=w(\eta x,\eta^2t)$, there holds $$ \frac{d w^\eta}{d\eta}=\frac{2\gamma}{\eta}w^\eta $$ a.e. and in a distributional sense. By integration, we obtain that \begin{equation} \label{eq:weta} w^\eta(x,t)=w(\eta x,\eta^2t)=\eta^{2\gamma}w(x,t) \quad\text{for all }\eta>0 \text{ and a.e. }(x,t)\in{\mathbb R}^N\times(0,1). \end{equation} Let $$ g(x)=w(x,1); $$ from (\ref{eq:wl1}), we have that $g\in{\mathcal L}$, $\|g\|_{{\mathcal L}}=1$, and, from (\ref{eq:weta}), \begin{equation}\label{eq:47} w(x,t)=w^{\sqrt t }\Big(\frac{x}{\sqrt t },1\Big)=t^{\gamma}w\Big(\frac{x}{\sqrt t },1\Big)= t^{\gamma}g\Big(\frac{x}{\sqrt t}\Big) \quad\text{for a.e. }(x,t)\in{\mathbb R}^N\times(0,1). \end{equation} In particular, from (\ref{eq:47}), $g\big({\cdot}/{\sqrt t}\big)\in {\mathcal H}_t$ for a.e. $t\in (0,1)$ and hence, by scaling, $g\in{\mathcal H}$. From (\ref{eq:46}) and (\ref{eq:47}), we obtain that $g\in{\mathcal H}\setminus\{0\}$ weakly solves $$ -\Delta g(x)+ \frac{\nabla g(x)\cdot x}{2}-\frac{a(x/|x|)}{|x|^2}\,g(x)=\gamma\, g(x), $$ i.e. $\gamma$ is an eigenvalue of the operator $L$ defined in (\ref{eq:13}) and $g$ is an eigenfunction of $L$ associated to $\gamma$. The proof is now complete. \end{pf} \noindent Let us now describe the behavior of $H(t)$ as $t\to 0^+$. 
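Before doing so, we note that the self-similar structure just derived can be double-checked symbolically: any profile of the form $w(x,t)=t^{\gamma}g(x/\sqrt t)$ satisfies \eqref{eq:45} and the homogeneity \eqref{eq:weta}. A sketch in one space variable, with `g` an arbitrary smooth function (sympy):

```python
import sympy as sp

x, t, eta, gamma = sp.symbols('x t eta gamma', positive=True)
g = sp.Function('g')

# Self-similar profile w(x, t) = t^gamma * g(x / sqrt(t)), as in (47).
w = t ** gamma * g(x / sp.sqrt(t))

# (45): w_t + x * w_x / (2 t) = (gamma / t) * w.
residual_45 = sp.diff(w, t) + x * sp.diff(w, x) / (2 * t) - gamma / t * w
assert sp.simplify(residual_45) == 0

# (weta): w(eta * x, eta^2 * t) = eta^(2 gamma) * w(x, t).
w_scaled = w.subs({x: eta * x, t: eta ** 2 * t}, simultaneous=True)
residual_eta = sp.simplify(w_scaled - eta ** (2 * gamma) * w)
assert residual_eta == 0

print("self-similarity checks passed")
```

The first assertion reproduces the cancellation of the $g'$ terms in \eqref{eq:45}; the second one confirms the $2\gamma$-homogeneity under parabolic scaling.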
\begin{Lemma} \label{l:limite} Under the same assumptions as in Lemma \ref{l:blow_up}, let $\gamma:=\lim_{t\rightarrow 0^+} {\mathcal N}(t)$ be as in Lemma \ref{l:limit}. Then the limit \[ \lim_{t\to 0^+}t^{-2\gamma}H(t) \] exists and it is finite. \end{Lemma} \begin{pf} In view of \eqref{eq:52}, it is sufficient to prove that the limit exists. By \eqref{eq:10i}, Lemma \ref{l:limit}, and Lemma \ref{l:Nprime}, we have, for all $t\in(0,T_0)$, \begin{align*} \frac{d}{dt} \frac{H(t)}{t^{2\gamma}} &=-2\gamma t^{-2\gamma-1} H(t)+t^{-2\gamma} H'(t) =2t^{-2\gamma-1} (tD(t)-\gamma H(t))\\ &= 2t^{-2\gamma-1} H(t) \int_0^t (\nu_{1}(s)+\nu_{2}(s))\, ds, \end{align*} with $\nu_1,\nu_2$ as in (\ref{eq:nu1nu2}). After integration over $(t,T_0)$, \begin{equation}\label{inte} \frac{H(T_0)}{T_0^{2\gamma}}-\frac{H(t)}{t^{2\gamma}}= \int_t^{T_0} 2s^{-2\gamma-1} H(s) \left( \int_0^s \nu_{1}(r)dr \right) ds + \int_t^{T_0} 2s^{-2\gamma-1} H(s) \left( \int_0^s \nu_{2}(r)dr \right) ds. \end{equation} By \eqref{eq:17}, $\nu_{1}(t)\geq 0$ and hence $$ \lim_{t\to 0^+} \int_t^{T_0} 2s^{-2\gamma-1} H(s) \left( \int_0^s \nu_{1}(r)dr \right) ds $$ exists. On the other hand, by Lemmas \ref{l:est_N_2} and \ref{l:Nabove} we have that $s^{-\delta}\int_0^s|\nu_{2}(r)|dr$ is bounded in $(0,T_0)$ with $\delta$ defined in (\ref{eq:delta}), while, from Lemma \ref{stimaH}, we deduce that $t^{-2\gamma}H(t)$ is bounded in $(0,T_0)$. Therefore, for some ${\rm const}>0$, there holds $$ \left| 2s^{-2\gamma-1} H(s) \left( \int_0^s \nu_{2}(r)dr \right) \right|\leq {\rm const\,} s^{-1+\delta} $$ for all $s\in(0,T_0)$, which proves that $s^{-2\gamma-1} H(s) \left( \int_0^s \nu_{2}(r) dr \right)\in L^1(0,T_0)$. We conclude that both terms at the right hand side of (\ref{inte}) admit a limit as $t\to 0^+$ thus completing the proof. \end{pf} \noindent In the following lemma, we prove that $\lim_{t\to 0^+}t^{-2\gamma}H(t)$ is indeed strictly positive. 
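As an aside, the differential identity opening the proof of Lemma \ref{l:limite} is a one-line product rule once one uses the relation $H'(t)=2D(t)$, which is how \eqref{eq:10i} enters the computation above; this can be confirmed symbolically:

```python
import sympy as sp

t, gamma = sp.symbols('t gamma', positive=True)
H = sp.Function('H')
D = sp.Function('D')

# d/dt [H(t) / t^(2 gamma)], using H'(t) = 2 D(t) as in the proof above.
derivative = sp.diff(H(t) / t ** (2 * gamma), t)
derivative = derivative.subs(sp.Derivative(H(t), t), 2 * D(t))

# Target identity: 2 t^(-2 gamma - 1) * (t D(t) - gamma H(t)).
target = 2 * t ** (-2 * gamma - 1) * (t * D(t) - gamma * H(t))
assert sp.simplify(derivative - target) == 0
print("product-rule identity verified")
```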
\begin{Lemma}\label{limite_pos} Under the same assumptions as in Lemma \ref{l:blow_up} and letting $\gamma:=\lim_{t\rightarrow 0^+} {\mathcal N}(t)$ be as in Lemma \ref{l:limit}, there holds \[ \lim_{t\to 0^+}t^{-2\gamma}H(t)>0. \] \end{Lemma} \begin{pf} Let us assume by contradiction that $\lim_{t\to 0^+}t^{-2\gamma}H(t)=0$ and let $\{\widetilde V_{n,j}: j,n\in{\mathbb N},j\geq 1\}$ be the orthonormal basis of ${\mathcal L}$ introduced in Remark \ref{rem:ortho}. Since $u_{\lambda}(x,1)=u(\lambda x,\lambda^2)\in{\mathcal L}$ for all $\lambda\in (0,\sqrt T_0)$, $u_{\lambda}(x,1)\in{\mathcal H}$ for a.e. $\lambda\in (0,\sqrt T_0)$, and $f(\lambda x,\lambda^2, u_{\lambda}(x,1))\in{\mathcal H}^\star$ for a.e. $\lambda\in (0,\sqrt T_0)$, we can expand them as \begin{align}\label{eq:70} u_{\lambda}(x,1)&=\sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} u_{m,k}(\lambda) \widetilde V_{m,k}(x)\quad \text{in }{\mathcal L},\\ \notag f(\lambda x,\lambda^2, u_{\lambda}(x,1))&=\sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} \xi_{m,k}(\lambda) \widetilde V_{m,k}(x)\quad \text{in }{\mathcal H}^\star, \end{align} where \begin{equation}\label{eq:68} u_{m,k}(\lambda)=\int_{{\mathbb R}^N}u_{\lambda}(x,1)\widetilde V_{m,k}(x)G(x,1)\,dx \end{equation} and \begin{equation}\label{eq:69} \xi_{m,k}(\lambda)= {\phantom{\bigg\langle}}_{{\mathcal H}^\star}\bigg\langle f(\lambda \cdot,\lambda^2,u_{\lambda}(\cdot,1)), \widetilde V_{m,k} \bigg\rangle_{{\mathcal H}} =\int_{{\mathbb R}^N}f(\lambda x,\lambda^2, u_{\lambda}(x,1))\widetilde V_{m,k}(x)G(x,1)\,dx. \end{equation} By orthogonality of the $\widetilde V_{m,k}$'s in ${\mathcal L}$, we have that $$ H(\lambda^2)=\sum_{\substack{n,j\in{\mathbb N}\\j\geq1}} (u_{n,j}(\lambda) )^2 \geq (u_{m,k}(\lambda) )^2\quad\text{for all } \lambda\in(0,\sqrt{T_0})\text{ and }m,k\in{\mathbb N},\ k\geq1. 
$$ Hence, $\lim_{t\to 0^+}t^{-2\gamma}H(t)=0$ implies that \begin{equation}\label{eq:58} \lim_{\lambda\to 0^+}\lambda^{-2\gamma}u_{m,k}(\lambda)=0 \quad\text{for all } m,k\in{\mathbb N},\ k\geq1. \end{equation} Moreover, we can show that the function $\lambda\mapsto u_{m,k}(\lambda)$ is absolutely continuous in $(0,\sqrt{T_0})$ and $u_{m,k}'(\lambda)= {\phantom{\langle}}_{{\mathcal H}^\star}\langle \frac{d}{d\lambda}u_{\lambda}(x,1),\widetilde V_{m,k}(x) \rangle_{{\mathcal H}}$. Hence $$ \frac{d}{d\lambda}u_{\lambda}(x,1)= \sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} u_{m,k}'(\lambda) \widetilde V_{m,k}(x)\quad \text{in }{\mathcal H}^\star. $$ Furthermore, $$ \Delta u_{\lambda}(x,1)=\lambda^2\Delta u(\lambda x,\lambda^2) = \sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} u_{m,k}(\lambda) \Delta\widetilde V_{m,k}(x)\quad \text{in }{\mathcal H}^\star. $$ From \eqref{prob} and the fact that $\widetilde V_{m,k}(x)$ is an eigenfunction of the operator $L$ associated to the eigenvalue $\gamma_{m,k}$ defined in (\ref{eq:65}), it follows that \begin{align*} \dfrac{d}{d\lambda}&u_{\lambda}(x,1)=2\lambda u_{t}(\lambda x,\lambda^2)+ \nabla u(\lambda x,\lambda^2)\cdot x\\ &=2\lambda \bigg(-\Delta u(\lambda x, \lambda^2)- \frac{a(x/|x|)}{\lambda^2|x|^2}u(\lambda x, \lambda^2)- f(\lambda x,\lambda^2,u(\lambda x, \lambda^2))\bigg)+\nabla u(\lambda x,\lambda^2) \cdot x\\ &=\dfrac{2}{\lambda}\sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} u_{m,k}(\lambda)\bigg(-\Delta \widetilde V_{m,k}(x)-\frac{a(x/|x|)}{|x|^2}\widetilde V_{m,k}(x) +\frac{\nabla \widetilde V_{m,k}\cdot x}2\bigg) -2\lambda\sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} \xi_{m,k}(\lambda) \widetilde V_{m,k}(x)\\ &=\dfrac{2}{\lambda}\sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} \gamma_{m,k}u_{m,k}(\lambda)\widetilde V_{m,k}(x)-2\lambda \sum_{\substack{m,k\in{\mathbb N}\\k\geq1}} \xi_{m,k}(\lambda) \widetilde V_{m,k}(x).
\end{align*} Therefore, we have that $$ u'_{m,k}(\lambda)=\frac{2}{\lambda}\gamma_{m,k}u_{m,k}(\lambda)-2\lambda \xi_{m,k}(\lambda) \quad\text{for all }m,k\in{\mathbb N},\ k\geq 1, $$ a.e. and distributionally in $(0,\sqrt{T_0})$. By integration, we obtain, for all $\lambda,\bar\lambda\in(0,\sqrt{T_0})$, \begin{equation}\label{eq:59} u_{m,k}(\bar\lambda)= \bar\lambda^{2\gamma_{m,k}}\left(\lambda^{-2\gamma_{m,k}} u_{m,k}(\lambda)+ 2\int_{\bar\lambda}^{\lambda}s^{1-2\gamma_{m,k}}\xi_{m,k}(s)\,ds\right). \end{equation} From Lemma \ref{l:blow_up}, $\gamma$ is an eigenvalue of the operator $L$, hence, by Proposition \ref{p:explicit_spectrum}, there exist $m_0,k_0\in{\mathbb N}$, $k_0\geq 1$, such that $\gamma=\gamma_{m_0,k_0}=m_0-\frac{\alpha_{k_0}}2$. Let us denote by $E_0$ the associated eigenspace and by $J_0$ the finite set of indices $\{(m,k)\in{\mathbb N}\times({\mathbb N}\setminus\{0\}):\gamma=m-\frac{\alpha_{k}}2\}$, so that $\#J_0=m(\gamma)$, with $m(\gamma)$ as in \eqref{eq:cardinal}, and an orthonormal basis of $E_0$ is given by $\{\widetilde V_{m,k} : (m,k)\in J_0\}$. In order to estimate $\xi_{m,k}$, we distinguish between case {\bf (I)} and case {\bf (II)}.
\begin{description} \item[Case {\bf (I)}] From \eqref{eq:h}, for all $(m,k)\in J_0$, we can estimate $\xi_{m,k}$ as \begin{align}\label{eq:54} & |\xi_{m,k}(\lambda)|\leq C_h\int_{{\mathbb R}^N}(1+\lambda^{-2+\e}|x|^{-2+\e})|u(\lambda x,\lambda^2)| |\widetilde V_{m,k}(x)|G(x,1)\,dx\\ \notag&\leq C_h\bigg(\int_{{\mathbb R}^N}u^2(\lambda x,\lambda^2) G(x,1)\,dx\bigg)^{\!\!1/2}\bigg(\int_{{\mathbb R}^N}\widetilde V_{m,k}^2(x) G(x,1)\,dx\bigg)^{\!\!1/2}\\ \notag&\qquad+ C_h\lambda^{-2+\e/2}\int_{|x|\leq \lambda^{-1/2}}\frac{|u(\lambda x,\lambda^2)| |\widetilde V_{m,k}(x)|}{|x|^2}G(x,1)\,dx\\ \notag&\qquad+C_h\lambda^{-1+\e/2}\int_{|x|\geq \lambda^{-1/2}}|u(\lambda x,\lambda^2)| |\widetilde V_{m,k}(x)|G(x,1)\,dx\\ \notag&\hskip-0.5cm\leq C_h(1+\lambda^{-1+\frac\e2})\sqrt{H(\lambda^2)} +C_h\lambda^{-2+\frac\e2}\bigg(\int_{{\mathbb R}^N}{\textstyle{\frac{u^2(\lambda x,\lambda^2)} {|x|^2}}}G(x,1)\,dx\bigg)^{\!\!\frac12} \bigg(\int_{{\mathbb R}^N}{\textstyle{\frac{\widetilde V_{m,k}^2(x)} {|x|^2}G(x,1)\,dx}}\bigg)^{\!\!\frac12}. \end{align} From Corollary \ref{c:pos_per} and Lemma \ref{l:Nabove}, it follows that \begin{multline}\label{eq:55} \int_{{\mathbb R}^N}\frac{u^2(\lambda x,\lambda^2)} {|x|^2}G(x,1)\,dx =\lambda^2\int_{{\mathbb R}^N}\frac{u^2(y,\lambda^2)} {|y|^2}G(y,\lambda^2)\,dy \leq \frac{\lambda^2}{C_1'}\bigg(D(\lambda^2)+ \frac{C_2}{\lambda^2}H(\lambda^2)\bigg)\\= \frac{H(\lambda^2)}{C_1'}\big({\mathcal N}(\lambda^2)+C_2\big)\leq\frac{C_2 -\frac{N-2}{4}+C_{4}\big({\mathcal N}(T_0)+\frac{N-2}4\big) }{C_1'} H(\lambda^2), \end{multline} while, from Lemma \ref{Hardy_aniso}, for all $(m,k)\in J_0$, \begin{equation}\label{eq:56} \int_{{\mathbb R}^N}\frac{\widetilde V_{m,k}^2(x)} {|x|^2}G(x,1)\,dx\leq \bigg(\mu_1(a)+\frac{(N-2)^2}4\bigg)^{-1}\bigg(\gamma+\frac{N-2}4 \bigg). 
\end{equation} From (\ref{eq:54}), (\ref{eq:55}), (\ref{eq:56}), and Lemma \ref{stimaH}, we deduce that \begin{align}\label{eq:57} |\xi_{m,k}(\lambda)|\leq C_6 \lambda^{-2+\frac{\e}{2}+2\gamma}, \quad\text{for all }\lambda\in(0,\sqrt{T_0}) \end{align} and for some positive constant $C_6$ depending on $a,N,\gamma,h,T_0,K_1,\e$ but independent of $\lambda$ and $(m,k)\in J_0$. \medskip \item[Case {\bf (II)}] From \eqref{eq:fi} and Lemma \ref{l:sob}, for all $(m,k)\in J_0$, we can estimate $\xi_{m,k}$ as \begin{align}\label{eq:94} & |\xi_{m,k}(\lambda)|\leq C_\varphi\int_{{\mathbb R}^N}\big(|u(\lambda x,\lambda^2)|+|u(\lambda x,\lambda^2)|^p\big) |\widetilde V_{m,k}(x)|G(x,1)\,dx\\ \notag&\leq C_\varphi\bigg(\int_{{\mathbb R}^N}u^2(\lambda x,\lambda^2) G(x,1)\,dx\bigg)^{\!\!1/2}\bigg(\int_{{\mathbb R}^N}\widetilde V_{m,k}^2(x) G(x,1)\,dx\bigg)^{\!\!1/2}\\ \notag&\qquad+ C_\varphi \bigg(\int_{{\mathbb R}^N}|u(\lambda x,\lambda^2)|^{p+1}|G(x,1)|^{\frac{p+1}2}\,dx\bigg)^{\!\!\frac1{p+1}} \bigg(\int_{{\mathbb R}^N}|\widetilde V_{m,k}(x)|^{p+1}|G(x,1)|^{\frac{p+1}2}\,dx\bigg)^{\!\!\frac1{p+1}}\\ &\notag\hskip8cm\times \bigg(\int_{{\mathbb R}^N}|u(\lambda x,\lambda^2)|^{p+1}\,dx\bigg)^{\!\!\frac{p-1}{p+1}}\\ &\notag\leq C_\varphi\sqrt{H(\lambda^2)}+ C_{\varphi}C_{p+1}\lambda^{-N\frac{p-1}{p+1}}\|u_\lambda(\cdot,1)\|_{\mathcal H} \|\widetilde V_{m,k}\|_{\mathcal H}\bigg(\int_{{\mathbb R}^N}|u(y,\lambda^2)|^{p+1}\,dy \bigg)^{\!\!\frac{p-1}{p+1}}. 
\end{align} From Corollary \ref{c:pos_per_nonlin} and Lemma \ref{l:Nabove}, it follows that \begin{multline}\label{eq:92} \|u_\lambda(\cdot,1)\|_{\mathcal H}^2=\|u(\cdot,\lambda^2)\|_{\mathcal H_{\lambda^2}}^2 \leq \frac{\lambda^2}{C_1''}\bigg(D(\lambda^2)+ \frac{N-2}{4\lambda^2}H(\lambda^2)\bigg)\\= \frac{H(\lambda^2)}{C_1''}\bigg({\mathcal N}(\lambda^2)+\frac{N-2}{4}\bigg) \leq\frac{ C_{4}\big({\mathcal N}(T_0)+\frac{N-2}4\big)}{C_1''} H(\lambda^2), \end{multline} while, from Corollary \ref{c:pos_def}, for all $(m,k)\in J_0$, \begin{equation}\label{eq:95} \|\widetilde V_{m,k}\|_{\mathcal H}\leq {\rm const\,}\bigg(\gamma+\frac{N-2}4 \bigg). \end{equation} From (\ref{eq:94}), (\ref{eq:92}), (\ref{eq:95}), and Lemma \ref{stimaH}, we deduce that \begin{align}\label{eq:91} |\xi_{m,k}(\lambda)|\leq C_7 \lambda^{-2+\frac{N+2-p(N-2)}{p+1}+2\gamma}, \quad\text{for all }\lambda\in(0,\sqrt{T_0}) \end{align} and for some positive constant $C_7$ depending on $\|u\|_{L^{\infty}(0,T, L^{p+1}({\mathbb R}^N))}$, $a$, $N$, $\gamma$, $\varphi$, $T_0$, $K_1$, $p$, but independent of $\lambda$ and $(m,k)\in J_0$. \end{description} Collecting (\ref{eq:57}) and (\ref{eq:91}), we have that \begin{align}\label{eq:93} |\xi_{m,k}(\lambda)|\leq C_8 \lambda^{-2+\tilde\delta+2\gamma}, \quad\text{for all }\lambda\in(0,\sqrt{T_0}) \end{align} for some $C_8>0$ which is independent of $\lambda$ and $(m,k)\in J_0$ and $$ \tilde\delta= \begin{cases} \e/2, &\text{in case {\bf (I)}},\\[5pt] \frac {N+2-p(N-2)}{p+1}, &\text{in case {\bf (II)}}. \end{cases} $$ Estimate (\ref{eq:93}) implies that the function $s\mapsto s^{1-2\gamma}\xi_{m,k}(s)$ belongs to $L^1(0,\sqrt{T_0})$. Therefore, letting $\bar \lambda\to 0^+$ in (\ref{eq:59}) and using (\ref{eq:58}), we deduce that, for all $\lambda\in(0,\sqrt{T_0})$, \begin{equation}\label{eq:60} u_{m,k}(\lambda)= -2\lambda^{2\gamma}\int_0^\lambda s^{1-2\gamma}\xi_{m,k}(s)\,ds. 
\end{equation} From (\ref{eq:93}) and (\ref{eq:60}), we obtain that, for all $(m,k)\in J_0$ and $\lambda\in(0,\sqrt{T_0})$, \begin{equation}\label{eq:61} |u_{m,k}(\lambda)|\leq \frac{2C_8}{\tilde\delta}\lambda^{2\gamma+\tilde\delta}. \end{equation} Let us fix $\sigma\in \big(0,\tilde\delta)$; by Lemma \ref{stimaH}, there exists $K_2(\sigma)$ such that $$ H(\lambda^2)\geq K_2(\sigma)\lambda^{2(2\gamma+\sigma)} \quad\text{for }\lambda\in(0,\sqrt{T_0}). $$ Therefore, in view of (\ref{eq:61}), for all $(m,k)\in J_0$ and $\lambda\in(0,\sqrt{T_0})$, $$ \frac{|u_{m,k}(\lambda)|}{\sqrt{H(\lambda^2)}}\leq \frac{2C_8}{\tilde\delta \sqrt{K_2(\sigma)}}\lambda^{\tilde\delta-\sigma} $$ and hence \begin{equation}\label{eq:62} \frac{u_{m,k}(\lambda)}{\sqrt{H(\lambda^2)}}\to 0\quad \text{as }\lambda\to 0^+. \end{equation} On the other hand, by Lemma \ref{l:blow_up}, for every sequence $\lambda_n\to 0^+$, there exists a subsequence $\{\lambda_{n_j}\}_{j\in{\mathbb N}}$ and an eigenfunction $g\in E_0\setminus\{0\}$ of the operator $L$ associated to $\gamma$ such that \begin{equation*} \frac {u_{\lambda_{n_j}}(x,1)}{\sqrt{H(\lambda_{n_j}^2)}} \to g\quad \text{in }{\mathcal L}\quad\text{as }j\to+\infty, \end{equation*} thus implying, for all $(m,k)\in J_0$, \begin{equation}\label{eq:63} \frac{u_{m,k}(\lambda_{n_j})}{\sqrt{H(\lambda_{n_j}^2)}} =\left(\frac {u_{\lambda_{n_j}}(x,1)}{\sqrt{H(\lambda_{n_j}^2)}}, \widetilde V_{m,k}\right)_{\mathcal L}\to(g, \widetilde V_{m,k})_{\mathcal L}\quad\text{as }j\to+\infty. \end{equation} From (\ref{eq:62}) and (\ref{eq:63}), we deduce that $(g,\widetilde V_{m,k})_{\mathcal L}=0$ for all $(m,k)\in J_0$. Since $g\in E_0$ and $\{\widetilde V_{m,k} : (m,k)\in J_0\}$ is an orthonormal basis of $E_0$, this implies that $g=0$, a contradiction. 
\end{pf} We now complete the description of the asymptotics of solutions by combining Lemmas \ref{l:blow_up} and \ref{limite_pos} and obtaining convergence of the blown-up solution as $\lambda\to 0^+$, and not only along subsequences, thus proving Theorems \ref{asym1} and \ref{asym2}. \medskip \begin{pfn}{Theorems \ref{asym1} and \ref{asym2}} Identities (\ref{eq:671}) and (\ref{eq:672}) follow from part (i) of Lemma \ref{l:blow_up} and Proposition \ref{p:explicit_spectrum}, which imply that there exists an eigenvalue $\gamma_{m_0,k_0}=m_0-\frac{\alpha_{k_0}}2$ of $L$, $m_0,k_0\in{\mathbb N}$, $k_0\geq 1$, such that $\gamma=\lim_{t\to 0^+}{\mathcal N}(t)=\gamma_{m_0,k_0}$. Let $E_0$ be the associated eigenspace and $J_0$ the finite set of indices $\{(m,k)\in{\mathbb N}\times({\mathbb N}\setminus\{0\}):\gamma_{m_0,k_0}=m-\frac{\alpha_{k}}2\}$, so that $\{\widetilde V_{m,k} : (m,k)\in J_0\}$, with the $\widetilde V_{m,k}$'s as in Remark \ref{rem:ortho}, is an orthonormal basis of $E_0$. Let $\{\lambda_n\}_{n\in{\mathbb N}}\subset (0,+\infty)$ be such that $\lim_{n\to+\infty}\lambda_n=0$. Then, from part (ii) of Lemma \ref{l:blow_up} and Lemmas \ref{l:limite} and \ref{limite_pos}, there exist a subsequence $\{\lambda_{n_k}\}_{k\in{\mathbb N}}$ and real numbers $\{\beta_{n,j}:(n,j)\in J_0\}$ such that $\beta_{n,j}\neq 0$ for some $(n,j)\in J_0$ and, for any $\tau\in(0,1)$, \begin{equation}\label{eq:74} \lim_{k\to+\infty}\int_\tau^1 \bigg\|\lambda_{n_k}^{-2\gamma}u(\lambda_{n_k} x,\lambda_{n_k}^2t) -t^{\gamma}\sum_{(n,j)\in J_0}\beta_{n,j}\widetilde V_{n,j}(x/\sqrt t) \bigg\|_{{\mathcal H}_t}^2dt=0 \end{equation} and \begin{equation}\label{eq:75} \lim_{k\to+\infty}\sup_{t\in[\tau,1]} \bigg\|\lambda_{n_k}^{-2\gamma}u(\lambda_{n_k} x,\lambda_{n_k}^2t) -t^{\gamma}\sum_{(n,j)\in J_0}\beta_{n,j}\widetilde V_{n,j}(x/\sqrt t) \bigg\|_{{\mathcal L}_t}=0.
\end{equation} In particular, \begin{equation}\label{eq:71} \lambda_{n_k}^{-2\gamma}u(\lambda_{n_k} x,\lambda_{n_k}^2) \mathop{\longrightarrow}\limits_{k\to+\infty} \sum_{(n,j)\in J_0}\beta_{n,j}\widetilde V_{n,j}(x) \quad\text{in }{\mathcal L}. \end{equation} We now prove that the $\beta_{n,j}$'s depend neither on the sequence $\{\lambda_n\}_{n\in{\mathbb N}}$ nor on its subsequence $\{\lambda_{n_k}\}_{k\in{\mathbb N}}$. Let us fix $\Lambda\in(0,\sqrt{T_0})$ and define $u_{m,i}$ and $\xi_{m,i}$ as in (\ref{eq:68}-\ref{eq:69}). By expanding $u_{\lambda}(x,1)=u(\lambda x,\lambda^2)\in{\mathcal L}$ in Fourier series as in (\ref{eq:70}), from (\ref{eq:71}) it follows that, for any $(m,i)\in J_0$, \begin{equation}\label{eq:72} \lambda_{n_k}^{-2\gamma}u_{m,i}(\lambda_{n_k}) \to\sum_{(n,j)\in J_0} \beta_{n,j}\int_{{\mathbb R}^N} \widetilde V_{n,j}(x)\widetilde V_{m,i}(x)G(x,1)\,dx=\beta_{m,i} \end{equation} as $k\to+\infty$. As deduced in the proof of Lemma \ref{limite_pos} (see (\ref{eq:59})), for any $(m,i)\in J_0$ and $\lambda\in(0,\Lambda)$ there holds \begin{align}\label{eq:73} u_{m,i}(\lambda)= \lambda^{2\gamma}\left(\Lambda^{-2\gamma} u_{m,i}(\Lambda)+ 2\int_{\lambda}^{\Lambda}s^{1-2\gamma}\xi_{m,i}(s)\,ds\right). \end{align} Furthermore, arguing again as in Lemma \ref{limite_pos} (see (\ref{eq:93})), $s\mapsto s^{1-2\gamma}\xi_{m,i}(s)$ belongs to $L^1(0,\sqrt{T_0})$. Hence, combining (\ref{eq:72}) and (\ref{eq:73}), we obtain, for every $(m,i)\in J_0$, \begin{align*} \beta_{m,i}&=\Lambda^{-2\gamma} u_{m,i}(\Lambda)+ 2\int_{0}^{\Lambda}s^{1-2\gamma}\xi_{m,i}(s)\,ds\\ &=\Lambda^{-2\gamma} \int_{{\mathbb R}^N}u(\Lambda x,\Lambda^2)\widetilde V_{m,i}(x)G(x,1)\,dx\\ &\hskip3cm+2\int_0^{\Lambda}s^{1-2\gamma} \bigg( \int_{{\mathbb R}^N}f(s x,s^2, u(sx,s^2))\widetilde V_{m,i}(x)G(x,1)\,dx \bigg) ds. 
\end{align*} In particular the $\beta_{m,i}$'s depend neither on the sequence $\{\lambda_n\}_{n\in{\mathbb N}}$ nor on its subsequence $\{\lambda_{n_k}\}_{k\in{\mathbb N}}$, thus implying that the convergences in (\ref{eq:74}) and (\ref{eq:75}) actually hold as $\lambda\to 0^+$ and proving the theorems. \end{pfn} \noindent The strong unique continuation property is a direct consequence of Theorems \ref{asym1} and \ref{asym2}. \begin{pfn}{Corollary \ref{cor:uniq_cont}} Let us assume by contradiction that $u\not\equiv 0$ in ${\mathbb R}^N\times (0,T)$ and fix $k\in{\mathbb N}$ such that $k>\gamma$, with $\gamma=\gamma_{m_0,k_0}$ as in Theorems \ref{asym1} and \ref{asym2}. From assumption (\ref{eq:uniq_cont}), it follows that, for a.e. $(x,t)\in{\mathbb R}^N\times (0,1)$, \begin{equation}\label{eq:97} \lim_{\lambda\to 0^+}|\lambda^{-2\gamma}t^{-\gamma}u(\lambda x,\lambda^2 t)|=0. \end{equation} On the other hand, from Theorems \ref{asym1} and \ref{asym2}, it follows that there exists $g\in{\mathcal H}\setminus\{0\}$ such that $g$ is an eigenfunction of the operator $L$ associated to $\gamma$ and, for all $t\in(0,1)$ and a.e. $x\in{\mathbb R}^N$, $$ \lambda^{-2\gamma}t^{-\gamma}u(\lambda x,\lambda^2 t) \to g(x/\sqrt t), $$ which, in view of (\ref{eq:97}), implies $g\equiv 0$, a contradiction. \end{pfn} \textbf{Acknowledgements.} The authors would like to thank Prof. Susanna Terracini for her interest in their work and for helpful comments and discussions.
\section{INTRODUCTION} \label{Introduction} Despite the fundamental role they play in gravitational-wave astronomy, no undisputed observational evidence of the existence of supermassive binary black hole (SMBBH) systems has been found yet. However, circumstantial evidence does exist for a number of potential candidates. This is the case, for instance, of the radio galaxy \textit{0402+379}, which shows a projected separation between the two black holes of $7.3\, \rm {pc}$ and a total mass of $\sim 1.5\times 10^8M_\odot$~\citep{Rodriguez2006}. Similarly, the ultraluminous infrared galaxy \textit{NGC6240} shows two optical nuclei and is thought to be in an early merger phase~\citep{Komossa2003}. Finally, potential candidates have been suggested in a few other cases where the two galaxies are more widely separated, as in the pair \textit{IC694/NGC3690}, which hosts two active nuclei, as revealed by the presence of two distinct K$\alpha$ lines in their X-ray spectra \citep{Ballo2004}. In addition, there is a large family of even more uncertain SMBBH candidates that are spatially unresolved and whose ultimate nature is, of course, a matter of strong debate. They include the class of X-shaped radio galaxies, in which the observed changes in the orientation of the black hole spin axis could be due to an ongoing merger with a second black hole~\citep{Gopal2003}; the class of double-double radio galaxies, presenting a pair of double-lobed radio structures that could be the remnants of an SMBBH merger event \citep{Liu2003}; and, finally, the class of sources showing periodicities in their light curves, as in the BL Lac object \textit{OJ287}~\citep{Komossa2006}. Quite recently, the quasar \textit{SDSSJ0927} (at a redshift $z\approx 0.7$) has also been identified as a promising SMBBH candidate, with a mass for the primary black hole of $M\approx 2 \times 10^9 M_\odot$ and a semi-major axis of $0.34\, {\rm pc}$~\citep{Dotti2009}.
A strong motivation for studying the merger of supermassive binary black hole systems comes from the fact that their gravitational signal will be detected by the planned Laser Interferometer Space Antenna (LISA), whose optimum sensitivity is placed in the range $(10^{-4} \div 0.1) \ \rm{Hz}$. Considerable attention has therefore been recently attracted by the possibility of detecting also the electromagnetic (EM) counterpart of these events through the emission coming from the circumbinary accretion disc that is expected to form when the binary is still widely separated. Such a detection will not only act as a confirmation of the gravitational wave (GW) detection, but it will also provide a new tool for testing a number of fundamental astrophysical issues~\citep{Haiman2009b}. More specifically, it will offer the possibility of testing models of galaxy mergers and accretion-disc perturbations, probing basic aspects of gravitational physics, and allowing for the measurement of the Eddington ratio and for the determination of cosmological parameters once the redshift is known \citep{Phinney2009}. In spite of the disturbing effects due to weak-lensing errors,~\citet{Kocsis2006} have computed the average number of quasars in the three-dimensional LISA error volume and have shown that for mergers with masses in the range $\sim 4\times (10^5\div 10^7)M_\odot$ at redshift $z\sim 1$, the error box may contain $1$ quasar with a luminosity $L_B\sim (10^{10}\div 10^{11}) L_\odot$ ~\citep[see also][]{Kocsis:2008,Kocsis:2008b}. As a product of this increased interest, a number of studies have been recently carried out to investigate the properties of these EM counterparts either during the stages that precede the merger, or in those following it.
As an example, recent work has considered the interaction between the binary and a dense gas cloud~\citep{Armitage:2002,vanMeter:2009gu,Bode:2009mt,Farris:2009mt,Lodato2009,Chang2009}, even though astrophysical considerations seem to suggest that during the very final stages of the merger the SMBBH will inspiral in a rather tenuous intergalactic medium. At the same time, other scenarios not involving matter have also been considered. In these cases the SMBBH is considered to be inspiralling in vacuum but in the presence of an external magnetic field which is anchored to the circumbinary disc~\citep{Palenzuela:2009yr, Moesta:2009}. The extensive analysis of~\citet{Moesta:2009}, in particular, has shown that, even though the electromagnetic radiation in the lowest $\ell= 2$ and $m = 2$ multipole accurately reflects the gravitational one, the energy emitted in EM waves is $13$ orders of magnitude smaller than the one emitted in GWs, thus making the direct detection of the two different signals very unlikely. The situation changes in the post-merger phase. In this case, in fact, the EM counterpart is expected to be mainly due to the radiation from the circumbinary accretion disc and, because of that, it will contain an imprint of any strong dynamical change produced on the disc by the merger event. There are indeed two major such dynamical effects. The first one is the abrupt reduction of the rest-mass of the binary, emitted away in GWs, which is a function of the binary mass ratio and amounts to up to $\simeq 10\%$ for equal-mass spinning systems~\citep{Reisswig:2009vc}. The second one is the recoil of the merged system, resulting in a {\em kick} velocity of the final black hole with respect to the hosting galaxy (see~\citet{Bekenstein1973} and~\citet{Redmount:1989} for a first discussion of the process and~\citet{Rezzolla:2008sd} for a recent review).
Leaving aside possible problems due to the actual value of the kick velocity, which in some cases could be even larger than the escape velocity, it is clear that both events mentioned above can significantly affect the dynamics of the circumbinary disc, mainly as they contribute to the formation and propagation of shocks, thus enhancing the possibility of a strong EM counterpart. After the first smoothed-particle-hydrodynamics approach to the dynamical evolution of circumbinary discs performed by~\citet{Artymowicz1994}, several additional numerical investigations have been proposed in the very recent past. \citet{MacFadyen2008}, for example, performed two-dimensional hydrodynamical simulations and studied in detail the evolution of the binary separation and of the disc eccentricity. By perturbing Keplerian orbits of collisionless test particles, on the other hand,~\citet{Lippai:2008} found a clear spiral shock pattern in the plane of the disc as a response to the kick. By performing pseudo-Newtonian numerical simulations of Keplerian discs,~\citet{Oneill2009} have recently questioned the contribution of the shocks to the expected bremsstrahlung emissivity, while~\citet{Megevand2009} showed that the intensity of the bremsstrahlung luminosity is not much affected by the magnitude of the kick velocity, provided this is less than the smallest orbital speed of the fluid. Although they represent the first fully general-relativistic calculations of this process, the simulations of~\citet{Megevand2009} used unrealistically small discs which were also placed extremely close to the recoiling black hole. As a considerable improvement over all the previous investigations,~\citet{Corrales2009} have carried out a systematic study of the effects of the mass loss and recoil over a number of $\alpha$-discs in Newtonian gravity and two dimensions.
While confirming the existence of spiral shocks, they also provided a first realistic estimate of the resulting enhanced luminosity, which can be as large as a few $\times 10^{43}\,\rm{erg/s}$ when the disc is assumed to be extremely efficient in radiating any local increase of the temperature. Very interesting results have also been obtained by \citet{Rossi2010}, who estimated the maximum disc-to-hole mass ratio that would be stable against fragmentation due to self-gravity to be $M_d/M\sim 6\times10^{-4}$ for a supermassive black hole with mass $M=10^6 M_\odot$. In addition, by performing three-dimensional but Newtonian SPH simulations of geometrically thin discs, they found that the emitted luminosity corresponding to such small disc-to-hole mass ratios is unlikely to make the EM counterpart visible via wide-area sky surveys. In this paper we present the results of two-dimensional relativistic numerical simulations of \textit{extended} circumbinary discs in the post-merger phase, when the disc reacts to the mass loss of the central black hole and to the received kick velocity. By accurately capturing the dynamics of the perturbed disc in the relativistic regime, we investigate the dependence of the accretion rate on the black-hole spin and on the kick velocity. At the same time, we introduce a new technique to locate the shocks that are potentially produced by the recoil and can therefore assess under what conditions a spiral pattern can develop, producing a variability in the accretion rate and, hence, in the luminosity. Our ``shock detector'' is based on the analysis of the initial states of the Riemann problem solved at each cell interface and can therefore determine the location of the shock with great precision, thus revealing that the previously proposed criteria for the occurrence of the shock are often inaccurate.
To compare with the general-relativistic calculations performed by~\citet{Megevand2009}, our initial models consider {\em small-size} discs with an inner radius at $r\sim 40 M$ and an outer one at $r\sim 120 M$. In addition, however, we also study the dynamics of {\em large-size} discs with an inner radius at $r\sim 400 M$ and an outer one at $r\sim 4700 M$. These configurations have almost Keplerian distributions of angular momentum and are therefore closer to what is believed to be a realistic configuration for a circumbinary disc. Furthermore, because the mass in the discs is always much smaller than the mass of the black hole (\textit{i.e.}~ with a mass ratio $\sim 10^{-3}$), we solve the equations of relativistic hydrodynamics in the fixed spacetime of the final black hole. At first sight it may appear that the use of general-relativistic hydrodynamics is unnecessary when simulating astrophysical systems such as the ones considered here, especially in the case of large-size discs. Such a view, however, does not take into account that much of the dynamics in these EM counterparts takes place near the black-hole horizon, where general-relativistic effects are not only large but essential for a correct physical description. Moreover, we do not have any firm theoretical basis to exclude small-size discs, for which the relativistic corrections are non-negligible. Finally, even in a scenario in which gravity could be approximated by Newtonian theory, we cannot exclude the importance of special-relativistic effects. As with all of the above-mentioned investigations, our approach suffers from the absence of a fully consistent treatment of the radiation transfer, thus allowing only for tentative conclusions about the energetics involved in circumbinary accretion discs.
However, we extend to the relativistic framework the strategy reported in \citet{Corrales2009} of performing an {\em isothermal} evolution as a tool to extract luminosity curves more realistic than those obtained from thermal bremsstrahlung, although it exaggerates some features of the dynamics, such as the formation of shocks. The paper is organized as follows. In Section~\ref{Numerical_method} we provide the essential information about the numerical code adopted in the simulations. Section~\ref{Initial_models} describes the physical properties of the initial models, while Section~\ref{Monitored_quantities} highlights the most relevant diagnostic quantities used in the rest of the paper. Section~\ref{Results} is devoted to the presentation of the results, and, finally, Section~\ref{Conclusions} contains a summary of our work. We assume a signature $\{-,+,+,+\}$ for the space-time metric and we will use Greek letters (running from $0$ to $3$) for four-dimensional space-time tensor components, while Latin letters (running from $1$ to $3$) will be employed for three-dimensional spatial tensor components. Moreover, we set $c=G=1$ and we extend the geometric units by setting $m_p/k_B=1$, where $m_p$ is the mass of the proton and $k_B$ is the Boltzmann constant. In this way the temperature is a dimensionless quantity.
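In these extended geometric units, a dimensionless temperature $p/\rho$ is mapped back to Kelvin simply through the factor $m_p c^2/k_B \simeq 1.09\times 10^{13}\,{\rm K}$, which is the same conversion quoted later in Section~\ref{Initial_models}. A minimal sketch of this bookkeeping (the function name and cgs constants are ours):

```python
# Conversion of the dimensionless temperature p/rho (geometric units
# with c = G = 1 and m_p/k_B = 1) into Kelvin.  The factor is
# m_p c^2 / k_B evaluated in cgs units.
M_P = 1.6726e-24      # proton mass [g]
K_B = 1.3807e-16      # Boltzmann constant [erg/K]
C_LIGHT = 2.9979e10   # speed of light [cm/s]


def temperature_kelvin(p_over_rho):
    """Dimensionless temperature p/rho -> physical temperature [K]."""
    return (M_P * C_LIGHT**2 / K_B) * p_over_rho
```

Evaluating the prefactor reproduces the $1.088\times 10^{13}$ coefficient used later in the text.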
\section{Numerical methods} \label{Numerical_method} In the stationary spacetime of a Schwarzschild or Kerr black hole we consider the time evolution of a perfect fluid described by the usual energy-momentum tensor \begin{equation} T_{\mu\nu}=\rho h\,u_{\,\mu}u_{\nu}+pg_{\,\mu\nu}, \label{eq:T_matter} \end{equation} where $u_\mu$ is the four-velocity of the fluid, $g_{\,\mu\nu}$ is the space-time metric tensor, $\rho$ is the rest-mass density, $h=1+\epsilon + p/\rho$ the specific enthalpy (including the rest-mass energy contribution), $\epsilon$ the specific internal energy, and $p$ the thermal pressure, related to $\rho$ and $\epsilon$ through the usual ideal-gas equation of state (EOS) \begin{equation} p=\rho\epsilon(\gamma-1) \ , \end{equation} where $\gamma$ is the (constant) adiabatic ratio of the gas. We solve the corresponding equations of general-relativistic non-dissipative hydrodynamics through the \texttt{ECHO} code~\citep{DelZanna2007}. Because the dynamics of the EM emission takes place on a timescale of the order of the orbital one, and because the latter is much shorter than the viscous timescale\footnote{We recall that in geometrically thin accretion discs the local viscous timescale is given by $t_{vis} \simeq r^2/({\tilde \alpha} H^2 \Omega)$, where ${\tilde \alpha}$ is the standard alpha parameter and $H$ is the half-thickness of the disc.}, the use of inviscid hydrodynamics is indeed a very good approximation. \texttt{ECHO} adopts a $3+1$ split of spacetime in which the space-time metric is decomposed according to \begin{equation} \mathrm{d}s^2 = \! -\alpha^2\mathrm{d}t^2+\gamma_{ij}\, (\mathrm{d}x^i\!+\beta^i\mathrm{d}t)(\mathrm{d}x^j\!+\beta^j\mathrm{d}t), \label{eq:adm} \end{equation} where $\alpha$ is the lapse function, $\beta^i$ is the shift vector, and $\gamma_{ij}$ is the spatial metric tensor.
The general-relativistic hydrodynamical equations are written in the following conservative form \begin{equation} \partial_t\vec{\mathcal{U}} + \partial_i\vec{\mathcal{F}}^i=\vec{\mathcal{S}}, \label{eq:UFS} \end{equation} which is appropriate for numerical integration via standard high-resolution shock-capturing (HRSC) methods developed for the Euler equations. The conservative variables and the corresponding fluxes in the $i$ direction are respectively given by \begin{equation} \vec{\mathcal{U}}\equiv\sqrt{\gamma}\left[\begin{array}{c} D \\ \\ S_j \\ \\U \end{array}\right],~~~ \vec{\mathcal{F}}^i\equiv\sqrt{\gamma}\left[\begin{array}{c} \alpha v^i D-\beta^i D \\\\ \alpha W^i_j-\beta^i S_j \\\\ \alpha S^i-\beta^i U \end{array}\right] , \label{eq:fluxes} \end{equation} whereas the sources, in any stationary background metric, can be written as \begin{equation} \vec{\mathcal{S}} \equiv \sqrt{\gamma}\left[\begin{array}{c} 0 \\ \\ \frac{1}{2}\alpha W^{ik}\partial_j\gamma_{ik}+ S_i\partial_j\beta^i-U\partial_j\alpha \\ \\ \frac{1}{2}W^{ik}\beta^j\partial_j\gamma_{ik}+{W_i}^j\partial_j\beta^i -S^j\partial_j\alpha \end{array}\right], \end{equation} where only purely spatial quantities are present. We note that $\sqrt{\gamma}\equiv \sqrt{-g}/\alpha$, where $\gamma$ is the determinant of the spatial metric. The relation between the evolved conservative variables $(D,S_j,U)$ and the primitive variables is given by \begin{eqnarray} &&D \equiv \rho\Gamma ,\\ &&S_i \equiv \rho h \Gamma^2 v_i, \\ &&U \equiv \rho h \Gamma^2 - p, \label{eq:cons} \end{eqnarray} where $\Gamma=(1-v^2)^{-1/2}$ is the Lorentz factor of the bulk flow with respect to the Eulerian observer associated with the $3+1$ splitting of the spacetime, and \begin{equation} W_{ij} \equiv \rho h \Gamma^2 v_i v_j +p \gamma_{ij}, \label{eq:W} \end{equation} is the fully spatial component of the energy-momentum tensor.
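As an illustration, the primitive-to-conserved map above can be sketched as follows, restricting to orthonormal (flat-space) components so that $v^2=\sum_i v_i^2$; the function name and interface are ours:

```python
import math


def prim_to_cons(rho, v, p, gamma=4.0 / 3.0):
    """Map the primitive variables (rho, v_i, p) to the conserved set
    (D, S_i, U) of the text, for an ideal-gas EOS p = rho*eps*(gamma-1).
    `v` is a tuple of 3-velocity components; orthonormal components are
    assumed, so the contraction v_i v^i is a plain Euclidean sum."""
    v2 = sum(vi * vi for vi in v)
    lorentz = 1.0 / math.sqrt(1.0 - v2)          # Gamma = (1 - v^2)^(-1/2)
    eps = p / (rho * (gamma - 1.0))              # specific internal energy
    h = 1.0 + eps + p / rho                      # specific enthalpy
    D = rho * lorentz                            # rest-mass density
    S = tuple(rho * h * lorentz**2 * vi for vi in v)   # momentum density
    U = rho * h * lorentz**2 - p                 # energy density
    return D, S, U
```

For a static fluid ($v_i=0$) this reduces to $D=\rho$, $S_i=0$ and $U=\rho(1+\epsilon)$, as expected.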
In our setup for two dimensional disc simulations we assume the Kerr spacetime metric in Boyer-Lindquist coordinates (\textit{i.e.}~ only $\beta^\phi\neq 0$), with the limiting case of Schwarzschild metric for vanishing black-hole spins, and lay our coordinates in the equatorial plane of the disc (\textit{i.e.}~ $\theta=\pi/2$). The radial numerical grid is discretised by choosing $N_r$ points from $r_\mathrm{min}$ to $r_\mathrm{max}$, non-uniformly distributed according to the following scheme \begin{eqnarray} r_i &=& r_\mathrm{min} + a_1 \tan{(a_2 x_i)} \\ x_i &=& (\tilde{r}_i-r_\mathrm{min})/(r_\mathrm{max}-r_\mathrm{min}) \end{eqnarray} where $a_1=(r_\mathrm{max}-r_\mathrm{min})/a_0$, $a_2=\arctan{a_0}$, while $\tilde{r}_i$ are the coordinate points of the uniform grid from $r_\mathrm{min}$ to $r_\mathrm{max}$. In practice, the free parameter $a_0$ controls the extent to which the gridpoints of the original uniform grid are concentrated towards $r_\mathrm{min}$, and we have chosen $a_0=5$ in most of our simulations. The actual value of $N_r$ depends on the size of the disc, and it varies between $N_r=600$ and $N_r=1200$. Outflow boundary conditions are adopted both at $r_\mathrm{min}$ and $r_\mathrm{max}$. The azimuthal grid extends from $0$ to $2\pi$, with periodic boundary conditions, and $N_\phi=200$. All runs are performed with a Courant-Friedrichs-Lewy coefficient ${\rm CFL}=1/2$. The set of hydrodynamics equations is discretised in time with the method of lines and the evolution is performed with a second-order modified Euler scheme. A fifth-order finite-difference algorithm based on an upwind \emph{monotonicity preserving} filter is employed for spatial reconstruction of primitive variables, whereas a two-wave HLL Riemann solver is used to ensure the shock-capturing properties ~\citep[see][for further details]{DelZanna2007}. 
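The tangent-stretched radial grid described above concentrates points towards $r_\mathrm{min}$ while recovering $r_\mathrm{max}$ exactly at the outer edge, since $a_1\tan(a_2)=(r_\mathrm{max}-r_\mathrm{min})$. A minimal sketch of its construction (function name is ours; at least two points are assumed):

```python
import math


def radial_grid(r_min, r_max, n_r, a0=5.0):
    """Non-uniform radial grid r_i = r_min + a1*tan(a2*x_i), with
    a1 = (r_max - r_min)/a0 and a2 = arctan(a0), built on top of a
    uniform grid r~_i of n_r points (n_r >= 2).  Larger a0 gives a
    stronger clustering of points towards r_min."""
    a1 = (r_max - r_min) / a0
    a2 = math.atan(a0)
    grid = []
    for i in range(n_r):
        r_tilde = r_min + (r_max - r_min) * i / (n_r - 1)  # uniform grid
        x = (r_tilde - r_min) / (r_max - r_min)
        grid.append(r_min + a1 * math.tan(a2 * x))
    return grid
```

For instance, with $r_\mathrm{min}=40M$, $r_\mathrm{max}=120M$ and $a_0=5$ the innermost cells are several times finer than the outermost ones.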
The timestep is generically chosen to be sufficiently small so that the second-order truncation error in time is comparable with the fifth-order one in space. As a final remark we note that, as customary in HRSC methods, we introduce a tenuous and static ``atmosphere'' in the regions of the fluid outside the initial model for the disc and follow the prescription detailed in~\citet{Baiotti04} for its evolution. In practice we set to zero the velocity field and reset to a pre-defined floor value the rest-mass density of any cell whose density falls below the chosen threshold value. Such a threshold is set to be $8$ orders of magnitude below the maximum rest-mass density and we have checked that essentially identical results are obtained when changing this value by one or more orders of magnitude. \section{Initial models} \label{Initial_models} As initial models we adopt stationary and axisymmetric configurations that are consistent solutions of the relativistic Euler equations and describe a fluid in sub-Keplerian rotation around a Kerr black hole of prescribed mass and spin~\citep{Abramowicz78}. The resulting discs are geometrically thick and they can either have a constant or, more generally, a non-constant radial distribution of the specific angular momentum $\ell$. In our simulations we have considered both \textit{``small-size''} models, for which we adopt a constant distribution of $\ell$, and \textit{``large-size''} models, with a distribution of $\ell$ that, on the equatorial plane, obeys a power law \begin{equation} \label{power_law} \ell (r, \theta = \pi/2) = {\cal S} r^q , \end{equation} where ${\cal S}$ is chosen to be positive, thus providing a disc rotation that is prograde with respect to the black hole rotation. A detailed description of the equilibrium models for non-constant specific angular momentum discs can be found in~\citet{Daigne04}.
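For rotation laws of this type, the ``cusp'' and the ``centre'' discussed below are the two radii at which $\ell(r)$ crosses the Keplerian profile, which for a Schwarzschild black hole reads $\ell_K(r)=\sqrt{M}\,r^{3/2}/(r-2M)$. A minimal root-finding sketch, valid only for the non-rotating case and with function names of our choosing:

```python
import math


def ell_kepler(r, M=1.0):
    """Keplerian specific angular momentum l = -u_phi/u_t of circular
    geodesics in the Schwarzschild spacetime (geometric units)."""
    return math.sqrt(M) * r**1.5 / (r - 2.0 * M)


def keplerian_points(S, q, M=1.0, r_lo=2.2, r_hi=500.0, n=50000):
    """Locate the cusp and the centre of a thick disc with
    l(r) = S r^q as the inner and outer radii where l crosses the
    Keplerian profile.  Simple sign-change scan plus bisection; a
    sketch that assumes exactly two crossings exist in [r_lo, r_hi]."""
    f = lambda r: S * r**q - ell_kepler(r, M)
    crossings = []
    r_prev, f_prev = r_lo, f(r_lo)
    for k in range(1, n + 1):
        r = r_lo + (r_hi - r_lo) * k / n
        fr = f(r)
        if f_prev * fr <= 0.0:            # bracketed a root: bisect it
            a, b = r_prev, r
            for _ in range(60):
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            crossings.append(0.5 * (a + b))
        r_prev, f_prev = r, fr
    return crossings[0], crossings[-1]    # (cusp, centre)
```

With the constant distribution $\ell=8$ ($q=0$) of the small-size models, the centre lands at $r\simeq 59.8\,M$, in agreement with the value of $r_{\rm c}$ quoted for model \texttt{S.00} in Table~\ref{tab1}.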
In particular, we have chosen a value of ${\cal S}$ such that the resulting thick discs possess two well-defined Keplerian points, namely the ``cusp'' (which is where matter can accrete onto the black hole) and the ``centre'' (which is where the pressure gradient vanishes); \textit{cf.}~ Table~\ref{tab1}. When the exponent $q$ in~(\ref{power_law}) is chosen close to $1/2$, the rotation law tends to the Keplerian one, and the disc flattens towards the equatorial plane. In these circumstances the vertical structure of the disc can be essentially neglected and two-dimensional simulations are therefore indicative of the full three-dimensional dynamics. It is worth mentioning that discs with a rotation law given by~(\ref{power_law}) have been the subject of a long-standing debate about whether they are subject to the so-called ``runaway instability''~\citep{Abramowicz83}, which would lead to an exponentially rapid accretion onto the black hole~\citep{Font02a,Font02b,Zanotti03,Daigne04,Zanotti05,Montero07}. Because the onset and development of this instability depend on the response of the torus to the increased mass of the black hole, simulating this instability accurately also requires the evolution of the Einstein equations. Recent calculations of this type have been performed by~\citet{Montero09} and reveal that the tori are indeed stable irrespective of the angular momentum distribution, thus excluding any role of the runaway instability in the dynamics of the discs simulated here. However, as we will further comment in Sect.~\ref{Shock_propagation}, other non-axisymmetric instabilities are possible and have indeed been found. Table~\ref{tab1} reports the main properties of the models chosen, where the naming convention used allows one to easily distinguish the small-size models (\texttt{S*}) from the large-size ones (\texttt{L*}), and where the number \texttt{*} in the name refers to the spin of the black hole, thus ranging between $0.00$ and $0.99$.
As already commented in the Introduction, while the small-size models are particularly suitable for investigating any effect of the black hole spin, the large-size models are those that are (astro)physically more realistic. The inner radius of these large-size models is typically of a few hundreds of gravitational radii and represents the size of the cavity produced by the torque of the SMBBH, as estimated from the expression deduced from Table~1 of ~\citet{Milosavljevic05} \begin{equation} r_{\rm cavity}\simeq \left(\frac{117}{\alpha_{-1}^{0.34}}\right) \left(\frac{\eta_{-1}}{\dot{M}_{\rm Edd}}\right)^{0.24} \left(\frac{M}{10^6M_\odot}\right)^{0.08}\!\!\! [4q/(1+q)^2]^{0.42}, \end{equation} where $\alpha_{-1} \equiv \alpha/0.1$ and $\eta_{-1} \equiv \eta/0.1$ ($\alpha$ and $\eta$ being the effective $\alpha$-parameter of thin accretion discs and the radiative efficiency, respectively), $\dot{M}_{\rm Edd}$ is the mass accretion rate in Eddington units and $q$ is the mass ratio between the two coalescing black holes. By construction, the recoil velocities that can be studied in our setup are those contained in the equatorial (\textit{i.e.}~ $(r, \phi)$) plane and, because it is much more advantageous to study the dynamics of the disc in a reference frame comoving with the black hole, we impose a net velocity field in addition to the equilibrium orbital one. In practice, at time $t=0$ we perform a Lorentz boost of the fluid velocity along the radial direction with $\phi=0$, thus mimicking a recoil velocity of the black hole in the radial direction with $\phi=\pi$. We treat the imparted recoil velocity $V_{\rm k}$ essentially as a free parameter ranging from $V_{\rm k}=100\,\rm{km}/\rm{s}$ to $V_{\rm k}=3000\,\rm{km}/\rm{s}$, where the largest values are not realistic and serve here only to appreciate the disc dynamics under extreme conditions.
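The cavity-radius estimate above (in units of $M$) is straightforward to evaluate; the sketch below uses a function name and fiducial defaults of our choosing ($\alpha=0.1$, $\eta=0.1$, $\dot{M}_{\rm Edd}=1$, $M=10^6 M_\odot$, $q=1$):

```python
def cavity_radius(alpha=0.1, eta=0.1, mdot_edd=1.0, mass=1.0e6, q=1.0):
    """Cavity radius (in units of M) carved by the binary torque,
    following the fit quoted in the text (after Milosavljevic &
    Phinney).  alpha: disc alpha-parameter; eta: radiative efficiency;
    mdot_edd: accretion rate in Eddington units; mass: black-hole mass
    in solar masses; q: binary mass ratio."""
    a1 = alpha / 0.1          # alpha_{-1}
    e1 = eta / 0.1            # eta_{-1}
    return (117.0 / a1**0.34) * (e1 / mdot_edd)**0.24 \
        * (mass / 1.0e6)**0.08 * (4.0 * q / (1.0 + q)**2)**0.42
```

With the fiducial parameters this gives $r_{\rm cavity}\simeq 117\,M$, and unequal-mass binaries ($q<1$) shrink the cavity through the last factor.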
We recall, in fact, that the recoil velocities in the orbital plane are expected to be $\lesssim 450\,\rm{km}/\rm{s}$~\citep{Koppitz-etal-2007aa, Herrmann:2007ac, Pollney:2007ss:shortal}. In addition to the recoil, in some initial models we also consider the effects of the mass lost to gravitational waves, which we account for by first computing the initial model in the gravitational potential of the full black hole mass, and then evolving it in the gravitational potential of the reduced mass. As a reference value we consider a decrease in the mass of $\sim 3\%$, thus corresponding to that obtained from the typical merger of equal-mass spinning black holes with spins anti-aligned with the orbital angular momentum ~\citep[see][Fig.~$11$]{Reisswig:2009vc}. Higher values of the mass loss do not introduce qualitative changes in the overall dynamics. A final comment is devoted to the EOS of the initial model, which we choose to be that of a polytrope $p=\kappa \rho^\gamma$, with $\gamma=4/3$ or $\gamma=5/3$. We recall that a peculiar property of these equilibrium models is that the ratio $p/\rho$, and therefore the temperature $T$ and the sound speed $c_s$, do not depend on the polytropic constant $\kappa$\footnote{The argument consists in proving that the function $\Psi \equiv \kappa(n+1)\Theta$, where $\gamma=1+1/n$ and $\rho=\Theta^n$, does not depend on $\kappa$. From this it follows that $p/\rho=\kappa\Theta$ does not depend on $\kappa$ either.}, which, on the other hand, determines the mass of the disc~\citep[see][Appendix B]{Rezzolla_qpo_03b}. As a result, once the size of the torus is fixed, the temperature is also uniquely determined and cannot be rescaled further.
The last column in Table~\ref{tab1} reports such a temperature at the centre of the torus, $r_{\rm c}$, as estimated from the ideal-gas EOS via the expression \begin{equation} \label{t_estimate} T=\frac{m_p}{k_B}\frac{p}{\rho}, \end{equation} where, we recall, $k_B$ is the Boltzmann constant and $m_p$ the rest-mass of the proton. In geometric units and with $m_p/k_B=1$ the transformation of the temperature from dimensionless values to Kelvin degrees is given by \begin{equation} T = 1.088\times 10^{13}\left(\frac{p}{\rho}\right) \, \, \, K \ . \end{equation} \begin{table*} \begin{center} \caption{Main properties of the initial models. From left to right the columns report the name of the model, the black hole spin parameter $a$, the mass of the black hole, the disc-to-hole mass ratio, the power-law index $q$ of the angular momentum distribution and the parameter ${\cal S}$ (only for models with a non-constant distribution of the specific angular momentum), the constant value of the specific angular momentum $\ell$ (only for models with a constant distribution of the specific angular momentum), the adiabatic index $\gamma$, the inner and the outer radius of the torus, $r_{\rm in}$ and $r_{\rm out}$, the radius of the maximum rest-mass density $r_{\rm c}$, the orbital period at the radius of maximum rest-mass density $\tau_{\rm c}$, the maximum temperature $T_{\rm c}$, and the orbital velocity $|v^{\phi}|=(v_{\phi}v^{\phi})^{1/2}$ at $r_{\rm in}$.
} \label{tab1} \begin{tabular}{l|ccccc|cccc|cccc} \hline \hline Model & $J/M^2$ & $M$ & $M_{\rm d}/M$ & $ q $ & ${\cal S}$ & $\ell$ & $\gamma$ & $r_{\rm in}$ & $r_{\rm out}$ & $r_{\rm c}$ & $\tau_{\rm c}$ & $T_{\rm c}$ & $|v^{\phi}|_{\rm in}$ \\ & & $(M_{\odot})$ & & & & & & $(M)$ & $(M)$ & $(M)$ & & $(K)$ & $({\rm km/s})$\\ \hline \texttt{S.00} & $0.00$ & $1.0\times 10^{6}$ & $2.0\times 10^{-3}$ & $-$ & $-$ & $8.0$ & $4/3$ & $40.0$ & $118.2$ & $59.8$ & $3.97$ (h)& $5.4\times 10^{9}$&$57000$\\ \texttt{S.25} & $0.25$ & $1.0\times 10^{6}$ & $2.1\times 10^{-3}$ & $-$ & $-$ & $8.0$ & $4/3$ & $40.0$ & $120.0$ & $60.0$ & $4.00$ (h)& $5.6\times 10^{9}$&$57000$\\ \texttt{S.50} & $0.50$ & $1.0\times 10^{6}$ & $2.2\times 10^{-3}$ & $-$ & $-$ & $8.0$ & $4/3$ & $40.0$ & $121.7$ & $60.2$ & $4.02$ (h)& $5.7\times 10^{9}$&$57000$\\ \texttt{S.75} & $0.75$ & $1.0\times 10^{6}$ & $2.1\times 10^{-3}$ & $-$ & $-$ & $8.0$ & $4/3$ & $40.0$ & $123.5$ & $60.4$ & $4.04$ (h)& $5.8\times 10^{9}$&$57000$\\ \texttt{S.99} & $0.99$ & $1.0\times 10^{6}$ & $2.1\times 10^{-3}$ & $-$ & $-$ & $8.0$ & $4/3$ & $40.0$ & $125.2$ & $60.6$ & $4.06$ (h)& $5.9\times 10^{9}$&$57000$\\ \hline \texttt{L.00} & $0.00$ & $1.0\times 10^{6}$ & $1.0\times10^{-4}$ & $0.4950$ & $1.037$ & $-$ & $4/3$ & $403.6$ & $4713.7$ & $984.6$ & $11.06$ (d)& $8.7\times 10^{6}$&$14970$\\ \texttt{L.00.MA} & $0.00$ & $1.0\times 10^{6}$ & $1.0\times10^{-4}$ & $0.4950$ & $1.037$ & $-$ & $5/3$ & $403.6$ & $4713.7$ & $984.6$ & $11.06$ (d)& $1.4\times 10^{7}$&$14970$\\ \texttt{L.90} & $0.90$ & $1.0\times 10^{6}$ & $1.0\times 10^{-4}$ & $0.4955$ & $1.033$ & $-$ & $4/3$ & $400.9$ & $4760.0$ & $988.4$ & $11.13$ (d)& $7.8\times 10^{6}$&$15030$\\ \hline \hline \end{tabular} \begin{flushleft} \end{flushleft} \end{center} \end{table*} \section{Methodology of the analysis} \label{Monitored_quantities} In what follows we discuss in detail the physical quantities computed during the evolution, either as representatives of the global evolution or of
the local one. \subsection{Global quantities} \label{Global_quantities} In addition to the local Eulerian fluid variables, during the evolution we also monitor a few global quantities that are very helpful for interpreting the main properties of the dynamics. These are: the rest-mass, the internal energy and the accretion rate at the innermost radial point of the grid, each of which is computed as \begin{eqnarray} \label{mass} &&M_{{\rm disc}} \equiv 2 H\int \sqrt{\gamma} D dr d\phi , \\ \label{int_enrgy} &&E_{\rm int} \equiv 2 H \int\epsilon\sqrt{\gamma} D dr d\phi , \\ \label{dotm} &&\dot{M}(r_{\rm min}) \equiv - 2 H \int \alpha \sqrt{\gamma} D v^r d\phi \ . \end{eqnarray} Note that when computing the volume integral we consider the discs to have a half-thickness $H$ which is assumed to be constant in radius, \textit{i.e.}~ with $H\sim c_s/\Omega$ as in the standard thin-disc approximation, with $c_s$ the sound speed and $\Omega$ the orbital velocity. In addition to~(\ref{mass})--(\ref{dotm}) we also compute a few more diagnostic quantities that, on the contrary, rely on simplified assumptions reflecting the fact that the implementation does not account for processes such as radiation transfer and viscous dissipation. In particular, we compute the bremsstrahlung emissivity from electron-proton collisions as~\citep{Rybicki_Lightman1986} \begin{eqnarray} \epsilon_{{\rm BR}}&\simeq& 2.0\times 10^{-27}T^{1/2}Z_i^2n_e n_i \ \ {\rm erg} \ \ {\rm cm}^{-3} \ \ {\rm s}^{-1} \\ &\simeq& 7.14\times 10^{20}T^{1/2}\rho_{{\rm cgs}}^2 \ \ {\rm erg} \ \ {\rm cm}^{-3} \ \ {\rm s}^{-1} , \end{eqnarray} where $n_e$ and $n_i\simeq n_e$ are the number densities of electrons and ions (protons), respectively, while $T$ is the equilibrium temperature of both electrons and protons.
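The numerical coefficient in the second line follows from setting $n_e=n_i=\rho/m_p$ in the first. A quick check in Python (the function name is ours; a pure-hydrogen plasma with $Z_i=1$ is assumed):

```python
M_P_CGS = 1.6726e-24   # proton mass [g]


def brems_emissivity(T, rho_cgs):
    """Thermal bremsstrahlung emissivity [erg cm^-3 s^-1] of a
    hydrogen plasma (n_e = n_i = rho/m_p, Z_i = 1), following the
    expression quoted in the text: eps_BR ~ 2.0e-27 T^(1/2) n_e n_i."""
    n = rho_cgs / M_P_CGS          # electron (= ion) number density
    return 2.0e-27 * T**0.5 * n * n
```

Evaluating the density-dependent prefactor at $T=1\,{\rm K}$, $\rho=1\,{\rm g\,cm^{-3}}$ recovers the coefficient $7.14\times 10^{20}$ of the second line, and the emissivity scales as $T^{1/2}\rho^2$ as expected.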
The bremsstrahlung luminosity is then obtained after performing the volume integral \begin{equation} \label{brem_geo} L_{{\rm BR}}\simeq 3 \times 10^{78} \int \left( T^{1/2}\rho^2 \Gamma \sqrt{\gamma}d^3x\right) \left(\frac{M_\odot}{M}\right) \ \ {\rm erg}/{\rm s} \ , \end{equation} where the large multiplicative factor comes from the fact that both $T$ and $\rho$ in~(\ref{brem_geo}) are expressed in geometrized units. \subsection{A relativistic ``shock detector''} \label{Shock_detection} An obvious expectation, which has been confirmed by all of the numerical simulations to date, is that, as the recoiling black hole moves in the plane of the accretion disc, it will introduce spiral shocks, which will move outwards on a timescale comparable with the orbital one. Because determining the accurate position of the shocks is important to correlate them to the EM emission, a number of suggestions have been made in the literature, with varying degrees of precision. In particular,~\citet{Lippai:2008, Oneill2009, Megevand2009} all just looked at density and/or pressure gradients to infer the propagation of a spiral caustic and, therefore, of a possible shock (we note that in the collisionless-particle treatment of~\citet{Lippai:2008}, the existence of a shock is purely indicative, as no shocks can be produced in this approximation). On the other hand, \citet{Rossi2010} used the introduction of an artificial viscosity, which is itself related to local density increases, to identify the location of shocks. Finally,~\citet{Corrales2009} used a shock detector present in the \texttt{FLASH} code, which marks a given region as a shocked one if $\vec{\nabla} \cdot \vec{v} < 0$ and if the pressure difference between the monitored zone and at least one of its neighbours exceeds the difference expected from the Rankine-Hugoniot jump condition for a shock of a pre-specified minimum Mach number.
While more robust than those considered by the other authors, this prescription is also a delicate one, as we will discuss in Sec.~\ref{Shock_propagation}. All of the methods mentioned above contain rather empirical criteria and can fail to detect shocks unless they are very strong. To improve the determination of the location of the shocks, even when these are arbitrarily weak, we have devised a relativistic ``shock detector'' which exploits an idea discussed in all its details in~\citet{Rezzolla02} and~\citet{Rezzolla03}, and which consists essentially in the possibility of predicting the outcome of the wave pattern in a Riemann problem. (We note that a similar detector can be prescribed also for non-relativistic flows; the interested reader can find a detailed discussion in \textsection 100 of~\citet{Landau-Lifshitz6}.) To illustrate the logic of our shock detector let us suppose that along a given direction, say the $x$-direction, two adjacent fluid elements $1$ and $2$ manifest a jump in the hydrodynamical quantities, such as pressure, density and velocity, thus reproducing the typical conditions of a local Riemann problem. In the absence of magnetic fields, the time evolution of a Riemann problem consists in the propagation along opposite directions of two nonlinear waves, either rarefactions or shocks, separated by a third wave, the contact discontinuity. As a result, a shock front will be produced if the wave pattern generated by the Riemann problem contains at least one shock wave, while the other wave can be a rarefaction wave.
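In practice, a shock is flagged whenever the special-relativistic relative velocity between two adjacent states exceeds a threshold obtained by integrating along the isentrope connecting them. The sketch below specializes the criterion to vanishing tangential velocities, for which the integrand reduces to $1/(h\rho c_s)$, and uses a simple midpoint quadrature; the function names and interfaces are ours:

```python
import math


def sound_speed(p, rho, gamma):
    """Isentropic sound speed for an ideal-gas EOS."""
    return math.sqrt(gamma * (gamma - 1.0) * p /
                     ((gamma - 1.0) * rho + gamma * p))


def shock_threshold(p1, rho1, p2, gamma=4.0 / 3.0, n_quad=2000):
    """Threshold relative velocity for the Riemann problem between
    states 1 and 2 to produce a wave pattern containing a shock, in
    the simplified case of vanishing tangential velocities, where the
    integrand reduces to 1/(h rho c_s).  The integration runs along
    the isentrope p = p1 (rho/rho1)^gamma, using the midpoint rule."""
    integral = 0.0
    dp = (p2 - p1) / n_quad
    for k in range(n_quad):
        p = p1 + (k + 0.5) * dp
        rho = rho1 * (p / p1)**(1.0 / gamma)   # isentropic density
        eps = p / (rho * (gamma - 1.0))
        h = 1.0 + eps + p / rho                # specific enthalpy
        integral += dp / (h * rho * sound_speed(p, rho, gamma))
    return math.tanh(abs(integral))


def is_shock(v1, v2, p1, rho1, p2, gamma=4.0 / 3.0):
    """Flag a shock when the special-relativistic relative velocity
    v12 = (v1 - v2)/(1 - v1 v2) exceeds the threshold."""
    v12 = (v1 - v2) / (1.0 - v1 * v2)
    return v12 > shock_threshold(p1, rho1, p2, gamma)
```

Identical states give a vanishing threshold, while a large relative velocity across even a modest pressure jump is correctly flagged as shock-producing.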
As shown by~\citet{Rezzolla02}, there is a simple criterion for predicting the occurrence of a wave pattern containing a shock wave and this amounts to the requirement that the relative velocity between the two states $1$ and $2$ (\textit{i.e.}~ between two adjacent fluid cells) is larger than a threshold value \begin{equation} \label{condition} v_{12}>({\tilde v}_{12})_{_{SR}}, \end{equation} where $({\tilde v}_{12})_{_{SR}}$ is a function of the thermodynamic states of $1$ and $2$, while $v_{12}\equiv(v_1-v_2)/(1-v_1 v_2)$ is the special relativistic expression for the relative velocity. When there are nonzero velocities in the direction tangential to the discontinuity front, the analytic form of $({\tilde v}_{12})_{_{SR}}$ is given by~\citep{Rezzolla03} \begin{equation} \label{capo2_analytic} (\widetilde{v}_{12})_{_{SR}}\equiv \tanh\left(\int_{p_1}^{p_2} \frac{\sqrt{h^2 + {\cal A}^2_1(1-c_s^2)}} {(h^2 + {\cal A}^2_1)\rho~c_s} dp \right)\ , \end{equation} where ${\cal A}_1 \equiv h_1 W_1 v^y_1$ while $c_s$ is the sound speed. If, on the contrary, the relative velocity $v_{12}$ is smaller than $({\tilde v}_{12})_{_{SR}}$, then no shock wave can be produced and the wave pattern of the corresponding Riemann problem consists of two rarefaction waves propagating in opposite directions. With this idea in mind we have constructed a sensitive shock detector to locate the regions of the disc which are producing a spiral shock. In practice we first select the direction along which we want to monitor the propagation of shock waves. Secondly, since (\ref{condition}) and~(\ref{capo2_analytic}) have been derived in a flat spacetime, we project the velocity field $v^j$ in a local tetrad in Boyer-Lindquist coordinates so as to obtain the new components $v^{\hat j}$ \begin{eqnarray} \label{tetrad1} &&v^{\hat r}=\sqrt{g_{rr}}v^r , \\ \label{tetrad2} &&v^{\hat \phi}=\sqrt{g_{\phi\phi}}v^\phi \ . 
\end{eqnarray} Thirdly, we calculate $v^{\hat x}$ and $v^{\hat y}$ from $v^{\hat r}$ and $v^{\hat \phi}$ through a simple rotation. Finally, we compute the integral (\ref{capo2_analytic}) in terms of the hatted Cartesian components and compare the result with $v_{12}$. Note that the integral~(\ref{capo2_analytic}) effectively provides the minimum value for the occurrence of a wave pattern containing a single shock wave. In the limit of $(\widetilde{v}_{12})_{_{SR}}\rightarrow v_{12}$, in fact, the pressure jump across the shock wave becomes vanishingly small and a single rarefaction wave joining $p_1$ and $p_2$ propagates in the direction opposite to that of the vanishing shock wave. Therefore, when computing~(\ref{capo2_analytic}) we are actually integrating inside the rarefaction wave, which is notoriously a self-similar, and hence isentropic, solution. This means that in evaluating~(\ref{capo2_analytic}) we can use the isentropic expression for the sound speed \begin{equation} \label{sound_speed} c_s=\sqrt{\frac{\gamma(\gamma-1)p} {(\gamma-1)\rho+\gamma p}}\ , \end{equation} where the density $\rho$ is given in terms of $p$ from $p=p_1 (\rho/\rho_1)^\gamma$. \begin{figure*} \centering {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_T0_038.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_T0_038.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_T0_300.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_T0_300.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_T0_623.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_T0_623.png}} \caption{Rest-mass density distributions (left columns) and shock structure (right columns) at three different times (\textit{i.e.}~ $t=6.07, 47.90$ and $99.46\,{\rm h}$) for model \texttt{S.00} and a recoil velocity $V_{\rm k}=300\, {\rm km/s}$.
Note that the last panel refers to almost $25$ orbital revolutions. The rest-mass density is plotted on a logarithmic scale and in ${\rm cgs}$ units, while the shock structure is obtained by plotting the quantity $S_d$ (see beginning of Sec.~\ref{Shock_propagation} for a definition); shock waves can form in regions where $S_d>0$. Note that it is very hard to locate a shock by simply looking at the density distribution.} \label{fig1} \end{figure*} \begin{figure*} \centering {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/sound_T0_623.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/mac_T0_623.png}} \caption{Sound speed normalized to the kick velocity $c_s/V_{\rm k}$ (left panel) and relativistic Mach number ${\cal M}$ (right panel) at $t=99.46\,{\rm h}$ for model \texttt{S.00} when subject to a recoil of $V_{\rm k}=300\, {\rm km/s}$. Note that $V_{\rm k} \geq c_s$ is not a good criterion for the localization of the shock (\textit{cf.}~ left panel) and that no obvious correlation is present between the supersonicity of the flow and the appearance of the shock (\textit{cf.}~ right panel). } \label{fig2} \end{figure*} The procedure described above is completely general and can be proposed as an efficient shock detector for numerical relativistic hydrodynamics. However, two subtleties should also be taken into account.
The first subtlety is that, for more complicated spacetimes or coordinates systems, the flat-spacetime projection~(\ref{tetrad1})--(\ref{tetrad2}) should be replaced by the more general form \begin{equation} v^{\hat{i}}=M_{~j}^{\hat{i}} v^j, \end{equation} with $M_{~j}^{\hat{i}}$ given by~\citep{Pons1998} \begin{equation} \displaystyle{ M_{~j}^{\hat{i}} = \left( \begin{array}{ccc} \sqrt{\gamma_{11}} & \frac{-\gamma^{12} \gamma_{22}+\gamma^{13} \gamma_{23} }{\gamma^{11} \sqrt{\gamma_{22}}} & \frac{-\gamma^{13} \sqrt{\gamma_{22} \gamma_{33}-(\gamma_{23})^2} }{\gamma^{11} \sqrt{\gamma_{22}}} \cr & & \cr 0 & \sqrt{\gamma_{22}} & 0 \cr & & \cr 0 & \frac{\gamma_{23}}{\sqrt{\gamma_{22}}} & \frac{\sqrt{\gamma_{22} \gamma_{33}-(\gamma_{23})^2} }{\sqrt{\gamma_{22}}} \cr \end{array} \right) \ . } \end{equation} The second subtlety concerns the fact that, because the shock detector validates the inequality~(\ref{condition}), it can be arbitrarily sensitive. Although this certainly represents an advantage, one often wishes to disregard the whole class of weak shocks, for which the contribution to the entropy jump is of higher order and $\Delta s\propto (\Delta p)^3$~\citep{Thorne73}. In these cases the weakest shocks can be filtered out by making the condition~(\ref{condition}) somewhat more restrictive and require therefore that a shock is detected if \begin{equation} \label{condition2} v_{12}>{\tilde v}_{12}=({\tilde v}_{12})_{_{SR}}+\chi\left[({\tilde v}_{12})_{_{2S}} - ({\tilde v}_{12})_{_{SR}}\right] \end{equation} where \begin{equation} (\widetilde{v}^x_{12})_{_{2S}} =\frac{(p_1-p_2)(1-v^x_2 \bar{V}_s)} {(\bar{V}_s-v^x_2)\{h_2 \rho_2 (\Gamma_2)^2 [ 1-(v^x_2)^2 ] + p_1 - p_2\}} \ , \end{equation} with $\bar{V}_s$ being the velocity of the shock wave propagating towards state $2$~\citep[see][for the explicit expression]{Rezzolla03}. 
Because $({\tilde v}_{12})_{_{2S}} \geq ({\tilde v}_{12})_{_{SR}}$, any value of $\chi$ between $0$ and $1$ will effectively raise the threshold for the detection of the shocks, filtering out the weakest ones; the shocks encountered in the simulations reported here were all rather weak and we have therefore always used $\chi=0$. The whole procedure is repeated for as many directions as there are dimensions in the problem. Finally, a prescription of the relativistic shock detector as adapted for Newtonian fluids is presented in Appendix~\ref{appendixA}. \section{Results} \label{Results} \subsection{Small-size models} \label{Small_size_models} Although the small-size models are not astrophysically very realistic, as they presume the existence of small tori in equilibrium near the recoiling black hole, they serve as a basis for comparison with the other general-relativistic calculations of~\citet{Megevand2009}, where similar tori were considered. In addition, by being so close to the black hole, they are helpful in capturing those features of the dynamics that are most influenced by the regions of strong gravity. However, because of their limited extensions and high densities/temperatures (as an example, model \texttt{S.00} has $\rho_{{\rm c}}=3.38\times 10^{-3}{\rm g/cm^3}$ and $T_{{\rm c}}=7.9\times 10^8 K$), they will not be used to draw any conclusion on the emitted luminosity, which will instead be discussed in more detail when analyzing the large-size models in Sec.~\ref{Large_size_models}. \subsubsection{Shock Dynamics} \label{Shock_propagation} The different panels in the left column of Fig.~\ref{fig1} show the rest-mass density at three different times (\textit{i.e.}~ $t=6.07, 47.90$ and $99.46\,{\rm h}$) for model \texttt{S.00} and a recoil velocity $V_{\rm k}=300\, {\rm km/s}$.
Although the imparted velocity is rather small (but close to the maximum possible in the orbital plane), the disc undergoes large variations in size and density, with a shock front that expands from the inner parts of the disc in an initially axisymmetric manner. This is essentially due to the reduction in the black-hole mass, which moves all of the equilibrium orbits to larger radii. As the influence area of the black hole becomes larger and the orbital velocities become comparable with that of the recoil, the disc develops shocks with the characteristic spiral structure discussed also in previous works~\citep{Lippai:2008, Corrales2009, Rossi2010}, which transports angular momentum outwards. This is shown by the panels in the right column of Fig.~\ref{fig1}, which report the location of the shocks as obtained with the procedure illustrated in Sec.~\ref{Shock_detection}. More specifically, they show the quantity $S_d \equiv {\rm {max}}\{0,v^x_{12} - \widetilde{v}^x_{12}, v^y_{12} - \widetilde{v}^y_{12}\}$, whereby any positive value of $S_d$ marks a shocked region. Note that the region very close to the black-hole horizon, namely at radii smaller than $\sim 10 M$, is always a highly shocked one. Furthermore, as the evolution proceeds and the disc expands, first in response to the mass loss and subsequently to the shocks, very little (if any) of the computational domain is filled by atmosphere, thus effectively removing any role it can play in the dynamics of the disc. Figure~\ref{fig2} provides additional information about the dynamics of the disc by showing the local sound speed normalized to the kick velocity, and the relativistic Mach number, the latter computed as ${\cal M}\equiv \Gamma v/\Gamma_s c_s$, where $\Gamma_s\equiv 1/\sqrt{1-c_s^2}$ is the Lorentz factor of the sound speed~\citep{Konigl1980}.
Our results show that the criterion suggested by~\citet{Corrales2009} for the occurrence of shocks, namely $V_{\rm k}\geq c_s$, can be rather misleading in the relativistic context. Indeed, as shown by the snapshots at time $t=99.46\,{\rm h}$ in Fig.~\ref{fig1}, which refer to almost $25$ orbital revolutions, a clear spiral shock forms even if the sound speed is more than $30$ times larger than the kick velocity. If anything, the left panel of Fig.~\ref{fig2} suggests that $c_s/V_{\rm k} \gg 1$ is possibly a reasonable necessary condition for the approximate location of the shock. In addition, the correlation between the occurrence of a shock and the local sound speed is very weak, as is apparent in the right panel of Fig.~\ref{fig2}, where it is clear that the flow is highly supersonic in the inner regions of the disc and mildly subsonic in the outer regions. Yet, the spiral-shock structure extends continuously across the whole disc. We also note that although the precise morphology of the spiral shocks will depend on the spin of the black hole, this dependence is only very weak and all the considerations made above for model \texttt{S.00} hold true qualitatively also for spinning black holes. It is finally worth remarking that the shocks formed here are very mild and not relativistic. Even for $V_{\rm k}=3000\, {\rm km/s}$, the shock velocity maintains a typical value $V_s\sim 0.15$ and the velocity jump at the shocks is also rather limited, with $v/\Delta v\sim 30$. This means, for instance, that such shocks are unable to accelerate electrons through the classical mechanism of \citet{Bell1978}.
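To make the detector logic of Sec.~\ref{Shock_detection} concrete, the following Python sketch evaluates the threshold integral~(\ref{capo2_analytic}) numerically along the isentrope of state $1$ and returns the shock indicator along one direction. It is only an illustration, not the code used in the simulations: it assumes an ideal-fluid EOS with $h = 1 + \gamma p/[(\gamma-1)\rho]$, flat-space (tetrad) velocity components, geometrized units ($c=1$), a simple trapezoidal quadrature, and an absolute-value treatment of the signs; all function names are ours.

```python
import math

GAMMA = 4.0 / 3.0  # polytropic index (illustrative choice)

def sound_speed(p, rho):
    # isentropic sound speed, Eq. (sound_speed), with c = 1
    return math.sqrt(GAMMA * (GAMMA - 1.0) * p /
                     ((GAMMA - 1.0) * rho + GAMMA * p))

def threshold_v_SR(p1, rho1, vx1, vy1, p2, n=2000):
    """Numerical evaluation of (v12)_SR: the minimum relative velocity
    for the Riemann problem between states 1 and 2 to contain a shock."""
    W1 = 1.0 / math.sqrt(1.0 - vx1**2 - vy1**2)   # Lorentz factor of state 1
    h1 = 1.0 + GAMMA / (GAMMA - 1.0) * p1 / rho1  # specific enthalpy (ideal EOS)
    A1 = h1 * W1 * vy1                            # tangential term A_1 = h_1 W_1 v^y_1
    integral, dp = 0.0, (p2 - p1) / n
    for i in range(n + 1):
        p = p1 + i * dp
        rho = rho1 * (p / p1) ** (1.0 / GAMMA)    # density along the isentrope of state 1
        h = 1.0 + GAMMA / (GAMMA - 1.0) * p / rho
        cs = sound_speed(p, rho)
        f = math.sqrt(h**2 + A1**2 * (1.0 - cs**2)) / ((h**2 + A1**2) * rho * cs)
        integral += (0.5 if i in (0, n) else 1.0) * f * dp  # trapezoidal rule
    return math.tanh(abs(integral))

def S_d(p1, rho1, vx1, vy1, p2, vx2):
    """Shock indicator along one direction: positive values mark a shock."""
    v12 = (vx1 - vx2) / (1.0 - vx1 * vx2)         # special-relativistic relative velocity
    return max(0.0, abs(v12) - threshold_v_SR(p1, rho1, vx1, vy1, p2))
```

Taking the maximum of this indicator over both coordinate directions reproduces, in spirit, the two-dimensional quantity $S_d$ plotted in the right columns of the figures; the full criterion is direction-sensitive in its signs, which this sketch deliberately simplifies.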
\begin{figure} {\includegraphics[angle=0,width=9.0cm]{./figures/T0_no_kick.pdf}} \vspace{-2.0cm} \caption{Time evolution of the internal energy when normalized to its initial value (top panel) and the corresponding power spectrum (bottom panel) in a model with $V_{\rm k}=0$ and with a mass loss of $1\%$ of the initial mass of the black hole. } \label{fig4} \end{figure} \subsubsection{Mass loss and Quasi-Periodic Dynamics} The dynamics of the disc can change considerably if the black hole is assumed to be recoiling with negligible velocity in the orbital plane and only mass loss is taken into account. By considering mass losses in the range $1\%-10\%$,~\citet{Oneill2009} showed that shocks can form even in the absence of a recoil velocity, provided that the mass loss is larger than the half thickness of the disc. The perturbation induced by the mass loss is spherically symmetric and it causes the disc to expand, as each fluid element moves towards the larger radius corresponding to the equilibrium orbit for its initial angular momentum. Together with this expansion, however, restoring forces will also induce the disc to contract in the effective-potential well of the black hole with the characteristic frequency of the lowest-order $p$-mode, which is not too different from the epicyclic frequency at the disc centre~\citep{Rezzolla_qpo_03b}. We recall that the restoring force responsible for the appearance of such $p$ modes is a combination of pressure gradients, centrifugal and gravitational forces, with the last two playing the dominant roles for the discs considered here~\citep{Kato2001,Rezzolla_qpo_03b}. \begin{figure} {\includegraphics[angle=0,width=9.0cm]{./figures/accretion_versus_spin_T.pdf}} \vspace{-2.0cm} \caption{Mass accretion rate measured at $r=r_{\rm{min}}$ for different values of the black-hole spin parameter in the small-size models, \textit{i.e.}~ \texttt{S.00}--\texttt{S.99}.
Shown in the inset as a function of the black-hole spin is the stationary accretion rate reached after $5\,{\rm d}$.} \label{fig3} \end{figure} The oscillating behavior induced by the sudden change of the potential well and the subsequent development of the instability is shown in Fig.~\ref{fig4}, where the top panel reports the time evolution of a typical global quantity, \textit{i.e.}~ the internal energy, when normalized to its initial value. Interestingly, the remarkable periodicity that characterizes the dynamics is the same as found by~\citet{Zanotti03} when studying global modes of oscillation of thick discs around black holes. The corresponding power spectrum, shown in the bottom panel of Fig.~\ref{fig4} and obtained through an FFT of the time series for $t\lesssim 3\,{\rm d}$, reveals the presence of a fundamental mode of oscillation at $f\sim 4.17\times 10^{-5}\, {\rm Hz}$ and of two overtones. The first overtone is at $o_1\sim 6.28\times 10^{-5}\, {\rm Hz}$, while the second one, very close to twice the fundamental frequency, $o_2\sim 8.37\times 10^{-5}\, {\rm Hz}\sim 2 f$, is produced by nonlinear coupling of the fundamental mode with itself~\citep[see][]{Zanotti05}. Collectively, these oscillation modes have frequencies in the same ratio as the integers $2:3:4$ observed in the QPOs of low-mass X-ray binaries containing a black hole~\citep[see][for a recent review]{Remillard2006}, for which a simple model based on basic $p$-mode oscillations of a small accretion torus orbiting close to the black hole was recently proposed~\citep{Rezzolla_qpo_03a}; this remains one of the most convincing explanations of the observed phenomenology~\citep{Schnittman06}. \begin{figure} {\includegraphics[angle=0,width=9.0cm]{./figures/accretion_versus_kick_T.pdf}} \vspace{-2.0cm} \caption{Mass accretion rate measured at $r=r_{\rm{min}}$ for the small-size model \texttt{S.00} and different values of the recoil velocity.
Shown in the inset as a function of the recoil velocity is the relative baryon mass accreted after $3\,{\rm d}$.} \label{fig5} \end{figure} It is difficult not to note that the harmonic behaviour shown in the top panel of Fig.~\ref{fig4} is lost at $t\simeq 3\,{\rm d}$ and that the internal energy increases monotonically after that. This is due to the onset of a non-axisymmetric instability which produces spiral arms that rapidly spread to cover the whole disc. An instability of this type was not pointed out by~\citet{Megevand2009}, although they used similar models; we believe this is probably because their simulations were interrupted after $\sim 11\,{\rm h}$ (or $\sim 6$ orbital periods as measured at the point of maximum rest-mass density), which is too early for the development of the instability. On the other hand, instabilities of this type in non-Keplerian discs have been discussed by a number of authors, starting from the pioneering work by~\citet{Papaloizou84}. A detailed comparison between the linear perturbative analysis of these instabilities and two-dimensional numerical simulations in a Schwarzschild spacetime was already proposed more than twenty years ago by~\citet{Blaes1988}, who found the development of the same spiral structures, which transport both mass and angular momentum outwards even in the absence of mass loss\footnote{We have verified that the spiral arms do indeed develop also in the absence of a mass loss or of a recoil and that also in this case the instability takes place after $\sim 4\,{\rm d}$.}. While we cannot concentrate here on a detailed discussion of these instabilities, it is sufficient to remark that a spiral-shock pattern, and all of the associated phenomenology, can be generated even when the recoil velocity is zero.
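The mode identification described above can be illustrated with a minimal, self-contained sketch. For transparency it uses a naive discrete Fourier transform and a deliberately simple peak-picking step (an FFT library and a windowed spectrum would be used in practice); this is our illustration, not the analysis pipeline actually employed for Fig.~\ref{fig4}.

```python
import cmath
import math

def power_spectrum(series, dt):
    """One-sided power spectrum |DFT|^2 of a uniformly sampled series."""
    n = len(series)
    mean = sum(series) / n
    data = [x - mean for x in series]            # remove the DC component
    freqs, power = [], []
    for k in range(1, n // 2):
        c = sum(data[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        freqs.append(k / (n * dt))               # frequency of bin k
        power.append(abs(c) ** 2)
    return freqs, power

def dominant_frequencies(series, dt, nmodes=3):
    """Return the nmodes strongest spectral peaks, sorted by frequency."""
    freqs, power = power_spectrum(series, dt)
    # keep only local maxima of the spectrum
    peaks = [(power[i], freqs[i]) for i in range(1, len(power) - 1)
             if power[i] > power[i - 1] and power[i] > power[i + 1]]
    peaks.sort(reverse=True)                     # strongest peaks first
    return sorted(f for _, f in peaks[:nmodes])
```

Applied to a time series containing a fundamental and two overtones, the three returned frequencies would exhibit the $2:3:4$ ratio discussed in the text.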
\begin{figure*} \centering {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_kick2_187.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_kick2_187.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_kick2_612.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_kick2_612.png}} \caption{Left column: Rest-mass density after $60$ and $180\,{\rm d}$ for a recoil velocity $V_{\rm k} = 500\,{\rm km/s}$ applied to model \texttt{L.00}. The scale is logarithmic and expressed in ${\rm cgs}$ units. Right column: Shock structure as presented in Fig.~\ref{fig1} for the same panels in the left column. Once again we remark that it is very hard to locate a shock by simply looking at the density distribution, especially when the shocks are very weak (here we have used $\chi=0$ in Eq.~\ref{condition2}). Note also that the temperature distribution is inversely proportional to that of the density (not shown here).} \label{fig7} \end{figure*} Overall, the dynamics observed for model \texttt{S.00} suggests that transient oscillating phenomena may exist in the post-merger phase of SMBBHs. In this case, the occurrence of QPOs in the accretion, and thus in the luminosity of potential EM counterparts, followed by the development of non-axisymmetric instabilities, would be a unique and convincing signature that a SMBBH merger with small recoil velocities has taken place. \subsubsection{Accretion rates} As originally pointed out by~\citet{Kozlowski1978}, in non-Keplerian discs no viscosity is needed in order to support accretion in the vicinity of the cusp. Figure~\ref{fig3} reports the baryon mass accretion rate measured at $r=r_{\rm{min}}$ for different values of the black-hole spin parameter in the small-size models, \textit{i.e.}~ \texttt{S.00}--\texttt{S.99}, and provides further support to the interpretation of the shock dynamics discussed above.
In particular, it is easy to realize that, because of the sudden mass loss and hence of the reduced gravitational attraction, the disc reacts to the excess of angular momentum by expanding. This effect only lasts for a couple of orbital periods after the merger and leads to the large decrease in mass accretion rate shown in Fig.~\ref{fig3} for $t \lesssim 0.6\,{\rm d}$. Subsequently, as the effect of the kick velocity extends to regions of the flow with smaller orbital velocities and becomes dominant, non-axisymmetric density structures form and the perturbed disc starts filling the low-density central cavity while increasing the accretion rate. This is reflected by the large increase in the mass accretion rate shown in Fig.~\ref{fig3} for $t \gtrsim 0.6\,{\rm d}$, which is essentially independent of the black-hole spin. \begin{figure*} \centering {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_kick4_189.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_kick4_189.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/rho_kick4_574.png}} {\includegraphics[angle=0,width=8.3cm,height=7.5cm]{./figures/shock_kick4_574.png}} \caption{The same as in Fig.~\ref{fig7} but for a recoil velocity $V_{\rm k}= 3000\,{\rm km/s}$. Note that the spiral-shock structure is never present and that the inner cavity is rapidly filled by accreting gas. In addition, an oblique shock enclosing a low-density region is formed in the inner parts of the flow. } \label{fig8} \end{figure*} Since our treatment does not include the compensating effect of the radiation drag exerted by the photons on the accreting matter, the accretion rate increases undisturbed, reaching values that are $\sim 6$ orders of magnitude above the Eddington limit. After about $6$ orbital periods ($\approx 1\,{\rm d}$) $\dot M$ saturates, and after $\sim 5\,{\rm d}$ all of the models have lost $\sim 32\%$ of their mass.
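For reference, the Eddington values against which these accretion rates are compared follow from elementary arithmetic; the sketch below is ours, and the radiative efficiency $\eta=0.1$ is an illustrative assumption rather than a number taken from the simulations.

```python
import math

# physical constants in cgs units
G = 6.674e-8          # gravitational constant
C = 2.998e10          # speed of light
M_P = 1.673e-24       # proton mass
SIGMA_T = 6.652e-25   # Thomson cross section
M_SUN = 1.989e33      # solar mass
YR = 3.156e7          # seconds per year

def eddington_luminosity(m_bh):
    """L_Edd = 4 pi G M m_p c / sigma_T, in erg/s (m_bh in grams)."""
    return 4.0 * math.pi * G * m_bh * M_P * C / SIGMA_T

def eddington_mdot(m_bh, eta=0.1):
    """Accretion rate whose radiated power (efficiency eta, assumed here)
    equals L_Edd, expressed in solar masses per year."""
    return eddington_luminosity(m_bh) / (eta * C**2) / M_SUN * YR
```

For a black hole of $M\simeq 10^6 M_\odot$ this gives $L_{\rm Edd}\simeq 1.3\times 10^{44}\,{\rm erg/s}$ and $\dot M_{\rm Edd}\simeq 2\times 10^{-2}\,M_\odot/{\rm yr}$ (for $\eta=0.1$), which puts the super-Eddington rates quoted in the text in perspective.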
Fig.~\ref{fig3} also shows that the spin of the black hole has little influence on the dynamics of the disc, although not as little as inferred from Fig.~13 of~\citet{Megevand2009}. In particular, we have found that after $\sim 3\,{\rm d}$ the accretion rate slightly decreases when increasing the spin parameter (see the inset of Fig.~\ref{fig3}), so that $\dot{M}_{\vert a=0}\sim 2.5\, \dot{M}_{\vert a=0.99}$ after $\sim 5\,{\rm d}$ of evolution. Interestingly, this dependence of the accretion rate on the spin of the black hole is rather generic and has been found also in other simulations (not reported here) where no perturbation of the flow is introduced. Of course, the effect of an increasingly large kick velocity on the disc dynamics is much more pronounced, as emerges from Fig.~\ref{fig5}, which reports the accretion rate for different values of $V_{\rm k}$. As already found by~\citet{Megevand2009}, larger recoil velocities make the burst in the accretion rate occur earlier and, simultaneously, produce larger absolute values of $\dot M$ at the burst. After the initial burst, however, and no later than $\sim 2$ days, the accretion rates become nearly independent of the recoil velocity. The differences in $\dot M$ are illustrated in the inset of Fig.~\ref{fig5}, which reports the relative baryon mass accreted after $3\,{\rm d}$ and shows a variation of $\sim 50\%$ at most. It is also interesting to note that there is no clear imprint of the spiral-shock dynamics on the accretion rate. This is due to the fact that the spiral shocks essentially redistribute angular momentum and thus modify the disc structure and dynamics far from the black hole. \subsection{Large-size model} \label{Large_size_models} We next switch to discussing the dynamics of the large-size model \texttt{L.00} for different values of the recoil velocity. We recall that there are at least two different reasons to consider this second class of models.
The first one is that they reflect the expectations of a circumbinary disc: they are quasi-Keplerian, extended and at a large distance from the binary. The second one is that, by having a lower density, \textit{i.e.}~ $\rho_{{\rm c}}=1.38\times 10^{-10}{\rm g/cm^3}$, and hence a lower temperature, \textit{i.e.}~ $T_{{\rm c}}=8.7\times 10^6 K$, they lead to much more reasonable values of the recoil-enhanced luminosity. In what follows we briefly review the overall dynamics and then concentrate on how to compute a realistic estimate for the luminosity. \subsubsection{Shock Dynamics} The overall dynamics of the large-size models is qualitatively similar to that of the smaller counterparts, but with three important differences. The first one is a much weaker dependence of the dynamics on the recoil velocity. This is essentially due to the fact that the inner edge of the discs is so far from the recoiling black hole that only extremely large (and unrealistic) values of the recoil velocity induce a significant modification of the orbital velocity. This is shown in Fig.~\ref{fig7}, which reports in the left column the rest-mass density after $60$ and $180\,{\rm d}$ for a recoil velocity $V_{\rm k} = 500\,{\rm km/s}$ applied to model \texttt{L.00}. The right column shows instead the corresponding shock structure with the same notation used in Fig.~\ref{fig1} and highlights that a clear spiral structure is lost already after $180\,{\rm d}$, which now corresponds to almost $18$ orbital periods. Note that the temperature distribution is inversely proportional to that of the density (not shown in Fig.~\ref{fig7}) and that the central region is filled only with the atmosphere fluid and thus appears as white in the left column. However, several small shocks are produced in this cavity and these are clearly revealed by the shock-detector images in the right column, which report the central region as dark; this is just an artifact that does not have a dynamical impact.
The behaviour in Fig.~\ref{fig7} should also be contrasted with the spiral-shock structure of model \texttt{S.00}, which instead persisted intact after almost $25$ orbital periods (\textit{cf.}~ bottom right panel of Fig.~\ref{fig1}). This behaviour, and in particular the rapid disappearance of the spiral-shock structure, is even more pronounced as the recoil velocity is increased to $V_{\rm k} = 1000\,{\rm km/s}$ (not shown here). The second difference is that for sufficiently large recoils, namely for $V_{\rm k}\geq 2000 \,{\rm km/s}$, the initial cavity between the black-hole horizon and the inner edge of the disc can be filled rapidly by infalling material. This is evident when looking at Fig.~\ref{fig8}, which reports the rest-mass density and shock structure after $60$ and $180\,{\rm d}$ for a recoil velocity $V_{\rm k} = 3000\,{\rm km/s}$ applied to model \texttt{L.00}. When contrasted with Fig.~\ref{fig7}, which has the same spatial extent, it is easy to notice that already after $\sim 60\,{\rm d}$, or $\sim 5$ orbital revolutions, the central cavity is filled with high-density material, some of which extends right onto the black hole. Similarly to what was already seen for smaller recoils, the right column of Fig.~\ref{fig8} shows that in this case the spiral structure is not present, although spatially extended shocks are formed both in the inner regions and in the outer parts of the disc. Note that the velocity jump at the shocks is again rather small, being at most $\Delta v\sim 2.5\times 10^{-4}$, hence insufficient to accelerate particles through the various acceleration mechanisms involving shock waves. Interestingly, because the black hole is moving at very large velocities in an ambient fluid which also has a non-negligible angular momentum, a low-density region which resembles a horn-shaped ``cavity'' is produced in the downstream part of the flow when $V_{\rm k}= 3000 \,{\rm km/s}$ at $t=180\,{\rm d}$ after the merger.
This cavity is shown in Fig.~\ref{fig12} and its orientation is directly related to the direction of the recoil velocity. A simple change of sign in the recoil velocity, in fact, would rotate the cavity by 180 degrees around the black hole (not shown in Fig.~\ref{fig12}). The formation of such a cavity is noticeable also in the simulations of~\citet{Rossi2010}, where however it is not discussed. The cavity leads to a quasi-periodic variation of the accretion rate as clumps of matter in the downstream part of the flow enter the cavity and stream onto the black hole (\textit{cf.}~ the oscillations of ${\dot M}$ in Fig.~\ref{fig6} after $t\sim 200\,{\rm d}$ for $V_{\rm K}=3000\,{\rm km/s}$). The generation of this flow pattern has an interest of its own, being a non-trivial variant of the Bondi-Hoyle accretion flow onto a moving black hole, and will be investigated in greater detail in a separate work. The third and final difference is really a combination of the phenomenology discussed above as it manifests itself in terms of the accretion rate. This is shown in Fig.~\ref{fig6}, which reports the mass accretion rate at $r=r_{\rm{min}}$ for different values of the kick velocity in the large-size model \texttt{L.00}. As already discussed with Fig.~\ref{fig5} for the small-size models, also in these larger discs the accretion rate shows a rapid increase when the black hole ``meets'' the disc, reaching values $\dot M \simeq 10 M_\odot/{\rm yr}$ that are well above the Eddington one because of the absence of the radiation-drag contribution. Differently from the smaller discs, however, the lag between the merger and the increase of the accretion rate does not scale linearly with the recoil velocity, but rather increases nonlinearly as the recoil velocity is decreased.
This is clearly shown in Fig.~\ref{fig6}, where the accretion rate jumps to high values after $\sim 35\,{\rm d}$ for $V_{\rm k}=3000\,{\rm km/s}$, after $\sim 100\,{\rm d}$ for $V_{\rm k}=2000\,{\rm km/s}$ and after almost one year, \textit{i.e.}~ $\sim 330\,{\rm d}$, for $V_{\rm k}=1000\,{\rm km/s}$. The reason for this nonlinear response of the disc lies in the fact that for smaller recoil velocities the time $\tau_{\rm cross}$ required by the black hole to cross the initial cavity will be much larger than the typical orbital revolution time. As an example, for $V_{\rm k}=1000\,{\rm km/s}$ the timescale ratio is $\tau_{\rm cross}/\tau_{\rm c} \sim 33$, so that the disc will have sufficient time to readjust itself to the new gravitational field and thus redistribute its orbital angular momentum. In practice, the inner edge of the disc will move to larger radii as a result of the mass loss and of the varied potential, thereby delaying nonlinearly the contact with the black hole and hence the steep increase in the accretion rate. \begin{figure} {\includegraphics[angle=0,width=8.75cm,height=7.91cm]{./figures/rho_focuss_positive_x_kick.png}} \caption{Magnification of the central region of the rest-mass density in model \texttt{L.00} after $180\,{\rm d}$ and for a recoil velocity $V_{\rm k}=3000\,{\rm km/s}$. Note that the colormap is slightly different from the one in Fig.~\ref{fig8} to highlight the presence of a cavity.} \label{fig12} \end{figure} As a final remark we note that all of the phenomenology discussed here for the model \texttt{L.00} has been found also for a large-size model around a rapidly spinning black hole, namely \texttt{L.90}. Because the overall differences are minute and of the order of a few percent at most, they will not be discussed here.
The reasons behind these similarities are rather obvious: the large-size discs have inner radii that are too far from the black hole to be sensitive to the spin-induced corrections, which decay much more rapidly, as $1/r^3$. \begin{figure} {\includegraphics[angle=0,width=9.0cm]{./figures/accretion_versus_kick_L.pdf}} \vspace{-2.0cm} \caption{Mass accretion rate at $r=r_{\rm{min}}$ for different values of the kick velocity in the large-size model \texttt{L.00}. } \label{fig6} \end{figure} \subsubsection{EM Luminosities} Because of the relatively high temperature of the gas and of the generation of a shock pattern, thermal bremsstrahlung is thought to be an efficient emission mechanism through which circumbinary discs may become visible in the electromagnetic spectrum \citep{Megevand2009, Corrales2009, Bode:2009mt, Anderson2009}. However, thermal-bremsstrahlung emission from circumbinary discs is affected by a serious problem which has so far been underestimated or not sufficiently emphasized. This has to do with the fact that the bremsstrahlung cooling time is too short~\citep{Corrales2009} or, stated differently, that the internal energy budget of the emitting gas is not large enough to allow the bremsstrahlung emission to last for more than a few seconds. This can be easily estimated as $t_{\rm {cool}}=E_{\rm int}/L_{{\rm BR}}$, with $E_{\rm int}$ and $L_{{\rm BR}}$ obtained from (\ref{int_enrgy}) and~(\ref{brem_geo}), respectively. For the large-size model \texttt{L.00} we have $E_{\rm int}\sim 3.4\times10^{50}{\rm erg}$ and $L_{{\rm BR}}\sim 2.8\times10^{49}{\rm erg/s}$ at time $t=0$, and this estimate remains of the same order of magnitude during the evolution. This yields $t_{\rm {cool}} \simeq 12\,{\rm s}$. The situation is even worse if we consider the transition to the relativistic regime.
In this case, in fact, not only is the bremsstrahlung emissivity increased by a factor\footnote{ It should be remarked, however, that when the electrons become relativistic, \textit{i.e.}~ for $T\geq 5.9\times 10^9\,{\rm K}$, other emission mechanisms, such as inverse Compton or synchrotron (if a magnetic field is also present), are generally more efficient.} $\propto [1+4.4\, T/(10^{10}\,{\rm K})]$~\citep{Rybicki_Lightman1986}, but also the collisions between particles of the same species start contributing significantly to the bremsstrahlung emission~\citep{Svensson1982} through radiation in moments other than the electric dipole (which is strictly zero for particles of the same species~\citep{Krolik1999}). \begin{figure} {\includegraphics[angle=0,width=9.0cm]{./figures/iso_luminosity_compare_gamma.pdf}} \vspace{-2.0cm} \caption{Top panel: Luminosity computed in the isothermal evolution $L_{\rm isot}$ for different values of the kick velocity for model \texttt{L.00} and for a polytropic index $\gamma=4/3$. Note the presence of a peak at about $\sim 20\,{\rm d}$ after the merger of a binary with total mass $M\simeq 10^6 M_\odot$, and of a persistent luminosity for several days at values which are a factor of a few smaller. Bottom panel: Comparison of $L_{\rm isot}$ for model \texttt{L.00} when computed with a polytropic index $\gamma=4/3$ (thick lines) or $\gamma=5/3$ (thin lines). The comparison is made for two reference recoil velocities and shows that the results are very similar, although a stiffer EOS leads to slightly larger luminosities.} \label{fig10} \end{figure} Of course, there are also other factors that can work in favour of a bremsstrahlung emission and which we have not taken into account. A first one is that we have neglected the thermal bremsstrahlung absorption, which is likely to enhance significantly the bremsstrahlung cooling time by acting as a source of additional internal energy.
Moreover, it is also possible that the spiral shock originating from the very central region dissipates considerably as it propagates outwards, hence confining the bulk of the bremsstrahlung luminosity to within a very small portion of the disc (we recall that the bremsstrahlung luminosity is proportional to the volume integral over the emitting source). Overall, while we cannot rule out thermal bremsstrahlung as an emission mechanism from the circumbinary disc, it is also evident to us that the luminosity estimates made so far without a proper treatment of radiation transfer are excessively optimistic. A second possible estimate of the luminosity is given by the accretion-powered luminosity, $L_{\rm acc}=\eta \dot M c^2$, where $\eta$ is the radiative efficiency. However, lacking any treatment of radiation-pressure effects, such an estimate would only provide misleading conclusions and therefore we will not use it hereafter. A third and possibly more accurate estimate of the luminosity can be made by assuming that all the changes in the temperature that are due to a local compression will be dissipated as radiation. This idea, proposed in Newtonian physics by~\citet{Corrales2009}, can be summarized and implemented in a general relativistic context as detailed below. Consider the evolution of the disc with an equation of state $p(T)=\rho k_b T/m_p$ and a specific internal energy given by $\epsilon(T)=k_b T/[(\gamma-1) m_p] = \frac{3}{2}p/\rho$, where the last equality holds for $\gamma=5/3$. In general, there is no need to evolve the energy equation in an isothermal evolution, since the energy can be computed directly from the temperature and the latter is constant by construction. However, the internal energy can nevertheless be evolved in time with the only aim of computing the difference $\rho[\epsilon - \epsilon(T)]$, which is then assumed to be radiated instantaneously.
The relativistic equation for the evolution of the total internal energy density $e\equiv\rho(1+\epsilon)$ is~\citep{Anile_book} \begin{equation} \label{internal_energy} u^\mu\nabla_\mu e + (e+p)\Theta=0, \end{equation} where $\Theta\equiv\nabla_\mu u^\mu$ is the expansion of the fluid. The continuity equation $\nabla_\mu(\rho u^\mu)=0$ can then be used to rewrite Eq.~(\ref{internal_energy}) as \begin{eqnarray} \label{internal_energy1} \partial_t(\sqrt{\gamma}W\rho\epsilon)+\partial_i[\sqrt{\gamma}\rho\epsilon W(\alpha v^i-\beta^i)] = && \nonumber \\ && \hskip -2.5cm -p\partial_t(\sqrt{\gamma}W) -p\partial_i(\alpha\sqrt{\gamma}u^i) \ . \end{eqnarray} One aspect to note is that Eq.~(\ref{internal_energy1}) is not written in a conservative form because of the derivatives on the right hand side acting on the flow variables. While this is not ideal within our formulation of the hydrodynamics equations, the modifications are minimal. Indeed, since the Lorentz factor does not change significantly during the evolution, it is reasonable to neglect the term $\propto \partial_t(\sqrt{\gamma}W)$, while the spatial derivatives of the term $\partial_i(\alpha\sqrt{\gamma}u^i)$ can be treated with standard finite-difference methods without a significant loss of accuracy across the discontinuities. In practice we have assumed an initial temperature for the disc which is uniform in space and set it to be $T_0=\frac{1}{2} T_{\rm c}$, where $T_{\rm c}$ is the maximum temperature at the center of the disc (\textit{cf.}~ Table~\ref{tab1}). An estimate of the luminosity is then trivially computed by performing at each timestep a volume integration of the difference $\rho[\epsilon - \epsilon(T)]$ and by dividing it by the simulation timestep. Finally, we reset the specific internal energy to $\epsilon = \epsilon(T_0)$, so as to guarantee that the evolution is effectively isothermal. 
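The per-timestep bookkeeping described above can be sketched in a few lines. The snippet below is only an illustration of the procedure, not the actual simulation code (whose variables live on a curvilinear numerical grid); the array names and the uniform toy grid are purely hypothetical, and, following the text, only the positive (compressive) contributions are retained, so the result is an upper limit.

```python
import numpy as np

# Physical constants (cgs)
K_B = 1.380649e-16   # Boltzmann constant [erg/K]
M_P = 1.672622e-24   # proton mass [g]

def isothermal_luminosity(rho, eps, dV, dt, T0, gamma=5.0/3.0):
    """Upper-limit luminosity radiated during one step of an
    'isothermal' evolution: any excess of the specific internal
    energy eps over the isothermal value eps(T0) is assumed to be
    radiated instantaneously; negative (rarefaction) contributions
    are neglected, as in the estimates discussed in the text."""
    eps_T0 = K_B * T0 / ((gamma - 1.0) * M_P)   # eps(T0) = k_B T0 / [(gamma - 1) m_p]
    dE = rho * (eps - eps_T0) * dV              # energy excess per cell [erg]
    L = np.sum(np.clip(dE, 0.0, None)) / dt     # keep only compressive heating
    eps[:] = eps_T0                             # reset, so the evolution stays isothermal
    return L
```

Resetting `eps` in place after each call mirrors the reset of the specific internal energy to $\epsilon(T_0)$ described above.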
\begin{figure*} {\includegraphics[angle=0,width=9.0cm]{./figures/compare_evolutions.pdf}} \hskip 0.5cm {\includegraphics[angle=0,width=9.0cm]{./figures/accretion_versus_kick_L_iso.pdf}} \vspace{-2.0cm} \caption{Left panel: Comparison of the rest-mass density along the $\phi=0$ direction as obtained with the standard evolution of the energy equation (red solid line) and with the isothermal evolution (blue dashed line), at two different times as shown in the two panels. The data refer to model \texttt{L.00} with a recoil of $1000\,{\rm km/s}$. Right panel: Mass accretion rate at $r=r_{\rm{min}}$ for the isothermal evolution and different values of the kick velocity in the large-size model \texttt{L.00}. This panel should be compared with Fig.~\ref{fig6}, which refers to an evolution of the energy equation.} \label{fig9} \end{figure*} The top panel of Fig.~\ref{fig10} shows the luminosity computed in this way for model \texttt{L.00} with a polytropic index $\gamma=4/3$ and for different values of the recoil velocity. Following~\citet{Corrales2009}, the values reported have been computed by neglecting the negative contributions to the luminosity that are produced in regions experiencing rarefactions. Because these negative contributions, when accounted for, typically yield values that are one order of magnitude smaller, the values in Fig.~\ref{fig10} should be taken as upper limits to the emitted luminosity~\citep[see][]{Corrales2009}. Clearly, the evolution of the emitted energy has a peak that is larger for stronger recoils and that appears at $t\sim 33$, $18$, and $7\,{\rm d}$ after the merger for $V_{\rm k}=300$, $1000$, and $3000\,{\rm km/s}$, respectively. While the peaks are the consequence of the strong shocks that are produced in the inner parts of the disc as the latter approaches the black hole, the asymptotic values of the isothermal luminosity are instead produced by the local compressions in the disc.
As such, the peaks in the luminosity are not related to the encounter of the black hole with the disc and therefore they are not correlated with the increase in the mass accretion rate, which in general takes place at later times (\textit{cf.}~ right panel of Fig.~\ref{fig9}). The bottom panel of Fig.~\ref{fig10}, on the other hand, shows a comparison of $L_{\rm isot}$ for model \texttt{L.00} when computed with a polytropic index $\gamma=4/3$ (thick lines) or $\gamma=5/3$ (thin lines). The comparison is made for two reference recoil velocities of $V_{\rm k}=300\,{\rm km/s}$ and $V_{\rm k}=1000\,{\rm km/s}$ and it shows that the results are very similar, although a stiffer EOS leads to slightly larger luminosities. Overall, our estimates of the luminosity computed within the isothermal-evolution approximation confirm those by~\citet{Corrales2009}, even though the temperature of our large-size model is one order of magnitude larger than that reported in Fig.~7 of~\citet{Corrales2009}. While we believe that these estimates of the luminosity are the most reasonable ones that can be obtained with a code that is intrinsically unable to account for radiation losses, we should also stress that the isothermal evolution by itself provides a less reliable description of the overall dynamics. This is evident, for example, from Fig.~\ref{fig9}, whose left panel offers a comparison of the rest-mass density profiles along $\phi=0$ as obtained with the standard evolution of the energy equation (red solid line) and with the isothermal evolution (blue dashed line) for the large-size model \texttt{L.00} and a recoil of $V_{\rm k}=1000\,{\rm km/s}$. Note that the isothermal evolution tends to increase the density gradients, especially during the initial phases, which is also when the peak in the luminosity appears in the light curves.
Also shown in the right panel of Fig.~\ref{fig9} is the mass accretion rate at $r=r_{\rm{min}}$ for the isothermal evolution and different values of the kick velocity. This panel shows that since the discs are more sensitive to compressions, their response to the recoil is different and, in particular, takes place earlier than in the full-evolution case (\textit{cf.}~ with Fig.~\ref{fig6}). Furthermore, with the exception of the very large kick, the mass accretion rate does not show a very large jump, but rather a first small jump followed by a smooth increase to the final asymptotic behaviour, which is reached at times comparable to those of the full evolution. This is clearly the case for $V_{\rm k}=1000\,{\rm km/s}$ and is due to the fact that accretion starts earlier for matter whose energy has been decreased by the radiative losses. In conclusion, and in spite of the caveats made above, we believe that luminosities as large as $L \simeq 10^{43} \ {\rm erg/s}$ should be expected at about $\sim 30\,{\rm d}$ after the merger of a binary with total mass $M\simeq 10^6 M_\odot$, and that these luminosities should persist for several days at values which are a factor of a few smaller. \section{Conclusions} \label{Conclusions} We have presented the results of two-dimensional general relativistic numerical simulations of small \textit{and} extended circumbinary discs in the post-merger phase, when the disc reacts to the mass loss of the central black hole and to its recoil velocity. Our analysis benefitted from being able to capture accurately the dynamics of the perturbed disc in the relativistic regime, thus allowing us to investigate the dependence of the accretion rate on the black-hole spin and on the kick velocity. Furthermore, by considering discs that are quasi-Keplerian, extended and at a large distance from the binary, we were able to consider realistic scenarios even in the general relativistic framework.
Another important aspect of our work is the use of a novel and accurate technique to construct a ``shock detector'' and hence to determine where, within the flow, the shocks produced by the recoil are located. This, in turn, has allowed us to assess under what conditions a spiral-shock pattern can develop, produce a variability in the accretion rate and, hence, in the luminosity. Our relativistic shock detector (for which we also present a Newtonian equivalent in Appendix~\ref{appendixA}) is based on the analysis of the initial states of the Riemann problem solved at each cell interface and can therefore determine the location of the shock with the same resolution as that of the spatial grid, revealing that the previously proposed criteria for the occurrence of the shock are often inaccurate. Overall, we can confirm within a general relativistic regime many of the results found previously in Newtonian or pseudo-Newtonian gravity. More specifically, we find that for discs that are sufficiently small and close to the black hole, a regular spiral shock develops as a result of the recoiling black hole. The strength, shape and persistence of the shocks, however, depend sensitively both on the size of the tori and on the intensity of the recoil. As a result, while the spiral shock is stable over many orbital periods in the case of small discs subject to small recoils, it never develops or is rapidly destroyed in discs that are large and subject to large recoil velocities. It is worth noting that the typical velocity jumps at the shocks are $\Delta v \lesssim 2.5\times 10^{-4}$, even with a kick velocity $V_{\rm k}=3000 \ {\rm km}/{\rm s}$. It is therefore possible that such shocks may be damped by dissipative viscous processes or radiative losses. Besides the interesting shock properties described above, we also found that the disc dynamics is only very weakly dependent on the black-hole spin.
The latter influences only the small-size tori by modifying the accretion rate and by leading to smaller accreted masses per unit time for more rapidly spinning black holes. Finally, we found that even in the limit of a vanishing kick velocity, and as long as a mass loss is present, the disc goes through a phase of regular oscillations characterized by the excitation of the $p$ modes of the disc; this is then followed by the appearance of a spiral-shock pattern generated by non-axisymmetric instabilities affecting the disc. This opens the door to the possibility that quasi-periodic oscillations are observed in the post-merger phase of SMBBHs. Computing the EM counterpart to the merger event represents of course a fundamental aspect of our investigation. In contrast with other works, however, we have questioned the estimates of the bremsstrahlung luminosity when computed without properly taking into account the radiation transfer. The energy reservoir available for the EM emission is, in fact, too small to yield anything but unrealistically short cooling times. While we cannot rule out thermal bremsstrahlung as the main EM emission mechanism, we believe that the bremsstrahlung-luminosity estimates made so far without a proper treatment of radiation transfer are excessively optimistic. A somewhat more realistic estimate of the emitted luminosity can be obtained by assuming that the internal-energy enhancements due to local compressions in the perturbed disc are immediately radiated away, \textit{i.e.}~ by considering an ``isothermal'' evolution like the one recently investigated in Newtonian physics by~\citet{Corrales2009}.
In this case, and despite the fact that the isothermal evolution tends to enhance the compressibility of the fluid, we find that the energy emitted can reach a peak value above $L \simeq 10^{43} \ {\rm erg/s}$ at about $\sim 30\,{\rm d}$ after the merger of a binary with total mass $M\simeq 10^6 M_\odot$ and persist for several days at values which are a factor of a few smaller. If confirmed by more sophisticated calculations, such a signal could represent an interesting EM counterpart of the merger of a binary black-hole system. As a final remark we note that while a rather robust picture is emerging from the collective work done so far on the post-merger dynamics of the circumbinary disc around a SMBBH, much remains to be done to compute realistically the resulting EM emission. Important improvements to the treatment considered here must include the presence of a magnetic field, a proper treatment of the radiation transfer and the extension to three-dimensional calculations. All of these will be the focus of our future work on this subject. \begin{acknowledgements} We are grateful to Marek Abramowicz and Constanze Roedig for useful discussions. The computations were performed on the Damiana cluster at the AEI and on the IBM/SP6 of CINECA (Italy) through the ``INAF-CINECA'' agreement 2008-2010. This work was supported in part by the DFG grant SFB/Transregio~7. \end{acknowledgements}
\section{Introduction} \label{sec:Introduction} Self-similar solutions to the hydrodynamic equations describing adiabatic one-dimensional flows of an ideal gas are of interest for mainly two reasons. First, the nonlinear partial differential hydrodynamic equations are reduced for self-similar flows to ordinary differential equations, which greatly simplifies the mathematical problem of solving the equations and in certain cases allows one to find analytic solutions. Second, self-similar solutions often describe the limiting behavior approached asymptotically by flows which take place over a characteristic scale, $R$, which diverges or tends to zero \citep[see][for reviews]{SedovBook,ZeldovichRaizer,BarenblattBook}. It is reasonable to assume that in the limit $R\rightarrow\infty(0)$ the flow becomes independent of any characteristic length scale. Using dimensional arguments, it is possible to show that in this case the flow fields must be of the self-similar form \citep{ZeldovichRaizer,WaxmanShvarts10} \begin{equation}\label{eq:ss_scaling} u(r,t)=\dot{R}\xi U(\xi),\quad c(r,t)=\dot{R}\xi C(\xi),\quad \rho(r,t)=BR^\epsilon G(\xi), \end{equation} where $u$, $c$, and $\rho$ are the fluid velocity, sound speed, and density, respectively (the pressure is given by $p=\rho c^{2}/\gamma$), and \begin{equation}\label{eq:Rdot} \dot{R}=AR^\delta, \quad \xi(r,t)=r/R(t) \end{equation} \citep[for a somewhat different approach to self-similarity, based on Lie group methods, see][]{Coggeshall1986lgi,Coggeshall1991ash,Coggeshall1992gis}. 
For a self-similar solution of the form given in Equations~\eqref{eq:ss_scaling} and~\eqref{eq:Rdot}, the hydrodynamic equations, Equations~(\ref{eq:hydro_eq}), are replaced with a single ordinary differential equation, Equation~(\ref{eq:dUdC}), \begin{equation} \frac{dU}{dC}=\frac{\Delta_{1}(U,C)}{\Delta_{2}(U,C)}, \nonumber \end{equation} and one quadrature, Equation~(\ref{eq:quadrature}), \begin{equation} \frac{d\ln\xi}{dU}=\frac{\Delta(U,C)}{\Delta_{1}(U,C)}\qquad {\rm or} \qquad \frac{d\ln\xi}{dC}=\frac{\Delta(U,C)}{\Delta_{2}(U,C)}. \nonumber \end{equation} $\Delta$, $\Delta_1$, and $\Delta_2$ are given by Equations~(\ref{eq:deltas}). As illustrated in Section~\ref{sec:gap} \citep[see also][]{Guderley42,Meyer-ter-Vehn82,WaxmanShvarts93}, many of the properties of self-similar flows may be inferred by analyzing the contours in the $(U,C)$-plane determined by Equation~(\ref{eq:dUdC}). In this paper, we revisit the ``strong explosion problem'', which is one of the most familiar problems where asymptotic self-similarity is encountered. Consider the blast wave produced by the deposition of energy $E$ within a region of characteristic size $d$ at the center of an initially cold ($p=0$ at $r>d$) gas sphere with initial density $\rho_0=K r^{-\omega}$ (at $r>d$). As the shock radius $R$ diverges, we expect the flow to approach a self-similar solution of the form given by Equations~(\ref{eq:ss_scaling}) and~(\ref{eq:Rdot}). The asymptotic flow is described by the Sedov--von Neumann--Taylor (ST) solutions \citep{Sedov46,vonNeumann47,Taylor50} for $\omega<3$, and by the solutions derived by Waxman \& Shvarts \citep[WS;][]{WaxmanShvarts93,WaxmanShvarts10} for $\omega>3$. The ST solutions describe decelerating shocks ($\delta=(\omega-3)/2<0$) and are of the ``first-type'', where the similarity exponents, $\delta$ and $\epsilon$, are determined by dimensional considerations.
The WS solutions describe accelerating shocks ($\delta>0$) and are of the ``second-type'', where the similarity exponents are determined by the condition that the solutions must pass through a singular point of Equation~(\ref{eq:dUdC}). \begin{figure} \epsscale{1.2} \plotone{p01.eps} \caption{Self-similar exponent $\delta$ as a function of $\omega$ for $\gamma=4/3$ (dashed line) and $\gamma=5/3$ (solid line). \label{fig:delta-omega-full}} \end{figure} We revisit the strong explosion problem for several reasons. First, there exists a ``gap'' in the $(\gamma,\omega)$-plane, $3\leq\omega\leq\omega_{g}(\gamma)$, where neither the ST nor the WS solutions describe the asymptotic flow \citep[$\omega_{g}$ is increasing with $\gamma$, $\omega_{g}=3$ for $\gamma=1$ and $\omega_{g}\simeq3.26$ for adiabatic index $\gamma=5/3$;][]{WaxmanShvarts93}. Our first goal is to close this ``gap''. We argue in Section~\ref{sec:gap} that second-type solutions should not be required in general to include a sonic point, and that it is sufficient to require the existence of a characteristic line $r_c(t)$, such that the energy in the region $r_c(t)<r<R$ approaches a constant as $R\rightarrow\infty$. We show that the two requirements coincide for $\omega>\omega_g$ and that the latter requirement identifies $\delta=0$ solutions as the asymptotic solutions for $3\leq\omega\leq\omega_{g}$. This result is in agreement with that of \citet{Gruzinov03}, who suggested based on heuristic arguments that the $R\propto t$ solutions are the correct asymptotic solutions in the gap. As we explain in some detail at the end of Section~\ref{sec:gap_solutions}, the validity of the heuristic arguments is not obvious. We use a different reasoning, based on an extension of the analysis of \citet{WaxmanShvarts93}. 
In Section~\ref{sec:numerical solution}, we compare the asymptotic, $R/d\gg1$, behavior of numerical solutions of the hydrodynamic equations, Equations~(\ref{eq:hydro_eq}), to that expected based on the $\delta=0$ self-similar solutions. We show that the convergence to self-similarity is very slow for $\omega\sim3$. Hence, it is difficult to check using numerical solutions whether the flow indeed approaches a $\delta=0$ self-similar behavior as $R\rightarrow\infty$. We show in Section~\ref{sec:mod_slfsim} that in this case the flow may be described by a modified self-similar solution, $d\ln\dot{R}/d\ln R=\delta$ with slowly varying $\delta(R)$, $\eta\equiv d\delta/d\ln R\ll1$, and spatial profiles given by a sum of the self-similar solution corresponding to the instantaneous value of $\delta$ and a self-similar correction linear in $\eta$. The modified self-similar solutions provide an excellent approximation to numerical solutions obtained for $\omega\sim3$ at large $R$, with $\delta\rightarrow0$ (and $\eta\neq0$) for $3\leq\omega\leq\omega_{g}$. The second reason for revisiting the strong explosion problem is that it is of general methodological interest. It demonstrates transitions between (mathematically and physically) different types of solutions as the value of $\omega$ changes, as illustrated in Figure~\ref{fig:delta-omega-full}: as the value of $\omega$ increases above $\omega=3$, a transition occurs between first-type solutions (at $\omega<3$) and second-type solutions \citep[at $\omega>3$;][]{WaxmanShvarts93}; as the value of $\omega$ increases above $\omega=\omega_c(\gamma)$ ($\omega_c\sim8$ for $4/3<\gamma<5/3$), the $\omega<\omega_c$ power-law solutions ($R\propto t^{1/(1-\delta)}$, $\delta<1$) are replaced with exponential solutions ($R\propto e^{t/\tau}$, $\delta=1$) at $\omega=\omega_c$ and with solutions diverging in finite time ($R\propto (-t)^{1/(\delta-1)}$, $\delta>1$) at $\omega>\omega_c$ \citep{WaxmanShvarts10}. 
We show here that the strong explosion problem also exhibits a transition between two sub-types of second-type solutions at $\omega=\omega_{g}(\gamma)$. We argue in Section~\ref{sec:Discussion} that, based on the results presented in this paper, the definition of the two types of self-similar solutions should be somewhat modified and that the family of asymptotic second-type solutions should be expanded. Finally, we note that the propagation of shock waves in steep density gradients is of interest in a wide variety of astrophysical contexts \citep[e.g.][and references therein]{OstrikerMcKee88,KooMcKee90}, such as supernova explosions \citep[e.g.][and references therein]{MM99}. It is worth noting that self-similar solutions for shock propagation in power-law density profiles, $\rho\propto r^{-\omega}$, are useful for describing shock propagation in more general density profiles \citep[e.g.][]{MM99,OrenSari09}, as well as for the general study of shock wave stability \citep[e.g.][]{Goodman90,Chevalier90,Kushnir2005tsd,SWS00}. \section{Self-similar solutions in the ``gap'' region} \label{sec:gap} As explained in Section~\ref{sec:Introduction}, we expect the strong explosion flow to approach a self-similar behavior, of the form given by Equations~(\ref{eq:ss_scaling}) and~(\ref{eq:Rdot}), as $R$ diverges. Since for strong shocks the density just behind the shock wave is a constant factor, $(\gamma+1)/(\gamma-1)$, times the density just ahead of the shock, we must have $\epsilon=-\omega$, and we may choose $B=K$. With this normalization, the Rankine--Hugoniot relations at the shock front determine the boundary conditions for the self-similar solutions to be \citep[e.g.][]{ZeldovichRaizer} \begin{equation}\label{eq:shock_boundary} U(1)=\frac{2}{\gamma+1},\quad C(1)=\frac{\sqrt{2\gamma(\gamma-1)}}{\gamma+1}, \quad G(1)=\frac{\gamma+1}{\gamma-1}. \end{equation} The only parameter of the self-similar solution that remains to be determined is $\delta$. 
\subsection{ST and WS solutions} \label{sec:ST_WS} The self-similar solution, given by Equations~(\ref{eq:ss_scaling}) and~(\ref{eq:Rdot}), depends on two independent dimensional constants, $A$ and $B=K$. In the ST analysis, it is assumed that the second dimensional constant, in addition to $K$, that determines the self-similar solution is $E$. In this case, dimensional considerations imply $R\propto (Et^2/K)^{1/(5-\omega)}$, i.e., \begin{equation}\label{eq:delta_ST} \delta=\delta_{\textrm{ST}}\equiv\frac{\omega-3}{2}. \end{equation} For $\omega<3$ we have $\delta<0$, i.e., decelerating blast waves. The flow properties are qualitatively different in the regimes $\omega<\omega_{\textrm{vac}}\equiv(7-\gamma)/(\gamma+1)$ and $\omega_{\textrm{vac}}<\omega<3$ (see Figure~\ref{fig:UC_curves}). For $\omega<\omega_{\textrm{vac}}$, $U$ tends to $1/\gamma$ and $C$ tends to infinity as $\xi$ tends to zero. For $\omega_{\textrm{vac}}<\omega<3$, the self-similar solution contains an ``evacuated'' region: there exists some finite $\xi_{\rm in}>0$, such that the spatial region $0<\xi<\xi_{\rm in}$ is evacuated ($\rho=0$). The self-similar solution describes the flow for $\xi_{\rm in}\le \xi\le 1$ and is matched to the evacuated region, $\xi<\xi_{\rm in}$, by a weak discontinuity, which lies at $\xi=\xi_{\rm in}$. In this case, $U$ tends to 1 and $C$ tends to 0 as $\xi$ tends to $\xi_{\rm in}$. A detailed discussion of the ST solutions is given by \citet{Korobeinikov1991ppb} and \citet{Book1994}. As explained in detail in \citep{WaxmanShvarts93}, the ST solutions are the correct asymptotic solutions only for $\omega<3$, for which $\delta<0$. For larger values of $\omega$ the mass and energy contained in the self-similar solution are infinite, reflecting the fact that the initial gas mass at $r>d$ diverges for $d\rightarrow0$. 
It was therefore suggested by \citet{WaxmanShvarts93} that for $\omega>3$ the asymptotic solution is given by a self-similar solution only over part of the $(\xi,R)$-plane, bounded by $\xi=1$ and $\xi_c(R)<1$, and by a different solution at $0<\xi<\xi_c(R)$. Since such a solution includes a contact or a weak discontinuity at $\xi_c(R)$, $\xi_c(R)$ must coincide with a characteristic of the self-similar solution. For the self-similar flow, the characteristic lines \begin{equation}\label{eq:characteristics} C_0:\frac{dr_0}{dt}=u,\quad C_\pm: \frac{dr_\pm}{dt}=u\pm c \end{equation} are given by \begin{eqnarray}\label{eq:slfsim_char} C_{0}&:&\frac{d \ln \xi_{0}}{d \ln R}=U(\xi_{0})-1,\nonumber \\ C_{\pm}&:&\frac{d \ln \xi_{\pm}}{d \ln R}=U(\xi_{\pm})\pm C(\xi_{\pm})-1. \end{eqnarray} The directions in which the different characteristics propagate are illustrated in Figure~\ref{fig:UC_curves_zoom}. The physical interpretation of the behavior illustrated in this figure is as follows. The flow just behind the shock is always subsonic: the shock-front point $(U,C)=(U(1),C(1))$ lies above the ``sonic line'' $U+C=1$, i.e., $U(1)+C(1)>1$, which implies that $C_+$ characteristics emerging from points just behind the shock always overtake it. $C_+$ characteristics that do not overtake the shock exist only if the self-similar solution crosses the $U+C=1$ line in the $(U,C)$-plane into the region where $U+C<1$. $C_{0}$ characteristics, however, never overtake the shock and propagate away from it (in $\xi$ space). Requiring $\xi_c$ to coincide with a $C_+$ characteristic that does not overtake the shock therefore implies that the solution must cross the $U+C=1$ line. Since $\Delta=0$ along the sonic line, Equation~(\ref{eq:quadrature}) implies that a physical solution must cross the sonic line at a singular point, $\Delta_1=\Delta_2=0$, as otherwise $U(\xi)$ and/or $C(\xi)$ are not single valued.
This requirement determines the correct value of $\delta$ for the $\omega>3$ asymptotic solutions. \begin{figure} \epsscale{1} \plotone{p0_color.eps} \caption{Different types of the $C(U)$ curves of the solutions of the strong explosion problem, obtained for $\omega$ values in the regimes $\omega<\omega_{\rm vac}=(7-\gamma)/(\gamma+1)$, $\omega_{\rm vac}<\omega<3$, $\omega_g<\omega<\omega_c$, and $\omega_c<\omega$. Also shown is the sonic line, $U+C=1$. The square denotes the strong shock point, Equations~(\ref{eq:shock_boundary}), and the circles denote the singular points $(U,C)=(0,0)$, $(U,C)=(1-\delta,0)$, and $(U,C)=(1,0)$. \label{fig:UC_curves}} \end{figure} \begin{figure} \epsscale{1} \plotone{p00_color.eps} \caption{Zoomed version of Figure~\ref{fig:UC_curves} (using the same line types). Solid arrows indicate the propagation direction of $C_{+}$ characteristics and dashed arrows indicate the propagation direction of $C_{0}$ characteristics. \label{fig:UC_curves_zoom}} \end{figure} The self-similar solutions obtained in this way for $\omega>3$ were analyzed in detail in \citep{WaxmanShvarts93,WaxmanShvarts10}. They describe accelerating blast waves, with $\delta>0$. As $\xi\rightarrow0$ the $C(U)$ curves of these solutions approach the singular point $(U,C)=(1-\delta,0)$ for $\delta<1$, and the singular point $(U,C)=(0,0)$ for $\delta\ge1$ (see Figure~\ref{fig:UC_curves}). Analyzing the behavior of the solutions near these singular points, it was shown that although the mass and energy contained in the self-similar solution are infinite, the mass and energy contained within the region $\xi_c(R)<\xi<1$ approach finite values as $R\rightarrow\infty$, for any $C_+$ characteristic which satisfies $\xi_c(R)\rightarrow0$ as $R\rightarrow\infty$. Moreover, it was shown that $\xi_c(R)R\propto t$ as $R\rightarrow\infty$, implying that the asymptotic flow within the region $0<r<\xi_c(R)R$ is described by the self-similar solution of expansion into vacuum. 
Finally, it was demonstrated by numerical calculations that the asymptotic behavior described above is indeed approached for $R/d\gg1$ \citep{WaxmanShvarts93,WaxmanShvarts10}. The self-similar solutions derived by \citet{WaxmanShvarts93} exist only for $\omega>\omega_g(\gamma)>3$. Within the range $3<\omega<\omega_g(\gamma)$, there is no value of $\delta$ for which the $C(U)$ curve crosses the sonic line at a singular point ($\omega_g(\gamma)>3$ is increasing with $\gamma$, with $\omega_g=3.26$ for $\gamma=5/3$ and $\omega_g\rightarrow3$ for $\gamma\rightarrow1$, see Figure~\ref{fig:delta-omega-full}). Since the ST solutions provide the correct asymptotic solutions only for $\omega<3$, the asymptotic behavior within the (narrow) range of $3<\omega<\omega_g(\gamma)$ is not described by either of the two types of solutions. \subsection{The asymptotic solutions for $\omega$ values within the ``gap''} \label{sec:gap_solutions} As explained in the previous section, the divergence of the energy enclosed in the self-similar solutions for $\omega>3$ suggests that the asymptotic solution should be composed of a self-similar solution describing the flow at $\xi_c(R)<\xi<1$, matched to a different solution at $0<\xi<\xi_c(R)$. In general, $\xi_c$ may be a characteristic line of any type. In the analysis of \citep{WaxmanShvarts93}, it was assumed that $\xi_c$ is a $C_+$ characteristic. This was mainly motivated by the fact that requiring the existence of a $C_+$ characteristic that does not overtake the shock front as $R\rightarrow\infty$ is equivalent to requiring that the solution passes through a sonic point, and it is commonly accepted that the similarity exponents of a second-type solution are determined by the requirement that the solution passes through such a singular point. 
We argue here that in order to determine the similarity exponents it is sufficient to require the existence of any characteristic that does not overtake the shock and for which the energy contained in the self-similar part of the flow, $\xi_c(R)<\xi<1$, does not diverge as $R\rightarrow\infty$. Examining $\omega$ values below, within, and above the ``gap'' we show that for each value of $\omega$ there is only one value of $\delta$ which yields a valid solution: a solution that does not cross the sonic line at a non-singular point and that either contains a finite energy or a characteristic line satisfying the conditions described above. The energy contained in the $\xi_c(R)<\xi<1$ region of a self-similar solution is \begin{eqnarray}\label{eq:E_ss} E_{s}(R)&\equiv&\int_{\xi_c(R)R}^Rdr 4\pi r^2\left(\frac{1}{2}\rho u^2+\frac{1}{\gamma-1}p\right) \nonumber\\ &=&4\pi R^{3-\omega+2\delta}A^2 K \left\{I_{k}[\xi_{c}(R)]+I_{i}[\xi_{c}(R)]\right\}, \end{eqnarray} with \begin{equation}\label{eq:Ik def} I_{k}(\xi)=\int\limits_{\xi}^{1}d\xi'\xi'^{4}G\frac{1}{2}U^{2}, \quad I_{i}(\xi)=\int\limits_{\xi}^{1}d\xi'\xi'^{4}G\frac{1}{\gamma(\gamma-1)}C^{2}. \end{equation} The $I_{k}$ and $I_{i}$ terms give the kinetic and internal energy of the gas, respectively. In order for $E_s(R)$ not to diverge as $R\rightarrow\infty$ we must have \begin{equation}\label{eq:delta_lim} \delta\le\delta_{\textrm{ST}}=\frac{\omega-3}{2}. \end{equation} For $\delta>\delta_{\textrm{ST}}$ the energy contained in any $\xi_1<\xi<\xi_2$ region of the self-similar solution diverges since $R^{3-\omega+2\delta}$ diverges as $R\rightarrow\infty$. For the WS solutions, $\delta<\delta_{\textrm{ST}}$ and $I_{k}(\xi)$ diverges as $\xi\rightarrow0$ in such a manner that the product $R^{3-\omega+2\delta}I_{k}[\xi_{c}(R)]$ tends to a constant as $R$ diverges. 
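The bound of Equation~(\ref{eq:delta_lim}) follows directly from the prefactor $R^{3-\omega+2\delta}$ in Equation~(\ref{eq:E_ss}); a minimal numerical check of this condition (our own sketch, not part of the original derivation):

```python
# Sketch (ours): for the prefactor R^(3 - omega + 2*delta) of E_s(R) in
# Eq. (E_ss) not to diverge as R -> infinity, the exponent must satisfy
# 3 - omega + 2*delta <= 0, i.e. delta <= delta_ST = (omega - 3)/2.

def delta_ST(omega):
    """Sedov-Taylor exponent, the upper bound of Eq. (delta_lim)."""
    return (omega - 3.0) / 2.0

def prefactor_exponent(omega, delta):
    """Exponent of R in the prefactor of E_s(R), Eq. (E_ss)."""
    return 3.0 - omega + 2.0 * delta

for omega in (2.0, 2.8, 3.0, 3.1, 3.5):
    d = delta_ST(omega)
    assert abs(prefactor_exponent(omega, d)) < 1e-12   # marginal case: finite
    assert prefactor_exponent(omega, d + 0.01) > 0.0   # larger delta: diverges
```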
\begin{deluxetable*}{cccc} \tablecaption{Determining $\delta$ Based on the Properties of the Self-similar Solutions \label{tbl:tbl1}} \tablewidth{0pt} \tablehead{ \colhead{Properties of the self-similar solutions} & \colhead{$\omega_{\textrm{vac}}<\omega<3$} & \colhead{$3<\omega<\omega_{g}(\gamma)$} & \colhead{$\omega_{g}(\gamma)<\omega$}} \startdata $C(U)$ crosses the sonic line at a non-singular point & $\delta<\delta_{\textrm{ST}}$ & $\delta<0$ & $\delta<\delta_{\textrm{WS}}$ \\ \hline $C(U)$ crosses the sonic line at a singular point and & & & \\ $E_s(R)$ does not diverge as $R\rightarrow\infty$ & ... & ... & $\delta=\delta_{\textrm{WS}}$ \\ \hline $C(U)$ terminates at $(U,C)=(1,0)$ and & & & \\ $E_s(R)$ does not diverge as $R\rightarrow\infty$ & $\delta=\delta_{\textrm{ST}}$ & $\delta=0$ & ... \\ \hline $C(U)$ terminates at $(U,C)=(1,0)$ and & & & \\ $E_s(R)$ diverges as $R\rightarrow\infty$ & $\delta>\delta_{\textrm{ST}}$ & $\delta>0$ & $\delta>\delta_{\textrm{WS}}$ \\ \enddata \label{table} \end{deluxetable*} Let us consider the $C(U)$ curves, the behavior of characteristic lines, and the behavior of $E_s(R)$ for self-similar solutions with $\epsilon=-\omega$, that satisfy the strong shock boundary conditions, Equations~(\ref{eq:shock_boundary}). We examine $\omega$ values below the gap, $\omega_{\textrm{vac}}<\omega<3$, within the gap, and above the gap, $\omega_g<\omega<\omega_c$. Table~\ref{table} summarizes the relevant properties of the solutions obtained at the different $\omega$ regions for different values of $\delta$. The derivation of these properties is described below. First, we consider the properties of the $C(U)$ curves by numerically investigating the solutions of Equation~(\ref{eq:dUdC}). We find that for $\delta$ values smaller than a critical value, $\delta_*(\omega)$, the $C(U)$ curves starting at the strong shock point cross the sonic line at a non-singular point. 
We find that $\delta_*=\delta_{\textrm{ST}}$ for $\omega_{\textrm{vac}}<\omega<3$, $\delta_*=0$ for $3<\omega<\omega_g$, and $\delta_*=\delta_{\textrm{WS}}$ for $\omega_g<\omega<\omega_c$. As explained above, the self-similar solutions obtained for $\delta<\delta_*(\omega)$ are not physical. For $\omega_g<\omega$ and $\delta=\delta_*(\omega)=\delta_{\textrm{WS}}$, the $C(U)$ curves cross the sonic line at a singular point. Finally, for other values of $\delta$, $\delta\ge\delta_*(\omega)$ for $3<\omega<\omega_g$ and $\delta>\delta_*(\omega)$ for $\omega_g<\omega$, the $C(U)$ curves do not cross the sonic line and terminate at the singular point $(U,C)=(1,0)$. Next, we examine the energy contained in the self-similar solution. For $\omega<3$, the only physical solution is that obtained for $\delta=\delta_{\textrm{ST}}$: $\delta$ must satisfy Equation~(\ref{eq:delta_lim}), $\delta\le\delta_{\textrm{ST}}$, in order for the energy in any $\xi_1<\xi<\xi_2$ part of the solution not to diverge, and the $C(U)$ curve crosses the sonic line at a non-singular point for $\delta<\delta_{\textrm{ST}}$. In order to examine the energy content of solutions in the $(3<\omega<\omega_g,\delta\ge0)$ and $(\omega_g<\omega,\delta>\delta_{\textrm{WS}})$ regions we need to analyze the behavior of the solutions near the singular point $(U,C)=(1,0)$. The $C(U)$ curves of the solutions in these regions of $(\omega,\delta)$ lie above the sonic line, which implies that all $C_+$ characteristics emerging behind the shock overtake the shock. $C_0$ and $C_-$ characteristics, on the other hand, move along the $C(U)$ curve toward the $(U,C)=(1,0)$ singular point (see Figures~\ref{fig:UC_curves} and~\ref{fig:UC_curves_zoom}). 
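The dependence of the critical value $\delta_*(\omega)$ on the regime, found numerically above, can be summarized in a short sketch (ours; $\delta_{\textrm{WS}}$ has no closed form — it is fixed by the singular sonic-crossing condition — so it is supplied as an input here):

```python
# Sketch of delta_*(omega), summarizing the numerical findings above.
# omega_g depends on gamma (omega_g = 3.26 for gamma = 5/3); delta_WS must be
# obtained separately, by requiring that the C(U) curve crosses the sonic
# line at a singular point.

def delta_star(omega, omega_g, delta_WS=None):
    if omega < 3.0:
        return (omega - 3.0) / 2.0   # Sedov-Taylor value, delta_ST
    if omega < omega_g:
        return 0.0                   # the "gap": constant shock velocity
    return delta_WS                  # Waxman-Shvarts value

assert delta_star(2.5, 3.26) == -0.25   # below the gap
assert delta_star(3.1, 3.26) == 0.0     # inside the gap (gamma = 5/3)
```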
Defining $f=1-U$, Equation~(\ref{eq:dUdC}) is given, to leading orders in $f$ and $C$, by \begin{equation}\label{eq:fC near edge} \frac{d\ln f}{d\ln C}= \begin{cases} {\frac{\delta f+[3-(\omega-2\delta)/\gamma]C^2} {-\delta f(\gamma-1)/2+\{[(\gamma-1)\omega+2\delta]/(2\gamma)\}C^2}} & \text{for $\delta\neq0$,} \\ {\frac{f^2-(3-\omega/\gamma)C^2} {(\gamma-1)f^2-[(\gamma-1)\omega/(2\gamma)]C^2}} & \text{for $\delta=0$.} \ \end{cases} \end{equation} Let us consider first the $\delta>0$ case. Regardless of whether $f$ approaches 0 faster or slower than $C^2$, Equation~(\ref{eq:fC near edge}) implies that in this limit \begin{equation}\label{eq:f on C near edge} \lim_{f\to 0}\frac{d\ln f}{d\ln C}=\nu, \end{equation} where $\nu$ is some constant. Assuming that $f$ approaches zero slower than $C^2$, i.e., that $\nu<2$, leads to contradictions since Equation~(\ref{eq:fC near edge}) gives $\nu=-2/(\gamma-1)<0$. Assuming that $f$ approaches zero faster than $C^2$, i.e., that $\nu>2$, Equation~(\ref{eq:fC near edge}) gives $\nu=(6\gamma-2\omega+4\delta)/[(\gamma-1)\omega+2\delta]$, for which $\nu>2$ implies $\omega<3$. Thus, for $\omega>3$ and $\delta>0$ we must have $\nu=2$, i.e., $f\propto C^2$, and Equation~(\ref{eq:fC near edge}) gives \begin{equation}\label{eq:f_C1} f=\frac{\omega-3}{\gamma\delta} C^2. \end{equation} Using this result, the quadrature, Equation~(\ref{eq:quadrature}), gives \begin{equation}\label{eq:f_xi_1} f=\frac{3(\gamma-1)+2\delta}{\gamma}\ln\left(\frac{\xi}{\xi_{\rm in}}\right), \end{equation} i.e., the singular point $(U,C)=(1,0)$ is approached for finite $\xi_{\rm in}>0$. Using these results and Equation~(\ref{eq:G}) we find \begin{equation}\label{eq:G1} G\propto f^{-(\gamma\omega+2\delta-3)/[3(\gamma-1)+2\delta]}. \end{equation} We may now determine the $R$ dependence of the energy $E_s(R)$, contained within the $\xi_0(R)<\xi<1$ region of the self-similar solution, where $\xi_0$ is a $C_0$ characteristic. 
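Before doing so, the branch selection above can be verified numerically; the following sketch (ours) confirms that, for $\delta>0$, the assumption $\nu>2$ is consistent only for $\omega<3$, independent of $\gamma$ and $\delta$:

```python
# For delta > 0, assuming f -> 0 faster than C^2 gives (from Eq. fC near edge)
# nu = (6*gamma - 2*omega + 4*delta) / ((gamma - 1)*omega + 2*delta),
# and the text asserts that nu > 2 holds exactly when omega < 3.

def nu_fast_branch(gamma, omega, delta):
    return (6*gamma - 2*omega + 4*delta) / ((gamma - 1)*omega + 2*delta)

for gamma in (1.2, 5/3, 2.5):
    for delta in (0.01, 0.1, 1.0):
        for omega in (2.0, 2.9, 3.1, 4.0):
            assert (nu_fast_branch(gamma, omega, delta) > 2) == (omega < 3)
```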
Equation~(\ref{eq:slfsim_char}) may be solved, using Equation~(\ref{eq:f_xi_1}), to give \begin{equation}\label{eq:C0_1} \ln\left(\frac{\xi_0}{\xi_{\rm in}}\right)\propto R^{-[3(\gamma-1)+2\delta]/\gamma} \end{equation} for the evolution of $C_0$ characteristics. Using Equations~(\ref{eq:f_xi_1}) and~(\ref{eq:G1}), we find that the kinetic energy integral, $I_k(\xi)$ given in Equation~(\ref{eq:Ik def}), diverges in the limit $\xi\rightarrow\xi_{\rm in}$ as \begin{eqnarray}\label{eq:Ik_eta1} I_k(\xi)&\propto& f(\xi)^{1-(\gamma\omega+2\delta-3)/[3(\gamma-1)+2\delta]}\nonumber\\ &\propto& \left[\ln\left(\frac{\xi}{\xi_{\rm in}}\right)\right]^{-\gamma(\omega-3)/[3(\gamma-1)+2\delta]}. \end{eqnarray} Using Equation~(\ref{eq:C0_1}) we then find \begin{eqnarray}\label{eq:Ik_C01} I_k[\xi_0(R)]&\propto& R^{(\omega-3)}, \end{eqnarray} which, using Equation~(\ref{eq:E_ss}), implies that the energy diverges (for $\delta>0$) as $E_s\propto R^{2\delta}$. This divergence of energy implies that the $\delta=\delta_{\textrm{WS}}>0$ solutions are the only physical solutions for $\omega>\omega_g$. These solutions cross the sonic line at a singular point and terminate at $(U,C)=(1-\delta,C=0)$ with finite energy $E_s$, while the $\delta>\delta_{\textrm{WS}}>0$ solutions terminate at $(U,C)=(1,0)$ with diverging energy $E_s$. The divergence also implies that the $\delta=0$ solutions are the only solutions that may be physical solutions within the gap. Let us consider therefore the behavior of the $\delta=0$ solutions next. The solution of Equation~(\ref{eq:fC near edge}) for $\delta=0$ must also be of the form given by Equation~(\ref{eq:f on C near edge}). Assuming $f$ tends to 0 slower than $C$, i.e., $\nu<1$, leads to a contradiction since Equation~(\ref{eq:fC near edge}) gives $\nu=1/(\gamma-1)>1$. Therefore, $\nu$ must satisfy $\nu\ge1$. 
For $\nu>1$, Equation~(\ref{eq:fC near edge}) gives \begin{equation}\label{eq:f_C} \nu=\frac{6\gamma-2\omega}{(\gamma-1)\omega}, \end{equation} which satisfies $\nu>1$ for $\omega<6\gamma/(\gamma+1)$. For $\nu=1$, Equation~(\ref{eq:fC near edge}) gives \begin{equation}\label{eq:f_C_nu1} f^2=\frac{6\gamma-(\gamma+1)\omega}{2\gamma(2-\gamma)} C^2. \end{equation} The solution of the quadrature, Equation~(\ref{eq:quadrature}), gives \begin{equation}\label{eq:f_xi} f=\theta\ln\left(\frac{\xi}{\xi_{\rm in}}\right),\quad\theta= \left\{ \begin{array}{ll} \left(3-\frac{\omega}{\gamma}\right), & \hbox{$\nu>1$ ;} \\ \frac{3(\gamma-1)}{\gamma+1}, & \hbox{$\nu=1$,} \end{array} \right. \end{equation} which implies \begin{equation}\label{eq:C0} \ln\left(\frac{\xi_0}{\xi_{\rm in}}\right)\propto R^{-\theta}. \end{equation} Using these results and Equation~(\ref{eq:G}) we find \begin{equation}\label{eq:Geta} G\propto f^{-\mu}, \quad \mu= \left\{ \begin{array}{ll} (\gamma-1)\omega/(3\gamma-\omega), & \hbox{$\nu>1$ ;} \\ ((\gamma+1)\omega-6)/3(\gamma-1), & \hbox{$\nu=1$,} \end{array} \right. \end{equation} which implies that $I_k(\xi)$ diverges in the limit $\xi\rightarrow\xi_{\rm in}$ as $I_k(\xi)\propto f(\xi)^{1-\mu}$, which gives \begin{eqnarray}\label{eq:Ik_C0} I_k(\xi_0)&\propto& R^{\theta(\mu-1)}=R^{\omega-3}. \end{eqnarray} Numerical integration of Equations~(\ref{eq:dUdC}) and~(\ref{eq:quadrature}) shows that solutions starting at the strong shock point, Equations~(\ref{eq:shock_boundary}), approach the $(U,C)=(1,0)$ singular point along a $\nu>1$ curve. The asymptotic behavior of $I_k$ is the same, as we find here, for both $\nu=1$ and $\nu>1$. Using Equation~(\ref{eq:E_ss}), we find that the kinetic energy part of $E_s$ approaches a finite, non-zero, constant as $R$ diverges (it is straightforward to verify that the internal energy part of $E_s$ vanishes in the limit $R\rightarrow\infty$). 
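That both branches yield the same scaling, $I_k(\xi_0)\propto R^{\omega-3}$, amounts to the identity $\theta(\mu-1)=\omega-3$ in each case; a short numerical verification of the algebra (our own sketch):

```python
# Verify theta*(mu - 1) = omega - 3 for both branches of Eqs. (f_xi), (Geta),
# as stated in Eq. (Ik_C0).

def exponent_nu_gt_1(gamma, omega):
    theta = 3 - omega/gamma
    mu = (gamma - 1)*omega/(3*gamma - omega)
    return theta*(mu - 1)

def exponent_nu_eq_1(gamma, omega):
    theta = 3*(gamma - 1)/(gamma + 1)
    mu = ((gamma + 1)*omega - 6)/(3*(gamma - 1))
    return theta*(mu - 1)

for gamma in (1.2, 5/3, 2.5):
    for omega in (3.05, 3.2, 3.5):
        assert abs(exponent_nu_gt_1(gamma, omega) - (omega - 3)) < 1e-9
        assert abs(exponent_nu_eq_1(gamma, omega) - (omega - 3)) < 1e-9
```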
The fact that the kinetic energy approaches a constant also implies that the mass contained within $\xi_0(R)<\xi<1$ approaches a finite non-zero constant, since the velocity of each fluid element, $\xi_0\dot{R}$, approaches a constant, $\xi_{\rm in}\dot{R}$. Thus, the $\delta=0$ solutions satisfy the requirement for the existence of a characteristic $\xi_c(R)$ that does not overtake the shock, and for which the energy contained in the self-similar part of the flow, $\xi_c(R)<\xi<1$, does not diverge as $R\rightarrow\infty$. As pointed out in Section~\ref{sec:Introduction}, it was suggested by \citet{Gruzinov03} that the asymptotic solutions in the gap are the $\delta=0$ solutions. The justification given there is based on the argument that the non-self-similar part of the asymptotic flow, which must exist since the self-similar solution contains infinite energy, acts as an infinite mass piston, which must move at a constant speed and may support the flow ahead of it. As we show here, the mass and energy contained in the self-similar and non-self-similar parts of the flow are both finite and may be comparable (this is also the case for the WS solutions obtained for $\omega>\omega_g$). Thus, the validity of the heuristic argument given in \citep{Gruzinov03} is not obvious. \section{Slow convergence to self-similarity: Numerical results and modified self-similar solutions} \label{sec:slowly converging} \subsection{Comparison to numerical solutions} \label{sec:numerical solution} We present in this section a comparison between the $\delta=0$ self-similar solutions and numerical solutions of the flow equations obtained for $\omega$ values within the gap. We have numerically calculated the propagation of a strong spherical shock wave through an ideal gas using a Lagrangian scheme with total energy conservation \citep[e.g.][]{Caramana98}. 
Shock waves are described in our calculations using a von Neumann artificial viscosity, implemented as a pressure term in all cells with negative difference between the nodes' velocities, $\Delta u<0$, \begin{equation}\label{eq:art visc} q=\Delta u\left(x_{q}\Delta u-x_{l}c\right), \end{equation} with $x_{q}=4$ and $x_{l}=0.1$. The initial conditions used are zero velocity everywhere, constant density and pressure at $r<d$, relatively small pressure at $r>d$ (see below), and a density profile proportional to $r^{-\omega}$ at $r>d$. The initial mesh spacing was uniform, $\Delta r=d$, i.e., the pressure was high only in the innermost cell. We chose the density of the innermost cell such that its mass is $10$ times higher than its neighbor's, and we chose its pressure to be $10^{6}$ times higher than its neighbor's. The pressure at the rest of the cells was chosen such that the outgoing shock wave is always strong. The mesh included $10^{5}$ cells, i.e., we were able to calculate explosions up to a radius of $R/d\simeq10^{5}$. \begin{figure} \epsscale{1} \plotone{p2_color.eps} \caption{$C(U)$ curve for $\omega=3.1,\, \gamma=5/3$. Shown are the numerical solution at $R/d=9\times10^{4}$ and two self-similar solutions, corresponding to $\delta=0$ and $\delta=-0.01865$. Note that the $C(U)$ curve of the $\delta=-0.01865$ self-similar solution crosses the sonic line (at a non-singular point). This is not visible in the figure since the crossing takes place very close to $(U,C)=(1,0)$. \label{CU}} \end{figure} \begin{figure} \epsscale{1} \plotone{p1.eps} \caption{$\delta(R)$ determined from a numerical simulation, using the zero slope method (see the text), for an explosion with $\omega=3.1,\, \gamma=5/3$. The error bars are an estimate of the accuracy of the determination of $\delta$ using this method. The accuracy is better for larger values of $R/d$, where the flow behind the shock is better resolved. 
\label{deltaofR}} \end{figure} We would like to examine the behavior of $\delta(R)\equiv d\ln \dot{R}/d\ln R=R\ddot{R}/\dot{R}^2$. Since $\delta$ depends on $\ddot{R}$ and the numerically determined value of $R$ is noisy due to the finite resolution, a derivation of $\delta$ by a direct differentiation of $R$ is not accurate enough for our study. In order to overcome this problem, we derive $\delta$ from the numerical spatial profiles: we choose $\delta$ as the value for which the difference between the numerical profiles and the self-similar profiles, determined by Equations~(\ref{eq:dUdC})--(\ref{eq:G}), has a zero slope at the shock front (this ensures that the acceleration of the shock wave, which is determined by the spatial profiles in the vicinity of the shock front, is the same for the self-similar and numerical solutions). In what follows, we give some details regarding this method for the determination of $\delta$, which we term the ``zero slope method''. We examine the difference $f(\xi)$ between the self-similar profiles ($U$, $C$, and $P$), obtained for a chosen value of $\delta$, and the profiles obtained in the numerical simulations at some $R/d$. For the comparison of the self-similar and numerical profiles we consider the radial range $0.9995>\xi=r/R>0.999$. The upper limit of this range is set by requiring that the oscillations behind the shock, caused by the artificial viscosity, are damped considerably, while the lower limit is chosen to ensure that deviations from the self-similar solutions are small (see Section~\ref{sec:mod_slfsim}). In order to determine whether or not the difference $f(\xi)$ is consistent with a zero slope, i.e., with $f'=0$, it is insufficient to use a simple linear fit for $f(\xi)$, since oscillations in the numerical profile caused by the artificial viscosity are not random and cannot be neglected.
We therefore define $\bar{f}=(f-\mu(f))/\sigma(f)$ and $\bar{\xi}=(\xi-\mu(\xi))/\sigma(\xi)$, where $\mu$ and $\sigma$ stand for the mean and standard deviation over all numerical cells in the range, and examine the number of points for which $\bar{\xi}>1$ and $\bar{f}>0(<0)$, denoted $N_{++(-)}$, and the number of points for which $\bar{\xi}<1$ and $\bar{f}>0(<0)$, denoted by $N_{-+(-)}$. In the absence of numerical inaccuracies, the ratios $r_{+(-)}=N_{+(-)+}/N_{+(-)-}$ should all equal unity for $f'=0$. In order to allow for numerical inaccuracies, we consider $f'$ to be consistent with 0 for $0.1<r_{+(-)}<10$, and determine the range of allowed values of $\delta$ as the range for which $0.1<r_{+(-)}<10$. Since $r_{+(-)}$ (or $1/r_{+(-)}$) grow rapidly as $\delta$ is modified, the range of allowed values of $\delta$ is not sensitive to the exact choice of the allowed range of $r_{+(-)}$. In Figure~\ref{CU}, we compare the numerical $C(U)$ curve obtained at $R/d=9\times10^{4}$, for an explosion with $\omega=3.1,\, \gamma=5/3$, with the self-similar curve obtained for the value of $\delta$ determined by the zero slope method, $\delta=-0.01865$. The numerical curve and the self-similar one are very close near the shock front and show a small discrepancy far from the shock. Similar results are obtained for other values of $R/d$. $\delta(R)$ determined by the method described above is shown in Figure~\ref{deltaofR} for an explosion with $\gamma=5/3,\,\omega=3.1$. The error bars are an estimate of the accuracy of the determination of $\delta$ using this method. It is apparent that the convergence of the inferred value of $\delta$ is very slow, and that it has not converged for a very large value of $R/d$, $R/d\approx10^{5}$.
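The quadrant counts and acceptance ratios just defined can be sketched as follows (our own illustrative implementation; variable names and the array layout are assumptions, and no guard against empty quadrants is included):

```python
import numpy as np

# f: difference between the numerical and self-similar profiles over the
# compared range of xi = r/R. The ratios r_+ and r_- should be close to
# unity for a zero-slope difference; 0.1 < r < 10 is accepted, as in the text.

def slope_ratios(xi, f):
    fb = (f - f.mean()) / f.std()     # \bar{f}
    xb = (xi - xi.mean()) / xi.std()  # \bar{xi}
    n_pp = np.sum((xb > 1) & (fb > 0))
    n_pm = np.sum((xb > 1) & (fb < 0))
    n_mp = np.sum((xb < 1) & (fb > 0))
    n_mm = np.sum((xb < 1) & (fb < 0))
    return n_pp / n_pm, n_mp / n_mm   # r_+, r_-

def consistent_with_zero_slope(xi, f):
    r_plus, r_minus = slope_ratios(xi, f)
    return 0.1 < r_plus < 10 and 0.1 < r_minus < 10
```

For an oscillating, slope-free difference (e.g. alternating signs) both ratios are close to unity and the test accepts; a systematic slope drives one ratio far from unity.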
It is difficult to determine, based on the simulation, whether or not $\delta$ approaches 0 for $R/d\rightarrow\infty$: the rate of change of $\delta$ obtained at $R/d\approx10^{5}$, $d\delta/d\ln(R)\approx3\times10^{-3}$, implies that an increase in $R/d$ by a factor of $10$ will modify the inferred value of $\delta$ from $\delta\approx-0.019$ to only $\delta\approx-0.014$. A comment is in order regarding the convergence of our numerical solutions. We have checked convergence by examining the modification of the inferred value of $\delta$ at fixed $R/d$, obtained when the number of cells within the region of high initial pressure is increased, i.e., choosing $\Delta r/d<1$ (and keeping a uniform initial grid spacing). The results presented in this section for $\delta$ at large $R/d$ using $\Delta r/d=1$ are converged to a few percent. For example, for the $\gamma=5/3,\,\omega=3.1$ case, increasing the number of cells by a factor of $2$, i.e., using $\Delta r/d=1/2$, changes the inferred values of $\delta$ by less than $5\%$. Slow convergence of the numerical solutions to an asymptotic self-similar behavior is also obtained for other values of $\gamma,\, \omega$ within or near the gap (see also Section~\ref{sec:Discussion}). This result motivates us to explore in Section~\ref{sec:mod_slfsim} modified self-similar solutions that describe the approach of the flow to self-similarity. \subsection{Modified self-similar solutions} \label{sec:mod_slfsim} To quantitatively consider the approach to self-similarity, we examine solutions of the hydrodynamic equations of the form \begin{eqnarray}\label{eq:pert definition} u(r,t)&=&\dot{R}\xi[U(\xi,\delta(t))+\eta U_1(\xi,\delta(t))],\nonumber \\ c(r,t)&=&\dot{R}\xi[C(\xi,\delta(t))+\eta C_1(\xi,\delta(t))], \nonumber \\ \rho(r,t)&=&BR^{\varepsilon}[G(\xi,\delta(t))+\eta G_1(\xi,\delta(t))], \nonumber \\ p(r,t)&=&BR^{\varepsilon}\dot{R}^{2}[P(\xi,\delta(t))+\eta P_1(\xi,\delta(t))].
\end{eqnarray} Here, $\delta(t)\equiv d\ln\dot{R}/d\ln R$, $\eta\equiv d\delta/d\ln R$, and $F(\xi,\delta(t))$ stands for the self-similar solution $F(\xi)$ obtained for the instantaneous value of $\delta$, $\delta(t)$. The flow fields are of the general form \begin{equation}\label{eq:general pert} f(r,t)=R^{\alpha}\dot{R}^{\beta}[F(\xi,\delta(t))+\eta F_1(\xi,\delta(t))]. \end{equation} For this form, \begin{eqnarray}\label{eq:general r diff} \left(\frac{\partial f}{\partial r}\right)_{t} = R^{\alpha-1}\dot{R}^{\beta}(F'+\eta F_1'), \end{eqnarray} where $'\equiv\partial/\partial\xi$, and, using $\dot{\delta}=\eta\dot{R}/R$, \begin{eqnarray}\label{eq:general t diff} \left(\frac{\partial f}{\partial t}\right)_{r}= &\alpha& R^{\alpha-1}\dot{R}^{\beta+1}[F(\xi,\delta(t))+\eta F_1(\xi,\delta(t))] \nonumber \\ &+&\beta R^{\alpha}\dot{R}^{\beta-1}\ddot{R}[F(\xi,\delta(t))+\eta F_1(\xi,\delta(t))] \nonumber \\ &+&R^{\alpha}\dot{R}^{\beta}[(F'+\eta F_1'(\xi,\delta(t)))(-\frac{r}{R^{2}}\dot{R})\nonumber \\ &+&\frac{\partial F}{\partial \delta} \dot{\delta}+\eta\frac{\partial F_1}{\partial \delta}\dot{\delta}+\dot{\eta} F_1] \nonumber \\ =&&R^{\alpha-1}\dot{R}^{\beta+1}[(F+\eta F_1)(\alpha+\beta\delta)-\xi( F'+\eta F_1')\nonumber\\ &+&\eta(\frac{\partial F}{\partial\delta}+\eta\frac{\partial F_1}{\partial\delta}+\frac{d\ln\eta}{d\ln R} F_1)]. \end{eqnarray} Restricting to solutions with $d\ln\eta/d\ln R=0$ and neglecting $\eta^2$ terms, assuming $\eta\equiv d\delta/d\ln R\ll1$, we obtain a set of ordinary differential equations for $U_1,\,G_1,\,P_1$, which may be written as \begin{equation}\label{eq:main pert} \mathbf{A}\left(\begin{array}{c} U_1 \\ G_1 \\ P_1 \end{array}\right)' = \mathbf{B}\left(\begin{array}{c} U_1 \\ G_1 \\ P_1 \end{array}\right) + \mathbf{C}\frac{\partial}{\partial\delta}\left(\begin{array}{c} U \\ G \\ P \end{array}\right). \end{equation} The matrices $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ are given in Appendix~\ref{sec:mod_eqs}.
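Schematically, Equation~(\ref{eq:main pert}) is a linear system of ODEs in $\xi$ that can be integrated inward from the shock once the matrices and $\partial_\delta(U,G,P)$ are tabulated; the following sketch (ours, with placeholder callables standing in for the actual matrices of Appendix~\ref{sec:mod_eqs}) only illustrates the structure of the system:

```python
import numpy as np

# y = (U1, G1, P1); solve A(xi) y' = B(xi) y + C(xi) dF/ddelta by a simple
# Euler march from the shock front, xi = 1, inward. A_of, B_of, C_of and
# dFddelta_of are placeholder callables returning the xi-dependent inputs;
# y_shock is the boundary value at the shock.

def integrate_modified(A_of, B_of, C_of, dFddelta_of, xi_grid, y_shock):
    y = np.asarray(y_shock, dtype=float)
    out = [y.copy()]
    for xi0, xi1 in zip(xi_grid[:-1], xi_grid[1:]):
        rhs = B_of(xi0) @ y + C_of(xi0) @ dFddelta_of(xi0)
        y = y + (xi1 - xi0) * np.linalg.solve(A_of(xi0), rhs)
        out.append(y.copy())
    return np.array(out)
```

With $\mathbf{A}$ the identity, $\mathbf{B}=0$, and a constant source term, the march reproduces the exact linear solution, which provides a convenient consistency test of the implementation.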
The equations for $U,\,C,\,G$ are the same self-similar equations as before, given in Appendix~\ref{sec:slfsim_eqns}. The boundary conditions at the shock are set by the Rankine--Hugoniot relations, which imply \begin{eqnarray}\label{eq:boundary condition for pert by Hugoniot} U_1(\delta(t),1)=P_1(\delta(t),1)=G_1(\delta(t),1)=0. \end{eqnarray} Note that $\eta$ does not appear in the equations describing the modified solutions, Equation~(\ref{eq:main pert}), and cannot therefore be determined by these equations. \begin{figure} \epsscale{1} \plotone{p3_color.eps} \caption{$C(U)$ curves for explosions with $\omega=3.1, \gamma=5/3$ and $\omega=2.8,\, \gamma=5/3$. Shown are the numerical solutions, self-similar solutions ($\eta=0$) and modified self-similar solutions ($\eta\neq0$). The difference between the $\eta=0$ and $\eta\neq0$ solutions is difficult to identify in this plot. It is clearly shown in Figures~\ref{CU-pert}--\ref{G-pert}. \label{CU-pert}} \end{figure} \begin{figure} \epsscale{1} \plotone{p4_color.eps} \caption{Numerical and modified self-similar spatial velocity profiles of the calculations presented in Figure~\ref{CU-pert}. Top panels present a comparison of the numerical profiles with the self-similar profiles obtained for the appropriate value of $\delta$ (solutions of Equations~(\ref{eq:dUdC})-(\ref{eq:G})). Bottom panels present a comparison of the difference between the two, i.e., of the numerical deviation from the self-similar solution, with the deviation predicted by the modified self-similar solutions, defined by Equations~(\ref{eq:pert definition}) and determined by Equation~(\ref{eq:main pert}). \label{U-pert}} \end{figure} We compare in Figures~\ref{CU-pert}--\ref{G-pert} the modified self-similar solutions to the results of numerical calculations, for $\omega=3.1,\, \gamma=5/3$ and $\omega=2.8,\, \gamma=5/3$. 
We have chosen an example with $\omega=2.8<3$ below the gap, in order to emphasize that slow convergence to self-similarity is not unique to the gap region. The $C(U)$ curves are compared in Figure~\ref{CU-pert}, and the spatial profiles are compared in Figures~\ref{U-pert}--\ref{G-pert}. The spatial profiles $U_1,\,C_1,\,P_1,\,G_1$ of the numerical solutions were obtained by subtracting from the numerical profiles the self-similar profiles $U,\,C,\,P,\,G$ corresponding to the appropriate value of $\delta(R)$. The value of $\eta$ was determined by comparing the numerical results for $U_1,\,C_1,\,P_1,\,G_1$ with the solutions of Equation~(\ref{eq:main pert}) (note that the solutions of Equation~(\ref{eq:main pert}) are independent of $\eta$, which just determines the normalization of the deviation from the self-similar solution, as given by Equations~(\ref{eq:pert definition})). The agreement between the numerical solutions and the modified self-similar solutions, demonstrated in the plots of Figures \ref{CU-pert}--\ref{G-pert}, suggests that the modified self-similar solutions provide an approximate description of the approach to self-similarity. \begin{figure} \epsscale{1} \plotone{p5_color.eps} \caption{Same as Figure~\ref{U-pert}, but for the spatial pressure profiles. \label{P-pert}} \end{figure} \begin{figure} \epsscale{1} \plotone{p6_color.eps} \caption{Same as Figure~\ref{U-pert}, but for the spatial sound velocity profiles. \label{C-pert}} \end{figure} \begin{figure} \epsscale{1} \plotone{p7_color.eps} \caption{Same as Figure~\ref{U-pert}, but for the spatial density profiles. \label{G-pert}} \end{figure} \section{Summary and discussion} \label{sec:Discussion} We have shown that the self-similar solutions describing the asymptotic flow of the strong explosion problem for $\omega$ values within the gap, $3<\omega<\omega_g(\gamma)$, are the $\delta=0$, $\dot{R}=$~constant, self-similar solutions. 
For $\omega>3$, the energy in the self-similar solutions is infinite, implying that the self-similar solution may describe the flow only in part of the $(\xi,R)$-plane. This suggests that for $\omega>3$ the asymptotic solution should be composed of a self-similar solution describing the flow at $\xi_c(R)<\xi<1$, matched along some characteristic line $\xi_c(R)$ to a different solution at $0<\xi<\xi_c(R)$, such that the energy contained in the self-similar part of the flow, $\xi_c(R)<\xi<1$, does not diverge as $R\rightarrow\infty$ \citep{WaxmanShvarts93}. In the analysis of \citep{WaxmanShvarts93}, it was assumed that $\xi_c$ is a $C_+$ characteristic. This was mainly motivated by the fact that requiring the existence of a $C_+$ characteristic that does not overtake the shock front as $R\rightarrow\infty$ is equivalent to requiring that the solution passes through a sonic point, and it is commonly accepted that the similarity exponents of a second-type solution are determined by the requirement that the solution passes through such a singular point. We showed here that in order to determine the similarity exponents it is sufficient to require the existence of any characteristic that does not overtake the shock and for which the energy contained in the self-similar part of the flow, $\xi_c(R)<\xi<1$, does not diverge as $R\rightarrow\infty$. Examining $\omega$ values below, within, and above the ``gap'', we showed that for each value of $\omega$ there is only one value of $\delta$, $\delta=\delta_*(\omega)$, which yields a valid physical self-similar solution (see Table~\ref{table}). For $\delta<\delta_*$ the $C(U)$ curve determined by Equation~(\ref{eq:dUdC}) crosses the sonic line $U+C=1$ at a non-singular point (yielding a non-single-valued solution, see Equations~(\ref{eq:quadrature}) and~(\ref{eq:deltas})).
For $\delta>\delta_*$, the self-similar solution energy diverges, in the sense that the energy contained in $\xi_c(R)<\xi<1$ diverges for any choice of a characteristic, $\xi_c(R)$, that does not overtake the shock wave. The physical solutions are the ST solutions, $\delta_*=\delta_{\textrm{ST}}=(\omega-3)/2$ for $\omega<3$, the solutions derived in \citep{WaxmanShvarts93,WaxmanShvarts10}, $\delta_*=\delta_{\textrm{WS}}<\delta_{\textrm{ST}}$, for $\omega>\omega_g$, and the $\delta_*=0$ solutions for $3<\omega<\omega_g$. The $C(U)$ curves of the $3<\omega<\omega_g$ solutions do not cross the sonic line and $\xi_c(R)$ must be chosen as a $C_0$ characteristic for these solutions. In Section~\ref{sec:numerical solution}, we compared the asymptotic, $R/d\gg1$, behavior of numerical solutions of the hydrodynamic equations, Equations~(\ref{eq:hydro_eq}), to that expected based on the $\delta=0$ self-similar solutions. We find that while the flow approaches a self-similar behavior with $|\delta|\ll1$, the convergence to self-similarity is very slow for $\omega\sim3$ (e.g., Figure~\ref{deltaofR}). It should be noted that convergence to self-similarity is slow for any value of $\omega\sim3$, both within and below the gap, as demonstrated in Figure~\ref{delta-omega}. Hence, it is difficult to check using numerical solutions whether for $\omega$ values within the gap the flow indeed approaches a $\delta=0$ self-similar behavior as $R\rightarrow\infty$. We showed in Section~\ref{sec:mod_slfsim} that in this case the flow may be described by a modified self-similar solution, $d\ln\dot{R}/d\ln R=\delta$ with slowly varying $\delta(R)$, $\eta\equiv d\delta/d\ln R\ll1$. In these solutions, the spatial profiles are given by a sum of the self-similar solution corresponding to the instantaneous value of $\delta$ and a self-similar correction linear in $\eta$, see Equations~(\ref{eq:pert definition}). 
The equations describing the self-similar corrections are given in Equation~(\ref{eq:main pert}). We have shown that for $\omega\sim3$ the modified self-similar solutions provide an approximate description of the flow at large $R$, see Figures~\ref{U-pert}--\ref{G-pert}, with $\delta\rightarrow0$ (and $\eta\neq0$) for $3\leq\omega\leq\omega_{g}$. These results support the conclusion that the flow approaches the $\delta=0$ self-similar solutions as $R$ diverges. \begin{figure} \epsscale{1} \plotone{p8.eps} \caption{Self-similar exponent $\delta$ as a function of $\omega$ for $\gamma=5/3$ (solid line), compared with the value of $\delta$ inferred from numerical simulations (using the method described in Section~\ref{sec:mod_slfsim}) at $R/d=9\times10^4$ (dashed). \label{delta-omega}} \end{figure} Based on the analysis presented here, we suggest that the definition of first- and second-type similarity solutions should be somewhat modified, and that the family of second-type solutions should be expanded. Solutions of the first type may be defined as solutions that are valid over the entire $(r,t)$-plane (or the part of it where the flow takes place). Such solutions must satisfy the global conservation laws (of mass, momentum, and energy), and hence the values of the similarity exponents of such solutions may be determined by dimensional considerations. Solutions of the second type may be defined as solutions that are valid only in part of the region in the $(r,t)$-plane over which the flow takes place. Such solutions should be required to allow the existence of a characteristic line, $\xi_c(R)$, along which the self-similar solution is matched to another solution, and to comply with the global conservation laws within the region of the $(r,t)$-plane described by the self-similar solution. \acknowledgments This research was partially supported by ISF, AEC and Minerva grants.
\section*{METHODS} \subsection*{Centrality measures} The degree $d_i$ of node $i$ is the number of nodes $i$ is connected to. In directed networks, the in- and out-degree $d_i^\text{in}$ and $d_i^\text{out}$ are distinguished. For the matrix obtained by averaging over all adjacency matrices of networks with fixed node degrees, $c_i=d_i$ is a left eigenvector for the largest eigenvalue. Likewise, the degree ratios $d_i^\text{out} / d_i^\text{in}$ form a left eigenvector of the Laplacian matrix obtained by averaging over all networks with given node degrees \cite{Serrano:2009, Masuda:2009a}. The betweenness centrality $b_i$ of a node $i$ quantifies the fraction of shortest paths that pass through this node \cite{Freeman:1977}. It is defined as \begin{equation} b_i = \sum_{(j,k)} \frac{\sigma_{jk}(i)}{\sigma_{jk}}~, \end{equation} where the summation runs over all ordered node pairs $(j,k)$; $\sigma_{jk}$ denotes the total number of shortest paths from node $j$ to node $k$; $\sigma_{jk}(i)$ is the number of such paths running through node $i$. The shell index $k_i$ \cite{Seidman:1983,Dorogovtsev:2006,Kitsak:2010} of a node $i$ is derived from the consideration of the $k$-core \cite{Dorogovtsev:2006} for integer $k \ge 0$. The $k$-core of a network is the largest induced subnetwork in which all nodes have degree at least $k$. Starting from the full network, the $k$-core is obtained by deleting nodes (together with their edges) with degree strictly less than $k$ until no such nodes are left. The shell index $k_i$ is the largest value $k$ such that node $i$ is contained in the $k$-core. In the case of directed networks, the $k$-core is based on the out-degree. \subsection*{Epidemic models} We simulate the {\em SIR model} of epidemic spreading in the time-discrete version. Transitions between the three states (S, I, R) are as follows.
If node $i$ is in the S (susceptible) state and has $\nu$ infected (I) neighbors at time $t$, then node $i$ remains susceptible with probability $(1-\beta)^\nu$; otherwise $i$ is infected at time $t+1$. If node $i$ is in the infected state at time $t$, then $i$ is in the R (removed) state at time $t+1$. In the {\em SIS model}, in contrast to SIR, a node infected at time $t$ is susceptible again at time $t+1$. The probability of being removed in the {\em SIR model} does not enter the linearized Equation~(\ref{eq:sirlin}) because it appears only in a second-order term in the equation for $x$. Therefore Equation~(\ref{eq:sirlin}) gives the same linear description for the {\em SIR} and {\em SIS} models. The system is in an absorbing configuration if none of the nodes is infected. For both models, the outbreak size is the number of nodes having been infected at least once before reaching an absorbing configuration. The spreading efficiency of node $i$ is the average outbreak size when initiating the dynamics with node $i$ infected and all others susceptible. \subsection*{Rank order correlation} For a vector $x \in \mathbb{R}^n$, the rank of component $i$ is given by \begin{equation} r_i(x) = 1 + | \{ j \neq i | x_j > x_i \}| + \frac{1}{2}|\{j \neq i | x_j = x_i \}|~. \end{equation} The rank order correlation coefficient $\rho(x,y)$ between two such vectors $x$ and $y$ is the Pearson correlation coefficient between the rank vectors $r(x)$ and $r(y)$. Thus $\rho(x,y)$ takes values in $[-1,1]$, with $\rho(x,y) = +1$ $(-1)$ if and only if $x$ and $y$ are in a strictly increasing (decreasing) relation. \bigskip \section*{ACKNOWLEDGMENTS} K.K.\ acknowledges financial support from VolkswagenStiftung and from European Commission NEST Pathfinder initiative on Complexity through project SYNLET (Contract 043312). M.A.S.\ acknowledges financial support by the Ram\'on y Cajal program of the Spanish Ministry of Science, MICINN Project No.
BFU2010-21847-C02-02, and Generalitat de Catalunya grant No. 2009SGR1055. V.M.E.\ and M.S.M.\ acknowledge financial support from MEC (Spain) through project FISICOS (FIS2007-60327).
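As an illustration of the epidemic dynamics defined in the Methods, the discrete-time SIR update and the outbreak-size measurement can be sketched as follows. This is a minimal sketch, not the authors' code: the adjacency-list encoding and the function names (`sir_outbreak_size`, `spreading_efficiency`) are our own, hypothetical choices.

```python
import random

def sir_outbreak_size(adj, seed, beta, rng):
    """Discrete-time SIR as in the Methods: a susceptible node with
    nu infected neighbors stays susceptible with probability
    (1 - beta)**nu; an infected node is removed in the next step.
    Returns the outbreak size, i.e. the number of nodes infected at
    least once before the absorbing configuration is reached."""
    infected, removed = {seed}, set()
    while infected:  # absorbing configuration: no infected nodes left
        at_risk = {j for i in infected for j in adj[i]
                   if j not in infected and j not in removed}
        newly = set()
        for j in at_risk:
            nu = sum(1 for k in adj[j] if k in infected)
            if rng.random() > (1 - beta) ** nu:
                newly.add(j)
        removed |= infected   # I -> R
        infected = newly      # S -> I
    return len(removed)

def spreading_efficiency(adj, i, beta, runs=1000, seed=0):
    """Average outbreak size when only node i is initially infected."""
    rng = random.Random(seed)
    return sum(sir_outbreak_size(adj, i, beta, rng)
               for _ in range(runs)) / runs
```

The SIS variant would only differ in returning newly infected nodes to the susceptible pool instead of adding them to `removed`.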
\section{Adaptive Algorithm} \label{section:adaptivealgorithm} A limitation of Algorithms \ref{algorithm:offline-greedy} and \ref{algorithm:online-greedy} is that $C$ needs to be known in advance for each window $W$ that the algorithms are applied to. We show here that it is possible to guess the value of $C$ in a window $W$, and we present Algorithm {\sf Adaptive-Greedy} (Algorithm \ref{algorithm:adaptive-greedy}) which does so. From the analysis of Algorithms \ref{algorithm:offline-greedy} and \ref{algorithm:online-greedy}, we know that knowledge of the value $C$ plays a vital role in the probability of success of the algorithms. \begin{algorithm}[t] {\small \KwIn{An $M \times N$ execution window $W$ with $M$ threads each with $N$ transactions, where $C$ is unknown\;} \KwOut{A greedy execution schedule for the window of transactions\;} \BlankLine Associate triplet of priorities $\langle \pi^{(3)}, \pi^{(2)}, \pi^{(1)} \rangle$ to each transaction when available for execution\; \textbf{Code for thread $P_i$}\; \Begin{ Initial contention estimate $C_i \leftarrow 1$\; \Repeat {all transactions are committed} { {\sf Online-Greedy($C_i$, $W$)}\; \If {bad event}{$C_i\leftarrow 2 \cdot C_i$ ; } } \caption{{\sf Adaptive-Greedy}} \label{algorithm:adaptive-greedy} } } \end{algorithm} In {\sf Adaptive-Greedy} each thread $P_i$ attempts to guess individually the right value of $C$. The algorithm is based on the exponential back-off strategy used by many contention managers developed in the literature, such as {\sf Polka}. The algorithm works as follows: each thread starts by assuming $C = 1$. Based on its current estimate of $C$, the thread attempts to execute Algorithm \ref{algorithm:online-greedy} for each of its transactions, assuming the window size $M \times N$. Now, if the choice of $C$ is correct, then each transaction of thread $P_i$ in the window $W$ should commit within the designated frame in which it becomes high priority.
Thus, all transactions of the frame should commit within the makespan time estimate of Algorithm \ref{algorithm:online-greedy}, which is $\tau_C = O(C \log (MN) + N \log^2(MN))$. However, if during $\tau_C$ some transaction does not commit within its designated frame (a bad event), then thread $P_i$ will assume that the choice of $C$ was incorrect, and will start over again with the remaining transactions assuming $C = 2C'$, where $C'$ is the previous estimate of $C$. Eventually thread $P_i$ will guess the correct value of $C$ for the window $W$, and all its transactions will commit within the respective time. The different threads adapt independently from each other to the correct value of $C$. At the same moment of time the various threads may have assumed different values of $C$. The threads with a higher estimate of $C$ will be given higher priority in conflicts, since threads with a lower $C$ have most likely guessed the wrong $C$ and are still adapting. In order to handle conflicts, each transaction uses a vector of priorities with three values $\langle \pi^{(3)}, \pi^{(2)}, \pi^{(1)} \rangle$. The value of the priority entry $\pi^{(3)}$ is inversely proportional to the current guess of $C$ for the thread, so that a higher value of $C$ implies a higher priority. The last two entries $\pi^{(2)}$ and $\pi^{(1)}$ are the same as in Algorithm \ref{algorithm:online-greedy}. It is easy to see that the correct choice of $C$ will be reached by a thread $P_i$ within $\log C$ iterations. The total makespan and response time are asymptotically the same as for Algorithm \ref{algorithm:online-greedy}. \section{Introduction} Multi-core architectures present both an opportunity and a challenge for multi-threaded software. The opportunity is that threads will be available to an unprecedented degree, and the challenge is that more programmers will be exposed to concurrency related synchronization problems that until now were of concern only to a selected few.
Writing concurrent programs is difficult because of the complexity of ensuring proper synchronization. Conventional lock based synchronization suffers from well known limitations, so researchers have considered non-blocking transactions as an alternative. Software Transactional Memory \cite{Shavit95, Her03, Herlihy93transactionalmemory} systems use lightweight and composable in-memory software transactions to address concurrency in multi-threaded systems, ensuring safety at all times \cite{Har03, Har05}. A contention management strategy is responsible for ensuring that the STM system as a whole makes progress. If transaction $T$ discovers it is about to conflict with $T'$, it has two choices: it can pause, giving $T'$ a chance to finish, or it can proceed, forcing $T'$ to abort. To solve this problem efficiently, $T$ consults the contention manager module to decide which choice to make. Of particular interest are {\em greedy contention managers}, where a transaction starts again immediately after every abort. Several (greedy) contention managers have been proposed in the literature. However, most contention managers have been assessed only experimentally, using specific benchmarks. There is a small amount of work in the literature that formally analyzes the performance of contention managers. The competitive ratio results are not encouraging, since the bounds are not tight. For example, with respect to the $O(s)$ bound in \cite{Att06}, when the number of resources increases, the performance degrades linearly. A question arises whether one can achieve tighter bounds. A difficulty in obtaining tight bounds is that the algorithms studied in \cite{Att06,Gue05a,Gue05b, scherer05, Schneider09} apply to the {\em one-shot scheduling problem}, where each thread issues a single transaction. One-shot problems can be related to graph coloring. It can be shown that the problem of finding the chromatic number of a graph can be reduced to finding an optimal schedule for a one-shot problem.
Since it is known that graph coloring is very hard to approximate, the one-shot problem is very hard to approximate too \cite{Schneider09}. In order to obtain better formal bounds, we propose to investigate an execution window of transactions (see the left part of Figure \ref{figure:window}), which has the potential to overcome the limitations of coloring in certain circumstances. An $M \times N$ window of transactions $W$ consists of $M$ threads with an execution sequence of $N$ different transactions per thread. Let $C$ denote the maximum number of conflicting transactions for any transaction in the window ($C$ is the maximum degree of the respective conflict graph of the window). A straightforward upper bound on the makespan is $\min(C N, MN)$: the bound $CN$ follows from the observation that each transaction in a thread may be delayed at most $C$ time steps by its conflicting transactions, and $MN$ follows from the serialization of the transactions. If we partition the window into $N$ one-shot transaction sets, each of size $M$, then the competitive ratio using the one-shot analysis results is $O(s N)$. When we use Algorithm {\sf RandomizedRounds} \cite{Schneider09} $N$ times, the completion time is in the worst case $O(C N \log n)$ (for some appropriate choice of $n$). \begin{figure*}[ht] \centerline{\subfloat[Before execution]{\includegraphics[height=1.6in,width=2.3in]{Graphic1}} \hfil \subfloat[After execution]{\includegraphics[height=1.9in,width=3.3in]{Graphic2}} } \caption{Execution window model for transactional memory} \label{figure:window} \end{figure*} We have results indicating that we can obtain better bounds under certain circumstances in the window. We present two randomized greedy algorithms in which transactions are assigned priority values, such that for some random initial interval in the beginning of the window $W$ each transaction is in low priority mode, and after the random period expires the transactions switch to high priority mode.
In high priority mode a transaction can only be aborted by other high priority transactions. The random initial delays have the property that the conflicting transactions are shifted inside their window and their execution times may not coincide (see the right part of Figure \ref{figure:window}). The benefit is that conflicting transactions can execute at different time slots and potentially many conflicts are avoided. The benefits become more apparent in scenarios where the conflicts are more frequent inside the same column of transactions and less frequent between different columns of transactions. \paragraph{Contributions:} We propose the contention measure $C$ within the window to allow more precise statements about the worst-case complexity bound of any contention management algorithm. We give two window-based randomized greedy algorithms for contention management in any execution window $W$. Our first Algorithm {\sf Offline-Greedy} gives a schedule of length $O(C + N \log(MN))$ with high probability, and improves on one-shot contention managers from a worst-case perspective. The algorithm is offline in the sense that it uses explicitly the conflict graph of the transactions to resolve the conflicts. Our second Algorithm {\sf Online-Greedy} produces a schedule of length $O(C \log (MN)+ N \log^2(MN))$ with high probability, which is only a factor of $O(\log (MN))$ worse in comparison to {\sf Offline-Greedy}. The benefit of the online algorithm is that it does not need to know the conflict graph of the transactions to resolve the conflicts. The online algorithm uses as a subroutine Algorithm {\sf RandomizedRounds} \cite{Schneider09}. We also give a third algorithm, {\sf Adaptive-Greedy}, which is an adaptive version of the previous algorithms: it achieves similar worst-case performance and adaptively guesses the value of the contention measure $C$.
The technique we use for the analysis of these algorithms is similar to the one used by Leighton {\it et al.}~\cite{LMR94} to analyze an online packet scheduling problem. Moreover, one advantage of our algorithms is that if the conflicts in the window are bounded by $C \leq N \log (MN)$, then the upper bounds we have obtained are within poly-logarithmic factors of optimal, since $N$ is a lower bound for the execution time. By finding window sizes in the program execution where $C$ is small compared to $N$, our algorithms provide better bounds than previously known algorithms. We prove the existence of an algorithm based on dynamic programming that finds in polynomial time the optimal decomposition of any arbitrary window $W$ into sub-windows $W_1, \ldots, W_k$, such that the maximum contention {\em density} in each is the smallest possible. The density simply measures how large $C$ is with respect to the number of transactions per thread. By applying our greedy contention management algorithms in the sub-windows we can obtain schedules which are asymptotically better than executing the algorithm in the whole window $W$. \paragraph{Outline of Paper:} The rest of the paper is organized as follows: In Section \ref{section:related}, we discuss the related work. We present the transactional memory model in Section \ref{section:model}. We present and formally analyze an offline randomized greedy algorithm in Section \ref{section:offlinealgorithm}. The online version is given in Section \ref{section:onlinealgorithm}. In Section \ref{section:adaptivealgorithm}, we describe the adaptive version of the aforementioned algorithms. We discuss the issues of window decomposition for the optimal window generation in Section \ref{section:decomposition}. Section \ref{section:conclusion} concludes the paper.
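The effect of the random initial delays described above can be illustrated with a small simulation. The sketch below is our own illustrative code (not part of the paper): each thread draws a random shift and we count the largest number of transactions that turn high priority in the same frame, which is what governs the number of simultaneous high-priority conflicts.

```python
import random
from collections import Counter

def max_frame_load(M, N, alpha, rng):
    """Each thread i draws a shift R_i uniformly from [0, alpha-1];
    its j-th transaction (0-based) becomes high priority in frame
    R_i + j.  Returns the largest number of transactions that become
    high priority within the same frame."""
    load = Counter()
    for _ in range(M):
        r = rng.randrange(alpha)
        for j in range(N):
            load[r + j] += 1
    return max(load.values())
```

Without shifts ($\alpha = 1$) every frame receives all $M$ column transactions at once; with $\alpha \approx C/\ln(MN)$ the per-frame load drops, which is the mechanism that keeps the expected number of simultaneously high-priority conflicting transactions near $O(\ln(MN))$.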
\section{Execution Window Model} \label{section:model} We consider a model that is based on an $M \times N$ execution window $W$ consisting of a set of transactions $W = \{(T_{11},\cdots, T_{1N}),$ $(T_{21},\cdots, T_{2N}),$ $\ldots,(T_{M1},\cdots,T_{MN})\}$ executed by $M$ threads running on $M$ processors $P_1, \cdots, P_M$, where each thread issues $N$ transactions in a sequence. For simplicity of the analysis we assume that a single processor runs one thread only, i.e., in total at most $M$ threads are running concurrently. A thread running on processor $P_i$ executes transactions $T_{i1}, \cdots, T_{iN}$ one after the other, and transaction $T_{ij}$ is executed as soon as $T_{i(j-1)}$ has completed or committed. Transactions share a set of objects $\Psi =\{O_1,\cdots,O_s\}$. Each transaction $T_{ij}$ may use at most $s$ different objects. Each transaction is a sequence of actions, each of which is either a read of some shared resource $O_l$, a write to some shared resource $O_k$, a commit, or an abort. Concurrent write-write actions or read-write actions to shared objects by two or more transactions cause conflicts between transactions. A transaction completes with a commit when each of its actions is performed without conflicts. If conflicts occur, then a transaction either aborts, or it may commit and force all other conflicting transactions to abort. In a {\em greedy} schedule, if a transaction aborts then it immediately attempts to execute again until it commits. Each transaction $T_{ij}$ has an execution time duration $\tau_{ij} > 0$. Here, for simplicity, we assume that $\tau_{ij} = 1$, i.e., each transaction needs one time unit to execute. We also assume that the execution of the transactions starts at time 0 and that the execution time advances synchronously for all threads, step by step. We also assume that all transactions inside the execution window are correct, i.e., there are no faulty transactions.
Our results can be extended by relaxing these assumptions. The {\em makespan} of a schedule for a set of transactions $\Gamma$ is defined as the duration from the start of the schedule, i.e., the time when some transaction $T_{ij}\in \Gamma$ is available for scheduling, until all transactions in $\Gamma$ have committed. The makespan of the transaction scheduling algorithm for the sequences of transactions can be compared to the makespan of an optimal off-line scheduling algorithm, which is denoted by OPT. We evaluate the efficiency of our new contention management algorithms by comparing their makespan with the makespan of the optimal off-line scheduler. \begin{definition}[Competitive Ratio] The competitive ratio of the combination $(A, \Gamma)$ for a contention management algorithm $A$ under a set of jobs $\Gamma$ is defined as $$CR(A, \Gamma) = \frac{makespan(A, \Gamma)}{makespan(\textsc{OPT}, \Gamma)}.$$ \end{definition} \paragraph{Conflict Graph:} For a set of transactions $V \subseteq \Gamma$, we use the notion of the conflict graph $G=(V,E)$. The neighbors of a transaction $T$ in the conflict graph are denoted by $N_{T}$ and represent all transactions that have a conflict with $T$ in $G$. The degree $d_{T}$ of $T$ in the graph corresponds to the number of its neighbors in the conflict graph, i.e., $d_{T}=|N_{T}|$. Note that $d_{T} \leq |V|$. The contention $C$ of the window $W$ is the largest degree of the conflict graph $G' = (W,E')$, which consists of all the transactions in the window. \section{Offline Algorithm} \label{section:offlinealgorithm} We present Algorithm {\sf Offline-Greedy} (Algorithm \ref{algorithm:offline-greedy}), an offline greedy contention resolution algorithm that uses the conflict graph explicitly to resolve conflicts of transactions. First, we divide the time into frames of duration $\Phi = \Theta(\ln(MN))$.
Then, each thread $P_i$ is assigned an initial time period consisting of $R_i$ frames (with total duration $R_i \cdot \Phi$), where $R_i$ is chosen randomly, independently and uniformly, from the range $\left[0, \alpha -1\right]$, where $\alpha = C / \ln(MN)$. Each transaction has one of two priorities associated with it: $low$ or $high$. Transaction $T_{ij}$ is initially in low priority. Transaction $T_{ij}$ switches to high priority in the first time step of frame $F_{ij} = R_i + (j-1)$ and remains in high priority thereafter until it commits. The priorities are used to resolve conflicts. A high priority transaction may only be aborted by another high priority transaction. A low priority transaction is always aborted if it conflicts with a high priority transaction. Let $G_{t}$ denote the conflict graph of transactions at time $t$, where each transaction corresponds to a node and two transactions are connected with an edge if they conflict in at least one shared resource. Note that the maximum degree of $G_{t}$ is bounded by $C$ for the transactions in window $W$. At each time step $t$ we select to commit a maximal independent set of transactions in $G_{t}$. We first select a maximal independent set $I_H$ of high priority transactions, then remove this set and its neighbors from $G_{t}$, and then select a maximal independent set $I_L$ of low priority transactions from the remaining conflict graph. The transactions that commit are $I_H \cup I_L$. The intuition behind the algorithm is as follows: Consider a thread $i$ and its first transaction in the window, $T_{i1}$. According to the algorithm, $T_{i1}$ becomes high priority in the beginning of frame $F_{i1}$. Because $R_i$ is chosen at random among $\alpha = C/\ln(MN)$ positions, it is expected that $T_{i1}$ will conflict with at most $O(\ln(MN))$ transactions which become simultaneously high priority in the same time frame (in $F_{i1}$).
Since the duration of a time frame is $\Phi = \Theta(\ln(MN))$, transaction $T_{i1}$ and all its high priority conflicting transactions will be able to commit by the end of time frame $F_{i1}$, using the conflict resolution graph. The initial randomization period of $R_i$ frames will have the same effect on the remaining transactions of thread $i$, which will also commit within their chosen high priority frames. \begin{algorithm}[t] {\small \KwIn{A $M \times N$ window $W$ of transactions with $M$ threads each with $N$ transactions, where $C$ is the maximum number of transactions that any transaction can conflict with within the window\;} \KwOut{A greedy execution schedule for the window of transactions $W$\;} \BlankLine Divide time into time frames of duration $\Phi = 1 + (e^2 + 2) \ln(MN)$\; Each thread $P_i$ chooses a random number $R_i \in [0, \alpha -1]$ for $\alpha = C / \ln(MN)$\; \ForEach{time step $t = 0,1,2,\ldots$}{ \textbf{Phase 1: Priority Assignment}\; \ForEach{transaction $T_{ij}$}{ $F_{ij} \leftarrow R_i + (j-1)$\; \eIf{$t < F_{ij} \cdot \Phi$} { $Priority(T_{ij}) \gets Low$\; }{ $Priority(T_{ij}) \gets High$\; }} \textbf{Phase 2: Conflict Resolution}\; \Begin{ Let $G_{t}$ be the conflict graph at time $t$\; Compute $G_{t}^H$ and $G_{t}^L$, the subgraphs of $G_{t}$ induced by high and low priority nodes, respectively\; Compute $I_H \gets I(G_{t}^H)$, a maximal independent set of nodes in graph $G_{t}^H$\; $Q \gets $ low priority nodes adjacent to nodes in $I_H$\; Compute $I_L = I(G_{t}^L - Q)$, a maximal independent set of nodes in graph $G_{t}^L$ after removing the nodes in $Q$\; Commit $I_H \cup I_L$\; } } \caption{{\sf Offline-Greedy}} \label{algorithm:offline-greedy} } \end{algorithm} \subsection{Analysis of Offline Algorithm} We study two classic efficiency measures for the analysis of our contention management algorithm: (a) the makespan, which gives the total time to complete all the $MN$ transactions in the window; and (b) the response time of the
system, which gives how much time a transaction takes to commit. According to the algorithm, when a transaction $T_{ij}$ enters the system, it remains in low priority until frame $F_{ij}$ starts. As soon as $F_{ij}$ starts, it enters its respective frame and begins executing in high priority. Let $A$ denote the set of transactions conflicting with $T_{ij}$. Let $A' \subseteq A$ denote the subset of conflicting transactions of $T_{ij}$ which become high priority during frame $F_{ij}$ (simultaneously with $T_{ij}$). \begin{lemma} \label{lemma:commit-frame} If $|A'| \leq \Phi - 1$ then transaction $T_{ij}$ will commit in frame $F_{ij}$. \end{lemma} \begin{proof} Due to the use of the high priority independent sets in the conflict graph $G_t$, if at a time $t$ during frame $F_{ij}$ transaction $T_{ij}$ does not commit, then some conflicting transaction in $A'$ must commit. Since there are at most $\Phi - 1$ high priority conflicting transactions, and the length of frame $F_{ij}$ is at most $\Phi$, $T_{ij}$ will commit by the end of frame $F_{ij}$. \end{proof} We show next that it is unlikely that $|A'| > \Phi - 1$. We will use the following version of the Chernoff bound: \begin{lemma}[Chernoff bound 1] \label{lemma:chernoff1} Let $X_1, X_2, \ldots, X_n$ be independent Poisson trials such that, for $1 \leq i \leq n$, ${\bf Pr}(X_i = 1) = pr_i$, where $0 < pr_i < 1$. Then, for $X = \sum_{i=1}^{n} X_i$, $\mu = {\bf E}[X] = \sum_{i=1}^{n} pr_i$, and any $\delta > e^2$, ${\bf Pr}(X > \delta \mu) < e^{-\delta \mu}.$ \end{lemma} \begin{lemma} \label{lemma:one-transaction} $|A'| > \Phi - 1$ with probability at most $(1/MN)^2$. \end{lemma} \begin{proof} Let $A_k \subseteq A$, where $1 \leq k \leq M$, denote the set of transactions of thread $P_k$ that conflict with transaction $T_{ij}$.
We partition the threads $P_1, \ldots, P_M$ into $3$ classes $Q_0$, $Q_1$, and $Q_2$, such that: \begin{itemize} \item $Q_0$ contains every thread $P_k$ for which either $|A_k| = 0$, or $|A_k| > 0$ but the positions of the transactions in $A_k$ are such that it is impossible for them to overlap with $F_{ij}$ for any random intervals $R_i$ and $R_k$. \item $Q_1$ contains every thread $P_k$ with $0 < |A_k| < \alpha$, and at least one of the transactions in $A_k$ is positioned so that it is possible to overlap with frame $F_{ij}$ for some choices of the random intervals $R_i$ and $R_k$. \item $Q_2$ contains every thread $P_k$ with $\alpha \leq |A_k|$. Note that $|Q_2| \leq C/\alpha = \ln(MN)$. \end{itemize} Let $Y_k$ be a random binary variable, such that $Y_k = 1$ if in thread $P_k$ any of the transactions in $A_k$ becomes high priority in $F_{ij}$ (the same frame as $T_{ij}$), and $Y_k = 0$ otherwise. Let $Y = \sum_{k=1}^{M} Y_k$. Note that $|A'| = Y$. Denote $pr_k = {\bf Pr}(Y_k = 1)$. We can write $Y = Z_0 + Z_1 + Z_2$, where $Z_\ell = \sum_{P_k \in Q_\ell} Y_k$, for $0 \leq \ell \leq 2$. Clearly, $Z_0 = 0$ and $Z_2 \leq |Q_2| \leq \ln(MN)$. Recall that for each thread $P_k$ there is a random initial interval of $R_k$ frames, where $R_k$ is chosen uniformly at random in $[0,\alpha-1]$. Therefore, for each $P_k \in Q_1$, $0 < pr_k \leq |A_k| / \alpha < 1$, since there are $|A_k| < \alpha$ conflicting transactions in $A_k$ and there are at least $\alpha$ random choices for the relative position of transaction $T_{ij}$.
Consequently, $$\mu = {\bf E}[Z_1] = \sum_{P_k \in Q_1} pr_k \leq \sum_{P_k \in Q_1} \frac {|A_k|} {\alpha} = \frac{1} {\alpha} \cdot \sum_{P_k \in Q_1} |A_k| \leq \frac C \alpha \leq {\ln(MN)}.$$ By applying the Chernoff bound of Lemma \ref{lemma:chernoff1} we obtain that $${\bf Pr}(Z_1 > (e^2+1) \mu) < e^{-(e^2+1) \mu} < e^{-2 \ln(MN)} = (MN)^{-2}.$$ Since $Y = Z_0 + Z_1 + Z_2$, and $Z_2 \leq \ln(MN)$, we obtain ${\bf Pr}(|A'| = Y > (e^2 + 2) \ln(MN) = \Phi - 1) < (MN)^{-2}$, as needed. \end{proof} \begin{theorem}[makespan of {\sf Offline-Greedy}] \label{theorem:offline-makespan} Algorithm {\sf Offline-Greedy} produces a schedule of length $O(C + N \log(MN))$ with probability at least $1-\frac{1}{MN}$. \end{theorem} \begin{proof} From Lemmas \ref{lemma:commit-frame} and \ref{lemma:one-transaction}, the frame length $\Phi$ does not suffice to commit transaction $T_{ij}$ within frame $F_{ij}$ (a bad event) with probability at most $(MN)^{-2}$. Considering all the $MN$ transactions in the window, a bad event occurs with probability at most $MN \cdot (MN)^{-2} = (MN)^{-1}$. Thus, with probability at least $1 - (MN)^{-1}$, all transactions commit within the frames in which they become high priority. The total time used by any thread is bounded by $(\alpha + N)\cdot \Phi = O(C + N \log(MN))$. \end{proof} Since $N$ is a lower bound for the makespan, Theorem \ref{theorem:offline-makespan} implies the following competitive ratio for the $M \times N$ window $W$: \begin{corollary}[competitive ratio of {\sf Offline-Greedy}] When $ C \leq N \cdot \ln(MN)$, $CR(\mbox{{\sf Offline-Greedy}}, W) = O(\log(MN))$, with high probability.
\end{corollary} The following corollary follows immediately from Lemmas \ref{lemma:commit-frame} and \ref{lemma:one-transaction}: \begin{corollary}[response time of {\sf Offline-Greedy}] \label{theorem:offline-response} The time that a transaction $T_{ij}$ needs to commit from the moment it starts is $O(C + j \cdot\log(MN))$ with probability at least $1-\frac{1}{(MN)^2}$. \end{corollary} \section{Online Algorithm} \label{section:onlinealgorithm} We present Algorithm {\sf Online-Greedy} (Algorithm \ref{algorithm:online-greedy}), which is online in the sense that it does not depend on knowing the conflict graph to resolve conflicts. This algorithm is similar to Algorithm \ref{algorithm:offline-greedy}, with the difference that in the conflict resolution phase we use as a subroutine a variation of Algorithm {\sf RandomizedRounds} proposed by Schneider and Wattenhofer \cite{Schneider09}. The makespan of the online algorithm is slightly worse than that of the offline algorithm, since the duration of each frame is now $\Phi' = O(\log^2(MN))$. There are two different priorities associated with each transaction under this algorithm. The pair of priorities for a transaction is given as a vector $\langle \pi^{(2)}, \pi^{(1)} \rangle$, where $\pi^{(2)}$ represents the Boolean priority value $low$ or $high$ (with respective values 1 and 0) as described in Algorithm \ref{algorithm:offline-greedy}, and $\pi^{(1)} \in [1, M]$ represents the random priorities used in Algorithm {\sf RandomizedRounds}. The conflicts are resolved in lexicographic order based on the priority vectors, so that vectors with lower lexicographic order have higher priority. When a transaction $T$ enters the system, it starts to execute immediately in low priority ($\pi^{(2)} = 1$) until the respective randomly chosen time frame $F_{ij}$ starts, at which point it switches to high priority ($\pi^{(2)} = 0$). Once in high priority, the field $\pi^{(1)}$ is used to resolve conflicts with other high priority transactions.
A transaction chooses a discrete number $\pi^{(1)}$ uniformly at random in the interval $[1, M]$ at the start of frame $F_{ij}$, and after every abort. In case of a conflict of $T$ with another high priority transaction $K$ that has a larger random number $\pi^{(1)}$, $T$ proceeds and $K$ aborts. The procedure $Abort(T,K)$ aborts transaction $K$, and $K$ must hold off on restarting (i.e., hold off attempting to commit) until $T$ has committed or aborted. \begin{algorithm}[t] {\small \KwIn{A $M \times N$ window $W$ of transactions with $M$ threads each with $N$ transactions, where $C$ is the maximum number of transactions that any transaction can conflict with within the window\;} \KwOut{A greedy execution schedule for the window of transactions $W$\;} \BlankLine Divide time into time frames of duration $\Phi' = 16 e \Phi \ln (MN)$\; Associate a pair of priorities $\langle \pi_{ij}^{(2)}, \pi_{ij}^{(1)} \rangle$ to each transaction $T_{ij}$\; Each thread $P_i$ chooses a random number $R_i \in [0, \alpha -1]$ for $\alpha = C / \ln(MN)$\; \ForEach{time step $t = 0,1,2,\ldots$}{ \textbf{Phase 1: Priority Assignment}\; \ForEach{transaction $T_{ij}$}{ $F_{ij} \leftarrow R_i + (j-1)$\; \eIf{$t < F_{ij} \cdot \Phi'$} { Priority $\pi_{ij}^{(2)} \gets 1 ~(Low)$\; }{ Priority $\pi_{ij}^{(2)} \gets 0 ~(High)$\; } \textbf{Phase 2: Conflict Resolution}\; \Begin{ \If{$\pi_{ij}^{(2)} == 0$ ($T_{ij}$ has high priority)}{ \textbf{On (re)start} of transaction $T_{ij}$\; \Begin{ $\pi_{ij}^{(1)} \leftarrow $ random integer in $[1,M]$\; } \BlankLine \textbf{On conflict} of transaction $T_{ij}$ with high priority transaction $T_{kl}$\; \Begin{ \eIf{$\pi_{ij}^{(1)} < \pi_{kl}^{(1)}$} { $Abort(T_{ij}, T_{kl})$\; }{ $Abort(T_{kl}, T_{ij})$\; } } } } } \caption{{\sf Online-Greedy}} \label{algorithm:online-greedy} } \end{algorithm} \subsection{Analysis of Online Algorithm} In the analysis given below, we study the makespan and the response time of Algorithm {\sf Online-Greedy}.
The analysis is based on the following adaptation of the response time analysis of a one-shot transaction problem with Algorithm {\sf RandomizedRounds} \cite{Schneider09}. It uses the following Chernoff bound: \begin{lemma}[Chernoff bound 2] \label{lemma:chernoff2} Let $X_1, X_2, \ldots, X_n$ be independent Poisson trials such that, for $1 \leq i \leq n$, ${\bf Pr}(X_i = 1) = pr_i$, where $0 < pr_i < 1$. Then, for $X = \sum_{i=1}^{n} X_i$, $\mu = {\bf E}[X] = \sum_{i=1}^{n} pr_i$, and any $0 < \delta \leq 1$, ${\bf Pr}(X < (1-\delta) \mu) < e^{-\delta^2 \mu / 2}.$ \end{lemma} \begin{lemma} \label{theorem:roger} \textbf{(Adaptation from Schneider and Wattenhofer \cite{Schneider09})} Given a one-shot transaction scheduling problem with $M$ transactions, the time span a transaction $T$ needs from its first start until commit is at most $16 e (d_{T}+1) \log n$ with probability at least $1-\frac{1}{n^2}$, where $d_{T}$ is the number of transactions conflicting with $T$. \end{lemma} \begin{proof} Consider the conflict graph $G$. Let $N_T$ denote the set of conflicting transactions for $T$ (these are the neighbors of $T$ in $G$). We have $d_{T} = |N_T| \leq M-1$. Let $y_T$ denote the random priority number choice of $T$ in the range $[1,M]$. The probability that no transaction $K \in N_{T}$ has the same random number as $T$ is: $${\bf Pr}(\nexists K \in N_{T} : y_T = y_K)= \left (1-\frac{1}{M} \right )^{d_{T}} \geq \left ( 1-\frac{1}{M} \right )^{M-1} \geq \frac{1}{e}.$$ The probability that $y_{T}$ is at least as small as $y_K$ for every transaction $K \in N_{T}$ is $\frac{1}{d_{T}+1}$. Thus, the chance that $y_{T}$ is smallest and different among all its neighbors in $N_T$ is at least $\frac{1}{e (d_{T}+1)}$.
If we conduct $16 e (d_{T}+1) \ln n$ trials, each having success probability $\frac{1}{e (d_{T}+1)}$, then the probability that the number of successes $Z$ is less than $8 \ln n$ becomes: ${\bf Pr}(Z < 8 \cdot \ln n) <e^{-2\cdot \ln n} =\frac{1}{n^2}$, using the Chernoff bound of Lemma \ref{lemma:chernoff2}. \end{proof} \begin{theorem}[makespan of {\sf Online-Greedy}] \label{theorem:online-makespan} Algorithm {\sf Online-Greedy} produces a schedule of length $O(C \log (MN) + N \log^2(MN))$ with probability at least $1-\frac{2}{MN}$. \end{theorem} \begin{proof} According to the algorithm, a transaction $T_{ij}$ becomes high priority ($\pi_{ij}^{(2)} = 0$) in frame $F_{ij}$. When this occurs the transaction starts to compete with the other transactions which became high priority during the same frame. Lemma \ref{lemma:commit-frame} from the analysis of Algorithm \ref{algorithm:offline-greedy} implies that the effective degree of $T_{ij}$ with respect to high priority transactions satisfies $d_T > \Phi - 1$ with probability at most $(MN)^{-2}$ (we call this bad event $A$). From Lemma \ref{theorem:roger}, if $d_T \leq \Phi - 1$, the transaction does not commit within $16 e (d_{T}+1) \log n \leq \Phi'$ time slots with probability at most $(MN)^{-2}$ (we call this bad event $B$). Therefore, the bad event that $T_{ij}$ does not commit in $F_{ij}$ occurs when either bad event $A$ or bad event $B$ occurs, which happens with probability at most $(MN)^{-2} + (MN)^{-2} = 2(MN)^{-2}$. Considering now all the $MN$ transactions, the probability of failure is at most $2/(MN)$. Thus, with probability at least $1- 2/(MN)$, every transaction $T_{ij}$ commits during the frame $F_{ij}$. The total duration of the schedule is bounded by $(\alpha + N)\Phi' = O(C \log (MN)+ N \log^2(MN)).$ \end{proof} \begin{corollary}[competitive ratio of {\sf Online-Greedy}] When $ C \leq N \cdot \ln(MN)$, $CR(\mbox{{\sf Online-Greedy}}, W) = O(\log^2(NM))$, with high probability.
\end{corollary} \begin{corollary}[response time of {\sf Online-Greedy}] \label{theorem:online-timespan} The time that a transaction $T_{ij}$ needs to commit from the moment it starts is $O(C \log (MN) + j \cdot \log^2 (MN))$ with probability at least $1-\frac{2}{(MN)^2}$. \end{corollary} \section{Conclusions} \label{section:conclusion} In this paper, we consider greedy contention managers for transactional memory for $M \times N$ windows of transactions with $M$ threads and $N$ transactions per thread, and present three new algorithms for contention management in transactional memory from a worst-case perspective. These algorithms are efficient and adaptive, handle windows of transactions, and improve on the worst-case performance of previous results. These are the first such results for the execution of sequences of transactions instead of the one-shot problem studied in the literature. Our algorithms present new trade-offs in the analysis of greedy contention managers for transactional memory. We also show that the optimal window decomposition can be determined using dynamic programming for any arbitrary window. Several issues are left for future work. One may consider arbitrary time durations for the transactions to execute instead of the $O(1)$ time we considered in our analysis. We believe that our results scale by a factor proportional to the longest transaction duration. Another direction is to explore in depth alternative algorithms where the randomization does not occur at the beginning of each window but rather during the execution of the algorithm, by inserting random periods of low priority between the transactions in each thread. One may also consider the dynamic expansion and contraction of the execution window to preserve the congestion measure $C$. Thus, the execution window would not be a part of the algorithm but only a part of the analysis.
This will result in more practical algorithms which at the same time achieve good performance guarantees. \small \bibliographystyle{plain} \section{Related Work} \label{section:related} Transactional Memory (TM) was proposed in the early nineties as an alternative implementation of mutual exclusion that avoids many of the drawbacks of locks (e.g., deadlock, reliance on the programmer to associate shared data with locks, priority inversion, and failures of threads while holding locks) \cite{Herlihy93transactionalmemory}. A few years later the term Software Transactional Memory (STM) was suggested by Shavit and Touitou \cite{Shavit95}, and a so-called Dynamic STM (DSTM) for dynamic data structures, which uses a contention manager as an independent module, was proposed \cite{Her03}. DSTM is a practical obstruction-free STM system that seeks advice from the contention manager module to either wait or abort a transaction at the time of conflict. Several contention managers have been proposed in the literature. Most of them have been assessed by specific benchmarks only and not analytically. A comparison of contention managers based on different benchmarks can be found in \cite{Sch05, Sch04, Ramadan08, scherer05}. These studies found that the choice of the contention manager varies with the complexity of the considered benchmark. A more detailed analysis of the performance of different contention managers in complex benchmarks has recently been carried out by Ansari {\it et al.}~\cite{Ansari09}. From all the aforementioned references, it turns out that the coordination cost and the overhead involved in contention management are very high. The first formal analysis of the performance of a contention manager was given by Guerraoui {\it et al.}~\cite{Gue05a}, who presented the {\sf Greedy} contention manager and proved that it achieves an $O(s^{2})$ competitive ratio in comparison to the optimal off-line schedulers for $n$ concurrent transactions that share $s$ objects.
Later, Guerraoui {\it et al.}~\cite{Gue05b} studied the impact of transaction failures on contention management and proved an $O(ks^{2})$ competitive ratio when some running transaction may abort $k$ times and then eventually commits. Attiya {\it et al.}~\cite{Att06} improved the result of \cite{Gue05a} to $O(s)$, and the result of \cite{Gue05b} to $O(ks)$, which are significant improvements over the competitive ratio of {\sf Greedy}. They also proved a matching lower bound of $\Omega(s)$ on the competitive ratio for deterministic work-conserving algorithms which schedule as many transactions as possible. The complexity measures provided by the aforementioned studies are not satisfying as they are based only on the number of shared resources. One can notice that the total number of shared resources is not really related to the number of conflicting transactions actually encountered by a transaction. Recently, Schneider and Wattenhofer \cite{Schneider09} analyzed some of the issues related to the number of potential conflicts, and presented a deterministic algorithm {\sf CommitBounds} with competitive ratio $\Theta(s)$ for $n$ concurrent transactions using $s$ shared resources, and a randomized algorithm {\sf RandomizedRounds} with makespan $O(C \log n)$, for the one-shot problem of a set of $M$ transactions in separate threads with $C$ conflicts (assuming unit delays for transactions), with high probability (proportional to $1-n^{-1}$). This means that {\sf RandomizedRounds} is only a factor of $\log n$ from optimal, with high probability, for the case where $C < M$.
However, if other transactions come into play that are able to reduce the parallelism by a factor of $k$, the approximation of {\sf RandomizedRounds} also worsens by a factor of $k$. While previous studies showed that the contention managers {\sf Polka} \cite{Sch05} and {\sf SizeMatters} \cite{Ramadan08} exhibit good overall performance for a variety of benchmarks, this work showed that they may perform exponentially worse than {\sf RandomizedRounds} from a worst-case perspective. \section{Optimal Window Decomposition} \label{section:decomposition} In this section we are interested in partitioning an $M \times N$ window $W$ into a decomposition of sub-windows such that, if we schedule the transactions of each sub-window separately using one of our greedy contention managers, then the sum of the makespans of the sub-windows is smaller than the makespan of scheduling all the transactions of $W$ as a single window. In particular, we seek a decomposition that minimizes the maximum {\em density} of the sub-windows, where the density expresses how much larger the contention is relative to the number of transactions per thread. For a window $W$ with congestion $C$ we define the density as $r = C/N$. Consider some decomposition $D$ of window $W$ into different sub-windows $D = \{W_1, \cdots, W_k\}$, where sub-window $W_i$ has respective size $M \times X_i$. Let $C_i$ denote the contention of window $W_i$. The density of $W_i$ is $r_i = C_i / X_i$. Let $r_D = \max_{W_i \in D} r_i$. The {\em optimal window decomposition} $D^*$ has density $r_{D^*} = \min_{D \in {\cal D}} r_D$, where ${\cal D}$ denotes the set of all possible decompositions of $W$. Note that different decompositions in ${\cal D}$ may have different numbers of windows. Two example members of ${\cal D}$ are the decomposition that consists only of $W$ itself, and the one that consists of all single-column windows of $W$. The optimal window decomposition $D^*$ can provide an asymptotically better makespan for $W$ if $r_{D^*} = o(r)$.
Using one of our greedy algorithms, the makespan of each sub-window $W_i \in D^*$ is ${\widetilde O}((1+r_{D^*})X_i)$ (where the notation ${\widetilde O}$ hides polylog factors). Thus, using $D^*$, the makespan for the whole window $W$ becomes ${\widetilde O} ((1+r_{D^*})\sum_{W_i \in D^*} X_i) = {\widetilde O} ((1+r_{D^*}) N)$. If we apply one of our greedy algorithms to the whole window $W$ directly, then the makespan for $W$ is ${\widetilde O} ((1+r) N)$, which may be asymptotically worse than using the optimal decomposition $D^*$ when $r_{D^*} = o(r)$. \begin{figure}[ht] \centering \includegraphics[height=2.1in,width=4.5in]{optimalwindow} \caption{Optimal window decomposition} \label{figure:optimalwindow} \end{figure} We use a dynamic programming approach to compute the optimal decomposition $D^*$ of $W$. The idea is to compute the optimal decomposition of all prefix windows of $W$. As shown in Figure \ref{figure:optimalwindow}, our goal is to determine the optimal window decomposition of the prefix window up to column $k$, provided that the optimal window decompositions up to column $k-1$ have already been computed. In this case, there are $k$ possible combinations to examine for finding the optimal window size which minimizes the maximum of all the contention densities. The details are in the proof of the following theorem. \begin{theorem}[optimal window decomposition] \label{lemma: window-decomposition} The optimal window decomposition $D^*$ for an arbitrary $M \times N$ window $W$ can be computed in polynomial time. \end{theorem} \begin{proof} From the problem description, we can readily see the overlapping-subproblems property in the optimal window decomposition problem. Let $r_{j,k}$ denote the density of the sub-window $W_{j,k}$, which starts at column $j$ and ends at column $k$, where $j \leq k$. Let $r^*_{j,k}$ denote the maximum density in the optimal decomposition of the sub-window $W_{j,k}$.
The optimal window decomposition can be determined from the recursive formula: $$r^*_{1,k}=\displaystyle{\mathop{\mbox{min}}_{1\leq j\leq k-1}}\{\max(r^*_{1,j}, r_{j+1,k})\}.$$ To find the optimal window decomposition for the $k$-th prefix window $W_{1,k}$, we examine all combinations of an optimal decomposition of a shorter prefix $W_{1,j}$ followed by the suffix sub-window $W_{j+1,k}$. Using the formula we can compute $r^*_{1,k}$ for each prefix $W_{1,k}$. Our algorithm needs $O(k)$ time to compute the optimal window size for the $k$-th prefix provided that the optimal window computations up to the $(k-1)$-th prefix are known. Computing these values for all prefixes from $1$ to $N$ therefore takes $O(N^2)$ steps in total. The final density is $r_{D^*} = r^*_{1,N}$. \end{proof}
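The dynamic program above can be sketched in a few lines of Python. The {\tt density} oracle, assumed here to return $C_{j,k}/(k-j+1)$ for the sub-window spanning columns $j$ through $k$ (1-indexed), is an input to the sketch; how the contentions are measured is outside its scope:

```python
def optimal_decomposition(N, density):
    """Dynamic program over prefixes:
    best[k] = min over split points j in [0, k-1] of
              max(best[j], density(j + 1, k)),  with best[0] = 0.
    Allowing j = 0 covers the case where the whole prefix is one sub-window.
    Returns (r_{D*}, list of (start, end) column ranges, 1-indexed)."""
    best = [0.0] * (N + 1)   # best[k] = optimal max density of prefix W_{1,k}
    cut = [0] * (N + 1)      # cut[k] = split point realizing best[k]
    for k in range(1, N + 1):
        best[k], cut[k] = min(
            (max(best[j], density(j + 1, k)), j) for j in range(k)
        )
    # Backtrack the sub-window boundaries of the optimal decomposition.
    windows, k = [], N
    while k > 0:
        windows.append((cut[k] + 1, k))
        k = cut[k]
    return best[N], windows[::-1]
```

Each of the $N$ prefixes scans $O(N)$ split points, giving the $O(N^2)$ total of the proof above.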
\section{Motivation} \label{sec:motiv} A recent attractive area of research has been the detection of statistically relevant sequences and the mining of interesting patterns within a given string~\cite{bioinfo,anomaly}. Given an input string composed of symbols from a defined alphabet set with a probability distribution defining the chance of occurrence of the symbols, and thereby defining its expected composition, we would like to find the portions of the string which deviate from the expected behavior and can thus be potent sources of study for hidden patterns and information. Automated monitoring systems, such as a cluster of sensors sensing the temperature of the surrounding environment for fire alerts, or a connection server sniffing the network for possible intrusion detection, provide a few of the applications where such pattern detection is essential. Other applications involve text analysis of e-mails and blogs to predict terrorist activities or judge prevalent public sentiments, studying trends of the stock market, and identifying sudden changes in the mutation characteristics of the protein sequence of an organism. Similarly, information extracted from a series of Internet websites visited, the advertisements clicked on them, or from the nature of transactions on a database, can capture the interests of the end user and prospective clients, and also the periods of heavy traffic in the system. An interesting field of application can be the identification of good and bad career patches of a sports icon. For example, given the runs scored by Sachin Tendulkar in each innings of his one-day international cricket career, we may be interested in finding his in-form and off-form patches. Quantifying a substring or an observation as unexpected under a given circumstance relies on the probabilistic analysis used to model the deviation of the behavior from its expected nature.
An outcome that deviates from the expected thus becomes interesting and may reveal information regarding the source and nature of the variance, and we are interested in detecting such pockets of hidden data within substrings of an input string. A statistical model is used to determine the relationship of an experimental or observed outcome with the factors affecting the system, or to establish the occurrence as pure chance. An observation is said to be statistically significant if its presence cannot be attributed to randomness alone. For example, within a large DNA sequence, the recognition of highly variable patterns involves probability matching with large fluctuations, and hence predicting their locations requires self-consistent statistical procedures~\cite{mot}. The degree of uniqueness of a pattern can be captured by several measures including the p-value and z-score~\cite{dp,crit}. For evaluating the significance of a substring, it has been shown that the p-value provides a more precise conclusion than the z-score~\cite{bioinfo}. However, computing the p-value entails enumerating all the possible outcomes, which can be exponential in number, thus rendering it impractical. So, heuristics based on branch-and-bound techniques have been proposed~\cite{categ}. The \emph{log-likelihood ratio}, $G^2$~\cite{multi}, provides such a measure based on the extent of deviation of the substring from its expected nature. For multinomial models, the $\chi^2$ statistic approximates the importance of a string more closely than the $G^2$ statistic~\cite{multi,pear}. Existing systems for intrusion detection use multivariate process control techniques such as Hotelling's $T^2$ measure~\cite{hotel}, which is again computationally intensive. The chi-square measure, on the other hand, provides an easy way to closely approximate the p-value of a sequence~\cite{multi}.
To simplify computations, the $\chi^2$ measure, unlike Hotelling's method, does not consider relationships among multiple variables, but is as effective in identifying ``abnormal'' patterns~\cite{anomaly}. Thus, in this paper, we use Pearson's $\chi^2$ statistic as a measure of the p-value of a substring~\cite{multi,pear}. The $\chi^2$ distribution is characterized by the \emph{degrees of freedom}, which, in the case of a string, is the number of symbols in the alphabet set minus one. The larger the $\chi^2$ value of a string, the smaller is its p-value, and hence the larger is its deviation from the expected behavior. So, essentially our problem reduces to finding the substring that has the maximum $\chi^2$ value. We propose to extract such substrings efficiently.\\ \noindent \textbf{Related Work:}\\ Formally, given a string $S$ composed of symbols from the alphabet set $\Sigma$ with a given probability distribution $P$ modeling the chance of occurrence of each symbol, the problem is to identify and extract the top-$k$ substrings having the maximum chi-square value, i.e., the largest deviation within the framework of the p-value measure for the given probability distribution of the symbols. Na\"\i vely, we can compute the $\chi^2$ value of all the substrings present in $S$ and determine the top-$k$ substrings in $O(l^2)$ time for a string of length $l$ (see Algorithm~\ref{alg:naive}). The blocking algorithm and its heap variant proposed in~\cite{sumit} reduce the practical running time for finding such statistically important substrings, but suffer from a high worst-case running time. The number of blocks found by this strategy increases with the size of the alphabet set and also when the probabilities of the occurrence of the symbols are nearly similar. In such scenarios, the number of blocks formed can be almost equal to the length of the given string, thereby degenerating the algorithm to the na\"\i ve one.
The heap variant requires a large storage space for maintaining the separate \emph{max} and \emph{min} heap structures and also manipulates a large number of pointers. Further, the algorithm does not easily generalize beyond static input strings, and cannot handle top-$k$ queries. In time-series databases, categorizing a pattern as surprising based on its frequency of occurrence and mining it efficiently using suffix trees has been proposed in~\cite{keo}. However, the $\chi^2$ measure, as discussed earlier, seems to provide a better parameter for judging whether a pattern is indeed interesting. \begin{algorithm}[t] \caption{Na\"\i ve Algorithm} \label{alg:naive} \begin{algorithmic}[1] \REQUIRE String $S$ with the probability of occurrence of each symbol in the alphabet set. \ENSURE Top-k substrings having the maximum $\chi^2$ value. \STATE Extract all the substrings in $S$. \STATE Compute the $\chi^2$ value of all the substrings. \STATE Return the substrings having the top-k $\chi^2$ value. \end{algorithmic} \end{algorithm} In this paper, we propose two algorithms, \emph{All-Pair Refined Local Maxima Search} (ARLM) and \emph{Approximate Greedy Maximum Maxima Search} (AGMM), to efficiently search and identify interesting patterns within a string. We show that the running times of the algorithms are far better than those of the existing algorithms, with smaller space requirements. The procedures can also be easily extended to work in streaming environments. ARLM, an algorithm quadratic in the number of local maxima found in the input string, and AGMM, a linear time algorithm, both exploit the local maxima present in the string. We show that the approximation ratio of the reported results to the actual results is 0.96 or more. Empirical results show that the algorithms work efficiently. The outline of the paper is as follows: Section~\ref{sec:def} formulates the properties and behavior of strings under the $\chi^2$ measure.
Section~\ref{sec:algo} describes the two proposed algorithms along with their runtime complexity analysis. Section~\ref{sec:expt} shows the experimental results performed on real and synthetic data, before Section~\ref{sec:conc} concludes the paper. \section{Definition and Properties} \label{sec:def} Let $str = s_1s_2 \dots s_l$ be a given string of length $l$ composed of symbols $s_i$ taken from the alphabet set $\Sigma = \{\sigma_1,\sigma_2, \dots ,\sigma_m\}$, where $|\Sigma| = m$. To each symbol $\sigma_i \in \Sigma$ is associated a $p_{\sigma_i}$ (henceforth represented as $p_i$), denoting the probability of occurrence of that symbol, such that $\sum^m_{i=1}p_i = 1$. Let $\theta_{\sigma_i,str}$ (henceforth represented as $\theta_{i,str}$) denote the observed number of the symbol $\sigma_i$ in the string $str$, where $\sigma_i \in \Sigma$ and $str \in \Sigma^*$. The chi-square value of a string $str \in \Sigma^*$ of length $l$ is computed as \begin{align} \label{eq:chi} \chi^2_{str} &= \sum^m_{i=1} \frac {\left(p_il - \theta_{i,str}\right)^2} {p_il} \end{align} The chi-square measure thus calculates the deviation of the composition of the string from its expected nature by computing the sum of the normalized square of difference of the observed value of each symbol in the alphabet set from the expected value of occurrence. \begin{observation} Under string concatenation operation (.), for two arbitrary strings $a$ and $b$ drawn from the same alphabet set and probability distribution of the symbols (henceforth referred to as the \emph{same universe}), the $\chi^2$ measure of the concatenated string is commutative in the order of concatenation. \end{observation} \begin{proof} It is easy to observe that the lengths of $a.b$ and $b.a$ are the same. Further, the observed values of the different symbols and their probabilities of occurrence are the same in both the concatenated strings. 
Hence, the $\chi^2_{a.b}$ is equal to $\chi^2_{b.a}$ according to Eq.~\eqref{eq:chi}. \end{proof} \begin{lemma} The $\chi^2$ value of the concatenation of two strings drawn from the same universe is less than or equal to the sum of the $\chi^2$ values of the individual strings. \end{lemma} \begin{proof} Let $a$ and $b$ be two strings, of length $l_a$ and $l_b$ respectively. Let $a.b$ form the concatenated string having length $(l_a+l_b)$. Using Eq.~\eqref{eq:chi}, the sum of the chi-square values of the strings is % \begin{align} \label{eq:a&b} &\chi^2_a + \chi^2_b = \sum^m_{i=1}\left( \frac{\left(p_il_a - \theta_{i,a}\right)^2}{p_il_a} + \frac{\left(p_il_b - \theta_{i,b}\right)^2}{p_il_b}\right) \\ &\text{Now, } \label{eq:ab} \chi^2_{ab} = \sum^m_{i=1}\frac{\left(p_i\left(l_a+l_b\right)-\theta_{i,{ab}}\right)^2}{p_i \left(l_a+l_b\right)} \\ &\text{Using $\theta_{i,ab} = \theta_{i,a} + \theta_{i,b}$ and Eqs.~\eqref{eq:a&b} and~\eqref{eq:ab}, we have } \nonumber \\ &\chi^2_a + \chi^2_b - \chi^2_{ab} = \sum^m_{i=1}\left( \frac{\left(p_il_a - \theta_{i,a}\right)^2} {p_il_a} + \right. \nonumber \\ &\left. \qquad \qquad \frac{\left(p_il_b - \theta_{i,b}\right)^2}{p_il_b} - \frac{\left(p_i\left(l_a+l_b\right)-\theta_{i,{ab}}\right)^2}{p_i\left(l_a+l_b\right)}\right) \nonumber \\ &= \sum^m_{i=1}\left( \frac{\left(p_il_a - \theta_{i,a}\right)^2}{p_il_a} + \frac{\left(p_il_b - \theta_{i,b}\right)^2}{p_il_b} - \right. \nonumber \\ &\left. \qquad \qquad \qquad \quad \frac{\left(\left(p_il_a-\theta_{i,a}\right) + \left(p_il_b - \theta_{i,b}\right)\right)^2}{p_i\left(l_a+l_b\right)}\right) \nonumber \end{align} % \begin{align} &= \sum^m_{i=1}\left( \frac{\left(p_il_a - \theta_{i,a}\right)^2}{p_i} \left[\frac{1}{l_a} - \frac{1}{l_a+l_b}\right] + \right. \nonumber \\ &\left. \qquad \qquad \quad \frac{\left(p_il_b-\theta_{i,b}\right)^2}{p_i }\left[\frac{1}{l_b} - \frac{1}{l_a+l_b}\right] - \right. \nonumber \\ &\left. 
\qquad \qquad \qquad \qquad \quad 2\frac{\left(p_il_a - \theta_{i,a}\right)\left(p_il_b - \theta_{i,b}\right)}{p_i\left(l_a + l_b\right)} \right) \nonumber \\ &= \sum^m_{i=1}\left( \frac{\left(p_il_a - \theta_{i,a}\right)^2l_b}{p_il_a\left(l_a + l_b\right)} + \frac{\left(p_il_b - \theta_{i,b}\right)^2l_a}{p_il_b\left(l_a + l_b\right)} - \right. \nonumber \\ &\left. \qquad \qquad \qquad \qquad \quad 2\frac{\left(p_il_a - \theta_{i,a}\right)\left(p_il_b - \theta_{i,b}\right)}{p_i\left(l_a + l_b\right)} \right) \nonumber \\ \quad &= \frac{1}{l_al_b} \sum^m_{i=1}\left( \frac{\left(p_il_a - \theta_{i,a}\right)^2l^2_b}{p_i\left(l_a + l_b\right)} + \frac{\left(p_il_b - \theta_{i,b}\right)^2l^2_a}{p_i\left(l_a + l_b\right)} - \right. \nonumber \\ &\left. \qquad \qquad \qquad \quad 2\frac{l_al_b\left(p_il_a - \theta_{i,a}\right)\left(p_il_b -\theta_{i,b}\right)}{p_i\left(l_a + l_b\right)} \right) \nonumber \\ &= \frac{1}{l_al_b} \sum^m_{i=1}\left(\frac{\left(p_il_a - \theta_{i,a}\right)l_b}{\sqrt{p_i \left(l_a + l_b\right)}} - \frac{\left(p_il_b - \theta_{i,b}\right)l_a}{\sqrt{p_i\left(l_a + l_b\right)}} \right)^2 \nonumber \\ &\geq 0 \nonumber \end{align} Therefore, $\chi^2_a + \chi^2_b \geq \chi^2_{ab}$. \end{proof} \begin{lemma} \label{lem:single} The chi-square value of a string composed of only a single type of symbol increases with the length of the string. \end{lemma} \begin{proof} Let $str$ be a string of length $l$ composed only of the symbol $\sigma_j$, drawn from the alphabet set $\Sigma$. Here, $\theta_{i,str} = 0, \forall i \in \{1,2, \dots ,m\}, i\neq j$ and $\theta_{j,str} = l$, as $str$ consists only $\sigma_j$. 
Substituting the values in Eq.~\eqref{eq:chi}, we have \begin{align} \label{eq:len_l} \chi^2_{str} &= \frac{\left(p_j-1\right)^2l}{p_j} + \sum^m_{i=1,i\neq j}p_il \end{align} If the length of $str$ is increased by one, by including another $\sigma_j$, its chi-square value becomes \begin{align} &\chi^2_{str^\prime} = \frac{\left(p_j-1\right)^2\left(l+1\right)}{p_j} + \sum^m_{i=1,i\neq j}p_i\left(l+1\right) \nonumber \\ \label{eq:len_l1} &= \frac{\left(p_j-1\right)^2l}{p_j} + \frac{\left(p_j-1\right)^2}{p_j} + \sum^m_{i=1,i\neq j}p_il + \sum^m_{i=1,i\neq j}p_i \end{align} Comparing Eq.~\eqref{eq:len_l1} with Eq.~\eqref{eq:len_l}, we observe that the chi-square value increases, since $p_i \geq 0, \forall i \in \{1, 2, \dots ,m\}$. \end{proof} With this setting, we now define the term \emph{local maxima} and describe the procedure for finding such a local maxima within a given string. \begin{definition}[Local maxima] The \emph{local maxima} is a substring, such that while traversing through it, the inclusion of the next symbol does not decrease the $\chi^2$ value of the resultant sequence. \end{definition} Let $s_1s_2 \ldots s_n$ be a local maxima of length $n$, where $s_i \in \Sigma, \forall i$. Then the following holds \begin{align} \chi^2_{s_1s_2} &\geq \chi^2_{s_1} \text{ , } \chi^2_{s_1s_2s_3} \geq \chi^2_{s_1s_2} \text{ , } \dots \nonumber \\ \chi^2_{s_1s_2 \ldots s_{n}} &\geq \chi^2_{s_1s_2 \ldots s_{n-1}} \text{ and, } \chi^2_{s_1s_2 \ldots s_{n+1}} \leq \chi^2_{s_1s_2 \ldots s_n} \nonumber \end{align} The process of finding the local maxima involves a single scan of the entire string. We consider the first local maxima to start at the beginning of the given string. We keep appending the next symbol to the current substring until there is a decrease in the chi-square value of the new substring. The present substring is then considered to be a local maxima ending at the previous position and the last symbol appended signifies the start of the next local maxima. 
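The single-scan procedure just described can be sketched in Python as follows. This is a minimal illustration with our own function names; {\tt chi2} implements Eq.~\eqref{eq:chi} directly, and symbol blocks are not collapsed here:

```python
def chi2(counts, probs, l):
    # Pearson chi-square of a string of length l with symbol counts `counts`.
    return sum((probs[s] * l - counts.get(s, 0)) ** 2 / (probs[s] * l)
               for s in probs)

def local_maxima(s, probs):
    """One left-to-right scan: extend the current substring while the
    chi-square value does not decrease; a decrease closes the current
    local maxima and starts the next one at the offending symbol."""
    maxima, start, counts, prev = [], 0, {}, None
    for i, c in enumerate(s):
        trial = dict(counts)
        trial[c] = trial.get(c, 0) + 1
        score = chi2(trial, probs, i - start + 1)
        if prev is not None and score < prev:
            maxima.append((start, s[start:i]))   # close the current maxima
            start, counts = i, {c: 1}
            prev = chi2(counts, probs, 1)        # restart at symbol i
        else:
            counts, prev = trial, score
    maxima.append((start, s[start:]))            # close the final maxima
    return maxima
```

On the worked example discussed next ($str = aaaabbba$ with $p_a = 0.2$, $p_b = 0.8$), the scan returns the maxima \emph{aaaa}, \emph{bbb} and \emph{a} starting at (one-indexed) positions 1, 5 and 8.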
Thus by traversing through the entire string once, we find all the local maxima present, which takes $O(l)$ time for a string of length $l$. As an example, consider the string $str=$ \emph{aaaabbba}, having $\Sigma = \{a,b\}$ with the probability of occurrence of symbol \emph{a} as $0.2$, and that of \emph{b} as $0.8$. Starting from the beginning, we compute the $\chi^2$ value of \emph{a} to be $4$. This is considered to be the starting of a local maxima. Appending the next symbol, the chi-square value of \emph{aa} increases to $8$. Since the score increases, the current local maxima is updated to \emph{aa}. We keep appending the next symbol into the current maxima. We find that $\chi^2_{aaaa} = 16$ and $\chi^2_{aaaab} = 11.25$. As there is a decrease in the chi-square value of the substring after insertion of \emph{b}, the current local maxima becomes \emph{aaaa} and the next local maxima is said to begin at \emph{b}. Repeating this procedure for the entire string $str$, the local maxima found are \emph{aaaa}, \emph{bbb} and \emph{a}. \begin{lemma} The expected number of local maxima present in a string of length $l$ is $O(l)$. \end{lemma} \begin{proof} From Lemma~\ref{lem:single}, we can observe that, in a local maxima if the two adjacent symbols are the same, then the chi-square value cannot decrease. Thus the current local maxima may end only when a pair of adjacent symbols are different. We would like to find the expected number of positions in the string where such a boundary may exist. Let us define an indicator variable $x_i$, where $x_i=1$ if the $i^{th}$ and the $(i+1)^{th}$ symbols in the string are dissimilar, and $x_i=0$ otherwise. Let $X=\sum^{l-1}_{i=1}x_i$, where $E[X]$ gives the expected number of local maxima boundaries for a string of length $l$. $P(x_i=1)$ denotes the probability of the event $x_i=1$. 
Therefore, % \begin{align} &P(x_i=1) = \sum_{\forall j,k,~j\neq k}p_jp_k = 2\times \sum_{\forall j,k,~j<k}p_jp_k \nonumber \\ &\qquad \qquad \qquad \qquad \quad \qquad \text{[where $j,k \in \{1,2,\dots,m\}$]} \nonumber \\ &E[X] = E[\sum^{l-1}_{i=1}x_i]= \sum^{l-1}_{i=1}E[x_i] ~~\text{[Linearity of expectation]} \nonumber \\ &= \sum^{l-1}_{i=1}P(x_i=1) = 2 \times \sum^{l-1}_{i=1} \sum_{\forall j,k,~j<k}p_jp_k \nonumber \\ \label{eq:expected} &= 2\times (l-1) \times \sum_{\forall j,k,~j<k}p_jp_k \end{align} % Hence, the expected number of local maxima is $O(l)$ for a string of length $l$. \end{proof} In practice, however, the number of local maxima will be much less than $l$, as not every pair of adjacent dissimilar symbols corresponds to a local maxima boundary. Using Eq.~\eqref{eq:expected}, for $m=2$ the maximum number of expected local maxima is $(l-1)/2$ and is $2(l-1)/3$ for $m=3$, which is obtained by substituting the maximum possible value of $P(x_i=1)$. \\ We further optimize the local maxima finding procedure by initially \emph{blocking} the string $str$, as described in~\cite{sumit}, and then searching for the local maxima. This makes the procedure faster and more concise. A contiguous sequence of the same symbol is considered to be a block, and is replaced by a single instance of that symbol representing the block. If a symbol is selected, the entire block associated with it is considered to be selected. The next lemma states that if the inclusion of the symbol representing a block increases the $\chi^2$ value, then the inclusion of the entire block will further increase the $\chi^2$ value. This has been proved in Lemma 3.2.5 and Corollary 3.2.6 on pages 35--37 of~\cite{sumit}. For completeness, we include a sketch of the proof in this paper. \begin{lemma} If the insertion of a symbol of a block increases the chi-square value of the current substring, then the chi-square value will be maximized if the entire block is inserted.
\end{lemma} \begin{proof} Let the current substring be $sub$ and the adjacent block of length $n$ be composed of symbol $\sigma_e \in \Sigma$. Appending one $\sigma_e$ to $sub$ increases the $\chi^2$ value of the new substring. \begin{align} &\text{Given, }\sum^m_{i=1, i \neq e} \frac{\left(p_i \left(l_{sub}+1 \right) - \theta_{i,sub+1} \right)^2}{p_i\left(l_{sub}+1 \right)} + \nonumber \\ &\qquad \qquad \qquad \qquad \frac{\left( p_e \left (l_{sub}+1\right) - \theta_{e,sub+1}\right)^2}{p_e \left( l_{sub}+1 \right)} \geq \nonumber \\ &\qquad \qquad \sum^m_{i=1, i \neq e} \frac{\left(p_il_{sub} - \theta_{i,sub} \right)^2}{p_il_{sub}} + \frac{\left( p_el_{sub} - \theta_{e,sub} \right)^2}{p_el_{sub}} \nonumber \\ &\text{Or, } \chi^2_{sub+1} \geq \chi^2_{sub} \nonumber \\ &\text{By simple algebraic manipulation, we can show that, } \nonumber \\ &\chi^2_{sub+n} \geq \chi^2_{sub+n-1} \geq \dots \geq \chi^2_{sub+2} \geq \chi^2_{sub+1} \nonumber \end{align} Hence, by including the entire block the $\chi^2$ value of the substring will be maximized. \end{proof} The entire string is now blocked, and the local maxima finding procedure works not with the original $str$ but with \emph{aba}, where the first \emph{a} represents the four contiguous $a$'s in $str$, the \emph{b} represents the next three $b$'s, and the final \emph{a} stands for the last occurrence of \emph{a}. The local maxima thus found are \emph{a}, \emph{b} and \emph{a}. The positions of the local maxima are $1$, $5$ and $8$ respectively, according to their position in the original string. Given the local maxima of a string, we need to extract the \emph{global maxima}, which we formally define as follows. \begin{definition}[Global maxima] \emph{Global maxima} is the substring having the maximum chi-square value, and is the substring that we are interested in extracting, i.e., the output substring.
\end{definition} The global maxima has the maximum score among all possible substrings present in the input string. \section{Algorithms} \label{sec:algo} Based on the observations, lemmas and local maxima extracting procedure discussed previously, in this section we explain the \emph{All-Pair Refined Local Maxima} (ARLM) and \emph{Approximate Greedy Maximum Maxima} (AGMM) search algorithms for mining the most significant substring based on the chi-square value. \subsection{All-Pair Refined Local Maxima Search Algorithm (ARLM)} \label{ssec:allpair} Given a string $str$ of length $l$ composed of symbols from the alphabet set $\Sigma$, we first extract all the local maxima present in it in linear time, as described earlier. We also optimize the local maxima finding procedure by incorporating the idea of the blocking algorithm. With $str$ partitioned into its local maxima, the global maxima can either start from the beginning of a local maxima or from a position within it. Thus, it can contain an entire local maxima, a suffix of it, or itself be a substring of a local maxima. It is thus intuitive that the global maxima should begin at a position such that the subsequent sequence of characters offers the maximum chi-square value. Otherwise, we could keep adding symbols to or deleting symbols from the front of such a substring and would still be able to increase its $\chi^2$ value. Based on this, the ARLM heuristic finds within each local maxima the suffix having the maximum chi-square value, and considers the position of the suffix as a potential starting point for the global maxima. Let $xyz$ be a local maxima, where $x$ is a prefix of length $l_x$, $y$ is a single symbol at position $pos$, and $z$ is the remaining suffix having length $l_z$. Appropriately categorizing the components $x$, $y$ and $z$ of a local maxima is crucial for finding the global maxima.
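To make the local maxima scan concrete, here is a minimal sketch in Python (our own illustration; the authors' implementation is in C++, and this sketch omits the blocking optimization). It extends the current local maxima while the chi-square value does not decrease, and closes it on the first decrease:

```python
def chi2(counts, length, probs):
    """Chi-square value of a substring: sum over all symbols of
    (expected - observed)^2 / expected, with expected = p_i * length."""
    return sum((p * length - counts.get(sym, 0)) ** 2 / (p * length)
               for sym, p in probs.items())

def local_maxima(s, probs):
    """One left-to-right pass: extend the current local maxima while the
    chi-square value does not decrease; a decrease closes it and the next
    local maxima begins at the current symbol."""
    maxima, start, counts, prev = [], 0, {}, None
    for i, c in enumerate(s):
        counts[c] = counts.get(c, 0) + 1
        score = chi2(counts, i - start + 1, probs)
        if prev is not None and score < prev:
            maxima.append(s[start:i])      # close the current local maxima
            start, counts = i, {c: 1}      # restart at the current symbol
            score = chi2(counts, 1, probs)
        prev = score
    maxima.append(s[start:])               # the final local maxima
    return maxima

# The running example: str = "aaaabbba" with p_a = 0.2, p_b = 0.8
print(local_maxima("aaaabbba", {"a": 0.2, "b": 0.8}))  # ['aaaa', 'bbb', 'a']
```

On the running example this reproduces the chi-square values quoted in the text ($\chi^2_{a}=4$, $\chi^2_{aa}=8$, $\chi^2_{aaaa}=16$, $\chi^2_{aaaab}=11.25$) and the partition \emph{aaaa}, \emph{bbb}, \emph{a}.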
Let $start\_pos$ and $end\_pos$ be two lists which are initially empty and will contain the position of component $y$, i.e., $pos$, for each of the local maxima. For a local maxima, the chi-square value of all its suffixes is computed. The starting position of the suffix having the maximum chi-square value provides the position $pos$ for the component $y$, i.e., $yz$ will be the suffix of $xyz$ having the maximum chi-square value. The position $pos$ is inserted into the list $start\_pos$. If no such proper suffix exists for the local maxima, the starting position of the local maxima $xyz$ relative to the original string is inserted in the list. After populating the $start\_pos$ list with position entries of $y$ for each of the local maxima of the input string, the list contains the prospective positions from where the $global~maxima$ may start. The string $str$ is now reversed and the same algorithm is re-run. This time, the $end\_pos$ list is similarly filled with the positions of the corresponding components $y^{\prime}$ relative to the beginning of the string. For simplicity and efficiency of operations, we maintain a table $symbol\_count$ having $m$ rows and $l$ columns, where $m$ is the cardinality of the alphabet set. Each row of the table contains, for the associated symbol, its observed count in the prefix of the string whose length is given by the column index. The observed count of a symbol between two given positions of the string can thus be found from this table in $O(1)$ time. The space required in this case becomes $O(lm)$. However, the table reduces the number of accesses of the original string for computing the maximum suffix within each local maxima. It also helps to generalize the algorithm to streaming environments, where it is not possible to store the entire string. Given the two non-empty $start\_pos$ and $end\_pos$ lists, we now find the chi-square value of substrings from position $g \in start\_pos$ to $h \in end\_pos$, with $g \leq h$.
The substring having the maximum value is reported as the global maxima. While computing the chi-square values for all the pairs of positions in the two lists, the top-k substrings can be maintained using a heap of $k$ elements (see Algorithm~\ref{alg:arlm} for the pseudo-code). \begin{algorithm}[t] \caption{ARLM Algorithm} \label{alg:arlm} \begin{algorithmic}[1] \REQUIRE String $S$ with the probability of occurrence of each symbol in the alphabet set. \ENSURE Top-k substrings with the maximum $\chi^2$ value. \STATE Find all the local maxima in $S$ and $S$ reversed. \STATE $start\_pos \leftarrow$ positions of suffixes with maximum $\chi^2$ value in each local maxima of $S$. \STATE $end\_pos \leftarrow$ positions of suffixes with maximum $\chi^2$ value in each local maxima of $S$ reversed. \STATE Based on the $\chi^2$ value, return the top-k substrings formed from all pairs of positions from the two lists. \end{algorithmic} \end{algorithm} Continuing with our example of $str=aaaabbba$, the $start\_pos$ list contains 1, 5 and 8, as the final local maxima does not contain a proper suffix with a chi-square value greater than its own. Computing on $str$ reversed, the $end\_pos$ list will contain 8, 7 and 4. We now consider the substrings formed by the pairs (1,8), (1,7), (1,4), (5,8), (5,7), and (8,8). Calculating all the chi-square values and comparing them, we find that (1,4) has the maximum value and is reported as the global maxima, which is \emph{aaaa}. Taking $k=2$, we find that the substring \emph{aaaabbba} corresponding to (1,8) provides the second highest chi-square value. \subsection{Analysis of ARLM} \label{ssec:analy} \begin{conjecture} The starting position of the \emph{global maxima} is always present in the $start\_pos$ list. \end{conjecture} \begin{corollary} From the above conjecture, it follows that the ending position of the \emph{global maxima} is also present in the $end\_pos$ list.
\end{corollary} \begin{proof} This directly follows from the commutative property stated in Section~\ref{sec:def}. \end{proof} Finding all the local maxima in the string requires a single pass, which takes $O(l)$ time for a string of length $l$. Let the number of local maxima in the string be $d$. Finding the maximum valued suffix for each local maxima using the $symbol\_count$ table requires another pass over each of the local maxima, and thus also takes $O(l)$ time. Since each local maxima contributes one position to the lists, the number of elements in both the lists is $d$. In the rare case that a local maxima contains two or more suffixes with the same maximum $\chi^2$ value greater than that of the local maxima, we store all such positions in the corresponding list. Thus, the lists are of $O(d)$ length. We then evaluate the substrings formed by each possible pair of start and end positions, which takes $O(d^2)$ time. In total, the time complexity of the algorithm becomes $O(l+d^2)$. We argued that although $d$ is of $O(l)$, the expected number of local maxima is far less than that (supported by the empirical values shown in Section~\ref{sec:expt}). So although the theoretical running time degenerates to $O(l^2)$, in practice it is found to be much better. The following optimization further reduces the running time of the algorithm. We evaluate the chi-square values only when the substrings are properly formed from the two lists, i.e., for a given pair of start and end positions obtained from the two lists, the ending position is greater than or equal to the starting position. This further reduces the actual running time compared to that given by $O(d^2)$. We empirically show that the running time is actually 3-4 times less than that of the na\"\i ve algorithm, which computes and compares the value of all the possible substrings present in the original string.
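The pair-evaluation core of ARLM, together with the $symbol\_count$ prefix table, can be sketched as follows (an illustrative Python fragment under our own naming; positions are 0-indexed here, whereas the running example in the text is 1-indexed):

```python
from itertools import product

def prefix_counts(s, probs):
    # symbol_count-style table: counts[sym][j] = occurrences of sym in s[:j]
    counts = {sym: [0] * (len(s) + 1) for sym in probs}
    for j, c in enumerate(s):
        for sym in probs:
            counts[sym][j + 1] = counts[sym][j] + (c == sym)
    return counts

def chi2_range(counts, probs, g, h):
    # chi-square value of s[g..h] using only the prefix table (O(m) time)
    length = h - g + 1
    return sum((p * length - (counts[sym][h + 1] - counts[sym][g])) ** 2
               / (p * length) for sym, p in probs.items())

def arlm_pairs(s, probs, start_pos, end_pos, k=1):
    # Score every properly formed (g, h) pair and keep the top-k
    counts = prefix_counts(s, probs)
    scored = [((g, h), chi2_range(counts, probs, g, h))
              for g, h in product(start_pos, end_pos) if g <= h]
    return sorted(scored, key=lambda t: -t[1])[:k]

# Running example, 0-indexed: start_pos = [0, 4, 7], end_pos = [7, 6, 3]
top2 = arlm_pairs("aaaabbba", {"a": 0.2, "b": 0.8}, [0, 4, 7], [7, 6, 3], k=2)
print([pair for pair, _ in top2])  # [(0, 3), (0, 7)]
```

After shifting to 1-indexing, the output matches the example in the text: (1,4), i.e., \emph{aaaa} with $\chi^2 = 16$, followed by (1,8), i.e., \emph{aaaabbba}.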
\subsection{Approximate Greedy Maximum Maxima Search Algorithm (AGMM)} \label{ssec:greedy} In this section, we propose a greedy algorithm for finding the maximum substring that runs in time linear in the size of the input string $str$. We extract all the local maxima of the input string and its reverse, and populate the $start\_pos$ and $end\_pos$ lists as discussed previously. We identify the local maxima suffix $max$ having the maximum chi-square value among all the local maxima present in the string. AGMM assumes this local maxima suffix to be completely present within the global maxima. We then find a position $g \in start\_pos$ for which the new substring starting at $g$ and ending with $max$ as a suffix has the maximum $\chi^2$ value, over all $g$. Using this reconstructed substring, we find a position $h \in end\_pos$ such that the new string starting at the selected position $g$ and ending at position $h$ has the maximum chi-square measure over all positions $h$. This new substring is reported by the algorithm as the global maxima. \begin{algorithm}[t] \caption{AGMM Algorithm} \label{alg:agmm} \begin{algorithmic}[1] \REQUIRE String $S$ with $start\_pos$ and $end\_pos$ lists. \ENSURE Top-k statistically most significant substrings. \STATE $max \leftarrow$ suffix having the maximum $\chi^2$ value. \STATE $G \leftarrow$ strings starting at positions from $start\_pos$ with $max$ as suffix. \STATE $H \leftarrow$ strings in $G$ ending at positions from $end\_pos$. \STATE Return the top-k strings of $H$ based on $\chi^2$ value. \end{algorithmic} \end{algorithm} Again, using the example of $str=aaaabbba$, we find $max=aaaa$ and, using the two lists, $aaaa$ is returned as the global maxima. For $k=2$, the heuristic returns $aaaabbba$ as the second most significant substring. Using the $symbol\_count$ table, AGMM takes $O(d)$ time, where $d$ is the number of local maxima found. The total running time of the algorithm is $O(d+l)$.
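The greedy steps of AGMM can be sketched as follows (illustrative Python, our own rendering; `maxima` holds the 0-indexed `(start, end)` range of the maximum-$\chi^2$ suffix of each local maxima, and we read "ending with $max$ as a suffix" as fixing the end position of $max$):

```python
def chi2(s, probs, g, h):
    # chi-square value of s[g..h] by direct counting; the paper instead
    # uses the symbol_count table to make this O(m)
    length = h - g + 1
    seg = s[g:h + 1]
    return sum((p * length - seg.count(sym)) ** 2 / (p * length)
               for sym, p in probs.items())

def agmm(s, probs, start_pos, end_pos, maxima):
    # Step 1: the local maxima suffix with the largest chi-square value
    m_start, m_end = max(maxima, key=lambda r: chi2(s, probs, *r))
    # Step 2: greedily pick g so that chi2 of s[g..m_end] is maximized
    g = max((p for p in start_pos if p <= m_start),
            key=lambda p: chi2(s, probs, p, m_end))
    # Step 3: greedily pick h >= m_end so that chi2 of s[g..h] is maximized
    h = max((p for p in end_pos if p >= m_end),
            key=lambda p: chi2(s, probs, g, p))
    return g, h

probs = {"a": 0.2, "b": 0.8}
# 0-indexed maximum-suffix ranges of the local maxima of "aaaabbba"
g, h = agmm("aaaabbba", probs, [0, 4, 7], [7, 6, 3],
            [(0, 3), (4, 6), (7, 7)])
print("aaaabbba"[g:h + 1])  # aaaa
```

On the running example, $max$ is \emph{aaaa} ($\chi^2=16$) and both greedy choices keep it, reproducing the answer quoted above.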
However, the substring returned may not be the actual global maxima at all times. The intuition is that the global maxima will contain the maximum of the local maxima so as to maximize its value. This assumption is justified by the empirical results in Section~\ref{sec:expt}, which show that we almost always obtain an approximation ratio of 0.96 or more, if not the exact value. Being a linear time algorithm, it provides an order of magnitude improvement in runtime as compared to the other algorithms. While finding the values of $g$ and $h$, we can keep track of the chi-square values of all the strings thus formed. Using these values, the heuristic can output the top-k substrings (see Algorithm~\ref{alg:agmm} for the pseudo-code). \section{Experiments} \label{sec:expt} To assess the performance of the two proposed heuristics ARLM and AGMM, we conduct tests on multiple datasets and compare the results with those of the na\"\i ve algorithm and the blocking algorithm~\cite{sumit}. The heap variant of the blocking algorithm is not efficient, as it has a higher running time complexity and uses more memory, and hence is not compared against. The accuracy of the results returned by the heuristics is compared with that of the na\"\i ve algorithm, which provides the optimal answer. We used two real datasets: (i)~innings by innings runs scored by Sachin Tendulkar in one-day internationals (ODI)\footnote{ \url{http://stats.cricinfo.com/ci/engine/player/35320.html?}\\\url{class=2;template=results;type=batting;view=innings}.} and (ii)~the number of user clicks on the front page of \url{msnbc.com}\footnote{ \url{http://archive.ics.uci.edu/ml/}\\\url{datasets/MSNBC.com+Anonymous+Web+Data}.}. We have also used synthetic data to assess the scalability and practicality of the heuristics.
We compare the results based on the following parameters: (i)~search time for top-k queries, (ii)~number of local maxima found, and (iii)~accuracy of the result based on the ratio of the optimal $\chi^2$ value obtained from the na\"\i ve algorithm to that returned by the algorithms. The experiments were conducted on a 2.1 GHz desktop PC with 2 GB of memory using C++ in a Linux environment. \subsection{Real Datasets} \begin{table}[t] \begin{center} \begin{small} \begin{tabular}{|c|c|c|c|c|c|} \hline {\bfseries \# innings} & {\bfseries Total runs} & {\bfseries Avg.} & {\bfseries \# 100} & {\bfseries \# 50} & {\bfseries \# 0} \\ \hline \hline 425 & 17178 & 44.50 & 45 & 91 & 20 \\ \hline \end{tabular} \caption{Sachin Tendulkar's ODI career statistics (as of November 2009).} \label{tab:sachin} \end{small} \end{center} \end{table} Table~\ref{tab:sachin} summarizes the statistics of Sachin Tendulkar's ODI career. The innings where he did not bat were not considered. Given his runs, we quantized the runs scored into 5 symbols as follows: 0-9 is represented by $A$ (Poor), 10-24 by $B$ (Bad), 25-49 by $C$ (Fair), 50-99 by $D$ (Good) and 100+ by $E$ (Excellent). His innings-wise runs were categorized, and from the entire data we calculated the actual probability of occurrence of the different symbols, which were $0.28$, $0.18$, $0.22$, $0.22$ and $0.10$ respectively for the five symbols. With this setting, we extracted the top-k substrings with the maximum chi-square value. These results reflect the periods of his career when he was in top form or when there was a bad patch, since in both cases his performance would deviate from the expected. Table~\ref{tab:sachinres} summarizes the findings. We find that during his best patch he scored $8$ centuries and $3$ half-centuries in 20 innings with an average of $84.31$, while in the off-form period he had an average of $21.89$ in 9 innings without a score above 40.
\begin{table}[t] \begin{center} \begin{small} \begin{tabular}{|c|c|c|c|} \hline {\bfseries Form} & {\bfseries Date} & {\bfseries Avg.} & {\bfseries Runs scored} \\ \hline \hline & {22/04/1998} & & {143,134,33,18,100*} \\ {\bfseries Best} & {to} & {84.31} & {65,53,17,128,77} \\ {\bfseries patch }& {13/11/1998} & & {127*,29,2,141,8,3} \\ & & & {118*,18,11,124*} \\ \hline {\bfseries Worst} & {15/03/1992} & & {14,39,15} \\ {\bfseries patch} & {to} & {21.89} & {10,22,21} \\ & {19/12/1992} & & {32,23,21}\\ \hline \end{tabular} \caption{Result from Sachin's records.} \label{tab:sachinres} \end{small} \end{center} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{sachin_timevsk} \caption{Time for finding the top-k query in Sachin's run dataset.} \label{grap:sachin_time} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{sachin_accvsk} \caption{Approximation ratio of the top-k query in Sachin's run dataset.} \label{grap:sachin_timeb} \end{center} \end{figure} Figure~\ref{grap:sachin_time} and Figure~\ref{grap:sachin_timeb} plot, respectively, the times taken by the different algorithms and the approximation factor (accuracy) of the results for the heuristics, while varying the value of $k$ in the top-k queries. The ARLM algorithm takes less running time than the other procedures, while the AGMM method, being a linear time algorithm, is very fast. The accuracy of the ARLM heuristic is found to be 100\% for the top-1 query, i.e., it provides the correct result, validating the conjecture we proposed in Section~\ref{sec:algo}. As the value of $k$ increases, we find an increase in the approximation ratio of both the heuristics, as the number of pairs of local maxima involved increases, giving better results. The number of local maxima found is smaller than the number of blocks constructed by the blocking algorithm (see Table~\ref{tab:comp}). Thus, the heuristic prunes the search space more efficiently.
The second real dataset that we considered contains the number of user clicks on the front page of the website \url{msnbc.com} during various periods of a day, taken from a sample of 989819 users. Analysis of the clicks from a group of users provides an insight into potential clients for the organization, or customers for e-commerce purposes. The number of clicks has been categorized as follows: 1-3 clicks are represented by $A$ (Low), 4-9 clicks by $B$ (Medium) and 10+ clicks by $C$ (High). We accordingly quantized the dataset and then performed the experiments by calculating the actual probability of occurrence of the different symbols, which were $0.43$, $0.36$ and $0.21$ respectively. Table~\ref{tab:click} describes the data values and tabulates the result for the top-1 query. Due to the time-consuming nature of this dataset, we did not search for the top-k queries with varying values of $k$. The results show that the ARLM technique has a better running time than the others, and also operates on a smaller number of local maxima as opposed to the number of blocks for the blocking algorithm (see Table~\ref{tab:comp}). The approximation factor for both the heuristics is $1$ for the top-1 search, thereby yielding the correct result. \begin{table}[t] \begin{center} \begin{small} \begin{tabular}{|c|c|c|} \hline {\bfseries Algorithm} & {\bfseries Searching time} & {\bfseries Approx.
ratio} \\ \hline \hline {Na\"\i ve} & {75+ hrs} & 1 \\ \hline {Blocking} & {52 hrs} & 1 \\ \hline {ARLM} & {40 hrs} & 1 \\ \hline {AGMM} & {3 hrs} & 1 \\ \hline \end{tabular} \caption{Results for the dataset (containing 989819 records) of the number of user clicks.} \label{tab:click} \end{small} \end{center} \end{table} \begin{table}[t] \begin{center} \begin{small} \begin{tabular}{|c|c|c|} \hline {\bfseries Dataset} & {\bfseries \# Blocks} & {\bfseries \# Local maxima} \\ \hline \hline {Sachin} & {319} & {281} \\ \hline {Web clicks} & {835142} & {759921} \\ \hline \end{tabular} \caption{Number of blocks versus local maxima for the real datasets.} \label{tab:comp} \end{small} \end{center} \end{table} \subsection{Synthetic datasets} We now benchmark the ARLM and AGMM heuristics against datasets generated randomly using a uniform distribution. To simulate the deviations from the expected characteristics observed in real applications, we perturb the random data thus generated with chunks of data generated from a geometric distribution with parameter $p=0.3$. These strings are then mined to extract the top-k substrings with the largest chi-square values. The parameters that affect the performance of the heuristics are: (i)~length of the input string, $l$, (ii)~size of the alphabet set, $m$, and (iii)~number of top-k values. For different values of these parameters, we compare our algorithms with the existing ones on the basis of (a)~time to search, (b)~approximation ratio of the results, and (c)~the number of blocks evaluated in case of the blocking algorithm versus the number of local maxima found by our algorithm.
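One way to generate such perturbed strings is sketched below (Python, under our own assumptions; the paper does not specify how the geometric chunks are placed, so the chunk count and placement here are arbitrary choices made for illustration):

```python
import random

def synthetic_string(l, m, p=0.3, seed=0):
    """Uniform random string over an m-symbol alphabet, perturbed with runs
    of a single repeated symbol whose lengths are geometric(p).  The number
    and placement of the perturbed chunks are our own (arbitrary) choices."""
    rng = random.Random(seed)
    alphabet = [chr(ord("a") + i) for i in range(m)]
    s = [rng.choice(alphabet) for _ in range(l)]
    for _ in range(max(1, l // 100)):      # sprinkle a few deviant chunks
        start = rng.randrange(l)
        run = 1
        while rng.random() > p:            # geometric(p) run length
            run += 1
        sym = rng.choice(alphabet)
        for j in range(start, min(l, start + run)):
            s[j] = sym
    return "".join(s)

s = synthetic_string(10**4, 5)
print(len(s))  # 10000
```

The deviant runs give the mined substrings something significant to find, while the uniform background matches the expected symbol probabilities $p_i = 1/m$.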
\subsection{Effect of parameters} \begin{table}[t] \begin{center} \begin{small} \begin{tabular}{|c|c|c|c|} \hline {\bfseries Parameters} &{\bfseries Variable} & {\bfseries \# Blocks} & {\bfseries \# Local maxima} \\ \hline \hline {m=5,} & {l=$10^3$} & {831} & {742} \\ {k=1} & {l=$10^4$} & {7821} & {6740} \\ & {l=$10^5$} & {77869} & {66771} \\ \hline {l=$10^4$,} & {m=5} & {7821} & {6740} \\ {k=1} & {m=25} & {8104} & {7203} \\ & {m=50} & {8704} & {7993} \\ \hline \end{tabular} \caption{Results for the uniform dataset.} \label{tab:unif} \end{small} \end{center} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{time_l_5_1} \caption{Effect of length on search time.} \label{grap:time_l} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{time_10e4_m_1} \caption{Effect of alphabet size on search time.} \label{grap:time_lb} \end{center} \end{figure} Figure~\ref{grap:time_l} shows that with the increase in the length of the input string $l$, the time taken for searching the top-k queries increases. The number of blocks or local maxima increases with the size of the string, and hence the computation time also increases. The time increases more or less quadratically for ARLM and the other existing algorithms, in accordance with the analysis shown in Section~\ref{ssec:analy}. ARLM takes less running time than the other techniques, as the number of local maxima found is less than the number of blocks found by the blocking algorithm (see Table~\ref{tab:unif}). Hence, it provides better pruning of the search space and is faster. On the other hand, AGMM, being a linear time heuristic, runs an order of magnitude faster than the others.
We also find that the accuracy of the top-k results reported by AGMM shows an improvement with the increase in the string length (see Figure~\ref{grap:acc_k}), as the deviation of substrings becomes more prominent with respect to the large portions of the string depicting expected behavior. The approximation factor for ARLM is $1$ for the top-$1$ query in all the cases tested, while for other top-k queries and for AGMM it is always above $0.96$. \\ \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{time_10e4_5_k} \caption{Effect of the value of k for top-k queries on search time.} \label{grap:time_pb} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{time_p0} \caption{Effect of probability in a two symbol string on search time.} \label{grap:time_p} \end{center} \end{figure} Varying the size of the alphabet set $m$, we find that the time taken for searching the top-k query as well as the number of blocks formed increases (Table~\ref{tab:unif} and Figure~\ref{grap:time_lb}). As $m$ increases, the number of blocks increases, since the probability of the same symbol occurring contiguously falls off. We observed in Section~\ref{sec:def} that a local maxima can only end at positions containing adjacent dissimilar symbols. So the number of local maxima found increases, thereby increasing the computation time of the algorithms. There seems to be no appreciable effect of $m$ on the approximation ratio of the results returned by the algorithms. We tested with varying values of $m$ with $l=10^4$ and $k=2$, and found the ratio to be $1$ in all cases. Figure~\ref{grap:time_p} shows the effect of varying the probability of occurrence of one of the symbols in a string composed of two symbols only. The approximation ratio remained $1$ for both heuristics for the top-$1$ query.
\\ \begin{figure}[t] \begin{center} \includegraphics[width=0.85\columnwidth]{acc_k} \caption{Approximation ratio of the top-k query.} \label{grap:acc_k} \end{center} \end{figure} We next show the scalability of our algorithms by conducting experiments for varying values of $k$ for the top-k substrings. Figure~\ref{grap:time_pb} shows that the search time increases with the increase in the value of $k$. This is expected, as we are required to perform more computations. The accuracy of the results for the heuristics increases with $k$. For $k = 2$, it is 0.96, and it increases up to 1 when $k$ becomes more than 10. The number of blocks or local maxima found remains unchanged with the variation of $k$. \section{Conclusions} \label{sec:conc} In this paper, we have proposed two heuristics for searching a given string for the top-k substrings having the maximum chi-square value, representing their deviation from the expected nature, with the possibility of hidden patterns or information. We described how the chi-square measure closely approximates the p-value and is apt for mining such substrings. We provided a set of observations based on which we developed two heuristics, one that runs in time quadratic in the number of local maxima, and another that is linear. Our experiments showed that the proposed heuristics are faster than the existing algorithms. The algorithms return results that have an approximation ratio of more than 0.96. { \bibliographystyle{abbrv}
\section{Introduction} In standard cosmology, we assume that our universe is isotropic and homogeneous, and accordingly is described by the Friedmann-Lema$\hat{\mbox{\i}}$tre-Robertson-Walker (FLRW) metric. Recent observations of the Cosmic Microwave Background (CMB) temperature distribution on the celestial sphere show that the spatial curvature is flat. Furthermore, the distance-redshift relation of type Ia supernovae indicates that the expansion of the present universe is accelerated. We are then led to introduce, within the flat FLRW model, ``dark energy,'' which has negative pressure and behaves just like a positive cosmological constant. However, no satisfactory model that explains the origin of dark energy has so far been proposed. As an attempt to explain the SNIa distance-redshift relation without invoking dark energy, Tomita proposed a ``local void model'' \ccite{Tomita1st}. In this model, our universe is no longer assumed to be homogeneous, having instead an underdense local void in the surrounding overdense universe. The isotropic nature of cosmological observations is realized by assuming spherical symmetry and demanding that we live near the center of the void. Furthermore, the model is supposed to contain only ordinary dust-like cosmic matter. Since such a spacetime can be described by the Lema$\hat{\mbox{\i}}$tre-Tolman-Bondi (LTB) spacetime \ccite{L}-\ccite{B}, we also call this model the ``LTB cosmological model.'' Since the rate of expansion in the void region is larger than that in the outer overdense region, it can explain the observed dimming of SNIa luminosity. In fact, many numerical analyses \ccite{Tomita1st2}-\ccite{GBHkSZ} have recently shown that this LTB model can accurately reproduce the SNIa distance-redshift relation.
However, in order to verify the LTB model as a viable cosmological model, one has to test it against various observations---such as CMB temperature anisotropy---other than the distance-redshift relation\footnote{ Recently, some constraints on the LTB model from BAO and kSZ effects have also been discussed, see e.g. \ccite{GBHkSZ}. Still, the possibility of the LTB model is not completely excluded. }. For this purpose, in this paper, we derive some analytic formulae that can be used to rigorously compare consequences of the LTB model with observations of CMB anisotropy. More precisely, we derive analytic formulae for the dipole and quadrupole moments of the CMB temperature anisotropy, and then use the dipole formula to place a constraint on the distance between an observer and the symmetry center of the LTB model. We also check the consistency of our formulae with some numerical analysis of the CMB anisotropy in the LTB model, previously made by Alnes and Amarzguioui \ccite{AACMB}. In \refsec{LTB}, we briefly summarize the LTB metric. In \refsec{multipole}, we derive analytic formulae for CMB anisotropy in the LTB model. In \refsec{constraint}, we obtain some constraints concerning the position of the observer. \refsec{summary} is devoted to a summary. \section{LTB spacetime} \label{O57_sec: LTB} A spherically symmetric spacetime with only non-relativistic matter is described by the Lema$\hat{\mbox{\i}}$tre-Tolman-Bondi (LTB) metric \ccite{L}-\ccite{B} \Eq{ ds^2 = -dt^2 + \frac{\{R' (t, r)\}^2}{1-k(r)r^2}dr^2 + R^2 (t, r) d\Omega_2^2, } where $' \equiv \p r$, and $k(r)$ is an arbitrary function of $r$ only. The Einstein equations reduce to \Eqr{ \left(\frac{\dot R}{R}\right)^2 &=& \frac{2GM(r)}{R^3} - \frac{k(r)r^2}{R^2}, \\ 4\pi\rho (t,r) &=& \frac{M' (r)}{R^2 R'}, } where $\dot{} \equiv \p t$, $M(r)$ is an arbitrary function of $r$ only, and $\rho(t, r)$ is the energy density of the non-relativistic matter.
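As a quick consistency check (our own remark, not part of the original text), choosing $R(t,r) = a(t)\,r$ and $k(r) = k = {\rm const.}$, so that $R' = a$, recovers the homogeneous FLRW limit of the equations above:

```latex
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega_2^2\right] ,
\qquad
\left(\frac{\dot a}{a}\right)^2 = \frac{2GM(r)}{a^3 r^3} - \frac{k}{a^2} .
% With M(r) = (4\pi/3)\,\rho\, a^3 r^3 (dust: \rho a^3 = {\rm const.}),
% the second Einstein equation gives 4\pi\rho = M'/(R^2 R') = M'/(a^3 r^2),
% and the first reduces to the Friedmann equation:
H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2} .
```

The inhomogeneity of the LTB model thus resides entirely in the freedom to choose nontrivial profiles for $k(r)$ and $M(r)$.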
The general solution of the Einstein equations in this model admits two arbitrary functions $k(r)$ and $M(r)$. By appropriately choosing the profiles of these functions, one can construct models which reproduce the distance-redshift relation of SNIa. \section{Analytic formulae for CMB anisotropy in LTB model} \label{O57_sec: multipole} In this section, we derive analytic formulae for the CMB anisotropy in the LTB model. First, we assume that the universe was locally in thermal equilibrium at the last scattering surface (that is, the distribution function $F$ was the Planck distribution $\Phi$), and that the direction of CMB photon propagation is fixed. In this case, $F$ can be written as $F = \Phi (\omega/T)$, where $\omega \equiv p^t$, and $T$ is the temperature. Then, the CMB temperature anisotropy $\delta T/T$ is defined by \Eq{ \delta F = -\frac{\delta T}{T}\omega\p\omega F. } Second, supposing that an observer lives at a distance of $\delta x^i$ from the center of the void, it follows that \Eqr{ (\delta F)^{(1)} &=& \delta x^i (\p i F)_0, \\ (\delta F)^{(2)} &=& \frac{1}{2}\delta x^i \delta x^j (\p i \p j F)_0, } where the subscript 0 denotes the value at the center ($r = 0$) at the present time ($t = t_0$). From these, the CMB temperature anisotropy dipole $(\delta T/T)^{(1)}$ and quadrupole $(\delta T/T)^{(2)}$ are written as \Eqr{ \left(\frac{\delta T}{T}\right)^{(1)} &=& -\frac{\delta x^i (\p i F)_0}{\Fy}, \\ \left(\frac{\delta T}{T}\right)^{(2)} &=& -\frac{1}{2}\frac{\delta x^i \delta x^j (\p i\p j F)_0}{\Fy} +\frac{1}{2}\left\{\left(\frac{\delta T}{T}\right)^{(1)}\right\}^2 \frac{\Fyy}{\Fy}. } We assume that the distribution function $F(x, p)$ itself is spherically symmetric. Then, $F$ can be written as $F(x, p) = F_0 (t, r, \omega, \mu)$, where $\mu \equiv R'p^r /(\sqrt{1 - kr^2}\omega)$. This implies that $\p i F = (\p i r)\Fr + (\p i \omega)\Fo + (\p i \mu)\Fm$.
Then, we can derive analytic formulae for the CMB anisotropy dipole by solving the Boltzmann equation $\mathscr{L}[F_0] = \Ft + \dot r \Fr + \dot{\omega}\Fo + \dot{\mu}\Fm = 0$. The result is \Eq{ \left(\frac{\delta T}{T}\right)^{(1)} = \delta L n^j \Omega_j \left\{\frac{\sqrt{1 - k(r_i)r_i^2}}{R'_0}\eP{t_0}{t_i}\left(\frac{\Fr}{\Fy}\right)_i +\int^{r_i}_0 dr \Hpa' \exp\left[\int^t_{t_0}dt_1 \Hpa(t_1)\right]\right\}, \label{O57_eq: dipole} } where $\delta L n^j$ is the position vector of the observer, $\Omega^j \equiv x^j /r$, $\tilde P(t_0, t_i) \equiv \int^{r_i}_0 dr R''/R'$, $\Hpa \equiv \dot R'/R'$, and the subscript $i$ denotes the value at the last scattering surface. By a similar method, we also derive the CMB anisotropy quadrupole formula \Eqr{ \left(\frac{\delta T}{T}\right)^{(2)} &=& -\frac{\delta x^i \delta x^j}{2(\Fy)_i} \Biggl[(\delta_{ij} - \Omega_i \Omega_j )\left(\frac{\Fr}{r} - \mu\frac{\Fm}{r^2}\right)_0 + \Omega_i \Omega_j (\Frr)_0 \nonumber\\ && \hspace{62pt} + \left\{\frac{\ape''}{\ape}\delta_{ij} +\ape\left(\frac{R'}{\sqrt{1 - kr^2}} - \ape\right)''\frac{\Omega_i \Omega_j}{(R')^2}\right\}_0 (\Fy)_i \Biggr] \nonumber\\ && +\frac{1}{2}\left\{\left(\frac{\delta T}{T}\right)^{(1)}\right\}^2 \frac{\Fyy}{\Fy}, \label{O57_eq: quadrupole} } where $\ape \equiv R/r$. \section{Constraint on LTB model} \label{O57_sec: constraint} In this section, we derive some constraints concerning the position of the off-center observers in the LTB model from the CMB dipole formula \refeq{dipole}. In general, the CMB temperature anisotropy is decomposed in terms of the spherical harmonics $Y_{lm}$ by \Eq{ \frac{\delta T}{T} = \sum_{l, m} a_{lm}Y_{lm}, } where the amplitudes in the expression are recovered as \Eq{ a_{lm} = \int^{2\pi}_0 d\phi \int^{\pi}_0d\theta \sin\theta\frac{\delta T}{T}Y_{lm}. } We are interested in $a_{10}$ as the dipole moment. 
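As an illustrative numerical cross-check (our own addition): the dipole formula above is proportional to $n^j\Omega_j$, so for an observer displaced along the polar axis $\delta T/T = A\cos\theta$ for some amplitude $A$, and projecting onto $Y_{10} = \sqrt{3/4\pi}\,\cos\theta$ with the decomposition above yields $a_{10} = A\sqrt{4\pi/3}$. A short Python check by the midpoint rule:

```python
import math

def a10_of_pure_dipole(amplitude, n=100_000):
    # a_10 = ∫ dphi ∫ dtheta sin(theta) (deltaT/T) Y_10, with
    # deltaT/T = A cos(theta) and Y_10 = sqrt(3/(4 pi)) cos(theta);
    # the phi integral contributes a factor of 2 pi.
    y10 = math.sqrt(3.0 / (4.0 * math.pi))
    h = math.pi / n
    total = 0.0
    for i in range(n):                     # midpoint rule in theta
        theta = (i + 0.5) * h
        total += math.sin(theta) * (amplitude * math.cos(theta)) \
                 * (y10 * math.cos(theta)) * h
    return 2.0 * math.pi * total

A = 1.0e-3                                 # dipole amplitude deltaT/T ~ 1e-3
print(a10_of_pure_dipole(A))               # ≈ A * sqrt(4 pi / 3) ≈ 2.05e-3
```

This is the conversion used implicitly in the next section: an amplitude $A \sim 10^{-3}$ corresponds to $a_{10} \sim 2\times 10^{-3}$.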
We estimate the CMB dipole formula \refeq{dipole} numerically by using the profile considered in \ccite{AACMB} (\reffig{AA}), \Eqr{ M(r) &=& \frac{1}{2}H_{\perp}^{2}(t_{0}, r_{\rm out})r^{3}\left[\alpha_{0} - \Delta\alpha\left(\frac{1}{2} - \frac{1}{2}\tanh{\frac{r - r_0}{2\Delta r}}\right)\right], \\ k(r) &=& -H_{\perp}^{2}(t_{0}, r_{\rm out})\left[\beta_{0} - \Delta\beta\left(\frac{1}{2} - \frac{1}{2}\tanh{\frac{r - r_0}{2\Delta r}}\right)\right], \label{O57_eq: profile1} } where \Eqr{ t_{s}(r) = 0, \ \ \Hpe(t_{0}, r_{\rm out}) = 51 \ {\rm km/s/Mpc}, \ \ \alpha_{0} = 1, \ \ \Delta\alpha = 0.90, \nonumber\\ r_{0} = 1.34 \ {\rm Gpc}, \ \ \Delta r = 0.536 \ {\rm Gpc}, \ \ \beta_{0} = 1 - \alpha_{0} = 0, \ \ \Delta\beta = -\Delta\alpha = -0.90, \label{O57_eq: profile2} } and $\Hpe \equiv \dot{\ape}/\ape$. \begin{figure} \begin{tabular}{cc} \begin{minipage}{0.46\hsize} \begin{center} \includegraphics[width=0.77\hsize]{O57_AA-I-rho.eps} \end{center} \end{minipage} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=0.77\hsize]{O57_AA-I-H.eps} \end{center} \end{minipage} \end{tabular} \caption{The profile considered in \ccite{AACMB}. The subscript $rec$ denotes the value at recombination, and $\Hpa ({\rm or}\ \Hpe) = 100h \rm km/s/Mpc$. } \label{O57_fig: AA} \end{figure} The dipole observed by the Cosmic Background Explorer (COBE) \ccite{COBE} requires the induced $a_{10}$ to be of order $10^{-3}$ or less, so we find that \Eq{ \delta L \lineq 15 \rm Mpc, } where $\delta L$ is the distance from the observer to the center of the void. This is consistent with the result of \ccite{AACMB}. \section{Summary} \label{O57_sec: summary} In the LTB model, we have derived the analytic formulae for the CMB anisotropy dipole \refeq{dipole} and quadrupole \refeq{quadrupole}, which can be used to rigorously compare the consequences of this model with observations of the CMB anisotropy. 
Moreover, we checked the consistency of our formulae with the results of the numerical analysis in \ccite{AACMB}, and constrained the distance from an observer to the center of the void. One of the advantages of obtaining analytic formulae is that we can identify the physical origins of the CMB anisotropy in the LTB model. For example, in the CMB dipole formula \refeq{dipole}, we can regard the first term as the initial condition at the last scattering surface, and the second term as the Integrated Sachs-Wolfe effect. \section*{Acknowledgments} The authors would like to thank all participants of the workshop $\Lambda$-LTB Cosmology (LLTB2009) held at KEK from 20 to 23 October 2009 for useful discussions. We would also like to thank Hajime Goto for useful discussions. This work is supported by the project Shinryoiki of the SOKENDAI Hayama Center for Advanced Studies and the MEXT Grant-in-Aid for Scientific Research on Innovative Areas (No. 21111006).
\section{Details on Hartree-Fock calculation of the Hubbard model} We start with the Hubbard model on the honeycomb lattice: \begin{align} H& =-t_{AB} \sum_{\langle ij \rangle}(c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.)+D\sum_{i\in A} n_i-D \sum_{i\in B}n_i \notag \\ &-t_A \sum_{\langle \langle ij \rangle \rangle_A}(e^{i \phi^A_{ij} \tau_z}c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.) -t_B \sum_{\langle \langle ij \rangle \rangle_B}(e^{i \phi^B_{ij} \tau_z}c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.) \notag \\ &+ \frac{U}{2} \sum_{i}n_i(n_i-1)+ V\sum_{\langle ij\rangle}n_in_j + V'\sum_{\langle\langle ij\rangle\rangle}n_in_j \end{align} We can decouple the interaction part through a mean field ansatz: \begin{align} H_{HF}& =-t_{AB} \sum_{\langle ij \rangle}(c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.)+D\sum_i (-1)^{\sigma_z(i)}n_i \notag \\ &-t_A \sum_{\langle \langle ij \rangle \rangle_A}(e^{i \phi^A_{ij} \tau_z}c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.) -t_B \sum_{\langle \langle ij \rangle \rangle_B}(e^{i \phi^B_{ij} \tau_z}c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.) \notag \\ &+ U \sum_{i}(\rho_i c^\dagger_{i,\alpha}c_{i,\alpha} -\frac{1}{2}\rho_i^2) - U \sum_{i} (\tau^{\mu}_i c^\dagger_{i,\alpha}\tau^\mu_{\alpha\beta}c_{i,\beta} -\frac{1}{2} \tau_i^{\mu}\tau_i^{\mu}) \notag \\ &+ 2V\sum_{\langle ij\rangle} (\rho_j c^\dagger_{i,\alpha}c_{i,\alpha} - \chi_{ji,\beta\alpha} c^\dagger_{i,\alpha}c_{j,\beta} -\frac{1}{2}\rho_i\rho_j +\frac{1}{2}|\chi_{ij,\alpha\beta}|^2) \notag \\ &+ 2V'\sum_{\langle\langle ij\rangle\rangle} \rho_j c^\dagger_{i,\alpha}c_{i,\alpha} - \frac{1}{2}\rho_i\rho_j \end{align} with the self-consistent equations: \begin{align} \chi_{ij,\alpha\beta}&=\langle c^\dagger_{i,\alpha} c_{j,\beta} \rangle \;\mathrm{for\;} i,j\in \mathrm{NN} \notag \\ \rho_{i} &= \langle c^\dagger_{i,\alpha} c_{i,\alpha} \rangle \notag \\ \tau^{\mu}_{i} &= \langle c^\dagger_{i,\alpha} \tau^{\mu}_{\alpha\beta}c_{i,\beta} \rangle \end{align} The mean field Hamiltonian is solved self-consistently. 
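The structure of such a self-consistent solution can be illustrated with a minimal toy version of the loop: diagonalize the mean field Hamiltonian, recompute the densities from the occupied orbitals, mix, and repeat. The sketch below is our own illustrative example (a one-dimensional Hubbard ring at half filling with a density-only decoupling, not the honeycomb model above), showing only the generic iteration.

```python
import numpy as np

def hartree_fock_hubbard(L=6, t=1.0, U=4.0, n_elec=6, tol=1e-8, mix=0.5):
    """Minimal self-consistent Hartree-Fock loop for a 1D Hubbard ring."""
    # tight-binding hopping on a ring
    H0 = np.zeros((L, L))
    for i in range(L):
        H0[i, (i + 1) % L] = H0[(i + 1) % L, i] = -t
    # slightly staggered initial densities so the loop can break symmetry
    stag = 0.1*np.array([(-1)**i for i in range(L)])
    n_up, n_dn = 0.5 + stag, 0.5 - stag
    occ = n_elec // 2  # occupied orbitals per spin
    for _ in range(500):
        # each spin species feels the Hartree potential of the other
        _, v_up = np.linalg.eigh(H0 + np.diag(U*n_dn))
        _, v_dn = np.linalg.eigh(H0 + np.diag(U*n_up))
        new_up = (np.abs(v_up[:, :occ])**2).sum(axis=1)
        new_dn = (np.abs(v_dn[:, :occ])**2).sum(axis=1)
        if max(np.abs(new_up - n_up).max(), np.abs(new_dn - n_dn).max()) < tol:
            return new_up, new_dn
        # linear mixing stabilizes the iteration
        n_up = (1.0 - mix)*n_up + mix*new_up
        n_dn = (1.0 - mix)*n_dn + mix*new_dn
    return n_up, n_dn

n_up, n_dn = hartree_fock_hubbard()
print(n_up, n_dn)  # converged site-resolved densities
```

In the actual calculation the same loop runs over the full set of order parameters $(\chi_{ij,\alpha\beta},\rho_i,\tau^\mu_i)$ on the $\sqrt{3}\times\sqrt{3}$ cell.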
In addition, we use a $\sqrt{3}\times\sqrt{3}$ unit cell to allow states with translation symmetry breaking. In Fig.\ref{compare}, we show that the phase diagram from Hartree-Fock calculation is consistent with the Schwinger boson mean field theory of the t-J model detailed in the following section. In the Hartree-Fock calculation, magnetic orders in both layers are self-consistently generated. We find that the A layer is fully valley-polarized from this self-consistent calculation. \begin{figure}[!ht] \centering \subfloat[][]{\includegraphics[width=.48\textwidth]{image/summary_phasediagram/M-V_pd_gap.pdf}}\quad \subfloat[][]{\includegraphics[width=.48\textwidth]{image/summary_compare/M-V_gap_HF.pdf}} \caption{Comparison between phase diagram from (a) t-J model treated with Schwinger boson mean field (b) Hubbard model treated with Hartree-Fock mean field. Color shows the charge gap. The two approaches are qualitatively consistent. MI is the layer polarized Mott insulator with $120^\circ$ anti-ferromagnetic order. FL is the Fermi liquid or electron hole liquid. There are three different excitonic insulators: AF-EI, NEI and ECI phase. $\tau_z$ is polarized in the A layer for all these three phases. The magnetic order in the B layer is $120^\circ$ ordered in the AF-EI phase and $\tau_z$ polarized in the NEI and ECI phase. ECI has a $p\pm ip$ exciton condensation symmetry and a non-zero Chern number $C=\pm 1$, while NEI has $s+p$ pairing symmetry for the exciton condensation and zero Chern number. 
} \label{compare} \end{figure} \section{Details on Schwinger boson calculation of the t-J model} In the limit of large $U$, the Hubbard model is approximated by a $t-J$ model \begin{align} H&=-t_A \sum_{\langle \langle ij \rangle \rangle_A}(e^{i \phi^A_{ij} \tau_z}c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.)+D\sum_{i \in A}n_i \notag \\ &~~~-t_B \sum_{\langle \langle ij \rangle \rangle_B}(e^{i \phi^B_{ij} \tau_z}Pc^\dagger_{i;\alpha}c_{j;\alpha}P+h.c.)-D \sum_{i \in B} n_i \notag \\ &~~~ + J_B\sum_{\langle\langle ij\rangle\rangle_B} \tau^\mu_iR^{\mu\nu}_z(2\phi^B_{ij})\tau^\nu_j +(V'-\frac{J_B}{4}) \sum_{\langle \langle ij \rangle \rangle_B}n_i n_j\notag \\ &~~~+V\sum_{\langle ij\rangle}n_in_j + V'\sum_{\langle\langle ij\rangle\rangle_A}n_in_j \label{HamtJ2} \end{align} where $J_B=\frac{4t_B^2}{U}$, and $R^{\mu\nu}_z$ is the matrix representing a rotation around the $z$ axis, namely, $R(\phi) = \begin{pmatrix} \cos{\phi} & \sin{\phi} & 0\\ -\sin{\phi} & \cos{\phi} & 0\\ 0 & 0& 1 \end{pmatrix}$ in the basis of $(\tau_x,\tau_y,\tau_z)$. Note that since the $A$ layer has a low density $x$, the double-occupancy projection $P$ is not expected to be relevant. So we have treated the A layer as free fermions instead of using the composite degrees of freedom. In the calculation, we will set the magnetization in the A layer to be polarized along the $\tau^z$ direction, which is found to hold throughout the Hartree-Fock phase diagram. We use a fermionic holon, bosonic spinon construction for the B layer to implement the projection: $c_{i\alpha}=b_{i\alpha}f_i$, with the constraint $n^f_i=n^b_i$. $b_{i;\alpha}$ will be assumed to be condensed and just represents a magnetic order. This is just a convenient way to deal with the charge degree of freedom moving in the background of a magnetic order. No exotic phase with fractionalization is targeted in our calculation. 
The Hamiltonian in Eq.~\ref{HamtJ2} is rewritten as \begin{equation} H = H_t + H_D + H_J + H_V \end{equation} where \begin{equation} \begin{split} H_t &=-t_A \sum_{\langle \langle ij \rangle \rangle_A}(e^{i \phi^A_{ij} \tau_z}c^\dagger_{i;\alpha}c_{j;\alpha}+h.c.)+D\sum_{i \in A}n_i \notag \\ &~~~-t_B \sum_{\langle \langle ij \rangle \rangle_B}(e^{i \phi^B_{ij} \tau_z}b^\dagger_{i\alpha} b_{j\alpha}f^\dagger_{i}f_{j}+h.c.) \notag\\ H_D &= D \sum_{i\in A}f_i^\dagger f_i-D \sum_{i\in B} f_i^\dagger f_i \notag \\ H_J &= J \sum_{\langle\langle ij\rangle\rangle_B}b^\dagger_{i\alpha}\sigma^\mu_{\alpha\beta} b_{i\beta}R^{\mu\nu}_z(2\phi^B_{ij})b^\dagger_{j\gamma}\sigma^\nu_{\gamma\delta} b_{j\delta}\\ H_V &= V \sum_{\langle ij\rangle}f^\dagger_{i} f_{i}f^\dagger_{j}f_{j}+V' \sum_{a=A,B}\sum_{\langle \langle ij \rangle \rangle_a}n_i n_j \end{split} \end{equation} The exciton insulators found in Hartree-Fock always have $\tau_z$ polarization in the A layer. But the magnetic order of the B layer could be either $120^\circ$ order or $\tau_z$ polarized FM order. To study the competition between the two magnetic orders of the B layer, we adopt the following ansatz for the Schwinger boson part \begin{equation} \begin{pmatrix} \langle b_{i;+} \rangle \\ \langle b_{i;-} \rangle\end{pmatrix}=F_i \end{equation} with $ F^A_i = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $ F^B_i =\sqrt{n_B} \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ for the FM order, or $F^B_i=\sqrt{n_B} e^{\frac{i}{2}(\v Q \cdot \v r_i) \tau_z} \begin{pmatrix} \cos{\frac{\theta}{2}} \\ \sin{\frac{\theta}{2}} \end{pmatrix}$ for the canted AFM order, where $\v Q = \v K_M$ or $-\v K_M$. After the magnetic orders are decided for both layers, the charge degree of freedom is captured by a spinless model. We also label the valley polarized operator $c_{i;+}$ in the A layer as $f_i$ with $i \in A$. 
The kinetic term is now \begin{align} H_t^f&=-\sum_{a=A,B} \sum_{\langle \langle ij \rangle \rangle_a} t^f_{ij;a} f_i^\dagger f_j +D\sum_{i \in A} f_i^\dagger f_i -D \sum_{i \in B}f_i^\dagger f_i \label{eq:Ht_appendix} \end{align} where $t^f_{ij;a}=t_a (F_i^{a\dagger} e^{i \phi_{ij;a} \tau_z} F^a_j)$. $n_i=f_i^\dagger f_i$ is constrained to be the physical density. The total filling is $n_A+n_B=1$, so there are electron and hole pockets in the two layers, respectively. The $H_J$ term simply contributes an energy for each magnetic pattern: \begin{equation} H_J = J \sum_{\langle\langle ij\rangle\rangle_B} n_B \bigg(\cos^2{\theta} + \sin^2{\theta}\cos{(2\phi_{ij}+\v Q\cdot\v r_{ij})}\bigg) \end{equation} where $\v Q =\v \Gamma_M, \v K_M \;\mathrm{or} -\v K_M$. Here $\theta$ is the angle between the $z$-axis and the spin vector $\vec{\tau}_B$, and $\phi$ is the valley-contrasting hopping phase. The $H_V$ term is treated by a mean field decoupling \begin{equation} H_V=-V\sum_{\langle ij \rangle} \chi_{ji} f^\dagger_i f_j + h.c. - |\chi_{ij}|^2 \end{equation} with the self-consistent equation \begin{equation} \chi_{ij}=\langle f_i^\dagger f_j \rangle \end{equation} The mean field energy is \begin{align} H_{MF} =&- \sum_{a=A,B} \sum_{\langle \langle ij \rangle \rangle_a} t_a (F_i^{a\dagger} e^{i \phi_{ij;a} \tau_z} F^a_j) \chi_{ij} + h.c. +D\sum_{i \in A} \langle n_i\rangle -D \sum_{j \in B} \langle n_j\rangle \\ \nonumber &- V\sum_{\langle ij \rangle} |\chi_{ij}|^2 + J \sum_{\langle\langle jj'\rangle\rangle_B} n_B \bigg(\cos^2{\theta} + \sin^2{\theta}\cos{(2\phi_{jj'}+\v Q\cdot\v r_{jj'})}\bigg) \end{align} The calculation proceeds in the following way. We target either FM or $120^\circ$ AFM order in the B layer and assume $\tau_z$ FM order in the A layer; after that, for each magnetic order pattern, we solve the $H_t^f+H_V$ problem with $H_t^f$ given in Eq.~\ref{eq:Ht_appendix}. We use a $\sqrt{3}\times \sqrt{3}$ unit cell, allowing the exciton order to break translation symmetry. 
In the end, for each magnetic order pattern we add the spin coupling energy from the $H_J$ term and compare the total energies. The phase diagram is shown in the main text. \section{Exciton order parameter \label{appendix:exciton order parameter}} In this section we discuss the symmetry of the exciton order parameters. Following the Schwinger boson theory, we only need to consider a spinless model for the charge degree of freedom. The exciton arises from pairing between $f_A$ and $h_B=f^\dagger_B$. This exciton order parameter \begin{equation} \chi_{ij} = \sum_{L,Q,\v k} \langle f^\dagger_{A,L\v K+\v k} f_{B,L\v K+Q\v K+\v k}\rangle e^{-i(L\v K+\v k)(\v r_i-\v r_j)+iQ \v K \v r_j} \end{equation} induces hybridization between the A-layer particle pocket at $L \v K$ and the B-layer hole pocket at $(L+Q) \v K$. We note in passing that Eq.~5 in the main text is obtained by defining the Fourier transform \begin{equation} \chi_{Q,L} = \chi_{Q,L}(r_i, r_j) = \sum_{\v k} \langle \chi^{Q,L}_{\v k}\rangle e^{i\v k\v (r_j-r_i)} = \sum_{\v k}\langle f^\dagger_{A,L\v K+\v k} f_{B,L\v K+Q\v K+\v k}\rangle e^{i\v k\v (r_j-r_i)} \end{equation} To extract the intrinsic exciton pairing angular momentum, we follow the convention of first shifting the band bottom of the A layer and the band top of the B layer to the vicinity of $\Gamma$ through a gauge transform, \begin{align} &f_{A,i}\rightarrow f_{A,i}e^{-i \delta L \v K\cdot \v r_i} \notag\\ &f_{B,j}\rightarrow f_{B,j}e^{-i (\delta L+\delta Q) \v K\cdot \v r_j} \label{gt} \end{align} For $\phi_A = (-2/3+1/12)\pi$, $\phi_B = (2/3+1/12)\pi$, the particle and holon pockets before and after the gauge transform are shown in Fig.\ref{fermi_surface} as an example. 
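The effective hopping phases that enter this gauge bookkeeping can be evaluated directly from the spinors. Below is a small numerical sketch (the function name and the spiral parametrization are our own illustrative choices) computing $\tilde{t}\,e^{i\tilde{\phi}} = F_i^{\dagger} e^{i\phi\tau_z} F_j$ for a canted spiral, with the bare amplitude set to 1:

```python
import numpy as np

def eff_hopping(phi, theta, Q_dot_rij):
    """Amplitude and phase of F_i^dag exp(i*phi*tau_z) F_j for a canted
    spiral F_j = exp(i/2 (Q.r_j) tau_z) (cos(theta/2), sin(theta/2))^T,
    with site i placed at the origin (illustrative sketch)."""
    spinor = np.array([np.cos(theta/2.0), np.sin(theta/2.0)])
    Fi = spinor
    Fj = np.exp(0.5j*np.array([1.0, -1.0])*Q_dot_rij)*spinor
    amp = np.vdot(Fi, np.exp(1j*phi*np.array([1.0, -1.0]))*Fj)
    return np.abs(amp), np.angle(amp)

# FM order (theta = 0): the charge inherits the bare phase phi unchanged
print(eff_hopping(0.3, 0.0, 0.0))
# 120-degree spiral (theta = pi/2, Q.r_ij = 2*pi/3) at phi = 0:
# the amplitude is halved and the effective phase vanishes
print(eff_hopping(0.0, np.pi/2.0, 2.0*np.pi/3.0))
```

The resulting phases $\tilde\phi_{A/B}$ determine the effective flux and hence where the pockets sit before gauging.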
\begin{figure}[!ht] \centering \subfloat[][]{\includegraphics[width=.2\textwidth]{{image/appendix_p+ip/t2a=1.0000_t2b=1.0000_phia=-1.8326_phib=-2.3562_M=2.0000_2d.pdf}}} \subfloat[][]{\includegraphics[width=.2\textwidth]{{image/appendix_p+ip/t2a=1.0000_t2b=1.0000_phia=-1.8326_phib=-2.3562_M=2.0000_1d.pdf}}} \subfloat[][]{\includegraphics[width=.2\textwidth]{{image/appendix_p+ip/t2a=1.0000_t2b=1.0000_phia=0.2618_phib=-2.3562_M=2.0000_2d.pdf}}} \subfloat[][]{\includegraphics[width=.2\textwidth]{{image/appendix_p+ip/t2a=1.0000_t2b=1.0000_phia=0.2618_phib=-2.3562_M=2.0000_1d.pdf}}} \caption{Band structure and Fermi surface without pairing for $D=2.0$, (a)(b) $\phi_A=-\frac{7}{12}\pi$, $\phi_B=\frac{3}{4}\pi$, (c)(d) gauged to $\phi_A=\frac{1}{12}\pi$, $\phi_B=\frac{3}{4}\pi$. We choose the gauge under which the centers of the Fermi pockets are at $\Gamma$. For this choice of $\phi_A, \phi_B$, the two Fermi surfaces are perfectly nested after gauging, leading to a logarithmic divergence of the exciton susceptibility. However, the Fermi liquid is still stable in this case for regions without full valley polarization.} \label{fermi_surface} \end{figure} We focus on the simple scenario with only one particle and one hole pocket. Then $\delta L$ and $\delta L+\delta Q$ correspond to the band bottom of the A layer and the band top of the B layer in the free model (before exciton condensation). After the gauge transform Eq.\ref{gt}, the exciton order should only couple two pockets at $\v \Gamma_M$, and therefore it carries no momentum, $Q'=Q-\delta Q=0$. Now we are ready to read off the exciton order $\chi(\v k)$ from the Fourier transform of the gauged $\chi_{ij}$. To determine the large momenta $\delta L$ and $\delta Q$ for the gauge transform, we note that the mean field Hamiltonians for $f_A$ and $f_B$ are just two triangular lattice tight-binding models with only nearest neighbour hopping, \begin{align} H_A &= -\sum_{\langle i,i'\rangle_A} \tilde{t}^A e^{i\tilde{\phi}^A_{ii'}} f^{A\dagger}_i f^A_{i'} +h.c. 
\notag \\ H_B &= -\sum_{\langle j,j'\rangle_B} \tilde{t}^B e^{i\tilde{\phi}^B_{jj'}} f^{B\dagger}_j f^B_{j'} +h.c. \end{align} whose band top/bottom positions can be easily read off from the hopping phases $\tilde{\phi}_A$, $\tilde{\phi}_B$. (Here we use a tilde to distinguish these from the valley contrasting phases in the original spinful model; they are the effective hopping phases for the charge once the spin is integrated out.) Namely, the A-band bottom is at $\v \Gamma_M$ for $\tilde{\phi}_A\in (-\pi/3, \pi/3)$, while the band top is at $\v \Gamma_M$ for $\tilde{\phi}_B\in (2\pi/3, 4\pi/3)$. Furthermore, under the gauge transform Eq.\ref{gt} the $\tilde{\phi}$ transform as \begin{align} \tilde{\phi}^{ii'}_A &\rightarrow \tilde{\phi}^{ii'}_A + \delta L \v K\cdot (\v r_{i}-\v r_{i'})\notag \\ \tilde{\phi}^{jj'}_B &\rightarrow \tilde{\phi}^{jj'}_B + (\delta L+\delta Q) \v K\cdot (\v r_{j}-\v r_{j'}) \end{align} and the order parameter becomes \begin{equation} \tilde{\chi}_{ij} = \chi_{ij} e^{i \delta L\v K\cdot \v r_i-i(\delta L\v K+\delta Q\v K) \cdot \v r_j} \end{equation} \begin{figure}[!ht] \centering \subfloat[][]{\includegraphics[width=.3\textwidth]{{image/summary_pwave/M=0.0000_V=6.0000_pairing_phase.pdf}}}\quad \subfloat[][]{\includegraphics[width=.3\textwidth]{{image/summary_pwave/M=2.0000_V=6.0000_pairing_phase.pdf}}}\\ \caption{ Momentum space distribution of the exciton order. (a) $p+ip$ order of the ECI at $D=0.0$, $V=6.0$ (b) $s+p$-wave exciton order of the EI at $D=2.0$, $V=6.0$. A strong nematicity is observed in the latter example. The transition from ECI to EI happens when the vortex in $\chi(\v k)$ gets pushed out of the small Fermi surface region by the $s$-wave component. Plotted here is the phase of the gauged exciton order. 
} \label{vortex} \end{figure} Therefore our procedure for extracting pairing symmetries is as follows \begin{itemize} \item (1) For a given spin configuration $F_i$, extract the effective hopping phase $\tilde{\phi}^{ij}_{A/B}=\mathrm{Arg}(\langle F_i|e^{i\phi^{ij}_{A/B} \sigma_z}|F_j\rangle)$ \item (2) Find the $(\delta Q,\delta L)$ that sends $\tilde{\phi}_{A/B}$ to the proper ranges mentioned above. \item (3) Fourier transform the gauged exciton order $\tilde{\chi}_{Q'L'}=\tilde{\chi}_{ij} e^{i L'\v K\cdot \v r_i-i(L'\v K+Q'\v K) \cdot \v r_j} $ to read off its remaining momentum $Q'$ and angular momentum $L'$. \item (4) Fourier transform to extract $\tilde{\chi}(\v k) = \sum_{ij}\tilde{\chi}_{ij} e^{i\v k(\v r_i-\v r_j)} = \sum_{ij}\chi_{ij} e^{i(L\v K+\v k)r_i-i(L\v K+Q\v K+\v k)\v r_j}$ \end{itemize} Note that by gauging the pockets to $\v \Gamma_M$ following step (2), we have basically picked a gauge under which the center of mass of the exciton pair carries momentum $\v Q=0$. In some cases, there can be coexisting $p$-wave and $s$-wave exciton orders. We extract each component through an angular Fourier transform of $\chi(\v k)$ \begin{equation} \chi_l(|\v k|) = \int d\theta_{\v k} \chi(\v k) e^{-il\theta_{\v k}} \end{equation} Alternatively, one can read off the strength of the $p$/$s$ wave orders from the $(Q',L')$ Fourier components of $\chi_{ij}$. In Fig.\ref{vortex} we show typical patterns of the phase variation of the exciton order $\chi$ in momentum space for the $p+ip$ ECI and the $s+p$ nematic EI. \section{Competition between the $s$ wave and $p$ wave exciton condensation} In this section we discuss why $p$ wave is favored over $s$ wave exciton condensation. The nearest neighbour interaction is decoupled into \begin{equation} H_{V} = -V \sum_{\langle ij\rangle} \chi_{ij}^* c^\dagger_{i} f_{j} +h.c. = -V\sum_{\v k,l} \chi^*_l F^l_{\v k} c^\dagger_{\v k} f_{\v k} +h.c. 
\end{equation} where $F^l_\v k = \sum_{\v \delta}^{NN} e^{il\theta_\delta}e^{-i\v k \v \delta}$, and $\chi_l =\frac{1}{3}\sum_{\v \delta}^{NN} \chi_{\v \delta}e^{-il\theta_{\v\delta}}$. The self-consistent equation reads \begin{align} \chi_{l} &= \frac{1}{3}\sum_{\v k,\omega} F^{l*}_{\v k}\langle c^\dagger_\v k f_\v k\rangle\\ & = -\frac{1}{3}\sum_{\v k,\omega} G^c(\v k, i\omega) V\chi_{m} F^m_{\v k} \tilde{G}^f (\v k, i\omega)F^{l*}_{\v k}\\ & = -\frac{V}{3}\sum_{\v k,\omega} G^c(\v k, i\omega) \tilde{G}^f (\v k, i\omega) F^{l*}_{\v k} F^m_{\v k} \chi_{m} \end{align} Here $G^c = (i\omega-\epsilon^A_\v k)^{-1}$ and $\tilde{G}^f = (i\omega-\epsilon^B_\v k -\Sigma_f(\v k,i\omega))^{-1}$. This is an eigenvalue problem. The off-diagonal ($l\neq m$) contributions would be responsible for the $s+p$ wave exciton order observed. For notational simplicity, we will restrict ourselves to the diagonal part. Existence of a non-trivial solution for $\chi_l$ demands \begin{align} \frac{1}{V} &= -\sum_{\v k, \omega} G^c(\v k, i\omega) \tilde{G}^f (\v k, i\omega) |F^{l}_{\v k}|^2/3\\ &= -\sum_{\v k} \frac{f(\xi_{\v k,+})-f(\xi_{\v k,-})}{\xi_{\v k,+}-\xi_{\v k,-}}|F^{l}_{\v k}|^2/3\\ &= - \int^{E_+}_{E_-} d^2\v k \frac{|F^{l}_{\v k}|^2/3}{\xi_{\v k,+}-\xi_{\v k,-}}\\ &= - \int^{E_+}_{E_-} d^2\v k \frac{|F^{l}_{\v k}|^2}{\sqrt{\delta_\v k^2+4V^2|\chi_l|^2|F^{l}_{\v k}|^2/3}} \end{align} where in the third line we have considered the simplest case with particle hole symmetry $\epsilon^{A/B}_\v k = \epsilon_F\pm\delta_{\v k}/2$, namely, the particle and hole pockets are perfectly nested. A short discussion of cases without perfect nesting will be presented later. Defining $\Phi_l(\v k) = \frac{1}{\sqrt{3}}V\chi_l F^l_\v k$, we find \begin{equation} \frac{1}{V} = - \int^{E_+}_{E_-} d^2\v k \frac{|F^{l}_{\v k}|^2}{\sqrt{\delta_\v k^2+4\Phi_l^2(\v k)}} \label{sc} \end{equation} This is analogous to the self-consistency equation for the BCS instability. 
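The channel decomposition $\chi_l =\frac{1}{3}\sum_{\delta} \chi_{\delta}e^{-il\theta_{\delta}}$ used here is straightforward to implement; the helper below is an illustrative sketch with the three bond angles taken as $\theta_\delta = 0, 2\pi/3, 4\pi/3$:

```python
import numpy as np

def bond_channels(chi_bonds):
    """Project three NN bond order parameters chi_delta onto the angular
    momentum channels chi_l = (1/3) sum_delta chi_delta exp(-i l theta_delta),
    with bond angles theta_delta = 0, 2pi/3, 4pi/3 (illustrative sketch)."""
    thetas = np.array([0.0, 2.0*np.pi/3.0, 4.0*np.pi/3.0])
    chi = np.asarray(chi_bonds, dtype=complex)
    return {l: np.mean(chi*np.exp(-1j*l*thetas)) for l in (0, 1, -1)}

# pure s: identical bonds leave only the l = 0 channel
print(bond_channels([1, 1, 1]))
# pure p+ip: bond phases winding by 2pi/3 leave only the l = 1 channel
print(bond_channels(np.exp(1j*np.array([0.0, 2.0*np.pi/3.0, 4.0*np.pi/3.0]))))
```

In practice this projection is applied to the converged $\chi_{ij}$ on each set of symmetry-related bonds.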
At zero temperature the logarithmic divergence of the integral at $\Phi=0$ indicates that an infinitesimal $V$ would lead to exciton condensation. So the Fermi liquid is always unstable in the $V>0$ regime. From Eq.\ref{sc} we obtain the scaling of the exciton order \begin{equation} \chi^l \sim \frac{\rho(\epsilon_F)}{V\eta^{l}(\epsilon_F)}e^{-\frac{1}{2V\eta^{l}(\epsilon_F)}} \end{equation} where $\eta^l(\epsilon_F)$ is the weighted average of the form factor in channel $l$ around the Fermi surface \begin{equation} \eta^l(\epsilon_F) = \int_{FS}\frac{d\v k_F}{(2\pi)^2}\frac{|F^{l}_{\v k}|^2}{v_{\v k_F}} \end{equation} where the weight is the density of states. This expression reduces to \begin{equation} \eta^l(\epsilon_F) \sim \rho(\epsilon_F) \int_{FS} d\v k_F |F^{l}_{\v k_F}|^2 \label{F12} \end{equation} in the small $k_F$ limit, where the band structure has an approximate rotational symmetry. The angular momentum channel of the condensed exciton is efficiently predicted by comparing the parameter $\eta^l(\epsilon_F)$ across the angular momentum channels $l=0,1,-1$. \begin{figure}[!ht] \centering \subfloat[][]{\includegraphics[width=.3\textwidth]{image/appendix_p+ip/M-V_pd_x.pdf}}\quad \subfloat[][]{\includegraphics[width=.3\textwidth]{image/appendix_p+ip/form_factor_2d.pdf}} \caption{(a) Exciton density. The contour in red corresponds to $n_{A(c)}=0.25$. It closely tracks the onset of the $p+ip$ order. (b) Difference between the form factors in the $l=0$ and $l=1$ channels. The purple dashed contour marks the momenta where the two form factors are equal. Overlaid contours show the Fermi surface at a sequence of exciton densities $n_A$. Above $n_A=0.2$, the Fermi surface starts to pass through the area with a stronger $p$-wave form factor. At $n_A=0.3$ the Fermi surface passes predominantly through the blue area. Somewhere in between these two densities the pairing instability should change from $s$-wave to $p$-wave. 
This estimate is consistent with the critical density read off from (a).} \label{instability} \end{figure} In the following, we explore the cases without nesting between the particle and hole pockets. For convenience we denote the susceptibility $\Pi_{\v k'} = \int d\omega G^A(\v k', \omega)G^B(\v k', \omega)$. Generally speaking, at the transition point $\chi=0$, the exciton susceptibility is \begin{equation} \Pi_{\v k} \sim -\frac{f(\epsilon^A_{\v k})-f(\epsilon^B_{\v k})}{\epsilon^A_{\v k} - \epsilon^B_{\v k}}\ \end{equation} where $f(\epsilon) = (1+e^{\beta (\epsilon-\epsilon_F)})^{-1}$ is the Fermi-Dirac distribution. The dominant contribution comes from the $\v k$ points with $\epsilon^A_{\v k}=\epsilon^B_{\v k}=0$. For generic parameters $(\phi_A, \phi_B)$, the two Fermi surfaces do not overlap, but the total filling $\nu_T=1$ dictates that they intersect at $3N$ momentum points, where $N$ is an even number. Assume $F^l_{\v k}$ varies slowly in the vicinity of such an intersection. The dispersions can be linearized as \begin{equation} \epsilon^A(k) = u_A k_x + v_A k_y, \quad \epsilon^B(k) = u_B k_x + v_B k_y \end{equation} For simplicity we take $u_A = -u_B = u$ and $v_A = v_B = v$, and assume $u/v=\cot\theta>1$, where $2\theta$ is the angle between the two Fermi surfaces, \begin{equation} \int_0^\Lambda d^2\v k \Pi_{\v k} =4\int^\Lambda_0 dk_y\int^\Lambda_{\frac{v}{u}k_y} dk_x \frac{1}{2u k_x}=\frac{2}{u}\int^\Lambda_0 dk_y \ln \frac{u\Lambda}{vk_y} = \frac{2\Lambda}{u}\Big(1+\ln\frac{u}{v}\Big) \end{equation} Each of these intersection points thus contributes a finite amount to the susceptibility. This means that a finite interaction strength $V=V_c>0$ is now required for the condensation to happen. A logarithmic divergence shows up as we approach the perfect nesting limit $\theta\rightarrow0$, and the $p_x\pm ip_y$ wave exciton is favored as long as these intersection points fall into the $|F^{\pm 1}_{\v k}|>|F^0_{\v k}|$ region. 
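In the nested case, the BCS-like condition can also be checked numerically. Assuming for illustration a constant density of states $\rho$ and a cutoff $E_c$, so that the condition simplifies to $1/V=\rho\,\mathrm{arcsinh}(E_c/\Delta)$ (a stand-in for Eq.~\ref{sc}, with our own parameter values), the gap follows $\Delta=E_c/\sinh\!\big(1/(\rho V)\big)\simeq 2E_c\,e^{-1/(\rho V)}$ at weak coupling:

```python
import numpy as np
from scipy.optimize import brentq

def gap(V, rho=0.3, Ec=1.0):
    """Solve 1/V = rho * arcsinh(Ec/Delta) for the hybridization gap Delta.
    Constant density of states rho and cutoff Ec are illustrative choices."""
    f = lambda Delta: rho*np.arcsinh(Ec/Delta) - 1.0/V
    return brentq(f, 1e-15, 10.0*Ec)

for V in (0.8, 1.0, 1.2):
    exact = 1.0/np.sinh(1.0/(0.3*V))     # closed-form solution (Ec = 1)
    weak = 2.0*np.exp(-1.0/(0.3*V))      # weak-coupling asymptotics
    print(V, gap(V), exact, weak)
```

The numerical root agrees with the closed form, and the exponential sensitivity to $\rho V$ mirrors the $e^{-1/(2V\eta^l)}$ scaling quoted above.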
For parameters closer to the experimental setup (see caption of Fig.\ref{realistic}), we show in Fig.\ref{fermi_surface} the Fermi pockets and band structure before exciton condensation happens. For the model used in the main text (parameters listed in the caption of Fig.\ref{pd}), there is an exact nesting of the particle and hole pockets after gauging. In Fig.\ref{instability}(b) we overlay the Fermi surface of this model on top of the color map of the function $\Gamma_\v k = |F^0_\v k|^2-|F^1_\v k|^2$. This allows us to read off a critical density $n_{A(c)}\sim 0.25$ that is consistent with the transition between the $s$ and $p+ip$ exciton orders indicated by the red contour in Fig.\ref{instability}(a). \section{Phase diagram for other values of $\Phi_A, \Phi_B$} In Fig.\ref{realistic} we show the phase diagram obtained using parameters closer to the experimental system of the AB-stacked WSe$_2$-MoTe$_2$ bilayer. The phase diagram Fig.\ref{realistic} is qualitatively similar to Fig.\ref{pd} in the main text. A $p+ip$ instability shows up for exciton density $n_A>0.35$. \begin{figure}[!h] \centering \includegraphics[width=.3\textwidth]{image/appendix_phasediagram/M-V_pd_gap.pdf} \includegraphics[width=.3\textwidth]{image/appendix_phasediagram/M-V_pd_x.pdf} \caption{Schwinger boson mean field phase diagram in the $D-V$ plane. (a) Colors show the single electron charge gap $\Delta_c$. We use the following parameters: $U=50$, $t_A=2$, $t_B=1$, $\phi_A = (-2/3+1/12)\pi$, $\phi_B = (2/3+1/12)\pi$, $V'=0.5V$. (b) shows the exciton density $n_A$. The dashed contour in red marks $n_A=0.35$, which maps out the phase boundary between the ECI and EI.} \label{realistic} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.3\linewidth]{image/appendix_phasediagram/phia-phib_Ising.pdf}\quad \includegraphics[width=0.3\linewidth]{image/appendix_phasediagram/phia-phib_gap.pdf} \caption{Hartree-Fock phase diagram for $V=4.0$, $D=0.5$. Other parameters are the same as in Fig.\ref{pd}. 
(a) Sign of $\langle \tau^z_A\tau^z_B \rangle$. (b) Charge gap. For time-reversal breaking phases, there is a two-fold ground state degeneracy. Our convention is to focus on the one with $\langle \tau^z_A\rangle \sim \langle S^z_A\rangle>0$. } \label{app_pd} \end{figure} In Fig.\ref{app_pd} we show the A-B valley alignment and the charge gap for the same parameter space as in Fig.\ref{pd}. For $\Phi=n\pi$, the model has an enlarged SU(2)$\times$SU(2) symmetry. Therefore the valley-polarized and spin-polarized states are degenerate. A finite valley contrasting flux breaks this symmetry down to time reversal, lifting the degeneracy and stabilizing one of the two phases on either side of $\Phi=0$. For this particular choice of $V/t=4$, we find a Fermi liquid in the vicinity of $\Phi_A=\Phi_B=\pi$ and exciton Chern insulators in a wide range of $\Phi$. As we will find in Fig.\ref{app_SU2} of Appendix~\ref{APP_SU2}, with a stronger $V$ we get a nematic EI at $\Phi\sim0\;(\mathrm{mod}\ \pi)$. \section{$U(2)\times U(2)$ symmetric point with $\Phi_A=\Phi_B=0$} \label{APP_SU2} At $\Phi_A=\Phi_B=0$, there is a valley SU(2) symmetry in each layer. In Fig.\ref{app_SU2} we present the $D-V$ phase diagram for this case. We show in Fig.\ref{app_SU2}(b) the magnitude of the order parameters. The $p+ip$ order now appears only in a narrow region around $V=4.0$, $D=0$. \begin{figure}[!ht] \centering \includegraphics[width=0.23\linewidth]{image/appendix_SU2/M-V_pd_gap.pdf} \includegraphics[width=0.23\linewidth]{image/appendix_SU2/M-V_pd_chi.pdf} \includegraphics[width=0.23\linewidth]{image/appendix_SU2/M-V_pd_x.pdf} \includegraphics[width=0.23\linewidth]{image/appendix_SU2/M-V_Chernnumber.pdf}\caption{Schwinger boson mean field phase diagram in the $D-V$ plane without valley contrasting flux. $t_{AB}=0$ and $t_A=t_B=1$. (a) Charge gap. (b) Magnitude of the exciton order. (c) Exciton density. (d) Chern number.} \label{app_SU2} \end{figure} We start our discussion from the low exciton density regime $n_A\sim0$. 
As before, we find the AF-EI phase with $s$ wave exciton order at small exciton density. But now the AF-EI phase is more robust, and the NEI phase with $p$ wave exciton order sets in only when $n_A> 0.35$. The reason is the following. With $\tau_z$ polarized in both layers, there are two degenerate holon pockets from the B layer around the $\v K_M$ and $\v K'_M$ points, and one electron pocket from the A layer at $\v \Gamma_M$. In this case the exciton order only hybridizes one of the hole pockets with the electron pocket and is not effective in gapping out the electron and hole pockets. This penalizes the energy of the FM phase. On the other hand, with the $120^\circ$ AF order in the B layer, there is only one hole pocket at $\Gamma_M$, and an $s$ wave exciton order effectively gaps out the electron pocket and the hole pocket. So the AF-EI phase has a better energy and occupies a larger phase region when $\Phi_A=\Phi_B=0$. \begin{figure}[!ht] \includegraphics[width=0.3\linewidth]{{image/appendix_SU2/form_factor_2d_1.pdf}} \includegraphics[width=0.3\linewidth]{{image/appendix_SU2/form_factor_2d_2.pdf}} \caption{(a) Form factor difference between the $l=1$ and $l=0$ channels. The overlaid contours are Fermi surfaces at various fillings for a triangular lattice tight binding model with no hopping phases. This plot helps determine the pairing instability when the B layer is in the $120^\circ$ magnetic order, so that the holon and particle bands are perfectly nested. Numbers labeling these contours correspond to the $n_A$ associated with the two Fermi surfaces. (b) Gauged Fermi surfaces for $n_A=0.35$. The gauge choice is such that the two pockets are both centered at $\v \Gamma_M$. At this density, the intersections between the two Fermi surfaces enter the region where the $p+ip$ pairing is stronger than that of the $s$ wave.} \label{instability2} \end{figure} On the other hand, consider the competing state with ferromagnetic order in the B layer. 
In contrast to the AFM state, with an FM background there is no $\pi$ Berry flux contributed by the spin configuration. As a result, the holon band gets inverted, and the Fermi surface now sits in the vicinity of $\v K_M$ and $\v K'_M$, where the $p$ wave form factors $F^{\pm1}_\v k$ dominate over that of the $s$ wave (see the orange contour in Fig.\ref{instability2}(a), or the blue contour in Fig.\ref{instability2}(b) for the holon pocket after gauging). So we get the ECI phase with $\tau_z$ FM when $n_A>0.35$. However, the ECI phase is quite fragile and unstable to the nematic EI (NEI) phase for these parameters. To get a robust Chern insulator, a small valley contrasting flux $\Phi_A$ or $\Phi_B$ is needed. \section{Competition between band Chern insulator and excitonic insulator with $t_{AB}$} With a strong enough $t_{AB}$ to hybridize the two layers, a band Chern insulator (bCI) with canted $120^\circ$ antiferromagnetic order was found in a previous study\cite{devakul2022quantum}. Note that band Chern insulator may not be a good name, because the phase apparently still requires strong intra-layer repulsion to generate the AF order. Here we mainly focus on the effective Hamiltonian $H_f$ of the charge part after the magnetic order is fixed. Then the phase in Ref.~\onlinecite{devakul2022quantum} is realized in the large $t_{AB}$ but small $V$ limit. The hopping of $f$ there is determined entirely by the original single electron hopping $t_{AB}$, and we call it a band insulator simply to distinguish it from the excitonic insulator in this paper. In Fig.\ref{app_CI2ECI} we investigate the transition from this band Chern insulator to our interaction driven exciton insulators by turning on the repulsions $V$ and $V'$. In the presence of a $t_{AB}$ term, the U(1) symmetry associated with layer charge conservation is explicitly broken. 
\begin{figure}[!ht] \centering \includegraphics[width=0.3\linewidth]{image/appendix_CI2ECI/t1-V_gap.pdf} \includegraphics[width=0.3\linewidth]{image/appendix_CI2ECI/t1-V_Chernnumber.pdf} \includegraphics[width=0.3\linewidth]{image/appendix_CI2ECI/t1-V_Ising.pdf} \caption{Hartree-Fock phase diagram in the $t_{AB}-V$ plane. (a) Charge gap. (b) Chern number. (c) The sign of $\tau^z_A \tau^z_B$ when it is nonzero. This phase diagram is calculated using the same set of parameters as in Ref.~\onlinecite{devakul2022quantum}: $t_A=t_B=1$, $\Phi_A=2\pi$, $\Phi_B=0$, and $D=3$.} \label{app_CI2ECI} \end{figure} The effective hopping of the bCI phase is actually equivalent to the $p+ip$ exciton order in our ECI phase. To simplify the discussion, we follow the Schwinger boson mean field theory, where the spin magnetic order contributes to the charge part through the Berry phase. The mean field Hamiltonian for the charge part is again a purely spinless model: \begin{align} H_c^{MF} =&- \sum_{\langle i,j \rangle_{A,B}} t_{AB} (F_i^{a\dagger} F^a_j) f^\dagger_i f_j + h.c. \\ \nonumber&- \sum_{a=A,B} \sum_{\langle \langle ij \rangle \rangle_a} t_a (F_i^{a\dagger} e^{i \phi_{ij;a} \tau_z} F^a_j) f^\dagger_i f_j + h.c. \\ \nonumber &- V\sum_{\langle ij \rangle} \chi_{ji} f^\dagger_i f_j + D\sum_{i \in A} n_i -D \sum_{j \in B} n_j \end{align} We define the effective charge hopping phase as \begin{equation} t^f_{ij;a} =t_a (F_i^{a\dagger} e^{i \phi_{ij;a} \tau_z} F^a_j) =\tilde{t}_a e^{i\tilde{\phi}_a} \end{equation} along with the effective flux $\tilde{\Phi}_a=3\tilde{\phi}_a$. $\chi_{ji}=\langle f^\dagger_i f_j \rangle$ gives the exciton order. The bCI has a full $\tau_z$ polarization in layer A. In layer B it has a $120^\circ$ order in the $\tau_{xy}$ plane, along with a uniform $\tau^z$ component, which satisfies $\langle \tau^z_A\tau^z_B\rangle>0$.
The Schwinger boson mean field ansatz is \begin{align} \label{gauge} F^A_i &= (1,0)^T\\ F^B_j &= e^{\frac{i}{2}(\sigma^z+\mathit{\mathbb{1}})\v K \cdot \v r_j}(\cos{\frac{\theta}2},\sin{\frac{\theta}2})^T \end{align} where $\theta\in(0,\pi/2)$ is the angle of canting. There is a competition between $t_{AB}$ and $V$. At $V=0$ and before layer hybridization, both the electron pocket of the A layer and the holon pocket of the B layer are centered at $\v K_M$ under the gauge choice of Eq.\ref{gauge}. The effect of the $t_{AB}$ term is similar to that of the mean field Hamiltonian $H_V = - V\sum_{\langle ij \rangle} \chi_{ji} f^\dagger_i f_j$ when $\chi_{Q,L}\neq0$ for $Q=L=0$, namely, a completely uniform distribution of exciton order on every bond. To match the discussion of Sec.~\ref{appendix:exciton order parameter}, we gauge the electron and hole pockets back to $\v \Gamma_M$ by the gauge transformation \begin{align} &f_{A,i}\rightarrow f_{A,i}e^{-i \v K\cdot \v r_i} \notag\\ &f_{B,j}\rightarrow f_{B,j}e^{-i \v K\cdot \v r_j} \label{gt} \end{align} under which the $t_{AB}$ term becomes \begin{equation} t_{AB} f^\dagger_{A,i} f_{B,j} \rightarrow t_{AB} e^{i\v K \cdot (\v r_j-\v r_i)} f^\dagger_{A,i} f_{B,j} = \tilde{t}_{AB} f^\dagger_{A,i} f_{B,j} \end{equation} Now the new inter-layer tunneling $\tilde t_{AB}$ transforms non-trivially under $C_3$ rotations. At the mean field level for $f$, this $\tilde{t}_{AB}$ is equivalent to the $p+ip$ exciton order in the ECI phase. So the charge parts of the bCI and our ECI have the same mean field ansatz, and thus both of them have $C=1$. When increasing the $V$ term, the exciton order $\chi_{ij}$ obtained from decoupling the interaction dominates over the $t_{AB}$ term. At low density, $s$ wave exciton condensation is favored. Hence the effective hopping now has both a $p+ip$ component from the external $t_{AB}$ term and an $s$ component from the exciton order. Consequently, we expect a NEI with $s+p$ exciton order.
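To make the $C_3$ statement explicit, the bond-resolved phases of $\tilde t_{AB}$ can be written out. (This is a sketch; it assumes each A site couples to three nearest B sites with $C_3$-related bond vectors $\v \delta_1, \v \delta_2, \v \delta_3$, which is the geometry implicit above.) Under the gauge transformation in Eq.\ref{gt},
\begin{align}
\tilde t_{AB}(\v \delta_j) = t_{AB}\, e^{i \v K \cdot \v \delta_j}\,, \qquad
e^{i \v K \cdot (\v \delta_{j+1} - \v \delta_j)} = e^{\mp 2\pi i/3}\,,
\end{align}
since $\v \delta_{j+1}-\v \delta_j$ is a triangular lattice vector $\v a$ with $\v K \cdot \v a = \mp 2\pi/3 \pmod{2\pi}$. The three bond phases are therefore $(1, \omega, \omega^2)$ with $\omega = e^{2\pi i/3}$: they wind by $\pm 2\pi$ around each site, carrying angular momentum $L=\pm 1$, which is exactly the $p\pm ip$ bond pattern.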
This NEI has $120^\circ$ magnetic order in both layers with the same chirality and no $\tau_z$ polarization. This is demonstrated in Fig.\ref{app_CI2ECI}(c). The strongest bond connects the two AB sites with parallel valley $\tau_{xy}$. So this NEI is connected to the $s$ wave AF-EI phase we found earlier at $t_{AB}=0$. For the parameters $\Phi_A=2\pi, \Phi_B=0$ used here, the ECI phase is quite fragile, as shown in the previous section. But if we choose a different flux $\Phi$, the bCI phase can be driven into the ECI phase by increasing $V$. Note that although the bCI and the ECI have the same $p+ip$ order at the mean field level of the electron and holon $f$, their magnetic orders are different, and there must still be a phase transition between them. \begin{figure}[!ht] \centering \includegraphics[width=0.9\linewidth]{image/appendix_CI2ECI/t1-V_pd_chi.pdf} \caption{Order parameter in each angular momentum channel. This is the result before gauging. After the gauge transform (see text), the angular momentum quantum number $L$ gets shifted by $1$.} \label{app_CI2ECI_chiL} \end{figure} The bCI phase of Ref.~\onlinecite{devakul2022quantum} and our ECI phase have the same origin of the non-zero Chern number. But there are some essential differences: (1) The effective hopping in the ECI phase is spontaneously generated from the inter-layer Coulomb interaction, while the hopping of the bCI phase is inherited from the single-electron hopping $t_{AB}$. (2) The $p\pm ip$ exciton order of the ECI phase does not rely on the $120^\circ$ magnetic order in the B layer; in our ECI phase both layers are fully valley polarized. On the other hand, the $120^\circ$ order is essential for the bCI phase. (3) The bCI picture is appropriate only when the inter-layer interaction $V$ is small. In real systems the inter-layer tunneling $t_{AB}$ is small, but the repulsion $V$ is quite large. So we should be in the limit $t_{AB}\ll t_A\ll V$.
Thus the effective hopping of the charge $f$ is dominated by the exciton order $V \chi_{ij}$, which can be $s$ wave or $p\pm ip$ depending on parameters. In the small $t_{AB}$, large $V$ regime appropriate to realistic systems, the bCI is unstable, and a Chern insulator phase, if it exists, should originate from exciton order as in our ECI phase. \section{Excitonic topological insulator at $\nu_T=2$} Let us also briefly discuss the total filling $\nu_T=n_A+n_B=2$. When $D$ is large, the ground state should be a band insulator with two electrons per triangular site on the B layer, while the A layer is empty. Again, electrons are transferred to the A layer when $D$ is decreased. Experimentally, a topological insulator with helical edge modes is observed at $\nu_T=2$\cite{li2021quantum} for intermediate values of $D$. Presumably two Chern bands with $C=\pm 1$ are occupied. Each Chern band can be described by a mean field theory similar to our discussion at $\nu_T=1$ and can still arise from the excitonic mechanism. Both intra-valley and inter-valley exciton condensation are possible, leading to either a quantum valley Hall (QVH) insulator or a quantum spin Hall (QSH) insulator. The QVH insulator can cross over to a band insulator through pure single-particle physics. The QSH insulator has additional inter-valley-coherent (IVC) order and may be a parent state of superconductivity upon doping\cite{po2018origin,kozii2020superconductivity,chatterjee2021inter}. We hope future experiments can distinguish these two different states. \end{document}
\section{Introduction} Initializing Deep Neural Networks (DNNs) correctly is crucial for trainability and convergence. In recent years, there has been remarkable progress in tackling the problem of exploding and vanishing gradients. One line of work utilizes the convergence of DNNs to Gaussian Processes in the limit of infinite width \citep{neal1996priors, lee2018deep, matthews2018gaussian, novak2018bayesian, garriga2018deep, hron2020infinite, yang2019tensor}. The infinite width analysis is then used to determine critical initialization for the hyperparameters of the network \citep{he2015delving, poole2016exponential, schoenholz2016deep, lee2018deep, roberts2021principles, doshi2021critical}. It has further been shown that dynamical isometry can improve the performance of DNNs \citep{pennington2018spectral, xiao2018dynamical}. Exploding and vanishing gradients can also be regulated with special activation functions such as SELU \citep{klambauer2017self-normalizing} and GPN \citep{lu2020bidirectionally}. Deep Kernel Shaping \citep{martens2021deep, zhang2022deep} improves the trainability of deep networks by systematically controlling $Q$ and $C$ maps. Normalization layers such as LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch} and GroupNorm \citep{wu2018group} facilitate training of DNNs by significantly enhancing the critical regime \citep{doshi2021critical}. There have also been algorithmic attempts at regulating the forward pass, such as LSUV \citep{mishkin2015lsuv}. Another line of work sets networks with residual connections to criticality by suppressing the contribution from the residual branches at initialization. In Highway Networks \citep{srivastava2015training}, this is achieved by initializing the network to have a small ``transform gate''. \citet{goyal2017accurate} achieve this in ResNets by initializing the scaling coefficient for the residual block's last BatchNorm at 0.
In Fixup \citep{zhang2019fixup} and T-Fixup \citep{huang2020improving}, careful weight-initialization schemes ensure suppression of the residual branches in deep networks. Techniques such as SkipInit \citep{de2020batch}, LayerScale \citep{touvron2021cait} and ReZero \citep{bachlechner2021rezero} multiply the residual branches by a trainable parameter, initialized to a small value or to 0. Despite this progress, the aforementioned techniques are limited by either the availability of analytical solutions, the specific use of normalization layers, or the use of residual connections. One needs to decide manually on the techniques to be employed on a case-by-case basis. In this work, we propose a simple algorithm, which we term $\texttt{AutoInit}$, that automatically initializes a DNN to criticality. Notably, the algorithm can be applied to any feedforward DNN, irrespective of architectural details, the large width assumption, or the existence of an analytic treatment. We expect that $\texttt{AutoInit}$ will be an essential tool in architecture search tasks because it will always ensure that a never-before-seen architecture is initialized well. \subsection{Criticality in Deep Neural Networks} In the following, we employ the definition of criticality using the \emph{Partial Jacobian} \citep{doshi2021critical}. Consider a DNN made up of a sequence of blocks. Each block consists of Fully Connected layers, Lipschitz activation functions, Convolutional layers, Residual Connections, LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch}, AffineNorm \citep{touvron2021resmlp}, LayerScale \citep{touvron2021cait}, or any combination thereof. We consider a batched input to the network, where each input tensor $x \in \mathbb{R}^{n^0_1} \otimes \mathbb{R}^{n^0_2} \otimes \cdots$ is taken from the batch $B$ of size $\lvert B \rvert$. The output tensor of the $l^{th}$ block is denoted by $h^l (x) \in \mathbb{R}^{n^l_1} \otimes \mathbb{R}^{n^l_2} \otimes \cdots$.
$h^{l+1}(x)$ depends on $h^l(x)$ through a layer-dependent function $\mathcal{F}^{l}$, denoting the operations of the aforementioned layers. This function, in turn, depends on the parameters of the various layers within the block, denoted collectively by $\theta^{l+1}$. The explicit layer dependence of the function $\mathcal{F}^{l}$ highlights that we do not require the network to have self-repeating layers (blocks). We note that $h^{l+1} (x)$ can, in general, depend on $h^{l} (x')$ for all $x'$ in the batch $B$; this will indeed be the case when we employ BatchNorm. The recurrence relation for such a network can be written as \begin{align}\label{eq:DNNrecursion} h^{l+1} (x) = \mathcal{F}^{l+1}_{\theta^{l+1}} \left( \{h^l(x') \;|\; \forall x' \in B \} \right) \,, \end{align} where we have suppressed all the indices for clarity. Each parameter matrix $\theta^{l+1}$ is sampled from a zero-mean distribution. We will assume that some $2+\delta$ moments of $|\theta^{l+1}|$ are finite, such that the Central Limit Theorem holds. The variances of $\theta^{l+1}$ can then be viewed as hyperparameters and will be denoted by $\sigma^{l+1}_{\theta}$ for each $\theta^{l+1}$. We define the $\texttt{Flatten}$ operation, which reshapes the output $h^l(x)$ by merging all its dimensions, \begin{align} \bar h^l(x) = \texttt{Flatten}\left( h^l(x) \right) \in \mathbb{R}^{N^l} \,, \end{align} where $N^l \equiv n^l_1 n^l_2 \cdots$. \begin{definition}[Average Partial Jacobian Norm (APJN)] \label{def:APJN} For a DNN given by \eqref{eq:DNNrecursion}, APJN is defined as \begin{align} \mathcal J^{l_0, l} \equiv \mathbb E_{\theta} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l}} \sum_{i=1}^{N_{l_0}} \sum_{x, x' \in B} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \right] \,, \end{align} where $\mathbb E_\theta[\cdot]$ denotes the average over parameter initializations.
\end{definition} \begin{remark} For DNNs without BatchNorm and with normalized inputs, the definition of APJN for $|B|>1$ is equivalent to that for the $|B|=1$ case. \end{remark} We use APJN as the empirical diagnostic of criticality. \begin{definition}[Critical Initialization] \label{def:critical} A DNN given by \eqref{eq:DNNrecursion}, consisting of $L+2$ blocks, including input and output layers, is critically initialized if all block-to-block APJNs are equal to $1$, i.e. \begin{align} \mathcal J^{l,l+1} = 1 \,, \quad \forall \quad 1 \leq l \leq L \,. \end{align} \end{definition} Critical initialization as defined by \Cref{def:critical} is essential, as it prevents the gradients from exploding or vanishing at $t=0$. One can readily see this by calculating the gradient for any flattened parameter matrix $\theta$ at initialization: \begin{align}\label{eq:grad} \frac{1}{|\theta^l|}\|\nabla_{\theta^l} \mathcal L \|^2_2 =& \frac{1}{|\theta^l|} \left\|\sum_{\mathrm{all}} \frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \frac{\partial \bar{h}^{L+1}_i}{\partial \bar{h}^L_j}\cdots \frac{\partial \bar{h}^{l+1}_k}{\partial \bar{h}^l_m} \frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_2 \nonumber \\ \sim &\, O \left( \frac{1}{|\theta^l|} \left\|\frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \right\|^2_2 \cdot \mathcal J^{L, L+1} \cdots \mathcal J^{l, l+1} \cdot \left \|\frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_F \right)\,, \end{align} where $\| \cdot \|_F$ denotes the Frobenius norm. In the second line, we utilized the factorization property of APJN \begin{align}\label{eq:factor} \mathcal J^{l_0,l} = \prod_{l'=l_0}^{l-1} \mathcal J^{l', l'+1} \,, \end{align} which holds in the infinite width limit, provided there is no weight sharing across the blocks. One may further require $\left\| \partial \mathcal L / \partial \bar{h}^{L+1}_i \right\|_2 \sim O(1)$.
However, in practice we observe that this requirement is less important once the condition in \Cref{def:critical} is met. \subsection{Automatic Critical Initialization} For general architectures, analytically calculating APJN is often difficult or even impossible. This poses a challenge in determining the correct parameter initializations to ensure criticality, especially in networks without self-similar layers. Moreover, finite network width is known to produce nontrivial corrections to the criticality condition \citep{roberts2021principles}. This calls for an algorithmic method to find critical initializations. To that end, we propose \Cref{alg:j_general}, which we call $\texttt{AutoInit}$, for critically initializing deep neural networks \emph{automatically}, without the need for analytic solutions of the signal propagation or the mean-field approximation. The algorithm works for general feedforward DNNs, as defined in \eqref{eq:DNNrecursion}. Moreover, it naturally takes into account all finite width corrections to criticality because it works directly with an instance of a network. We do tacitly assume the existence of a critical initialization. If the network cannot be initialized critically, the algorithm will return a network that can still propagate gradients well, because the APJNs will be pushed as close to $1$ as possible. The central idea behind the algorithm is to choose the hyperparameters of all layers such that the condition in \Cref{def:critical} is met. This is achieved by optimizing a few auxiliary scalar parameters $a^l_{\theta}$ of a twin network with parameters $a^l_{\theta} \theta^{l}$, while freezing the parameters $\theta^{l}$. The loss function is minimized by the condition stated in \Cref{def:critical}.
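As a toy illustration of this tuning loop (a sketch, not the implementation used in our experiments), consider a deep ReLU MLP, for which the infinite-width APJN has the closed form $\mathcal J^{l,l+1} = (a_W^{l+1}\sigma_w)^2/2$ derived later in the paper. Gradient descent on $\frac{1}{2}\sum_l [\log \mathcal J^{l,l+1}]^2$ then reduces to independent scalar updates; the depth, learning rate, and step count below are arbitrary choices:

```python
import math

def autoinit_relu_mlp(sigma_w, depth, eta=0.05, steps=2000, tol=1e-12):
    """Tune per-layer scales a_W so that each infinite-width APJN
    J^{l,l+1} = (a_W^l * sigma_w)**2 / 2 of a ReLU MLP equals 1."""
    a = [1.0] * depth                                   # a_W^l(0) = 1
    for _ in range(steps):
        J = [(a_l * sigma_w) ** 2 / 2.0 for a_l in a]   # closed-form APJN
        loss = 0.5 * sum(math.log(J_l) ** 2 for J_l in J)
        if loss < tol:
            break
        # dJ/da = a*sigma_w^2, so dLoss/da = log(J)/J * dJ/da = 2*log(J)/a
        a = [a_l - eta * 2.0 * math.log(J_l) / a_l for a_l, J_l in zip(a, J)]
    return a, J

a, J = autoinit_relu_mlp(sigma_w=1.0, depth=10)
# each layer converges to a_W = sqrt(2)/sigma_w, i.e. J^{l,l+1} = 1
```

Here every layer reaches the He-type fixed point $a_W\sigma_w = \sqrt{2}$; the actual algorithm below measures the APJNs on a network instance instead of using a closed form.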
\begin{algorithm}[h] \caption{\texttt{AutoInit} (SGD)} \label{alg:j_general} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma^l_\theta;\, a^l_{\theta}(t) \; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$. \State \textbf{Set} $t=0$ and $\{a_\theta^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l_\theta(t+1) = a^l_\theta(t) - \eta \nabla_{a^l_\theta} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{\sigma^l_\theta = \sigma^l_{\theta} a^l_{\theta}(t) ;\, 1\; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$ \end{algorithmic} \end{algorithm} In practice, for speed and memory reasons we use an unbiased estimator \citep{hoffman2019robust} of APJN in \Cref{alg:j_general}, defined as \begin{align}\label{eq:j_est} \hat {\mathcal{J}}^{l, l+1} \equiv \frac{1}{N_v} \sum_{\mu=1}^{N_v} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l+1}} \sum_{k=1}^{N_{l+1}} \sum_{i=1}^{N_{l}} \sum_{x, x' \in B} \frac{\partial (v_{\mu j} \bar{h}^{l+1}_j(x'))}{\partial \bar{h}^{l}_i(x)} \frac{\partial (v_{\mu k} \bar{h}^{l+1}_k(x'))}{\partial \bar{h}^{l}_i(x)} \right] \,, \end{align} where each $v_{\mu i}$ is a unit Gaussian random vector for a given $\mu$. The Jacobian-Vector Product (JVP) structure in the estimator speeds up the computation by a factor of $N_{l+1} / N_v$ and consumes less memory at the cost of introducing some noise. In \Cref{sec:auto} we analyze $\texttt{AutoInit}$ for multi-layer perceptron (MLP) networks. Then we discuss the problem of exploding and vanishing gradients of the tuning itself; and derive bounds on the learning rate for ReLU or linear MLPs. In \Cref{sec:bn} we extend the discussion to BatchNorm and provide a strategy for using $\texttt{AutoInit}$ for a general network architecture. 
In \Cref{sec:exp} we provide experimental results for more complex architectures: VGG19\_BN and ResMLP-S12. \section{AutoInit for MLP networks} \label{sec:auto} MLPs are described by the following recurrence relation for preactivations \begin{align}\label{eq:mlp_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} W^{l+1}_{ij} \phi(h^l_j(x)) + b^{l+1}_i \,. \end{align} Here $x$ is an input vector, and the weights $W^{l+1}_{ij} \sim \mathcal N(0, \sigma_w^2/N_l)$ and biases $b^{l+1}_i \sim \mathcal N(0, \sigma_b^2)$ are collectively denoted as $\theta^{l+1}$. We assume $\phi$ is a Lipschitz activation function throughout this paper. For a network with $L$ hidden layers, in the infinite width limit $N_l \rightarrow \infty$, the preactivations \{$h^l_i(x) \,|\, 1 \leq l \leq L, \forall i \in N_l\}$ are Gaussian Processes (GPs). The distribution of preactivations is then determined by the Neural Network Gaussian Process (NNGP) kernel \begin{align} \mathcal K^{l}(x, x') = \mathbb E_{\theta} \left[ h^l_i(x) h^l_i(x') \right] \,, \end{align} whose value is independent of the neuron index $i$. The NNGP kernel can be calculated recursively via \begin{align} \mathcal K^{l+1}(x, x') = \sigma_w^2 \mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} \left[\phi\left(h_i^l(x)\right) \phi\left(h_i^l(x')\right) \right] + \sigma_b^2 \,. \end{align} Note that we have replaced the average over parameter initializations $\mathbb{E}_\theta[\cdot]$ with an average over preactivation distributions $\mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} [\cdot]$; the two are interchangeable in the infinite width limit \citep{lee2018deep, roberts2021principles}. Critical initialization of such a network is defined according to \Cref{def:critical}.
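For ReLU, the Gaussian average above can be done in closed form: $\mathbb E_{h\sim\mathcal N(0,K)}[\phi(h)^2]=K/2$, so the diagonal kernel recursion becomes $\mathcal K^{l+1}(x,x)=\sigma_w^2\,\mathcal K^l(x,x)/2+\sigma_b^2$, which preserves the kernel at $\sigma_w^2=2$, $\sigma_b=0$. A quick Monte-Carlo sanity check of both this and the corresponding APJN value (the sample size and input kernel are arbitrary choices for illustration):

```python
import math, random

random.seed(0)
K, sigma_w2, sigma_b2 = 1.7, 2.0, 0.0          # input kernel; critical ReLU init
n = 200_000
h = [random.gauss(0.0, math.sqrt(K)) for _ in range(n)]

phi2 = sum(max(x, 0.0) ** 2 for x in h) / n    # E[phi(h)^2]  -> K/2
dphi2 = sum(1.0 for x in h if x > 0.0) / n     # E[phi'(h)^2] -> 1/2

K_next = sigma_w2 * phi2 + sigma_b2            # kernel recursion: stays at K
J = sigma_w2 * dphi2                           # infinite-width APJN: -> 1
```

At $\sigma_w^2=2$ the kernel is preserved layer to layer and $\mathcal J^{l,l+1}\approx 1$, consistent with \Cref{def:critical}.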
In practice, we define a twin network with extra parameters; for MLP networks the twin preactivations can be written as \begin{align}\label{eq:twin_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} a_W^{l+1} W^{l+1}_{ij} \phi(h^l_j(x)) + a_b^{l+1} b^{l+1}_i \,, \end{align} where $a_{\theta}^{l+1} \equiv \{a^{l+1}_W, a^{l+1}_b\}$ are auxiliary parameters that will be tuned by \Cref{alg:j_train}. \begin{algorithm}[h] \caption{\texttt{AutoInit} for MLP (SGD)} \label{alg:j_train} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$. \State \textbf{Set} $t=0$, $\{a_W^l(0)=1\}$ and $\{a_b^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l(t+1) = a^l(t) - \eta \nabla_{a^l} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{a^l_W(t) \sigma_w, \, a^l_b(t) \sigma_b, 1,\, 1 \;| \; \forall 1 \leq l \leq L \})$ \end{algorithmic} \end{algorithm} In \Cref{alg:j_train}, one may also return $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, while freezing all $a^l_{\theta}$. However, this leads to different training dynamics when the weights and biases are updated. Alternatively, one can leave the auxiliary parameters trainable, but in practice this leads to unstable training dynamics. \paragraph{Loss function} The choice of loss function $\mathcal L$ is important. We will use the following loss \begin{align}\label{eq_loss_sq_J} \mathcal L_{\log} = \frac{1}{2} \sum_{l=1}^L \left[\log(\mathcal J^{l, l+1})\right]^2 \,. \end{align} We will refer to \eqref{eq_loss_sq_J} as the Jacobian Log Loss (JLL). This definition is inspired by the factorization property \eqref{eq:factor}, which allows one to optimize each of the partial Jacobian norms independently.
Thus the tuning dynamics is less sensitive to the depth. One could naively use $\log(\mathcal J^{0, L+1})^2$ as a loss function; however, optimization would then encounter the same level of exploding or vanishing gradients as \eqref{eq:grad}. One may worry that the factorization property is violated for $t>0$, due to possible correlations across all $\{a^l(t)\}$. It turns out that the correlation introduced by \Cref{alg:j_train} does not change the fact that all weights and biases are iid, ensuring that \eqref{eq:factor} holds for any $t \geq 0$. Another choice for the loss is the Jacobian Square Loss (JSL), defined as $\mathcal L_2 = \frac{1}{2} \sum_{l=1}^L \left(\mathcal J^{l, l+1} - 1 \right)^2$. However, JSL has poor convergence properties when $\mathcal J^{l, l+1} \gg 1$. One may further restrict the forward pass by adding terms that penalize the difference between $\mathcal K^l(x, x)$ and $\mathcal K^{l+1}(x,x)$. For brevity, we leave these discussions for the appendix. \paragraph{Exploding and Vanishing Gradients} While the objective of \Cref{alg:j_train} is to solve the exploding and vanishing gradients problem, \Cref{alg:j_train} itself suffers from the same problem, although not as severely. Consider optimizing MLP networks using $\mathcal L_{\log}$, where the forward pass is defined by \eqref{eq:twin_preact}. Assuming the input data $x$ is normalized, the SGD update (omitting $x$) of $a^{l}_{\theta}$ at time $t$ can be written as \begin{align}\label{eq:a_update} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) \,. \end{align} For a deep neural network, i.e. when $|L - l| \gg 1$ holds for some $l$, the depth-dependent terms of \eqref{eq:a_update} can lead to exploding or vanishing gradients. We will show next that this is not the familiar exploding or vanishing gradients problem.
First, we explain the vanishing gradients problem for $a_W^{l+1}$. We rewrite the right hand side of \eqref{eq:a_update} as \begin{align}\label{eq:iso} - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{W}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) = - \eta \frac{2}{a_W^{l+1}(t)} \log \mathcal J^{l, l+1}(t) + (l' > l\; \mathrm{terms}) \,. \end{align} Vanishing gradients can only occur if the isolated term is exactly canceled by the other terms for all $t \geq 0$, which does not happen in practice. To discuss the exploding gradients problem for $a_W^{l+1}$, we consider the update of $a_W^{l+1}$ (omitting $t$). The depth-dependent terms can be written as \begin{align}\label{eq:tauto_aw} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} \log \mathcal J^{l', l'+1} = \sum_{l'>l}^L \left(\frac{4\chi^{l'}_{\Delta}}{a_W^{l+1} \mathcal J^{l', l'+1}} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x, x)\right) \log \mathcal J^{l', l'+1} \,, \end{align} where we have defined two new quantities $\chi^{l'}_{\Delta} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l'}_i) \phi''(h^{l'}_i) + \phi'(h^{l'}_i) \phi'''(h^{l'}_i) \right]$ and $\chi^{l'}_{\mathcal K} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi'(h^{l'}_i) \phi'(h^{l'}_i) + \phi(h^{l'}_i) \phi''(h^{l'}_i) \right]$. We note that the exploding gradients problem for $a_W^{l+1}$ in $\texttt{AutoInit}$ is not severe for commonly used activation functions: \begin{itemize} \item $\tanh$-like bounded odd activation functions: $\chi_{\mathcal K}^{l'} \leq 1$ holds and $\mathcal K^l(x,x)$ saturates to a constant for large $l$. Thus the divergence problem of \eqref{eq:tauto_aw} is less severe than that of \eqref{eq:grad} when $\mathcal J^{l', l'+1} > 1$. \item $\mathrm{ReLU}$: $\chi^{l'}_{\Delta}=0$.
\item $\mathrm{GELU}$: The sum in \eqref{eq:tauto_aw} scales like $O(L \prod_{\ell=1}^L \chi^{\ell}_{\mathcal K})$ for large $L$, which may lead to worse exploding gradients than \eqref{eq:grad} for a reasonable $L$. Fortunately, in the $\chi^{l'}_{\mathcal K} > 1$ cases, $\chi_{\Delta}^{l'}$ is close to zero. As a result, we find numerically that the contribution from \eqref{eq:tauto_aw} is very small. \end{itemize} For $a_b^{l+1}$, there is no isolated term like the one in \eqref{eq:iso}. The update of $a_b^{l+1}$ is then proportional to \begin{align}\label{eq:tauto_ab} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_b^{l+1}} \log \mathcal J^{l', l'+1} = \sum_{l'>l}^L \left(\frac{4 a_b^{l+1}}{\mathcal J^{l', l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \sigma_b^2 \right) \log \mathcal J^{l', l'+1} \,. \end{align} Comparing \eqref{eq:tauto_ab} and \eqref{eq:tauto_aw}, it is clear that the exploding gradients problem for $a_b^{l+1}$ is the same as that for $a_W^{l+1}$, hence not severe for common activation functions. The vanishing gradients problem is seemingly more serious, especially for $\sigma_b=0$. However, the vanishing gradients for $a_b^{l+1}$ do not prevent \texttt{AutoInit} from reaching a critical initialization: \begin{itemize} \item For $\sigma_b > 0$, as $a_W^{l+1}$ gets updated, the update in \eqref{eq:tauto_ab} gets larger with time $t$. \item For $\sigma_b=0$, the phase boundary is at $\sigma_w \geq 0$, which can be reached by $a_W^{l+1}$ updates. \end{itemize} \subsection{Linear and ReLU networks} In general, it is hard to predict a good learning rate $\eta$ for \Cref{alg:j_train}. However, for ReLU (and linear) networks, we can estimate the optimal learning rates. We discuss ReLU in detail. Since $a_b^l$ cannot receive updates in this case, we only discuss updates for $a_W^l$.
Different APJNs $\{\mathcal J^{l,l+1}\}$ for ReLU networks evolve in time independently, according to \begin{align}\label{eq:relu_jupdate} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \frac{\sigma_w^2}{\sqrt{\mathcal J^{l,l+1}(t)}} \log \mathcal J^{l,l+1}(t) \,. \end{align} One can then show that, for any time $t$, \begin{align}\label{eq:eta_t} \eta_t < & \min_{1 \leq l \leq L} \left\{\frac{2\left( \sqrt{\mathcal J^{l,l+1}(t)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(t)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(t) } \right\} \end{align} guarantees convergence. In this case, the value of $\mathcal J^{l, l+1}(t)$ can be used to create a scheduler for \Cref{alg:j_train}. Moreover, one can solve \eqref{eq:relu_jupdate} and find a learning rate that allows \Cref{alg:j_train} to converge in one step: \begin{align}\label{eq:1step_lr} \eta^l_{\mathrm{1-step}} = \frac{\left( \sqrt{\mathcal J^{l,l+1}(0)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(0)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(0) } \,. \end{align} Next we study the dynamics of the optimization while using a single learning rate $\eta$. We estimate the maximum allowed learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align}\label{eq:eta_0_jl} \eta_0 = \frac{\left(a_W^{l+1}\sigma_w - \sqrt{2}\right) a_W^{l+1}}{\sigma_w \left(\log \left[(a_W^{l+1}\sigma_w)^2 \right] - \log 2\right)} \,. \end{align} In \Cref{fig:relu_jac}, we check our results with \Cref{alg:j_train}. All $\mathcal J^{l,l+1}(t)$ values plotted in the figure agree with the values we obtained by iterating \eqref{eq:relu_jupdate} for $t$ steps. The gap between $\eta_0$ and the trainable regions can be explained by analyzing \eqref{eq:eta_t}. Assume that at time $t$, $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 < \eta < \eta_t$, there is still a chance that \Cref{alg:j_train} can converge.
For $\mathcal J^{l,l+1} > 1$ if $\eta_0 > \eta > \eta_t$ holds, \Cref{alg:j_train} may diverge at a later time. A similar analysis for JSL is performed in the appendix. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$. From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JLL; 2) we scan in $\eta$-$\sigma_w$ plane using $\sigma_b=0$ networks, tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JLL for $980$ steps. Only $0.8< \mathcal J^{l,l+1} <1.25$ points are plotted; All networks are trained with normalized CIFAR-10 dataset, $|B|=256$.} \label{fig:relu_jac} \end{figure} \section{BatchNorm, Residual Connections and General Strategy} \label{sec:bn} \paragraph{BatchNorm and Residual Connections} For MLP networks, the APJN value is only a function of $t$ and it is independent of $|B|$. This property holds except when there is a BatchNorm (BN). We consider a Pre-BN MLP network with residual connections. The preactivations are given by \begin{align}\label{eq:bnmlp_preact} h^{l+1}_{x; i} = \sum_{j=1}^N a_W^{l+1} W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + a_b^{l+1} b^{l+1}_i + \mu h^l_{x;i} \,, \end{align} where we label different inputs with indices $x,x^\prime, \cdots$ and $\mu$ quantifies the strength of the residual connections (common choice is $\mu=1$). At initialization, the normalized preactivations are defined as \begin{align} \tilde h^l_{x; i} = \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \,. \end{align} The change in batch statistics leads to non-trivial $\mathcal J^{l,l+1}$ values, which can be approximated using \Cref{conj:bn}. 
\begin{conjecture}[APJN with BatchNorm]\label{conj:bn} In the infinite width limit and at large depth $l$, the APJN of Pre-BN MLPs \eqref{eq:bnmlp_preact} converges to a deterministic value determined by the NNGP kernel as $|B| \rightarrow \infty$: \begin{align} \mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} & (a_W^{l+1} \sigma_w)^2 \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, 1)} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where the actual values of the indices $x'$ and $x$ are not important, as long as $x \neq x'$. \end{conjecture} \begin{remark} Under the conditions of \Cref{conj:bn}, $\mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} 1 + O(l^{-1})$ if $\mu=1$. The finite-$|B|$ correction is further suppressed by $l^{-1}$. \end{remark} In \Cref{fig:bn_relu} we show numerical results that support our conjecture; empirically, the finite-$|B|$ corrections are negligible for $|B| \geq 128$. Analytical details are in the appendix. Similar results without residual connections have been obtained for finite $|B|$ by \citet{yang2018mean}. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{Figures/bn_relu_mu1.pdf} \caption{$\mathcal J^{l,l+1}(0)$ phase diagrams for $|B|=256$ in the $\sigma_b$-$\sigma_w$ plane ($\mu=0$ and $\mu=1$); $\mathcal J^{l, l+1}$-$|B|$ plot. From left to right: 1) Pre-BN MLP networks with $\mu=0$ are everywhere chaotic; 2) Pre-BN MLP networks with $\mu=1$ are critical everywhere; 3) for $|B|\geq 128$, the finite-$|B|$ corrections are negligible. In all plots we use $L=30$, $N_l=500$, $a_W^l(0)=1$ and average over 50 initializations.} \label{fig:bn_relu} \end{figure} \paragraph{General Strategy} For general network architectures, we propose the following strategy for using \Cref{alg:j_general} with normalized inputs: \begin{itemize} \item If the network does not have BatchNorm, use the algorithm with $|B|=1$.
\item If the network has BatchNorm and enough resources are available, use the algorithm with the batch size $|B|$ that will be used for training. When $|B|$ is large, one should make $\mathcal J^{l,l+1}$ vs. $|B|$ plots like the one in \Cref{fig:bn_relu}, and then choose a $|B|$ that requires less computation. \item When resources are limited, one can use a non-overlapping set $\{\mathcal{J}^{l, l+k}\}$ with $k>1$ to cover the whole network. \end{itemize} The computational cost of the algorithm depends on $k$ and $|B|$. \section{Experiments} \label{sec:exp} In this section, we use a modified version of $\mathcal L_{\log}$, where we further penalize the ratio between NNGP kernels from adjacent layers. The Jacobian-Kernel Loss (JKL) is defined as \begin{align}\label{eq:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=0}^{L+1} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=0}^{L+1} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,, \end{align} where we introduced an extra hyperparameter $\lambda$ to control the penalization strength. We also include the input and output layers. Both APJNs and NNGP kernels are calculated using flattened preactivations. \subsection{ResMLP} ResMLP \citep{touvron2021resmlp} is an architecture for image recognition built entirely on MLPs. It offers competitive performance in both image recognition and machine translation tasks. The architecture consists of cross-channel and cross-patch MLP layers, combined with residual connections. The presence of residual connections and the absence of normalization techniques such as LayerNorm \citep{ba2016layer} or BatchNorm \citep{ioffe2015batch} leave ResMLP initialized off criticality. To mitigate this issue, the ResMLP architecture utilizes LayerScale \citep{touvron2021cait}, which multiplies the output of the residual branch by a trainable matrix initialized with small diagonal entries.
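The loss \eqref{eq:jkle} itself is cheap to evaluate once per-layer estimates of $\mathcal J^{l,l+1}$ and $\mathcal K^l(x,x)$ are available; below is a minimal numpy sketch (the function name and the use of plain arrays are our choices):

```python
import numpy as np

def jkl_loss(apjn, kernels, lam=0.5):
    """Jacobian-Kernel loss: 0.5 * sum_l log(J^{l,l+1})^2
    + 0.5 * lam * sum_l log(K^{l+1}/K^l)^2."""
    apjn = np.asarray(apjn, dtype=float)        # estimates of J^{l,l+1}
    kernels = np.asarray(kernels, dtype=float)  # estimates of K^l(x, x)
    j_term = 0.5 * np.sum(np.log(apjn) ** 2)
    k_term = 0.5 * lam * np.sum(np.log(kernels[1:] / kernels[:-1]) ** 2)
    return j_term + k_term

# At criticality every APJN equals 1 and the kernel is constant with depth,
# so the loss vanishes.
print(jkl_loss([1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0]))  # -> 0.0
```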
\paragraph{CIFAR-10} Here we obtain a critical initialization for ResMLP-S12 using \Cref{alg:j_train} with the loss \eqref{eq:jkle}, with $a^l_{\theta}$ introduced for all layers. In our initialization, the ``smallness'' is distributed across all parameters of the residual block, including those of the linear, affine normalization and LayerScale layers. As we show in \Cref{fig:resmlp}, Kaiming initialization is far from criticality. $\texttt{AutoInit}$ finds an initialization with almost identical $\{\mathcal J^{l, l+1} \}$ and similar $\{\mathcal K^{l}(x, x)\}$ compared to the prescription proposed by \citet{touvron2021resmlp}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/resmlp_comparison.pdf} \caption{From left to right: 1) and 2) comparison of $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ for ResMLP-S12 with the Kaiming, original and $\texttt{AutoInit}$ initializations. Depth $l$ is equal to the number of residual connections. The network function in the $\texttt{AutoInit}$ case is very close to the identity at initialization. 3) Training and validation accuracy. Both the original and \texttt{AutoInit} models are trained on the CIFAR-10 dataset for 600 epochs using the \texttt{LAMB} optimizer \citep{You2020Large} with $|B|=256$. The learning rate is decreased by a factor of 0.1 at 450 and 550 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:resmlp} \end{figure} \paragraph{ImageNet \citep{liILSVRC15}} We report $74.0\%$ top-1 accuracy for ResMLP-S12 initialized using \texttt{AutoInit}, whereas the top-1 accuracy reported in \citep{touvron2021resmlp} for the same architecture is $76.6\%$. The model has $15$ million parameters. We used a setup similar to the one in the original paper, which is based on the timm library \citep{rw2019timm} under the Apache-2.0 license \citep{apachev2}.
However, we made the following modifications in our training: 1) we use learning rate $\eta=0.001$ and $|B|=1024$; 2) we use mixed precision; 3) we do not use \texttt{ExponentialMovingAverage}. The training was performed on two NVIDIA RTX 3090 GPUs and took around $3.5$ days to converge (400 epochs). The auto-initialized model is obtained by tuning the Kaiming initialization using \Cref{alg:j_train} with $\mathcal L_{\mathcal J \mathcal K\log}(\lambda=0.5)$, $\eta=0.03$ and $|B|=32$ for 500 steps. \subsection{VGG} VGG \citep{simonyan2014very} is an older state-of-the-art architecture, which was notoriously difficult to train before Kaiming initialization was invented. The BatchNorm variant $\mathrm{VGG19\_BN}$ further improves the training speed and performance compared to the original version. The PyTorch version of VGG \citep{NEURIPS2019_9015} is initialized with $\mathrm{fan\_out}$ Kaiming initialization \citep{he2015delving}. In \Cref{fig:bn_relu} we show that BatchNorm makes Kaiming-initialized ReLU networks chaotic. We obtain a close-to-critical initialization for $\mathrm{VGG19\_BN}$ using \Cref{alg:j_train}, where we introduce the auxiliary parameters $a^l_{\theta}$ for all BatchNorm layers. The depth in $\mathcal J^{l, l+1}$ is measured in terms of composite (Conv2d-BatchNorm-ReLU) blocks or MaxPool2d layers. We compare $\mathcal J^{l, l+1}$, $\mathcal K^l(x,x)$ and accuracies on the CIFAR-10 dataset \citep{krizhevsky2009learning} between the auto-initialized model and the PyTorch one; see \Cref{fig:vgg}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/vgg_comparison.pdf} \caption{From left to right: 1) and 2) comparison of $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ between the PyTorch version of $\mathrm{VGG19\_BN}$ and the \texttt{AutoInit} version; we ensure $\mathcal J^{l, l+1}=1$ with a high priority ($\lambda=0.05$); 3) training and validation accuracy.
We train both models on the CIFAR-10 dataset using SGD with $\mathrm{momentum}=0.9$ and $|B|=256$ for 300 epochs, where we decrease the learning rate by a factor of 0.1 at 150 and 225 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:vgg} \end{figure} \section{Conclusions} \label{sec:conclu} In this work we have introduced an algorithm, \texttt{AutoInit}, that initializes an arbitrary feed-forward deep neural network to criticality. \texttt{AutoInit} is an unsupervised learning algorithm that forces all nearby partial Jacobians to have unit norm by minimizing the loss function \eqref{eq_loss_sq_J}. A slight variation of \texttt{AutoInit} also tunes the forward pass, ensuring that gradients in all layers of a DNN are well-behaved. To gain some intuition about the algorithm, we have solved the training dynamics for MLPs with ReLU activation and discussed the choice of hyperparameters for the tuning procedure that ensures its convergence. We have then evaluated the performance of \texttt{AutoInit}-initialized networks against initialization schemes used in the literature. We considered two examples: the ResMLP architecture and VGG. The latter was notoriously difficult to train at the time it was introduced. \texttt{AutoInit} finds a good initialization (somewhat close to Kaiming) and ensures trainability. ResMLP uses a variation of the ReZero initialization scheme that puts it close to the dynamical isometry condition. \texttt{AutoInit} finds a good initialization that appears very different from the original; however, the network function is also very close to the identity map at initialization. In both cases, the performance of the \texttt{AutoInit}-initialized networks is competitive with that of the original models. We emphasize that \texttt{AutoInit} removes the necessity of a trial-and-error search for a working initialization.
We expect that \texttt{AutoInit} will be useful in automatic neural architecture search tasks, as well as for general exploration of new architectures. \begin{ack} T.H., D.D. and A.G. were supported, in part, by the NSF CAREER Award DMR-2045181 and by the Salomon Award. \end{ack} \bibliographystyle{plainnat} \section{Experimental Details} \Cref{fig:relu_jac}: The second panel consists of 1200 points; each point takes around $1.5$ minutes on a single NVIDIA RTX 3090 GPU. \Cref{fig:bn_relu}: We scanned $400$ points for each phase diagram, which overall takes around $5$ hours on a single NVIDIA RTX 3090 GPU. \Cref{fig:resmlp}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset. We use SGD with $\eta=0.03$ and $N_v=2$ for $392$ steps, $|B|=256$. The reported training curves correspond to the best combination of the following hyperparameters: $\mathrm{lr}=\{0.005, 0.01\}$, $\mathrm{weight\; decay}=\{10^{-5}, 10^{-4}\}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flips, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated augmentation \citep{hoffer2020aug}. All of these results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{fig:vgg}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset for $392$ steps with $\eta=0.01$, $|B|=128$ and $N_v=3$. The reported training curves correspond to the best combination of the following hyperparameters: $\mathrm{lr}=\{0.001, 0.002, 0.005, 0.01, 0.02\}$, $\mathrm{weight\; decay}=\{0.0005, 0.001, 0.002, 0.005 \}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flips, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated augmentation \citep{hoffer2020aug}. We froze the auxiliary parameters instead of scaling the weights. All of these results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{figapp:relu_jac}: Exactly the same as \Cref{fig:relu_jac}, except that we used JSL.
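To make the tuning procedure concrete, here is a minimal numpy caricature of \Cref{alg:j_train} for a bias-free ReLU MLP. For ReLU, $\mathcal J^{l,l+1} \propto (a_W^l)^2$ with a prefactor independent of the $a$'s, so the gradient of $\mathcal L_{\log}$ with respect to $a_W^l$ is $2\log(\mathcal J^{l,l+1})/a_W^l$. All names, the toy sizes, and the simplified one-sample APJN estimator are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 6, 512
sigma_w = 1.8
Ws = [rng.normal(0, sigma_w / np.sqrt(width), (width, width)) for _ in range(depth)]
a = np.ones(depth)  # auxiliary scale parameters a_W^l tuned by AutoInit

def apjn(a):
    # Empirical one-layer APJNs for a bias-free ReLU MLP:
    # J^{l,l+1} = ||a_W^l W^l diag(phi'(h^l))||_F^2 / width
    h = rng.normal(size=width)
    Js = []
    for l in range(depth):
        mask = (h > 0).astype(float)
        Js.append(np.sum((a[l] * Ws[l] * mask[None, :]) ** 2) / width)
        h = a[l] * Ws[l] @ np.maximum(h, 0)
    return np.array(Js)

# SGD on L_log = 0.5 * sum_l log(J^{l,l+1})^2, using dL/da^l = 2 log(J^l) / a^l.
eta = 0.1
for _ in range(100):
    a -= eta * 2 * np.log(apjn(a)) / a
print(apjn(a).round(2))  # every J^{l,l+1} is now close to 1
```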
\section{Theoretical Details} \subsection{Factorization of APJN} We prove the factorization property for MLP networks in the infinite width limit. The proof works for any iid $\theta^l$ such that $|\theta^l|$ has a finite $(2+\delta)$-th moment. We start from the definition of the partial Jacobian and set $a^l_{\theta}=1$ for simplicity. \begin{align} \nonumber \mathcal{J}^{l,l+2} &\equiv \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_l} \sum_{k,m=1}^{N_{l+1}} \left(W^{l+2}_{ik} \phi'(h^{l+1}_k) \right) \left(W^{l+2}_{im} \phi'(h^{l+1}_m) \right) \left(\frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_m}{\partial h^{l}_j}\right) \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{\theta} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{W^{l+1}, b^{l+1}} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) W^{l+1}_{kj} W^{l+1}_{kj} \right] \mathbb E_{\theta} \left[\phi'(h^l_j) \phi'(h^l_j) \right ] \\ &= \mathcal J^{l,l+1} \mathcal J^{l+1, l+2} + O\left(\frac{\chi^l_{\Delta}}{N_l} \right) \,, \end{align} where the $1/N_l$ correction vanishes in the infinite width limit. We used the fact that in the infinite width limit $h^{l+1}_k$ is independent of $h^l_j$, and calculated the first expectation value in the fourth line using integration by parts. Recall that for a single input (omitting $x$) \begin{align} \chi^{l}_{\Delta} \equiv (a_W^{l+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l}_i) \phi''(h^{l}_i) + \phi'(h^{l}_i) \phi'''(h^{l}_i) \right] \,.
\end{align} \subsection{Exploding and Vanishing Gradients} We show the details of the derivation of \eqref{eq:tauto_aw} for MLP networks, assuming $l'>l$: \begin{align} \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial \mathbb E_{h^{l'}_i \sim \mathcal N(0, \mathcal K^{l'}(x,x))} \left[(a_W^{l'+1} \sigma_w)^2 \phi'(h^{l'}_i) \phi'(h^{l'}_i) \right]}{\partial a_W^{l+1}} \nonumber \\ =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial}{\partial \mathcal K^{l'}(x,x)} \left(\frac{{(a_W^{l'+1} \sigma_w)^2}}{\sqrt{2\pi \mathcal K^{l'}(x,x)}} \int \phi'(h^{l'}_i) \phi'(h^{l'}_i) e^{-\frac{h^{l'}_i h^{l'}_i}{2\mathcal K^{l'}(x,x)}} dh^{l'}_i \right) \frac{\partial \mathcal K^{l'}(x,x)}{\partial a_W^{l+1}}\nonumber \\ =& \frac{2}{\mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \frac{\partial \mathcal K^{l'}(x, x)}{\partial \mathcal K^{l'-1}(x,x)} \cdots \frac{\partial \mathcal K^{l+1}(x,x)}{\partial a_W^{l+1}} \nonumber \\ =& \frac{4}{a_W^{l+1} \mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x,x) \,, \end{align} where we calculated the derivative with respect to $\mathcal K^{l'}(x,x)$ and then used integration by parts to get the third line. The derivation of \eqref{eq:tauto_ab} is similar. \subsection{ReLU Details} \paragraph{Learning rate $\eta$} The learning rate bound \eqref{eq:eta_t} is obtained by requiring that $|\sqrt{\mathcal J^{l,l+1}(t)} -1 |$ decreases monotonically with time $t$. \paragraph{\texorpdfstring{Derivation of $\chi_{\Delta}^l=0$}{}} This is straightforward to show by direct calculation in the infinite width limit. We set $a_W^l=1$ and ignore the neuron index $i$ for simplicity.
\begin{align} \chi^{l}_{\Delta} =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left[\phi''(h^{l}) \phi''(h^{l}) + \phi'(h^{l}) \phi'''(h^{l}) \right] \nonumber \\ &= \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \phi'(h^l) \phi''(h^l) \right) \right ] \nonumber \\ &= \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ &= \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{h^l}{\mathcal K^l(x,x)} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ &= 0 \,, \end{align} where $\Theta(h^l)$ is the Heaviside step function and $\delta(h^l)$ is the Dirac delta function. To get the last line we used $h^l \delta(h^l)=0$. \subsection{\texorpdfstring{\Cref{conj:bn}}{}} Here we offer a non-rigorous explanation of the conjecture in the infinite $|B|$ and infinite width limits. We use an MLP model with $a_{\theta}^l=1$ as an example. We consider \begin{align} h^{l+1}_{x; i} = \sum_{j=1}^N W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + b^{l+1}_i + \mu h^l_{x; i}\,, \end{align} where \begin{align}\label{eq:BN} \tilde h^l_{x; i} =& \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \nonumber \\ =& \frac{\sqrt{|B|} \sum_{x' \in B} P_{x x'} h^l_{x'; i}}{\sqrt{ \sum_{x \in B} \left(\sum_{x' \in B} P_{x x'} h^l_{x'; i}\right)^2 }} \,, \end{align} where $P_{xx'} \equiv \delta_{xx'} - 1/ |B|$. It is a projector in the sense that $\sum_{x' \in B} P_{xx'} P_{x'x''} = P_{xx''}$.
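Both the projector property of $P$ and the equality of the two forms in \eqref{eq:BN} are easy to verify numerically (a short sketch; variable names are ours):

```python
import numpy as np

B = 8
# P_{xx'} = delta_{xx'} - 1/|B| centers a batch of scalars; it is a projector.
P = np.eye(B) - np.ones((B, B)) / B
assert np.allclose(P @ P, P)

# The two forms of the normalized preactivation agree.
rng = np.random.default_rng(0)
h = rng.normal(size=B)
direct = (h - h.mean()) / h.std()                            # first line of the BN equation
via_P = np.sqrt(B) * (P @ h) / np.sqrt(np.sum((P @ h) ** 2)) # second line, via the projector
assert np.allclose(direct, via_P)
print("both checks passed")
```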
The derivative of the normalized preactivation reads \begin{align} \frac{\partial \tilde h^l_{x; i}}{\partial h^l_{x';j}} = \sqrt{|B|} \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{xx''} h^l_{x'';i}\right)^2}} - \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; i} \sum_{x'' \in B} P_{x'x''} h^l_{x''; i}}{\left( \sqrt{\sum_{x\in B} \left(\sum_{x'' \in B} P_{xx''} h^l_{x'';i}\right)^2 } \right)^3} \right)\delta_{ij} \,. \end{align} Then the one-layer APJN reads \begin{align}\label{eqapp:bn_apjn} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \sum_{x,x' \in B} \sum_{j=1}^{N_l} \mathbb E_{\theta} \left[\left(\phi'(\tilde h^l_{x; j}) \right)^2 \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{xx''} h^l_{x''; j}\right)^2}} \right. \right. \nonumber \\ &\left. \left.- \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; j} \sum_{x'' \in B} P_{x'x''} h^l_{x''; j}}{\left( \sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{xx''} h^l_{x''; j}\right)^2 } \right)^3} \right)^2 \right] + \mu^2 \,.
\end{align} In the infinite $|B|$ limit, only one term can contribute: \begin{align} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{x,x' \in B} \sum_{j=1}^{N_l} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx'} P_{xx'}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2}\right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[ \sum_{j=1}^{N_l} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{j=1}^{N_l} \left( \left[\frac{1}{|B|} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \right] \frac{|B|-1}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right) \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{\sigma_w^2}{N_l} \sum_{j=1}^{N_l} \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, \delta_{xx'} )} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where $x'$ is a dummy index, just to label the off-diagonal term. We used \cref{conjecture:proj} and \cref{conjecture:tilde_h} to get the result. \begin{conjecture}[Projected Norm]\label{conjecture:proj} In the infinite width limit and at large depth $l$, $\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2$ converges to a deterministic value $\frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right)$ as the batch size $|B| \rightarrow \infty$. \end{conjecture} \begin{proof}[Non-rigorous ``proof''] In the infinite width limit, $h^l_{x; j}$ is sampled from a Gaussian distribution $\mathcal N(0, \mathcal K^l_{xx'})$, where the value of $\mathcal K^l_{xx'}$ depends only on whether $x$ equals $x'$ or not.
We first simplify the formula: \begin{align}\label{eq:proj} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ =& \frac{1}{|B|} \sum_{x', x'' \in B} P_{x' x''} h^l_{x'; j} h^l_{x''; j} \nonumber \\ =& \frac{1}{|B|} \left(\sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x', x'' \in B} h^l_{x';j} h^l_{x'';j} \right) \nonumber \\ =& \frac{1}{|B|} \left(\frac{|B|-1}{|B|} \sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x' \neq x''}^B h^l_{x';j} h^l_{x'';j} \right) \,. \end{align} In the infinite $|B|$ limit, the average over $x'$ and $x''$ can be replaced by an integration over their distribution (this is the non-rigorous step; for a complete rigorous proof see \citet{yang2018mean}): \begin{align} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{|B|-1}{|B|} \left(\mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x'}) } \left[(h^l_{x'; j})^2 \right] - \mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x''}) } \left[h^l_{x';j} h^l_{x'';j} \right] \right) \nonumber \\ = & \frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right) \,. \end{align} \end{proof} Next, we need to show how to calculate $\mathcal K^{l}_{xx'}$. Before that, we first find the distribution of $\tilde h^l_{x; i}$ in the infinite $|B|$ limit. \begin{conjecture}[$\tilde h^l_{x;i}$ distribution]\label{conjecture:tilde_h} In the infinite $|B|$ limit and the infinite width limit, assume that for large depth $\mathcal K^l_{xx}$ reaches a fixed point.
Then $\tilde h^l_{x;i}$ can be seen as sampled from a Gaussian distribution with the covariance matrix \begin{align} \lim_{|B| \rightarrow \infty} \mathbb E_{\theta} \left[\tilde h^l_{x;i} \tilde h^l_{y;j}\right] = & \mathbb E_{\theta} \left[ \frac{\sum_{x', x'' \in B} P_{x x'} P_{y x''} h^l_{x';i} h^l_{x'';j}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \right] \nonumber \\ =& \frac{\sum_{x', x'' \in B} P_{xx'} P_{yx''} \mathcal K^l_{xx'}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =& \frac{\mathcal K^l_{xy} - \frac{1}{|B|} \mathcal K^l_{\hat{x}\hat{x}} - \frac{|B|-1}{|B|} \mathcal K^l_{x\hat{x}}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =&\delta_{xy} \delta_{ij} \,, \end{align} where we used \cref{conjecture:proj} in the first line. \end{conjecture} For ReLU: \begin{align} \mathcal K^{l+1}_{xx'} = \begin{cases} \frac{\sigma_w^2}{2} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx} & \text{if $x=x'$} \\ \frac{\sigma_w^2}{2\pi} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx'} & \text{if $x \neq x'$} \,. \end{cases} \end{align} Then, for $\mu=0$, the APJN is independent of $\sigma_w^2$ and $\sigma_b^2$ in the infinite $|B|$ limit: \begin{align} \mathcal J^{l, l+1} = \frac{\pi}{\pi - 1} \,, \end{align} while for $\mu=1$: \begin{align} \mathcal J^{l, l+1} = 1 + O \left(\frac{1}{l} \right) \,. \end{align} This is also intuitively clear once one notices that the denominator of \eqref{eqapp:bn_apjn} grows with $l$ when $\mu=1$; thus the finite-$|B|$ corrections are further suppressed. We checked these results in \Cref{fig:bn_relu}. \section{JSL and JKL} \subsection{JSL} Since we already discussed our results for JL, we show the details for JSL in this section. The derivation for JL is almost identical.
Using JSL, the SGD update of $a^l_{\theta}$ at time $t$ is \begin{align}\label{eqapp:a_update_jsl} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \left(\mathcal J^{l', l'+1}(t) - 1 \right) \,. \end{align} We focus on ReLU networks to demonstrate the difference between JL and JSL. For ReLU networks, we can rewrite \eqref{eqapp:a_update_jsl} as \begin{align}\label{eqapp:j_update_jsl} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(\mathcal J^{l,l+1}(t) - 1\right) \,. \end{align} \paragraph{Learning Rate $\eta$} The learning rate limit $\eta_t$ is obtained by requiring that $|\sqrt{\mathcal J^{l,l+1}(t)} - 1|$ decreases monotonically with time $t$ for any $l$; we then have \begin{align}\label{eqapp:eta_t_jsl} \eta_t < \min_{1 \leq l \leq L} \left\{\frac{2}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(1 + \sqrt{\mathcal J^{l, l+1}(t)}\right)} \right \} \,. \end{align} Alternatively, solving \eqref{eqapp:j_update_jsl} with $\mathcal J^{l,l+1}(1)=1$ gives \begin{align} \eta_{\mathrm{1-step}} = \frac{1}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(0)} \left(1 + \sqrt{\mathcal J^{l, l+1}(0)}\right)} \,. \end{align} To study the dynamics of the optimization with a single learning rate $\eta$, we again estimate the maximum allowed learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align} \eta_0 = \frac{4}{\sigma_w^3 a_W^l \left(\sqrt{2} + a_W^l \sigma_w \right)} \,. \end{align} Compared to \eqref{eq:eta_0_jl}, which scales as $1/\log \sigma_w$ for large $\sigma_w$, the JSL $\eta_0$ scales as $\sigma_w^{-4}$ for large $\sigma_w$. This makes JSL a much worse choice than JL when $\mathcal J^{l,l+1} \gg 1$. In \Cref{figapp:relu_jac}, we check these results with \cref{alg:j_train} using JSL. All other details are the same as in \Cref{fig:relu_jac}.
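As with JL, the one-step rate can be verified by a direct iteration of \eqref{eqapp:j_update_jsl} (a minimal numeric sketch; names are ours):

```python
import math

def jsl_step(J, eta, sigma_w):
    # One SGD step of the ReLU APJN dynamics under JSL:
    # sqrt(J(t+1)) = sqrt(J(t)) - eta * sigma_w^2 * sqrt(J(t)) * (J(t) - 1)
    s = math.sqrt(J)
    return (s - eta * sigma_w ** 2 * s * (J - 1)) ** 2

sigma_w = 2.0
J0 = sigma_w ** 2 / 2   # J(0) = (a_W sigma_w)^2 / 2 with a_W = 1
eta_1 = 1.0 / (sigma_w ** 2 * math.sqrt(J0) * (1 + math.sqrt(J0)))
print(jsl_step(J0, eta_1, sigma_w))   # -> 1.0 (up to float error)
```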
The gap between $\eta_0$ and the trainable region can again be explained by analyzing \eqref{eqapp:eta_t_jsl}. Assume that at time $t$ the inequality $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ satisfying $\eta_0 > \eta > \eta_t$, there is still a chance that \Cref{alg:j_train} diverges at some $t>0$. For $\mathcal J^{l,l+1} > 1$, if $\eta_0 < \eta < \eta_t$ holds, \Cref{alg:j_train} may still converge at some $t>0$. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jsl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$. From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JSL; 2) we scan the $\eta$-$\sigma_w$ plane using $\sigma_b=0$ networks and tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JSL for $980$ steps. Only points with $0.8< \mathcal J^{l,l+1} <1.25$ are plotted. All networks are trained with the normalized CIFAR-10 dataset, $|B|=256$.} \label{figapp:relu_jac} \end{figure} \subsection{JKL} We mentioned the following loss function in the main text: \begin{align}\label{eqapp:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} There are other possible choices for controlling the forward pass; we discuss this one briefly. First, we calculate the derivative coming from the kernel terms.
We omit the $x$ and $t$ dependence and introduce $r^{l+1, l} = \mathcal K^{l+1} / \mathcal K^l$ for clarity: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l'\geq l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{2\lambda}{a_W^{l+1}} \log r^{l+1,l} + \frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l' > l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \,, \end{align} which has a structure similar to that of the APJN terms. Next, we pick a term with $l' > l$ in the parentheses: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{\lambda}{\mathcal K^{l'+1} \mathcal K^{l'}} \left(\mathcal K^{l'} \frac{\partial \mathcal K^{l'+1}}{\partial a_W^{l+1}} - \mathcal K^{l'+1} \frac{\partial \mathcal K^{l'}}{\partial a_W^{l+1}}\right)\log{r^{l'+1,l'}} \,, \end{align} which is independent of the depth for $\sigma_b=0$, and is always finite for $\sigma_b > 0$. We find that the update of $a_b^{l+1}$ coming from the forward pass term is subtle. For $\sigma_b=0$, similar to the discussion of the APJN terms, the update of $a_b^{l+1}$ is zero. For $\sigma_b > 0$, there are two possibilities: \begin{itemize} \item Unbounded activation functions with $\chi_{\mathcal K}^l > 1$: $\mathcal K^l \rightarrow \infty$ as $l\rightarrow \infty$; thus the updates of $a_b^{l+1}$ from the forward pass term vanish. \item Bounded activation functions, or unbounded activation functions with $\chi_{\mathcal K}^l < 1$: $\mathcal K^l \rightarrow \mathcal K^{\star}$; thus the contribution from the forward pass term is always $O(1)$. \end{itemize} Summarizing the discussion above, the term we introduced to tune the forward pass does not cause an exploding or vanishing gradients problem. In most cases, the forward pass term simply speeds up the update of $a_W^{l+1}$ and $a_b^{l+1}$. \paragraph{ReLU} Again, we use a ReLU MLP network as an example.
For $\sigma_b=0$, $\mathcal L_{\mathcal J \mathcal K \log}$ is equivalent to $(1+\lambda)\mathcal L_{\log}$ due to the scale invariance of the ReLU activation function, which can be checked using $\mathcal K^{l+1}(x,x) = \sigma_w^2 \mathcal K^l(x,x) / 2$. For finite $\sigma_b$, we use $\mathcal K^{l+1}(x,x) = \mathcal J^{l,l+1} \mathcal K^l(x,x) + \sigma_b^2$: \begin{align}\label{eqapp:relu_jkl} \mathcal L_{\mathcal J \mathcal K \log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\mathcal J^{l,l+1} + \frac{\sigma_b^2}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} In the ordered phase, $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 < 1$, and one can prove that $\mathcal K^l(x,x) \rightarrow \sigma_b^2 /(1 - (a_W^{l+1}(t)\sigma_w)^2/2)$ as $l \rightarrow \infty$; thus \eqref{eqapp:relu_jkl} is equivalent to JL at large depth. In the chaotic phase, $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 > 1$ and $\mathcal K^l(x,x) \rightarrow \infty$; \eqref{eqapp:relu_jkl} is then equivalent to JL with an extra overall factor of $1+\lambda$ at large depth. \section{ResMLP} \subsection{Network Recursion Relation} \paragraph{Input} The input image is chopped into an $N \times N$ grid of patches of size $P\times P$ pixels (often $16 \times 16$). The patches are fed into (the same) Linear layer to form a set of $N^2$ $d$-dimensional embeddings, referred to as channels. The resulting input to the ResMLP blocks is $h^0_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. Here and in what follows, Greek letters ($\mu,\nu$, etc.) index the patches, while Latin letters ($i,j$, etc.) index the channels. Note that in practice the above two operations are combined into a Convolutional layer with the filter size coinciding with the patch resolution ($P \times P \times C$) and the stride equal to $P$, so as to avoid overlap between patches. Here, $C$ is the number of channels in the original image.
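The patchify stem can be emulated with a plain reshape (our helper name; a real implementation uses the strided convolution described above):

```python
import numpy as np

def patchify(img, P):
    # Chop an (H, W, C) image into (N^2, P*P*C) flattened patches;
    # a shared Linear layer on the last axis then yields the d-dim embeddings.
    H, W, C = img.shape
    N = H // P
    patches = img[:N * P, :N * P].reshape(N, P, N, P, C)
    return patches.transpose(0, 2, 1, 3, 4).reshape(N * N, P * P * C)

img = np.arange(32 * 32 * 3, dtype=float).reshape(32, 32, 3)
print(patchify(img, 16).shape)  # -> (4, 768): N^2 = 4 patches, P*P*C = 768
```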
\paragraph{ResMLP block} The input embedding $h^0_{\mu i}$ is passed through a series of ($L$) self-similar ResMLP blocks, which output $h^L_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. In the following, we use the notation $1^{2,3}_{4}$ for the parameters, where 1 denotes the parameter, 2 denotes the block index, 3 denotes the specific action within the block, and 4 denotes the neural indices. A ResMLP block consists of the following operations: \begin{align}\label{appeq:resmlp_rec_full} &\texttt{AffineNorm1:} & a^{l+1}_{\mu i} &= \left( \alpha^{{l+1},a}_i h^{l}_{\mu i} + \beta^{{l+1},a}_i \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear1:} & b^{l+1}_{\mu i} &= \sum^{N^2}_{\nu=1} W^{{l+1},b}_{\mu\nu} a^{l+1}_{\nu i} + B^{{l+1},b}_{\mu} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual1:} & c^{l+1}_{\mu i} &= \mathcal E^{{l+1},c}_i b^{l+1}_{\mu i} + \mu_1 a^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ & \texttt{AffineNorm2:} & d^{l+1}_{\mu i} &= \left( \alpha^{{l+1},d}_i c^{l+1}_{\mu i} + \beta^{{l+1},d}_i \right) \,, &&\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear2:} & e^{l+1}_{\mu i} &= \sum^d_{j=1} W^{{l+1},e}_{ij} d^{l+1}_{\mu j} + B^{{l+1},e}_{i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{activation:} & f^{l+1}_{\mu i} &= \phi\left(e^{l+1}_{\mu i} \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{linear3:} & g^{l+1}_{\mu i} &= \sum^{4d}_{j=1} W^{{l+1},g}_{ij} f^{l+1}_{\mu j} + B^{{l+1},g}_{i}\,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual2:} & h^{l+1}_{\mu i} &= \mathcal E^{{l+1},h}_i g^{l+1}_{\mu i} + \mu_2 c^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \end{align} where the brackets on the right contain the dimensions of the outputs of the layers.
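At initialization (where the AffineNorm layers act as identities), the block recursion \eqref{appeq:resmlp_rec_full} can be sketched directly in numpy; the toy sizes, helper names, and Gaussian fan-in initialization below are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N2, dim = 4, 16  # N^2 patches, d channels (toy sizes)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

def resmlp_block(h, params, mu=1.0):
    Wb, Bb, We, Be, Wg, Bg, Ec, Eh = params
    a = h                              # AffineNorm1 is the identity at init
    b = Wb @ a + Bb[:, None]           # linear1: mixes patches
    c = Ec * b + mu * a                # residual1 with LayerScale E^c
    d = c                              # AffineNorm2 is the identity at init
    e = d @ We.T + Be                  # linear2: channels d -> 4d
    f = gelu(e)                        # activation
    g = f @ Wg.T + Bg                  # linear3: channels 4d -> d
    return Eh * g + mu * c             # residual2 with LayerScale E^h

eps = 0.1  # LayerScale init for a 12-block ResMLP
params = (rng.normal(0, 1 / np.sqrt(N2), (N2, N2)), np.zeros(N2),
          rng.normal(0, 1 / np.sqrt(dim), (4 * dim, dim)), np.zeros(4 * dim),
          rng.normal(0, 1 / np.sqrt(4 * dim), (dim, 4 * dim)), np.zeros(dim),
          eps * np.ones(dim), eps * np.ones(dim))
h = rng.normal(size=(N2, dim))
print(resmlp_block(h, params).shape)  # -> (4, 16)
```

Setting $\mathcal E = 0$ (with $\mu_1=\mu_2=1$) makes the block an exact identity map, which is the intuition behind initializing LayerScale with small entries.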
We consider linear layers with weights and biases initialized with standard fan\_in scaling. \texttt{linear1} acts on the patches, with parameters initialized as $W^{l+1,b}_{\mu\nu} \sim \mathcal N(0, \sigma_w^2/N^2)\,; B^{l+1,b}_{\mu} \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear2} acts on the channels, with parameters initialized as $W^{l+1,e}_{ij} \sim \mathcal N(0, \sigma_w^2/d)\,; B^{l+1,e}_i \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear3} also acts on the channels, with parameters initialized as $W^{l+1,g}_{ij} \sim \mathcal N(0, \sigma_w^2/(4d))\,; B^{l+1,g}_i \sim \mathcal N(0, \sigma_b^2)$. GELU is used as the activation function $\phi$. \texttt{AffineNorm1} and \texttt{AffineNorm2} perform an element-wise multiplication with a trainable vector of weights $\alpha^{l+1,a}_i, \alpha^{l+1,d}_i \in \mathbb R^d$ and an addition of a trainable bias vector $\beta^{l+1,a}_i, \beta^{l+1,d}_i \in \mathbb R^d$. Residual branches are scaled by a trainable vector $\mathcal E^{l+1,c}_i, \mathcal E^{l+1,h}_i \in \mathbb R^{d}$ (\texttt{LayerScale}), whereas the skip connections are scaled by scalar strengths $\mu_1$ and $\mu_2$. \paragraph{Output} The action of the blocks is followed by an Average-Pooling layer, to convert the output to a $d$-dimensional vector. This vector is fed into a linear classifier that gives the output of the network $h^{L+1}_i$. \subsection{NNGP Kernel Recursion Relation} At initialization, $\alpha^{l+1,a}_i = \alpha^{l+1,d}_i = \mathbf 1_d$ and $\beta^{l+1,a}_i = \beta^{l+1,d}_i = \mathbf 0_d$. Thus, AffineNorm layers perform identity operations at initialization. \texttt{LayerScale} is initialized as $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$, where $\mathcal E$ is chosen to be a small scalar. (For example, $\mathcal E$ is taken to be $0.1$ for a 12-block ResMLP and $10^{-5}$ for a 24-block ResMLP network.) Additionally, we also take $\mu_1 = \mu_2 = \mu$.
With these simplifications, we can obtain the recursion relation for the diagonal part of the Neural Network Gaussian Process (NNGP) kernel for the ResMLP block-outputs. We note that the full NNGP kernel $\mathcal K^l_{\mu\nu;ij}$ is a tensor in $\mathbb R^{N^2} \otimes \mathbb R^{N^2} \otimes \mathbb R^{d} \otimes \mathbb R^{d}$. Here, we focus on its diagonal part $\mathcal K^l_{\mu\mu;ii}$. For clarity, we drop the subscripts ($\mu\mu;ii$). The diagonal part of the NNGP kernel for a block output $h^l_{\mu i}$ is defined as \begin{align}\label{appeq:remlp_nngpk} \mathcal K^l \equiv \mathbb E_\theta \left[ h^l_{\mu i} h^l_{\mu i} \right] \,, \end{align} which is independent of the patch and channel indices, $\mu$ and $i$, in the infinite width limit. The recursion relation can be obtained by propagating the NNGP kernel through a block. For clarity, we define NNGP kernels for the intermediate outputs within the blocks. For example, $\mathcal K^{l+1}_a \equiv \mathbb{E}_{\theta} \left[ a^{l+1}_{\mu i} a^{l+1}_{\mu i} \right]$, $\mathcal K^{l+1}_b \equiv \mathbb{E}_{\theta} \left[ b^{l+1}_{\mu i} b^{l+1}_{\mu i} \right]$, etc.
\begin{align}\label{appeq:resmlp_nngpk_rec} &\texttt{AffineNorm1:} & \mathcal K^{l+1}_a &= \mathcal K^l \,, \\ \nonumber &\texttt{linear1:} & \mathcal K^{l+1}_b &= \sigma_w^2 \mathcal K^{l+1}_a + \sigma_b^2 \\ &&&= \sigma_w^2 \mathcal K^l + \sigma_b^2 \,, \\ \nonumber &\texttt{residual1:} & \mathcal K^{l+1}_c &= \mathcal E^2 \mathcal K^{l+1}_b + \mu^2 \mathcal K^l \\ \nonumber &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \\ &&&= \left( \mu^2 + \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \,, \\ \nonumber &\texttt{AffineNorm2:} & \mathcal K^{l+1}_d &= \mathcal K^{l+1}_c \\ &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \,, \\ \nonumber &\texttt{linear2:} & \mathcal K^{l+1}_e &= \sigma_w^2 \mathcal K^{l+1}_d + \sigma_b^2 \\ &&&= \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \,, \\ \nonumber &\texttt{activation:} & \mathcal K^{l+1}_f &= \frac{\mathcal K^{l+1}_e}{4} + \frac{\mathcal K^{l+1}_e}{2\pi} \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\left( \mathcal K^{l+1}_e \right)^2}{\pi \left( 1 + \mathcal K^{l+1}_e \right) \sqrt{1 + 2\mathcal K^{l+1}_e}} \\ \nonumber &&&\equiv \mathcal G \left[ \mathcal K^{l+1}_e \right] \\ &&&= \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \\ \nonumber &\texttt{linear3:} & \mathcal K^{l+1}_g &= \sigma_w^2 \mathcal K^{l+1}_f + \sigma_b^2 \\ &&&= \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \,, \\ \nonumber &\texttt{residual2:} & \mathcal K^{l+1} &= \mathcal E^2 \mathcal K^{l+1}_g + \mu^2 \mathcal K^{l+1}_c \\ \nonumber &&&= \mu^2 \left( \left( \mu^2 + \mathcal E^2
\sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \right) + \\ \nonumber &&& \quad + \mathcal E^2 \, \left\{ \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \right\} \\ \nonumber &&&= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \\ &&&\quad + \mathcal E^2 \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \end{align} where we have defined \begin{equation} \mathcal G [z] = \frac{z}{4} + \frac{z}{2\pi} \arcsin{\left( \frac{z}{1 + z} \right)} + \frac{\left( z \right)^2}{\pi \left( 1 + z \right) \sqrt{1 + 2z}} \,. \end{equation} Thus, we have a recursion relation, representing $\mathcal K^{l+1}$ in terms of $\mathcal K^l$. As a side note, if we replace the \texttt{GELU} activation function with \texttt{ReLU}, the relation simplifies greatly, offering us intuition. Specifically, $\mathcal G[z]$ gets replaced by $z/2$ in this case. This gives us the following recursion relation for ResMLP with \texttt{ReLU}. \begin{align} \nonumber \mathcal K^{l+1} &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^2 \sigma_w^2 \left( \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right) \\ &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 + \frac{1}{2} \mu^2 \mathcal E^2 \sigma_w^4 + \frac{1}{2} \mathcal E^4 \sigma_w^6 \right) \mathcal K^l + \left(1 + \mu^2 + \frac{1}{2}\sigma_w^2 + \frac{1}{2} \mathcal E^2 \sigma_w^4 \right) \mathcal E^2 \sigma_b^2 \,. \end{align} \subsection{Jacobian Recursion Relation} Next, we calculate the APJN for ResMLP, between two consecutive blocks.
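Before doing so, the GELU expectation $\mathcal G$ entering the recursion above can be sanity-checked against a direct Monte Carlo estimate of $\mathbb E[\phi(h)^2]$ for $h \sim \mathcal N(0, z)$, with the exact GELU $\phi(x) = x\,\Phi(x)$; a quick sketch (sample sizes and tolerances are arbitrary):

```python
import numpy as np
from scipy.special import erf

def G(z):
    # closed form for E[phi(h)^2], h ~ N(0, z), phi = GELU, as in the recursion
    return (z / 4 + z / (2 * np.pi) * np.arcsin(z / (1 + z))
            + z**2 / (np.pi * (1 + z) * np.sqrt(1 + 2 * z)))

def gelu(x):
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))  # exact GELU: x * Phi(x)

rng = np.random.default_rng(0)
for z in (0.5, 1.0, 2.0):
    h = rng.normal(0.0, np.sqrt(z), size=2_000_000)
    mc = np.mean(gelu(h)**2)
    assert abs(mc - G(z)) < 1e-2  # Monte Carlo agrees with the closed form
```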
For clarity, we first derive the expression for the partial derivative of the $(l+1)^{\text{th}}$ block output $h^{l+1}_{\mu i}$ with respect to the $l^{\text{th}}$ block output $h^l_{\nu j}$. \begin{align}\label{appeq:resmlp_derivative} \nonumber \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} &= \mathcal E^{l+1,h}_i \frac{\partial g^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \frac{\partial f^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) \frac{\partial e^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial d^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial c^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m \frac{\partial b^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \frac{\partial a^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \mathcal E^{l+1,c}_i \frac{\partial b^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \frac{\partial a^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d \sum_{\lambda=1}^{N^2} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m W^{l+1,b}_{\mu\lambda}
\frac{\partial a^{l+1}_{\lambda m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \delta_{\mu\nu} \delta_{mj} + \\ \nonumber &\quad + \mu_2 \mathcal E^{l+1,c}_i \sum_{\lambda=1}^{N^2} W^{l+1,b}_{\mu\lambda} \frac{\partial a^{l+1}_{\lambda i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^{l+1,h}_i \mathcal E^{l+1,c}_j \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \\ \nonumber &\quad + \mu_1 \mathcal E^{l+1,h}_i \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \mu_2 \mathcal E^{l+1,c}_i \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^2 \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \mu \mathcal E \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \\ &\quad + \mu\mathcal E \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu^2 \delta_{\mu\nu} \delta_{ij} \,, \end{align} where in the last step, we have used the initial values of the parameters : $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$ and $\mu_1 = \mu_2 = \mu$. Next, we calculate the APJN using \eqref{appeq:resmlp_derivative}. We will perform the calculation in the limit of large $N^2$ and $d$; dropping all the corrections of order $\frac{1}{N^2}$ and $\frac{1}{d}$. 
\begin{align}\label{appeq:resmlp_apjn} \nonumber \mathcal J^{l,l+1} &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \mathcal E^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} \delta_{\mu\nu} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{ij} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \mu^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{\mu\nu} \delta_{ij} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \frac{1}{4} \mathcal E^4 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \frac{1}{4} \mu^2\mathcal E^2 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \delta_{\mu\nu} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} + \right. \\ \nonumber &\qquad\qquad\quad + \left. \mu^2\mathcal E^2 \sigma_w^2 N^2 d + \mu^4 N^2 d \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \frac{1}{4} \mathcal E^4 \sigma_w^6 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \frac{1}{4} \mu^2\mathcal E^2 \sigma_w^4 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \right. \\ \nonumber &\qquad\qquad\quad \left.
+ (\mathcal E^2 \sigma_w^2 + \mu^2) \mu^2 N^2 d \right] \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \mathbb E_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \right) \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H_e [\mathcal K^{l+1}_e] \right) \\ &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H [\mathcal K^l] \right) \,, \end{align} where we have defined \begin{align} \nonumber \mathcal H_e [\mathcal K^{l+1}_e] &\equiv \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \\ &= \frac{1}{4} + \frac{1}{2\pi} \left( \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\mathcal K^{l+1}_e (3 + 5 \mathcal K^{l+1}_e)}{(1 + \mathcal K^{l+1}_e) (1 + 2\mathcal K^{l+1}_e)^{3/2}} \right) \,. \end{align} We also write $\mathcal K^{l+1}_e$ in terms of $\mathcal K^l$ and define \begin{align} \nonumber \mathcal H [\mathcal K^l] &= \mathcal H_e [\mathcal K^{l+1}_e] \\ &= \mathcal H_e \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 (\sigma_w^2 \mathcal K^l + \sigma_b^2) \right) + \sigma_b^2 \right] \,. \end{align} It is clear from \eqref{appeq:resmlp_apjn} that for $\mu=1$, $\mathcal J^{l,l+1} > 1$, placing the network away from criticality. However, $\mathcal J^{l,l+1}$ can be tuned arbitrarily close to criticality by taking $\mathcal E$ to be small at $t=0$. This explains the necessity of \texttt{LayerScale} with a small initial value in the ResMLP architecture. We note that the results in \eqref{appeq:resmlp_apjn} simplify greatly when \texttt{ReLU} is used instead of \texttt{GELU} as $\phi$; we mention the simplified expressions here to provide intuition. In this case, $\mathcal H_e [\mathcal K^{l+1}_e] = \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] = \frac{1}{2}$.
This gives us the simple result \begin{equation} \mathcal J^{l,l+1} = (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \frac{1}{2}\mathcal E^2 \sigma_w^4 \right) \end{equation} for the \texttt{ReLU} activation function. \section{Experimental Details} \Cref{fig:relu_jac}: The second panel is made of 1200 points; each point takes around $1.5$ minutes running on a single NVIDIA RTX 3090 GPU. \Cref{fig:bn_relu}: We scanned over $400$ points for each phase diagram, which overall takes around $5$ hours on a single NVIDIA RTX 3090 GPU. \Cref{fig:resmlp}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset. We use SGD with $\eta=0.03$ and $N_v=2$ for $392$ steps, $|B|=256$. The training curves we report are selected from the best combination of the following hyperparameters: $\mathrm{lr}=\{0.005, 0.01\}$, $\mathrm{weight\; decay}=\{10^{-5}, 10^{-4}\}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flip, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated-augmentation \citep{hoffer2020aug}. All of our results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{fig:vgg}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset for $392$ steps with $\eta=0.01$, $|B|=128$ and $N_v=3$. The training curves we report are selected from the best combination of the following hyperparameters: $\mathrm{lr}=\{0.001, 0.002, 0.005, 0.01, 0.02\}$, $\mathrm{weight\; decay}=\{0.0005, 0.001, 0.002, 0.005 \}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flip, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated-augmentation \citep{hoffer2020aug}. We froze the auxiliary parameters instead of scaling the weights. All of our results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{figapp:relu_jac}: Exactly the same as \Cref{fig:relu_jac}, except we used JSL. \section{Theoretical Details} \subsection{Factorization of APJN} We prove the factorization property using MLP networks in the infinite width limit.
This proof works for any iid $\theta^l$ for which $|\theta^l|$ has finite $2+\delta$ moments. We start from the definition of the partial Jacobian, and set $a^l_{\theta}=1$ for simplicity. \begin{align} \nonumber \mathcal{J}^{l,l+2} &\equiv \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_l} \sum_{k,m=1}^{N_{l+1}} \left(W^{l+2}_{ik} \phi'(h^{l+1}_k) \right) \left(W^{l+2}_{im} \phi'(h^{l+1}_m) \right) \left(\frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_m}{\partial h^{l}_j}\right) \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{\theta} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{W^{l+1}, b^{l+1}} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) W^{l+1}_{kj} W^{l+1}_{kj} \right] \mathbb E_{\theta} \left[\phi'(h^l_j) \phi'(h^l_j) \right ] \\ &= \mathcal J^{l,l+1} \mathcal J^{l+1, l+2} + O\left(\frac{\chi^l_{\Delta}}{N_l} \right) \,, \end{align} where the $1/N_l$ correction vanishes in the infinite width limit. We used the fact that in the infinite width limit $h^{l+1}_k$ is independent of $h^l_k$, and calculated the first expectation value in the fourth line using integration by parts. Recall that for a single input (omitting $x$) \begin{align} \chi^{l}_{\Delta} \equiv (a_W^{l+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l}_i) \phi''(h^{l}_i) + \phi'(h^{l}_i) \phi'''(h^{l}_i) \right] \,.
\end{align} \subsection{Exploding and Vanishing Gradients} We show details for deriving \eqref{eq:tauto_aw} for MLP networks, assuming $l'>l$: \begin{align} \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial \mathbb E_{h^{l'}_i \sim \mathcal N(0, \mathcal K^{l'}(x,x))} \left[(a_W^{l'+1} \sigma_w)^2 \phi'(h^{l'}_i) \phi'(h^{l'}_i) \right]}{\partial a_W^{l+1}} \nonumber \\ =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial}{\partial \mathcal K^{l'}(x,x)} \left(\frac{{(a_W^{l'+1} \sigma_w)^2}}{\sqrt{2\pi \mathcal K^{l'}(x,x)}} \int \phi'(h^{l'}_i) \phi'(h^{l'}_i) e^{-\frac{h^{l'}_i h^{l'}_i}{2\mathcal K^{l'}(x,x)}} dh^{l'}_i \right) \frac{\partial \mathcal K^{l'}(x,x)}{\partial a_W^{l+1}}\nonumber \\ =& \frac{2}{\mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \frac{\partial \mathcal K^{l'}(x, x)}{\partial \mathcal K^{l'-1}(x,x)} \cdots \frac{\partial \mathcal K^{l+1}(x,x)}{\partial a_W^{l+1}} \nonumber \\ =& \frac{4}{a_W^{l+1} \mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x,x) \,, \end{align} where we calculated the derivative with respect to $\mathcal K^{l'}(x,x)$, and then used integration by parts to get the third line. The derivation for \eqref{eq:tauto_ab} is similar. \subsection{ReLU Details} \paragraph{Learning rate $\eta$} The learning rate bound \eqref{eq:eta_t} is obtained by requiring $|\sqrt{\mathcal J^{l,l+1}(t)} -1 |$ to decrease monotonically with time $t$. \paragraph{\texorpdfstring{Derivation for $\chi_{\Delta}^l=0$}{}} This is straightforward to show by direct calculation in the infinite width limit. We set $a_W^l=1$ and ignore the neuron index $i$ for simplicity.
\begin{align} \chi^{l}_{\Delta} =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left[\phi''(h^{l}) \phi''(h^{l}) + \phi'(h^{l}) \phi'''(h^{l}) \right] \nonumber \\ =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \phi'(h^l) \phi''(h^l) \right) \right ] \nonumber \\ =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{h^l}{\mathcal K^l(x,x)} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ =& 0 \,, \end{align} where $\Theta(h^l)$ is the Heaviside step function and $\delta(h^l)$ is the Dirac delta function. To get the last line we used $h^l \delta(h^l)=0$. \subsection{\texorpdfstring{\Cref{conj:bn}}{}} Here we offer a non-rigorous explanation for the conjecture in the infinite $|B|$ and the infinite width limit. We use an MLP model with $a_{\theta}^l=1$ as an example. We consider \begin{align} h^{l+1}_{x; i} = \sum_{j=1}^N W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + b^{l+1}_i + \mu h^l_{x; i}\,, \end{align} where \begin{align}\label{eq:BN} \tilde h^l_{x; i} =& \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \nonumber \\ =& \frac{\sqrt{|B|} \sum_{x' \in B} P_{x x'} h^l_{x'; i}}{\sqrt{ \sum_{x \in B} \left(\sum_{x' \in B} P_{x x'} h^l_{x'; i}\right)^2 }} \,, \end{align} where $P_{xx'} \equiv \delta_{xx'} - 1/ |B|$. It is a projector in the sense that $\sum_{x' \in B} P_{xx'} P_{x'x''} = P_{xx''}$.
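These properties of $P$ are easy to verify numerically; a small numpy sketch (batch size arbitrary):

```python
import numpy as np

Bsz = 8                                      # batch size |B|, arbitrary
P = np.eye(Bsz) - np.ones((Bsz, Bsz)) / Bsz  # P_{xx'} = delta_{xx'} - 1/|B|

# projector property: sum_{x'} P_{xx'} P_{x'x''} = P_{xx''}
assert np.allclose(P @ P, P)

# P centers a batch: sum_{x'} P_{xx'} h_{x'} subtracts the batch mean
rng = np.random.default_rng(0)
h = rng.normal(size=Bsz)
assert np.allclose(P @ h, h - h.mean())

# the normalized preactivation has zero mean and unit second moment per batch
th = np.sqrt(Bsz) * (P @ h) / np.sqrt(np.sum((P @ h) ** 2))
assert np.isclose(th.mean(), 0.0) and np.isclose(np.mean(th**2), 1.0)
```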
Derivative of the normalized preactivation: \begin{align} \frac{\partial \tilde h^l_{x; i}}{\partial h^l_{x';j}} = \sqrt{|B|} \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x'';i}\right)^2}} - \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; i} \sum_{x'' \in B} P_{x'x''} h^l_{x''; i}}{\left( \sqrt{\sum_{x\in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x'';i}\right)^2 } \right)^3} \right)\delta_{ij} \,. \end{align} Then the one layer APJN: \begin{align}\label{eqapp:bn_apjn} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \sum_{x,x' \in B} \sum_{j=1}^{N_l} \mathbb E_{\theta} \left[\left(\phi'(\tilde h^l_{x; j}) \right)^2 \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x''; j}\right)^2}} \right. \right. \nonumber \\ &\left. \left.- \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; j} \sum_{x'' \in B} P_{x'x''} h^l_{x''; j}}{\left( \sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x''; j}\right)^2 } \right)^3} \right)^2 \right] + \mu^2 \,. 
\end{align} In the infinite $|B|$ limit, only one term contributes: \begin{align} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{x,x' \in B} \sum_{j=1}^{N_l} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx'} P_{xx'}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2}\right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[ \sum_{j=1}^{N_l} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{j=1}^{N_l} \left( \left[\frac{1}{|B|} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \right] \frac{|B|-1}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right) \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ \xrightarrow{B \rightarrow \infty} & \frac{\sigma_w^2}{N_l} \sum_{j=1}^{N_l} \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, \delta_{xx'} )} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where $x'$ is a dummy index, just to label the off-diagonal term. We used \cref{conjecture:proj} and \cref{conjecture:tilde_h} to get the result. \begin{conjecture}[Projected Norm]\label{conjecture:proj} In the infinite width limit, for large depth $l$, $\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2$ converges to the deterministic value $\frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right)$ as the batch size $|B| \rightarrow \infty$. \end{conjecture} \begin{proof}[Non-rigorous ``proof''] In the infinite width limit $h^l_{x; j}$ is sampled from a Gaussian distribution $\mathcal N(0, \mathcal K^l_{xx'})$, where the value $\mathcal K_{xx'}$ only depends on whether $x = x'$ or not.
We first simplify the formula: \begin{align}\label{eq:proj} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ =& \frac{1}{|B|} \sum_{x', x'' \in B} P_{x' x''} h^l_{x'; j} h^l_{x''; j} \nonumber \\ =& \frac{1}{|B|} \left(\sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x', x'' \in B} h^l_{x';j} h^l_{x'';j} \right) \nonumber \\ =& \frac{1}{|B|} \left(\frac{|B|-1}{|B|} \sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x' \neq x''}^B h^l_{x';j} h^l_{x'';j} \right) \,. \end{align} The averages over $x'$ and $x''$ in the infinite $|B|$ limit can be replaced by integration over their distribution (this is the non-rigorous step; for a completely rigorous proof see \citet{yang2018mean}): \begin{align} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{|B|-1}{|B|} \left(\mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{xx'}) } \left[(h^l_{x'; j})^2 \right] - \mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x''}) } \left[h^l_{x';j} h^l_{x'';j} \right] \right) \nonumber \\ = & \frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right) \,. \end{align} \end{proof} Next, we need to show how to calculate $\mathcal K^{l}_{xx'}$. Before that, we first find the distribution of $\tilde h^l_{x; i}$ in the infinite $|B|$ limit. \begin{conjecture}[$\tilde h^l_{x;i}$ distribution]\label{conjecture:tilde_h} In the infinite $|B|$ limit and the infinite width limit, assume that for large depth $\mathcal K^l_{xx}$ reaches a fixed point.
Then $\tilde h^l_{x;i}$ can be seen as sampled from a Gaussian distribution with the covariance matrix \begin{align} \lim_{|B| \rightarrow \infty} \mathbb E_{\theta} \left[\tilde h^l_{x;i} \tilde h^l_{y;j}\right] = & \mathbb E_{\theta} \left[ \frac{\sum_{x', x'' \in B} P_{x x'} P_{y x''} h^l_{x';i} h^l_{x'';j}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \right] \nonumber \\ =& \frac{\sum_{x', x'' \in B} P_{xx'} P_{yx''} \mathcal K^l_{x'x''}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =& \frac{\mathcal K^l_{xy} - \frac{1}{|B|} \mathcal K^l_{\hat{x}\hat{x}} - \frac{|B|-1}{|B|} \mathcal K^l_{x\hat{x}}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =&\delta_{xy} \delta_{ij} \,, \end{align} where we used \cref{conjecture:proj} in the first line. \end{conjecture} For ReLU: \begin{align} \mathcal K^{l+1}_{xx'} = \begin{cases} \frac{\sigma_w^2}{2} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx} & \text{if $x=x'$} \\ \frac{\sigma_w^2}{2\pi} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx'} & \text{if $x \neq x'$} \,. \end{cases} \end{align} Then for $\mu=0$ the APJN is independent of $\sigma_w^2$ and $\sigma_b^2$ in the infinite $|B|$ limit: \begin{align} \mathcal J^{l, l+1} = \frac{\pi}{\pi - 1} \,, \end{align} and for $\mu=1$: \begin{align} \mathcal J^{l, l+1} = 1 + O \left(\frac{1}{l} \right) \,. \end{align} This is also intuitively clear: the denominator of \eqref{eqapp:bn_apjn} grows with $l$ when $\mu=1$, so the finite $|B|$ corrections are further suppressed. We checked our results in \Cref{fig:bn_relu}. \section{JSL and JKL} \subsection{JSL} Since we already discussed our results for JL, we show details for JSL in this section. The derivation for JL is almost identical.
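Before diving into the derivation, a minimal numerical sketch contrasts the gradients of the two losses with respect to a single $\mathcal J^{l,l+1}$ (here we take $\mathcal L_{\rm JSL} = \frac{1}{2}\sum_l (\mathcal J^{l,l+1}-1)^2$ and $\mathcal L_{\log} = \frac{1}{2}\sum_l [\log \mathcal J^{l,l+1}]^2$, matching the update rules used below):

```python
import numpy as np

def jl_grad(J):   # d/dJ of (1/2) [log J]^2
    return np.log(J) / J

def jsl_grad(J):  # d/dJ of (1/2) (J - 1)^2
    return J - 1.0

# deep in the chaotic phase (J >> 1) the JSL gradient grows linearly in J,
# while the JL gradient decays like log(J)/J; the allowed learning rate
# therefore shrinks much faster for JSL
for J in (1.5, 10.0, 100.0):
    print(J, jl_grad(J), jsl_grad(J))

assert jsl_grad(100.0) > 1e3 * jl_grad(100.0)
```

Both gradients vanish at the critical point $\mathcal J = 1$, so the two losses share their minimum; they differ only in how aggressively they push an off-critical network.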
Using JSL, the SGD update of $a^{l+1}_{\theta}$ at time $t$ is \begin{align}\label{eqapp:a_update_jsl} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \left(\mathcal J^{l', l'+1}(t) - 1 \right) \,. \end{align} We focus on ReLU networks to demonstrate the difference between JL and JSL. For ReLU networks, we can rewrite \eqref{eqapp:a_update_jsl} as \begin{align}\label{eqapp:j_update_jsl} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(\mathcal J^{l,l+1}(t) - 1\right) \,. \end{align} \paragraph{Learning Rate $\eta$} The learning rate limit $\eta_t$ is obtained by requiring $|\sqrt{\mathcal J^{l,l+1}(t)} - 1|$ to decrease monotonically with time $t$ for any $l$; we then have \begin{align}\label{eqapp:eta_t_jsl} \eta_t < \min_{1 \leq l \leq L} \left\{\frac{2}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(1 + \sqrt{\mathcal J^{l, l+1}(t)}\right)} \right \} \,. \end{align} Alternatively, by solving \eqref{eqapp:j_update_jsl} with $\mathcal J^{l,l+1}(1)=1$: \begin{align} \eta_{\mathrm{1-step}} = \frac{1}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(0)} \left(1 + \sqrt{\mathcal J^{l, l+1}(0)}\right)} \,. \end{align} For the dynamics of the optimization with a single learning rate $\eta$, we again estimate the maximum allowed learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align} \eta_0 = \frac{4}{\sigma_w^3 a_W^{l+1} \left(\sqrt{2} + a_W^{l+1} \sigma_w \right)} \,. \end{align} Compared to \eqref{eq:eta_0_jl}, which scales as $1/\log \sigma_w$ for large $\sigma_w$, $\eta_0$ for JSL scales as $\sigma_w^{-4}$ for large $\sigma_w$. This makes JSL a much worse choice than JL when $\mathcal J^{l,l+1} \gg 1$. In \Cref{figapp:relu_jac}, we checked our results with \cref{alg:j_train} using JSL. All other details are the same as in \Cref{fig:relu_jac}.
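The one-step property of $\eta_{\mathrm{1-step}}$ and the monotone decrease of $|\sqrt{\mathcal J^{l,l+1}(t)} - 1|$ under the bound can be checked by iterating \eqref{eqapp:j_update_jsl} directly; a minimal sketch:

```python
import numpy as np

def jsl_step(sqrtJ, eta, sigma_w):
    # update rule: sqrt(J(t+1)) = sqrt(J(t)) - eta sigma_w^2 sqrt(J(t)) (J(t) - 1)
    return sqrtJ - eta * sigma_w**2 * sqrtJ * (sqrtJ**2 - 1.0)

sigma_w = 2.0
J0 = sigma_w**2 / 2.0          # J(0) = (a_W sigma_w)^2 / 2 with a_W = 1
s0 = np.sqrt(J0)

# eta_1-step drives J to exactly 1 in a single update
eta1 = 1.0 / (sigma_w**2 * s0 * (1.0 + s0))
assert np.isclose(jsl_step(s0, eta1, sigma_w), 1.0)

# a fixed eta below the bound gives monotone convergence of |sqrt(J) - 1|
eta, s = 0.8 * eta1, s0
for _ in range(50):
    s_next = jsl_step(s, eta, sigma_w)
    assert abs(s_next - 1.0) <= abs(s - 1.0)
    s = s_next
assert abs(s - 1.0) < 1e-6
```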
The gap between $\eta_0$ and the trainable regions can again be explained by analyzing \eqref{eqapp:eta_t_jsl}. Assume that at time $t$, $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 > \eta > \eta_t$, there is still a chance that \Cref{alg:j_train} diverges at some $t>0$. For $\mathcal J^{l,l+1} > 1$, if $\eta_0 < \eta < \eta_t$ holds, \Cref{alg:j_train} may still converge at some $t>0$. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jsl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$. From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JSL; 2) we scan the $\eta$-$\sigma_w$ plane using $\sigma_b=0$ networks and tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JSL for $980$ steps. Only points with $0.8< \mathcal J^{l,l+1} <1.25$ are plotted. All networks are trained with the normalized CIFAR-10 dataset, $|B|=256$.} \label{figapp:relu_jac} \end{figure} \subsection{JKL} We mentioned the following loss function in the main text. \begin{align}\label{eqapp:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} There are other possible choices for controlling the forward pass; we discuss this one briefly. First we calculate the derivative coming from the kernel terms.
We omit the $x$ and $t$ dependence and introduce $r^{l+1, l} = \mathcal K^{l+1} / \mathcal K^l$ for clarity: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l'\geq l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{2\lambda}{a_W^{l+1}} \log r^{l+1,l} + \frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l' > l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \,, \end{align} which has a similar structure to the APJN terms. Next we pick a term with $l' > l$ in the parentheses: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{\lambda}{\mathcal K^{l'+1} \mathcal K^{l'}} \left(\mathcal K^{l'} \frac{\partial \mathcal K^{l'+1}}{\partial a_W^{l+1}} - \mathcal K^{l'+1} \frac{\partial \mathcal K^{l'}}{\partial a_W^{l+1}}\right)\log{r^{l'+1,l'}} \,, \end{align} which is independent of depth for $\sigma_b=0$, and is always finite for $\sigma_b > 0$. We find that the update of $a_b^{l+1}$ coming from the forward pass term is subtle. For $\sigma_b=0$, similar to the discussion of the APJN terms, the update of $a_b^{l+1}$ is zero. For $\sigma_b > 0$, there are two possibilities: \begin{itemize} \item Unbounded activation functions, when $\chi_{\mathcal K}^l > 1$: $\mathcal K^l \rightarrow \infty$ as $l\rightarrow \infty$, thus the updates of $a_b^{l+1}$ from the forward pass term vanish. \item Bounded activation functions, or unbounded activation functions with $\chi_{\mathcal K}^l < 1$: $\mathcal K^l \rightarrow \mathcal K^{\star}$, thus the contribution from the forward pass term is always $O(1)$. \end{itemize} Summarizing the discussion above, there is no exploding or vanishing gradients problem originating from the term we introduced to tune the forward pass. The forward pass term simply speeds up the updates of $a_W^{l+1}$ and $a_b^{l+1}$ in most cases. \paragraph{ReLU} Again we use a ReLU MLP network as an example.
For $\sigma_b=0$, $\mathcal L_{\mathcal J \mathcal K \log}$ is equivalent to $(1+\lambda)\mathcal L_{\log}$ due to the scale invariance property of the ReLU activation function, which can be checked by using $\mathcal K^{l+1}(x,x) = \sigma_w^2 \mathcal K^l(x,x) / 2$. For finite $\sigma_b$, we use $\mathcal K^{l+1}(x,x) = \mathcal J^{l,l+1} \mathcal K^l(x,x) + \sigma_b^2$: \begin{align}\label{eqapp:relu_jkl} \mathcal L_{\mathcal J \mathcal K \log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\mathcal J^{l,l+1} + \frac{\sigma_b^2}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} In the ordered phase $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 < 1$, one can prove that $\mathcal K^l(x,x) \rightarrow \sigma_b^2 /(1 - (a_W^{l+1}(t)\sigma_w)^2/2)$ as $l \rightarrow \infty$; thus \eqref{eqapp:relu_jkl} is equivalent to JLL at large depth. In the chaotic phase $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 > 1$ and $\mathcal K^l(x,x) \rightarrow \infty$, so \eqref{eqapp:relu_jkl} is equivalent to JLL with an extra overall factor $1+\lambda$ at large depth. \section{ResMLP} \subsection{Network Recursion Relation} \paragraph{Input} The input image is chopped into an $N \times N$ grid of patches of size $P\times P$ pixels (often $16 \times 16$). The patches are fed into (the same) Linear layer to form a set of $N^2$ $d$-dimensional embeddings, referred to as channels. The resulting input to the ResMLP blocks is $h^0_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. Here and in what follows, Greek letters ($\mu,\nu$ etc.) index the patches, while Latin letters ($i,j$ etc.) index the channels. Note that in practice, the above two operations are combined into a Convolutional layer with the filter size coinciding with the patch resolution ($P \times P \times C$) and the stride equal to $P$, so as to avoid overlap between patches. Here, $C$ is the number of channels in the original image.
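As a sanity check on the patching description above, the following plain-Python sketch (our own illustration; \texttt{patchify} is a hypothetical helper, not part of the ResMLP reference code) extracts the $N^2$ flattened patches that would then be fed to the shared Linear layer:

```python
def patchify(img, P):
    """Split an image, given as a nested list [H][W][C], into non-overlapping
    P x P patches, each flattened to a vector of length P * P * C.

    This reshape, followed by a single shared Linear layer, is numerically
    identical to the Convolutional layer described in the text (filter size
    P x P x C, stride P).  H and W are assumed to be divisible by P.
    """
    H, W, C = len(img), len(img[0]), len(img[0][0])
    patches = []
    for r in range(0, H, P):
        for c in range(0, W, P):
            flat = [img[r + dr][c + dc][ch]
                    for dr in range(P) for dc in range(P) for ch in range(C)]
            patches.append(flat)
    return patches  # N^2 patches in row-major order, with N = H // P
```

Since the stride equals the filter size, every pixel contributes to exactly one patch.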
\paragraph{ResMLP block} The input embedding $h^0_{\mu i}$ is passed through a series of ($L$) self-similar ResMLP blocks, which output $h^L_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. In the following, we denote the parameters as $\theta^{\,l+1,\,s}_{\,i}$, where $\theta$ stands for the parameter, $l+1$ for the block index, $s$ for the specific action within the block, and $i$ for the neural indices. A ResMLP block consists of the following operations. \begin{align}\label{appeq:resmlp_rec_full} &\texttt{AffineNorm1:} & a^{l+1}_{\mu i} &= \left( \alpha^{{l+1},a}_i h^{l}_{\mu i} + \beta^{{l+1},a}_i \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear1:} & b^{l+1}_{\mu i} &= \sum^{N^2}_{\nu=1} W^{{l+1},b}_{\mu\nu} a^{l+1}_{\nu i} + B^{{l+1},b}_{\mu} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual1:} & c^{l+1}_{\mu i} &= \mathcal E^{{l+1},c}_i b^{l+1}_{\mu i} + \mu_1 a^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ & \texttt{AffineNorm2:} & d^{l+1}_{\mu i} &= \left( \alpha^{{l+1},d}_i c^{l+1}_{\mu i} + \beta^{{l+1},d}_i \right) \,, &&\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear2:} & e^{l+1}_{\mu i} &= \sum^d_{j=1} W^{{l+1},e}_{ij} d^{l+1}_{\mu j} + B^{{l+1},e}_{i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{activation:} & f^{l+1}_{\mu i} &= \phi\left(e^{l+1}_{\mu i} \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{linear3:} & g^{l+1}_{\mu i} &= \sum^{4d}_{j=1} W^{{l+1},g}_{ij} f^{l+1}_{\mu j} + B^{{l+1},g}_{i}\,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual2:} & h^{l+1}_{\mu i} &= \mathcal E^{{l+1},h}_i g^{l+1}_{\mu i} + \mu_2 c^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \end{align} where the brackets on the right contain the dimensions of the outputs of the layers.
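The block above can be sketched in pure Python, assuming AffineNorm at its identity initialization ($\alpha = \mathbf 1$, $\beta = \mathbf 0$); this is our own minimal illustration with hypothetical parameter names, not the reference implementation:

```python
import math

def resmlp_block(h, params, mu=1.0):
    """One ResMLP block applied to h of shape [N^2][d] (nested lists).

    params = (Wb, Bb, We, Be, Wg, Bg, eps): patch-mixing weights Wb [N^2][N^2],
    channel weights We [4d][d] and Wg [d][4d], biases, and LayerScale eps.
    AffineNorm is taken at its identity initialization (alpha=1, beta=0).
    """
    Wb, Bb, We, Be, Wg, Bg, eps = params
    N2, d = len(h), len(h[0])
    gelu = lambda z: 0.5 * z * (1.0 + math.erf(z / math.sqrt(2.0)))
    a = h  # AffineNorm1 is the identity at initialization
    # linear1 mixes patches: b[m][i] = sum_n Wb[m][n] a[n][i] + Bb[m]
    b = [[sum(Wb[m][n] * a[n][i] for n in range(N2)) + Bb[m]
          for i in range(d)] for m in range(N2)]
    c = [[eps * b[m][i] + mu * a[m][i] for i in range(d)] for m in range(N2)]
    dd = c  # AffineNorm2 is the identity at initialization
    # linear2 mixes channels d -> 4d, then GELU, then linear3 maps 4d -> d
    e = [[sum(We[i][j] * dd[m][j] for j in range(d)) + Be[i]
          for i in range(4 * d)] for m in range(N2)]
    f = [[gelu(z) for z in row] for row in e]
    g = [[sum(Wg[i][j] * f[m][j] for j in range(4 * d)) + Bg[i]
          for i in range(d)] for m in range(N2)]
    return [[eps * g[m][i] + mu * c[m][i] for i in range(d)] for m in range(N2)]
```

With $\mathcal E = 0$ and $\mu = 1$ the block reduces to the identity, which illustrates why a small LayerScale keeps the block close to a skip connection at initialization.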
We consider linear layers with weights and biases initialized with standard fan\_in. \texttt{linear1} acts on the patches, with parameters initialized as $W^{l+1,b}_{\mu\nu} \sim \mathcal N(0, \sigma_w^2/N^2)\,; B^{l+1,b}_{\mu} \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear2} acts on the channels, with parameters initialized as $W^{l+1,e}_{ij} \sim \mathcal N(0, \sigma_w^2/d)\,; B^{l+1,e}_i \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear3} also acts on the channels, with parameters initialized as $W^{l+1,g}_{ij} \sim \mathcal N(0, \sigma_w^2/(4d))\,; B^{l+1,g}_i \sim \mathcal N(0, \sigma_b^2)$. GELU is used as the activation function $\phi$. \texttt{AffineNorm1} and \texttt{AffineNorm2} perform an element-wise multiplication with a trainable vector of weights $\alpha^{l+1,a}_i, \alpha^{l+1,d}_i \in \mathbb R^d$ and an addition of a trainable bias vector $\beta^{l+1,a}_i, \beta^{l+1,d}_i \in \mathbb R^d$. Residual branches are scaled by a trainable vector $\mathcal E^{l+1,c}_i, \mathcal E^{l+1,h}_i \in \mathbb R^{d}$ (\texttt{LayerScale}), whereas the skip connections are scaled by scalar strengths $\mu_1$ and $\mu_2$. \paragraph{Output} The action of the blocks is followed by an Average-Pooling layer, to convert the output to a $d$-dimensional vector. This vector is fed into a linear classifier that gives the output of the network $h^{L+1}_i$. \subsection{NNGP Kernel Recursion Relation} At initialization, $\alpha^{l+1,a}_i = \alpha^{l+1,d}_i = \mathbf 1_d$ and $\beta^{l+1,a}_i = \beta^{l+1,d}_i = \mathbf 0_d$. Thus, AffineNorm layers perform identity operations at initialization. \texttt{LayerScale} is initialized as $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$, where $\mathcal E$ is chosen to be a small scalar. (For example, $\mathcal E$ is taken to be $0.1$ for a 12-block ResMLP network and $10^{-5}$ for a 24-block ResMLP network.) Additionally, we also take $\mu_1 = \mu_2 = \mu$.
With these simplifications, we can obtain the recursion relation for the diagonal part of the Neural Network Gaussian Process (NNGP) kernel for the ResMLP block-outputs. We note that the full NNGP kernel $\mathcal K^l_{\mu\nu;ij}$ is a tensor in $\mathbb R^{N^2} \otimes \mathbb R^{N^2} \otimes \mathbb R^{d} \otimes \mathbb R^{d}$. Here, we focus on its diagonal part $\mathcal K^l_{\mu\mu;ii}$. For clarity, we remove the subscripts ($\mu\mu;ii$). The diagonal part of the NNGP kernel for a block output $h^l_{\mu i}$ is defined as \begin{align}\label{appeq:remlp_nngpk} \mathcal K^l \equiv \mathbb E_\theta \left[ h^l_{\mu i} h^l_{\mu i} \right] \,, \end{align} which is independent of the patch and channel indices, $\mu$ and $i$, in the infinite width limit. The recursion relation can be obtained by propagating the NNGP kernel through a block. For clarity, we define the NNGP kernel for the intermediate outputs within the blocks. For example, $\mathcal K^{l+1}_a \equiv \mathbb{E}_{\theta} \left[ a^{l+1}_{\mu i} a^{l+1}_{\mu i} \right]$, $\mathcal K^{l+1}_b \equiv \mathbb{E}_{\theta} \left[ b^{l+1}_{\mu i} b^{l+1}_{\mu i} \right]$, etc.
\begin{align}\label{appeq:resmlp_nngpk_rec} &\texttt{AffineNorm1:} & \mathcal K^{l+1}_a &= \mathcal K^l \,, \\ \nonumber &\texttt{linear1:} & \mathcal K^{l+1}_b &= \sigma_w^2 \mathcal K^{l+1}_a + \sigma_b^2 \\ &&&= \sigma_w^2 \mathcal K^l + \sigma_b^2 \,, \\ \nonumber &\texttt{residual1:} & \mathcal K^{l+1}_c &= \mathcal E^2 \mathcal K^{l+1}_b + \mu^2 \mathcal K^l \\ \nonumber &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \\ &&&= \left( \mu^2 + \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \,, \\ \nonumber &\texttt{AffineNorm2:} & \mathcal K^{l+1}_d &= \mathcal K^{l+1}_c \\ &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \,, \\ \nonumber &\texttt{linear2:} & \mathcal K^{l+1}_e &= \sigma_w^2 \mathcal K^{l+1}_d + \sigma_b^2 \\ &&&= \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \,, \\ \nonumber &\texttt{activation:} & \mathcal K^{l+1}_f &= \frac{\mathcal K^{l+1}_e}{4} + \frac{\mathcal K^{l+1}_e}{2\pi} \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\left( \mathcal K^{l+1}_e \right)^2}{\pi \left( 1 + \mathcal K^{l+1}_e \right) \sqrt{1 + 2\mathcal K^{l+1}_e}} \\ \nonumber &&&\equiv \mathcal G \left[ \mathcal K^{l+1}_e \right] \\ &&&= \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \\ \nonumber &\texttt{linear3:} & \mathcal K^{l+1}_g &= \sigma_w^2 \mathcal K^{l+1}_f + \sigma_b^2 \\ &&&= \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \,, \\ \nonumber &\texttt{residual2:} & \mathcal K^{l+1} &= \mathcal E^2 \mathcal K^{l+1}_g + \mu^2 \mathcal K^{l+1}_c \\ \nonumber &&&= \mu^2 \left( \left( \mu^2 + \mathcal E^2
\sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \right) + \\ \nonumber &&& \quad + \mathcal E^2 \, \left\{ \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \right\} \\ \nonumber &&&= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \\ &&&\quad + \mathcal E^2 \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \end{align} where we have defined \begin{equation} \mathcal G [z] = \frac{z}{4} + \frac{z}{2\pi} \arcsin{\left( \frac{z}{1 + z} \right)} + \frac{z^2}{\pi \left( 1 + z \right) \sqrt{1 + 2z}} \,. \end{equation} Thus, we have a recursion relation, representing $\mathcal K^{l+1}$ in terms of $\mathcal K^l$. As a side note, if we replace the \texttt{GELU} activation function with \texttt{ReLU}, the relation simplifies greatly, offering us intuition. Specifically, $\mathcal G[z]$ gets replaced by $z/2$ in this case. This gives us the following recursion relation for ResMLP with \texttt{ReLU}: \begin{align} \nonumber \mathcal K^{l+1} &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^2 \sigma_w^2 \left( \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right) \\ &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 + \frac{1}{2} \mu^2 \mathcal E^2 \sigma_w^4 + \frac{1}{2} \mathcal E^4 \sigma_w^6 \right) \mathcal K^l + \left(1 + \mu^2 + \frac{1}{2} \mathcal E^2 \sigma_w^4 \right) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^2 \sigma_w^2 \sigma_b^2 \,. \end{align} \subsection{Jacobian Recursion Relation} Next, we calculate the APJN for ResMLP, between two consecutive blocks.
For clarity, we first derive the expression for the partial derivative of ${l+1}^{th}$ block output $h^{l+1}_{\mu i}$ with respect to $l^{th}$ block output $h^l_{\nu j}$. \begin{align}\label{appeq:resmlp_derivative} \nonumber \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} &= \mathcal E^{l+1,h}_i \frac{\partial g^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \frac{\partial f^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) \frac{\partial e^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial d^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial c^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m \frac{\partial b^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \frac{\partial a^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \mathcal E^{l+1,c}_i \frac{\partial b^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \frac{\partial a^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d \sum_{\lambda=1}^{N^2} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m W^{l+1,b}_{\mu\lambda} 
\frac{\partial a^{l+1}_{\lambda m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \delta_{\mu\nu} \delta_{mj} + \\ \nonumber &\quad + \mu_2 \mathcal E^{l+1,c}_i \sum_{\lambda=1}^{N^2} W^{l+1,b}_{\mu\lambda} \frac{\partial a^{l+1}_{\lambda i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^{l+1,h}_i \mathcal E^{l+1,c}_j \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \\ \nonumber &\quad + \mu_1 \mathcal E^{l+1,h}_i \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \mu_2 \mathcal E^{l+1,c}_i \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^2 \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \mu \mathcal E \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \\ &\quad + \mu\mathcal E \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu^2 \delta_{\mu\nu} \delta_{ij} \,, \end{align} where in the last step, we have used the initial values of the parameters : $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$ and $\mu_1 = \mu_2 = \mu$. Next, we calculate the APJN using \eqref{appeq:resmlp_derivative}. We will perform the calculation in the limit of large $N^2$ and $d$; dropping all the corrections of order $\frac{1}{N^2}$ and $\frac{1}{d}$. 
\begin{align}\label{appeq:resmlp_apjn} \nonumber \mathcal J^{l,l+1} &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \mathcal E^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} \delta_{\mu\nu} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{ij} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \mu^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{\mu\nu} \delta_{ij} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \frac{1}{4} \mathcal E^4 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \frac{1}{4} \mu^2\mathcal E^2 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \delta_{\mu\nu} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} + \right. \\ \nonumber &\qquad\qquad\quad + \left. \mu^2\mathcal E^2 \sigma_w^2 N^2 d + \mu^4 N^2 d \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \frac{1}{4} \mathcal E^4 \sigma_w^6 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \frac{1}{4} \mu^2\mathcal E^2 \sigma_w^4 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \right. \\ \nonumber &\qquad\qquad\quad \left.
+ (\mathcal E^2 \sigma_w^2 + \mu^2) \mu^2 N^2 d \right] \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \mathbb E_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \right) \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H_e [\mathcal K^{l+1}_e] \right) \\ &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H [\mathcal K^l] \right) \,, \end{align} where we have defined \begin{align} \nonumber \mathcal H_e [\mathcal K^{l+1}_e] &\equiv \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \\ &= \frac{1}{4} + \frac{1}{2\pi} \left( \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\mathcal K^{l+1}_e (3 + 5 \mathcal K^{l+1}_e)}{(1 + \mathcal K^{l+1}_e) (1 + 2\mathcal K^{l+1}_e)^{3/2}} \right) \,. \end{align} We also write $\mathcal K^{l+1}_e$ in terms of $\mathcal K^l$ and define \begin{align} \nonumber \mathcal H [\mathcal K^l] &= \mathcal H_e [\mathcal K^{l+1}_e] \\ &= \mathcal H_e \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 (\sigma_w^2 \mathcal K^l + \sigma_b^2) \right) + \sigma_b^2 \right] \,. \end{align} It is clear from \eqref{appeq:resmlp_apjn} that for $\mu=1$ we have $\mathcal J^{l,l+1} > 1$, placing the network off criticality. However, $\mathcal J^{l,l+1}$ can be tuned arbitrarily close to criticality by taking $\mathcal E$ to be small at $t=0$. This explains the necessity of \texttt{LayerScale} with a small initial value in the ResMLP architecture. We note that the results in \eqref{appeq:resmlp_apjn} simplify greatly on using \texttt{ReLU} instead of \texttt{GELU} as $\phi$; we mention them here to provide intuition. In this case, $\mathcal H_e [\mathcal K^{l+1}_e] = \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] = \frac{1}{2}$.
This gives us the simple result \begin{equation} \mathcal J^{l,l+1} = (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \frac{1}{2}\mathcal E^2 \sigma_w^4 \right) \end{equation} for the \texttt{ReLU} activation function. \section{Introduction} Initializing Deep Neural Networks (DNNs) correctly is crucial for trainability and convergence. In recent years, there has been remarkable progress in tackling the problem of exploding and vanishing gradients. One line of work utilizes the convergence of DNNs to Gaussian Processes in the limit of infinite width \citep{neal1996priors, lee2018deep, matthews2018gaussian, novak2018bayesian, garriga2018deep, hron2020infinite, yang2019tensor}. The infinite width analysis is then used to determine the critical initialization for the hyperparameters of the network \citep{he2015delving, poole2016exponential, schoenholz2016deep, lee2018deep, roberts2021principles, doshi2021critical}. It has further been shown that dynamical isometry can improve the performance of DNNs \citep{pennington2018spectral, xiao2018dynamical}. Exploding and vanishing gradients can also be regulated with special activation functions such as SELU \citep{klambauer2017self-normalizing} and GPN \citep{lu2020bidirectionally}. Deep Kernel Shaping \citep{martens2021deep, zhang2022deep} improves the trainability of deep networks by systematically controlling $Q$ and $C$ maps. Normalization layers such as LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch} and GroupNorm \citep{wu2018group} facilitate the training of DNNs by significantly enhancing the critical regime \citep{doshi2021critical}. There have also been algorithmic attempts at regulating the forward pass, such as LSUV \citep{mishkin2015lsuv}. Another line of work sets networks with residual connections to criticality by suppressing the contribution from the residual branches at initialization.
In Highway Networks \citep{srivastava2015training}, this is achieved by initializing the network to have a small ``transform gate''. \citet{goyal2017accurate} achieve this in ResNets by initializing the scaling coefficient of the residual block's last BatchNorm at 0. In Fixup \citep{zhang2019fixup} and T-Fixup \citep{huang2020improving}, careful weight-initialization schemes ensure the suppression of residual branches in deep networks. Techniques such as SkipInit \citep{de2020batch}, LayerScale \citep{touvron2021cait} and ReZero \citep{bachlechner2021rezero} multiply the residual branches by a trainable parameter, initialized to a small value or to 0. Despite this progress, the aforementioned techniques are limited by either the availability of analytical solutions, the specific use of normalization layers, or the use of residual connections. One needs to manually decide on the techniques to be employed on a case-by-case basis. In this work, we propose a simple algorithm, which we term $\texttt{AutoInit}$, that automatically initializes a DNN to criticality. Notably, the algorithm can be applied to any feedforward DNN, irrespective of architectural details, the large width assumption, or the existence of an analytic treatment. We expect that $\texttt{AutoInit}$ will be an essential tool in architecture search tasks, because it will always ensure that a never-before-seen architecture is initialized well. \subsection{Criticality in Deep Neural Networks} In the following, we employ the definition of criticality based on the \emph{Partial Jacobian} \citep{doshi2021critical}. Consider a DNN made up of a sequence of blocks. Each block consists of Fully Connected layers, Lipschitz activation functions, Convolutional layers, Residual Connections, LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch}, AffineNorm \citep{touvron2021resmlp}, LayerScale \citep{touvron2021cait}, or any combination thereof.
We consider a batched input to the network, where each input tensor $x \in \mathbb{R}^{n^0_1} \otimes \mathbb{R}^{n^0_2} \otimes \cdots$ is taken from the batch $B$ of size $\lvert B \rvert$. The output tensor of the $l^{\rm th}$ block is denoted by $h^l (x) \in \mathbb{R}^{n^l_1} \otimes \mathbb{R}^{n^l_2} \otimes \cdots$. $h^{l+1}(x)$ depends on $h^l(x)$ through a layer-dependent function $\mathcal{F}^{l}$, denoting the operations of the aforementioned layers. This function, in turn, depends on the parameters of the various layers within the block, denoted collectively by $\theta^{l+1}$. The explicit layer dependence of the function $\mathcal{F}^{l}$ highlights that we do not require the network to have self-repeating layers (blocks). We note that $h^{l+1} (x)$ can, in general, depend on $h^{l} (x')$ for all $x'$ in the batch $B$; this will indeed be the case when we employ BatchNorm. The recurrence relation for such a network can be written as \begin{align}\label{eq:DNNrecursion} h^{l+1} (x) = \mathcal{F}^{l+1}_{\theta^{l+1}} \left( \{h^l(x') \;|\; \forall x' \in B \} \right) \,, \end{align} where we have suppressed all the indices for clarity. Each parameter matrix $\theta^{l+1}$ is sampled from a zero-mean distribution. We assume that some $(2+\delta)$ moments of $|\theta^{l+1}|$ are finite, such that the Central Limit Theorem holds. The variances of $\theta^{l+1}$ can then be viewed as hyperparameters and will be denoted by $\sigma^{l+1}_{\theta}$ for each $\theta^{l+1}$. We define the $\texttt{Flatten}$ operation, which reshapes the output $h^l(x)$ by merging all its dimensions: \begin{align} \bar h^l(x) = \texttt{Flatten}\left( h^l(x) \right) \in \mathbb{R}^{N^l} \,, \end{align} where $N^l \equiv n^l_1 n^l_2 \cdots$.
\begin{definition}[Average Partial Jacobian Norm (APJN)] \label{def:APJN} For a DNN given by \eqref{eq:DNNrecursion}, APJN is defined as \begin{align} \mathcal J^{l_0, l} \equiv \mathbb E_{\theta} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l}} \sum_{i=1}^{N_{l_0}} \sum_{x, x' \in B} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \right] \,, \end{align} where $\mathbb E_\theta[\cdot]$ denotes the average over parameter initializations. \end{definition} \begin{remark} For DNNs without BatchNorm and with normalized inputs, the definition of APJN for $|B|>1$ is equivalent to that of the $|B|=1$ case. \end{remark} We use APJN as the empirical diagnostic of criticality. \begin{definition}[Critical Initialization] \label{def:critical} A DNN given by \eqref{eq:DNNrecursion}, consisting of $L+2$ blocks, including the input and output layers, is critically initialized if all block-to-block APJN are equal to $1$, i.e. \begin{align} \mathcal J^{l,l+1} = 1 \,, \quad \forall \quad 1 \leq l \leq L \,. \end{align} \end{definition} Critical initialization as defined by \Cref{def:critical} is essential, as it prevents the gradients from exploding or vanishing at $t=0$.
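As an illustration of \Cref{def:critical}, the following pure-Python sketch (our own; the function name and sampling choices are illustrative) Monte-Carlo estimates the single-layer APJN of a ReLU MLP layer with $\sigma_b=0$; at the critical point $\sigma_w^2 = 2$ the estimate concentrates around $1$:

```python
import math
import random

def apjn_relu_layer(n=200, sigma_w=math.sqrt(2.0), n_samples=10, seed=0):
    """Monte-Carlo estimate of the single-layer APJN J^{l,l+1} for a ReLU MLP
    layer with sigma_b = 0.

    Since d h^{l+1}_j / d h^l_i = W_ji * phi'(h^l_i), the per-sample estimate
    is (1/n) sum_{i,j} W_ji^2 * phi'(h_i)^2, whose large-width mean is
    sigma_w^2 * E[phi'^2] = sigma_w^2 / 2.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        h = [rng.gauss(0.0, 1.0) for _ in range(n)]       # preactivations, K = 1
        dphi2 = [1.0 if x > 0.0 else 0.0 for x in h]      # ReLU: phi'(h)^2
        for i in range(n):
            # sample sum_j W_ji^2 with W_ji ~ N(0, sigma_w^2 / n)
            col = sum(rng.gauss(0.0, sigma_w / math.sqrt(n)) ** 2
                      for _ in range(n))
            acc += col * dphi2[i] / n
    return acc / n_samples
```

Increasing $\sigma_w$ beyond the critical value pushes the estimate above $1$ (the chaotic phase), while decreasing it gives values below $1$ (the ordered phase).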
One can readily see this by calculating the gradient for any flattened parameter matrix $\theta$ at initialization: \begin{align}\label{eq:grad} \frac{1}{|\theta^l|}\|\nabla_{\theta^l} \mathcal L \|^2_2 =& \frac{1}{|\theta^l|} \left\|\sum_{\mathrm{all}} \frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \frac{\partial \bar{h}^{L+1}_i}{\partial \bar{h}^L_j}\cdots \frac{\partial \bar{h}^{l+1}_k}{\partial \bar{h}^l_m} \frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_2 \nonumber \\ \sim &\, O \left( \frac{1}{|\theta^l|} \left\|\frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \right\|^2_2 \cdot \mathcal J^{L, L+1} \cdots \mathcal J^{l, l+1} \cdot \left \|\frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_F \right)\,, \end{align} where $\| \cdot \|_F$ denotes the Frobenius norm. In the second line, we utilized the factorization property of APJN \begin{align}\label{eq:factor} \mathcal J^{l_0,l} = \prod_{l'=l_0}^{l-1} \mathcal J^{l', l'+1} \,, \end{align} which holds in the infinite width limit given there is no weight sharing across the blocks. One may further require $\left\| \partial \mathcal L / \partial \bar{h}^{L+1}_i \right\|_2 \sim O(1)$. However, in practice we observe that this requirement is less important once the condition in \Cref{def:critical} is met. \subsection{Automatic Critical Initialization} For general architectures, analytically calculating APJN is often difficult or even impossible. This poses a challenge in determining the correct parameter initializations to ensure criticality; especially in networks without self-similar layers. Moreover, finite network width is known to have nontrivial corrections to the criticality condition \citep{roberts2021principles}. This calls for an algorithmic method to find critical initialization. 
To that end, we propose \Cref{alg:j_general}, which we call $\texttt{AutoInit}$, for critically initializing deep neural networks \emph{automatically}, without the need for analytic solutions of the signal propagation or for a mean-field approximation. The algorithm works for general feedforward DNNs, as defined in \eqref{eq:DNNrecursion}. Moreover, it naturally takes into account all finite width corrections to criticality, because it works directly with an instance of a network. We do tacitly assume the existence of a critical initialization. If the network cannot be initialized critically, the algorithm will return a network that can still propagate gradients well, because the APJNs will be pushed as close to $1$ as possible. The central idea behind the algorithm is to choose the hyperparameters for all layers such that the condition in \Cref{def:critical} is met. This is achieved by optimizing a few auxiliary scalar parameters $a^l_{\theta}$ of a twin network with parameters $a^l_{\theta} \theta^{l}$, while freezing the parameters $\theta^{l}$. The loss function is minimized when the condition in \Cref{def:critical} is satisfied. \begin{algorithm}[h] \caption{\texttt{AutoInit} (SGD)} \label{alg:j_general} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma^l_\theta;\, a^l_{\theta}(t) \; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$.
\State \textbf{Set} $t=0$ and $\{a_\theta^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l_\theta(t+1) = a^l_\theta(t) - \eta \nabla_{a^l_\theta} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{\sigma^l_\theta = \sigma^l_{\theta} a^l_{\theta}(t) ;\, 1\; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$ \end{algorithmic} \end{algorithm} In practice, for speed and memory reasons we use an unbiased estimator \citep{hoffman2019robust} of APJN in \Cref{alg:j_general}, defined as \begin{align}\label{eq:j_est} \hat {\mathcal{J}}^{l, l+1} \equiv \frac{1}{N_v} \sum_{\mu=1}^{N_v} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l+1}} \sum_{k=1}^{N_{l+1}} \sum_{i=1}^{N_{l}} \sum_{x, x' \in B} \frac{\partial (v_{\mu j} \bar{h}^{l+1}_j(x'))}{\partial \bar{h}^{l}_i(x)} \frac{\partial (v_{\mu k} \bar{h}^{l+1}_k(x'))}{\partial \bar{h}^{l}_i(x)} \right] \,, \end{align} where each $v_{\mu i}$ is a unit Gaussian random vector for a given $\mu$. The Jacobian-Vector Product (JVP) structure in the estimator speeds up the computation by a factor of $N_{l+1} / N_v$ and consumes less memory at the cost of introducing some noise. In \Cref{sec:auto} we analyze $\texttt{AutoInit}$ for multi-layer perceptron (MLP) networks. Then we discuss the problem of exploding and vanishing gradients of the tuning itself; and derive bounds on the learning rate for ReLU or linear MLPs. In \Cref{sec:bn} we extend the discussion to BatchNorm and provide a strategy for using $\texttt{AutoInit}$ for a general network architecture. In \Cref{sec:exp} we provide experimental results for more complex architectures: VGG19\_BN and ResMLP-S12. \section{AutoInit for MLP networks} \label{sec:auto} MLPs are described by the following recurrence relation for preactivations \begin{align}\label{eq:mlp_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} W^{l+1}_{ij} \phi(h^l_j(x)) + b^{l+1}_i \,. 
\end{align} Here $x$ is an input vector, and weights $W^{l+1}_{ij} \sim \mathcal N(0, \sigma_w^2/N_l)$ and biases $b^{l+1}_i \sim \mathcal N(0, \sigma_b^2)$ are collectively denoted as $\theta^{l+1}$. We assume $\phi$ is a Lipschitz activation function throughout this paper. For a network with $L$ hidden layers, in the infinite width limit $N_l \rightarrow \infty$, the preactivations \{$h^l_i(x) \,|\, 1 \leq l \leq L, \forall i \in N_l\}$ are Gaussian Processes (GPs). The distribution of preactivations is then determined by the Neural Network Gaussian Process (NNGP) kernel \begin{align} \mathcal K^{l}(x, x') = \mathbb E_{\theta} \left[ h^l_i(x) h^l_i(x') \right] \,, \end{align} whose value is independent of the neuron index $i$. The NNGP kernel can be calculated recursively via \begin{align} \mathcal K^{l+1}(x, x') = \sigma_w^2 \mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} \left[\phi\left(h_i^l(x)\right) \phi\left(h_i^l(x')\right) \right] + \sigma_b^2 \,. \end{align} Note that we have replaced the average over parameter initializations $\mathbb{E}_\theta[\cdot]$ with an average over preactivation-distributions $\mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} [\cdot]$; the two are interchangeable in the infinite width limit \citep{lee2018deep, roberts2021principles}. Critical initialization of such a network is defined according to \Cref{def:critical}. In practice, we define a twin network with extra parameters; for MLP networks the twin preactivations can be written as \begin{align}\label{eq:twin_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} a_W^{l+1} W^{l+1}_{ij} \phi(h^l_j(x)) + a_b^{l+1} b^{l+1}_i \,, \end{align} where $a_{\theta}^{l+1} \equiv \{a^{l+1}_W, a^{l+1}_b\}$ are auxiliary parameters that will be tuned by \Cref{alg:j_train}.
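Since, for a ReLU twin network with $\sigma_b = 0$, the infinite width APJN is simply $\mathcal J^{l,l+1} = (a_W^{l+1}\sigma_w)^2/2$, the tuning loop can be illustrated directly on this analytic expression. The sketch below is our own toy illustration (not the actual implementation), using the Jacobian Log Loss introduced below; it converges to the critical point $a_W^l \sigma_w = \sqrt{2}$:

```python
import math

def autoinit_relu_mlp(sigma_w=1.0, depth=10, eta=0.05, steps=500, eps=1e-6):
    """Toy AutoInit tuning loop for a ReLU MLP with sigma_b = 0.

    Uses the analytic infinite-width APJN J^{l,l+1} = (a_W^l * sigma_w)^2 / 2
    in place of a sampled network, and minimizes the Jacobian Log Loss
    L_log = 1/2 sum_l [log J^{l,l+1}]^2 by gradient descent on the a_W^l.
    """
    a = [1.0] * depth
    for _ in range(steps):
        J = [(a_l * sigma_w) ** 2 / 2.0 for a_l in a]
        loss = 0.5 * sum(math.log(j) ** 2 for j in J)
        if loss < eps:
            break
        # dL/da_l = 2 log(J_l) / a_l, since d log(J_l) / d a_l = 2 / a_l
        a = [a_l - eta * 2.0 * math.log(j) / a_l for a_l, j in zip(a, J)]
    return a
```

Because the loss factorizes over layers, each $a_W^l$ is tuned independently and the dynamics do not degrade with depth.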
\begin{algorithm}[h] \caption{\texttt{AutoInit} for MLP (SGD)} \label{alg:j_train} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$. \State \textbf{Set} $t=0$, $\{a_W^l(0)=1\}$ and $\{a_b^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l(t+1) = a^l(t) - \eta \nabla_{a^l} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{a^l_W(t) \sigma_w, \, a^l_b(t) \sigma_b, 1,\, 1 \;| \; \forall 1 \leq l \leq L \})$ \end{algorithmic} \end{algorithm} In \Cref{alg:j_train}, one may also return $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, while freezing all $a^l_{\theta}$. However, this leads to different training dynamics when the weights and biases are updated. Alternatively, one can leave the auxiliary parameters trainable, but in practice this leads to unstable training dynamics. \paragraph{Loss function} The choice of the loss function $\mathcal L$ is important. We will use the following loss \begin{align}\label{eq_loss_sq_J} \mathcal L_{\log} = \frac{1}{2} \sum_{l=1}^L \left[\log(\mathcal J^{l, l+1})\right]^2 \,, \end{align} which we will refer to as the Jacobian Log Loss (JLL). This definition is inspired by the factorization property \eqref{eq:factor}, which allows one to optimize each of the partial Jacobian norms independently. Thus the tuning dynamics is less sensitive to the depth. One could naively use $\left[\log(\mathcal J^{0, L+1})\right]^2$ as a loss function; however, optimization would then encounter the same level of exploding or vanishing gradients as \eqref{eq:grad}. One may worry that the factorization property is violated for $t>0$, due to the possible correlation across all $\{a^l(t)\}$.
It turns out that the correlation introduced by \Cref{alg:j_train} does not change the fact that all weights and biases are i.i.d., ensuring that \eqref{eq:factor} holds for any $t \geq 0$. Another choice for the loss is the Jacobian Square Loss (JSL), defined as $\mathcal L_2 = \frac{1}{2} \sum_{l=1}^L \left(\mathcal J^{l, l+1} - 1 \right)^2$. However, JSL has poor convergence properties when $\mathcal J^{l, l+1} \gg 1$. One may further restrict the forward pass by adding terms that penalize the difference between $\mathcal K^l(x, x)$ and $\mathcal K^{l+1}(x,x)$. For brevity, we leave these discussions for the appendix. \paragraph{Exploding and Vanishing Gradients} While the objective of \Cref{alg:j_train} is to solve the exploding and vanishing gradients problem, the algorithm itself suffers from the same problem, although not as severely. Consider optimizing MLP networks using $\mathcal L_{\log}$, where the forward pass is defined by \eqref{eq:twin_preact}. Assuming the input data $x$ is normalized, the SGD update (omitting $x$) of $a^{l}_{\theta}$ at time $t$ can be written as \begin{align}\label{eq:a_update} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) \,. \end{align} For a deep neural network, i.e.\ when $|L - l| \gg 1$ holds for some $l$, the depth-dependent term of \eqref{eq:a_update} can lead to exploding or vanishing gradients. We will show next that this is not the familiar exploding or vanishing gradients problem. First, we explain the vanishing gradients problem for $a_W^{l+1}$. We rewrite the right-hand side of \eqref{eq:a_update} as \begin{align}\label{eq:iso} - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{W}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) = - \eta \frac{2}{a_W^{l+1}(t)} \log \mathcal J^{l, l+1}(t) + (l' > l\; \mathrm{terms}) \,.
\end{align} Vanishing gradients can only occur if the isolated term is exactly canceled by the other terms for all $t \geq 0$, which does not happen in practice. To discuss the exploding gradients problem for $a_W^{l+1}$, we consider the update of $a_W^{l+1}$ (omitting $t$). The depth-dependent terms can be written as \begin{align}\label{eq:tauto_aw} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} \log \mathcal J^{l', l'+1} = \sum_{l'>l}^L \left(\frac{4\chi^{l'}_{\Delta}}{a_W^{l+1} \mathcal J^{l', l'+1}} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x, x)\right) \log \mathcal J^{l', l'+1} \,, \end{align} where we have defined two new quantities $\chi^{l'}_{\Delta} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l'}_i) \phi''(h^{l'}_i) + \phi'(h^{l'}_i) \phi'''(h^{l'}_i) \right]$ and $\chi^{l'}_{\mathcal K} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi'(h^{l'}_i) \phi'(h^{l'}_i) + \phi(h^{l'}_i) \phi''(h^{l'}_i) \right]$. We note that the exploding gradients problem for $a_W^{l+1}$ in $\texttt{AutoInit}$ is not severe for commonly used activation functions: \begin{itemize} \item $\tanh$-like bounded odd activation functions: $\chi_{\mathcal K}^{l'} \leq 1$ holds and $\mathcal K^l(x,x)$ saturates to a constant for large $l$. Thus the divergence problem of \eqref{eq:tauto_aw} is less severe than that of \eqref{eq:grad} when $\mathcal J^{l', l'+1} > 1$. \item $\mathrm{ReLU}$: $\chi^{l'}_{\Delta}=0$. \item $\mathrm{GELU}$: The sum in \eqref{eq:tauto_aw} scales like $O(L \prod_{\ell=1}^L \chi^{\ell}_{\mathcal K})$ for large $L$, which may lead to worse exploding gradients than \eqref{eq:grad} for a reasonable $L$. Fortunately, in the cases with $\chi^{l'}_{\mathcal K} > 1$, $\chi_{\Delta}^{l'}$ is close to zero. As a result, we find numerically that the contribution from \eqref{eq:tauto_aw} is very small. \end{itemize} For $a_b^{l+1}$, there is no isolated term like the one in \eqref{eq:iso}.
Then the update of $a_b^{l+1}$ is proportional to \begin{align}\label{eq:tauto_ab} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_b^{l+1}} \log \mathcal J^{l, l+1} = \sum_{l'>l}^L \left(\frac{4 a_b^{l+1}}{\mathcal J^{l', l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \sigma_b^2 \right) \log \mathcal J^{l, l+1} \,. \end{align} Comparing \eqref{eq:tauto_ab} and \eqref{eq:tauto_aw}, it is clear that the exploding gradients problem for $a_b^{l+1}$ is the same as that for $a_W^{l+1}$, hence not severe for common activation functions. The vanishing gradients problem is seemingly more serious, especially for $\sigma_b=0$. However, the vanishing gradients for $a_b^{l+1}$ do not prevent \texttt{AutoInit} from reaching a critical initialization: \begin{itemize} \item For $\sigma_b > 0$, as $a_W^{l+1}$ gets updated, the update in \eqref{eq:tauto_ab} gets larger with time $t$. \item For $\sigma_b=0$ the phase boundary is at $\sigma_w \geq 0$, which can be reached by $a_W^{l+1}$ updates. \end{itemize} \subsection{Linear and ReLU networks} In general, it is hard to predict a good learning rate $\eta$ for \Cref{alg:j_train}. However, for ReLU (and linear) networks, we can estimate the optimal learning rates. We will discuss ReLU in detail. Since $a_b^l$ cannot receive updates in this case, we only discuss updates for $a_W^l$. The different APJNs $\{\mathcal J^{l,l+1}\}$ of a ReLU network evolve independently in time according to \begin{align}\label{eq:relu_jupdate} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \frac{\sigma_w^2}{\sqrt{\mathcal J^{l,l+1}(t)}} \log \mathcal J^{l,l+1}(t) \,. \end{align} Then one can show that, for any time $t$, choosing \begin{align}\label{eq:eta_t} \eta_t < & \min_{1 \leq l \leq L} \left\{\frac{2\left( \sqrt{\mathcal J^{l,l+1}(t)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(t)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(t) } \right\} \end{align} guarantees convergence.
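The one-dimensional dynamics \eqref{eq:relu_jupdate} and the bound \eqref{eq:eta_t} are easy to check numerically; the sketch below (with the arbitrary choice $\sigma_w = 2$, so that $\mathcal J^{l,l+1}(0) = \sigma_w^2/2 = 2$) iterates the closed-form update with a learning rate below the bound:

```python
import numpy as np

def relu_apjn_step(j, sigma_w, eta):
    """One step of the closed-form ReLU update:
    sqrt(J(t+1)) = sqrt(J(t)) - eta * sigma_w^2 / sqrt(J(t)) * log J(t)."""
    s = np.sqrt(j) - eta * sigma_w**2 / np.sqrt(j) * np.log(j)
    return s**2

def eta_bound(j, sigma_w):
    """Learning-rate bound 2 (sqrt(J) - 1) sqrt(J) / (sigma_w^2 log J)
    guaranteeing convergence at the current value of J."""
    return 2.0 * (np.sqrt(j) - 1.0) * np.sqrt(j) / (sigma_w**2 * np.log(j))

sigma_w = 2.0
j = sigma_w**2 / 2.0              # J(0) for ReLU with a_W = 1
eta = 0.4 * eta_bound(j, sigma_w)  # safely below the bound
for _ in range(200):               # iterate towards the fixed point J = 1
    j = relu_apjn_step(j, sigma_w, eta)
```

Since the bound is positive for both $\mathcal J > 1$ and $\mathcal J < 1$, the same check covers the ordered and chaotic starting points.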
In this case, the value of $\mathcal J^{l, l+1}(t)$ can be used to create a scheduler for \Cref{alg:j_train}. Moreover, one can solve \eqref{eq:relu_jupdate} and find a learning rate that allows \Cref{alg:j_train} to converge in one step: \begin{align}\label{eq:1step_lr} \eta^l_{\mathrm{1-step}} = \frac{\left( \sqrt{\mathcal J^{l,l+1}(0)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(0)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(0) } \,. \end{align} Next we study the dynamics of the optimization while using a single learning rate $\eta$. We estimate the maximum allowed learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align}\label{eq:eta_0_jl} \eta_0 = \frac{\left(a_W^{l+1}\sigma_w - \sqrt{2}\right) a_W^{l+1}}{\sigma_w \left(\log \left[(a_W^{l+1}\sigma_w)^2 \right] - \log 2\right)} \,. \end{align} In \Cref{fig:relu_jac}, we check our results against \Cref{alg:j_train}. All $\mathcal J^{l,l+1}(t)$ values plotted in the figure agree with the values obtained by iterating \eqref{eq:relu_jupdate} for $t$ steps. The gap between $\eta_0$ and the trainable regions can be explained by analyzing \eqref{eq:eta_t}. Assume that at time $t$ we have $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 < \eta < \eta_t$, there is still a chance that \Cref{alg:j_train} converges. For $\mathcal J^{l,l+1} > 1$, if $\eta_0 > \eta > \eta_t$ holds, \Cref{alg:j_train} may diverge at a later time. A similar analysis for JSL is performed in the appendix. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$.
From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JLL; 2) a scan of the $\eta$--$\sigma_w$ plane using $\sigma_b=0$ networks, where $\mathcal J^{l, l+1}(t)$ is tuned using \Cref{alg:j_train} with JLL for $980$ steps; only points with $0.8< \mathcal J^{l,l+1} <1.25$ are plotted. All networks are trained on the normalized CIFAR-10 dataset with $|B|=256$.} \label{fig:relu_jac} \end{figure} \section{BatchNorm, Residual Connections and General Strategy} \label{sec:bn} \paragraph{BatchNorm and Residual Connections} For MLP networks, the APJN value is only a function of $t$ and is independent of $|B|$. This property no longer holds in the presence of BatchNorm (BN). We consider a Pre-BN MLP network with residual connections. The preactivations are given by \begin{align}\label{eq:bnmlp_preact} h^{l+1}_{x; i} = \sum_{j=1}^N a_W^{l+1} W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + a_b^{l+1} b^{l+1}_i + \mu h^l_{x;i} \,, \end{align} where we label different inputs with indices $x,x^\prime, \cdots$, and $\mu$ quantifies the strength of the residual connections (a common choice is $\mu=1$). At initialization, the normalized preactivations are defined as \begin{align} \tilde h^l_{x; i} = \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \,. \end{align} The change in batch statistics leads to non-trivial $\mathcal J^{l,l+1}$ values, which can be approximated using \Cref{conj:bn}.
\begin{conjecture}[APJN with BatchNorm]\label{conj:bn} In the infinite width limit and at large depth $l$, the APJN of Pre-BN MLPs \eqref{eq:bnmlp_preact} converges to a deterministic value determined by the NNGP kernel as $|B| \rightarrow \infty$: \begin{align} \mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} & (a_W^{l+1} \sigma_w)^2 \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, 1)} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where the actual values of the indices $x$ and $x'$ are not important, as long as $x \neq x'$. \end{conjecture} \begin{remark} Under the conditions of \Cref{conj:bn}, $\mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} 1 + O(l^{-1})$ if $\mu=1$. The finite $|B|$ correction is further suppressed by $l^{-1}$. \end{remark} In \Cref{fig:bn_relu} we show numerical results supporting our conjecture, where we empirically find that the finite $|B|$ corrections are negligible for $|B| \geq 128$. Analytical details are in the appendix. Similar results without residual connections have been obtained for finite $|B|$ by \citet{yang2018mean}. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{Figures/bn_relu_mu1.pdf} \caption{$\mathcal J^{l,l+1}(0)$ phase diagrams for $|B|=256$ in the $\sigma_b$--$\sigma_w$ plane ($\mu=0$ and $\mu=1$), and a $\mathcal J^{l, l+1}$ vs.\ $|B|$ plot. From left to right: 1) Pre-BN MLP networks with $\mu=0$ are everywhere chaotic; 2) Pre-BN MLP networks with $\mu=1$ are critical everywhere; 3) for $|B|\geq 128$, the finite $|B|$ corrections are negligible. In all plots we use $L=30$, $N_l=500$, $a_W^l(0)=1$, averaged over 50 initializations.} \label{fig:bn_relu} \end{figure} \paragraph{General Strategy} For general network architectures, we propose the following strategy for using \Cref{alg:j_general} with normalized inputs: \begin{itemize} \item If the network does not have BatchNorm, use the algorithm with $|B|=1$.
\item If the network has BatchNorm, and the user has enough resources, use the algorithm with the $|B|$ that will be used for training. When $|B|$ is large, one should make $\mathcal J^{l,l+1}$ vs.\ $|B|$ plots like the one in \Cref{fig:bn_relu}, then choose a $|B|$ that requires less computation. \item When resources are limited, one can use a non-overlapping set $\{\mathcal{J}^{l, l+k}\}$ with $k>1$ to cover the whole network. \end{itemize} The computational cost of the algorithm depends on $k$ and $|B|$. \section{Experiments} \label{sec:exp} In this section, we use a modified version of $\mathcal L_{\log}$, where we further penalize the ratio between NNGP kernels from adjacent layers. The Jacobian-Kernel Loss (JKL) is defined as: \begin{align}\label{eq:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=0}^{L+1} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=0}^{L+1} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,, \end{align} where we introduced an extra hyperparameter $\lambda$ to control the penalization strength. We also included the input and output layers. Both APJNs and NNGP kernels are calculated using flattened preactivations. \subsection{ResMLP} ResMLP \citep{touvron2021resmlp} is an architecture for image recognition built entirely on MLPs. It offers competitive performance in both image recognition and machine translation tasks. The architecture consists of cross-channel and cross-patch MLP layers, combined with residual connections. The presence of residual connections and the absence of normalization techniques such as LayerNorm \citep{ba2016layer} or BatchNorm \citep{ioffe2015batch} leave ResMLP initialized off criticality. To mitigate this issue, the ResMLP architecture utilizes LayerScale \citep{touvron2021cait}, which multiplies the output of the residual branch by a trainable matrix initialized with small diagonal entries.
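The LayerScale mechanism can be sketched as a residual update $y = x + \mathrm{diag}(\lambda)\, f(x)$ with small initial $\lambda$ (a minimal illustration of the idea, not the actual timm implementation; the $\tanh$ branch and the value $10^{-4}$ are placeholders):

```python
import numpy as np

def layerscale_block(x, branch, lam):
    """Residual block with per-channel LayerScale weights lam:
    y = x + lam * branch(x)."""
    return x + lam * branch(x)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
lam = np.full(8, 1e-4)   # small diagonal entries at initialization
y = layerscale_block(x, np.tanh, lam)
# With small lam, the block (and hence the composed network function)
# stays close to the identity map at initialization.
```

Distributing this "smallness" differently across the parameters of the residual branch is exactly the freedom that \texttt{AutoInit} exploits below.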
\paragraph{CIFAR-10} Here we obtain a critical initialization for ResMLP-S12 using \Cref{alg:j_train} with the loss \eqref{eq:jkle}, with $a^l_{\theta}$ introduced for all layers. In our initialization, the ``smallness'' is distributed across all parameters of the residual block, including those of the linear, affine normalization and LayerScale layers. As we show in \Cref{fig:resmlp}, Kaiming initialization is far from criticality. $\texttt{AutoInit}$ finds an initialization with almost identical $\{\mathcal J^{l, l+1} \}$ and similar $\{\mathcal K^{l}(x, x)\}$ compared to the prescription proposed by \citet{touvron2021resmlp}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/resmlp_comparison.pdf} \caption{From left to right: 1) and 2) Comparing $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ for ResMLP-S12 for the Kaiming, original and $\texttt{AutoInit}$ initializations. The depth $l$ is equal to the number of residual connections. The network function in the $\texttt{AutoInit}$ case is very close to the identity at initialization. 3) Training and validation accuracy. Both the original and \texttt{AutoInit} models are trained on the CIFAR-10 dataset for 600 epochs using the \texttt{LAMB} optimizer \citep{You2020Large} with $|B|=256$. The learning rate is decreased by a factor of 0.1 at 450 and 550 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:resmlp} \end{figure} \paragraph{ImageNet \citep{liILSVRC15}} We report $74.0\%$ top-1 accuracy for ResMLP-S12 initialized using \texttt{AutoInit}, whereas the top-1 accuracy reported in \citep{touvron2021resmlp} for the same architecture is $76.6\%$. The model has $15$ million parameters. We used a setup similar to the one in the original paper, which is based on the timm library \citep{rw2019timm} under the Apache-2.0 license \citep{apachev2}.
However, we made the following modifications in our training: 1) we use learning rate $\eta=0.001$ and $|B|=1024$; 2) we use mixed precision; 3) we do not use \texttt{ExponentialMovingAverage}. The training was performed on two NVIDIA RTX 3090 GPUs and took around $3.5$ days to converge (400 epochs). The auto-initialized model is obtained by tuning the Kaiming initialization using \Cref{alg:j_train} with $\mathcal L_{\mathcal J \mathcal K\log}(\lambda=0.5)$, $\eta=0.03$ and $|B|=32$ for 500 steps. \subsection{VGG} VGG \citep{simonyan2014very} is a formerly state-of-the-art architecture that was notoriously difficult to train before Kaiming initialization was invented. The BatchNorm variant $\mathrm{VGG19\_BN}$ further improves the training speed and performance compared to the original version. The PyTorch version of VGG \citep{NEURIPS2019_9015} is initialized with $\mathrm{fan\_out}$ Kaiming initialization \citep{he2015delving}. In \Cref{fig:bn_relu} we show that BatchNorm makes Kaiming-initialized ReLU networks chaotic. We obtain a close to critical initialization using \Cref{alg:j_train} for $\mathrm{VGG19\_BN}$, where we introduce the auxiliary parameters $a^l_{\theta}$ for all BatchNorm layers. The depth in $\mathcal J^{l, l+1}$ is measured in terms of composite (Conv2d-BatchNorm-ReLU) blocks or MaxPool2d layers. We compare $\mathcal J^{l, l+1}$, $\mathcal K^l(x,x)$ and accuracies on the CIFAR-10 dataset \citep{krizhevsky2009learning} between the auto-initialized model and the PyTorch one, see \Cref{fig:vgg}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/vgg_comparison.pdf} \caption{From left to right: 1) and 2) comparing $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ between the PyTorch version of $\mathrm{VGG19\_BN}$ and the \texttt{AutoInit} version, where we ensure $\mathcal J^{l, l+1}=1$ with a high priority ($\lambda=0.05$); 3) training and validation accuracy.
We train both models on the CIFAR-10 dataset using SGD with $\mathrm{momentum}=0.9$ and $|B|=256$ for 300 epochs, where we decrease the learning rate by a factor of 0.1 at 150 and 225 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:vgg} \end{figure} \section{Conclusions} \label{sec:conclu} In this work we have introduced an algorithm, \texttt{AutoInit}, that allows one to initialize an arbitrary feed-forward deep neural network to criticality. \texttt{AutoInit} is an unsupervised learning algorithm that forces the norms of all nearby partial Jacobians to be unity by minimizing the loss function \eqref{eq_loss_sq_J}. A slight variation of \texttt{AutoInit} also tunes the forward pass to ensure that gradients in all layers of a DNN are well-behaved. To gain some intuition about the algorithm, we have solved the training dynamics for MLPs with ReLU activation and discussed the choice of hyperparameters for the tuning procedure that ensures its convergence. Then we have evaluated the performance of \texttt{AutoInit}-initialized networks against initialization schemes used in the literature. We considered two examples: the ResMLP architecture and VGG. The latter was notoriously difficult to train at the time it was introduced. \texttt{AutoInit} finds a good initialization (somewhat close to Kaiming) and ensures trainability. ResMLP uses a variation of the ReZero initialization scheme that puts it close to the dynamical isometry condition. \texttt{AutoInit} finds a good initialization that appears very different from the original; however, the network function is also very close to the identity map at initialization. In both cases the performance of the \texttt{AutoInit}-initialized networks is competitive with the original models. We emphasize that \texttt{AutoInit} removes the necessity for a trial-and-error search for a working initialization.
We expect that \texttt{AutoInit} will be useful in automatic neural architecture search tasks as well as for general exploration of new architectures. \begin{ack} T.H., D.D. and A.G. were supported, in part, by the NSF CAREER Award DMR-2045181 and by the Salomon Award. \end{ack} \bibliographystyle{plainnat}
\section{Introduction} \IEEEPARstart{R}{\lowercase{egression}} is one of the fundamental problems of statistics, system identification, signal processing and machine learning \cite{cucker2007learning}. Given a finite sample of input-output pairs, the typical aim is to estimate the so-called {\em regression function}, which, given an input, encodes the conditional expectation of the corresponding output \cite{ljung2010perspectives}. There are several well-known (parametric and nonparametric) approaches for regression, from linear regression to neural networks and kernel methods, which provide {\em point-estimates} from a given model class \cite{gyorfi2002distribution}. However, sole point-estimates are often not sufficient and {\em region-estimates} are also needed, for example, to support {\em robust} approaches. These region-estimates have several variants, such as {\em confidence regions} for the ``true'' function generating the observations \cite{Algo2018}; for the {\em expected} output at a given input \cite{quinonero2005unifying}; and {\em prediction regions} for the next (noisy) observation \cite{vovk2005algorithmic}. In this paper, we focus on building {\em confidence bands} for the regression function. These bands have natural connections to filtering and smoothing methods. While in a {\em parametric} setting such region-estimates are typically induced by confidence sets in the parameter space, in a {\em nonparametric} setting this indirect approach is not feasible. Therefore, nonparametric confidence bands for the expected outputs should be constructed directly. Regarding prediction intervals for the {\em next observation}, promising distribution-free approaches are {\em interval predictor models} (IPMs) based on the scenario approach \cite{campi2009interval, garatti2019class}, and the {\em conformal prediction} framework also offers several nonparametric methods for regression and classification \cite{vovk2005algorithmic}.
If the data is jointly Gaussian, a powerful methodology is offered by {\em Gaussian process regression} \cite{quinonero2005unifying}, which can provide prediction regions for the outputs, and credible regions for the expected outputs. However, the Gaussianity assumption is sometimes unrealistic, which calls for alternative approaches. In this paper, we suggest a {\em nonparametric} approach using Paley-Wiener kernels to build data-driven {\em simultaneous} confidence bands for an unknown bounded, {\em band-limited} function, based on an independent and identically distributed (i.i.d.) sample of input-output pairs. The method is {\em distribution-free} in the sense that only very mild assumptions are needed about the observation noises, such as that they are distributed {\em symmetrically} about zero. On the other hand, we assume that the {\em distribution of the inputs} is known; particularly, we assume uniformly distributed inputs, as more general cases can often be traced back to this assumption. First, the case without observation noises is studied, then the ideas are extended to the general, noisy case. The results are supported by both {\em non-asymptotic} theoretical guarantees and numerical experiments. \section{Kernels and Band-Limited Functions} Kernel methods have an immense range of applications in machine learning and related fields \cite{pillonetto2014kernel}. In this section, we review some of their fundamental theoretical concepts. \subsection{Reproducing Kernel Hilbert Spaces} A Hilbert space $\mathcal{H}$ of $f: \mathbb{X} \to \mathbb{R}$ functions with an inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ is called a {\em Reproducing Kernel Hilbert Space} (RKHS), if each Dirac functional, which evaluates functions at a point, $\delta_z: f \to f(z)$, is bounded for all $z \in \mathbb{X}$, that is, $\forall z \in \mathbb{X}: \exists \, \kappa_z > 0$ with $|\hspace{0.3mm}\delta_z(f)\hspace{0.3mm}| \leq \kappa_z\, \| f \|_{\mathcal{H}}$ for all $f \in \mathcal{H}$. Then, by building on the Riesz representation theorem, a unique {\em kernel}, $k: \mathbb{X} \times \mathbb{X} \to \mathbb{R}$, encoding the Dirac functionals, can be constructed, satisfying $\langle k(\cdot,z),f \rangle_{\mathcal{H}} = f(z)$ for all $z \in \mathbb{X}$ and $f \in \mathcal{H}$; this formula is called the {\em reproducing property}. As a special case of this property, we also have for all $z,s \in \mathbb{X}$ that $k(z,s)=\langle k(\cdot,z),k(\cdot,s) \rangle_{\mathcal{H}}.$ Therefore, the kernel of an RKHS is a symmetric and positive-definite function. Furthermore, the Moore-Aronszajn theorem asserts that the converse statement holds true, as well: for every symmetric and positive-definite function $k: \mathbb{X} \times \mathbb{X} \to \mathbb{R}$, there exists a unique RKHS for which $k$ is its reproducing kernel \cite{berlinet2004reproducing}. The {\em Gram} or kernel matrix of a given kernel $k$ w.r.t.\ (input) points $x_1, \dots, x_n$ is $K_{i,j} \doteq k(x_i,x_j)$, for all $i, j \in [n] \doteq \{1,\dots, n\}$. Observe that $K \in \mathbb{R}^{n \times n}$ is always positive semi-definite. A kernel is called {\em strictly} positive-definite, if its Gram matrix is positive-definite for all {\em distinct} inputs $\{x_i\}$.
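These Gram-matrix properties are easy to verify numerically; the sketch below builds the Gram matrix of the well-known Gaussian kernel at a few distinct points and checks that it is symmetric with strictly positive eigenvalues (the inputs and the bandwidth are arbitrary choices for illustration):

```python
import numpy as np

def gram_matrix(kernel, xs):
    """Gram matrix K_ij = k(x_i, x_j) for inputs x_1, ..., x_n."""
    return np.array([[kernel(xi, xj) for xj in xs] for xi in xs])

def gauss_kernel(z, s, sigma=0.3):
    """Gaussian kernel, a standard example of a strictly
    positive-definite kernel."""
    return np.exp(-(z - s) ** 2 / (2.0 * sigma ** 2))

xs = np.array([0.1, 0.4, 0.7, 0.9])   # distinct inputs
K = gram_matrix(gauss_kernel, xs)
eigs = np.linalg.eigvalsh(K)
# K is symmetric, and since the kernel is strictly positive-definite
# and the inputs are distinct, all eigenvalues are strictly positive.
```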
{ Archetypal} kernels include the Gaussian kernel $k(z,s)=\exp (-||z-s||^2 / (2 \sigma^2)),$ where $\sigma >0$; the polynomial kernel $k(z,s)=(\langle z,s \rangle +c)^p,$ where $c \geq 0$, $p \in \mathbb{N}$; and the sigmoidal kernel $k(z,s)=\tanh (a \langle z,s \rangle +b),$ for some $a,b \geq 0$. \subsection{Paley-Wiener Spaces} Let $\mathcal{H}$ be the space of $f \in \mathcal{L}^2 (\mathbb{R}, \lambda)$ functions, where $\lambda$ is the Lebesgue measure, such that the support of the {\em Fourier transform} of $f$ is included in $[\hspace{0.3mm}-\eta,\, \eta\hspace{0.5mm}]$, where $\eta > 0$. It is a subspace of $\mathcal{L}^2$ and thus we use the $\mathcal{L}^2$ inner product:\vspace{-0.5mm} $$\langle f, g \rangle_\mathcal{H} \, \doteq \int_{\mathbb{R}} f(x)\,g(x) \: \mathrm{d} \lambda(x).$$ This space of {\em band-limited} functions, called the {\em Paley-Wiener space} \cite{berlinet2004reproducing}, is an RKHS. Its reproducing kernel is $$k(z,s) \, \doteq \, \frac{\sin (\eta(z-s))}{\pi(z-s)},$$ for $z \neq s$, where $z, s \in \mathbb{R}$; and $k(z, z) \doteq \eta/\pi$. Henceforth, we will work with the above defined {\em Paley-Wiener kernel}. \begin{remark} Paley-Wiener spaces can also be defined on $\mathbb{R}^d$ \cite{iosevich2015exponential}, but for simplicity we focus on the scalar input case. \end{remark} \section{Nonparametric { Confidence Bands}} Let $(x_1, y_1), \dots, (x_n, y_n)$ be a finite sample of i.i.d. pairs of random variables with unknown joint distribution $\mathbb{P}_{\! \scriptscriptstyle X,Y}$, where $x_k$ and $y_k$ are $\mathbb{R}$-valued, and $\mathbb{E}[\hspace{0.3mm}y^2_k\hspace{0.3mm}] < \infty$. We assume that\vspace{-0.5mm} $$ y_k \, = \, f_*(x_k) + \varepsilon_k, $$ for $k \in [n]$, where $\mathbb{E}[\hspace{0.3mm}\varepsilon_k\hspace{0.3mm}] = 0$. 
Variables $\{\varepsilon_k\}$ represent the measurement or observation {\em noises} { on the ``true'' $f_*$.} We call $f_*$ the {\em regression function}\hspace*{-0.5mm} { \cite{cucker2007learning}}, as on the support of $\{x_k\}$ it can also be written as $ f_*(x) \,= \, \mathbb{E} \left[\hspace{0.5mm} Y\hspace{0.5mm} |\hspace{0.5mm} X = x \hspace{0.5mm}\right] $, where $(X,Y)$ is a random vector with distribution $\mathbb{P}_{\! \scriptscriptstyle X,Y}$. \subsection{Objectives and Reliability} \label{sec:objectives} Our aim is to { build a (simultaneous) {\em confidence band}} for $f_*$, i.e., a function $I:\mathcal{D} \to { \mathbb{R} \times \mathbb{R}}$, where $\mathcal{D}$ is the {\em support} of the input distribution, such that { $I(x) = (\hspace{0.3mm}I_1(x), I_2(x)\hspace{0.3mm})$ specifies the {\em endpoints} of an interval estimate for $f_*(x)$, for all $x \in \mathcal{D}$}. More precisely, we would like to construct $I$ with\vspace{-0.5mm} % $$ \nu(I)\,\doteq \, \mathbb{P} \big(\, \forall x \in \mathcal{D}: { I_1(x) \leq f_*(x) \leq I_2(x)} \,\big) \, \geq \, 1- \alpha, $$ where $\alpha \in (0,1)$ is a user-chosen {\em risk} probability, and $\nu(I)$ is { the {\em reliability} of the confidence band. Let us introduce} \vspace{-0.2mm} $$ \mathcal{I} \, \doteq \, \big\{\hspace{0.5mm} (x,y) \in \mathcal{D} \times \mathbb{R} : y \in [ \hspace{0.3mm} I_1(x), I_2(x) \hspace{0.3mm} ] \hspace{0.5mm} \big\}. $$ { Based} on this, the reliability is $\nu(I) = \mathbb{P}(\, \mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}\,)$, where we define $\mathrm{graph}_{\mathcal{D}}(f_*) \doteq \{\, (x, f_*(x)) : x\in \mathcal{D} \,\}$. { For notational simplicity, we will use $I(x) = \emptyset$ to denote $I(x) = (\hspace{0.3mm}1,-1\hspace{0.3mm})$, i.e., the endpoints of an empty interval.} Hence, we aim at building a {confidence band} that contains the graph (w.r.t.\ domain $\mathcal{D}$) of the ``true'' $f_*$ with a {\em user-chosen} probability { level}. 
Moreover, we would like to have a {\em distribution-free} method (w.r.t.\ the noises) and the region should have {\em finite-sample} guarantees without a parametric model of $f_*$, namely, we take a {\em nonparametric} approach. \begin{remark} We note here, as well, that in the IPMs \cite{campi2009interval}\cite{garatti2019class} and in the conformal prediction framework \cite{vovk2005algorithmic}, the aim is to build a guaranteed prediction region for the {\em next observation}, while here we aim at predicting the value of the {\em regression function} instead. In this sense, { our objective is similar to that of the region estimates of Gaussian process regression} \cite{quinonero2005unifying}, however, without the assumption { of joint Gaussianity}. \end{remark} \subsection{Main Assumptions} Our core assumptions can be summarized as follows: \smallskip \setcounter{assumption}{-1} \begin{assumption} \label{A0} % {\em The dataset, $(x_1, y_1), \dots, (x_n, y_n) \in \mathbb{R} \times \mathbb{R}$, is an i.i.d.\ sample of input-output pairs; and $\mathbb{E}[\hspace{0.3mm}y^2_k\hspace{0.3mm}] < \infty$, for $k \in [n]$}. 
\end{assumption} \smallskip \begin{assumption} \label{A1} {\em Each (measurement) noise, $\varepsilon_k \doteq y_k - f_*(x_k)$, for $k \in [n]$, has a {symmetric} probability distribution about zero.} \end{assumption} \smallskip \begin{assumption} \label{A2} {\em The inputs, $\{x_k\}$, are distributed uniformly on $[\hspace{0.4mm}0, 1\hspace{0.2mm}]$.} \end{assumption} \smallskip \begin{assumption} \label{A3} {\em Function $f_*$ is from a Paley-Wiener space $\mathcal{H}$; $\forall\, x\in[\hspace{0.4mm}0, 1\hspace{0.2mm}]: { |f_*(x)|} \leq 1$; and $f_*$ is almost time-limited to $[\hspace{0.4mm}0, 1\hspace{0.3mm}]:$ $$ \int_{\mathbb{R}} f^2_*(x)\,\mathbb{I}(x \notin [\hspace{0.4mm}0, 1\hspace{0.2mm}]) \: \mathrm{d}\lambda(x) \, \leq \, \delta_0, $$ where $\mathbb{I}(\cdot)$ is an indicator and $\delta_0 > 0$ is a universal constant.} \end{assumption} \smallskip Now, let us briefly discuss these assumptions. The i.i.d.\ requirement of A\ref{A0} is standard in mathematical statistics and supervised learning \cite{Vapnik1998}. The square-integrability of the outputs is needed to estimate the $\mathcal{L}^2$ norm of $f_*$ based on the sample and to have a well-defined regression function. The assumption on the noises, A\ref{A1}, is very mild, as most standard distributions (e.g., Gauss, Laplace and uniform) satisfy this. Our strongest assumption is certainly A\ref{A2}, which basically { amounts} to the assumption that {\em we know the distribution of the inputs} and it is absolutely continuous. The more general case when the inputs, $\{x'_k\}$, have a {\em known}, strictly monotone { increasing} and continuous cumulative distribution function $F$, could be traced back to assumption A\ref{A2}, { since} it is well-known that $x_k \doteq F(x'_k)$ is distributed uniformly on $[\hspace{0.4mm}0, 1\hspace{0.2mm}]$. 
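The reduction mentioned above is the classical probability integral transform; a quick empirical check (with an exponential input distribution chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x_raw = rng.exponential(scale=2.0, size=100_000)  # inputs with known CDF F
F = lambda x: 1.0 - np.exp(-x / 2.0)              # CDF of Exp(scale=2)
x_unif = F(x_raw)                                 # x_k := F(x'_k)
# The transformed inputs are (empirically) uniform on [0, 1]:
# sample mean ~ 1/2 and sample variance ~ 1/12.
```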
Assumption A\ref{A3}, especially limiting the frequency domain of $f_*$, is needed to restrict the model class and to ensure that we can effectively generalize to unknown data points. We allow the ``true'' function to be defined outside the support of the inputs, cf.\ the Fourier uncertainty principle{ \cite{pinsky2008introduction}}, but the part of $f_*$ outside of $\mathcal{D} = [\hspace{0.4mm}0, 1\hspace{0.2mm}]$ should be ``negligible'', i.e., its norm cannot exceed a { (known)} small constant, $\delta_0$. { A crucial property of Paley-Wiener spaces is that their norms coincide with the standard $\mathcal{L}^2$ norm, which will allow us to efficiently upper bound $\|f_*\|_{\mathcal{H}}^2$ based on the sample.} \section{{ Confidence Bands}: Noise-Free Case} In order to motivate our solution, we start with a simplified problem, in which we observe the regression function perfectly at random inputs. In this noise-free case, we can recall the celebrated Nyquist–Shannon sampling theorem, which states that a band-limited function can be fully reconstructed from the samples, assuming the sampling rate exceeds twice the maximum frequency. On the other hand, if we only have a small number of observations, we cannot apply this result. Nevertheless, we still would like to have at least a region estimate. In this section we provide such an algorithm. 
Recall that for a dataset $\{(x_k, y_k)\}$, where inputs $\{x_k\}$ are {\em distinct} (which has probability one under A\ref{A2}), the element from $\mathcal{H}$ that has the {\em minimum norm} and {\em interpolates} each output $y_k$ at the corresponding input $x_k$, that is\vspace{-0.5mm} $$ \bar{f} \, \doteq \, \operatornamewithlimits{arg\,min} \big\{\,\|\hspace{0.3mm}f\hspace{0.4mm}\|_{\mathcal{H}} : f \in \mathcal{H}\hspace{1.5mm} \&\hspace{1.5mm} \forall\hspace{0.3mm} k \in [n]: f(x_k) =\, y_k \, \big\},\vspace{-0.5mm} $$ takes the following form \cite{berlinet2004reproducing} for all input $x \in \mathbb{X}:$\vspace{-0.5mm} $$\bar{f}(x)\,=\, \sum_{k=1}^n \bar{\alpha}_k k(x, x_k),\vspace{-0.5mm}$$ where the weights are $\bar{\alpha} = K^{-1} y$ with $y\doteq (y_1, \dots, y_n)\tr$ and $\bar{\alpha} \doteq (\bar{\alpha}_1, \dots, \bar{\alpha}_n)\tr$; we also used that the Paley-Wiener kernel is strictly positive-definite, thus matrix $K$ is invertible. We will exploit, as well, that the norm square of $\bar{f}$ is\vspace{-0.5mm} $$\|\hspace{0.3mm}\bar{f}\hspace{0.4mm}\|_{\mathcal{H}}^2 = \bar{\alpha}\tr \hspace{-0.3mm}K \bar{\alpha},\vspace{-0.5mm}$$ which is a direct consequence of the reproducing property. Assuming we have a stochastic upper bound for the norm square of the regression function, denoted by $\kappa$, the idea of our construction is as follows. We include those $(x_0,y_0)$ pairs in the { confidence band}, for which the minimum norm interpolation of $\{(x_k, y_k)\} \,\cup\, \{(x_0,y_0)\}$, namely, which simultaneously interpolates the original dataset and $(x_0,y_0)$, has a norm square which is less than or equal to $\kappa$. In order to make this approach practical, we need (1) a guaranteed upper bound for the norm square of the { ``true''} data-generating function; and (2) an efficient method to decide the endpoints of the { confidence} interval for each potential input $x_0 \in \mathcal{D}$. 
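The minimum-norm interpolant and its squared norm can be computed directly from the Gramian; a minimal numerical sketch (the sinc-type reproducing kernel of the Paley-Wiener space with band limit $\eta$ is assumed; $\eta = 30$ matches the later experiments):

```python
import numpy as np

def pw_kernel(x, y, eta=30.0):
    # Paley-Wiener reproducing kernel k(x, y) = sin(eta (x - y)) / (pi (x - y)).
    return (eta / np.pi) * np.sinc(eta * (x - y) / np.pi)

def min_norm_interpolant(xs, ys, eta=30.0):
    """Weights alpha = K^{-1} y of the minimum-norm interpolant and its
    squared RKHS norm, alpha^T K alpha = y^T K^{-1} y."""
    K = pw_kernel(xs[:, None], xs[None, :], eta)
    alpha = np.linalg.solve(K, ys)
    return alpha, float(ys @ alpha)
```

The fitted function is then $\bar{f}(x) = \sum_k \bar{\alpha}_k\hspace{0.3mm} k(x, x_k)$, which reproduces every output $y_k$ exactly.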
\subsection{Bounding the Norm: Noise-Free Case} It is easy to see that in the noise-free case, if $y_k = f_*(x_k)$, for $k \in [n]$, the norm square of $f_*$ can be estimated by \vspace{-0.5mm} $$\frac{1}{n} \sum_{k=1}^n y_k^2 = \frac{1}{n} \sum_{k=1}^n f_*^2(x_k) \approx \mathbb{E}\big[ f^2_*(X)\big] \approx \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{2}^2 = \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{\mathcal{H}}^2,$$ since in the Paley-Wiener space the norm is the $\mathcal{L}^2$ norm, and we also used that $\{x_k\}$ are uniform on domain $\mathcal{D} = [\hspace{0.4mm}0, 1\hspace{0.2mm}]$. As the next lemma demonstrates, we can construct such a guaranteed upper bound using the Hoeffding inequality: \medskip \begin{lemma} \label{lemma:Hoeffding.noiseless} {\em Assuming A\ref{A0}, A\ref{A2}, A\ref{A3} and that $y_k = f_*(x_k)$, for $k \in [n]$, { we have for any risk probability $\alpha\in (0,1)$,\vspace{-0.5mm} $$ \mathbb{P}\big(\norm{f_*}_{\mathcal{H}}^2 \leq \kappa \hspace{0.3mm}\big) \, \geq \, 1-\alpha, $$ with the following choice of the upper bound $\kappa$:\vspace{-0.5mm} $$ \kappa \, \doteq\, \frac{1}{n} \sum_{k=1}^n y_k^2 + \sqrt{\frac{\ln(\alpha)}{-2n}} + \delta_0.$$} } \end{lemma} \vspace{-1.5mm} \hspace*{-8mm} \begin{proof} By using the notation ${ R} \doteq \nicefrac{1}{n}\sum_{k=1}^n y_k^2$, we have $$\mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm} ]\, =\, \|\hspace{0.3mm} f_* \cdot \mathbb{I}_{\mathcal{D}} \hspace{0.3mm} \|_2^2\, \geq\, \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{\mathcal{H}}^2 - \delta_0, $$ where $\mathbb{I}_{\mathcal{D}}$ is the indicator function of $\mathcal{D} = [\hspace{0.4mm}0, 1\hspace{0.2mm}]$. That is, ${ R}$ is a Monte Carlo estimate of the integral of this $\mathcal{L}^2$ norm. 
Then, from the Hoeffding inequality, for all $t>0$: $$\mathbb{P}({ R} - \mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm} ] \leq -t) \leq \exp(-2n t^2).$$ According to the complement rule, we also have $$\mathbb{P} ( \mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm}] < { R} + t) \geq 1-\exp(-2nt^2).$$ We would like to choose a threshold $t > 0$ such that $$1-\alpha \, \leq\, \mathbb{P} ( \mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm}] < { R}+t).$$ { This} inequality is satisfied if we choose a $t>0$ with $$1-\alpha \leq 1-\exp(-2nt^2)\; \Longrightarrow \;\exp(-2nt^2) \leq \alpha.$$ After taking the natural logarithm, we get $-2nt^2 \leq \ln(\alpha)$, hence, the choice of $t^* = \sqrt{\ln(\alpha)/(-2n)}$ guarantees $$\mathbb{P}\big( \hspace{0.3mm} \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{\mathcal{H}}^2 \geq { R} +t^*+\delta_0 \hspace{0.3mm} \big) \leq \alpha,$$ which completes the proof of the lemma. \end{proof} \smallskip \subsection{Interval Endpoints: Noise-Free Case} Now, we construct a { confidence} interval for a given input {\em query point} $x_0 \in \mathcal{D}$, for which $x_0 \neq x_k$, for $k \in [n]$. That is, we build an { interval $[I_1(x_0),I_2(x_0)]$} that contains $f_*(x_0)$ with probability at least $1-\alpha$, where $\alpha \in (0,1)$ is given. First, we extend the Gram matrix with query point $x_0$, $$ K_{0}({i+1},{j+1})\, \doteq \, k(x_i,x_j), $$ for $i, j = 0,1, \dots ,n$. As $\{x_k\}_{k=0}^n$ are distinct (a.s.), this Gramian can be inverted. 
Hence, for any $y_0$, the minimum norm interpolation of $(x_0, y_0), (x_1, y_1), \dots, (x_n, y_n)$ is \vspace{-0.5mm} $$\tilde{f}(x)\,=\, \sum_{k=0}^n \tilde{\alpha}_k k(x, x_k),$$ where the weights are $\tilde{\alpha} = K_{0}^{-1} \tilde{y}$ with $\tilde{y}\doteq (y_0, y_1, \dots, y_n)\tr$ and $\tilde{\alpha} \doteq (\tilde{\alpha}_0, \dots, \tilde{\alpha}_n)\tr.$ The norm square of $\tilde{f}$ is $$ \|\hspace{0.3mm}\tilde{f}\hspace{0.4mm}\|_{\mathcal{H}}^2 \,=\, \tilde{\alpha}\tr\hspace{-0.3mm} K_{0} \tilde{\alpha}\, =\, \tilde{y}\tr\hspace{-0.3mm} K_{0}^{-1} K_{0} K_{0}^{-1} \tilde{y}\,=\, \tilde{y}\tr\hspace{-0.3mm} K_{0}^{-1} \tilde{y}. $$ Since the output query point $y_0$ in $\tilde{y} = (y_0, y\tr)\tr$ is arbitrary, we can compute the minimum norm needed to interpolate the original dataset extended by $(x_0, y_0)$ for any candidate $y_0$. Therefore, having a bound $\kappa$ on the norm square (which is guaranteed with probability $\geq 1-\alpha$), we can compute the highest and the lowest $y_0$ values which can be interpolated with a function from $\mathcal{H}$ having at most norm square $\kappa$. This leads to the following {\em two} optimization problems: \begin{equation} \label{noiseless-opt-min-max} \begin{split} \mbox{min\,/\,max} &\quad y_{0} \\[0.5mm] \mbox{subject to} &\quad (y_0, y\tr) K_{0}^{-1} (y_0, y\tr)\tr \leq\, \kappa\\[1mm] \end{split} \end{equation} where ``min\,/\,max'' means that we have to solve the problem as a minimization and also as a maximization (separately). The optimal values of these problems, denoted by $y_{\mathrm{min}}$ and $y_{\mathrm{max}}$, respectively, determine the {\em endpoints} of the { confidence} interval for $f_*(x_0)$, that is $I_1(x_0) \doteq y_{\mathrm{min}}$ and $I_2(x_0) \doteq y_{\mathrm{max}}$. Problems \eqref{noiseless-opt-min-max} are convex; moreover, as we will show, their optimal values can be calculated {\em analytically}. 
First, note that the only decision variable of these problems is $y_0$, everything else is constant (including the input $x_0$, which is also given). Let us partition the inverse Gramian, $K_{0}^{-1}$, as\vspace{-0.2mm} $$ \begin{bmatrix} \; c & b\tr\\ \; b & A \,\end{bmatrix} \doteq\, K_{0}^{-1}\!\!, $$ where $c \in \mathbb{R}$, $b\in \mathbb{R}^n$ and $A \in \mathbb{R}^{n\times n}$; after which $$ \quad (y_0, y\tr) K_{0}^{-1} (y_0, y\tr)\tr =\, c\, y_0^2 + 2\, b\tr y\, y_0 + y\tr\hspace{-0.3mm} A y. $$ Then, introducing $a_0 \doteq c$, $b_0 \doteq 2b\tr y$ and $c_0 = y\tr\hspace{-0.3mm} A y - \kappa$, the two optimization problems \eqref{noiseless-opt-min-max} can be written as \begin{equation} \label{noiseless-opt-proof} \begin{split} \mbox{min\,/\,max} &\quad y_{0} \\[0.5mm] \mbox{subject to} &\quad a_0 y_0^2 + b_0 y_0 + c_0 \, \leq \, 0 \end{split} \end{equation} in which $a_0$, $b_0$ and $c_0$ are constants (w.r.t.\ the optimization). Since these are (convex) quadratic programming problems (with linear objectives), their optimal solutions must be on the boundary of the constraint. This can be easily verified directly, for example, by the technique of Lagrange multipliers. There are at most two solutions of the quadratic equation $a_0 y_0^2 + b_0 y_0 + c_0 = 0.$ The smaller one will be denoted by $y_{\mathrm{min}}$ and the larger one by $y_{\mathrm{max}}$ (they are allowed to be the same, if there is only one solution). Then, we set $I_1(x_0) \doteq y_{\mathrm{min}}$, and $I_2(x_0) \doteq y_{\mathrm{max}}$; or $I(x_0) \doteq \emptyset$, in case there is no solution. Finally, we define $I_1(x_k) = I_2(x_k) = y_k$, for all $k \in [n]$, as the outputs are noise-free, that is $y_k = f_*(x_k)$, for $k \in [n]$. 
{\renewcommand{\arraystretch}{1.3} \begin{table}[!t] \centering \caption{\vspace*{-4mm}} \begin{tabular}{|cl|} \hline \multicolumn{2}{|c|}{\textsc{Pseudocode: { Confidence} interval for the noise-free case}} \\ \hline\hline {\em Input:} & Data sample $\{(x_k, y_k)\}_{k=1}^{n}$, input query point $x_0 \in \mathcal{D}$,\\ & and risk probability $\alpha \in (0,1)$.\\ {\em Output:} & { The endpoints of the confidence interval $[\hspace{0.3mm}I_1(x_0), I_2(x_0)\hspace{0.3mm}]$}\\ & { which has confidence probability at least $1-\alpha$.}\\[0.5mm] \hline \hline 1. & If $x_0 = x_k$ for any $k \in [n]$, return $I_1(x_0) = I_2(x_0) = y_k$.\\ 2.& Calculate $\kappa \doteq \frac{1}{n} \sum_{k=1}^n y_k^2 + \sqrt{\frac{\ln(\alpha)}{-2n}} + \delta_0$. \\ 3. & Create the extended Gram matrix\\ & $K_{0}(i+1, j+1)\doteq k(x_i,x_j),$ for $i,j=0,1,...,n$. \\ 4.& Calculate $K_{0}^{-1}$ and partition it as:\\ & $ \begin{bmatrix} \; c & b\tr\\ \; b & A \,\end{bmatrix} \doteq\, K_{0}^{-1} $\\ 5. & Solve the quadratic equation $a_0 y_0^2 + b_0 y_0 + c_0 = 0$, \\ & where $a_0 \doteq c$, $b_0 \doteq 2b\tr y$ and $c_0 \doteq y\tr\hspace{-0.3mm} A y - \kappa$.\\ 6. & If there is no solution, return $I(x_0) \doteq \emptyset$; otherwise return\\ & $I_1(x_0) \doteq y_{\mathrm{min}}$, and $I_2(x_0) \doteq y_{\mathrm{max}}$, where $y_{\mathrm{min}} \leq y_{\mathrm{max}}$\\ & are the solutions (which are allowed to coincide).\\[0.5mm] \hline \end{tabular} \label{table:pseudo-noise-free} \vspace*{-4mm} \end{table}} Table \ref{table:pseudo-noise-free} summarizes the proposed algorithm for the case without measurement noise. 
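The pseudocode of Table \ref{table:pseudo-noise-free} translates almost line-by-line into code; a minimal numerical sketch (the Paley-Wiener kernel with band limit $\eta$ and the constant $\delta_0$ are assumed inputs, and their default values here are illustrative):

```python
import numpy as np

def pw_kernel(x, y, eta=30.0):
    # Paley-Wiener reproducing kernel k(x, y) = sin(eta (x - y)) / (pi (x - y)).
    return (eta / np.pi) * np.sinc(eta * (x - y) / np.pi)

def noise_free_interval(xs, ys, x0, alpha=0.1, delta0=1e-3, eta=30.0):
    """Confidence interval for f_*(x0) in the noise-free case."""
    n = len(xs)
    hit = np.isclose(xs, x0)                         # step 1: observed input
    if hit.any():
        return ys[hit][0], ys[hit][0]
    kappa = np.mean(ys ** 2) + np.sqrt(np.log(alpha) / (-2.0 * n)) + delta0  # step 2
    z = np.concatenate(([x0], xs))                   # step 3: extended Gramian
    K0_inv = np.linalg.inv(pw_kernel(z[:, None], z[None, :], eta))
    c, b, A = K0_inv[0, 0], K0_inv[1:, 0], K0_inv[1:, 1:]    # step 4: partition
    a0, b0, c0 = c, 2.0 * b @ ys, ys @ A @ ys - kappa        # step 5: quadratic
    disc = b0 ** 2 - 4.0 * a0 * c0
    if disc < 0:                                     # step 6: empty interval
        return None
    root = np.sqrt(disc)
    return (-b0 - root) / (2.0 * a0), (-b0 + root) / (2.0 * a0)
```

Note that $c = (K_0^{-1})_{1,1} > 0$, since $K_0^{-1}$ is positive definite, so the smaller root is always the lower endpoint.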
Observe that if $\kappa$ satisfies $\norm{f_*}_{\mathcal{H}}^2 \leq \kappa$, which holds with probability at least $1-\alpha$, then the construction guarantees that $\mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}$, as the region contains all outputs that can be interpolated with a function from $\mathcal{H}$ which also interpolates the original dataset and has norm square at most $\kappa$. Hence, we can conclude that \medskip \begin{theorem}{\em Assume that A\ref{A0}, A\ref{A2}, A\ref{A3} and $y_k = f_*(x_k)$, for $k \in [n]$, are satisfied. Let $\alpha \in (0,1)$ be a risk probability. Then, the { confidence} band of Algorithm \ref{table:pseudo-noise-free} guarantees $$\mathbb{P}(\, \mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}\,) \, \geq \, 1-\alpha.\vspace*{0.8mm}$$} \end{theorem} \section{{ Confidence Bands} with Measurement Noise} Now, we turn to the general case, when the observations of $f_*$ are affected by {\em noises}, that is, $y_k = f_*(x_k) + \varepsilon_k$, for $k \in [n]$. Since now we do not have exact knowledge of the function values at the sample inputs, we cannot directly apply our previous approach. The main idea in this case is that first we need to construct {\em interval estimates} of $f_*$ at some {\em { observed} inputs}, $\{x_k\}$, which then can be used to bound the norm and to build { confidence} intervals for the {\em unobserved} inputs. \subsection{{ Confidence} Intervals at the { Observed} Inputs} \label{sec:SPS} We employ the {\em kernel gradient perturbation} (KGP) method, proposed in \cite{csaji2019distribution}, to build {\em non-asymptotically} guaranteed, {\em distribution-free} { confidence} intervals for $f_*$ at some of the {\em observed} inputs. The KGP algorithm is based on ideas from {\em finite-sample system identification} \cite{Algo2018}, particularly, it is an extension of the {\em Sign-Perturbed Sums} (SPS) method \cite{csaji2014sign}. 
The KGP method can build non-asymptotically guaranteed distribution-free confidence regions for the RKHS coefficients of the {\em ideal} representation (w.r.t.\ given input points) of $f_*$. A representation $f \in \mathcal{H}$ is called ideal w.r.t.\ $\{x_k\}_{k=1}^{d}$, if it has the property that $f(x_k) = f_*(x_k)$, for all $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$. { The KGP construction guarantees \cite[Theorem 2]{csaji2019distribution} that the confidence set contains the coefficients of an ideal representation w.r.t.\ $\{x_k\}_{k=1}^{d}$ {\em exactly} with a user-chosen confidence probability, assuming the noises satisfy regularity conditions, e.g., they are symmetric and independent (cf.\ A\ref{A0} and A\ref{A1}). Note that KGP regions are only guaranteed at the {\em observed} inputs. KGP cannot provide confidence bands directly.} The KGP approach can be used together with a number of kernel methods, such { as} support vector regression and kernelized LASSO. Here, we use it with {\em kernel ridge regression} (KRR) { which} is the kernelized version of Tikhonov regularized least squares (LS). It solves the following problem: \begin{equation} \label{krr:objective} \hat{f}_{\scriptscriptstyle\text{KRR}} \; \doteq \; \operatornamewithlimits{arg\,min}_{f \in \mathcal{H}}\, \frac{1}{n}\,\sum_{k=1}^n w_k (y_k - f(x_k))^2 \,+\, \lambda\, \| f \|^2_{\mathcal{H}}, \vspace{1mm} \end{equation} where $\lambda > 0$ and $w_k > 0$, $k \in [n]$, are given (constant) weights. 
Using the { representer theorem} \cite{hofmann2008kernel} and the reproducing property, the objective of \eqref{krr:objective} can be rewritten as \cite{csaji2019distribution}\vspace{-0.5mm} \begin{equation} \label{krr:obj2} \frac{1}{n}\,(y - K\hspace{0.2mm} \theta)\tr W (y - K\hspace{0.2mm} \theta) \,+\, \lambda\, \theta\tr \hspace{-0.3mm}K\hspace{0.2mm} \theta, \end{equation} where $W \doteq \mbox{diag}(w_1,\dots, w_n)$, $K$ is the Gramian matrix, and $\theta = (\theta_1, \dots, \theta_n)\tr$ are the coefficients of the solution. Minimizing \eqref{krr:obj2} can be further reformulated as a canonical {\em ordinary least squares} (OLS) problem, $\|\hspace{0.3mm}{ v} \,-\, \Phi\hspace{0.2mm} \theta\hspace{0.3mm}\|^2$, by using\vspace{-0.5mm} \begin{equation*} \Phi\, =\, \left[ \begin{array}{c} \,(\nicefrac{1}{\sqrt{n}})\,W^{\frac{1}{2}} K\, \\[1mm] \sqrt{\lambda}\, K^{\frac{1}{2}} \end{array} \right]\!,\quad { v} \,=\, \left[ \begin{array}{c}\, (\nicefrac{1}{\sqrt{n}})\, W^{\frac{1}{2}} y\, \\[1mm] \;0_n\; \end{array} \right]\!, \end{equation*} where $W^{\frac{1}{2}}$ and $K^{\frac{1}{2}}$ denote the principal, non-negative square roots of matrices $W$ and $K$, respectively. Note that the square roots exist as these matrices are positive semi-definite. For convex quadratic problems (such as KRR) and {\em symmetric} noises (cf.\ A\ref{A1}), the KGP confidence regions coincide with SPS regions. They are {\em star convex} with the LS estimate, $\hat{\theta}$, as a star center. Furthermore, they have {\em ellipsoidal outer approximations}, that is, there are regions of the form \vspace{-0.5mm} \begin{equation} \widehat{\Theta}_{\beta} \; \doteq \; \Big\{\, \theta \in \mathbb{R}^n\, :\, (\theta-\hat{\theta})^\mathrm{T}\frac{1}{n}\Phi\tr\Phi\hspace{0.3mm}(\theta-\hat{\theta})\,\leq\, r \, \Big\}, \end{equation} where $1-\beta \in (0,1)$ is a given confidence probability \cite{csaji2014sign}. 
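The OLS reformulation above can be verified numerically against the closed-form KRR solution; a sketch (uniform weights, an arbitrary regularization parameter, and a sinc-type kernel matrix are assumed purely for illustration):

```python
import numpy as np

def krr_ols_form(K, y, lam, w):
    """Build Phi and v so that min_theta ||v - Phi theta||^2 is the KRR problem."""
    n = len(y)
    W_half = np.diag(np.sqrt(w))
    vals, vecs = np.linalg.eigh(K)                   # K is positive semi-definite
    K_half = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    Phi = np.vstack([W_half @ K / np.sqrt(n), np.sqrt(lam) * K_half])
    v = np.concatenate([W_half @ y / np.sqrt(n), np.zeros(n)])
    theta_hat, *_ = np.linalg.lstsq(Phi, v, rcond=None)
    return Phi, v, theta_hat
```

For an invertible Gramian, the OLS minimizer coincides with the solution of the normal equations $(\nicefrac{1}{n}\,W K + \lambda I)\hspace{0.3mm}\theta = \nicefrac{1}{n}\,W y$, which gives an independent check.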
The radius of this confidence ellipsoid, $r$, can be computed by {\em semi-definite programming}: see \cite[{ Section VI.B}]{csaji2014sign}. Hence, the construction guarantees $\mathbb{P}(\hspace{0.3mm}\tilde{\theta} \in \widehat{\Theta}_{\beta}\hspace{0.3mm}) \geq 1-\beta$, where $\tilde{\theta}$ is the coefficient vector of an {\em ideal} representation: \vspace{-1mm} $$ \sum_{i=1}^n \tilde{\theta}_i k(x_i, x_k) \,=\, f_*(x_k), $$ for $k \in [n]$. By defining $\varphi_k \doteq (k(x_1,x_k), \dots, k(x_n,x_k))\tr$, we know that $f_*(x_k) = \varphi_k\tr\tilde{\theta}$, but of course $\tilde{\theta}$ is unknown. Since $\tilde{\theta}$ is inside the ellipsoid $\widehat{\Theta}_{\beta}$ with probability $\geq 1-\beta$, we can construct (probabilistic) upper and lower bounds of $f_*(x_k)$ by maximizing and minimizing $\varphi_k\tr\theta$, for $\theta \in \widehat{\Theta}_{\beta}$. These problems (linear objective and ellipsoid constraint) have known solutions: the minimum and the maximum are $$ \nu_k = \varphi_k\tr\hat{\theta} - (\varphi_k\tr P\varphi_k)^{\frac{1}{2}}, \qquad \mu_k = \varphi_k\tr\hat{\theta} + (\varphi_k\tr P\varphi_k)^{\frac{1}{2}}, $$ where $P = nr\hspace{0.3mm}(\Phi\tr\Phi)^{-1}$, and $\hat{\theta}$ is the center of the ellipsoid, i.e., the { solution} of the OLS formulation $\|\hspace{0.3mm}{ v} \,-\, \Phi\hspace{0.2mm} \theta\hspace{0.3mm}\|^2$. { Due to the construction of KGP confidence regions, there is an (extremely small, but nonzero) probability of getting an empty region. In this case, we define $\nu_k = 1$ and $\mu_k = -1$, for all $k \in [n]$. That is, we give an {\em empty interval} for each $f_*(x_k)$, using a similar representation as in Section \ref{sec:objectives}.} Finally, we introduce a slight modification to this construction. 
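The endpoints $\nu_k$, $\mu_k$ come from minimizing and maximizing a linear function over an ellipsoid, for which the extremizers are known in closed form; a self-contained sketch (a generic shape matrix $A$ and radius $r$ are used, not tied to the KGP quantities):

```python
import numpy as np

def linear_bounds_on_ellipsoid(phi, theta_hat, A, r):
    """Min and max of phi^T theta over the ellipsoid
    {theta : (theta - theta_hat)^T A (theta - theta_hat) <= r}, A positive definite."""
    half_width = np.sqrt(r * (phi @ np.linalg.solve(A, phi)))
    center = phi @ theta_hat
    return center - half_width, center + half_width
```

The maximizer is $\theta^* = \hat{\theta} + r A^{-1}\varphi \hspace{0.3mm}/\hspace{0.3mm} (r\hspace{0.3mm} \varphi\tr A^{-1}\varphi)^{\frac{1}{2}}$, which lies on the boundary of the ellipsoid; this can be used as a direct check of the formula.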
We can also construct confidence intervals just for the first $d \leq n$ observations by redefining objective \eqref{krr:obj2} as \begin{equation*} \frac{1}{n}\,(y - K_1\hspace{0.2mm} \theta)\tr W (y - K_1\hspace{0.2mm} \theta) \,+\, \lambda\, \theta\tr \hspace{-0.3mm}K_2\hspace{0.2mm} \theta, \end{equation*} where $K_1 \in \mathbb{R}^{n \times d}$ is $K$ having the last $n-d$ columns removed, and $K_2\in \mathbb{R}^{d \times d}$ is $K_1$ having the last $n-d$ rows removed. Hence, we search for an ideal vector $\tilde{\theta} \in \mathbb{R}^d$, such that for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, we have $(K_1 \tilde{\theta})(k)= f_*(x_k)$. For the error computation we still use {\em all} measurements ($K_1$ still has $n$ rows). { It is important that in this case only the first $d$ residuals are perturbed in the construction of the KGP ellipsoid. This} usually considerably reduces the sizes of the intervals, but then we only have guarantees { at $d\leq n$ observed inputs}. \subsection{Bounding the Norm with Measurement Noise} In the previous section, we built {\em simultaneous} confidence intervals at the sample inputs for the first $d\leq n$ observations, $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$; that is, they have the property \vspace{-0.2mm} \begin{equation} \label{eq:sym.conf.int} \mathbb{P}\big(\hspace{0.3mm} \forall \hspace{0.3mm}k \in [\hspace{0.3mm}d\hspace{0.5mm}]: f_*(x_k) \in [\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]\hspace{0.3mm}\big)\, \geq\, 1 - \beta, \end{equation} for some (user-chosen) risk probability $\beta \in (0,1)$. Recall that by Lemma \ref{lemma:Hoeffding.noiseless}, for any $n$, the variable \begin{equation} \label{eq:Hoeffdieng} \kappa \, \doteq \frac{1}{n} \sum_{k=1}^n f^2_*(x_k) + \sqrt{\frac{\ln(\alpha)}{-2n}} + \delta_0, \end{equation} is an upper bound of $\norm{f_*}_{\mathcal{H}}^2$ with probability at least $1-\alpha$. 
Using property \eqref{eq:sym.conf.int}, we also know that\vspace{-1mm} \begin{equation} \label{eq:sum.max.nu.mu.square} \sum_{k=1}^{d} f_*^2(x_k) \,\leq\, \sum_{k=1}^{d} \max\{\nu_k^2, \mu_k^2\}, \end{equation} with probability at least $1-\beta$. By combining property \eqref{eq:sym.conf.int}, formulas \eqref{eq:Hoeffdieng} and \eqref{eq:sum.max.nu.mu.square}, the results of Lemma \ref{lemma:Hoeffding.noiseless}, as well as using Boole's inequality (the union bound), we have \medskip \begin{lemma} \label{lemma:Hoeffding.noisy} {\em Assume that A\ref{A0}, A\ref{A2}, A\ref{A3} hold and that confidence intervals $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, satisfy \eqref{eq:sym.conf.int}. { Then, $$ \mathbb{P}\big(\norm{f_*}_{\mathcal{H}}^2 \leq \tau \hspace{0.3mm}\big)\, \geq \,1-\alpha-\beta, $$ with the following choice of the upper bound $\tau$: $$ \tau \, \doteq\, \frac{1}{d} \sum_{k=1}^{d} \max\{\nu^2_k,\mu^2_k \} + \sqrt{\frac{\ln(\alpha)}{-2d}} + \delta_0.$$ }} \vspace{0mm} \end{lemma} \begin{remark} Although we only used the first $d$ observations for estimating the norm (square), the intervals $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, incorporate information about the {\em whole} sample. { The ``optimal'' choice of $d$ leading to small intervals is an open question, in practice $d = \mathcal{O}(\sqrt{n})$ often works well.} \end{remark} \subsection{Interval Endpoints with Measurement Noise} The final step is to construct a { confidence} interval for a given input {\em query point} $x_0 \in \mathcal{D}$ with $x_0 \neq x_k$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$. We extend the Gram matrix with query point $x_0$, $$ \widetilde{K}_{0}({i+1},{j+1})\, \doteq \, k(x_i,x_j), $$ for $i, j = 0,1, \dots ,d$; but we only use the first $d$ data points. 
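The bound $\tau$ of Lemma \ref{lemma:Hoeffding.noisy} is a direct computation from the simultaneous intervals; a minimal sketch:

```python
import numpy as np

def norm_bound_noisy(nu, mu, alpha, delta0):
    """Upper bound tau on ||f_*||_H^2, valid with probability >= 1 - alpha - beta,
    given simultaneous 1 - beta level intervals [nu_k, mu_k] at d observed inputs."""
    d = len(nu)
    return float(np.mean(np.maximum(nu ** 2, mu ** 2))
                 + np.sqrt(np.log(alpha) / (-2.0 * d)) + delta0)
```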
{\renewcommand{\arraystretch}{1.3} \begin{table}[!t] \centering \caption{\vspace*{-4mm}} \begin{tabular}{|cl|} \hline \multicolumn{2}{|c|}{\textsc{Pseudocode: { Confidence} interval with measurement noise}} \\ \hline\hline {\em Input:} & Data sample $\{(x_k, y_k)\}_{k=1}^{n}$, input query point $x_0 \in \mathcal{D}$,\\ & risk probabilities $\alpha \in (0,1)$ and $\beta \in (0,1)$.\\ {\em Output:} & { The endpoints of the confidence interval $[\hspace{0.3mm}I_1(x_0), I_2(x_0)\hspace{0.3mm}]$}\\ & { which has confidence probability at least $1-\alpha-\beta$.}\\[0.5mm] \hline \hline 1. & Select $d \in [n]$, the number of confidence intervals built for\\ & a subset of { observed} inputs. Default choice: $d = \ceil{\sqrt{n}\hspace{0.3mm}}$. \\ 2.& Construct $1-\beta$ level simultaneous confidence intervals for\\ & $\{f_*(x_k)\}_{k=1}^{d}$, that is $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, with \eqref{eq:sym.conf.int}.\\ & (e.g., apply the KGP method discussed in Section \ref{sec:SPS})\\ 3.& Set $\tau \, \doteq\, \frac{1}{d} \sum_{k=1}^{d} \max\{\nu_k^2, \mu_k^2 \} + \sqrt{\frac{\ln(\alpha)}{-2d}} + \delta_0$. \\ 4. & Solve both convex optimization problems given by \eqref{noisy-opt-min-max}.\\ 5. & If there is no solution, return $I(x_0) \doteq \emptyset$; otherwise return\\ & $I_1(x_0) \doteq z_{\mathrm{min}}$ and $I_2(x_0) \doteq z_{\mathrm{max}}$, where $z_{\mathrm{min}} \leq z_{\mathrm{max}}$\\ & are the solutions (which are allowed to coincide).\\[0.5mm] \hline \end{tabular} \label{table:pseudo-noisy} \vspace*{-4mm} \end{table}} We have to be careful with the optimization problems: since now we do not know the exact function values, we only have intervals that may contain them. Therefore, all function values are treated as decision variables, which can take values from the given confidence intervals. 
Hence, we have to solve \begin{equation} \label{noisy-opt-min-max} \begin{split} \mbox{min\,/\,max} &\quad z_{0} \\[0.5mm] \mbox{subject to} &\quad (z_0, \dots, z_d) \widetilde{K}_{0}^{-1} (z_0, \dots, z_d)\tr \leq\, \tau\\ &\quad \nu_1 \leq z_1 \leq \mu_1,\; \dots,\; \nu_d \leq z_d \leq \mu_d\\[0.5mm] \end{split} \end{equation} where ``min\,/\,max'' again means that the problem has to be solved as a minimization and as a maximization (separately). These problems are {\em convex}; therefore, they can be solved efficiently. The optimal values, denoted by $z_{\mathrm{min}}$ and $z_{\mathrm{max}}$, are the {\em endpoints} of the { confidence} interval: $I_1(x_0) \doteq z_{\mathrm{min}}$, and $I_2(x_0) \doteq z_{\mathrm{max}}$. If \eqref{noisy-opt-min-max} is infeasible, e.g., we get an empty KGP ellipsoid, we set $I(x_0) = \emptyset$, i.e., we use $I(x_0) =(\hspace{0.3mm}1, -1\hspace{0.3mm})$. Table \ref{table:pseudo-noisy} summarizes the algorithm to construct the endpoints of a confidence interval at a given query point, in the case of measurement noise. Its theoretical guarantee is: \medskip \begin{theorem}{\em Assume that A\ref{A0}, A\ref{A1}, A\ref{A2}, A\ref{A3} are satisfied. Let $\alpha, \beta \in (0,1)$ be given risk probabilities. Then, the confidence band built by Algorithm \ref{table:pseudo-noisy} described above guarantees $$\mathbb{P}(\, \mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}\,) \, \geq \, 1-\alpha - \beta.$$} \end{theorem} \vspace{4mm} \begin{remark} Applying the KGP approach in Algorithm \ref{table:pseudo-noisy} is optional. One could use any other construction that provides simultaneous confidence intervals for a subset of $\{f_*(x_k)\}$, cf.\ \eqref{eq:sym.conf.int}. Another approach could be to assume sub-Gaussian or sub-exponential noises and use their tail bounds to ensure \eqref{eq:sym.conf.int}. 
\end{remark} \begin{figure}[!t] \centering \hspace*{-2mm} % \includegraphics[width = 1.02\columnwidth]{zajmentesabra_b_30.pdf} % % \caption{Nonparametric { confidence bands} for the noise-free setting.} \label{fig:experiment1} \end{figure} \begin{figure}[!t] \centering \hspace*{-2mm} % \includegraphics[width = 1.02\columnwidth]{zajosabra6_Lap_04.pdf} % % \caption{Nonparametric { confidence bands} with measurement noise.} \label{fig:experiment2} \vspace*{-2mm} \end{figure} \section{Numerical Experiments} The algorithms were also tested numerically. We used a Paley-Wiener RKHS with $\eta = 30$. The ``true'' function was constructed as follows: first, $20$ random input points $\{\bar{x}_k\}_{k=1}^{20}$ were generated, with uniform distribution on $[\hspace{0.3mm}0,1]$. Then $f_*(x) = \sum_{k=1}^{20} w_k k(x, \bar{x}_k)$ was created, where each $w_k$ had a uniform distribution on $[-1,1]$. The function was normalized whenever its maximum exceeded $1$. Then, $n$ random observations of $f_*$ were generated. In the noisy case, $\{\varepsilon_k\}$ had a Laplace distribution with location $\mu = 0$ and scale $b = 0.4$ parameters. In the noise-free case, we used $n=10$ observations, and created confidence bands with risk $\alpha = 0.1$ and $0.5$. Figure \ref{fig:experiment1} demonstrates that in the noise-free setting a very small sample size can lead to informative nonparametric confidence bands. In the case of measurement noise, a sample size of $n=100$ was used with $d=20$ (orange points). { Confidence bands} with risk $\alpha + \beta = 0.1$ and $0.5$ are illustrated in Figure \ref{fig:experiment2}. We simply used $\alpha = \beta$ in these cases. The results indicated that even with limited information, adequate regions can be created. \section{Conclusions} In this paper a nonparametric and distribution-free { method was introduced to build simultaneous confidence bands for bounded, band-limited functions}. 
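The construction of the ``true'' function in the experiments can be sketched as follows (the grid-based normalization is an implementation choice, not taken from the paper):

```python
import numpy as np

def make_true_function(eta=30.0, m=20, seed=0):
    """f_*(x) = sum_k w_k k(x, xbar_k), rescaled so that max |f_*| <= 1 on [0, 1]."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, 1.0, m)      # random kernel centers xbar_k
    w = rng.uniform(-1.0, 1.0, m)           # random weights w_k on [-1, 1]
    def f(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        G = (eta / np.pi) * np.sinc(eta * (x[:, None] - centers[None, :]) / np.pi)
        return G @ w
    grid = np.linspace(0.0, 1.0, 2001)      # normalize on a fine grid
    scale = max(1.0, float(np.max(np.abs(f(grid)))))
    return lambda x: f(x) / scale
```

Noisy observations are then $y_k = f_*(x_k) + \varepsilon_k$ with, e.g., Laplace$(0, 0.4)$ noise drawn via `rng.laplace`.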
The construction was first presented for the case without measurement noise; then, it was extended to allow symmetric noises. Besides having non-asymptotic theoretical guarantees, the approach was also demonstrated numerically, supporting its feasibility. \bibliographystyle{ieeetr}
\section{Introduction}\label{sec:intro} We consider a general case of a two-component univariate mixture model where one component distribution is known while the mixing proportion and the other component distribution are unknown. Such a model can be defined in its most general form as \begin{equation}\label{model2} g(x)=(1-p)f_{0}(x)+pf(x) \end{equation} where $f_{0}$ is a known density component, while $p\in (0,1)$ and $f(x)$ are the unknown weight and the unknown density component, respectively. Semiparametric mixtures of density functions have by now been considered in a number of publications. The earliest seminal publications in this area are \cite{Hall_Zhou} and \cite{Hall_Pakyari_Elmore}. From the practical viewpoint, the model \eqref{model2} is related to the multiple testing problem where $p$-values are uniformly distributed on $[0,1]$ under the null hypothesis but their distribution under the alternative is unknown. In the setting of model \eqref{model2}, this means that the known distribution is uniform while the goal is to estimate the proportion $p$ of false null hypotheses and the distribution of the $p$-values under the alternative. More detailed descriptions can be found in the statistical literature, e.g., \cite{efron2012large} and \cite{robin2007semi}. Historically, whenever a two-component mixture model with a known component was considered, some assumptions were imposed on the unknown density function $f(x)$. Most commonly, it was assumed that the unknown distribution belongs to a particular parametric family. In such a case, \cite{cohen1967estimation} and \cite{lindsay1983geometry} used the maximum likelihood-based method to fit it; \cite{day1969estimating} used the minimum $\chi^{2}$ method, while \cite{lindsay1993multivariate} used the method of moments. \cite{jin2008proportion} and \cite{cai2010optimal} used empirical characteristic functions to estimate the unknown cumulative distribution function under a semiparametric normal mixture model. 
A somewhat different direction was taken by \cite{bordes_delmas} who considered a special case of the model \eqref{model2} where the unknown component belonged to a location family. In other words, their model is defined as \begin{equation}\label{model1} g(x)=(1-p)f_{0}(x)+pf(x-\mu) \end{equation} where $f_{0}$ is known while $p\in (0,1)$, the non-null location parameter $\mu$ and the pdf $f$ that is symmetric around $\mu$ are the unknown parameters. The model of \cite{bordes_delmas} was motivated by the problem of detection of differentially expressed genes under two or more conditions in microarray data. Typically, a test statistic is built for each gene. Under the null hypothesis (which corresponds to a lack of difference in expression), such a test statistic has a {\it known} distribution (commonly, Student's or Fisher). Then, the response of thousands of genes is observed; such a response can be thought of as coming from a mixture of two distributions: the known distribution $f_0$ (under the null hypothesis) and the unknown distribution $f$ under the alternative hypothesis. Once the parameters $p$, $\mu$, and $f$ are estimated, we can estimate the probability that a gene belongs to a null component distribution conditionally on observations. \cite{bordes_delmas} establishes some sufficient identifiability conditions for their proposed model; they also suggest two estimation methods for it, both of which rely heavily on the fact that the density function of the unknown component is symmetric. A sequel paper, \cite{bordes2010semiparametric}, also establishes a joint central limit theorem for estimators that are based on one of these methods. There is no particular practical reason to assume that the unknown component is symmetric, and \cite{bordes_delmas} themselves note that ``In our opinion, a challenging problem would be to consider model \eqref{model1} without the symmetry assumption on the unknown component". This is the goal we set for ourselves in this manuscript. 
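The data-generating process of model \eqref{model2} is straightforward to simulate; a sketch (the Gamma choice for the unknown component $f$ is purely illustrative and is deliberately not symmetric, while $f_0$ is taken as standard normal):

```python
import numpy as np

def sample_mixture(n, p=0.3, seed=0):
    """Draw n points from g = (1 - p) f0 + p f, with f0 = N(0, 1) known and,
    purely for illustration, f a skewed Gamma(4, 1/2) density (mean 2)."""
    rng = np.random.default_rng(seed)
    from_f = rng.random(n) < p                        # latent component labels
    x = np.where(from_f,
                 rng.gamma(shape=4.0, scale=0.5, size=n),
                 rng.standard_normal(n))
    return x, from_f
```

In applications the labels are of course unobserved; they are returned here only to make the simulation checkable.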
Our approach is based on, first, defining the (joint) estimator of $f$ and $p$ as a minimizer of a log-likelihood type objective functional of $p$ and $f$. Such a definition is implicit in nature; however, we construct an MM (Majorization-Minimization) iterative algorithm that possesses a descent property with respect to that objective functional. Moreover, we also show that the resulting algorithm actually converges. Our simulation studies also show that the algorithm is rather well-behaved in practice. Just as we were finishing our work, a related publication, \citet{patra2015estimation}, came to our attention. \citet{patra2015estimation} also consider a two-component mixture model with one unknown component. Their identifiability approach is somewhat more general than ours, as our discussion mostly concerns sufficient conditions for specific functional classes containing the function $f$. They propose some general identifiability criteria for this model and obtain a separate estimator of the weight $p$; moreover, they also construct a distribution-free finite sample lower confidence bound for the weight $p$. \citet{patra2015estimation} start with estimating the parameter $p$ first; then, a completely automated and tuning parameter free estimate of $f$ can be constructed when $f$ is decreasing. When $f$ is not decreasing, one can start with e.g. estimating $\hat g$ based on observations $X_{1},\ldots,X_{n}$; then, one can construct e.g. a kernel estimate of the unknown $f$ that is proportional to $\max(\hat g-(1-\hat p)f_{0},0)$. In contrast to their approach, our approach estimates both $p$ and $f$ jointly and the algorithm works the same way regardless of the shape of $f$. The rest of our manuscript is structured as follows. Section \ref{ident} discusses identifiability of the model \eqref{model2}. Section \ref{est} introduces our approach to estimation of the model \eqref{model2}.
Section \ref{emp_sec} suggests an empirical version of the algorithm first introduced in Section \ref{est} that can be implemented in practice. Section \ref{sim} provides simulated examples of the performance of our algorithm. Section \ref{realdata} gives a real data example. Finally, Section \ref{Discussion} rounds out the manuscript with a discussion of our results and a delineation of some related future research. \section{Identifiability}\label{ident} In general, the model \eqref{model2} is not identifiable. In what follows, we investigate some special cases. For an unknown density function $f$, let us denote its mean by $\mu_{f}$ and its variance by $\sigma^{2}_{f}$. To state a sufficient identifiability result, we consider a general equation \begin{equation}\label{unid} (1-p)f_{0}(x)+pf(x)=(1-p_{1})f_{0}(x)+p_{1}f_{1}(x). \end{equation} We also denote by $V(\mu_{f})$ the variance of the distribution $f$ viewed as a function of its mean $\mu_{f}$. \begin{Lemma}\label{identifiability} Consider the model \eqref{model2} with the unknown density function $f$. Without loss of generality, assume that the first moment of $f_{0}$ is zero while its second moment is finite. We assume that the function $f$ belongs to a set of density functions whose first two moments are finite and whose means are not equal to zero and are all of the same sign; that is, $f\in {\cal F}=\{f:\int x^{2}f(x)\,dx <+\infty; \mu_{f}>0 \mbox{ or } \mu_{f}<0\}$. Moreover, we assume that for any $f\in {\cal F}$ the function $G(\mu_f)=\frac{V(\mu_f)}{\mu_f}$ is strictly increasing. Then, the equation \eqref{unid} has the unique solution $p_{1}=p$ and $f_{1}=f$. \end{Lemma} \begin{proof} First, let us assume that the mean $\mu_{f}> 0$. Then, the assumption of our lemma implies that the function $V: (0,\infty)\mapsto (0,\infty)$ is strictly increasing. Let us use the notation $\theta_{0}$ for the second moment of $f_{0}$.
If we assume that there are distinct $p_{1}\ne p$ and $f_{1}\ne f$ such that $(1-p)f_{0}(x)+pf(x)=(1-p_{1})f_{0}(x)+p_{1}f_{1}(x)$, the following two moment equations are easily obtained: \begin{equation}\label{1st} \zeta=p_1\mu_{f_1}=p\mu_f \end{equation} and \begin{equation}\label{2nd} (p_1-p)\theta_0=\zeta(\mu_{f_1}-\mu_f)+p_1V(\mu_{f_1})-pV(\mu_f), \end{equation} where $\zeta>0$. Our task is now to show that, if \eqref{1st} and \eqref{2nd} are true, then $p=p_1$ and $f=f_1$. To see this, let us assume $p_1>p$ (the case $p_1<p$ can be treated in exactly the same way). Then, from the first equation we have immediately that $\mu_{f_1}<\mu_f$; moreover, since the function $G(\mu_f)$ is strictly increasing, so is the function $\mu_f+G(\mu_f)$. With this in mind, we have \[ \mu_{f_1}+\frac{V(\mu_{f_1})}{\mu_{f_1}}<\mu_{f}+\frac{V(\mu_{f})}{\mu_{f}}. \] On the other hand, $(p_1-p)\theta_0\ge0$, which implies \[ 0\le\zeta(\mu_{f_1}-\mu_f)+p_1V(\mu_{f_1})-pV(\mu_f)=\zeta(\mu_{f_1}-\mu_f)+\zeta \left(\frac{V(\mu_{f_1})}{\mu_{f_1}}-\frac{V(\mu_{f})}{\mu_{f}}\right). \] Therefore, \[ \mu_{f_1}+\frac{V(\mu_{f_1})}{\mu_{f_1}}\ge\mu_{f}+\frac{V(\mu_{f})}{\mu_{f}}, \] and we end up with a contradiction. Therefore, we must have $p=p_1$. This, in turn, implies immediately that $f=f_1$. The case where $\mu_{f}<0$ proceeds similarly. In this case, the variance function is $V:(-\infty,0)\rightarrow (0,\infty)$, and the function $G(\mu)$ is again strictly increasing by assumption. As a first step, again take $p_{1}>p$. Clearly, the first moment equation is yet again \eqref{1st}, where now $\zeta<0$. Since $p_{1}>p$, we now have $\mu_{f_{1}}>\mu_{f}$ and, due to the strict monotonicity of $G(\mu)$, we have $\mu_{f_{1}}+\frac{V(\mu_{f_{1}})}{\mu_{f_{1}}}>\mu_{f}+\frac{V(\mu_{f})}{\mu_{f}}$.
On the other hand, since $(p_{1}-p)\theta_{0}\ge 0$, we have \begin{align*} &0\le \zeta(\mu_{f_{1}}-\mu_{f})+p_{1}V(\mu_{f_{1}})-pV(\mu_{f})\\ &=\zeta\left(\left\{\mu_{f_{1}}+\frac{V(\mu_{f_{1}})}{\mu_{f_{1}}}\right\}-\left\{\mu_{f}+\frac{V(\mu_{f})}{\mu_{f}}\right\}\right). \end{align*} Because $\zeta<0$, the above implies that $ \left\{\mu_{f_{1}}+\frac{V(\mu_{f_{1}})}{\mu_{f_{1}}}\right\}-\left\{\mu_{f}+\frac{V(\mu_{f})}{\mu_{f}}\right\}<0$, which contradicts the assumption that the function $G(\mu)$ is strictly increasing. \end{proof} \begin{Remark} To understand better what is going on here, it is helpful to suggest a more specific density class that satisfies the sufficient condition of Lemma \ref{identifiability}. The form of Lemma \ref{identifiability} suggests one such possibility: the family of natural exponential families with power variance functions (NEF-PVF). For convenience, we give the definition due to \cite{Bar-Lev_Stramer1987}: ``A natural exponential family (NEF for short) is said to have a power variance function if its variance function is of the form $V(\mu)=\alpha\mu^{\gamma}$, $\mu\in \Omega$, for some constants $\alpha\ne0$ and $\gamma$, called the scale and power parameters, respectively". This family of distributions is discussed in detail in \cite{bar1986reproducibility} and \cite{Bar-Lev_Stramer1987}. In particular, they establish that the parameter space $\Omega$ can only be ${\bR}$, $\bR^{+}$ or $\bR^{-}$; moreover, $\gamma=0$ is possible iff $\Omega=\bR$. The property most interesting for us (see Theorem 2.1 of \cite{Bar-Lev_Stramer1987} for details) is that, for any NEF-PVF, it is necessary that $\gamma\notin (-\infty,0)\cup(0,1)$; in other words, the possible values of $\gamma$ are $0$, corresponding to the normal distribution, $1$, corresponding to the Poisson distribution, and any real number greater than $1$. In particular, the case $\gamma=2$ corresponds to the gamma distribution.
Out of these choices, the ones that do not result in a strictly increasing function $G(\mu)$ are $\gamma=0$, corresponding to the normal distribution, and $\gamma=1$, corresponding to the Poisson distribution, for which $G(\mu)\equiv\alpha$ is constant; thus, we have to exclude both from consideration. With these exceptions gone, the NEF-PVF framework includes only density families with either strictly positive or strictly negative means; due to this, it seems a rather good fit for the description of the family of density functions $f$ in Lemma \ref{identifiability}. Note that the exclusion of the normal distribution is also rather sensible from the practical viewpoint because it belongs to a location family; therefore, it can be treated in the framework of \cite{bordes_delmas}. More specifically, Proposition $1$ of \cite{bordes_delmas} suggests that, when $f(x)$ is normal, the equation \eqref{unid} has at most two solutions if $f_{0}$ is an even pdf and at most three solutions if $f_{0}$ is not an even pdf. \end{Remark} \begin{Remark} It is also of interest to compare our Lemma \ref{identifiability} with Lemma 4 of \citet{patra2015estimation}, which also establishes an identifiability result for the model \eqref{model2}. The notions of identifiability considered in the two results differ: whereas we discuss identifiability based on the first two moments, Lemma 4 of \citet{patra2015estimation} uses a somewhat different definition of identifiability. At the same time, the interpretation given in the previous Remark suggests an interesting connection. For example, the case where the unknown density function $f$ is gamma corresponds to the power parameter of the NEF-PVF family being equal to $2$. According to our identifiability result, Lemma \ref{identifiability}, the mixture model \eqref{model2} is then identifiable with respect to the first two moments. On the other hand, let us assume that the known density function $f_{0}$ is the standard normal.
Since its support fully contains the support of any density from the gamma family, identifiability in the sense of \citet{patra2015estimation} now follows from their Lemma 4. \end{Remark} \begin{Remark} We only assumed that the first moment of $f_{0}$ is equal to zero for simplicity. It is not hard to reformulate Lemma \ref{identifiability} if this is not the case; the proof is analogous. \begin{Lemma}\label{identifiability_1} Consider the model \eqref{model2} with the unknown density function $f$. We assume that the known density $f_{0}$ has finite first two moments and denote its first moment $\mu_{f_{0}}$. We also assume that the function $f$ belongs to a set of density functions whose first two moments are finite, and whose means are all either greater than $\mu_{f_{0}}$ or less than $\mu_{f_{0}}$: \[ f\in {\cal F}=\{f:\int x^{2}f(x)\,dx<+\infty; \mu_{f}>\mu_{f_{0}} \mbox{ or } \mu_{f}<\mu_{f_{0}}\}. \] Let us assume that $G(\mu_{f})=\frac{V(\mu_{f})}{\mu_{f}-\mu_{f_{0}}}$ is a strictly increasing function of $\mu_{f}$ for a fixed, known $f_{0}$. Then, the equation \eqref{unid} has the unique solution $p_{1}=p$ and $f_{1}=f$. \end{Lemma} \end{Remark} \section{Estimation}\label{est} \subsection{Possible interpretations of our approach} Let $h$ be a positive bandwidth and $K$ a symmetric positive-valued kernel function that is also a true density; as a technical assumption, we will also assume that $K$ is continuously differentiable. The rescaled version of this kernel function is denoted by $K_h(x)=K(x/h)/h$ for any $x\in\bR$. We will also need a linear smoothing operator $\mathcal{S}f(x)=\int K_h(x-u)f(u)du$ and a nonlinear smoothing operator $\mathcal{N}f(x)=\exp(\mathcal{S}\log{f(x)})$ for any generic density function $f$. For simplicity, let us assume that our densities are defined on a closed interval, e.g. $[0,1]$. This assumption is made for technical convenience only, when proving results related to the convergence of our algorithm.
Simulations in Section \ref{sim} show that the algorithm also works well when the support of the density $f$ is e.g. half of the real line. In what follows, we will omit these integration limits whenever doing so does not cause confusion. A simple application of Jensen's inequality and Fubini's theorem shows that $\int{\cal N}f(x)\,dx\le \int \mathcal{S}f(x)\,dx=\int f(x)\,dx=1$. Our estimation approach is based on selecting $p$ and $f$ that minimize the following log-likelihood type objective functional: \begin{equation}\label{obj.fc1} \ell(p,f)=\int g(x)\log\frac{g(x)}{(1-p)f_0(x)+p\mathcal{N}f(x)}dx. \end{equation} The reason the functional \eqref{obj.fc1} is of interest as an objective functional is as follows. First, recall that $KL(a(x),b(x))=\int \left[a(x)\log \frac{a(x)}{b(x)}+b(x)-a(x)\right]\,dx$ is the Kullback-Leibler distance between two arbitrary positive integrable functions (not necessarily densities) $a(x)$ and $b(x)$; as usual, $KL(a,b)\ge 0$. Note that the functional \eqref{obj.fc1} can be represented as a penalized Kullback-Leibler distance between the target density $g(x)$ and the smoothed version of the mixture $(1-p)f_{0}(x)+p{\cal N}f(x)$; indeed, we can represent $\ell(p,f)$ as \begin{equation}\label{main.problem} \ell (p,f)=KL(g(x),(1-p)f_{0}(x)+p{\cal N}f(x))+p\left\{1-\int {\cal N}f(x)\,dx\right\}. \end{equation} The quantity $1-\int {\cal N}f(x)\,dx=\int [f(x)-{\cal N}f(x)]\,dx$ is effectively a penalty on the smoothness of the unknown density. Thus, the functional \eqref{obj.fc1} can be interpreted as a penalized smoothed likelihood functional. Of course, it is a matter of serious interest to find out whether the optimization problem \eqref{main.problem} has a solution at all. This problem can be thought of as a kind of generalized Tikhonov regularization problem; such problems have recently become an object of substantial interest in the area of ill-posed inverse problems.
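The smoothing operators $\mathcal{S}$ and $\mathcal{N}$ and the Jensen bound above can be checked numerically. The sketch below discretizes both operators on a grid over $[0,1]$; the Gaussian kernel, the row-normalization of the kernel weights at the boundary, and the trial density $f(x)=1/2+x$ are all our own illustrative choices.

```python
import numpy as np

# Discretised smoothing operators: S f = K_h * f and N f = exp(S log f).
# Grid size, bandwidth, kernel and trial density are illustrative choices;
# the kernel rows are renormalised so each row is a probability weight
# vector (a simple boundary correction on [0,1]).
m, h = 1001, 0.05
x = np.linspace(0.0, 1.0, m)

W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
W /= W.sum(axis=1, keepdims=True)      # rows sum to one

f = 0.5 + x                             # a density on [0,1], bounded away from 0
Sf = W @ f                              # linear smoother S f
Nf = np.exp(W @ np.log(f))              # nonlinear smoother N f

# Jensen's inequality row by row gives N f <= S f pointwise,
# hence int N f dx <= int S f dx, as claimed in the text.
print(np.all(Nf <= Sf))
```

The same computation makes the penalty $1-\int \mathcal{N}f\,dx$ in \eqref{main.problem} directly computable on the grid.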
A very nice framework for these problems is described in the monograph \cite{flemming2011generalized} and we will follow it here. First of all, we define the domain of the operator ${\cal N}$ to be the set of square integrable densities, i.e. all densities on $[0,1]$ that belong to $L_{2}([0,1])$. We also define $L_{2}^{+}([0,1])$ to be the set of all non-negative functions that belong to $L_{2}([0,1])$. Define a nonlinear smoothing operator $A:{\cal D}(A)\subseteq L_{2}([0,1])\rightarrow L_{2}([0,1])$ as \begin{align*} Af(x):=(1-p)f_{0}(x)+p{\cal N}f(x) \end{align*} where ${\cal D}(A)=\{f(x):f \in L_{2}^{+}([0,1]),\ f(x)\geq \eta >0,\ \int_{0}^{1} f(x)\,dx=1,\ \exists F\in \mathbb{R}^+\ s.t.\ ||f||_2\leq F\}$. In optimization theory, $A$ is commonly called a {\it forward operator}. Note that, as long as $||K||_2^{2}:=\int K^{2}(u)\,du<\infty$, it is easy to show that ${\cal N}f(x)\in L_{2}([0,1])$ whenever $f(x)\in L_{2}([0,1])$; therefore, this framework is justified. Next, for any two functions $a(x)$ and $b(x)$, we define a {\it fitting functional} $Q:L_2([0,1])\times L_2([0,1])\rightarrow [0,\infty)$ as $Q(a(x),b(x)):= KL(a(x),b(x))$. Finally, we also define a non-negative {\it stabilizing functional} $\Omega :{\cal D}(\Omega)\subseteq L_2([0,1])\rightarrow [0,1]$ as $\Omega(f) := \left\{1-\int_{0}^{1} {\cal N}f(x)\,dx\right\}$ where ${\cal D}(\Omega)={\cal D}(A)$. Now, we are ready to define the minimization problem \begin{equation}\label{equ:min_problem} T_{p,g}(f)=Q_p(g,Af)+p\Omega(f)\rightarrow \min \end{equation} where $p$ plays the role of a {\it regularization parameter}. We use the subscript $p$ for $Q$ to stress the fact that the fitting functional depends on the regularization parameter; this does not seem to be common in the optimization literature, but we can still obtain the existence result that we need.
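As a numerical sanity check of this decomposition, the sketch below evaluates $T_{p,g}(f)=Q_p(g,Af)+p\Omega(f)$ and verifies that it agrees with the objective functional \eqref{obj.fc1} on a grid; the particular densities $f_0$, $f$, $g$, the kernel and the weight $p$ are arbitrary illustrative choices of ours.

```python
import numpy as np

# Check of the identity l(p,f) = KL(g, Af) + p*(1 - int N f dx),
# i.e. T_{p,g}(f) = Q_p(g, Af) + p*Omega(f), on a grid over [0,1].
m, h = 1001, 0.05
x = np.linspace(0.0, 1.0, m)

W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
W /= W.sum(axis=1, keepdims=True)          # row-normalised kernel weights

f0 = np.ones(m)                            # known component: Uniform(0,1)
f = 2.0 * x + 1e-3                         # trial unknown component
f /= np.trapz(f, x)
g = 0.6 * f0 + 0.4 * 3.0 * x ** 2          # a fixed target density, int g = 1

p = 0.4
Nf = np.exp(W @ np.log(f))
Af = (1 - p) * f0 + p * Nf                 # forward operator A applied to f

ell = np.trapz(g * np.log(g / Af), x)              # objective functional
Q = np.trapz(g * np.log(g / Af) + Af - g, x)       # fitting functional KL(g, Af)
Omega = 1.0 - np.trapz(Nf, x)                      # stabilizing functional
print(abs(ell - (Q + p * Omega)) < 1e-5)
```

The agreement holds for any positive trial $f$, since the identity only uses $\int g=1$ and $\int f_0=1$.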
The following set of assumptions is needed to establish the existence of a solution of this problem; although a version of these assumptions is given in \cite{flemming2011generalized}, we give them here in full for ease of reading. \begin{Assumption}\label{assumption} Assumptions on $A: {\cal D}(A)\subset L_{2}([0,1])\rightarrow L_{2}([0,1])$: \begin{enumerate} \item\label{A1} $A$ is sequentially continuous with respect to the weak topology of the space $L_{2}([0,1])$, i.e. if $f_{k}\rightharpoonup f$ for $f,f_{k}\in {\cal D}(A)$, then we have $A(f_{k})\rightharpoonup A(f)$. \item\label{A2} ${\cal D}(A)$ is sequentially closed with respect to the weak topology on $L_{2}([0,1])$. This means that $f_{k}\rightharpoonup f$ for $\{f_{k}\}\in {\cal D}(A)$ implies that $f\in {\cal D}(A)$. \end{enumerate} Assumptions on $Q:L_{2}([0,1])\times L_{2}([0,1])\rightarrow [0,\infty)$: \begin{enumerate}\setcounter{enumi}{2} \item \label{Q1} $Q_p(g,v)$ is sequentially lower semi-continuous with respect to the weak topology on $L_{2}([0,1])\times L_{2}([0,1])$, that is, if $p_{k}\rightarrow p$, $g_{k}\rightharpoonup g$ and $v_{k}\rightharpoonup v$, then $Q_p(g,v)\le \liminf_{k\rightarrow \infty} Q_{p_k}(g_k,v_k)$. \item\label{Q2} If $Q_p(g,v_{k})\rightarrow 0$, then there exists some $v\in L_{2}([0,1])$ such that $v_{k}\rightharpoonup v$. \item\label{Q3} If $v_{k}\rightharpoonup v$ and $Q_p(g,v)<\infty$, then $Q_p(g,v_{k})\rightarrow Q_p(g,v)$.
\end{enumerate} Assumptions on $\Omega:{\cal D}(A)\rightarrow [0,1]$: \begin{enumerate}\setcounter{enumi}{5} \item\label{O1} $\Omega$ is sequentially lower semicontinuous with respect to the weak topology on $L_{2}([0,1])$, that is, if $f_{k}\rightharpoonup f$ for $f,f_{k}\in L_{2}([0,1])$, we have $\Omega(f)\le \liminf_{k\rightarrow \infty}\Omega(f_{k})$. \item\label{O2} The sets $M_{\Omega}(c):=\{f\in {\cal D}(A):\Omega(f)\le c\}$ are sequentially pre-compact with respect to the weak topology on $L_{2}([0,1])$ for all $c\in \bR$, that is, each sequence in $M_{\Omega}(c)$ has a subsequence that is convergent in the weak topology on $L_{2}([0,1])$. \end{enumerate} \end{Assumption} \begin{Lemma} Assume that the kernel function $K$ is bounded from above: $K(x)\le {\cal K}$. Then, the optimization problem \eqref{equ:min_problem} satisfies all of the assumptions listed in Assumption \ref{assumption}. \end{Lemma} \begin{proof} We start with Assumption \ref{assumption}(\ref{A1}). Note that the space dual to $L_{2}([0,1])$ is again $L_{2}([0,1])$; therefore, the weak convergence $f_{k}\rightharpoonup f$ in $L_2([0,1])$ means that, for any $q\in L_{2}([0,1])$, we have $\int_{0}^{1}f_{k}(x)q(x)\,dx\rightarrow \int_{0}^{1}f(x)q(x)\,dx$ as $k\rightarrow \infty$. To show that Assumption \ref{assumption}(\ref{A1}) is, indeed, true, we first note that $\{f_k\}$ and $f$ are bounded away from $0$, which implies that $\int_{0}^{1} |\log f_k(x) -\log f(x)|q(x)dx\leq \int_{0}^{1} |f_k(x)-f(x)|\frac{1}{\eta}q(x)dx\rightarrow 0$ as $k\rightarrow \infty$ for some positive $\eta$ that does not depend on $k$. Therefore, $f_k\rightharpoonup f$ implies $\log f_k\rightharpoonup \log f$.
Second, \begin{align*} \int \mathcal{S}\log f_k(x)q(x)dx &= \int q(x)\int_{0}^{1} K_h(x-u)\log f_k(u)du\,dx \\ &= \int_{0}^{1} \log f_k(u)\int K_h(x-u)q(x)dx\,du. \end{align*} Note that, since $\log f_k\rightharpoonup \log f$ and the function $\tilde q(u)=\int K_h(x-u)q(x)dx$ belongs to $L_{2}([0,1])$, we can claim that $\int \mathcal{S}\log f_k(x)q(x)dx \longrightarrow \int_{0}^{1} \log f(u)\int K_h(x-u)q(x)dx\,du=\int \mathcal{S}\log f(x)q(x)dx$ as $k\rightarrow \infty$. In other words, we have just established that $\mathcal{S}\log f_{k}\rightharpoonup \mathcal{S}\log f$ as $k\rightarrow \infty$. Moving ahead, using the elementary inequality $\log t<t$ for $t>0$ together with the bound $K_{h}(x)\le {\cal K}/h$, we find that $\int_{0}^{1} K_h(x-u)\log f_k(u)du < \int_{0}^{1} K_h(x-u) f_k(u)du \leq \frac{{\cal K}}{h}\int_{0}^{1} f_k(u)du = \frac{{\cal K}}{h}$. The same is true for $f(x)$; since the exponential function is Lipschitz on any set bounded from above, there exists a positive constant $E$ that does not depend on $k$ such that $\int |\exp\{ \mathcal{S}\log f_k(x)\} - \exp\{ \mathcal{S}\log f(x)\}|g(x)dx \leq E\int |\mathcal{S}\log f_k(x)-\mathcal{S}\log f(x)| g(x)dx \rightarrow 0$. Therefore, $f_k\rightharpoonup f$ finally implies ${\cal N} f_k \rightharpoonup {\cal N} f$ and thus $Af_k \rightharpoonup Af$. Next, we need to prove that Assumption \ref{assumption}(\ref{A2}) is also valid. To show that ${\cal D}(A)$ is sequentially closed, we select the particular function $q\equiv 1$ on $[0,1]$. Such a function clearly belongs to $L_{2}([0,1])$ and so we have $\int_{0}^{1} f_{k}(x)q(x)\,dx\equiv \int_{0}^{1} f_{k}(x)\,dx \rightarrow \int_{0}^{1}f(x)\,dx$ as $k\rightarrow \infty$. Since we know that, for any $k$, we have $\int f_{k}(x)\,dx=1$, it follows that $\int_{0}^{1} f(x)dx=1$ as well. It is not hard to check that the other characteristics of ${\cal D}(A)$ are preserved under weak convergence as well.
The fitting functional $Q$ is a Kullback-Leibler functional; the fact that it satisfies Assumptions \ref{assumption}(\ref{Q1})--(\ref{Q3}) has been demonstrated several times in the optimization literature concerned with variational regularization with non-metric fitting functionals. The details can be found in e.g. \cite{flemming2010theory}. The sequential lower semi-continuity of the stabilizing functional $\Omega$ required in Assumption \ref{assumption}(\ref{O1}) is guaranteed by the weak version of Fatou's Lemma. Indeed, let us define $\phi_k(x)=\mathcal{S}f_k(x)-{\cal N}f_k(x)$. Then, due to Jensen's inequality, $\{\phi_k\}$ is a sequence of non-negative measurable functions. We already know that $f_{k}\rightharpoonup f$ guarantees ${\cal N}f_{k}\rightharpoonup {\cal N}f$; therefore, we have $\phi_{k}\rightharpoonup \phi$ where $\phi (x)=\mathcal{S}f(x)-{\cal N}f(x)$. By the weak version of Fatou's lemma, we then have $\int \phi (x)\,dx\le \liminf_{k\rightarrow \infty}\int \phi_k(x)\,dx$ or, equivalently, $\Omega (f)\le \liminf_{k\rightarrow \infty}\Omega (f_k)$. Therefore, $\Omega :{\cal D}(A)\rightarrow [0,1]$ is lower semi-continuous with respect to the weak topology on $L_{2}([0,1])$. Finally, Assumption \ref{assumption}(\ref{O2}) is always true simply because ${\cal D}(A)$ is a closed subset of a closed ball in $L_{2}([0,1])$; the sequential Banach-Alaoglu theorem then lets us conclude that $M_{\Omega}(c)$ is sequentially pre-compact with respect to the weak topology on $L_{2}([0,1])$. \end{proof} Finally, we can state the existence result. Note that in the optimization literature one can sometimes see results of this nature under the heading of {\it well-posedness} rather than existence; see, e.g., \cite{hofmann2007convergence}. \begin{theorem}\label{thm:Existence} {\bf (Existence)} For any mixture density $g(x)\in L_2([0,1])$ and any $0<p<1$, the minimization problem \eqref{equ:min_problem} has a solution.
The minimizer $f^*\in {\cal D}(A)$ satisfies $T_{p,g}(f^*)< \infty$ if and only if there exists a density function $\bar{f}\in {\cal D}(A)$ with $Q_p(g,A\bar{f})< \infty$. \end{theorem} \begin{proof} Set $c:=\inf_{f\in {\cal D}(A)}T_{p,g}(f)$. Note that $c<\infty$ due to the existence of $\bar{f}$, and thus the trivial case $c=\infty$ is excluded. Next, take a sequence $(f_k)_{k\in \mathbb{N}}\in {\cal D}(A)$ with $T_{p,g}(f_k) \rightarrow c$. Then \begin{equation} \Omega(f_k) \le \frac{1}{p}T_{p,g}(f_k)\le \frac{1}{p}(c+1) \end{equation} for sufficiently large $k$, and by the compactness of the sublevel sets of $\Omega$ there is a subsequence $(f_{k_l})_{l\in \mathbb{N}}$ weakly converging to some $\tilde{f}\in {\cal D}(A)$. The weak sequential continuity of $A$ implies $Af_{k_l}\rightharpoonup A\tilde{f}$, and the lower semi-continuity of $Q_p$ and $\Omega$ gives \begin{equation} T_{p,g}(\tilde{f}) \le \liminf_{l \rightarrow \infty} T_{p,g}(f_{k_l})=c, \end{equation} that is, $\tilde{f}$ is a minimizer of $T_{p,g}$. \end{proof} \subsection{Algorithm} Now we go back to introducing the algorithm that searches for the unknown $p$ and $f(x)$. The first result that we need is the following technical lemma. \begin{Lemma}\label{lemma:divergence} For any pdf $\widetilde{f}$ and any real number $\widetilde{p}\in(0,1)$, \begin{align}\label{eqn:divergence} &\ell(\widetilde{p},\widetilde{f})-\ell(p,f)\\ &\le-\int g(x)\left[(1-w(x))\log\left(\frac{1-\widetilde{p}}{1-p}\right) +w(x)\log\left(\frac{\widetilde{p}\mathcal{N}\widetilde{f}(x)}{p\mathcal{N}f(x)}\right)\right]dx\nonumber \end{align} where $w(x)=\frac{p\mathcal{N}f(x)}{(1-p)f_0(x)+p\mathcal{N}f(x)}$.
\end{Lemma} \begin{proof}[Proof of Lemma \ref{lemma:divergence}] The result follows by the following straightforward calculation: \begin{eqnarray*} &\ell(\widetilde{p},\widetilde{f})-\ell(p,f)= -\int g(x)\log\left(\frac{(1-\widetilde{p})f_0(x)+\widetilde{p}\mathcal{N}\widetilde{f}(x)} {(1-p)f_0(x)+p\mathcal{N}f(x)}\right)dx\\ &=-\int g(x)\log\left((1-w(x))\frac{1-\widetilde{p}}{1-p}+ w(x)\frac{\widetilde{p}\mathcal{N}\widetilde{f}(x)}{p\mathcal{N}f(x)}\right)dx\\ &\le -\int g(x)\left[(1-w(x))\log\left(\frac{1-\widetilde{p}}{1-p}\right)+w(x) \log\left(\frac{\widetilde{p}\mathcal{N}\widetilde{f}(x)}{p\mathcal{N}f(x)}\right)\right]dx, \end{eqnarray*} where the last inequality follows by convexity of the negative logarithm. \end{proof} Suppose that at iteration $t$ we have the updated pdf $f^t$ and the updated mixing proportion $p^t$. Let $w^{t}(x)=\frac{p^{t}\mathcal{N}f^{t}(x)}{(1-p^t)f_0(x)+p^t\mathcal{N}f^t(x)}$, and define \[ p^{t+1}=\int g(x)w^t(x)dx, \] \[ f^{t+1}(x)=\alpha^{t+1}\int K_h(x-u)g(u)w^t(u)du, \] where $\alpha^{t+1}$ is a normalizing constant needed to ensure that $f^{t+1}$ integrates to one. Then the following result holds. \begin{theorem}\label{descent:property} For any $t\ge0$, $\ell(p^{t+1},f^{t+1})\le\ell(p^t,f^t)$. \end{theorem} \begin{proof}[Proof of Theorem \ref{descent:property}] By Lemma \ref{lemma:divergence}, for an arbitrary density function $\widetilde f$ and an arbitrary number $0<\widetilde p<1$, \begin{align}\label{descent:property:eqn1} &\ell(\widetilde{p},\widetilde{f})-\ell(p^t,f^t)\\ &\le -\int g(x)\left[(1-w^t(x))\log\left(\frac{1-\widetilde{p}}{1-p^t}\right) +w^t(x)\log\left(\frac{\widetilde{p}\mathcal{N}\widetilde{f}(x)}{p^t\mathcal{N}f^t(x)}\right)\right]dx\nonumber. \end{align} Let $(\widehat{p},\widehat{f})$ be the minimizer of the right-hand side of (\ref{descent:property:eqn1}) with respect to $\widetilde p$ and $\widetilde f$.
Note that the right-hand side becomes zero when $\widetilde{p}=p^{t}$ and $\widetilde{f}=f^{t}$; therefore, the minimum value of the functional on the right-hand side must be less than or equal to $0$. Therefore, it is clear that $\ell(\widehat{p},\widehat{f})\le\ell(p^t,f^t)$. To verify that the statement of Theorem \ref{descent:property} is true, it remains only to show that $(\widehat{p},\widehat{f})=(p^{t+1},f^{t+1})$. Note that the right-hand side of (\ref{descent:property:eqn1}) can be rewritten as \begin{align*} &-\int g(x)[(1-w^t(x))\log(1-\widetilde{p})+w^t(x)\log\widetilde{p}]dx\\ &-\int g(x)w^t(x)\log\mathcal{N}\widetilde{f}(x)dx+T, \end{align*} where the term $T$ only depends on $(p^t,f^t)$. The first integral in the above depends only on $\widetilde{p}$ but not on $\widetilde{f}$. It is easy to see that the minimizer of this first integral with respect to $\widetilde{p}$ is $\widehat{p}=\int g(x)w^t(x)dx$. The second integral, on the contrary, depends only on $\widetilde{f}$ but not on $\widetilde{p}$. It can be rewritten as \begin{align*} &-\int g(x)w^t(x)\log\mathcal{N}\widetilde{f}(x)dx=-\int\int g(x)w^t(x)K_h(x-u)\log\widetilde{f}(u)dudx\\ &=-\int\left(\int K_h(u-x)g(x)w^t(x)dx\right)\log\widetilde{f}(u)du\\ &=-\frac{1}{\alpha^{t+1}}\int f^{t+1}(u)\log\widetilde{f}(u)du\\ &=\frac{1}{\alpha^{t+1}}\int f^{t+1}(u)\log\frac{f^{t+1}(u)}{\widetilde{f}(u)}du-\frac{1}{\alpha^{t+1}} \int f^{t+1}(u)\log f^{t+1}(u)du. \end{align*} The first term in the above is the Kullback-Leibler divergence between $f^{t+1}$ and $\widetilde{f}$ scaled by $1/\alpha^{t+1}$, which is minimized at $\widehat{f}=f^{t+1}$. Since the second term does not depend on $\widetilde{f}$ at all, we arrive at the needed conclusion. \end{proof} The above suggests that the following algorithm can be used to estimate the parameters of the model \eqref{model2}. First, we start with initial values $p^{0}, f^{0}$ at the step $t=0$.
Then, for any $t=0,1,2,\ldots$ \begin{itemize} \item Define the weight \begin{equation}\label{weight_eq} w^{t}(x)=\frac{p^{t}{\cal N}f^{t}(x)}{(1-p^{t})f_{0}(x)+p^{t}{\cal N}f^{t}(x)} \end{equation} \item Define the updated probability \begin{equation}\label{p_eq} p^{t+1}=\int g(x)w^t(x)dx \end{equation} \item Define \begin{equation}\label{func_eq} f^{t+1}(u)=\alpha^{t+1}\int K_h(u-x)g(x)w^t(x)dx \end{equation} \end{itemize} \begin{Remark} Note that the proposed algorithm is an MM (majorization-minimization) algorithm and not a true EM algorithm. MM algorithms are commonly used whenever optimization of a difficult objective function is best avoided and a series of simpler objective functions is optimized instead. A general introduction to MM algorithms is available in, for example, \cite{hunter2004tutorial}. As a first step, let $(p^{t},f^{t})$ denote the current parameter values in our iterative algorithm. The main goal is to obtain a new functional $b^{t}(p,f)$ such that, when shifted by a constant, it majorizes $\ell(p,f)$. In other words, there must exist a constant $C^{t}$ such that, for any $(p,f)$, $b^{t}(p,f)+C^{t}\ge \ell(p,f)$, with equality when $(p,f)=(p^{t},f^{t})$. The use of $t$ as a superscript in this context indicates that the definition of the new functional $b^{t}(p,f)$ depends on the parameter values $(p^{t},f^{t})$; these change from one iteration to the next. In our case, we define the functional \begin{align}\label{maj_func} & b^{t}(\tilde p,\tilde f)=-\int g(x)[(1-w^{t}(x))\log (1-\tilde p)+w^{t}(x)\log \tilde p]\,dx\\ &-\int g(x)w^{t}(x)\log {\cal N}\tilde f(x)\,dx\nonumber; \end{align} note that the dependence on $f^{t}$ is through the weights $w^{t}$. From the proof of Theorem \ref{descent:property}, it follows that, for any argument $(\tilde p,\tilde f)$, we have \[ \ell(\tilde p,\tilde f) -\ell(p^{t},f^{t})\le b^{t}(\tilde p,\tilde f)-b^{t}(p^{t},f^{t}).
\] This means that $b^{t}(\tilde p,\tilde f)$ is a majorizing functional; indeed, it is enough to select the constant $C^{t}$ such that $C^{t}= \ell(p^{t},f^{t})-b^{t}(p^{t},f^{t})$. In the proof of Theorem \ref{descent:property}, it is the sequence of functionals $b^{t}(\tilde p,\tilde f)$ (note that they are different at each step of the iteration) that is being minimized with respect to $(\tilde p,\tilde f)$, and not the original functional $\ell(\tilde p,\tilde f)$. This, indeed, establishes that our algorithm is an MM algorithm. \end{Remark} The following lemma shows that the sequence $\xi_{t}=\ell(p^t,f^t)$, defined by our algorithm, has a finite non-negative limit (which is not necessarily a global minimum of $\ell(p,f)$). \begin{Lemma} \label{lemma:functional_positivity} There exists a finite limit of the sequence $\xi_{t}=\ell(p^t,f^t)$ as $t\rightarrow \infty$: \[ L:=\lim_{t\rightarrow\infty}\xi_{t} \] for some $L\ge 0$. \end{Lemma} \begin{proof}[Proof of Lemma \ref{lemma:functional_positivity}] First, note that $\xi_{t}$ is a non-increasing sequence in $t$ due to Theorem \ref{descent:property}. Thus, if we can show that it is bounded from below by zero, the proof will be finished. To this end, the functional $\ell(p^t,f^t)$ can be represented as \begin{align*} &\ell(p^t,f^t)=KL(g(x), (1-p^t)f_{0}(x)+p^t{\cal N}f^t(x))+\int g(x)\,dx\\ &-\int [(1-p^t)f_{0}(x)+p^t {\cal N}f^t(x)]\,dx\\ &=KL(g(x), (1-p^t)f_{0}(x)+p^t{\cal N}f^t(x))+1-(1-p^t)-p^t\int {\cal N}f^t(x)\,dx\\ &= KL(g(x),(1-p^t)f_{0}(x)+p^t{\cal N}f^t(x))+p^t\left[1-\int {\cal N}f^t(x)\,dx\right]. \end{align*} Now, since $K$ is a proper density function, by Jensen's inequality, \begin{align*} &{\cal N}f^t(x)=\exp{\left\{\int K_{h}(x-u)\log f^t(u)\,du\right\}}\\ &\le \int K_{h}(x-u)f^t(u)\,du\equiv {\mathcal S}f^t(x). \end{align*} Moreover, using Fubini's theorem, one can easily show that $\int {\mathcal S}f^t(x)\,dx=1$ since $f^t$ is a proper density function.
Therefore, one easily concludes that $\int {\cal N}f^t(x)\,dx\le \int {\mathcal S}f^t(x)\,dx=1$. Thus, $\ell(p^t,f^t)\ge 0$ due to the non-negativity of the Kullback-Leibler distance. \end{proof} It is, of course, not clear directly from Lemma \ref{lemma:functional_positivity} whether the sequence $(p^{t},f^{t})$ generated by this algorithm also converges. Answering this question requires establishing a lower semicontinuity property of the functional $\ell(p,f)$. Some additional requirements have to be imposed on the kernel function $K$ in order to obtain the needed result, which is given below. We denote by $\Delta$ the domain of the kernel function $K$. \begin{theorem}\label{theorem:lower_semicontinuity} Let the kernel $K:\Delta\rightarrow \bR$ be bounded from below by a positive constant and Lipschitz continuous with the Lipschitz constant $C_{K}$. Then, the minimizing sequence $(p^{t},f^{t})$ converges to a limit $(p^{*}_{h},f^{*}_{h})$ that depends on the bandwidth $h$ and is such that $L=\ell(p^{*}_{h},f^{*}_{h})$. \end{theorem} \begin{proof} We prove this result in two parts. First, let us introduce the subset of functions $B=\{{\mathcal S}\phi:0\le \phi\in L_{1}(\Delta), \int \phi=1\}$. Such a subset consists of all densities on a closed compact interval that can be represented as linearly smoothed integrable functions. Every function $f^{t}$ generated in our algorithm except, perhaps, the initial one can clearly be represented in this form. This is because, at every step of the iteration, $f^{t+1}(x)=\alpha^{t+1}\int K_{h}(x-u)g(u)w^{t}(u)\,du=\int K_{h}(x-u)\phi(u)\,du$ where $\phi(u)=\alpha^{t+1}g(u)w^{t}(u)$. Moreover, we observe that $\int \phi(u)\,du=\alpha^{t+1}\int g(u)w^{t}(u)\,du=\alpha^{t+1}p^{t+1}$. Next, one concludes by using Fubini's theorem that, for any $t=1,2,\ldots$ \[ \int f^{t+1}(x)\,dx=\alpha^{t+1}\int g(u)w^{t}(u)\left[\int K_{h}(x-u)\,dx\right]\,du=1.
\] Since the iteration step $t$ in the above is arbitrary, we established that $\alpha^{t}p^{t}=1$ and, therefore, $\int \phi(u)\,du=1$. Next, since the kernel function $K$ is bounded from below, we can easily claim that for every $f\in B$, $f=\int K_{h}(x-u)\phi(u)\,du\ge \inf_{x\in \Delta} K_{h}(x-u) \int \phi(u)\,du=\inf_{x\in \Delta} K_{h}(x-u)>0$ and, therefore, every function in the set $B$ is bounded from below. If the kernel function is Lipschitz continuous on $\Delta$, it is clearly bounded from above by some positive constant $M:\sup_{x\in \Delta} K(x)<M$. Thus, every function $f\in B$ satisfies $f(x)\le M<\infty$. This implies that the set $B$ is uniformly bounded. Also, by the definition of the set $B$, for any two points $x,y\in \Delta$ we have \[ \vert f(x)-f(y)\vert \le \int |K_{h}(x-u)-K_{h}(y-u)\vert \phi(u)\,du\le C_{K}\vert x-y\vert \] where the constant $C_{K}$ depends on the choice of kernel $K$ but not on the function $f$. This establishes the equicontinuity of the set $B$. Therefore, by the Arzela-Ascoli theorem, the set of functions $B$ is a compact subset of $C(\Delta)$ with the $\sup$ metric. Since for every $t=2,3,\ldots$ $f^{t}\in B$, by the Arzela-Ascoli theorem we have a subsequence $f^{t_{k}}\rightarrow f^{*}_{h}$ as $k\rightarrow \infty$ uniformly over $\Delta$. Since for every $t=1,2,\ldots$ $p^{t}$ is bounded between $0$ and $1$, there exists, by the Bolzano-Weierstrass theorem, a subsequence $p^{t_{k}}\rightarrow p^{*}_{h}$ as $k\rightarrow \infty$ in the usual Euclidean metric. Consider a Cartesian product space $\{(p,f)\}$ where every $p\in [0,1]$ and $f\in C(\Delta)$. To define a metric on such a space we introduce an $m$-product of individual metrics for some non-negative $m$. This means that, if the first component space has a metric $d_{1}$ and the second $d_{2}$, the metric on the Cartesian product is $(\vert d_{1}\vert ^{m}+\vert d_{2}\vert ^{m})^{1/m}$ for some non-negative $m$.
For example, the case $m=1$ corresponds to $\vert d_{1}\vert +\vert d_{2}\vert$ and the limiting case $m=\infty$ corresponds to $\max (d_{1},d_{2})$. For such an $m$-product metric, clearly, we have a subsequence $(p^{t_{k}},f^{t_{k}})$ that converges to $(p^{*}_{h},f^{*}_{h})$ in the $m$-product metric. Without loss of generality, assume that the subsequence coincides with the whole sequence $(p^{t},f^{t})$. Of course, such a sequence $(p^{t},f^{t})\in [0,1]\times C(\Delta)$ for any $t$. Now that we know that there is always a converging sequence $(p^{t},f^{t})$, we can proceed further. Since each $f^{t}$ is bounded away from zero and from above, so is the limit function $f^{*}_{h}(x)$ in the limit $(p^{*}_{h},f^{*}_{h})$. This implies that $(p^{t},\log f^{t})\rightarrow (p^{*}_{h},\log f^{*}_{h})$ uniformly in the $m$-product topology as well, and the same is true for $(p^{t},{\mathcal S}\log f^{t})$. Analogously, the uniform convergence follows also in $(p^{t},{\cal N}f^{t})\rightarrow (p^{*}_{h},{\cal N}f^{*}_{h})$; moreover, $(1-p^{t})f_{0}+p^{t}{\cal N}f^{t}\rightarrow (1-p^{*}_{h})f_{0}+p^{*}_{h}{\cal N}f^{*}_{h}$ uniformly in the $m$-product topology. Since the function $\psi(t)=-\log t+t-1\ge 0$, Fatou's lemma implies that \begin{align*} &\int g(x)\psi((1-p^{*}_{h})f_{0}(x)+p^{*}_{h}{\cal N}f^{*}_{h}(x))\,dx\\ &\le \liminf\int g(x)\psi((1-p^{t})f_{0}(x)+p^{t}{\cal N}f^{t}(x))\,dx. \end{align*} The lower semicontinuity of the functional $\ell(p,f)$ follows immediately and with it the conclusion of the Theorem \eqref{theorem:lower_semicontinuity}. \end{proof} \section{An empirical version of our algorithm}\label{emp_sec} In practice, the number of observations $n$ sampled from the target density function $g$ is finite. This necessitates the development of an empirical version of our algorithm that can be implemented in practice.
Many proof details here are similar to proofs of properties of the algorithm we introduced in the previous section. Therefore, we will be brief in our explanations. Denote by $G_{n}(x)$ the empirical cdf of the observations $X_{i}$, $i=1,\ldots,n$, that is, $G_{n}(x)=\frac{1}{n}\sum_{i=1}^nI_{X_i\le x}$. Then, we define a functional \begin{align*}\label{emp.func} &l_n(f,p)=-\int \log ((1-p)f_{0}(x)+p{\cal N}f(x))\,dG_{n}(x)\\ &\equiv -\frac{1}{n}\sum_{i=1}^{n}\log((1-p)f_0(X_{i})+p\mathcal{N}f(X_{i})). \end{align*} The following analogue of the Lemma \eqref{lemma:divergence} can be easily established. \begin{Lemma}\label{empirical:lemma:divergence} For any pdf $\widetilde{f}$ and $\widetilde{p}\in(0,1)$, \begin{align*} &l_n(\widetilde{f},\widetilde{p})-l_n(f,p)\\ &\le -\int\left[(1-w(x))\log\left(\frac{1-\widetilde{p}}{1-p}\right)+ w(x)\log\left(\frac{\widetilde{p}\mathcal{N}\widetilde{f}(x)}{p\mathcal{N}f(x)}\right)\right]dG_n(x), \end{align*} where the weight $w(x)=\frac{p\mathcal{N}f(x)}{(1-p)f_0(x)+p\mathcal{N}f(x)}$. \end{Lemma} The proof is omitted since it is almost exactly the same as the proof of the Lemma \eqref{lemma:divergence}. Now we can define the empirical version of our algorithm. Denote by $(p_{n}^t,f_{n}^t)$ the values of the probability $p$ and the density $f$ at iteration step $t$. Define the weights as $w_{n}^t(x)=\frac{p_{n}^t\mathcal{N}f_{n}^t(x)}{(1-p_{n}^t)f_0(x)+p_{n}^t\mathcal{N}f_{n}^t(x)}$. We use the subscript $n$ everywhere intentionally to stress that these quantities depend on the sample size $n$. For the next step, define $(p_{n}^{t+1},f_{n}^{t+1})$ as \begin{align} &p_{n}^{t+1}=\int w_{n}^{t}(x)dG_n(x)=\frac{1}{n}\sum_{i=1}^n w_{n}^{t}(X_i) \nonumber\\ &f_{n}^{t+1}(x)=\alpha_{n}^{t+1}\int K_h(x-u)w_{n}^t(u)dG_n(u) \nonumber\\ &=\frac{\alpha_{n}^{t+1}}{n}\sum_{i=1}^n K_h(x-X_i)w_{n}^{t}(X_i)\nonumber \end{align} where $\alpha_{n}^{t+1}$ is a normalizing constant such that $f_{n}^{t+1}$ is a valid pdf.
Since $\int K_h(X_i-u)du=1$ for $i=1,\ldots,n$, we get \[ 1=\int f_{n}^{t+1}(u)du=\frac{\alpha_{n}^{t+1}}{n}\sum_{i=1}^nw_{n}^{t}(X_i), \] and hence, \[ \alpha_{n}^{t+1}=\frac{n}{\sum_{i=1}^n w_{n}^{t}(X_i)}. \] The following result establishes the descent property of the empirical version of our algorithm. \begin{theorem}\label{empirical:descent:property} For any $t\ge0$, $\ell_n(p_{n}^{t+1},f_{n}^{t+1})\le\ell_n(p_{n}^{t},f_{n}^{t})$. \end{theorem} The proof of this result follows very closely the proof of the Theorem \eqref{descent:property} and is also omitted for brevity. \begin{Remark} As before, the empirical version of the proposed algorithm is an MM (majorization--minimization) algorithm and not a true EM algorithm. More specifically, we can show that there exists another functional $b^{t}_n(p,f)$ such that, when shifted by a constant, it majorizes $l_{n}(p,f)$. It is easy to check that such a functional is \begin{align}\label{maj_func_emp} & b^{t}_n(\tilde p,\tilde f)=-\int [(1-\omega^{t}_{n}(x))\log (1-\tilde p)+\omega^{t}_{n}(x)\log \tilde p]\,dG_{n}(x)\\ &-\int \omega^{t}_{n}(x)\log {\cal N}\tilde f(x)\,dG_{n}(x)\nonumber. \end{align} Note that in the proof of the Theorem \eqref{empirical:descent:property} it is the sequence of functionals $b^{t}_{n}(\tilde p,\tilde f)$ that is being minimized with respect to $(\tilde p,\tilde f)$, and not the original functional $l_{n}(\tilde p,\tilde f)$. \end{Remark} As before, we can also show that the sequence $\ell_{n}(p_{n}^{t},f_{n}^{t})$ generated by our algorithm not only possesses the descent property but is also bounded from below. \begin{Lemma} \label{lemma:empirical_limit} There exists a finite limit of the sequence $\xi_{n}^{t}=\ell_n(p_{n}^{t},f_{n}^{t})$ as $t\rightarrow \infty$: \[ L_{n}=\lim_{t\rightarrow\infty}\xi_{n}^{t} \] for some $L_{n}\ge 0$. \end{Lemma} The proof is almost exactly the same as the proof of the Lemma \eqref{lemma:functional_positivity} and is omitted in the interest of brevity.
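To make the empirical update step concrete, here is a minimal numerical sketch in Python (all function names are ours; the density $f_{n}^{t}$ is represented by its values on a uniform quadrature grid, the smoother ${\cal N}$ is approximated by a Riemann sum, and the small floor inside the logarithm is an implementation choice, not part of the algorithm):

```python
import numpy as np

def tri_kernel(u, h):
    """Rescaled triangular kernel K_h(u) = (1/h)(1 - |u|/h) for |u| <= h."""
    a = np.abs(u) / h
    return np.where(a < 1.0, (1.0 - a) / h, 0.0)

def smooth_log(f_grid, grid, x_eval, h):
    """Nonlinear smoother N f(x) = exp{ int K_h(x-u) log f(u) du },
    approximated by a Riemann sum on a uniform quadrature grid."""
    dx = grid[1] - grid[0]
    logf = np.log(np.maximum(f_grid, 1e-300))   # floor avoids log(0)
    K = tri_kernel(x_eval[:, None] - grid[None, :], h)
    return np.exp((K @ logf) * dx)

def mm_step(x, p, f_grid, grid, f0, h):
    """One iteration of the empirical algorithm: weights w_n^t, then
    (p_n^{t+1}, f_n^{t+1}); f is represented by its values on `grid`."""
    nf = smooth_log(f_grid, grid, x, h)          # N f_n^t at the sample points
    w = p * nf / ((1.0 - p) * f0(x) + p * nf)    # w_n^t(X_i)
    p_next = w.mean()                            # p_n^{t+1} = (1/n) sum_i w(X_i)
    # f_n^{t+1} = (alpha/n) sum_i K_h(. - X_i) w(X_i) with alpha = n / sum_i w(X_i)
    f_next = tri_kernel(grid[:, None] - x[None, :], h) @ w / w.sum()
    return p_next, f_next
```

Iterating `mm_step` until $|p_{n}^{t+1}-p_{n}^{t}|$ falls below a small threshold implements the stopping rule used in the simulations.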
Finally, one can also show that the sequence $(p^{t}_{n},f_{n}^{t})$ generated by our algorithm converges to $(p^{*}_{n}, f^{*}_{n})$ such that $L_{n}=l_{n}(p^{*}_{n},f^{*}_{n})$. The proof is almost the same as that of the Theorem \eqref{theorem:lower_semicontinuity} and is omitted for conciseness. \section{Simulations and comparison}\label{sim} \begin{figure}[!htb] \centering \subfigure[Fitted mixture density]{ \includegraphics[width=0.47\columnwidth]{Norm_Gamma_g.jpeg} \label{fig:Gamma:mixture}} \subfigure[Fitted unknown component density]{ \includegraphics[width=0.47\columnwidth]{Norm_Gamma_f.jpeg} \label{fig:Gamma:unknown}} \caption{Mixture of Gaussian (6,1) and Gamma (2,1)} \end{figure} \iffalse \begin{figure}[!htb] \centering \subfigure[Fitted mixture density]{ \includegraphics[width=0.47\columnwidth]{Norm_Gamma_beta025g.jpeg} \label{fig:Gamma:mixture:inf}} \subfigure[Fitted unknown component density]{ \includegraphics[width=0.47\columnwidth]{Norm_Gamma_beta025f.jpeg} \label{fig:Gamma:unknown:inf}} \caption{Mixture of Gaussian(6,1) and Gamma(0.5,0.25)} \end{figure} \fi In this section, we will use the notation $I_{[x>0]}$ for the indicator function of the positive half of the real line and $\phi(x)$ for the standard Gaussian density function. For our first experiment, we generate $n$ independent and identically distributed observations from a two-component normal-gamma mixture with the density $g(x)$ defined as $g(x)=(1-p)f_{0}(x)+pf(x).$ Thus, the known component is $f_{0}(x)=\frac{2}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)I_{[x>0]}$ while the unknown component is $Gamma(\alpha,\beta)$, i.e., $f(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}I_{[x>0]}$. Note that we truncate the normal distribution so that it stays on the positive half of the real line. We choose the sample size $n=500$, the probability $p=0.6$, $\mu=6$, $\sigma=1$, $\alpha=2$ and $\beta=1$.
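A sample from the simulation model just described can be drawn as follows (a sketch with illustrative function names; the truncated normal is sampled by rejection, which implicitly normalizes by the truncation probability, negligible for $\mu=6$, $\sigma=1$):

```python
import numpy as np

def sample_mixture(n, p, mu, sigma, alpha, beta, rng):
    """Draw n points from (1-p)*f0 + p*f, where f0 is a normal(mu, sigma)
    truncated to the positive half-line and f is Gamma(alpha, beta), with
    beta a rate parameter, matching the density in the text."""
    labels = rng.random(n) < p            # True -> unknown (gamma) component
    x = np.empty(n)
    # truncated normal via rejection: redraw any non-positive values
    draws = rng.normal(mu, sigma, size=(~labels).sum())
    while (draws <= 0).any():
        bad = draws <= 0
        draws[bad] = rng.normal(mu, sigma, size=bad.sum())
    x[~labels] = draws
    x[labels] = rng.gamma(shape=alpha, scale=1.0 / beta, size=labels.sum())
    return x

x = sample_mixture(500, p=0.6, mu=6.0, sigma=1.0, alpha=2.0, beta=1.0,
                   rng=np.random.default_rng(0))
```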
The initial weight is $p_0=0.2$ and the initial assumption for the unknown component distribution is $Gamma(4,2)$. The rescaled triangular function $K_h(x)=\frac{1}{h}\left(1-\frac{|x|}{h}\right)I(|x|\le h)$ is used as the kernel function. We use a fixed bandwidth throughout the sequence of iterations, and this fixed bandwidth is selected according to Silverman's classical rule of thumb, which we describe here briefly for completeness; for more details, see \cite{silverman1986density}. Let SD and IQR be the standard deviation and interquartile range of the data, respectively. Then, the bandwidth is determined as $h=0.9\min\left\{SD, \frac{IQR}{1.34}\right\}n^{-1/5}.$ We use the absolute difference $|p_{n}^{t+1}-p_{n}^{t}|$ as a stopping criterion; at every iteration step, we check if this difference is below a small threshold value $d$ that depends on the required precision. If it is, the algorithm is stopped. An analogous rule has been described for classical parametric mixtures in \cite{mclachlan2004finite}. In our setting, we use the value $d=10^{-5}$. The computation ends after $259$ iterations, with an estimate $\hat{p}=0.6661$; the Figure \ref{fig:Gamma:mixture} shows the true and estimated mixture density function $g(x)$ while the Figure \ref{fig:Gamma:unknown} shows both the true and estimated second component density $f$. Both figures show a histogram of the observed target distribution $g(x)$ in the background. Both the fitted mixture density $\hat g(x)$ and the fitted unknown component density function $\hat f(x)$ are quite close to their corresponding true density functions everywhere. We also analyze the performance of our algorithm in terms of the mean squared error (MSE) of the estimated weight $\hat p$ and the mean integrated squared error (MISE) of $\hat f$. We will use two models for this purpose.
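For completeness, Silverman's rule as written above amounts to the following (the function name is ours):

```python
import numpy as np

def silverman_bandwidth(x):
    """h = 0.9 * min(SD, IQR / 1.34) * n**(-1/5) (Silverman, 1986)."""
    x = np.asarray(x, dtype=float)
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sd, iqr / 1.34) * len(x) ** (-1.0 / 5.0)
```

The stopping rule is then simply a check of the form `abs(p_new - p_old) < 1e-5` at every iteration.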
The first model is the normal-exponential model, where the (known) normal component is the same as before while the second (unknown) component is an exponential density function $f(x)=\lambda e^{-\lambda x}I_{[x>0]}$ with $\lambda=0.5$; the value of $p$ used is $p=0.6$. The second model is the same normal-gamma model as before. For each of the two models, we plot the MSE of $\hat p$ and the MISE of $\hat f$ against the true $p$ for sample sizes $n=500$ and $n=1000$. Here, we use $30$ replications. The algorithm appears to show rather good performance even for the sample size $n=500$. Note that the MISE of the unknown component $f$ seems to decrease with the increase in $p$. A possible reason for this is that the larger $p$ is, the more likely it is that we are sampling from the unknown component, and so the number of observations that are actually generated by $f$ grows; this seems to explain the better precision in the estimation of $f$ when $p$ is large. \begin{figure}[!htb] \centering \subfigure[Normal-Exponential mixture]{ \includegraphics[width=0.48\columnwidth]{MISE_Normal_Exp.jpeg} \label{fig:MSE:Exp}} \subfigure[Normal-Gamma mixture]{ \includegraphics[width=0.48\columnwidth]{MISE_Normal_Gamma.jpeg} \label{fig:MSE:Gamma}} \caption{MISE of $\hat f$ and MSE of $\hat p$} \end{figure} Another important issue in practice is, of course, the bandwidth selection. Earlier, we simply used a fixed bandwidth selected using Silverman's classical rule of thumb. In general, when the unknown density is not likely to be normal, the use of Silverman's rule may be a somewhat rough approach. Moreover, in an iterative algorithm, every successive step of iteration brings a refined estimate of the unknown density component; therefore, it seems a good idea to put this knowledge to use. Such an idea was suggested earlier in \cite{chauveau2015semi}. Here we suggest using a version of the $K$-fold cross-validation method specifically adapted for use in an iterative algorithm.
First, let us suppose we have a sample $X_{1},\ldots,X_{n}$ of size $n$; we begin by randomly partitioning it into $K$ approximately equal subsamples. For ease of notation, we denote each of these subsamples $X^{k}$, $k=1,\ldots,K$. Randomly selecting one of the $K$ subsamples, it is possible to treat the remaining $K-1$ subsamples as a training dataset and the selected subsample as the validation dataset. We also need to select a grid of possible bandwidths. To do so, we first compute the preliminary bandwidth $h_{s}$ according to Silverman's rule of thumb; the bandwidth grid is defined as lying in an interval $[h_s -l,h_s +l]$, where $2l$ is the length of the range of bandwidths we plan to consider. Within this interval, each element of the bandwidth grid is computed as $h_{i}=h_s\pm \frac{i}{M}l$, $i=0,1,\ldots,M$ for some positive integer $M$. At this point, we have to decide whether a fully iterative bandwidth selection procedure is necessary. It is worth noting that a fully iterative bandwidth selection algorithm leads to the situation where the bandwidth changes at each step of iteration. This, in turn, implies that the monotonicity property of our algorithm derived in Theorem \eqref{empirical:descent:property} is no longer true. To reconcile these two paradigms, we implement the following scheme. As in earlier simulations, we use the triangular smoothing kernel. First, we iterate a certain number of times $T$ to obtain a reasonably stable estimate of the unknown $f$; if we do this using the full sample, we denote the resulting estimate $\hat f_{nh}^{T}(x)=\frac{\alpha_{n}^{T}}{n}\sum_{i=1}^n K_h(x-X_i)w_{n}^{T-1}(X_i)$. Integrating the resulting expression, we can obtain the squared $L_{2}$-norm of $\hat f_{nh}^{T}(x)$ as \[ ||\hat f_{nh}^{T}||^{2}_{2}=\int\left[\frac{\alpha_{n}^{T}}{n}\sum_{i=1}^{n}w_{n}^{T-1}(X_i) K_h(x-X_i)\right]^2\,dx.
\] For each of the $K$ subsamples of the original sample, we can also define a ``leave-$k$th-subsample-out'' estimator of the unknown component $f$ as $\hat f_{nh,-X_k}^{T}(x)$, $k=1,\ldots,K$ obtained after $T$ steps of iteration. At this point, we can define the CV optimization criterion (see, for example, \cite{eggermont2001maximum}) as \[ CV(h)=||\hat f_{nh}^{T}||^{2}_{2}-\frac{2}{n}\sum_{k=1}^{K}\sum_{x_{i}\in X_k}{\hat f}^{T}_{nh,-X_k}(x_{i}). \] Finally, we select \[ h^{*}=\arg\min_{h}\, CV(h) \] as a proper bandwidth. Now, we fix the bandwidth $h^{*}$ and keep it constant beginning with the iteration step $T+1$ until the convergence criterion is achieved and the process is stopped. An example of a cross-validation curve of $CV(h)$ is given in Figure \eqref{fig:BandwidthCV}. Here, we took a sample of size $500$ from a mixture model with a known component of $N(6,1)$, an unknown component of $Gamma(2,1)$, and a mixing proportion $p=0.5$; we also chose $K=50$, $l=0.4$, $M=10$, and $T=5$. We tested the possibility of using a larger number of iterations before selecting the optimal bandwidth $h$; however, already $T=10$ results in the selection of $h^{*}$ close to zero. We believe that the likeliest reason for this is overfitting of the estimate of the unknown component $f$. The minimum of $CV(h)$ is achieved at around $h=0.68$. Using this bandwidth and running the algorithm until the stopping criterion is satisfied gives us the estimated mixing proportion $\hat p=0.497$. As a side remark, in this particular case Silverman's rule of thumb gives a very similar estimated bandwidth $\hat h=0.72$. \begin{figure}[!htb] \centering \includegraphics[width=0.6\columnwidth]{CV_curve.jpg} \caption{A plot of $CV(h)$ used for bandwidth selection} \label{fig:BandwidthCV} \end{figure} As a last step, we want to compare our method with the symmetrization method of \cite{bordes_delmas}.
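Before turning to that comparison, we note that the bandwidth-selection criterion just described can be sketched numerically as follows (a simplified illustration: the weights $w_{n}^{T-1}(X_i)$ from the $T$ preliminary iterations are taken as given, the $L_2$ norm is computed by quadrature on a grid, and all function names are ours):

```python
import numpy as np

def tri_kernel(u, h):
    """Rescaled triangular kernel K_h(u) = (1/h)(1 - |u|/h) for |u| <= h."""
    a = np.abs(u) / h
    return np.where(a < 1.0, (1.0 - a) / h, 0.0)

def cv_criterion(x, folds, h, weights, grid):
    """CV(h) = ||f_hat||_2^2 - (2/n) sum_k sum_{X_i in fold k} f_hat_{-k}(X_i).

    `weights` plays the role of w_n^{T-1}(X_i); `folds` is a list of
    index arrays partitioning 0..n-1."""
    n = len(x)
    dx = grid[1] - grid[0]

    def fhat(idx):                        # weighted KDE built from x[idx]
        wk = weights[idx]
        return tri_kernel(grid[:, None] - x[idx][None, :], h) @ wk / wk.sum()

    norm2 = np.sum(fhat(np.arange(n)) ** 2) * dx   # ||f_hat||_2^2 by quadrature
    cross = 0.0
    for idx in folds:
        f_minus = fhat(np.setdiff1d(np.arange(n), idx))
        # evaluate the leave-fold-out estimate at the held-out points
        cross += np.interp(x[idx], grid, f_minus).sum()
    return norm2 - 2.0 * cross / n

def bandwidth_grid(h_s, l, M):
    """Grid h_i = h_s +/- (i/M) l, i = 0,...,M, around the pilot value h_s."""
    return h_s + l * np.arange(-M, M + 1) / M
```

The selected bandwidth is then the grid point minimizing the criterion, e.g. `min(bandwidth_grid(h_s, l, M), key=lambda h: cv_criterion(x, folds, h, w, grid))`.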
To do this, we will use a normal-normal model since the method of \cite{bordes_delmas} is only applicable when an unknown component belongs to a location family. Although such a model does not satisfy the sufficient criterion of the Lemma \eqref{identifiability}, it satisfies the necessary and sufficient identifiability criterion given in Lemma $4$ of \citet{patra2015estimation} (see also Remark $3$ from the Supplement to \citet{patra2015estimation} for an even clearer statement about identifiability for normal-normal models in our context); therefore, we can use it for testing purposes. The known component has a Gaussian distribution with mean $0$ and standard deviation $1$, the unknown has mean $6$ and standard deviation $1$, and we also consider two possible choices of the mixture weight, $p=0.3$ and $p=0.5$. The results for two different sample sizes, $n=500$ and $n=1000$, and $200$ replications are given below in Tables \ref{table:symmetrization} and \ref{table:ours}. Each estimate is accompanied by its standard deviation in parentheses. Note that the proper expectation here is that our method should perform similarly to the method of \cite{bordes_delmas} but not beat it, for several reasons. First, the mean of the unknown Gaussian distribution is directly estimated as a parameter in the symmetrization method, while it is the nonparametric probability density function that is directly estimated by our method. Thus, in order to calculate the mean of the second component, we have to take an extra step when using our method and employ numerical integration. This is effectively equivalent to estimating a functional of an unknown (and so estimated beforehand) density function; therefore, somewhat lower precision of our method when estimating the mean, compared to the symmetrization method, where the mean is just a Euclidean parameter, should be expected. Second, when using the symmetrization method, we followed an acceptance/rejection procedure exactly as in \cite{bordes_delmas}.
That procedure amounts to dropping certain ``bad'' samples, whereas our method keeps all the samples. Third, the method of \cite{bordes_delmas}, when estimating an unknown component, uses the fact that this component belongs to a location family -- something that our method, more general in its assumptions, does not do. Keeping all of the above in mind, we can see from Tables \eqref{table:symmetrization} and \eqref{table:ours} that both methods produce comparable results, especially when the sample size is $n=1000$. Also, as explained above, it does turn out that our method is practically as good as the method of \cite{bordes_delmas} when it comes to estimating the probability $p$ and slightly worse when estimating the mean of the unknown component. However, even when estimating the mean of the unknown component, an increase in sample size from $500$ to $1000$ reduces the difference in performance substantially. \begin{table} \caption{Mean(SD) of estimated $p/\mu$ obtained by the symmetrization method} \centering \begin{tabular}{|c|c|c|} \hline \hline $K=200$ & $n=500$ & $n=1000$ \\ \hline $p=0.3/\mu = 6$ & 0.302(0.022)/5.989(0.095) & 0.302(0.016)/5.998(0.064) \\ \hline $p=0.5/\mu = 6$ & 0.502(0.024)/5.999(0.067) & 0.502(0.017)/6.003(0.050) \\ \hline \end{tabular} \label{table:symmetrization} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline \hline $K=200$ & $n=500$ & $n=1000$ \\ \hline $p=0.3/\mu = 6$ & 0.315(0.024)/5.772(0.238) & 0.312(0.018)/5.818(0.178) \\ \hline $p=0.5/\mu = 6$ & 0.516(0.026)/5.855(0.155) & 0.512(0.018)/5.883(0.117) \\ \hline \end{tabular} \caption{Mean(SD) of estimated $p/\mu$ obtained by our algorithm} \label{table:ours} \end{table} \section{A real data example}\label{realdata} The acidification of lakes in parts of North America and Europe is a serious concern.
In $1983$, the US Environmental Protection Agency (EPA) began the EPA National Surface Water Survey (NSWS) to study acidification as well as other characteristics of US lakes. The first stage of the NSWS was the Eastern Lake Survey, focusing on particular regions in the Midwestern and Eastern US. Variables measured include acid neutralizing capacity (ANC), pH, dissolved organic carbon, and concentrations of various chemicals such as iron and calcium. The sampled lakes were selected systematically from an ordered list of all lakes appearing on $1:250,000$ scale US Geological Survey topographic maps. Only surface lakes with a surface area of at least $4$ hectares were chosen. Out of all these variables, ANC is often the one of greatest interest. It describes the capability of the lake to neutralize acid; more specifically, low (negative) values of ANC can lead to a loss of biological resources. We use a dataset containing, among others, ANC data for a group of $155$ lakes in north-central Wisconsin. This dataset was first published in Table $1$ of \cite{crawford1992modeling} and analyzed in the same manuscript. \cite{crawford1992modeling} argue that this dataset is rather heterogeneous due to the presence of lakes that are very different in their ANC within the same sample. In particular, seepage lakes, which have neither inlets nor outlets, tend to be very low in ANC, whereas drainage lakes, which include flow paths into and out of the lake, tend to be higher in ANC. Based on this heterogeneity, \cite{crawford1992modeling} suggested using an empirical mixture of two lognormal densities to fit this dataset. \cite{crawford1994application} also considered the same dataset; they suggested using a modification of the Laplace method to estimate posterior component density functions in the Bayesian analysis of a finite lognormal mixture.
Note that \cite{crawford1994application} viewed the number of components in the mixture model as a parameter to be estimated; their analysis suggests a mixture of either two or three components. The sample histogram for the ANC dataset is given in Figure 1 of \cite{crawford1994application}. The histogram is given for a log transformation of the original data, $\log(ANC+50)$. \cite{crawford1994application} selected this transformation to avoid numerical problems arising from maximization involving a truncation; the choice of $50$ as an additive constant is explained in more detail in \cite{crawford1994application}. The empirical distribution is clearly bimodal; moreover, it exhibits a heavy upper tail. This is suggestive of a two-component mixture where the first component may be Gaussian while the other is defined on the positive half of the real line and has a heavy upper tail. We estimate a two-component density mixture model for this empirical distribution using two approaches. First, we follow the Bayesian approach of \cite{crawford1994application} using the prior settings of Table $4$ in that manuscript. Switching to our framework next, we assume that the normal component is the known one while the other one is unknown. For the known normal component, we assume the mean $\mu_1 = 4.375$ and standard deviation $\sigma_1 = 0.416$; these are the estimated values obtained in \cite{crawford1994application} under the assumption of a two-component Gaussian mixture for the original (not log-transformed) data and given in their Table $4$. Next, we apply our algorithm in order to obtain an estimate of the mixture proportion and a non-parametric estimate of the unknown component to compare with the respective estimates in \cite{crawford1994application}. We set the initial value of the mixture proportion as $p^0=0.3$ and the initial value of the unknown component as a normal distribution with mean $\mu_2^0=8$ and standard deviation $\sigma_2^0=1$. The iterations stop when $|p^{t+1}-p^t|<10^{-4}$.
After $171$ iterations, the algorithm terminates with an estimate of the mixture proportion $\hat{p}=0.4875$; for comparison purposes, \cite{crawford1994application} produces an estimate $\hat{p}_{Bayesian}=1-0.533=0.4667$. The Figure \eqref{fig:fitmixture} shows the resulting density mixtures fitted using the method of \cite{crawford1994application} and our method against the background histogram of the log-transformed data. The Figure \eqref{fig:fitcomponent} illustrates the fitted first component of the mixture according to the method of \cite{crawford1994application} as well as the second component fitted according to both methods. Once again, the histogram of the log-transformed data is used in the background. \begin{figure}[!htb] \centering \includegraphics[width=0.6\columnwidth]{real_fitted_mixture.jpeg} \caption{Fitted mixture densities} \label{fig:fitmixture} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.6\columnwidth]{real_fitted_component.jpeg} \caption{Fitted component densities} \label{fig:fitcomponent} \end{figure} Note that the mixture density curves based on both methods are rather similar in Figure \eqref{fig:fitmixture}. One notable difference is that the method of \cite{crawford1994application} suggests a mixture with a peak at a transformed ANC value of about $6.4$, whereas our method produces a curve that seems to follow the histogram more closely in that location. The Figure \eqref{fig:fitcomponent} also seems to show that our method describes the data more faithfully than that of \cite{crawford1994application}. Indeed, the second parametric component fitted by the method of \cite{crawford1994application} is unable to reproduce the first peak around $4.2$ at all. By doing so, the method of \cite{crawford1994application} suggests that the first peak is there only due to the first component. Our method, on the contrary, suggests that the first peak is at least partly due to the second component as well.
Note that \cite{crawford1994application} discusses the possibility of a three-component mixture for this dataset; the results of our analysis suggest a possible presence of a third component as well, based on the bimodal pattern of our fitted second-component density curve. Finally, note that the method of \cite{crawford1994application} produces an estimated second component that implies a much higher second peak than the data really suggests, whereas our method gives a more realistic estimate. \section{Discussion}\label{Discussion} The method of estimating two-component semiparametric mixtures with a known component introduced in this manuscript relies on the idea of maximizing the smoothed penalized likelihood of the available data. Another possible view of this problem is as a Tikhonov-type regularization problem (sometimes also called a {\it variational regularization scheme} in the optimization literature). The resulting algorithm is an MM algorithm that possesses the descent property with respect to the smoothed penalized likelihood functional. Moreover, we also show that the resulting algorithm converges under mild restrictions on the kernel function used to construct the nonlinear smoother ${\cal N}$. The algorithm also shows reasonably good numerical properties, both in simulations and when applied to a real dataset. If necessary, a number of acceleration techniques can be considered in the case of large datasets; for more details, see e.g. \cite{lange2000optimization}. As opposed to the symmetrization method of \cite{bordes_delmas}, our algorithm is also applicable to situations where the unknown component does not belong to any location family; thus, our method can be viewed as the more universal of the two.
Comparing our method directly to that of \cite{bordes_delmas} and \citet{patra2015estimation} is a little difficult since our method is based, essentially, on a perturbation of the observed data, the amount of which is controlled by the non-zero bandwidth $h$; thus, we arrive at what is apparently a solution different from that suggested in both \cite{bordes_delmas} and \citet{patra2015estimation}. There are a number of outstanding questions remaining concerning the model \eqref{model2} that will have to be investigated as a part of our future research. First, the constraint that an unknown density is defined on a compact space is, of course, convenient when proving convergence of the algorithm-generated sequence; however, it would be desirable to lift it later. We believe that, at the expense of some additional technical complications, it is possible to prove all of our results when the unknown density function $f(x)$ is defined on the entire real line but has sufficiently thin tails. Second, an area that we have not touched in this manuscript is the convergence of the resulting solutions. For example, a solution obtained by running an empirical version of our algorithm $(p_{n}^{*},f_{n}^{*})$ would be expected to converge to a solution $(p^{*},f^{*})$ obtained by the use of the original algorithm in the integral form. Analogously, as $h\rightarrow 0$, it is natural to expect that $(p^{*},f^{*})$ would converge to $(p,f)$ such that the identity $(1-p)f_{0}(x)+pf(x)=g(x)$ is satisfied. We expect that some recent results in optimization theory concerning Tikhonov-type regularization methods with non-metric fitting functionals (see, for example, \cite{flemming2010theory}) will be helpful in this undertaking. Our research in this area is ongoing. \bibliographystyle{Chicago}%
\section{Introduction} {\tolerance=1200 The associated production of vector bosons and jets (V+jets) in hadronic collisions is a large background source in measurements of several standard model (SM) processes, Higgs boson studies, and many searches for physics beyond the SM. Its description constitutes an important benchmark for perturbative quantum chromodynamics (pQCD) predictions. Differential cross sections as a function of kinematic observables characterizing V+jets topologies are sensitive to the contributions from both the hard scattering process and the associated soft QCD radiation, as well as to the parton distribution functions (PDFs). Among the V+jets processes, the case in which a $\PZ/\gamma^{\ast}$ boson is produced in association with $\PQb$ quarks, $\Pp\Pp\to \PZ +({\geq}1\PQb)$, hereafter denoted as \ensuremath{\PZ(1\PQb)}\xspace, is particularly interesting. Antiquarks are also assumed in the notation, and the $\PZ/\gamma^{\ast}$ interference contribution is considered to be part of the process. Within the SM, the \ensuremath{\PZ(1\PQb)}\xspace final state is the dominant background for studies of the associated production of Higgs and \PZ bosons, in which the Higgs boson decays into a \bbbar pair~\cite{Chatrchyan:2013zna}. Many physics scenarios beyond the SM predict final states with $\PQb$ quarks and \PZ bosons: new generations of heavy quarks ($\PQb^{\prime}, \PQt^{\prime}$) decaying into \ensuremath{\PZ(1\PQb)}\xspace~\cite{Holdom:2009rf}, supersymmetric Higgs bosons produced in association with $\PQb$ quarks~\cite{Hall:2011aa}, and extended SM scenarios with additional SU(2) doublets with enhanced $\PZ\bbbar$ coupling~\cite{Choudhury:2001hs}. The study of the associated production of \PZ bosons and $\PQb$ quark jets may also provide information useful in describing an analogous process where a $\PW$ boson is produced, which is more difficult to measure because of higher background contamination. 
\par} This paper presents measurements of associated production of a \PZ boson and b quark jets using proton-proton collision data at 8\TeV collected with the CMS detector, corresponding to an integrated luminosity of 19.8\fbinv. The \PZ boson is reconstructed through its leptonic decay into an electron or muon pair, while the presence of $\PQb$ quarks is inferred from the characteristics of jets (denoted as $\PQb$ jets) that originate from their hadronization products and subsequent decays. In order to characterize \ensuremath{\PZ(1\PQb)}\xspace production, fiducial differential cross sections are measured as a function of five kinematic observables: the transverse momentum \pt and pseudorapidity $\eta$ of the highest-\pt $\PQb$ jet, the \PZ boson \pt, the scalar sum of the transverse momenta of all jets regardless of the flavour of the parton producing them (\HT), and the azimuthal angular difference between the direction of the \PZ boson and the highest-\pt $\PQb$ jet ($\Delta \phi_{\PZ\PQb}$). Ratios of the differential cross sections for \ensuremath{\PZ(1\PQb)}\xspace and $\PZ$+jets production, inclusive in jet flavour, are also measured as a function of these five observables. The cancellation of several systematic uncertainties in the cross section ratio allows an even more precise comparison with theory than the differential cross sections themselves. Events with at least two $\PQb$ jets, henceforth \ensuremath{\PZ(2\PQb)}\xspace, contribute as background sources to other SM and beyond-SM processes. 
The production dynamics of this kind of event are studied through the measurement of the fiducial differential cross section as a function of observables characterizing the kinematic properties of the dijet system formed by the leading and subleading (in \pt) $\PQb$ jets: the \pt of these two jets; the \PZ boson \pt; the invariant masses of the $\PQb\PQb$ and $\PZ\PQb\PQb$ systems ($M_{{\PQb\PQb}}$ and $M_{\PZ\PQb\PQb}$ respectively); the angle $\Delta\phi_{{\PQb\PQb}}$ between the two $\PQb$ jets in the plane transverse to the beam axis and their separation in the $\eta$-$\phi$ plane ($\Delta R_{{\PQb\PQb}}$); the distance in the $\eta$-$\phi$ plane between the \PZ boson and the closer $\PQb$ jet ($\Delta R^{\text{min}}_{\PZ\PQb}$); and the asymmetry in the distances in the $\eta$-$\phi$ plane between the \PZ boson and the closer versus farther $\PQb$ jets ($A_{\PZ\PQb\PQb}$). Previously, the cross section for the associated production of \PZ bosons and $\PQb$ jets was measured in proton-antiproton collisions by the CDF~\cite{Aaltonen:2008mt} and D0~\cite{Abazov:2010ix} Collaborations at the Fermilab Tevatron and in proton-proton collisions at a centre-of-mass energy of 7\TeV by the ATLAS~\cite{Aad:2014dvb} and CMS~\cite{Chatrchyan:2013zja} Collaborations at the CERN LHC. The CMS Collaboration also studied \ensuremath{\PZ(2\PQb)}\xspace production by explicitly reconstructing $\PQb$ hadron decays~\cite{Chatrchyan:2013zjb}, in order to explore the region where $\PQb$ quarks are emitted in an almost collinear topology. Previous measurements of the ratio of the \ensuremath{\PZ(1\PQb)}\xspace to the $\PZ$+jets inclusive cross section were published by the D0 Collaboration~\cite{Abazov:2013uza}. The paper is organized as follows: Section 2 is dedicated to the description of the CMS apparatus and Section 3 to the data and simulated samples used in the analysis. Section 4 discusses the lepton, jet, and $\PQb$ jet reconstruction and the event selection. 
Section 5 discusses background estimation, while Section 6 is dedicated to the description of the unfolding procedure to correct data for detector effects. Section 7 presents a discussion of the systematic uncertainties. In Section 8 the measured differential cross sections and the corresponding ratios are presented, together with a discussion of the comparison with theoretical predictions. Finally, the results are summarized in Section 9. \section{The CMS detector} A detailed description of the CMS detector, together with the definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter. The field volume houses a silicon tracker, a crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections. The magnet flux-return yoke is instrumented with muon detectors. The silicon tracker measures charged particles within the pseudorapidity range $\abs{\eta} < 2.5$. It consists of 1440 silicon pixel and 15\,148 silicon strip detector modules and is located in the 3.8\unit{T} field of the superconducting solenoid. For nonisolated particles of $1 < \pt < 10\GeV$ and $\abs{\eta} < 1.4$, the track resolutions are typically 1.5\% in \pt and 25--90 (45--150)\mum in the transverse (longitudinal) impact parameter \cite{TRK-11-001}. The electron momentum is estimated by combining the energy measurement in the ECAL with the momentum measurement in the tracker. The momentum resolution for electrons with $\pt \approx 45\GeV$ from $\PZ \to \Pe \Pe$ decays ranges from 1.7\% for nonshowering electrons in the barrel region to 4.5\% for showering electrons in the endcaps~\cite{Khachatryan:2015hwa}. 
Muons are measured in the pseudorapidity range $\abs{\eta} < 2.4$, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Matching muons to tracks measured in the silicon tracker results in a relative transverse momentum resolution for muons with $20 <\pt < 100\GeV$ of 1.3--2.0\% in the barrel and better than 6\% in the endcaps. The \pt resolution in the barrel is better than 10\% for muons with \pt up to 1\TeV~\cite{Chatrchyan:2012xi}. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors. The CMS detector uses a two-level trigger system. The first level of the system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4\mus. The high-level trigger processor farm further decreases the event rate from around 100\unit{kHz} to less than 1\unit{kHz} before data storage. \section{Event simulation} \label{sec:evsim} {\tolerance=800 The associated production of a \PZ boson and jets is experimentally reconstructed as two opposite-sign same-flavour electrons or muons accompanied by jets and can be mimicked by various background sources: \ttbar events, dibosons ($\PW\PW$, $\PW\PZ$, $\PZ\PZ$) and $\PW$ bosons produced in association with jets, single top quark events, as well as $\PZ$+jets events in which the \PZ boson decays into $\Pgt^+\Pgt^-$. Diboson events with a leptonic \PZ boson decay and jets produced in the hadronic decay of the other vector boson are not considered as part of the signal. Samples of simulated events are used to model both the signal and the background processes. 
The \MADGRAPH 5.1.3.30~\cite{Alwall:2011uj} event generator is used to simulate $\PZ$+jets (including jets from $\PQb$ quarks), \PW+jets, and \ttbar events; this generator implements a leading-order (LO) matrix element calculation with up to four (three) additional partons in the final state for V+jets (\ttbar) events, using the CTEQ6L1 PDF set~\cite{Pumplin:2002vw}, which is based on the five flavour scheme (5FS). A detailed discussion is given in Section~\ref{sec:theory}. The parton-level events are interfaced with \PYTHIA version 6.424~\cite{Sjostrand:2006za} for parton showering, hadronization, and description of the multiple-parton interactions (MPIs). The \PYTHIAS Z2* tune, which is based on the CTEQ6L1 PDF set, is used~\cite{Chatrchyan:2013gfi}. The matrix element and parton shower calculations are matched using the \kt-MLM algorithm~\cite{Alwall:2007fs}. The cross section inclusive in jet multiplicity is rescaled to its next-to-next-to-leading-order (NNLO) prediction, computed with \FEWZ 3.1~\cite{Gavin:2010az,Li:2012wna} for the $\Z$+jets and \PW+jets processes, and with the calculation of reference~\cite{Czakon:2013goa} for the \ttbar process. To study systematic uncertainties, signal events are also generated using \AMCATNLO~\cite{Alwall:2014hca} version 2.2.1, with next-to-leading-order (NLO) matrix elements for zero, one, and two additional partons merged with the {\sc FxFx} algorithm~\cite{Frederix:2012ps}, interfaced with \PYTHIA version 8.205~\cite{Sjostrand:2007gs} for showering and hadronization. In this case the NNPDF 3.0 NLO PDF set~\cite{Ball:2014uwa} is used. Depending on the flavours included in the matrix element calculation of the event or produced in the parton shower through gluon splitting, the inclusive $\PZ$+jets sample can be divided into $\PZ$+b quark, c quark, and light-flavour (u, d, s quark and gluon) jet subsamples. 
As explained in Section~\ref{sec:unfolding}, the jet flavour identification is based on the particle content of the final state. \par} Diboson events are simulated with \PYTHIAS, and the inclusive cross section rescaled to the NLO prediction provided by \MCFM~\cite{Campbell:2011bn}. The single top quark contribution is evaluated using \POWHEGBOX version 1.0~\cite{Frixione:2007vw,Nason:2004rx,Alioli:2010xd,Alioli:2009je,Re:2010bp} interfaced with \PYTHIAS for parton showering, hadronization, and MPI description. The contribution of multijet events is evaluated using \PYTHIAS generated events and found to be negligible. Generated events are processed with a simulation of the CMS detector based on the \GEANTfour toolkit~\cite{Agostinelli:2002hh}. Signals induced by additional $\Pp\Pp$ interactions in the same or adjacent bunch crossings, referred to as pileup, are simulated using events generated with \PYTHIAS. The pileup distribution in simulation is adjusted in order to reproduce the collision rates observed in data. During the 2012 data taking, the average pileup rate was about 21 interactions per bunch crossing. \section{Event selection} The analysis is based on an online trigger selection requiring events to contain a pair of electron or muon candidates with asymmetric minimum thresholds on their transverse momenta. These threshold settings depended on the instantaneous luminosity and reached maximum values of 17\GeV for the leading lepton and 8\GeV for the subleading one. Events are required to contain a \PZ boson, reconstructed through its decay into an electron or muon pair, produced in association with at least one or at least two hadronic jets. For the \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace event selections the jets are also required to be identified as originating from the hadronization of a $\PQb$ quark. 
All the measured particles are reconstructed using the particle-flow (PF) algorithm~\cite{CMS-PAS-PFT-09-001,CMS-PAS-PFT-10-001}. The particle-flow event algorithm reconstructs and identifies each individual particle with an optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained directly from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is evaluated from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The transverse momentum of the muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of the momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response functions of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies. The reconstructed leptons selected as candidate decay products of the \PZ boson must match those that triggered the event and must be associated with the primary vertex of the event, defined as the reconstructed vertex with the largest sum of $\pt^2$ of its constituent tracks. Reconstructed electrons must satisfy a set of selection criteria designed to minimize misidentification at a desired efficiency level~\cite{Khachatryan:2015hwa}; the discriminating observables include the measured shower shape in the ECAL and the spatial matching between the electromagnetic deposit in the calorimeter and the reconstructed track associated with it. Additional requirements on electron tracks are used to reject products of photon conversions. 
Electron isolation criteria exploit the full PF-based event reconstruction, using particles within a cone around the electron direction with radius $\DR = \sqrt{\smash[b]{(\Delta\phi)^2 + (\Delta\eta)^2}} = 0.3$. The isolation requirement is defined by $I_{\text {rel}}=(I_{\text {charged}} + I_{\text {photon}}+I_{\text {neutral}})/\pt^{\Pe} < 0.15$, where $I_{\text {charged}}$ is the scalar \pt sum of all the charged hadrons, $I_{\text {photon}}$ is the scalar \pt sum of all the photons, and $I_{\text {neutral}}$ is the scalar \pt sum of all the neutral hadrons in the cone of interest. The notation $\pt^{\Pe}$ refers to the transverse momentum of the reconstructed electron. Pileup can add extra particles, which affect the isolation variable. Accordingly, only charged particles originating from the reconstructed primary vertex are used in the calculation of $I_{\text {charged}}$. The photon and neutral hadronic contributions to the isolation variable coming from pileup are subtracted using the jet area approach~\cite{Cacciari:2007fd}. Electrons must have $\pt^{\Pe} > 20\GeV$ and be reconstructed within the pseudorapidity ranges $\abs{\eta}<1.44$ and $1.57<\abs{\eta}<2.4$, which exclude the barrel-endcap transition region. Muon identification criteria are based on the fit quality for tracks measured in the tracker and the muon detector~\cite{Chatrchyan:2012xi}. Further selection criteria are added in order to reject muons from cosmic rays. Muon isolation is computed using all particles reconstructed by the PF algorithm within a cone of radius $\DR = 0.4$ around the muon direction, requiring $I_{\text {rel}}=(I_{\text {charged}}+I_{\text {photon}}+I_{\text {neutral}})/\pt^{\Pgm} < 0.2$. Muons must have $\pt^{\Pgm} > 20\GeV$ and $\abs{\eta}<2.4$. As in the case of electrons, charged particles not originating from the primary vertex are excluded from the isolation calculation.
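As a purely illustrative sketch (this is not CMS software; the particle-list format with `pt`, `eta`, `phi`, and `kind` fields is a hypothetical stand-in for the PF candidates), the relative-isolation calculation defined above can be written as:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Delta R = sqrt((Delta eta)^2 + (Delta phi)^2), with Delta phi wrapped into [0, pi]
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, particles, cone=0.3):
    """Scalar pT sums of charged hadrons, photons, and neutral hadrons
    inside the cone, divided by the lepton pT (pileup corrections omitted)."""
    i_charged = i_photon = i_neutral = 0.0
    for p in particles:
        if delta_r(lepton['eta'], lepton['phi'], p['eta'], p['phi']) >= cone:
            continue
        if p['kind'] == 'charged_hadron':
            i_charged += p['pt']
        elif p['kind'] == 'photon':
            i_photon += p['pt']
        elif p['kind'] == 'neutral_hadron':
            i_neutral += p['pt']
    return (i_charged + i_photon + i_neutral) / lepton['pt']
```

An electron would then be kept if the result with a cone of 0.3 is below 0.15, and a muon if the result with a cone of 0.4 is below 0.2; the pileup subtraction described in the text is deliberately left out of this sketch.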
The pileup contribution to $I_{\text {photon}}$ and $I_{\text{neutral}}$ is estimated as half of the corresponding charged hadronic component and is subtracted in the definition of the $I_{\text {rel}}$ variable. The efficiencies for lepton trigger, reconstruction, identification, and isolation are measured with the ``tag-and-probe'' technique~\cite{Khachatryan:2010xn} as a function of the lepton $\eta$ and \pt. A sample of events containing a \PZ boson decaying into $\Pe^+\Pe^-$ or $\Pgm^+\Pgm^-$ is used for these studies. Efficiency corrections (``scale factors'') of up to 1.2\% (7.3\%), dependent on lepton \pt and $\eta$, are applied to account for differences in the estimated efficiencies between data and simulation in the electron (muon) channel. The pair of selected same-flavour, opposite-sign, highest-\pt isolated leptons is retained as a \PZ boson candidate if the invariant mass $M_{\ell\ell}$ of the pair lies within the 71--111\GeV mass interval. The overall efficiency of the trigger and event selection within the fiducial acceptance is 88\% for dimuons and 58\% for dielectrons. Jets are reconstructed using the anti-\kt algorithm~\cite{Cacciari:2008gp, Cacciari:2011ma} with a distance parameter of 0.5. In order to suppress the contribution from pileup interactions, charged particles not associated with the primary vertex are excluded from the clustering. Jets are required to be in the tracking acceptance region $\abs{\eta}<2.4$ and to have $\pt > 30\GeV$, thereby reducing to less than 5\% the contribution from the underlying event, whose jets have a softer \pt spectrum than jets from the hard scattering process. Jets with a distance $\DR < 0.5$ from the closer lepton used for the \PZ boson decay reconstruction are not considered in the analysis. The jet energy scale (JES) is calibrated using a factorized approach as described in Refs.~\cite{Chatrchyan:1369486,Khachatryan:2198719}.
The jet energy resolution (JER) in data is known to be worse than in the simulation; therefore the simulated resolution is degraded to compensate for this effect as a function of the jet kinematics~\cite{Chatrchyan:1369486,Khachatryan:2198719}. Jets from $\PQb$ quarks are identified using the combined secondary vertex (CSV) b tagging algorithm~\cite{Chatrchyan:2012jua}, a multivariate classifier that makes use of information about reconstructed secondary vertices as well as the impact parameters of the associated tracks with respect to the primary vertex to discriminate \PQb jets from $\PQc$ and light-flavour jets. The threshold applied to the discriminating variable gives a b tagging efficiency of about 50\% and a misidentification probability of 0.1\% for light jets and 1\% for $\PQc$ jets. Scale factors, measured in multijet events and dependent on jet \pt, are used to correct the b, c, and light-flavour jet efficiencies in the simulation to match those observed in the data~\cite{Chatrchyan:2012jua}. The scale factors for $\PQb$ jets are determined using samples of events enriched in such a flavour of jets. This enrichment is obtained including both multijet events containing a muon geometrically associated with a jet, with high probability of originating from the semileptonic decay of a $\PQb$ hadron, and leptonic and semileptonic \ttbar events, where the leading \pt jets are usually $\PQb$ jets. The scale factors are around 0.93, slowly decreasing for jets with \pt above 120\GeV. The scale factors for $\PQc$ jets are assumed the same as for $\PQb$ jets, with an uncertainty twice as large. Relatively pure samples of $\PQc$ jets from $\PW+\PQc$ events, selected using identified muons within the jet, are used to validate this assumption. For light-flavour jets, the same CSV algorithm yields scale factors between 1.1 and 1.4, depending on the jet \pt. 
The calculation is based on tracks with negative signed impact parameter and secondary vertices with negative signed decay lengths, where the sign is defined by the relative direction of the jet and the particle momentum. Finally, events are selected if they contain a \PZ boson candidate and at least one $\PQb$-tagged jet. The missing transverse momentum vector \ptvecmiss is defined as the projection on the plane perpendicular to the beams of the negative vector sum of the momenta of all reconstructed particles in an event. Its magnitude is referred to as \ETmiss. The \ETmiss significance, introduced in Refs.~\cite{2011JInst...6.9001C,Khachatryan:2014gga}, offers an event-by-event assessment of the consistency of the observed missing energy with zero, given the reconstructed content of the event and known measurement resolutions. In order to suppress the background contamination from \ttbar production, events with \ETmiss significance greater than 30 are vetoed. This requirement provides a 13\% \ttbar background rejection with no loss in signal efficiency. The \ensuremath{\PZ(1\PQb)}\xspace event selection described above yields 26\,443 (36\,843) events for the dielectron (dimuon) channel. The exclusive $\PQb$-tagged jet multiplicity and same-flavour dilepton invariant mass distributions after the \ensuremath{\PZ(1\PQb)}\xspace event selection are presented in Figs.~\ref{fig:multb} and \ref{fig:invzb}, respectively, for the electron and muon channels. Data are compared with the simulations where the $\PZ$+jets events are described by \MADGRAPH+\PYTHIAS, and good agreement is observed. In all figures, the simulated events are reweighted by scale factors in order to compensate for the residual data-to-simulation discrepancies in lepton selection efficiency, JES and JER calibration, and b tagging efficiency. The background contributions from $\PZ$+jets and \ttbar events as adjusted in Section~\ref{sec:backg} are included in Figs.~\ref{fig:multb} and~\ref{fig:invzb}.
\begin{figure*}[hbtp] \begin{center} \includegraphics[width=\ghmFigWidth]{Figure_001-a.pdf} \includegraphics[width=\ghmFigWidth]{Figure_001-b.pdf} \end{center} \caption{Exclusive $\PQb$-tagged jet multiplicity distributions for \ensuremath{\PZ(1\PQb)}\xspace events, for the electron (left) and muon (right) decay channels of the \PZ boson. Error bars account for statistical uncertainties in data in the upper plots and in both data and simulation in the bottom ratio plots, which show the data-to-MC ratio.} \label{fig:multb} \end{figure*} \begin{figure*}[hbtp] \begin{center} \includegraphics[width=\ghmFigWidth]{Figure_002-a.pdf} \includegraphics[width=\ghmFigWidth]{Figure_002-b.pdf} \end{center} \caption{Dilepton invariant mass distributions for \ensuremath{\PZ(1\PQb)}\xspace events, for the electron (left) and muon (right) \PZ boson decay channels. Error bars account for statistical uncertainties in data in the upper plots and in both data and simulation in the bottom ratio plots, which show the data-to-MC ratio.} \label{fig:invzb} \end{figure*} \section{Background estimation} \label{sec:backg} A Drell--Yan event in which a \PZ boson decays into $\Pgt^+\Pgt^-$ may contribute to the dielectron or dimuon signal events if both $\Pgt$ leptons decay into electrons or muons. These events are treated as a background source and, being at the few per mille level, their contribution is evaluated from simulation. The process $\Pp\Pp\to \ttbar \to \PW^+ \PQb \PW^-{\PAQb}\to \ell^+\ell^-\PQb{\PAQb}+\MET$ is the dominant non-Drell--Yan background source. The \ttbar background contribution is estimated separately for $\PZ$+jets, \ensuremath{\PZ(1\PQb)}\xspace, and \ensuremath{\PZ(2\PQb)}\xspace events by using the signal selection criteria to produce samples of $\Pe\Pgm$ pairs, which are enriched in \ttbar events with negligible signal contamination.
For each measured observable these samples provide the estimates of the \ttbar background; residual non-\ttbar backgrounds in them, amounting to about 29\%, 8\%, and 2\%, respectively, are subtracted using the simulated prediction. The integrals of such estimates need to be rescaled by the ratio of the same-flavour lepton to $\Pe\Pgm$ yields. This ratio is determined using control samples for both the same-flavour lepton and $\Pe\Pgm$ selections by inverting the \MET significance requirement, namely, \MET significance $>$30. For the same-flavour lepton samples, this selection removes the contribution from the signal processes, while enhancing the fraction of \ttbar events in the sample. The residual contamination from other non-\ttbar processes is similar in the same-flavour lepton and $\Pe\Pgm$ selections, amounting to about 20\%, 7\%, and 3\%, respectively, and is again taken into account using the simulation. The ratio of the $\Pe\Pgm$ to the $\Pe\Pe$ or $\Pgm\Pgm$ yields in the control samples is used to rescale the estimates of this background source for each lepton channel separately. The ratio is determined as the scaling factor for the normalization of the binned dilepton invariant mass ($M_{\ell\ell}$) distribution in the $\Pe\Pgm$ sample that minimizes the difference of this distribution from the corresponding same-flavour lepton $M_{\ell\ell}$ distribution in a least-squares fit procedure. The fit of the $M_{\ell\ell}$ distribution is performed in the sideband regions 50--84\GeV and 100--200\GeV, to avoid any assumption about the $M_{\ell\ell}$ shape for both opposite- and same-sign lepton pairs in the \PZ peak region. The remaining background sources are estimated using simulation. Diboson events may mimic the $\PZ$+$\PQb$ final state when one or more leptons are not reconstructed or when a $\PW$ or \PZ boson decays hadronically into a $\cPq\cPaq$ pair (in particular a \PZ boson may decay into a genuine \bbbar pair).
Single top quarks produced in association with either a $\PW$ boson or one or more $\PQb$ jets may also generate a signal-like signature. These events, together with $\PW$+jets, can mimic the signal if a lepton of the same flavour is produced in the hadronization or if a hadron is misidentified. The contribution of multijet events is found to be negligible, as has been previously observed in other similar $\PZ$+jets analyses~\cite{azimuthalref}. After subtraction of all non-Drell--Yan background contributions, the extraction of the \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace event yields requires an evaluation of the purity of the b tagging selection, i.e.\ the fraction of selected Drell--Yan events in which the desired number of $\PQb$-tagged jets, at least one or at least two, originate from the hadronization of a corresponding number of $\PQb$ quarks. This fraction is determined from a study of the secondary vertex mass distribution of the leading $\PQb$-tagged jet, defined as the invariant mass of all the charged particles associated with its secondary vertices, assuming the pion mass for each considered particle. This evaluation is done separately for dielectron and dimuon final states to avoid correlations between channels and to simplify the combination. The secondary vertex mass distributions for $\PQb$, $\PQc$, and light-flavour jets produced in association with \PZ bosons are obtained from the simulation based on the \MADGRAPH event generator interfaced with \PYTHIAS by using the 5FS scheme for PDFs. The sum of the distributions is fitted to the observed distribution with an extended binned likelihood, after subtraction of all non-Drell--Yan background contributions, by varying the three normalization scale factors $c_{\mathrm{b}}$, $c_{\mathrm{c}}$, $c_{\mathrm{udsg}}$ for the various components. The $c_{\mathrm{c}}$, $c_{\mathrm{udsg}}$ factors are used for the subtraction of the respective components. 
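Since the templates are fixed and only the three normalizations float, the fit is linear in $c_{\mathrm{b}}$, $c_{\mathrm{c}}$, and $c_{\mathrm{udsg}}$. A toy version (hypothetical template and pseudo-data bin contents, with an unweighted least-squares solution standing in for the extended binned likelihood used in the analysis) looks like:

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_scale_factors(data, templates):
    """Least-squares fit of data ~ sum_k c_k * template_k for three
    fixed-shape templates, via the normal equations (T^T T) c = T^T d."""
    n = len(data)
    ata = [[sum(templates[i][k] * templates[j][k] for k in range(n))
            for j in range(3)] for i in range(3)]
    atb = [sum(templates[i][k] * data[k] for k in range(n)) for i in range(3)]
    return solve3(ata, atb)

# Hypothetical binned secondary-vertex-mass templates (entries per bin)
t_b    = [10., 40., 80., 60., 30.]   # Z+b
t_c    = [30., 50., 20.,  5.,  1.]   # Z+c
t_udsg = [60., 20.,  5.,  1.,  0.]   # Z+light
# Pseudo-data built with known scale factors 0.9, 1.3, 1.7
data = [0.9 * b + 1.3 * c + 1.7 * l for b, c, l in zip(t_b, t_c, t_udsg)]

c_b, c_c, c_udsg = fit_scale_factors(data, [t_b, t_c, t_udsg])
```

With exact pseudo-data the fit recovers the injected factors; in the real fit the statistical weights and the Poisson likelihood of the extended binned fit replace the plain least-squares criterion.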
This procedure reduces the dependence on the normalization of the $\PQb$ hadron production and decay in the simulation because the expected shape of the secondary vertex mass distribution is used. In the case of the \ensuremath{\PZ(2\PQb)}\xspace selection, as can be seen in Fig.~\ref{fig:multb}, the contamination from $\PQc$ and light-flavour jets is negligible and is subtracted using simulation; only the $c_{{\PQb\PQb}}$ scaling factor for the genuine double $\PQb$ jet component is determined from the fit, and it is used only to correct the relative proportion of \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace events in the simulation, as discussed in Section~\ref{sec:unfolding}. The results of the fit to the secondary vertex mass distributions are presented in Fig.~\ref{fig:bpurity1b} for the \ensuremath{\PZ(1\PQb)}\xspace analysis, showing the flavour composition in each channel. Data-to-simulation scale factors, as determined by the fit, are given in Table~\ref{tab:Fraction-of-beauty} for both event selections and \PZ boson decay channels. The flavour composition of the selected sample after the scale factor corrections for the \ensuremath{\PZ(1\PQb)}\xspace samples is also shown. The $\PQb$-flavour contribution is constrained by the high-mass region of the secondary vertex mass distribution, while the $\PQc$-flavour contribution is mostly important in the region between 1 and 2\GeV, and the light-flavour contribution below 1\GeV. This results in a strong anticorrelation both between the $\PQb$- and $\PQc$-flavour and between $\PQc$- and light-flavour contributions, with an estimated correlation coefficient of about $-0.6$ in both cases, whereas the correlation between the $\PQb$- and light-flavour contributions is negligible. As a consequence, a fluctuation in the small $\PQc$ quark component may cause a difference in the scale factors between different lepton channels.
\begin{table*}[htb] \topcaption{Normalization scale factors and post-fit fractions for b, c and light-flavour (u, d, s quark and gluon) components in the selected \ensuremath{\PZ(1\PQb)}\xspace events, and scale factor for b in the selected \ensuremath{\PZ(2\PQb)}\xspace events, obtained from a fit to the secondary vertex mass distribution for dielectron and dimuon final states. The quoted uncertainties are statistical only.} \centering \begin{tabular}{lcccrrr} Event selection & \emph{$c_{\mathrm{b}}$} & \emph{$c_{\mathrm{c}}$} & \emph{$c_{\mathrm{udsg}}$} & \ensuremath{\PZ(1\PQb)}\xspace (\%) & Z+c (\%) & Z+udsg (\%) \\ \hline \ensuremath{\PZ(1\PQb)}\xspace ($\Pe\Pe$) & $0.91\pm0.02$ & $1.29\pm0.13$ & $1.70\pm0.21$ & $69.5\pm1.8$ & $19.0\pm2.0$ & $11.4\pm1.4$ \\ \ensuremath{\PZ(1\PQb)}\xspace ($\Pgm\Pgm$) & $0.91\pm0.02$ & $1.51\pm0.12$ & $1.18\pm0.19$ & $69.7\pm1.5$ & $22.4\pm1.8$ & $7.9\pm1.2$ \\ \end{tabular} \\ \begin{tabular}{lc} Event selection & \emph{$c_{{\PQb\PQb}}$} \\ \hline \ensuremath{\PZ(2\PQb)}\xspace ($\Pe\Pe$) & $1.18\pm0.12$ \\ \ensuremath{\PZ(2\PQb)}\xspace ($\Pgm\Pgm$) & $1.17\pm0.09$ \\ \end{tabular} \label{tab:Fraction-of-beauty} \end{table*} The signal yield for \ensuremath{\PZ(1\PQb)}\xspace events is therefore obtained, for each bin of a distribution, from the selected event yield $N^{\text{selected}}$ as \begin{equation*} \ifthenelse{\boolean{cms@external}} { \begin{split} N_{\ensuremath{\PZ(1\PQb)}\xspace} = N^{\text{selected}}_{\ensuremath{\PZ(1\PQb)}\xspace} & - N_{\ttbar} - N_{\text{Dibosons}}^{\mathrm{MC}} - N_{\text{Others}}^{\mathrm{MC}} \\ & - c_{\mathrm{c}} N_{\mathrm{Z+c}}^{\mathrm{MC}} - c_{\mathrm{udsg}} N_{\mathrm{Z+udsg}}^{\mathrm{MC}} , \end{split} } { {N_{\ensuremath{\PZ(1\PQb)}\xspace} = N^{\text{selected}}_{\ensuremath{\PZ(1\PQb)}\xspace} - N_{\ttbar} - N_{\text{Dibosons}}^{\mathrm{MC}} - N_{\text{Others}}^{\mathrm{MC}} - c_{\mathrm{c}} N_{\mathrm{Z+c}}^{\mathrm{MC}} - c_{\mathrm{udsg}} N_{\mathrm{Z+udsg}}^{\mathrm{MC}} 
,} } \end{equation*} where $N_{\ttbar}$, $N_{\text{Dibosons}}^{\mathrm{MC}}$, and $N_{\text{Others}}^{\mathrm{MC}}$ are the \ttbar, diboson, and other background contributions, respectively, $c_{\mathrm{c}} N_{\mathrm{Z+c}}^{\mathrm{MC}}$ and $c_{\mathrm{udsg}} N_{\mathrm{Z+udsg}}^{\mathrm{MC}}$ are the numbers of Drell--Yan events in which the $\PQb$-tagged jets originate from either a $\PQc$ or a light-flavour parton, and the scale factors multiply the event yields predicted by the simulation. For the calculation of the \ensuremath{\PZ(2\PQb)}\xspace event yield a similar procedure is applied: \begin{equation*} {N_{\ensuremath{\PZ(2\PQb)}\xspace} = N^{\text{selected}}_{\ensuremath{\PZ(2\PQb)}\xspace} - N_{\ttbar} - N_{\text{Dibosons}}^{\mathrm{MC}} - N_{\text{Others}}^{\mathrm{MC}} .} \end{equation*} The $c_\mathrm{c}$ and $c_\mathrm{udsg}$ scale factors are also re-evaluated from subsamples obtained by dividing the ranges of the studied observables into wide intervals, in order to study a possible correlation with the observables themselves. The statistical uncertainty of these scale factors depends on the chosen observable and binning, ranging from a factor of 2 up to 10 relative to the size of the uncertainty of the default values obtained with the full sample. Because no statistically significant dependence is observed, the scale factors estimated from the overall sample are used.
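Numerically, the per-bin subtraction in the expressions above is straightforward; the sketch below uses made-up yields, with the dielectron scale factors from Table~\ref{tab:Fraction-of-beauty} as defaults (the function names and all numbers are illustrative placeholders, not analysis code):

```python
def z1b_signal_yield(n_selected, n_ttbar, n_diboson_mc, n_others_mc,
                     n_zc_mc, n_zudsg_mc, c_c=1.29, c_udsg=1.70):
    # N_Z(1b) = N_selected - N_ttbar - N_dibosons - N_others
    #           - c_c * N_{Z+c}^MC - c_udsg * N_{Z+udsg}^MC
    return (n_selected - n_ttbar - n_diboson_mc - n_others_mc
            - c_c * n_zc_mc - c_udsg * n_zudsg_mc)

def z2b_signal_yield(n_selected, n_ttbar, n_diboson_mc, n_others_mc):
    # For Z(2b) only the non-Drell-Yan backgrounds are subtracted
    return n_selected - n_ttbar - n_diboson_mc - n_others_mc

# Illustrative bin contents only:
yield_1b = z1b_signal_yield(1000.0, 90.0, 15.0, 5.0,
                            n_zc_mc=130.0, n_zudsg_mc=60.0)
```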
Error bars account for statistical uncertainties in data in the upper plots and in both data and simulation in the bottom ratio plots. \label{fig:bpurity1b} } \end{figure*} The amount of background in the final event selection, estimated with the procedures discussed above, can be observed in Fig.~\ref{fig:multb}. For the \ensuremath{\PZ(1\PQb)}\xspace selection, in the electron (muon) samples the Z+$\PQc$ contribution amounts to about 17\% (20\%), the Z+light flavour jets (including gluons) to 10\% (7\%), and the \ttbar to 9\% (8\%). Other background contributions are globally below the 2\% level. The \ensuremath{\PZ(1\PQb)}\xspace contribution in the corresponding selected sample is about 62\% (63\%) for the electron (muon) channel. \section{Unfolding} \label{sec:unfolding} The differential event yields are corrected for event selection efficiencies and for detector resolution effects back to the stable-particle level. For this purpose, the singular value decomposition (SVD)~\cite{Hocker:1995kb} unfolding technique, implemented in the {\sc RooUnfold} toolkit~\cite{2011arXiv1105.1160A}, is used. The unfolding procedure is based on a response matrix, which describes the relationship between the particle-level and measured values of a given observable due to the detector resolution and acceptance. The response matrix is calculated using \ensuremath{\PZ(1\PQb)}\xspace events that are generated with \MADGRAPH in the 5FS, interfaced to \PYTHIAS, and followed by the detector simulation. Response matrices are computed separately for the \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace selections. The proportion of events with exactly one or at least two b quarks in the simulation is reweighted to match that observed in data, as determined by the $c_{{\PQb\PQb}}$ scaling factor.
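As a simplified numerical illustration of the response-matrix formalism underlying the unfolding (not the regularized SVD algorithm itself, and with purely hypothetical numbers), an unregularized unfolding amounts to inverting the matrix relation between particle-level and detector-level spectra:

```python
import numpy as np

# Hypothetical 3x3 response matrix: element [i, j] is the probability
# that an event generated in particle-level bin j is reconstructed in
# detector-level bin i (columns summing to <1 model inefficiency).
R = np.array([[0.70, 0.10, 0.00],
              [0.15, 0.65, 0.12],
              [0.02, 0.10, 0.68]])

truth = np.array([1000.0, 600.0, 300.0])  # particle-level spectrum
measured = R @ truth                      # expected detector-level yields

# Unregularized unfolding: invert the response relation. The SVD method
# of RooUnfold adds a regularization term to damp the statistical
# fluctuations that plain inversion amplifies in real data.
unfolded = np.linalg.solve(R, measured)
```

In the absence of statistical fluctuations the inversion returns the particle-level spectrum exactly; the regularization becomes essential once `measured` carries Poisson noise.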
Fiducial cross sections are defined, based on event generator predictions at the particle level, for leptons and jets reconstructed from the collection of all stable final-state particles, using the same selection criteria as the data analysis. The two leptons (electrons or muons) with the highest transverse momentum and with $\pt > 20\GeV$ and $\abs{\eta} < 2.4$ are selected as \PZ boson decay products if their invariant mass is in the range of 71--111\GeV. Electromagnetic final-state radiation effects are taken into account in the generator-level lepton definition by clustering all photons in a cone of radius $\DR = 0.1$ around the final-state leptons. The leptons selected as \PZ boson decay products are then removed from the particle collection used for the jet clustering at the generator level. The remaining particles, excluding neutrinos, are clustered into jets using the anti-\kt algorithm with a distance parameter of $0.5$. Generated jets are selected if their distance from the leptons forming the \PZ boson candidate is larger than $\DR = 0.5$. Jets originating from the hadronization of $\PQb$ quarks are selected if a $\PQb$ hadron is an ancestor of one of the particles clustered in it, and this $\PQb$ hadron has a distance from the jet in the $\eta$-$\phi$ plane of $\DR \le 0.5$. Jets and \PQb jets are selected if they have $\pt > 30\GeV$ and lie in the pseudorapidity range $\abs{\eta} < 2.4$. As a cross-check of the SVD technique, the unfolding is also performed with the iterative D'Agostini method~\cite{D'Agostini1995487}, leading to compatible results within the statistical uncertainties. \section{Systematic uncertainties} Several sources of systematic uncertainty affect the cross section measurement: the JES and JER, the calculation of the unfolding response matrix, the estimation of the $\PQb$ quark fraction, the background subtraction, the event selection efficiencies, the pileup description, and the integrated luminosity. 
For every source other than the luminosity, the full analysis procedure is repeated after the variation of the corresponding input values, and the difference of the extracted cross section with respect to the central measurement is used as an estimate of the uncertainty due to that source. The uncertainties are symmetrized, if not already symmetric. The systematic uncertainties in the measured \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace differential cross sections are summarized in Table~\ref{tab:syst_Zb} and in Tables~\ref{tab:syst_Zb_2bSample} and \ref{tab:syst_Zb_2bSample_extra}, respectively. Reconstructed jet energies must be corrected for several effects, such as pileup contamination, instrumental noise, nonuniformities and nonlinearities in the detector response, and flavour composition. The resulting uncertainty depends on the transverse momentum and pseudorapidity of the jet. The systematic effect due to the application of JES corrections in the data is estimated by increasing and decreasing the correction parameters by one standard deviation from their nominal values. The uncertainty for the JER correction is evaluated in the same way. {\tolerance=800 For the cross section measurement in a given bin, the systematic uncertainty induced by the model used in the unfolding procedure is evaluated as the difference between the standard result and that obtained with an alternative model for unfolding, namely \AMCATNLO interfaced with \PYTHIAE. This alternative model implements NLO hard scattering matrix elements, compared to the LO matrix elements of \MADGRAPH interfaced to \PYTHIAS, and also includes different details of the underlying event, hadronization, and particle decay descriptions compared to the default choice. In order to evaluate the genuine model-induced effects, the statistical uncertainties from the two simulated samples are subtracted in quadrature from the difference; any negative results so obtained are replaced with zero.
The uncertainty associated with the size of the simulated sample used to compute the response matrix elements is determined by producing replicas of the matrix whose elements are fluctuated according to a Poisson distribution. \par} The uncertainty induced by the secondary vertex mass fit, used to extract the true flavour composition of the \ensuremath{\PZ(1\PQb)}\xspace sample, is twofold. One part is due to the statistical uncertainty in the $c_\mathrm{c}$, $c_\mathrm{udsg}$ scale factors, whose effect is estimated by varying them up and down by one standard deviation, taking into account their correlation. This source of uncertainty is considered as part of the statistical uncertainty, because it is due to the finite size of the collision data sample. The other part stems from the choice of the simulation model for the shape of the secondary vertex mass distributions. This choice affects also the correction of the relative proportion of different $\PQb$ multiplicities provided by the scale factor $c_{\PQb\PQb}$. In addition, a systematic uncertainty is associated, for both \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace samples, with the modelling of the $\PQc$ quark and light-flavour contributions to each measured observable. Both of these model-induced uncertainties, collectively indicated in the tables as ``c, udsg background model'', are estimated by replacing the default model given by \MADGRAPH 5FS interfaced with \PYTHIAS with \AMCATNLO 5FS interfaced with \PYTHIAE. The scale factors, which are determined from the alternative model, are in statistical agreement for dielectron and dimuon channels within one standard deviation. The difference between the results obtained with the two models is therefore considered as safely accounting for possible residual discrepancies between data and simulation. 
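The replica procedure described above for the uncertainty from the finite simulated sample can be sketched as follows, with hypothetical counts; each matrix element recovers the expected $1/\sqrt{N}$ relative fluctuation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical unnormalized response-matrix counts from simulation.
counts = np.array([[700., 100.,  10.],
                   [150., 650., 120.],
                   [ 20., 100., 680.]])

# Produce replicas with every element fluctuated according to a Poisson
# distribution; the spread of the unfolded results over the replicas
# estimates the uncertainty from the limited simulated sample size.
replicas = [rng.poisson(counts) for _ in range(500)]
spread = np.std([r[0, 0] for r in replicas])  # close to sqrt(700)
```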
For each lepton channel the systematic uncertainties in the lepton efficiency calculations for triggering, reconstruction, identification, and isolation are estimated from the $\PZ\to\ell\ell$ ``tag-and-probe'' measurements of data-to-simulation efficiency scale factors. The global effect of the systematic uncertainty related to the scale factors is 1.5\% in the dielectron final state and 2\% in the dimuon final state. The uncertainties in the b tagging efficiency scale factors include contributions from the pileup contamination, the gluon splitting rate in simulation ($\cPg\to \bbbar$), varied by ${\pm}50\%$, and the energy fraction carried by the b hadrons in the hadronization (varied by ${\pm}5\%$)~\cite{Chatrchyan:2012jua}. The global value of the $\PQb$ tagging systematic uncertainty amounts to 3\% per $\PQb$-tagged jet. Scale factors for $\PQc$ jets, assumed equal to those for $\PQb$ jets, are assigned an uncertainty twice as large as for the $\PQb$ jets. The simulation is reweighted according to the generated primary vertex multiplicity and the instantaneous luminosity in data to reproduce the observed primary vertex multiplicity distribution, and provide a reliable representation of pileup. The minimum-bias event cross section in simulation is tuned to provide the best agreement between data and simulation in the vertex multiplicity distribution of $\PZ \to \Pgm \Pgm$ events. The uncertainty associated with this procedure is estimated by varying this minimum-bias cross section value by 5\%. The uncertainty in the \ttbar background normalization is derived from the statistical uncertainties of the same-flavour and $\Pe\Pgm$ control samples and is included in the total statistical uncertainty. 
The systematic uncertainty related to the diboson background ($\PZ\PZ$, $\PW\PW$, $\PW\PZ$) is evaluated by varying the theoretical production cross sections by ${\pm} 15\%$ of their central values, corresponding to about three standard deviations of the overall theoretical normalization uncertainty and covering the typical differences between the theoretical and measured values. In addition, the statistical uncertainty induced by the limited size of simulation samples is taken into account. The systematic uncertainty in the integrated luminosity is 2.6\%~\cite{CMS-PAS-LUM-13-001}. In the ratios of \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace to the inclusive $\PZ$+jets cross sections, the uncertainties are simultaneously propagated to both the numerator and denominator, taking correlations into account. The uncertainties in the energy scale, resolution, and efficiency corrections for reconstructed leptons and jets are considered to be fully correlated, as is the uncertainty in the integrated luminosity. Tables~\ref{tab:syst_Zb}--\ref{tab:syst_Zb_2bSample_extra} summarize the ranges of variation of the uncertainties for each observable measured with the \ensuremath{\PZ(1\PQb)}\xspace and \ensuremath{\PZ(2\PQb)}\xspace samples. 
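The cancellation of fully correlated uncertainties when they are propagated simultaneously to numerator and denominator of such ratios can be seen in a short sketch (all cross section values hypothetical):

```python
# A fully correlated relative uncertainty u shifts numerator and
# denominator together, so it cancels in the ratio (hypothetical values).
u = 0.026              # e.g. the integrated-luminosity uncertainty
num, den = 3.55, 38.2  # hypothetical Z(1b) and inclusive Z+jets cross sections

ratio_nominal = num / den
ratio_shifted = (num * (1 + u)) / (den * (1 + u))  # common factor drops out
```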
\begin{table*}[htb] \renewcommand{\arraystretch}{1.1} \topcaption{Uncertainties (in percent) in the differential cross sections as a function of the leading $\PQb$ jet \pt and $\abs{\eta}$, the \PZ boson \pt, \HT, and $\Delta\phi_{\PZ\PQb}$ between the \PZ boson and the leading $\PQb$ jet, for the \ensuremath{\PZ(1\PQb)}\xspace sample.} \centering \begin{tabular}{lccccc} Uncertainty (\%) & ${\rd\sigma}/{\rd\pt}$ & ${\rd\sigma}/{\rd\abs{\eta}}$ & ${\rd\sigma}/{\rd\pt^{{\PZ}}}$ & ${\rd\sigma}/{\rd H_{\mathrm{T}}}$ & ${\rd\sigma}/{\rd\Delta\phi_{\PZ\PQb}}$\\ \hline JER & 0.3--1.7 & 0.1--0.6 & 0.2--2.6 & 0.4--1.9 & 0.1--2.2 \\ JES & 0.5--4.8 & 0.7--5.3 & 0.5--7.7 & 0.6--5.2 & 0.4--4.2 \\ Unfolding (MC model) & 0.0--19.2 & 0.2--2.2 & 0.0--18.1 & 0.0--10.2 & 0.0--9.2 \\ Unfolding (MC statistics) & 1.4--10.2 & 1.1--2.7 & 1.8--8.3 & 1.3--7.6 & 1.2--6.1 \\ c, udsg background model & 0.0--6.1 & 0.0--7.0 & 0.0--19.9 & 0.7--7.5 & 0.0--10.9 \\ Electron (muon) efficiency & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) \\ $\PQb$ tagging efficiency & 3.0 & 3.0 & 3.0 & 3.0 & 3.0 \\ Pileup & 0.2--4.3 & 0.6--1.4 & 0.4--2.0 & 0.2--2.3 & 0.2--1.6 \\ Background (systematic) & 0.1--0.4 & 0.1--0.3 & 0.1--0.6 & 0.2--0.3 & 0.1--0.3 \\ Background (statistical) & 1.2--7.2 & 1.0--2.5 & 1.5--5.8 & 1.3--4.6 & 1.2--5.9 \\ Integrated luminosity & 2.6 & 2.6 & 2.6 & 2.6 & 2.6 \\ \hline Total syst.\ uncertainty (\%) & 5.5--21.7 & 5.2--10.6 & 5.6--22.8 & 8.4--13.8 & 6.0--13.3 \\ \hline Total stat.\ uncertainty (\%) & 2.6--8.8 & 3.0--5.4 & 2.9--8.6 & 3.1--6.0 & 3.1--7.0 \\ \end{tabular} \label{tab:syst_Zb} \end{table*} \begin{table*}[htb] \renewcommand{\arraystretch}{1.1} \topcaption{Uncertainties (in percent) in the differential cross sections as a function of the leading and subleading $\PQb$ jet \pt, the \PZ boson \pt, the invariant mass of the two $\PQb$-tagged jets, and the invariant mass of the \PZ boson and the two $\PQb$-tagged jets, for the \ensuremath{\PZ(2\PQb)}\xspace sample.} 
\centering \ifthenelse{\boolean{cms@external}}{}{\resizebox{\textwidth}{!}} { \begin{tabular}{lccccc} Uncertainty (\%) & ${\rd\sigma}/{\rd\pt^{\text{leading}}}$ & ${\rd\sigma}/{\rd\pt^{\text{subleading}}}$ & ${\rd\sigma}/{\rd\pt^{\PZ}}$ & ${\rd\sigma}/{\rd M_{{\PQb\PQb}}}$ & ${\rd\sigma}/{\rd M_{\PZ\PQb\PQb}}$\\ \hline JER & 0.3--8.3 & 0.7--7.9 & 0.1--3.8 & 0.9--4.1 & 2.9--12.0 \\ JES & 4.4--17.0 & 7.7--23.3 & 3.1--20.3 & 6.7--15.3 & 3.8--16.2 \\ Unfolding (MC model) & 0.0--74.4 & 0.0--52.6 & 0.0--53.6 & 0.0--37.8 & 0.0--57.3 \\ Unfolding (MC statistics) & 8.0--39.4 & 9.0--35.8 & 8.8--27.0 & 7.6--28.0 & 10.0--20.8 \\ c, udsg background model & 0.0--17.3 & 0.0--16.1 & 0.0--15.5 & 0.0--18.5 & 0.0--10.2 \\ Electron (muon) efficiency & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) \\ $\PQb$ tagging efficiency & 6.0 & 6.0 & 6.0 & 6.0 & 6.0 \\ Pileup & 0.4--14.1 & 0.3--11.4 & 1.3--9.6 & 1.1--5.7 & 0.2--4.3 \\ Background (systematic) & 0.3--0.9 & 0.1--0.7 & 0.3--1.2 & 0.0--1.4 & 0.3--1.3 \\ Background (statistical) & 3.1--17.4 & 4.0--24.2 & 4.2--15.0 & 4.3--15.0 & 5.8--10.2 \\ Integrated luminosity & 2.6 & 2.6 & 2.6 & 2.6 & 2.6 \\ \hline Total syst.\ uncertainty (\%) & 17.2--89.4 & 19.7--61.7 & 17.8--56.6 & 14.5--52.9 & 17.9--65.4 \\ \hline Total stat.\ uncertainty (\%) & 6.1--34.1 & 7.6--44.5 & 10.4--23.5 & 7.9--28.0 & 11.2--19.9 \\ \end{tabular} } \label{tab:syst_Zb_2bSample} \end{table*} \begin{table*}[htb] \renewcommand{\arraystretch}{1.1} \topcaption{Uncertainties (in percent) in the differential cross sections as a function of $\DR$ and $\Delta\phi$ between the two $\PQb$-tagged jets, $\DR$ between the \PZ boson and the closer $\PQb$-tagged jet, and the asymmetry $A_{\PZ\PQb\PQb}$, for the \ensuremath{\PZ(2\PQb)}\xspace sample.} \centering \begin{tabular}{lcccc} Uncertainty (\%) & ${\rd\sigma}/{\rd\Delta\phi_{\PQb\PQb}}$ & ${\rd\sigma}/{\rd\Delta R_{{\PQb\PQb}}}$ & ${\rd\sigma}/{\rd\Delta R^{\text{min}}_{\PZ\PQb}}$ & ${\rd\sigma}/{\rd 
A_{{\PZ\PQb\PQb}}}$\\ \hline JER & 0.8--2.0 & 1.0--5.3 & 0.6--6.1 & 0.6--4.2 \\ JES & 5.6--10.7 & 6.6--20.5 & 4.2--13.1 & 5.1--9.1 \\ Unfolding (MC model) & 0.0--47.0 & 0.0--206 & 0.0--50.6 & 2.6--33.1 \\ Unfolding (MC statistics) & 6.3--11.5 & 6.4--30.7 & 8.2--25.6 & 7.5--30.5 \\ c, udsg background model & 0.0--3.4 & 0.0--10.3 & 0.0--14.2 & 0.0--12.3 \\ Electron (muon) efficiency & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) & 1.5 (2.0) \\ $\PQb$ tagging efficiency & 6.0 & 6.0 & 6.0 & 6.0 \\ Pileup & 0.4--2.4 & 1.3--3.5 & 0.5--4.6 & 1.2--6.1 \\ Background (systematic) & 0.1--0.8 & 0.1--0.8 & 0.2--1.3 & 0.2--0.7 \\ Background (statistical) & 3.4--5.0 & 3.7--9.4 & 3.6--15.9 & 3.3--8.8 \\ Integrated luminosity & 2.6 & 2.6 & 2.6 & 2.6 \\ \hline Total syst.\ uncertainty (\%) & 13.0--50.5 & 12.5--209 & 14.2--59.5 & 13.6--47.2 \\ \hline Total stat.\ uncertainty (\%) & 6.9--10.1 & 7.5--17.6 & 7.4--33.1 & 6.6--18.4 \\ \end{tabular} \label{tab:syst_Zb_2bSample_extra} \end{table*} \section{Results and comparison with theoretical predictions} \subsection{Observables} Differential cross sections as a function of a number of kinematic observables are measured in order to characterize the production mechanisms of \ensuremath{\PZ(1\PQb)}\xspace events. For \ensuremath{\PZ(1\PQb)}\xspace events, five kinematic observables are studied. First, \pt and $\abs{\eta}$ of the leading-\pt $\PQb$ jet are measured, together with the \PZ boson \pt. The distributions of these variables are directly sensitive to the $\PQb$ quark PDF and initial-state gluon splitting and may show differences between different PDF flavour schemes. Searches for physics processes beyond the SM in Lorentz-boosted topology events depend on precise knowledge of the \PZ boson \pt distribution. The scalar sum \HT of the transverse momenta of all selected jets, regardless of their flavour, is related to the structure of the hadronic system recoiling against the boson. 
The measurement of this observable at high values is potentially sensitive to the presence of intermediate heavy particles decaying hadronically, as predicted, for example, in some SUSY scenarios. Finally, the topology of the system composed of the \PZ boson and $\PQb$ jet is studied by measuring the cross section as a function of the azimuthal angular separation between the direction of the \PZ boson and the direction of the highest-\pt $\PQb$ jet, $\Delta\phi_{\PZ\PQb}$. This observable is also sensitive to the presence of boosted particles decaying into a \PZ boson and $\PQb$ quarks. Ratios of the differential cross sections for \ensuremath{\PZ(1\PQb)}\xspace and $\PZ$+jets events, inclusive in the jet flavour, are also measured: \begin{equation*} R(x)=\frac{\rd\sigma(\PZ{+({\ge}1\PQb)})/\rd x}{\rd\sigma(\PZ{+}\text{jets})/\rd x}, \end{equation*} with $x$ representing one of the five observables described above. The inclusive $\PZ$+jets event selection is defined by releasing the requirement of a b-tagged jet in the \ensuremath{\PZ(1\PQb)}\xspace selection. In these ratios the kinematic observables referring to the highest-\pt b-tagged jet from the \ensuremath{\PZ(1\PQb)}\xspace sample are used in the numerator, while for the denominator the observables related to the highest-\pt jet from the $\PZ$+jet sample are examined. Several systematic uncertainties cancel in the ratios, allowing a precise comparison with theory. For \ensuremath{\PZ(2\PQb)}\xspace events, the cross section is measured as a function of the transverse momenta of the \PZ boson and of the leading and subleading $\PQb$ jets. In addition, the cross section is studied as a function of several variables explicitly related to the topology of the final state consisting of a \Z boson and the two highest-\pt $\cPqb$ jets. 
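The \HT and $\Delta\phi_{\PZ\PQb}$ observables used for the \ensuremath{\PZ(1\PQb)}\xspace sample can be computed as in this illustrative sketch (hypothetical event content):

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    return 2 * math.pi - dphi if dphi > math.pi else dphi

# Hypothetical event: Z boson azimuth, leading b jet azimuth (rad),
# and transverse momenta (GeV) of the selected jets.
phi_Z, phi_b = 0.3, 3.3
jet_pts = [95.0, 60.0, 35.0]

ht = sum(pt for pt in jet_pts if pt > 30.0)  # scalar sum over selected jets
dphi_Zb = delta_phi(phi_Z, phi_b)            # back-to-back when close to pi
```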
The invariant mass ${M_{{\PQb\PQb}}}$ of the $\PQb\PQb$ system and the invariant mass $M_{\PZ\PQb\PQb}$ of the $\PZ\PQb\PQb$ system are studied, because their distributions are sensitive to the presence of heavy intermediate particles. Angular correlations between the $\PQb$ jets and between each $\PQb$ jet and the \PZ boson are described by four observables, also studied in Ref.~\cite{Chatrchyan:2013zjb}. The azimuthal angular separation $\Delta \phi_{{\PQb\PQb}}$ between the directions of the two $\PQb$ jets in the transverse plane is useful to identify back-to-back configurations of the $\PQb$ quarks. The distance between the directions of the two $\PQb$ jets in the $\eta$-$\phi$ plane is defined as $\Delta R_{{\PQb\PQb}} = \sqrt{\smash[b]{(\Delta\eta_{{\PQb\PQb}})^{2}+(\Delta\phi_{{\PQb\PQb}})^{2}}}$, where $\Delta \eta_{{\PQb\PQb}}$ is the separation in pseudorapidity between the two $\PQb$ jets. This variable is sensitive to the different production mechanisms of the \ensuremath{\PZ(2\PQb)}\xspace final-state $\PQb$ quarks. In particular, it is useful to discriminate between the contributions whose scattering amplitudes are dominated by terms involving gluon splitting, $\cPg \to \bbbar$, and those where a \PZ boson is emitted from one of the final-state $\PQb$ quarks. The process $\qqbar\to \Z\bbbar$ contributes to both cases, while $\PQq\Pg\to \Z\bbbar\mathrm{X}$ (with $\mathrm{X}$ an additional parton) contributes to the former and $\Pg\Pg\to \Z\bbbar$ to the latter. These contributions correspond, respectively, to the regions where the two $\PQb$ quarks are almost collinear or mostly acollinear. Because two $\PQb$ jets must be reconstructed, this measurement cannot be sensitive to low-angle gluon splitting, where the distance between the jet-initiating partons is smaller than twice the jet size. 
This region is better explored by searching directly for pairs of b hadrons close in space, as studied in Ref.~\cite{Chatrchyan:2013zjb}, whose decay products might be part of a single reconstructed jet. Another angular observable of interest is $\Delta R_{\PZ\PQb}^{\text{min}}$, the angular separation between the \PZ boson and the closer $\PQb$ jet in the $\eta$-$\phi$ plane. This variable is useful for testing multileg tree-level and NLO corrections in which a \PZ boson is radiated from a quark, because it is sensitive to event topologies with the \PZ boson in the vicinity of one of the two $\PQb$ jets. Finally, the $A_{\PZ\PQb\PQb}$ asymmetry between the $\PQb$ jet direction and the \PZ boson direction is computed using a combination of $\Delta R_{\PZ\PQb}^{\text{min}}$ and $\Delta R_{\PZ\PQb}^{\text{max}}$ (the latter being the $\eta$-$\phi$ separation between the \PZ boson and the farther b jet): \begin{equation*} A_{\PZ\PQb\PQb} = \frac{\Delta R_{\PZ\PQb}^{\text{max}} - \Delta R_{\PZ\PQb}^{\text{min}}}{\Delta R_{\PZ\PQb}^{\text{max}} + \Delta R_{\PZ\PQb}^{\text{min}}} . \end{equation*} The $A_{\PZ\PQb\PQb}$ asymmetry can provide an indirect test of pQCD validity at higher orders of the perturbative series. A nonzero value of $A_{\PZ\PQb\PQb}$ is related to the emission of additional gluon radiation in the final state, while values of $A_{\PZ\PQb\PQb}$ close to zero identify configurations in which the two $\PQb$ jets are emitted symmetrically with respect to the \PZ boson direction. \subsection{Theoretical predictions} \label{sec:theory} The measured differential cross sections for the associated production of \PZ bosons and $\PQb$ jets are compared to several perturbative QCD theoretical calculations. In pQCD the amplitude for this process can be computed using two alternative approaches. 
In the four-flavour scheme (4FS)~\cite{Cordero:2009kv}, the $\PQb$ quark mass is explicitly included in the predictions and acts as an infrared cutoff, partly removing possible divergences in the matrix element calculation. This approach corresponds to an effective QCD theory, with $n_{f}=4$ quark flavours involved in the computation of the running of the strong coupling constant \alpS. In this scheme no $\PQb$ quark PDF is used, and the $\PQb$ quark is always produced explicitly by the gluon splitting $\Pg \to \bbbar$ process. In the 5FS~\cite{Campbell:2003dd} (where $n_{f}=5$), the gluon splitting contribution is included within the $\PQb$ quark PDF, and the $\PQb$ quark mass is set to zero in the matrix element calculation. The two schemes can be defined in such a way as to provide identical results when all orders in pQCD are computed. However, differences appear in fixed-order predictions, where the different ordering of terms in the matrix element expansion gives different results. The comparison of different flavour schemes is interesting because, in pQCD, the evolution of the $\PQb$ quark PDF as a function of the Bjorken $x$ variable shows sizeable differences between tree-level calculations and those at NLO. These differences are introduced by singularities in the Altarelli--Parisi splitting functions that are present only at NLO; they have no impact on the tree-level evolution of the $\PQb$ quark PDF~\cite{Maltoni:2012pa}. While NLO calculations are now available for both flavour schemes, LO calculations are still interesting to study because they allow the inclusion of multiple additional light, hard partons in the matrix element. This feature is expected to provide a better description of the real hard radiation, compared to fixed-order NLO calculations matched with parton showering. 
The \MADGRAPH plus \PYTHIA6 event generator, introduced in Section~\ref{sec:evsim}, describes signal events at full detector simulation level and provides theoretical predictions at tree level for the associated production of \PZ bosons and jets, including $\PQb$ jets. This calculation is based on the 5FS using the LO \MADGRAPH 5.1.3.30 matrix element generator, with up to four additional partons in the matrix element calculation. The factorization and renormalization scales are chosen on an event-by-event basis as the transverse mass of the event, clustered with the \kt algorithm down to a 2$\to$2 topology, and \kt computed at each vertex splitting, respectively~\cite{Alwall:2008qv,Alwall:2007fs}. The matrix element calculation is interfaced with \PYTHIA version~6.424, using tune Z2* for parton showering, hadronization, and description of MPI. The CTEQ6L1 PDF is adopted in the calculations. The Drell--Yan inclusive cross section is rescaled to the NNLO calculation provided by \FEWZ 3.1~\cite{Gavin:2010az,Li:2012wna}, which has an uncertainty of about 5\%. This uncertainty is not propagated into the figures presented below. Theoretical predictions at tree level based on \MADGRAPH matrix elements for the $\Z + 2\PQb$ process are also computed using the 4FS MSTW2008 LO PDF set~\cite{Martin:2009iq}. The factorization and renormalization scales are defined as in the 5FS case. Also in this case, parton showering and hadronization are provided by \PYTHIAS with the tune Z2*. The inclusive cross section is rescaled to the $\Z + 2\PQb$ NLO calculation with \AMCATNLO~\cite{Alwall:2014hca} for the 4FS, which has an estimated theoretical uncertainty of 15\%, dominated by scale variations. This uncertainty is not shown in the figures. The event generator \AMCATNLO~\cite{Alwall:2014hca} version 2.2.1 is used to provide results at NLO, combining matrix elements for zero, one, and two additional partons through the {\sc FxFx} algorithm~\cite{Frederix:2012ps}.
The NNPDF 3.0 NLO PDF set~\cite{Ball:2014uwa}, based on the 5FS, is used. Parton showering and hadronization are performed by \PYTHIA version 8.205~\cite{Sjostrand:2007gs}, using the CUETP8M1 tune~\cite{Khachatryan:2015pea}. The choice of QCD scales is the same as for the LO \MADGRAPH prediction. This is the same event generator that is used in Section~\ref{sec:evsim} to study the systematic uncertainty in the secondary vertex mass distribution. The 5FS is also used to compute the NLO \POWHEG prediction for a \PZ boson associated with two extra partons, including $\PQb$ quarks~\cite{Campbell:2013vha}. Lower parton multiplicities are described in the matrix element as well, but with no guarantee of NLO accuracy. The scale choice is based on the \textsc{ minlo} approach~\cite{Hamilton:2012np}. The NNPDF 3.0 PDF set~\cite{Ball:2014uwa} is used, and the matrix element calculation is matched with the \PYTHIAE parton shower evolution and hadronization, using the CUETP8M1 tune. For both \AMCATNLO and \POWHEG, no further rescaling of the native cross section is made. Theoretical systematic uncertainties in the predictions, caused by the choice of the QCD factorization and renormalization scales and by the propagation of the uncertainties in PDFs, are computed. The former are estimated by varying the QCD scales by factors of 2 and 0.5, while the latter are computed according to PDF authors' prescriptions. The uncertainty from varying the QCD scales is generally the dominant contribution. These theoretical uncertainties are displayed in the figures only in the ratio plots, with the statistical uncertainty shown separately, and add up to about 10\% and 20\% for the two calculations, respectively. For LO calculations, only the statistical uncertainty of theoretical predictions is shown. 
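The scale-variation envelope described above can be sketched as follows (all cross section values hypothetical):

```python
# QCD scale uncertainty from an envelope: the prediction is recomputed
# with the renormalization and factorization scales multiplied by 2 and
# by 0.5, and the extreme deviations define the asymmetric band.
nominal = 4.23                         # hypothetical central value in pb
variations = [4.50, 3.86, 4.31, 4.02]  # hypothetical varied-scale results

band_up = max(variations) - nominal
band_down = nominal - min(variations)
```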
\subsection{Comparison with data} The measured differential cross sections, unfolded for detector effects, are compatible for the two leptonic channels, and are therefore combined into an uncertainty-weighted average for a single lepton flavour. Correlations between systematic uncertainties for the electron and muon channels are taken into account in the combination. In particular, all uncertainties are treated as fully correlated, with the exception of those related to lepton efficiencies, \ttbar background estimates, and the statistical part of the subtraction of the c quark and udsg components from $\PZ$+jets, and the statistical part of the unfolding uncertainty, which are treated as fully uncorrelated. All the cross sections are measured in the fiducial phase space defined at the generated particle level for the unfolding procedure, as discussed in Section~\ref{sec:unfolding}. No attempt is made to disentangle $\PQb$ jet production in the hard scattering processes and in MPI. The integral of the unfolded distributions gives the fiducial cross section, for a single lepton type, for the production of \ensuremath{\PZ(1\PQb)}\xspace events, \begin{equation*} \sigma_{\text{fid}}( \Pp\Pp\to \PZ +({\geq}1\PQb) ) = 3.55 \pm 0.12\stat \pm 0.21\syst\unit{pb} , \end{equation*} and \ensuremath{\PZ(2\PQb)}\xspace events, \begin{equation*} \sigma_{\text{fid}}( \Pp\Pp\to \PZ +({\geq}2\PQb)) = 0.331 \pm 0.011\stat \pm 0.035\syst\unit{pb} . \end{equation*} These measured values can be compared with the corresponding predictions at NLO of \AMCATNLO interfaced with \PYTHIAE (described in Sec.\ref{sec:theory}), $4.23^{+0.27}_{-0.37}$\unit{pb} for \ensuremath{\PZ(1\PQb)}\xspace and $0.356^{+0.030}_{-0.031}$\unit{pb} for \ensuremath{\PZ(2\PQb)}\xspace. The prediction overestimates by about 20\% the measured value for \ensuremath{\PZ(1\PQb)}\xspace, while a reasonable agreement is found for \ensuremath{\PZ(2\PQb)}\xspace within uncertainties. 
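The uncertainty-weighted average used to combine the two lepton channels can be illustrated with inverse-variance weights (hypothetical channel inputs):

```python
# Inverse-variance weighting of two channel measurements (hypothetical
# electron- and muon-channel values with their total uncertainties).
x_e, s_e = 3.60, 0.18
x_m, s_m = 3.52, 0.16

w_e, w_m = 1 / s_e**2, 1 / s_m**2
x_avg = (w_e * x_e + w_m * x_m) / (w_e + w_m)  # combined value
s_avg = (w_e + w_m) ** -0.5                    # combined uncertainty
```

In the actual combination the correlated systematic components are treated separately; the pure inverse-variance form above applies to the uncorrelated parts.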
The ratio of the cross sections in the fiducial phase space for the production of at least two and at least one $\PQb$ jet is \begin{equation*} \frac{\sigma_{\text{fid}}( \Pp\Pp\to \PZ +({\geq}2\PQb))}{\sigma_{\text{fid}}( \Pp\Pp\to \PZ +({\geq}1\PQb))} = 0.093 \pm 0.004\stat \pm 0.007\syst , \end{equation*} to be compared with the theoretical prediction $0.084^{+0.002}_{-0.001}$, where the systematic uncertainties are considered as fully correlated. Results for the differential cross sections for the \ensuremath{\PZ(1\PQb)}\xspace events are presented in Figs.~\ref{fig:w_first_bjet_pt_unfolding}--\ref{fig:w_delta_phi_b_unfolding}, together with the ratios to the corresponding observables for the inclusive $\PZ$+jets event selection. Where applicable, the last bin also includes overflow values. A discrepancy of about 20\% is seen in the overall normalization for the 4FS-based prediction, of the same order of magnitude as its estimated theoretical uncertainty. The \POWHEG prediction tends to overestimate the cross sections by about 25\%. Apart from the normalization difference, the pQCD calculation with massive $\PQb$ quarks (4FS) reproduces the shape of the observed distributions slightly better in the soft momentum regime of $\PQb$ jets. For the leading \PQb jet \pt spectrum (Fig.~\ref{fig:w_first_bjet_pt_unfolding}), the ratio with data is reasonably flat below 80\GeV, whereas it presents a clear slope in the higher \pt range. A similar behaviour is visible in the \PZ boson \pt distribution below 130\GeV (Fig.~\ref{fig:w_pt_Z_b_unfolding}) and in the \HT spectrum below 200\GeV (Fig.~\ref{fig:w_Ht_b_unfolding}). The \POWHEG generator considerably overestimates the soft parts of the \pt and \HT spectra. The leading $\PQb$ jet $\abs{\eta}$ spectrum shape is well reproduced by the \MADGRAPH 4FS configuration (Fig.~\ref{fig:w_first_bjet_eta_abs_unfolding}), while the \MADGRAPH 5FS calculation slightly overestimates the central part of the spectrum.
The shape of the distribution of the azimuthal angular separation $\Delta\phi_{\PZ\PQb}$ between the \PZ boson and the leading $\PQb$ jet is reproduced within uncertainties by all the calculations (Fig.~\ref{fig:w_delta_phi_b_unfolding}). The NLO \AMCATNLO predictions have similar behaviour to those from LO \MADGRAPH 5FS. As far as the NLO \POWHEG-based prediction is concerned, it shows a similar behaviour to \AMCATNLO, but the discrepancies are larger, reaching about 40\% at the peak of the \PZ boson \pt spectrum. In general, the shape predicted by each calculation compares with data, within uncertainties, in a similar way in the high \PZ boson \pt and \HT regions, which are potentially sensitive to new physics contributions. The underestimation of the normalization by \MADGRAPH 4FS and the overestimation by \POWHEG are also observed in the ratio of \ensuremath{\PZ(1\PQb)}\xspace and inclusive $\PZ$+jets processes (described by the \MADGRAPH generator in the 5FS). The pseudorapidity distribution (Fig.~\ref{fig:w_first_bjet_eta_abs_unfolding}), with an almost flat shape, clearly shows that the ratio for the 4FS-based prediction is about 4\%, compared to the 5\% of the 5FS-based calculations, while \POWHEG predicts about 6\%. The 4FS prediction also fails to reproduce the ratio of the leading jet \pt spectra (Fig.~\ref{fig:w_first_bjet_pt_unfolding}), which is clearly underestimated below 80\GeV. In contrast, \POWHEG overestimates the spectrum in the soft region by about 30\%. Similar discrepancies, although less pronounced, are observed for \HT and the \PZ boson \pt. The ratio as a function of the azimuthal separation between the \PZ boson and the $\PQb$ jet (Fig.~\ref{fig:w_delta_phi_b_unfolding}) is also slightly underestimated by the \MADGRAPH 4FS prediction when the \PZ boson is approximately back-to-back with respect to the leading $\PQb$ jet, with a difference in the azimuthal angles close to $\pi$. 
The results for the differential cross sections measured with the \ensuremath{\PZ(2\PQb)}\xspace event selection are shown in Figs.~\ref{fig:w_first_bjet_pt_2b_unfolding}--\ref{fig:w_A_Zb_unfolding}. Within uncertainties, no global normalization discrepancy is observed, contrary to the \ensuremath{\PZ(1\PQb)}\xspace case. The leading and subleading jet spectra are slightly underestimated in the soft region by the LO calculations (the leading $\PQb$ jet \pt in the first two bins of Fig.~\ref{fig:w_first_bjet_pt_2b_unfolding} and the subleading $\PQb$ jet \pt in the first bin of Fig.~\ref{fig:w_second_bjet_pt_2b_unfolding}), while the \PZ boson \pt distribution is well reproduced, within uncertainties (Fig.~\ref{fig:w_pt_Z_b_2b_unfolding}). The 4FS predictions overestimate the data at the high end of these \pt distributions. The ratios of all theoretical predictions and the data show a slight positive slope for the azimuthal separation (Fig.~\ref{fig:w_delta_phi_2b_unfolding}). All the other distributions are well reproduced. In general, given the experimental uncertainties, the measurements do not strongly discriminate between the theoretical predictions. The ratios to data of the \AMCATNLO and \POWHEG predictions, both based on NLO matrix elements, show a similar shape. \begin{figure*}[hbt] \centering \includegraphics[width=\ghmFigWidth]{Figure_004-a} \includegraphics[width=\ghmFigWidth]{Figure_004-b} \caption{Differential fiducial cross section for Z(1b) production as a function of the leading $\PQb$ jet \pt (left), and the cross section ratio for Z(1b) and Z+jets production as a function of the leading $\PQb$/inclusive (j) jet \pt (right), compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. 
For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, the inner darker area represents the statistical component only. \label{fig:w_first_bjet_pt_unfolding}} \end{figure*} \begin{figure*}[hbt] \centering \includegraphics[width=\ghmFigWidth]{Figure_005-a} \includegraphics[width=\ghmFigWidth]{Figure_005-b} \caption{Differential fiducial cross section for Z(1b) production as a function of the leading $\PQb$ jet $\abs{\eta}$ (left), and the cross section ratio for Z(1b) and Z+jets production as a function of the leading $\PQb$/inclusive (j) jet $\abs{\eta}$ (right), compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_first_bjet_eta_abs_unfolding}} \end{figure*} \begin{figure*}[hbt] \centering \includegraphics[width=\ghmFigWidth]{Figure_006-a} \includegraphics[width=\ghmFigWidth]{Figure_006-b} \caption{Differential fiducial cross section for Z(1b) production as a function of the \PZ boson \pt (left), and the cross section ratio for Z(1b) and Z+jets production as a function of the \PZ boson \pt (right), compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. 
For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_pt_Z_b_unfolding}} \end{figure*} \begin{figure*}[hbt] \centering \includegraphics[width=\ghmFigWidth]{Figure_007-a} \includegraphics[width=\ghmFigWidth]{Figure_007-b} \caption{Differential fiducial cross section for Z(1b) production as a function of \HT (left), and the cross section ratio for Z(1b) and Z+jets production as a function of \HT (right), compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_Ht_b_unfolding}} \end{figure*} \begin{figure*}[hbtp] \centering \includegraphics[width=\ghmFigWidth]{Figure_008-a} \includegraphics[width=\ghmFigWidth]{Figure_008-b} \caption{Differential fiducial cross section for Z(1b) production as a function of $\Delta\phi_{\PZ\PQb}$ (left), and the cross section ratio for Z(1b) and Z+jets production as a function of $\Delta\phi_{\mathrm{Z(b/j)}}$ (right), compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. 
For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_delta_phi_b_unfolding}} \end{figure*} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_009.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of the leading $\PQb$ jet \pt, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_first_bjet_pt_2b_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_010.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of the subleading $\PQb$ jet \pt, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. 
The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_second_bjet_pt_2b_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_011.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of the \PZ boson \pt, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_pt_Z_b_2b_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_012.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of the invariant mass of the $\PQb$ jet pair, $M_{\PQb\PQb}$, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. 
\label{fig:w_bb_mass_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_013.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of the invariant mass of the $\PZ\PQb\PQb$ system, $M_{\PZ\PQb\PQb}$, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_Zbb_mass_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_014.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of $\Delta \phi_{\PQb\PQb}$, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. 
\label{fig:w_delta_phi_2b_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_015.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of $\Delta R_{\PQb\PQb}$, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_DR_bb_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_016.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of $\Delta R_{\PZ\PQb}^\text{min}$, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. 
\label{fig:w_DR_Zb_min_unfolding}} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=\ghmFigWidthTwo]{Figure_017.pdf} \caption{Differential fiducial cross section for Z(2b) production as a function of $A_{\PZ\PQb\PQb}$, compared with the \MADGRAPH 5FS, \MADGRAPH 4FS, \AMCATNLO, and \POWHEG\ \textsc{minlo} theoretical predictions (shaded bands), normalized to the theoretical cross sections described in the text. For each data point the statistical and the total (sum in quadrature of statistical and systematic) uncertainties are represented by the double error bar. The width of the shaded bands represents the uncertainty in the theoretical predictions, and, for NLO calculations, theoretical systematic uncertainties are added in the ratio plots with the inner darker area representing the statistical component only. \label{fig:w_A_Zb_unfolding}} \end{figure} \section{Summary} The process of associated production of jets, including $\PQb$ jets, and a \PZ boson decaying into lepton pairs ($\ell=\Pe,\mu$) is measured in LHC $\Pp\Pp$ collisions at $\sqrt{s} = 8\TeV$ with the CMS experiment, using a data set corresponding to an integrated luminosity of 19.8\fbinv. The measured fiducial cross sections are compared to several theoretical predictions. The cross sections are measured as a function of various kinematic observables describing the event topology with a \PZ boson and at least one $\PQb$ jet: the \pt and $\eta$ of the leading $\PQb$ jet, the \PZ boson \pt, the scalar sum \HT of the jet transverse momenta, and the azimuthal angular difference between the directions of the leading $\PQb$ jet and the \PZ boson. 
The unfolded data are compared with leading-order pQCD predictions based on matrix element calculations matched with parton showering, as implemented in the \MADGRAPH event generator, and with NLO calculations that merge predictions for zero, one, and two extra jets with \AMCATNLO, or for the first two jets with \POWHEG in the \textsc{minlo} approach. In most cases the theoretical predictions agree with the data, although the normalization for \MADGRAPH 4FS, which fails to describe simultaneously both the low- and high-\pt $\PQb$ jet regions, is underestimated by 20\%. The ratios of differential cross sections for the production of a \PZ boson in association with at least one $\PQb$ jet and the inclusive $\PZ$+jets production are measured and compared with theoretical expectations. The 4FS-based prediction fails to describe the shape of the ratio as a function of the leading $\PQb$ jet \pt, and discrepancies in the shape are also observed for high values of the \PZ boson \pt. The production of a \PZ boson in association with two $\PQb$ jets is also investigated. In this case the kinematic observables are the transverse momenta of the leading and subleading $\PQb$ jets, the \pt of the \PZ boson, the separations of the $\PQb$ jets both in azimuthal angle and in the $\eta$-$\phi$ plane, the minimal distance in the $\eta$-$\phi$ plane between the \PZ boson and a $\PQb$ jet, the asymmetry between the minimal and the maximal distances between the \PZ boson and a $\PQb$ jet, and the invariant masses of the $\PQb\PQb$ and the $\PZ\PQb\PQb$ systems. The measured distributions are generally well reproduced by the predictions. 
\clearpage \begin{acknowledgments} \hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren Rachada-pisek} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General 
Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US 
National Science Foundation. Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845. \end{acknowledgments}
\section{Introduction}\label{Intro} This paper is the final part of the series of papers devoted to the study of truncated Stochastic approximation (SA) with moving bounds. The classical problem of SA is concerned with finding a unique zero, say $z^0$, of a real-valued function $R(z): \mathbb{R} \to \mathbb{R}$ when only noisy measurements of $R$ are available. To estimate $z^0$, consider a sequence defined recursively as $$ Z_t= Z_{t-1}+\gm_t \left[R(Z_{t-1})+\ve_t\right], \qquad t=1,2,\dots $$ where $\{\ve_t\}$ is a sequence of zero-mean random variables and $\{\gamma_t\}$ is a deterministic sequence of positive numbers. This is the classical Robbins-Monro SA procedure (see Robbins and Monro (1951)\nocite{RM}), which under certain conditions converges to the root $z^0$ of the equation $R(z)=0$. (Comprehensive surveys of the SA technique can be found in Benveniste et al. (1990)\nocite{Ben1990}, Borkar (2008)\nocite{Bor}, Kushner and Yin (2003)\nocite{KushYin}, Lai (2003)\nocite{Lai}, and Kushner (2010)\nocite{Kush1}.) In applications, however, it is important to consider the setting when the function $R$ changes over time. So, let us assume that the objective now is to find a common root $z^0$ of a dynamically changing sequence of functions $R_t(z)$. Also, in certain circumstances it might be necessary to confine the values of the procedure to a certain set, or to a sequence of sets, by applying a truncation operator. This happens if, e.g., the functions in the recursive equation are defined only for certain values of the parameter. Truncations may also be useful when certain standard assumptions, e.g., conditions on the growth rate of the relevant functions, are not satisfied. They may also help to make efficient use of auxiliary information concerning the value of the unknown parameter. For example, we might have auxiliary information about the root $z^0$, e.g. a set, possibly time dependent, that contains the value of the unknown root. 
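As a quick illustration (not part of the original series of papers), the classical Robbins-Monro recursion can be sketched in a few lines of Python; the choices $R(z)=-(z-2)$, $\gm_t=1/t$, and standard Gaussian noise are assumptions made purely for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def robbins_monro(R, z_init, n_steps=10_000, noise_sd=1.0):
    # Z_t = Z_{t-1} + gamma_t * (R(Z_{t-1}) + eps_t), with gamma_t = 1/t and
    # eps_t zero-mean noise, following the classical Robbins-Monro scheme.
    z = z_init
    for t in range(1, n_steps + 1):
        z = z + (1.0 / t) * (R(z) + rng.normal(0.0, noise_sd))
    return z

# R(z) = -(z - 2) has the unique root z^0 = 2 and satisfies (z - z^0) R(z) <= 0.
z_hat = robbins_monro(lambda z: -(z - 2.0), z_init=0.0)
```

With these choices $Z_t$ reduces to the running average of the noisy observations $2+\ve_t$, so the iterate approaches the root at the usual $t^{-1/2}$ rate.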
In order to study these procedures in a unified manner, we consider a SA of the following form $$ Z_t=\Phi_{U_t} \Big(~ Z_{t-1}+\gm_t(Z_{t-1}) \big[ R_t(Z_{t-1})+\ve_t(Z_{t-1}) \big]\Big), \quad t=1,2,\dots $$ where $Z_0 \in \mathbb{R}^m$ is some starting value, $R_t(z)$ is a predictable process with the property that $R_t(z^0)=0$ for all $t$, $\gm_t(z)$ is a matrix-valued predictable step-size sequence, $U_t \subset \mathbb{R}^m$ is a random sequence of truncation sets, and $\Phi$ is the truncation operator which returns the procedure to $U_t$ every time the updated value leaves the truncation set (see Section \ref{MON} for details). These SA procedures have the following main characteristics: (1) inhomogeneous random functions $R_t$; (2) state-dependent matrix-valued random step-sizes; (3) truncations with random and moving (shrinking or expanding) bounds. The main motivation for these comes from parametric statistical applications: (1) is needed for recursive parameter estimation procedures for non-i.i.d. models; (2) is required to guarantee asymptotic optimality and efficiency of statistical estimation; (3) is needed for various different adaptive truncations, in particular, for the ones arising from auxiliary estimators (see Sharia (2014)\nocite{Shar4} for a more detailed discussion of these extensions). Note that the idea of truncations goes back to Khas'minskii and Nevelson (1972)\nocite{Khas} and Fabian (1978)\nocite{Fab} (see also Chen and Zhu (1986)\nocite{Chen2}, Chen et al. (1987)\nocite{Chen1}, Andrad{\'o}ttir (1995)\nocite{Andr}, Sharia (1997)\nocite{Shar0}, Tadic (1997, 1998)\nocite{Tadic1}\nocite{Tadic2}, Lelong (2008)\nocite{Lel}. A comprehensive bibliography and some comparisons can be found in Sharia (2014)\nocite{Shar4}). Convergence of the above class of procedures was studied in Sharia (2014) and the results on the rate of convergence were established in Sharia and Zhong (2016). 
In this paper, we derive further asymptotic properties of these procedures. In particular, we show that under quite mild conditions, SA procedures are asymptotically linear in the statistical sense, that is, they can be represented as weighted sums of random variables. Therefore, a suitable form of the central limit theorem can be applied to derive asymptotic distribution of the corresponding SA process. Since some of the conditions in the main statements might be difficult to interpret, we present explanatory remarks and corollaries. We also discuss the case of the classical SA and demonstrate that truncations with moving bounds make it possible to use SA even when the standard conditions on the function $R$ do not hold. Finally, applications of the above results are discussed and some simulations are presented to illustrate the theoretical results of the paper. Proofs of some technical parts are postponed to Appendices. \section{Main results}\label{MR} \subsection{Notation and preliminaries}\label{MON} Let $(\Omega, ~ \cf,F=(\cf_t)_{t\geq 0}, ~P)$ be a stochastic basis satisfying the usual conditions. Suppose that for each $t=1,2, \dots$, we have $( {\cal{B}} ( \mathbb{R}^m) \times \cf )$-measurable functions $$ \begin{array}{cl} R_t(z)= R_t(z,\omega) &:\mathbb{R}^m \times \Omega \to \mathbb{R}^m \\ \ve_t(z)=\ve_t(z,\omega) &:\mathbb{R}^m \times \Omega \to \mathbb{R}^m \\ \gamma_t(z)=\gamma_t(z,\omega)&:\mathbb{R}^m \times \Omega \to \mathbb{R}^{m\times m} \end{array} $$ such that for each $z\in \mathbb{R}^m$, the processes $R_t(z) $ and $\gamma_t(z)$ are predictable, i.e., $R_t(z) $ and $\gamma_t(z)$ are $\cf_{t-1}$ measurable for each $t$. Suppose also that for each $z\in \mathbb{R}^m$, the process $\ve_t(z) $ is a martingale difference, i.e., $\ve_t(z) $ is $\cf_{t}$ measurable and $E\left\{\ve_t(z)\mid{\cal{F}}_{t-1}\right\}=0$. We also assume that $$ R_t(z^0)=0 $$ for each $ t=1, 2, \dots $, where $z^0 \in \mathbb{R}^m$ is a non-random vector. 
Suppose that $h=h(z)$ is a real-valued function of $ z \in {{\mathbb{R}}}^m$. Denote by $ h'(z)$ the row-vector of partial derivatives of $h$ with respect to the components of $z$, that is, $ h'(z)=\left(\frac{{\partial}}{{\partial} z_1} h(z), \dots, \frac{{\partial}}{{\partial} z_m} h(z)\right). $ Also, we denote by $h''(z)$ the matrix of second partial derivatives. The $m\times m$ identity matrix is denoted by ${{\bf I}}$. Denote by $[a]^+$ and $[a]^-$ the positive and negative parts of $a\in \mathbb R$, i.e. $[a]^+=\max(a,0)$ and $[a]^-=\min(a,0)$. Let $U \subset \mathbb{R}^m$ be a closed convex set and define a truncation operator as a function $\Phi_U(z) : \mathbb{R}^m \longrightarrow \mathbb{R}^m$, such that $$ \Phi_U(z) =\begin{cases} z & \text{if} \;\; z\in U \\ z^* & \text{if} \;\; z\notin U, \end{cases} $$ where $z^*$ is the point in $U$ that minimizes the distance to $z$. Suppose that $z^0 \in \mathbb{R}^m$. We say that a random sequence of sets $U_t =U_t(\omega)$ ($ t=1,2, \dots $) from $\mathbb{R}^m $ is {\underline{\bf admissible}} for $z^0$ if \medskip \noindent $\bullet$ for each $t$ and $\omega,$ $U_t(\omega)$ is a closed convex subset of $ \mathbb{R}^m$; % \\ $\bullet$ for each $t$ and $z \in \mathbb{R}^m$, the truncation $\Phi_{U_t}(z) $ is $ {\cal{F}}_{t}$ measurable; % \\ $\bullet$ $z^0\in U_t$ eventually, i.e., for almost all $\omega$ there exists $t_0(\omega)<\infty$ such that $z^0\in U_t(\omega)$ whenever $t >t_0(\omega)$. \medskip Assume that $Z_0 \in \mathbb{R}^m$ is some starting value and consider the procedure \begin{equation}\label{TSA} Z_t= \Phi_{U_t} \Big( Z_{t-1}+\gm_t(Z_{t-1}) \Psi_t(Z_{t-1})\Big), \quad t=1,2,\dots \end{equation} where $U_t $ is {admissible} for $z^0$, $$ \Psi_t(z)=R_t(z)+\ve_t(z), $$ and $R_t(z) $, $\ve_t(z)$, $\gm_t(z)$ are the random fields defined above. 
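For concreteness, here is a minimal Python sketch (an illustration, not part of the original text) of the truncation operator $\Phi_U$ for two common closed convex choices of $U$, a coordinate box and a Euclidean ball; in both cases the operator returns $z$ itself when $z\in U$ and the closest point of $U$ otherwise.

```python
import numpy as np

def truncate_box(z, lower, upper):
    # Phi_U for U = [lower_1, upper_1] x ... x [lower_m, upper_m]:
    # coordinatewise clipping is exactly the Euclidean projection onto a box.
    return np.clip(z, lower, upper)

def truncate_ball(z, center, radius):
    # Phi_U for U = {u : ||u - center|| <= radius}: radial projection.
    d = np.asarray(z, dtype=float) - center
    norm = np.linalg.norm(d)
    return np.asarray(z, dtype=float) if norm <= radius else center + radius * d / norm
```

Admissible sequences $U_t$ with moving bounds are obtained by letting the box limits or the ball radius depend on $t$.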
Everywhere in this work, we assume that \begin{equation}\label{GTSA1} E\left\{\Psi_t(Z_{t-1})\mid{\cal{F}}_{t-1}\right\}=R_t(Z_{t-1}) \end{equation} and \begin{equation}\label{GTSA2} E\left\{\ve_t^T(Z_{t-1})\ve_t(Z_{t-1})\mid{\cal{F}}_{t-1}\right\}= \left[E\left\{\ve_t^T(z)\ve_t(z)\mid{\cal{F}}_{t-1}\right\} \right] _{z=Z_{t-1}}, \end{equation} and the conditional expectations \eqref{GTSA1} and \eqref{GTSA2} are assumed to be finite. \medskip \begin{remark}\label{disint} {\rm Condition \eqref{GTSA1} ensures that $\ve_t(Z_{t-1})$ is a martingale difference. Conditions \eqref{GTSA1} and \eqref{GTSA2} obviously hold if, e.g., the measurement errors $\ve_t(z)$ are independent random variables, or if they are state independent. In general, since we assume that all conditional expectations are calculated as integrals w.r.t. corresponding regular conditional probability measures (see the convention below), these conditions can be checked using the disintegration formula (see, e.g., Theorem 5.4 in Kallenberg (2002)\nocite{Kall}).} \end{remark} \noindent {\bf \em Convention.} \noindent $\bullet$ {\em Everywhere in the present work convergence and all relations between random variables are meant with probability one w.r.t. the measure $P$ unless specified otherwise. \\ {$\bullet$} A sequence of random variables $(\zeta_t)_{t\ge1}$ has a property {\underline {{\bf \em eventually}}} if for every $\omega$ in a set $\Omega_0$ of $P$ probability 1, the realisation $\zeta_t(\omega)$ has this property for all $t$ greater than some $t_0(\omega)<\infty$.}\\ {$\bullet$} {\em All conditional expectations are calculated as integrals w.r.t. corresponding regular conditional probability measures.}\\ {$\bullet$} {\em For a real-valued function $h(z)$, we set $\inf_{z\in U} h(z)=1$ whenever $U=\emptyset$.} \subsection{Notes on convergence}\label{NC} \begin{remark}\label{PoD}{\rm This subsection contains simple results describing sufficient conditions for convergence and rate of convergence. 
We decided to present this material here for the sake of completeness, noting that the proof, as well as a number of different sets of sufficient conditions, can be found in Sharia (2014\nocite{Shar4}) and Sharia and Zhong (2016\nocite{Sh-Zh1}). } \end{remark} % \begin{proposition}\label{SC} Suppose that $Z_t$ is a process defined by \eqref{TSA} and $U_t$ are admissible truncation sets for $z^0$. \smallskip \noindent $\bullet$ Suppose that \begin{description} \item[(D1)] for large $t$ $$ (z-z^0)^T R_t(z) \le 0 \;\;\;\mbox{if}\;\;\;z \in U_{t-1}; $$ \item[(D2)] there exists a predictable process $r_t>0$ such that $$ \sup_{z \in U_{t-1}}\frac {E \left\{ \|R_t(z)+\ve_t(z)\|^2 \mid{\cf}_{t-1}\right\}} {1+\| z-z^0\|^2}\le r_t $$ eventually, and $$ \sum_{t=1}^{\infty} {r_{t}}{a_t^{-2}} <\infty, \qquad P\mbox{-a.s.} $$ \end{description} Then $\|Z_t-z^0\|$ converges ($P$-a.s.) to a finite limit. \smallskip \noindent $\bullet$ Furthermore, if \begin{description} \item[(D3)] for each $\epsilon\in (0, 1),$ there exists a predictable process $\nu_t>0$ such that $$ \inf_{\stackrel{ \epsilon \le \|z-z^0\| \le 1/\epsilon}{z\in U_{t-1}}} -(z-z^0)^T R_t(z)> \nu_t $$ eventually, where $$ \sum_{t=1}^{\infty} {\nu_{t}}{a_t^{-1}} =\infty, \qquad P\mbox{-a.s.} $$ \end{description} Then $Z_t$ converges ($P$-a.s.) to $z^0$. \smallskip \noindent $\bullet$ Finally, if \begin{description}\item[(W1)] $$ \Delta_{t-1}^T R_t(Z_{t-1}) \leq -{\frac 1 2} \Delta a_t\|\Delta_{t-1}\|^2 $$ eventually; \item[(W2)] there exists $0<\delta\leq1$ such that $$ \sum_{t=1}^\infty a_t^{\delta-2}E\left\{\|(R_t(Z_{t-1})+{\ve}_t(Z_{t-1}))\|^2 \mid{\cf}_{t-1}\right\}<\infty. $$ \end{description} Then $a_t^{\delta}\|Z_t-z^0\|^2$ converges to a finite limit ($P$-a.s.). \end{proposition} {\bf Proof.} See Remark \ref{PoD} above. 
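The role of the requirement that $z^0\in U_t$ only eventually can be seen in a small simulation (an illustrative sketch with assumed choices: $R_t(z)=-(z-3)$, $\gm_t=1/t$, noise of standard deviation $0.5$, and expanding bounds $U_t=[-c_t,c_t]$ with $c_t=\log(t+2)$, so that $z^0=3$ lies outside the first few truncation sets):

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated procedure Z_t = Phi_{U_t}( Z_{t-1} + gamma_t * (R_t(Z_{t-1}) + eps_t) )
# with expanding bounds U_t = [-c_t, c_t], c_t = log(t + 2).  The root z^0 = 3
# is not in U_t until c_t >= 3 (t >= 19), which admissibility permits, and
# R_t(z) = -(z - 3) satisfies (D1) and (D3) with nu_t bounded away from zero.
z, z0 = 0.0, 3.0
for t in range(1, 20_001):
    c_t = np.log(t + 2.0)
    z = float(np.clip(z + (1.0 / t) * (-(z - z0) + rng.normal(0.0, 0.5)), -c_t, c_t))
```

Once the bounds have absorbed $z^0$, the truncation becomes inactive and the iterate settles at the root.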
\subsection{Asymptotic linearity}\label{ALin} In this subsection we establish that, under certain conditions, the SA process defined by \eqref{TSA} is asymptotically linear in the statistical sense, that is, it can be represented as a weighted sum of random variables. Therefore, a suitable form of the central limit theorem can be applied to derive the corresponding asymptotic distribution. \begin{theorem}\label{ASM} Suppose that the process $Z_t$ is defined by \eqref{TSA} and \begin{description} \item[(E1)] \begin{equation}\label{NoTrun} Z_t=Z_{t-1}+\gamma_t(Z_{t-1})[R_t(Z_{t-1})+\ve_t (Z_{t-1})]\;\;\; \mbox{eventually. } \end{equation} \end{description} Suppose also that there exists a sequence of invertible random matrices $A_t$ such that \begin{description} \item[(E2)] $$ A_t^{-1}\longrightarrow 0 \;\;\;\mbox{ and }\;\;\; A_t\gamma_t(z^0) A_t \longrightarrow \eta \;\;\;\mbox{ in probability, } $$ where $\eta$ is an a.s.\ finite matrix; \item[(E3)] $$ \lim_{t\rightarrow \infty}A_t^{-1}\sum_{s=1}^t \left [ \Delta \gamma_s^{-1}(z^0) \Delta_{s-1}+\tilde R_s(z^0+\Delta_{s-1}) \right ]=0 $$ in probability, where $$\Delta\gamma_s^{-1}(z^0)=\gamma_s^{-1}(z^0)-\gamma_{s-1}^{-1}(z^0),$$ $$\Delta_s=Z_s-z^0 \;\;\mbox{ and }\;\; \tilde R_s(z)=\gm_s^{-1}(z^0)\gm_s(z)R_s(z);$$ \item[(E4)] $$ \lim_{t\to \infty}A_t^{-1}\sum_{s=1}^t \Big[\tilde \ve_s(z^0+\Delta_{s-1})-\ve_s(z^0)\Big]=0 $$ in probability, where $$ \tilde \ve_s(z)=\gm_s^{-1}(z^0)\gm_s(z)\ve_s(z). $$ \end{description} Then $A_t(Z_t-Z_t^*)\longrightarrow 0$ in probability, where $$Z_t^*=z^0+\gamma_t(z^0) \sum_{s=1}^t\ve_s(z^0);$$ that is, $Z_t$ is locally asymptotically linear at $z^0$ with $\gamma_t=\gamma_t(z^0)$ and $\psi_t=\ve_t(z^0)$. \end{theorem} {\bf Proof.} Using the notation $\gamma_t=\gamma_t(z^0)$, $\ve_t=\ve_t(z^0)$ and $\Delta_t=Z_t-z^0$, \eqref{NoTrun} can be rewritten as $$ \Delta_t-\Delta_{t-1}=\gamma_t \tilde R_t(Z_{t-1})+\gamma_t \tilde\ve_t(Z_{t-1}) $$ eventually.
Multiplying both sides by $\gamma_t^{-1}$ and summing, we have $$ \sum_{s=1}^t[\gamma_s^{-1}\Delta_s-\gamma_{s-1}^{-1}\Delta_{s-1}]=\sum_{s=1}^t[\Delta\gamma_s^{-1}\Delta_{s-1}+\tilde R_s(Z_{s-1})+\tilde\ve_s(Z_{s-1})], $$ and since the sum on the left hand side reduces to $\gamma_t^{-1}\Delta_t-\gamma_0^{-1}\Delta_0$, we obtain $$ \Delta_t= \gamma_t \left [{\cal H}_t+\sum_{s=1}^t\tilde\ve_s(Z_{s-1}) +\gamma_{0}^{-1}\Delta_{0}\right ] $$ eventually, where $$ {\cal H}_t=\sum_{s=1}^t[\Delta\gamma_s^{-1}\Delta_{s-1}+\tilde R_s(Z_{s-1})]. $$ Since $Z_t-Z_t^*=\Delta_t-(Z_t^*-z^0)$, we have $$ Z_t-Z_t^*=\gamma_t \Big[{\cal H}_t+\gamma_{0}^{-1}\Delta_{0} \Big]+\gamma_t\sum_{s=1}^t \Big[\tilde \ve_s(Z_{s-1})-\ve_s\Big], $$ and $$ A_t(Z_t-Z_t^*)= A_t\gamma_t A_t A_t^{-1} \Big[{\cal H}_t+\gamma_{0}^{-1}\Delta_{0} \Big ]+A_t\gamma_t A_t A_t^{-1} \sum_{s=1}^t \Big[\tilde \ve_s(Z_{s-1})-\ve_s\Big] $$ eventually. By conditions (E2), (E3) and (E4), we have $$ A_t\gamma_t A_t \xrightarrow{P} \eta,\;\; A_t^{-1} \Big[{\cal H}_t+\gamma_{0}^{-1}\Delta_{0} \Big ]\xrightarrow{P} 0\;\;\mbox{ and }\;\;A_t^{-1} \sum_{s=1}^t \Big[\tilde \ve_s(Z_{s-1})-\ve_s\Big]\xrightarrow{P} 0. $$ Therefore, $A_t(Z_t-Z_t^*)\longrightarrow 0$ in probability, that is, $Z_t$ is locally asymptotically linear at $z^0$.\hfill $\blacksquare$ \begin{proposition}\label{ASM2} Suppose that $A_t$ in Theorem \ref{ASM} are positive definite diagonal matrices with non-decreasing elements and \begin{description} \item[(Q1)] $$ A_t^{-2} \sum_{s=1}^t A_s\left [ \Delta\gamma_s^{-1}(z^0)\Delta_{s-1}+\tilde R_s(z^0+\Delta_{s-1})\right ]\longrightarrow 0 $$ \end{description} in probability, where $\tilde R_t$ is defined in (E3). Then (E3) in Theorem \ref{ASM} holds. \\ \end{proposition} {\bf Proof.} Denote $$ \chi_s=A_s[\Delta \gamma_s^{-1}(z^0)\Delta_{s-1}+\tilde R_s(z^0+\Delta_{s-1})] $$ and note that $$ A_t^{-1}\sum_{s=1}^t [\Delta \gamma_s^{-1}(z^0)\Delta_{s-1}+\tilde R_s(z^0+\Delta_{s-1})]=A_t^{-1}\sum_{s=1}^t A_s^{-1}\chi_s\;.
$$ Let us denote $P_s=A_s^{-1}$ and $Q_s=\sum_{m=1}^s\chi_m$. Then, using the summation-by-parts formula $$ \sum_{s=1}^tP_s\Delta Q_s=P_t Q_t-\sum_{s=1}^t\Delta P_s Q_{s-1}\;\;\; \mbox{with}\;\;\;Q_0=0\;, $$ we obtain $$ A_t^{-1}\sum_{s=1}^t A_s^{-1}\chi_s=A_t^{-2} \sum_{s=1}^t \chi_s +{\cal G}_t ~~~~ \mbox{where} ~~~~ {\cal G}_t= -A_t^{-1} \sum_{s=1}^t \Delta A_s^{-1}\sum_{m=1}^{s-1}\chi_m. $$ Since $A_s$ are diagonal, $$\Delta A_s^{-1}=A_s^{-1}-A_{s-1}^{-1}=-A_s^{-1}(A_s-A_{s-1})A_{s-1}^{-1}=-\Delta A_s A_s^{-1}A_{s-1}^{-1}.$$ Therefore, $$ {\cal G}_t=A_t^{-1} \sum_{s=1}^t \Delta A_s \left \{A_s^{-1}A_{s-1}^{-1}\sum_{m=1}^{s-1}\chi_m\right\}\;. $$ Denote by $A_s^{(j,j)}$ the $j$-th diagonal element of $A_s$. Since $0\leq A_{s-1}^{(j,j)}\leq A_{s}^{(j,j)}$ for all $j$, $$ A_{s-1}^{-2}\sum_{m=1}^{s-1}\chi_m\longrightarrow 0\implies A_s^{-1}A_{s-1}^{-1}\sum_{m=1}^{s-1}\chi_m\longrightarrow 0, $$ and the premise of this implication holds by (Q1). Because of the diagonality, we can apply the Toeplitz Lemma to the elements of ${\cal G}_t$, which gives $$ A_t^{-1}\sum_{s=1}^t [\Delta \gamma_s^{-1}(z^0)\Delta_{s-1}+\tilde R_s(z^0+\Delta_{s-1})]=A_t^{-2} \sum_{s=1}^t \chi_s +{\cal G}_t\longrightarrow 0\;. $$ \hfill$\blacksquare$ \begin{proposition}\label{ASM3} Suppose that $A_t$ in Theorem \ref{ASM} are positive definite diagonal matrices with non-decreasing elements. Denote by $\alpha^{(j)}$ the $j$-th element of $\alpha\in \mathbb R^m$ and by $A^{(j,j)}$ the $j$-th diagonal element of the matrix $A$. Suppose also that \begin{description} \item[(Q2)] $$ \lim_{t\to\infty} (A_t^{(j,j)})^{-2} \sum_{s=1}^t E\Big\{\Big[\tilde\ve_s^{(j)}(z^0+\Delta_{s-1})-\ve_{s}^{(j)}(z^0)\Big]^2\Big|{\cal F}_{s-1}\Big\}= 0 $$ \end{description} in probability $P$ for all $j=1,...,m$, where $\tilde\ve_s$ is defined in (E4). Then (E4) in Theorem \ref{ASM} holds. \end{proposition} {\bf Proof.} Denote $M_t=\sum_{s=1}^t \Big[\tilde \ve_s(z^0+\Delta_{s-1})-\ve_s(z^0)\Big]$.
By the assumptions, $M_t$ is a martingale and the quadratic characteristic $\langle M^{(j)}\rangle_t$ of the $j$-th component $M_t^{(j)}$ is $$ \langle M^{(j)}\rangle_t=\sum_{s=1}^t E_{z^0}\Big\{\Big[\tilde\ve_s^{(j)}(z^0+\Delta_{s-1})-\ve_{s}^{(j)}(z^0)\Big]^2\Big|{\cal F}_{s-1}\Big\}. $$ Using the Lenglart-Rebolledo inequality (see, e.g., Liptser and Shiryayev (1989\nocite{LipShir}), Section 1.9), we have $$ P\Big\{(M_t^{(j)})^2\geq K^2(A_t^{(j,j)})^2\Big\}\leq \frac{\epsilon}{K}+P\Big\{\langle M^{(j)}\rangle_t\;\;\geq\epsilon(A_t^{(j,j)})^2\Big\} $$ for each $K>0$ and $\epsilon>0$. Now by (Q2), $\langle M^{(j)}\rangle_t/(A_t^{(j,j)})^2\longrightarrow 0$ in probability $P$ and therefore $M_t^{(j)}/A_t^{(j,j)}\longrightarrow0$ in probability $P$. Since $A_t$ is diagonal, (E4) holds.\hfill$\blacksquare$ \begin{remark}\label{ChoNor} {\rm Let us use condition (E3) in Theorem \ref{ASM} to construct an optimal step-size sequence $\gm_t(z^0)$. Consider condition (Q1) in the one-dimensional case. Since $R_t(z^0)=0$, we have \begin{eqnarray*} &&A_t\left[\Delta \gamma_t^{-1}(z^0)\Delta_{t-1}+\tilde R_t(z^0+\Delta_{t-1})\right]\\ &=&\left[\Delta \gamma_t^{-1}(z^0)+e_t\frac{ R_t(z^0+\Delta_{t-1})- R_t(z^0)}{\Delta_{t-1}}\right]A_t\Delta_{t-1}, \end{eqnarray*} where $e_t=\gm_t^{-1}(z^0)\gm_t(z^0+\Delta_{t-1})$. In most applications, the rate of $A_t$ is $\sqrt t$ and $\sqrt t \Delta_{t}$ is stochastically bounded. Therefore, for (Q1) to hold, one should at least have the convergence $$ \Delta \gamma_t^{-1}(z^0)+e_t\frac{ R_t(z^0+\Delta_{t-1})- R_t(z^0)}{\Delta_{t-1}}\longrightarrow 0. $$ If $\gm_t(z)$ is continuous, given that $\Delta_t\longrightarrow0$, we expect $e_t\longrightarrow1$. Therefore, we should have $$ \Delta \gamma_t^{-1}(z^0)\approx -R_t'(z^0). $$ By similar arguments in the multi-dimensional case, we expect the above relation to hold for large $t$'s, where $R_t'(z^0)$ is the matrix of derivatives of $R_t(z)$ at $z=z^0$.
So, an optimal choice of the step-size sequence should be $$ \gm_t^{-1}(z)=-\sum_{s=1}^t R_s'(z), $$ or a sequence which is asymptotically equivalent to this sum. } \end{remark} \begin{remark}\label{ev}{\rm {\bf (a)} Condition (E1) in Theorem \ref{ASM} holds if the truncations in \eqref{TSA} do not occur for large $t$'s. More precisely, (E1) holds if the truncations in \eqref{TSA} do not occur for $t > T$, for some possibly random $T$. \noindent {\bf (b)} Let us now consider the case when $U_t$ is a shrinking sequence. For example, suppose that a consistent, but not necessarily efficient, auxiliary estimator $\tilde Z_t$ is available. Then one can take the truncation sets $U_t=S(\tilde Z_t, r_t)$, a sequence of closed spherical sets in $\mathbb R^m$ with centre $\tilde Z_t$ and radius $r_t\longrightarrow0$. The resulting procedure is obviously consistent, as $\| Z_t-\tilde Z_t\|\leq r_t\longrightarrow 0$ and $\tilde Z_t\longrightarrow z^0$. However, if $r_t$ decreases too rapidly, condition (E1) may fail to hold. Intuitively, it is quite clear that we should not allow $r_t$ to decrease too rapidly, as this may result in $ Z_t$ having the same asymptotic properties as $\tilde Z_t$, which might not be optimal. This truncation sequence will be admissible if $\|\tilde Z_t- z^0\|<r_t$ eventually. In these circumstances, (E1) will hold if the procedure generates a sequence $ Z_t$ which converges to $z^0$ faster than $r_t$ converges to 0. \noindent {\bf (c)} The considerations described in (b) lead to the following construction. Suppose that an auxiliary estimator $\tilde Z_t$ has a convergence rate $d_t$, in the sense that $d_t$ is a sequence of positive r.v.'s such that $d_t\longrightarrow \infty$ and $d_t(\tilde Z_t- z^0) \to 0$ ~ $P$-a.s. Let us consider the following truncation sets $$ U_t=S\left(\tilde Z_t,c(d_t^{-1}+a_t^{-1})\right), $$ where $c$ and $a_t$ are positive and $a_t\longrightarrow\infty$.
Then the truncation sequence is obviously admissible since $\|\tilde Z_t- z^0\| < cd_t^{-1}$ eventually. Now, if we can claim (using Proposition \ref{SC} or otherwise) that $a_t\| Z_t - z^0\| \longrightarrow 0 $, then condition (E1) holds. Indeed, suppose that (E1) does not hold, that is, the truncations in \eqref{TSA} occur infinitely many times on a set $A$ of positive probability. This would imply that $ Z_t$ appears on the surface of the spheres $U_t$ infinitely many times on $A$. Since $ z^0 \in S(\tilde Z_t, c d_t^{-1})$ eventually, we obtain that $\| Z_t- z^0\| \geq c a_t^{-1}$ infinitely many times on $A$, which contradicts our assumptions. Another possible choice of the truncation sequence is $$ U_t=S\left(\tilde Z_t,c\left(d_t^{-1} \vee a_t^{-1}\right)\right). $$ (Here, $a\vee b=\max(a,b)$ and $a\wedge b=\min(a,b)$.) If we can claim by Proposition \ref{SC} or otherwise that $a_t\| Z_t - z^0\| \to 0 $, then condition (E1) holds. Indeed, suppose that (E1) does not hold, that is, on a set $A$ of positive probability the truncations in \eqref{TSA} occur infinitely many times. This would imply that $$ \|\tilde Z_t- Z_t\|=c(d_t^{-1} \vee a_t^{-1}) $$ and $$ 1= c^{-1}(d_t \wedge a_t) \|\tilde Z_t- Z_t\| \le c^{-1}(d_t \wedge a_t) \|\tilde Z_t- z^0\|+ c^{-1}(d_t \wedge a_t)\| Z_t- z^0\| $$ infinitely many times on $A$, which contradicts our assumptions. }\end{remark} \section{Special models and examples}\label{SpME} \subsection{Classical problem of stochastic approximation}\label{CSA} Consider the classical problem of stochastic approximation: finding a root $z^0$ of the equation $R(z)=0$. Note that in the classical case, the step-size sequence can in general be of the form $\gamma_t (Z_{t-1})=a_t ^{-1} \gamma(Z_{t-1})$. However, without loss of generality we can assume that $\gamma_t= a_t^{-1}\bf I$, since $\gamma(Z_{t-1})$ can be included in $R$ and $\ve_t$.
Therefore, taking the step-size sequence $\gamma_t= a_t^{-1}\bf I$, where $a_t\longrightarrow \infty$ is a predictable scalar process, let us consider the procedure \begin{equation}\label{SN} Z_{t}=\Phi_{U_t}\Big(Z_{t-1}+ a_t^{-1} [ R(Z_{t-1})+\ve_t(Z_{t-1})]\Big). \end{equation} \begin{remark}\label{RemCAS} {\rm In the corollary below we derive simple sufficient conditions for asymptotic linearity in the case when $a_t=t$. We also assume, using Proposition \ref{SC} or otherwise, that $t^{\delta/2} (Z_t-z^0) \longrightarrow 0$ for any $\delta \in (0,1)$. Note also that condition (A1) below requires that the procedure is designed in such a way that the truncations in \eqref{SN} do not occur for large $t$'s (see Remark \ref{ev} for a detailed discussion of this requirement).} \end{remark} \begin{corollary} \label{CAS} Suppose that $Z_t$ is defined by \eqref{SN}, $a_t=t$ and $t^{\delta/2} (Z_t-z^0) \longrightarrow 0$ for any $\delta \in (0,1)$. Suppose also that \begin{description} \item[(A1)] $$ Z_{t}=Z_{t-1}+\frac1 {t} [ R(Z_{t-1})+\ve_t(Z_{t-1})] \;\;\;\mbox{ eventually;} $$ \item[(A2)] $$ R(z^0+u)=-u+\alpha(u) \;\;\;\mbox{ where }\;\;\; \|\alpha (u)\|= O(u^{1+\epsilon}) $$ as $u \to 0$ for some $\epsilon>0$; \item[(A3)] $$ t^{-1}\sum_{s=1}^tE\Big\{\Big[\ve_s(z^0+u_s)-\ve_s(z^0)\Big]^2\Big|{\cal F}_{s-1}\Big\}\longrightarrow 0 $$ in probability, where $u_s$ is any predictable process with the property $u_s\longrightarrow 0$. \end{description} Then $Z_t$ is asymptotically linear. \end{corollary} {\bf Proof.} Let $A_t=\sqrt t {\bf I}$, then $A_t \gamma_t A_t={\bf I}$ since $\gm_t={\bf I}/t$. Condition (E2) in Theorem \ref{ASM} is satisfied. On the other hand, since $\tilde R(z)=R(z)$ and $\Delta\gamma_t^{-1}={\bf I}$, we have $$ A_t^{-2} \sum_{s=1}^t A_s\left [ \Delta\gamma_s^{-1}\Delta_{s-1}+\tilde R_s(Z_{s-1})\right ]=\frac 1 t \sum_{s=1}^t \sqrt s [\Delta_{s-1}+R(z^0+\Delta_{s-1})]=\frac 1 t \sum_{s=1}^t \sqrt s \alpha(\Delta_{s-1}).
$$ By (A2), there exists a constant $K>0$ such that $$ \|\sqrt s \alpha (\Delta_{s-1})\|\leq K\left\|\sqrt s\Delta_{s-1}^{1+\epsilon}\right\| =K\left\|\sqrt{\frac s {s-1}}\left [(s-1)^{\frac{1} {2(1+\epsilon)}}\Delta_{s-1}\right]^{1+\epsilon}\right\| $$ eventually. Since $1/ [2(1+\epsilon)]<1/2$, we have $(s-1)^{1/ {[2(1+\epsilon)]}}\Delta_{s-1}\longrightarrow 0$, and therefore $\|\sqrt s \alpha (\Delta_{s-1})\|\longrightarrow 0$ as $\Delta_s\longrightarrow 0$. Thus, by the Toeplitz Lemma (see Lemma \ref{Toep} in Appendix A), $$ \frac 1 t \sum_{s=1}^t \sqrt s \alpha(\Delta_{s-1})\longrightarrow 0. $$ So, (Q1) in Proposition \ref{ASM2} holds, implying that condition (E3) in Theorem \ref{ASM} is satisfied. Since $\tilde \ve _t(z)=\ve _t(z)$, it follows from (A3) that condition (Q2) in Proposition \ref{ASM3} holds. This implies that (E4) in Theorem \ref{ASM} holds. Thus, all the conditions of Theorem \ref{ASM} hold, implying that $Z_t$ is asymptotically linear. \hfill$\blacksquare$ \begin{remark} {\rm By asymptotic linearity, asymptotic normality is an immediate consequence of Corollary \ref{CAS}. Indeed, we have $\sqrt t(Z_t-Z_t^*)\longrightarrow 0$ in probability, where $$ Z_t^*=z^0+\frac 1 t\sum_{s=1}^t \ve_s(z^0). $$ \noindent So, $Z_t$ and $Z_t^*$ have the same asymptotic distribution. Now, to obtain the asymptotic distribution of $Z_t$, it remains only to apply the central limit theorem for martingales. }\end{remark} \begin{remark} {\rm Note that condition (A2) above requires that the function $R$ be scaled in such a way that the derivative at $z^0$ is $-1$. Alternatively, one can consider a step-size sequence of the form $\gamma_t (Z_{t-1})=t ^{-1} \gamma(Z_{t-1}),$ with appropriately chosen $ \gamma(Z_{t-1}).$ A detailed discussion of the selection of an appropriate step-size sequence in the context of statistical parametric estimation is given in Section \ref{PEGM}.
} \end{remark} \begin{example}\label{Poly} {\rm Let $l$ be a positive integer and $$ R(z)=-\sum_{i=1}^{l}C_i(z-z^0)^i, $$ where $z , z^0 \in \mathbb{R}$ and $C_i$ are real constants. Suppose that $$ (z-z^0)R(z)\leq0 \;\;\;\mbox{ for all } \;\;\;z\in \mathbb R. $$ Unless $l =1$, we cannot use the standard SA without truncations, as the standard condition on the rate of growth at infinity does not hold. So, we consider $Z_t$ defined by \eqref{SN} with a slowly expanding truncation sequence $U_t=[-u_t, u_t]$, where $$ \sum_{t=1}^{\infty} u_t^{2l}~ a_t^{-2} <\infty. $$ We can assume, for example, that $u_t=Ct^{r/{2l}}$, where $C$ and $r$ are some positive constants and $r < 1$. One can also take a truncation sequence which is independent of $l$, e.g., $u_t=C \log t$, where $C$ is a positive constant. Suppose for simplicity that the measurement errors are state free with the property that $ \sum_{t=1}^{\infty} \sigma_t^2 {a_t^{-2}} <\infty $, where $ \sigma_t^2= {E \left\{ \ve_t^2 \mid{\cf}_{t-1}\right\}}. $ Then $|Z_t-z^0|$ converges ($P$-a.s.) to a finite limit. Furthermore, if $z^0$ is a unique root, then $Z_t\longrightarrow z^0$ ($P$-a.s.) provided that $ \sum_{t=1}^{\infty} a_{t}^{-1}=\infty. $ Finally, if $Z_t$ is defined by \eqref{SN} with $a_t=C_1t$, then $t^\alpha (Z_t-z^0)\xrightarrow{a.s.} 0$ for any $\alpha<1/2$ (see Sharia and Zhong (2016\nocite{Sh-Zh1}) for details). So, it follows that the conditions in Corollary \ref{CAS} hold (with $R$ replaced by $C_1^{-1}R$), implying that $Z_t$ is locally asymptotically linear. Now, depending on the nature of the error terms, one can apply a suitable form of the central limit theorem to obtain asymptotic normality of $Z_t$.
} \end{example} \subsection{Linear procedures} Consider the recursive procedure \begin{equation}\label{LP} Z_t=Z_{t-1}+\gamma_t(h_t-\beta_t Z_{t-1}) \end{equation} where $\gamma_t$ is a predictable positive definite matrix process, $\beta_t$ is a predictable positive semi-definite matrix process and $h_t$ is an adapted vector process (i.e., $h_t$ is ${\cal F}_t$-measurable for $t\geq 1$). If we assume that $E\{h_t|{\cal F}_{t-1}\}=\beta_t z^0$, we can view \eqref{LP} as an SA procedure designed to find the common root $z^0$ of the linear functions $$ R_t(u)=E\{h_t-\beta_t u|{\cal F}_{t-1}\}=E\{h_t|{\cal F}_{t-1}\}-\beta_t u=\beta_t(z^0-u) $$ which are observed with the random noise $$ \ve_t=\ve_t(u)=h_t-\beta_t u-R_t(u)=h_t-E\{h_t|{\cal F}_{t-1}\}=h_t-\beta_t z^0. $$ \begin{remark}\label{RemLP} {\rm Recursive procedures \eqref{LP} are linear in the sense that they locate the common root $z^0$ of the linear functions $R_t(u)=\beta_t(z^0-u)$. The second part of the corollary below shows that the process $Z_t$ is asymptotically linear in the statistical sense, that is, it can be represented as a weighted sum of random variables. The first part of the corollary contains sufficient conditions for convergence and rate of convergence. We present this material here for the sake of completeness, noting that the proof can be found in Sharia and Zhong (2016\nocite{Sh-Zh1}) (note also that (G1) below will hold if, e.g., $\Delta\gm_t^{-1}=\beta_t$).} \end{remark} \begin{corollary}\label{LRA} Suppose that $Z_t$ is defined by \eqref{LP} with $E(h_t|{\cal F}_{t-1})=\beta_tz^0$ and $a_t$ is a non-decreasing positive predictable process. \smallskip \noindent {\large \bf \em 1.} Suppose that \begin{description} \item[(G1)] $\Delta\gamma_t^{-1}-2\beta_t+\beta_t\gamma_t\beta_t$ is negative semi-definite eventually; \item[(G2)] $$ \sum_{t=1}^{\infty} a_t^{-1}E\{(h_t-\beta_t z^0)^T\gamma_t(h_t-\beta_t z^0)|{\cal F}_{t-1}\}<\infty.
$$ \end{description} Then $a_t^{-1}(Z_t-z^0)^T\gamma_t^{-1}(Z_t-z^0)$ converges to a finite limit ($P$-a.s.). \smallskip \noindent {\large \bf \em 2.} Suppose that $\gamma_t\longrightarrow 0$ and \begin{equation}\label{CLP} \gamma_t^{1/2}\sum_{s=1}^t (\Delta\gamma_s^{-1}-\beta_s)\Delta_{s-1}\longrightarrow 0 \end{equation} in probability, where $\Delta_t=Z_t-z^0$. Then $Z_t$ is asymptotically linear, that is, $$ \gm_t^{-1/2}(Z_t-z^0)=\gm_t^{1/2}\sum_{s=1}^t \ve_s+r_t(z^0), $$ where $r_t(z^0)\longrightarrow 0$ in probability. \\ \end{corollary} \noindent {\bf Proof.} Let us check the conditions of Theorem \ref{ASM} for $A_t=\gamma_t^{-1/2}.$ Conditions (E1) and (E2) trivially hold. Since $\ve_t(u)=h_t-\beta_tz^0$ is state free (i.e., it does not depend on $u$), (E4) also holds. Since $ \tilde R_t(Z_{t-1})=R_t(Z_{t-1}) =-\beta_t\Delta_{t-1}, $ we have $$ A_t^{-1}\sum_{s=1}^t \left ( \Delta \gamma_s^{-1}(z^0) \Delta_{s-1}+\tilde R_s(z^0+\Delta_{s-1}) \right )=\gamma_t^{1/2}\sum_{s=1}^t (\Delta\gamma_s^{-1}-\beta_s)\Delta_{s-1}\;, $$ and (E3) now follows from \eqref{CLP}. Thus, all conditions of Theorem \ref{ASM} are satisfied, which implies the required result.\hfill $\blacksquare$ % \begin{example}\label{AR1} {\rm Corollary \ref{LRA} can be applied to study the asymptotic behaviour of recursive least squares estimators in regression or time series models. To demonstrate this, let us consider a simple example of an AR(1) process $$ X_t=\theta X_{t-1}+\xi_t, $$ where ${\xi}_t$ is a sequence of square integrable random variables with mean zero. Consider the recursive least squares (LS) estimator of $\theta$ defined by \begin{eqnarray} &&\hat\theta_t=\hat\theta_{t-1}+\hat I_t^{-1} X_{t-1}\left(X_t-\hat \theta_{t-1}X_{t-1}\right) , \nonumber \\ &&\hat I_t=\hat I_{t-1}+X_{t-1}^2, \qquad t=1,2,\dots \nonumber \end{eqnarray} where $\hat\theta_0$ and $\hat I_0 > 0$ are any starting points and $ \hat I_t=\hat I_0+\sum_{s=1}^t X_{s-1}^2.
$ This procedure is clearly a particular case of \eqref{LP} with $$ z^0=\theta, ~~~~~ Z_t= \hat\theta_t, ~~~~~ \gamma_t=\hat I_t^{-1}, ~~~~~h_t= X_{t-1}X_t, ~~~~~~~ \beta_t=X_{t-1}^2. $$ Since $\Delta\gm_t^{-1}=X_{t-1}^2= \beta_t$, condition (G1) holds (see Corollary 5.2 in Sharia and Zhong (2016\nocite{Sh-Zh1})). Also, since $$ h_t-\beta_t z^0=X_{t-1}(X_t -X_{t-1}\theta) =X_{t-1} \xi_t, $$ it follows that $$ E\{(h_t-\beta_t z^0)^T\gamma_t(h_t-\beta_t z^0)|{\cal F}_{t-1}\}= X_{t-1}^2 \hat I_t^{-1} E\{\xi_t^2 |{\cal F}_{t-1}\}. $$ Let $0<\delta<1$. Then taking $a_t=\hat I_t ^\delta$ in (G2) we obtain $$ \sum_{t=1}^{\infty} a_t^{-1}E\{(h_t-\beta_t z^0)^T\gamma_t(h_t-\beta_t z^0)|{\cal F}_{t-1}\}= \sum_{t=1}^{\infty} \frac 1{\hat I_t^{1+\delta}} X_{t-1}^2 E\{\xi_t^2 |{\cal F}_{t-1}\}. $$ Now, since $\Delta\hat I_t=X_{t-1}^2$, if $ \hat I_t \to \infty$ then the sum above is finite even if the conditional variances $E\{\xi_t^2 |{\cal F}_{t-1}\}$ go to infinity at the rate $ \hat I_t^{\delta^0}$, as long as ${\delta^0 <\delta}$ (this trivially follows from, e.g., Lemma 6.3 in Sharia and Zhong (2016\nocite{Sh-Zh1})). Let us now assume for simplicity that ${\xi}_t$ is a sequence of i.i.d. r.v.'s with mean zero and variance $1$. Then consistency and the rate of convergence follow without any further moment assumptions on the innovation process. Indeed, since $ \hat I_t \to \infty$ for any $\theta\in \mathbb{R}$ (see, e.g., Shiryayev (1984\nocite{Shir}, Ch.VII, $\S$5)), it follows that all the conditions of part 1 in Corollary \ref{LRA} hold, implying that $\hat I_t^{1-\delta} (\hat\theta_t-\theta)^2$ converges a.s. to a finite limit for any $0<\delta<1$ and $\theta\in\mathbb{R}$. Furthermore, since $\Delta\gm_t^{-1}= \beta_t$, \eqref{CLP} trivially holds. It therefore follows that $\hat\theta_t$ is asymptotically linear, and asymptotic normality is now obtained by applying the central limit theorem for i.i.d. random variables.
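The recursion in this example is straightforward to simulate. Below is a minimal sketch; the particular values $\theta=0.5$ and standard normal innovations are illustrative assumptions, not part of the example.

```python
import random

def recursive_ls_ar1(theta=0.5, n=20000, theta_start=0.0, i_start=1.0, seed=1):
    """Recursive LS for AR(1): X_t = theta X_{t-1} + xi_t, updating
    I_t = I_{t-1} + X_{t-1}^2 and then
    theta_t = theta_{t-1} + I_t^{-1} X_{t-1} (X_t - theta_{t-1} X_{t-1})."""
    rng = random.Random(seed)
    x_prev, est, info = 0.0, theta_start, i_start
    for _ in range(n):
        x = theta * x_prev + rng.gauss(0.0, 1.0)   # generate the AR(1) observation
        info += x_prev ** 2                        # update hat I_t
        est += x_prev * (x - est * x_prev) / info  # recursive LS step
        x_prev = x
    return est
```

Since for $|\theta|<1$ here $\hat I_t$ grows linearly in $t$, the error $\hat\theta_t-\theta$ is of order $t^{-1/2}$, in line with the asymptotic linearity established above.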
} \end{example} \subsection{Application to parameter estimation}\label{PEGM} Let $X_1, \dots, X_n$ be random variables with a joint distribution depending on an unknown parameter $\theta$. Then an $M$-estimator of $\theta$ is defined as a solution of the estimating equation \begin{equation}\label{esteqg} \sum_{i=1}^n \psi_i(\theta)=0, \end{equation} where $\psi_i(\theta)=\psi_i(X_1^i; \theta),$ ~$i=1,2,\dots, n$, are suitably chosen functions which may, in general, depend on the vector $X_1^i=(X_1, \dots,X_i )$ of all past and present observations. If $f_i(x,\theta)=f_i(x,\theta|X_1, \dots,X_{i-1})$ is the conditional probability density function or probability function of the observation $X_i,$ given $X_1, \dots,X_{i-1},$ then one can obtain an MLE (maximum likelihood estimator) by choosing \begin{equation}\label{CML} \psi_i(\theta)=l_i(\theta)= [f_i'(\theta, X_i|X_1^{i-1})]^T/f_i(\theta, X_i|X_1^{i-1}). \end{equation} Besides MLEs, the class of $M$-estimators includes estimators with special properties such as {robustness}. Under certain regularity and ergodicity conditions, it can be proved that there exists a consistent sequence of solutions of \eqref{esteqg} which has the property of local asymptotic linearity. Let us consider estimation procedures which are recursive in the sense that each successive estimator is obtained from the previous one by a simple adjustment. In particular, we consider a class of estimators $$ \hat\theta_t=\Phi_{U_t}\Big[\hat\theta_{t-1}+ {\gamma_t(\hat\theta_{t-1})}\psi_t(\hat\theta_{t-1})\Big], ~~~~~~~~~t\ge 1, $$ where $\psi_t$ is a suitably chosen vector process, $\gamma_t$ is a matrix-valued step-size process, and $\hat\theta_0\in {\mathbb{R}}^m$ is an initial value. This type of recursive estimator is especially convenient when the corresponding $\psi$-functions are non-linear in $\theta$ and therefore solving \eqref{esteqg} would require a numerical method (see, e.g., Example \ref{Gamex}).
A detailed discussion and a heuristic justification of this estimation procedure are given in Sharia (2008)\nocite{Shar1}. The above procedure can be rewritten in the SA form. Indeed, assume that $\theta$ is an arbitrary but fixed value of the parameter and denote $$ R_t(z)=E_{\theta}\left\{ \psi_t(z)\mid{\cf}_{t-1}\right\} ~~~ \mbox{and} ~~~ \ve_t(z)=\psi_t(z) - R_t(z). $$ Following the argument in Remark \ref{ChoNor} (see also Sharia (2010\nocite{Shar3})), an optimal step-size sequence would be $$ \gamma_t^{-1}(\theta)=-\sum_{s=1}^t R'_s(\theta). $$ If $\p_t(z)$ is differentiable w.r.t. $z$ and differentiation of $ R_t(z)=E_{\theta} \{ \p_t(z)\mid {\cf}_{t-1}\}$ is allowed under the integral sign, then $R'_t(z)=E_{\theta} \{ \p_t'(z)\mid {\cf}_{t-1}\}.$ This implies that, for a given sequence of estimating functions $\psi_t(\theta),$ another possible choice of the step-size sequence is $$ \gm_t^{-1}(\theta)=-\sum_{s=1}^t E_{\theta} \{ \p_s'(\theta)\mid {\cf}_{s-1}\}, $$ or any sequence with the increments $$ \Dl \gm_t^{-1}(\theta)=\gm_t^{-1}(\theta)-\gm_{t-1}^{-1}(\theta)= -E_{\theta} \{ \p_t'(\theta)\mid {\cf}_{t-1}\}. $$ Also, since $\psi_t(\theta)$ is typically a $P^{\theta}$-martingale difference, $$ 0=\int \p_t(\theta,x\mid X_1^{t-1}) f_t(\theta,x\mid X_1^{t-1})\mu (dx), $$ and if the differentiation w.r.t. $\theta$ is allowed under the integral sign, then (see Sharia (2010\nocite{Shar3}) for details) $$ E_{\theta}\{ \p_t'(\theta)\mid {\cf}_{t-1}\} =-E_{\theta}\{ \p_t(\theta)l^{T}_t(\theta)\mid {\cf}_{t-1}\}, $$ where $l_t(\theta)$ is defined in \eqref{CML}. Therefore, another possible choice of the step-size sequence is any sequence with the increments $$ \Dl \gm_t^{-1}(\theta)=\gm_t^{-1}(\theta)-\gm_{t-1}^{-1}(\theta)=E_{\theta}\{ \p_t(\theta)l^{T}_t(\theta)\mid {\cf}_{t-1}\}.
$$ Therefore, since the process $$ M_t^\theta=\sum_{s=1}^t\p_s(\theta) $$ is a $P^\theta$-martingale, the above sequence can be rewritten as $$ \gm_t^{-1}(\theta)=\langle M^\theta, U^\theta \rangle_t, $$ where $U_t^\theta=\sum_{s=1}^t l_s(\theta)$ is the score martingale. In the likelihood case, that is, when $\psi_t(\theta)=l_t(\theta)$, the above sequence is the conditional Fisher information \begin{equation} I_t(\theta)=\sum_{s=1}^t E\{l_s(\theta)l_s^T(\theta)|{\cal F}_{s-1}\}. \end{equation} Therefore, the corresponding recursive procedure is \begin{equation} \label{recmle} \hat\theta_t=\Phi_{U_t}\Big(\hat\theta_{t-1}+ {I_t^{-1}(\hat\theta_{t-1})}l_t(\hat\theta_{t-1})\Big), ~~~~~~~~~t\ge 1. \end{equation} Also, given that the model possesses certain ergodicity properties, asymptotic linearity of \eqref{recmle} implies asymptotic efficiency. In particular, in the case of i.i.d. observations, it follows that the above recursive procedure is asymptotically normal with parameters $(0, \; i^{-1}(\theta) )$, where $i(\theta)$ is the one-step Fisher information. \subsubsection{The i.i.d.\ case} Consider the classical scheme of i.i.d. observations $X_1, X_2,...$ having a common probability density function $f(x,\theta)$ w.r.t. some $\sigma$-finite measure $\mu$, where $\theta\in\mathbb R^m$. Suppose that $\psi(x,\theta)$ is an estimating function with $$ E_\theta\left\{\psi(X_1,\theta)\right\}=\int \psi(x,\theta)f(x,\theta)\mu(dx)=0. $$ A recursive estimator $\hat{\theta}_t$ can be defined by $$ \hat\theta_t=\Phi_{U_t}\Big(\hat\theta_{t-1}+a_t^{-1} {\gm(\hat\theta_{t-1})}\psi(X_t,\hat\theta_{t-1})\Big), $$ where $a_t$ is a non-decreasing real sequence, $\gm(\theta)$ is an invertible $m\times m$ matrix and the truncation sequence $U_t$ is admissible for $\theta$.
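In the simplest illustration of this scheme — estimating the mean of a unit-variance normal with $\psi(x,\theta)=x-\theta$, for which $\gamma(\theta)=1$ and $a_t=t$ — the recursion reduces to the running sample mean. The sketch below makes this concrete; the function names, the truncation bound, and the numerical choices are illustrative assumptions.

```python
import math
import random

def recursive_mean(data, theta_start=0.0, c=10.0):
    """theta_t = Phi_{U_t}(theta_{t-1} + t^{-1} psi(X_t, theta_{t-1})) with
    psi(x, th) = x - th, gamma = 1, and expanding truncation intervals
    U_t = [-c - log(1 + t), c + log(1 + t)]."""
    est = theta_start
    for t, x in enumerate(data, start=1):
        est += (x - est) / t        # psi(x, th) = x - th, a_t = t
        b = c + math.log(1 + t)
        est = min(max(est, -b), b)  # truncation Phi_{U_t}
    return est

rng = random.Random(42)
sample = [rng.gauss(1.5, 1.0) for _ in range(10000)]
```

For this $\psi$, the untruncated recursion is exactly the sample mean, so the truncations never bind once the iterate is inside $U_t$, and $\hat\theta_t\to\theta$ at the usual $t^{-1/2}$ rate.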
In most applications $a_t=t$ and an optimal choice of $\gamma(\theta)$ is $$ \gamma(\theta)=\Big[E_\theta\Big\{\psi(X_t,\theta)l^T(X_t,\theta)\Big\}\Big]^{-1} ~~ \mbox{where}~~ l(x,\theta)=\frac{[f'(x,\theta)]^T}{f(x,\theta)}\;. $$ \begin{example}\label{Gamex} Let $X_1,X_2,\ldots$ be i.i.d. random variables from Gamma$(\theta,1)$ ($\theta >0$). Then the common probability density function is $$ f(x,\theta)=\frac1{ {\bf{\Gamma}(\theta)} } x^{\theta-1}e^{-x}, \;\;\; \theta >0, \;\; x >0, $$ where ${\bf {\Gamma}(\theta)}$ is the Gamma function. Denote $$ {{\log}' {\bf{\Gamma}(\theta)}}=\frac{d}{d\theta}{\log} {\bf{\Gamma}(\theta)},\;\;\;{{\log}'' {\bf{\Gamma}(\theta)}}=\frac{d^2}{d\theta^2}{{\log}} {\bf{\Gamma}(\theta)}. $$ Then $$ \frac {f'(x,\theta)}{f(x,\theta)}={\log} x- {\log}' {\bf{\Gamma}(\theta)} ~~~~~\mbox{ and }~~~~ {i} (\theta)={{\log}''} {\bf{\Gamma}(\theta)}, $$ where $ {i} (\theta)$ is the one-step Fisher information. A recursive likelihood estimation procedure can then be defined as \begin{equation} \label{EstG} \hat \theta_t=\Phi_{U_t}\left(\hat \theta_{t-1}+\frac{1}{t ~ {\log}'' {\boldsymbol{\Gamma}(\hat\theta_{t-1})}}\left[ \log X_t-{{\log}' {\boldsymbol{\Gamma}(\hat\theta_{t-1})}}\right] \right) \end{equation} with $U_t=[\alpha_t,\beta_t]$, where $\alpha_t\downarrow 0$ and $\beta_t\uparrow \infty $ are sequences of positive numbers. Then it can be shown (see Appendix B) that if \begin{equation} \label{albt} \sum_{t=1}^\infty \frac {\alpha_{t-1}^2} {t}= \infty ~~~~~ \mbox{and} ~~~~~ \sum_{t=1}^{\infty} \frac{\log^2\alpha_{t-1} +\log^2\beta_{t-1}}{t^2} < \infty, \end{equation} then $\hat\theta_t$ is strongly consistent and asymptotically efficient, i.e., $ \hat\theta_t\xrightarrow{a.s.} \theta $ as $ t\longrightarrow \infty, $ and $$ {\cal L}\Big(t^{1/2}(\hat \theta_t-\theta)|P^\theta\Big)\xrightarrow{w}{\cal N}\Big(0,\big(\log''\boldsymbol\Gamma(\theta)\big)^{-1}\Big).
$$ For instance, $$ {\alpha_t=C_1 ({{\log}} \; (t+2))^{-\frac12}} \;\;\; \mbox{and} \;\;\; {\beta_t=C_2(t+2)} $$ with some positive constants $C_1$ and $C_2$, obviously satisfy \eqref{albt}. The above result can be derived by rewriting \eqref{EstG} in the form of a stochastic approximation procedure (see Appendix B for details), i.e., \begin{equation} \label{SapG} \hat \theta_t=\Phi_{U_t}\left(\hat \theta_{t-1}+\frac1{t} \left[ R(\hat\theta_{t-1}) +\ve_t(\hat\theta_{t-1}) \right] \right) \end{equation} where $$ R(u)=R^\theta(u)=\frac{1}{{\log}'' {\boldsymbol{\Gamma}(u)}}E_\theta\{\log X_t-\log' {\bf {\Gamma}}(u)\}= \frac{1}{{\log}'' {\boldsymbol{\Gamma}(u)}}\left(\log' {\bf {\Gamma}}(\theta)- \log' {\bf {\Gamma}}(u)\right) $$ and $$ \ve_t(u)=\frac{1}{{\log}'' {\boldsymbol{\Gamma}(u)}}\left[ \log X_t-{{\log}' {\boldsymbol{\Gamma}(u)}}\right] - R(u). $$ \end{example} \section{Simulations}\label{SIMU} \subsection{Finding roots of polynomials}\label{MCPoly} Let us consider the problem described in Example \ref{Poly} with $$ R(z)=-(z-z^0)^7+2(z-z^0)^6-5(z-z^0)^5-3(z-z^0), $$ and suppose that the random errors are independent Student $t$ random variables with 7 degrees of freedom. Consider the SA procedure \eqref{SN} with $a_t=3t$ and the truncation sequence $U_t=[-\log3t,\log3t]$. Then (see Example \ref{Poly}) it follows that this procedure is consistent, i.e., converges almost surely to $z^0$, and asymptotically linear. Also, since the error terms are i.i.d., it follows that the procedure is asymptotically normal. Note that the SA without truncations fails to satisfy the standard condition on the rate of growth at infinity. Here, slowly expanding truncations are used to artificially slow down the growth of $R$ at infinity. Figure 1 shows 30 steps of the procedure with starting points at $-2$, $0$ and $5$ respectively, where the root is $z^0=2$. A histogram of the estimator over 500 replications (with $Z_0=0$) is shown in Figure 2. \begin{figure}[h!]
\centering \includegraphics[width=0.7\textwidth]{PolyMov} \caption{Realizations of the estimator in the polynomial example} \label{PolyMov} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.6\textwidth]{PolyHist} \caption{Histogram of the estimator in the polynomial example} \label{PolyHist} \end{figure} \subsection{Estimation of the shape parameter of the Gamma distribution} Let us consider procedure \eqref{EstG} in Example \ref{Gamex} with the following two sets of truncations $U_t=[\alpha_t,\beta_t]$. \begin{description} \item[(1)] FT -- Fixed truncations: $\alpha_t=\alpha$ and $\beta_t=\beta$ where $0<\alpha<\beta<\infty$. \item[(2)] MT -- Moving truncations: $\alpha_t=C_1[\log(t+2)]^{-1/2}$ and $\beta_t=C_2(t+2)$ where $C_1$ and $C_2$ are positive constants. \end{description} \begin{figure}[h!] \centering \includegraphics[width=0.7\textwidth]{GamNoBoost} \caption{Performance of the estimator of the parameter in the Gamma distribution} \label{GamNoBoost} \end{figure} Figure 3 shows realizations of procedure \eqref{EstG} when $\theta=0.1$ and the starting point is $\hat \theta_0=1$, with $C_1=0.1$, $C_2=1$ in MT, and $\alpha=0.003$, $\beta=100$ in FT. As we can see, the MT estimator approaches the true value of $\theta$ following a zigzag path. The FT estimator, however, moves very slowly towards the true value of $\theta$, caused by the singularity at 0 of the functions appearing in the procedure. \numberwithin{equation}{section} \numberwithin{lemma}{section} \addcontentsline{toc}{chapter}{Appendix} \section{Appendix }\label{App} \begin{lemma} (The Toeplitz Lemma)\label{Toep} Let $\{a_n\}$ be a sequence of non-negative numbers such that $\sum_{n=1}^\infty a_n$ diverges. If $\nu_n \longrightarrow \nu_\infty$ as $n \longrightarrow \infty$, then $$ \lim_{n \longrightarrow \infty} {\frac{\sum_{i=1}^n a_i \nu_i}{\sum_{i=1}^n a_i}} = \nu_\infty\;. $$ \end{lemma} {\bf Proof.} The proof can be found in Lo{\`e}ve (1977\nocite{Loeve}, p.~250). 
\hfill $\blacksquare$ \\ \bigskip \noindent {\bf Properties of the Gamma distribution} ~~ In Example \ref{Gamex}, we will need the following properties of the Gamma function (see, e.g., Whittaker (1927)\nocite{Wit}, 12.16): ${\log}' {\boldsymbol{\Gamma}}$ is increasing, ${\log}'' {\boldsymbol{\Gamma}}$ is decreasing and continuous, $$ {\log}'' {\boldsymbol{\Gamma}}(x) \leq \frac {1+x}{x^2} $$ and \begin{equation}\label{Log''G2} {\log}'' {\boldsymbol{\Gamma}}(x) \geq \frac 1 x. \end{equation} Also (see Cram\'er (1946)\nocite{Cram}, 12.5.4), $$ {\log}' {\boldsymbol{\Gamma}}(x) \le \log x. $$ Then, \begin{equation}\label{expect} E_\theta\left\{\log X_1 \right\}= {\log}' {\boldsymbol{\Gamma}}(\theta), ~~~~ ~~~~ E_\theta\left\{ \left(\log X_1\right)^2 \right\}={\log}'' {\boldsymbol{\Gamma}}(\theta) + \left({\log}' {\boldsymbol{\Gamma}}(\theta)\right)^2, \end{equation} $$ E_\theta\left\{ \left(\log X_1 -{\log}' {\boldsymbol{\Gamma}}(\theta)\right)^2 \right\}={\log}'' {\boldsymbol{\Gamma}}(\theta). $$ Using \eqref{Log''G2} and \eqref{expect} we obtain \begin{equation}\label{Sq} E_\theta \left\{ \|R(u)+\ve_t(u)\|^2 \mid{\cf}_{t-1}\right\}=\frac{ {\log}'' {\boldsymbol{\Gamma}(\theta)} +\left(\log' {\bf {\Gamma}}(\theta)- \log' {\bf {\Gamma}}(u)\right)^2} {\left({\log}'' {\boldsymbol{\Gamma}}(u)\right)^2 }\;. \end{equation} \\ The convergence to $\theta$ of the estimator defined by \eqref{EstG} is shown in Sharia (\nocite{Shar4}2014). To establish the rate of convergence, let us show that the conditions of Corollary 4.5 in Sharia and Zhong (2016)\nocite{Sh-Zh1} hold. 
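Before turning to the rate of convergence, note that the moment identities in \eqref{expect} are easy to check by simulation. The following Python sketch is purely illustrative: the sample size is arbitrary, and the values of ${\log}'{\boldsymbol\Gamma}$ and ${\log}''{\boldsymbol\Gamma}$ are approximated by finite differences of \texttt{math.lgamma} rather than taken from a special-function library.

```python
import math
import random

def dlgamma(x, h=1e-5):
    """psi(x) = (log Gamma)'(x), via a central difference of math.lgamma."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def d2lgamma(x, h=1e-4):
    """psi'(x) = (log Gamma)''(x), the one-step Fisher information i(theta)."""
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

random.seed(0)
theta, n = 2.5, 200_000
logs = [math.log(random.gammavariate(theta, 1.0)) for _ in range(n)]
mean = sum(logs) / n
var = sum((v - mean) ** 2 for v in logs) / n

# E_theta{log X} = (log Gamma)'(theta), Var_theta{log X} = (log Gamma)''(theta)
assert abs(mean - dlgamma(theta)) < 0.01
assert abs(var - d2lgamma(theta)) < 0.02
```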
Since \begin{eqnarray*} R'(u)=\frac{d R(u)}{d u}&=&-\frac{{\log}'' {\boldsymbol{\Gamma}(u)}}{{\log}'' {\boldsymbol{\Gamma}(u)}}-\frac{{\log}''' {\boldsymbol{\Gamma}(u)}}{[{\log}'' {\boldsymbol{\Gamma}(u)}]^2}\left(\log' {\bf {\Gamma}}(\theta)- \log' {\bf {\Gamma}}(u)\right)\\ &=&-1-\frac{{\log}''' {\boldsymbol{\Gamma}(u)}}{[{\log}'' {\boldsymbol{\Gamma}(u)}]^2}\left(\log' {\bf {\Gamma}}(\theta)- \log' {\bf {\Gamma}}(u)\right), \end{eqnarray*} we have $R'(\theta)=-1\leq-1/2$ and condition (B1) of Corollary 4.5 in Sharia and Zhong (2016)\nocite{Sh-Zh1} holds. Since $E_{\theta}\left\{ \ve_t(u)\mid {\cf}_{t-1} \right\}=0$, we have \begin{equation}\label{axali} E_{\theta}\left\{ [ R(u)+\ve_t(u) ]^2 \mid {\cf}_{t-1} \right\} = R^2(u) +E_{\theta}\left\{ \ve_t^2(u) \mid {\cf}_{t-1} \right\}. \end{equation} Using \eqref{Sq} and \eqref{axali}, \begin{eqnarray*} E_\theta\left\{ \ve_t^2(u) \mid {\cf}_{t-1} \right\}&\leq& E_\theta\left\{ [ R(u)+\ve_t(u) ]^2 \mid {\cf}_{t-1} \right\}\\ &=& {\log}'' {\boldsymbol{\Gamma}(\theta)} +\left(\log' {\bf {\Gamma}}(\theta)- \log' {\bf {\Gamma}}(u)\right)^2, \end{eqnarray*} which is obviously a continuous function of $u$. Thus, for any $v_t\longrightarrow0$, $E_{\theta}\left\{ \ve_t^2(\theta+v_t) \mid {\cf}_{t-1} \right\}$ converges to a finite limit and so condition (BB) in Corollary 4.7 in Sharia and Zhong (2016)\nocite{Sh-Zh1} holds. Therefore, all the conditions of this corollary are satisfied with $a_t=t$, implying that $t^\delta (\hat{\theta}_t-\theta)^2\xrightarrow{a.s.}0$ for any $\delta<1$. Furthermore, since the second derivative of $R(u)$ exists, $R'(\theta)=-1$, and $R(\theta)=0$, the Taylor expansion gives $$ R(\theta+u)=-u+\frac12 R''(\tilde u)\,u^2 $$ for small $u$ and some $\tilde u$ between $\theta$ and $\theta+u$. Therefore, condition (A2) in Corollary \ref{CAS} holds. 
It is also easy to check that $$ E_{\theta}\Big\{\Big[\ve_s(\theta+u_s)-\ve_s(\theta)\Big]^2\Big|{\cal F}_{s-1}\Big\}\longrightarrow 0 $$ for any predictable process $u_s\longrightarrow 0$. Condition (A3) is immediate from the Toeplitz Lemma. Thus, the estimator $\hat{\theta}_t$ defined by \eqref{SapG} is asymptotically linear. Now, using the CLT for i.i.d. random variables, it follows that $\hat{\theta}_t$ is asymptotically efficient. \bibliographystyle{acm}
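To complement the asymptotic analysis above, procedure \eqref{EstG} with the moving truncations of Section \ref{SIMU} can be sketched numerically. All constants, sample sizes and tolerances below are illustrative, and the digamma and trigamma values are again approximated by finite differences of \texttt{math.lgamma}.

```python
import math
import random

def dlg(x, h=1e-5):
    """psi(x) = (log Gamma)'(x)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def d2lg(x, h=1e-4):
    """psi'(x) = (log Gamma)''(x), the one-step Fisher information."""
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

def estimate(theta, n, theta0=1.0, c1=0.1, c2=1.0, seed=1):
    """Recursive procedure (EstG) with moving truncations U_t = [alpha_t, beta_t]."""
    random.seed(seed)
    est = theta0
    for t in range(1, n + 1):
        x = random.gammavariate(theta, 1.0)
        step = est + (math.log(x) - dlg(est)) / (t * d2lg(est))
        alpha = c1 / math.sqrt(math.log(t + 2))   # alpha_t = C1 [log(t+2)]^{-1/2}
        beta = c2 * (t + 2)                       # beta_t  = C2 (t+2)
        est = min(max(step, alpha), beta)         # truncation Phi_{U_t}
    return est

# the difficult value theta = 0.1 used in the simulations of Section SIMU
assert abs(estimate(theta=0.1, n=20_000) - 0.1) < 0.05
```

Even for $\theta=0.1$, where the functions appearing in the procedure are nearly singular, the moving lower truncation keeps the iterates away from $0$ and the procedure settles near the true parameter.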
\section{Introduction} \IEEEPARstart{I}{n} the design process of transformers, electric machines, etc., simulations of magnetoquasistatic field problems are an important tool. In particular in multi-query scenarios, as needed e.g. in the case of uncertainty quantification or optimization, using efficient and fast algorithms is important. The spatial discretization of the magnetic vector potential formulation of eddy current problems yields an infinitely stiff differential-algebraic equation (DAE) system of index 1. It can only be integrated in time using implicit time integration schemes, e.g., the implicit Euler method or singly diagonally implicit Runge-Kutta schemes~\cite{Hairer,Nicolet}. Due to the nonlinear B-H-characteristic of ferromagnetic materials, large nonlinear equation systems have to be linearized, e.g. by the Newton-Raphson method, and resolved in every implicit time step. At least one Newton-Raphson iteration is required per time step, and the Jacobian and the stiffness matrix have to be updated in every iteration. A linearization within each time step is avoided if explicit time integration methods are used. First approaches for this were published in~\cite{Yioultsis} and~\cite{Ausserhofer}, where different methods are used in the conductive and nonconductive regions, respectively. In~\cite{Yioultsis}, the Finite Difference Time Domain (FDTD) method is applied in the conductive regions, while the solution in the nonconductive regions is computed using the Boundary Element Method (BEM). In~\cite{Ausserhofer}, an explicit time integration method and the discontinuous Galerkin finite element method (DG-FEM) are applied in conductive materials, while the finite element method based on continuous shape functions and an implicit time integration scheme are used in nonconductive domains. 
In another recent approach, a similar explicit DG-FEM time stepping scheme is used for an $H-\varPhi$ formulation of the magnetoquasistatic field problem \cite{Smajic}. This work is based on an approach originally presented in~\cite{Clemens11}, where the magnetoquasistatic DAE based on an $\vec{A}^{*}$-field formulation is transformed into a finitely stiff ordinary differential equation (ODE) system by applying a generalized Schur complement. The structure of this paper is as follows: Section \ref{sec:Formulation} introduces the mathematical formulation of the eddy current problem and the transformation to an ordinary differential equation. In Section \ref{sec:mrhs}, the time stepping and the resulting multiple right-hand side problem are discussed. Here, also the use of the subspace projection extrapolation method and of the proper orthogonal decomposition method as multiple right-hand side techniques is described. In Section \ref{sec:Validation}, the simulation results validating the presented approach and the effect of the subspace projection extrapolation method and of the proper orthogonal decomposition method on a nonlinear test problem are presented. The main results of this paper are summarized in Section \ref{sec:conclusion}. 
\begin{figure} \centering \begin{tikzpicture}[scale=0.8] \draw [black] plot [smooth cycle, tension=2] coordinates {(1,2) (3,4.5) (6,1)}; \draw [black] plot [smooth cycle, tension=2] coordinates {(2,2.5) (2,3.5) (1,2.7)}; \draw [black] plot [smooth cycle, tension=2] coordinates {(5,2) (5,3) (4,2.4)}; \node at (4,1) {$\Omega_\mathrm{n}$}; \node at (1.5,2.8) {$\Omega_\mathrm{c}$}; \node at (4.8,2.4) {$\Omega_\mathrm{s}$}; \node at (5,4.2) {$\partial\Omega$}; \end{tikzpicture} \vspace{-0.5cm} \caption{\label{fig:kartoffel}Computational domain $\Omega$ split into three regions: conductive and nonlinearly permeable ($\Omega_\mathrm{c}$), nonconductive with constant permeability ($\Omega_\mathrm{n}$) and nonconductive with excitation ($\Omega_\mathrm{s}$).} \end{figure} \section{Mathematical Formulation} \label{sec:Formulation} The eddy current problem in the $\vec{A}^{*}$-formulation is given by the partial differential equation \begin{equation} \label{eq:pde} \kappa\dfrac{\partial{\vec{\mathrm{A}}}}{\partial \mathrm{t}}+\nabla\times\left(\nu\left(\nabla\times\vec{\mathrm{A}}\right)\nabla\times\vec{\mathrm{A}}\right)=\vec{\mathrm{J}}_{\mathrm{S}}, \end{equation} where $\kappa$ is the electrical conductivity, $\vec{\mathrm{A}}$ is the time-dependent magnetic vector potential, $\nu$ is the reluctivity that can be nonlinear in ferromagnetic materials and $\vec{\mathrm{J}}_{\mathrm{S}}=\vec{\mathrm{X}}_{\mathrm{S}}i_{\mathrm{S}}(t)$, where $i_{\mathrm{S}}(t)$ is the time-dependent source current and $\vec{\mathrm{X}}_{\mathrm{S}}$ distributes the current density spatially. Furthermore, initial values and boundary conditions are needed. 
The weak formulation of (\ref{eq:pde}) leads to the variational problem: find $\vec{A}$ such that \begin{align*} \int_\Omega \vec{w}&\cdot\kappa\frac{\partial\vec{A}}{\partial t}\;\mathrm{d}\Omega \;+\\ &\int_\Omega \nabla\times\vec{w}\cdot\nu\left(\nabla\times\vec{A}\right)\nabla\times\vec{A}\;\mathrm{d}\Omega = \int_\Omega \vec{w}\cdot\vec{J}_{\mathrm{s}}\;\mathrm{d}\Omega \end{align*} for all $\vec{w}\in H_0(\mathbf{curl},\Omega)$, where we denote the spatial domain by $\Omega$ and assume Dirichlet conditions on the boundary $\partial\Omega$, see Fig.~\ref{fig:kartoffel}. Discretizing and choosing test and ansatz functions from the same space according to Ritz-Galerkin, i.e., \begin{align} \vec{A}(\vec{r},t)\approx\sum_{i=0}^{N_\mathrm{dof}} \vec{w}_i(\vec{r}) a_i(t), \end{align} leads to a spatially discretized symmetric equation system in the time domain. Separating the degrees of freedom (dofs) into two vectors $\mathbf{a}_\mathrm{c}$, storing the dofs allocated in conducting regions (if $\vec{r}\in\Omega_\mathrm{c}$), and $\mathbf{a}_\mathrm{n}$, holding the dofs allocated in nonconducting regions (if $\vec{r}\in\Omega_\mathrm{n}\cup\Omega_\mathrm{s}$), yields the DAE system \begin{equation} \label{eq:DAE} \begin{bmatrix} \mathbf{M}_{\mathrm{c}} & 0 \\ 0 & 0 \end{bmatrix} \dfrac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathbf{a}_{\mathrm{c}}\\ \mathbf{a}_{\mathrm{n}} \end{bmatrix} + \begin{bmatrix} \mathbf{K}_{\mathrm{c}}(\mathbf{a}_{\mathrm{c}}) & \mathbf{K}_{\mathrm{cn}}\\ \mathbf{K}^{\top}_{\mathrm{cn}} & \mathbf{K}_{\mathrm{n}} \end{bmatrix} \begin{bmatrix} \mathbf{a}_{\mathrm{c}}\\ \mathbf{a}_{\mathrm{n}} \end{bmatrix} = \begin{bmatrix} 0\\ \mathbf{j}_{\mathrm{Sn}} \end{bmatrix}, \end{equation} where $\mathbf{M}_{\mathrm{c}}$ is the conductivity matrix, $\mathbf{K}_{\mathrm{c}}$ is the nonlinear curl-curl reluctivity matrix in conducting regions, $\mathbf{K}_{\mathrm{n}}$ is the typically constant curl-curl matrix in nonconducting regions, 
$\mathbf{K}_{\mathrm{cn}}$ is a coupling matrix, and $\mathbf{j}_{\mathrm{Sn}}$ is the source current, typically defined in the nonconducting domain only. The conductivity matrix in (\ref{eq:DAE}) is not invertible and therefore the problem consists of differential-algebraic equations (DAEs). The numerical solution of such systems is more difficult than in the case of ordinary differential equations (ODEs). The level of difficulty is measured by the DAE index, which can be roughly interpreted as the number of differentiations needed to obtain an ODE from the DAE~\cite{Hairer}. System (\ref{eq:DAE}) is essentially an index-1 DAE, with the special feature that the algebraic constraint, i.e., the second equation in (\ref{eq:DAE}), is formally not uniquely solvable for $\mathbf{a}_\mathrm{n}$ without defining a gauge condition, due to the nullspace of the discrete curl-curl operator $\mathbf{K}_{\mathrm{n}}$. However, it is well known that many iterative solvers have a weak gauging property, e.g. \cite{Clemens99}, such that a formal regularization can be avoided. Relying on this weak gauging property, the generalized Schur complement \begin{equation} \label{eq:SC} \mathbf{K}_{\mathrm{S}}(\mathbf{a}_{\mathrm{c}}):=\mathbf{K}_{\mathrm{c}}(\mathbf{a}_{\mathrm{c}})-\mathbf{K}_{\mathrm{cn}}\mathbf{K}^{+}_{\mathrm{n}}\mathbf{K}^{\top}_{\mathrm{cn}}, \end{equation} where $\mathbf{K}^{+}_{\mathrm{n}}$ represents a pseudo-inverse of $\mathbf{K}_{\mathrm{n}}$ in matrix form, is applied to (\ref{eq:DAE}) and transforms the DAE into \begin{subequations} \begin{eqnarray} \mathbf{M}_{\mathrm{c}}\dfrac{\mathrm{d}}{\mathrm{d}t}\mathbf{a}_{\mathrm{c}}+\mathbf{K}_{\mathrm{S}}(\mathbf{a}_{\mathrm{c}})\mathbf{a}_{\mathrm{c}} &=& -\mathbf{K}_{\mathrm{cn}}\mathbf{K}^{\mathrm{+}}_{\mathrm{n}}\mathbf{j}_{\mathrm{s,n}}, \label{eq:ODE}\\ \mathbf{a}_{\mathrm{n}} &=& \mathbf{K}^{+}_{\mathrm{n}}\mathbf{j}_{\mathrm{s,n}}-\mathbf{K}^{+}_{\mathrm{n}}\mathbf{K}^{\top}_{\mathrm{cn}}\mathbf{a}_{\mathrm{c}}. 
\label{eq:a_n} \end{eqnarray} \end{subequations} A regularization of $\mathbf{K}_{\mathrm{n}}$ by a grad-div or tree/cotree gauging can be used alternatively~\cite{Clemens11,Schoeps}. Here, the pseudo-inverse is evaluated using the preconditioned conjugate gradient method (PCG)~\cite{Dutine}. The finitely stiff ODE (\ref{eq:ODE}) can be integrated explicitly in time, e.g. by using the explicit Euler method. Using this time integration method, the expressions \begin{align} \mathbf{a}^{m}_{\mathrm{c}} &= \mathbf{a}^{m-1}_{\mathrm{c}}\!+\!\Delta t\mathbf{M}^{-1}_{\mathrm{c}}\left[\mathbf{K}_{\mathrm{cn}}\mathbf{K}^{+}_{\mathrm{n}}\mathbf{j}^{m}_{\mathrm{s,n}}\!-\!\mathbf{K}_{\mathrm{S}}(\mathbf{a}^{m-1}_{\mathrm{c}})\mathbf{a}^{m-1}_{\mathrm{c}}\right]\!,\! \label{eq:acm}\\ \mathbf{a}^{m}_{\mathrm{n}} &= \mathbf{K}^{+}_{\mathrm{n}}\mathbf{j}^{m}_{\mathrm{s,n}}-\mathbf{K}^{+}_{\mathrm{n}}\mathbf{K}^{\top}_{\mathrm{cn}}\mathbf{a}^{m}_{\mathrm{c}} \label{eq:anm} \end{align} are computed in the $m$-th time step, where $\Delta t$ is the time step size. The Courant-Friedrichs-Lewy (CFL) criterion determines the maximum stable time step size of explicit time integration methods~\cite{Hairer}. For the explicit Euler method, \begin{equation} \label{eq:CFL} \Delta t\leq\dfrac{2}{\lambda_{\mathrm{max}}\left(\mathbf{M}^{-1}_{\mathrm{c}}\mathbf{K}_{\mathrm{S}}\left(\mathbf{a}_{\mathrm{c}}\right)\right)} \end{equation} gives an estimate of the maximum stable time step size, where $\lambda_{\mathrm{max}}$ denotes the maximum eigenvalue~\cite{Schoeps}. The maximum eigenvalue can be estimated using the power method~\cite{Golub}. 
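The resulting scheme is straightforward to prototype. The following Python sketch is a toy stand-in, not the FEM setting: small random matrices play the roles of $\mathbf{M}_{\mathrm{c}}$ and $\mathbf{K}_{\mathrm{S}}$, the source term is set to zero, and the nonlinearity is ignored. It illustrates the explicit Euler update and the estimation of the CFL bound \eqref{eq:CFL} by the power method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
Mc = np.diag(rng.uniform(1.0, 2.0, n))   # stand-in for the conductivity matrix M_c
B = rng.standard_normal((n, n))
Ks = B @ B.T + n * np.eye(n)             # symmetric positive definite stand-in for K_S

def lambda_max(Mc, Ks, iters=2000):
    """Power iteration for the largest eigenvalue of M_c^{-1} K_S."""
    minv = 1.0 / np.diag(Mc)             # M_c is diagonal in this toy example
    v = np.ones(Ks.shape[0])
    for _ in range(iters):
        w = minv * (Ks @ v)
        v = w / np.linalg.norm(w)
    return v @ (minv * (Ks @ v))         # Rayleigh-type quotient at the iterate

lam = lambda_max(Mc, Ks)
dt = 2.0 / lam                           # CFL estimate as in (eq:CFL)

# homogeneous explicit Euler steps a <- a - dt' M_c^{-1} K_S a with dt' below the bound
a = rng.standard_normal(n)
for _ in range(1000):
    a = a - 0.9 * dt * np.linalg.solve(Mc, Ks @ a)
assert np.linalg.norm(a) < 1e-6          # stable decay for a step size below the limit
```

Running at $90\,\%$ of the estimated bound, the homogeneous solution decays; a step size above $2/\lambda_{\mathrm{max}}$ would instead diverge.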
\section{Multiple Right-Hand Side Problem} \label{sec:mrhs} As the matrix $\mathbf{K}_{\mathrm{n}}$ remains constant in every explicit time step, the repeated evaluation of a pseudo-inverse $\mathbf{K}^{+}_{\mathrm{n}}$ in (\ref{eq:acm}), (\ref{eq:anm}) forms a multiple right-hand side (mrhs) problem of the form \begin{equation} \label{eq:mrhs} \mathbf{K}_{\mathrm{n}}\mathbf{a}_{\mathrm{p}}=\mathbf{j}_{\mathrm{p}}\Leftrightarrow\mathbf{a}_{\mathrm{p}}=\mathbf{K}^{+}_{\mathrm{n}}\mathbf{j}_{\mathrm{p}}. \end{equation}Here, $\mathbf{j}_{\mathrm{p}}$ represents one of the right-hand side vectors $\mathbf{j}^{m}_{\mathrm{s,n}}$,$\,\mathbf{K}^{\top}_{\mathrm{cn}}\mathbf{a}^{m}_{\mathrm{c}}$, and $\mathbf{K}^{\top}_{\mathrm{cn}}\mathbf{a}^{m-1}_{\mathrm{c}}$. Instead of computing the matrix $\mathbf{K}^{+}_{\mathrm{n}}$ explicitly, a vector $\mathbf{a}_\mathrm{p}$ is computed according to (\ref{eq:mrhs}) using the preconditioned conjugate gradient (PCG) method~\cite{Dutine}. Improved start vectors for the PCG method can be obtained by the subspace projection extrapolation (SPE) method or the proper orthogonal decomposition (POD) method. In the SPE method, the column vectors of a matrix $\mathbf{U}_{\mathrm{SPE}}$ form an orthonormalized basis of the subspace spanned by solutions $\mathbf{a}_{\mathrm{p}}$ from previous time steps. The modified Gram-Schmidt method is used for this orthonormalization procedure~\cite{Trefethen}. The improved start vector $\mathbf{x}_{\mathrm{0,SPE}}$ is then computed by~\cite{Clemens04} \begin{equation} \label{eq:SPE_x0} \mathbf{x}_{\mathrm{0,SPE}}:=\mathbf{U}_{\mathrm{SPE}}\left(\mathbf{U}^{\top}_{\mathrm{SPE}}\mathbf{K}_{\mathrm{n}}\mathbf{U}_{\mathrm{SPE}}\right)^{-1}\mathbf{U}^{\top}_{\mathrm{SPE}}\mathbf{j}_{\mathrm{p}}. 
\end{equation} As only the last column vector in the matrix $\mathbf{U}_{\mathrm{SPE}}$ changes in every time step, all other matrix-column-vector products in computing $\mathbf{K}_\mathrm{n}\mathbf{U}_{\mathrm{SPE}}$ in (\ref{eq:SPE_x0}) are reused from previous time steps in a modification of the procedure in~\cite{Clemens04} referred to as the ``Cascaded SPE'' (CSPE)~\cite{Dutine}. When using the POD method for the PCG start vector generation, $N_{\mathrm{POD}}$ solution vectors from previous time steps form the column vectors of a snapshot matrix $\mathbf{X}$, which is decomposed into \begin{equation}\label{eq:SVD} \mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top} \end{equation} using the singular value decomposition (SVD)~\cite{Henneron14, Henneron15, Sato}. Here, $\mathbf{U}$ and $\mathbf{V}$ are orthonormal matrices and $\mathbf{\Sigma}$ is a diagonal matrix of the singular values ordered by magnitude $\left(\sigma_{\mathrm{i}}\geq\sigma_{\mathrm{j}}\,\mathrm{for}\,i<j\right)$. The index $k$ is chosen such that the information associated with the largest singular values is kept, i.e., \begin{equation} \dfrac{\sigma_{\mathrm{k}}}{\sigma_{1}}\leq\varepsilon_\mathrm{POD}. \label{eq:relSV2} \end{equation} The threshold value $\varepsilon_\mathrm{POD}$ is here chosen as $\varepsilon_\mathrm{POD}:=10^{-4}$. A measure of how much information is kept is given by the relative information criterion~\cite{Daniel} \begin{equation} \label{eq:keepInfo} \dfrac{\sum\limits_{i=1}^{k}\sigma_{i}}{\sum\limits_{i=1}^{N_{\mathrm{POD}}}\sigma_{i}}\overset{!}{\approx}1. 
\end{equation} Defining $\mathbf{U}_{\mathrm{POD}}=\left[\mathbf{U}_{\mathrm{:,1}},...\,,\mathbf{U}_{\mathrm{:,k}}\right]$ as the first $k$ columns of $\mathbf{U}$ allows one to compute an improved start vector $\mathbf{x}_{\mathrm{0,POD}}$ by \begin{equation} \label{eq:POD_x0} \mathbf{x}_\mathrm{0,POD}:=\mathbf{U}_{\mathrm{POD}}\left[\mathbf{U}^{\top}_{\mathrm{POD}}\mathbf{K}_{\mathrm{n}}\mathbf{U}_{\mathrm{POD}}\right]^{-1}\mathbf{U}^{\top}_{\mathrm{POD}}\mathbf{j}_{\mathrm{p}}. \end{equation} The repeated evaluation of $\mathbf{M}^{-1}_{\mathrm{c}}$ in (\ref{eq:acm}) also forms an mrhs problem, and both the POD and the CSPE method can be used for computing improved start vectors for the PCG method. In the case of small matrix dimensions of the regular matrix $\mathbf{M}_{\mathrm{c}}$, the inverse can also be computed directly using GPU-acceleration. \section{Numerical Validation} \label{sec:Validation} The ferromagnetic TEAM 10 benchmark problem is used for the numerical validation of the presented explicit time integration scheme for magnetoquasistatic fields~\cite{Nakata}. The domain consists of two square-bracket-shaped steel plates opposite of each other and a rectangular steel plate between them, resulting in two 0.5 mm wide air gaps. The model geometry is shown in Fig. \ref{fig:fig_1}. The position where the magnetic field is evaluated is marked as S1. The excitation current $i_{\mathrm{S}}=(1-\exp(-t/\tau))$, where $\tau=0.5\,\mathrm{s}$, is applied for a time interval of 120 ms starting at $t=0\,\mathrm{s}$~\cite{Nakata}. The resulting magnetic flux density is computed for this time interval. The finite element method (FEM) using 1st order edge elements is used for the spatial discretization~\cite{Kameari}. All simulations are computed on a workstation with an Intel Xeon E5 processor and an NVIDIA TESLA K80 GPU. The conjugate gradient method is preconditioned by an algebraic multigrid method \cite{AMG}. 
The matrix $\mathbf{M}_{\mathrm{c}}$ is inverted using the Magma-library and GPU-acceleration~\cite{Magma}. A fine mesh resulting in about 700,000 dofs and the implicit Euler method are used to validate the simulation code. A good agreement between the measured results published in~\cite{Nakata} and the simulation on this fine spatial discretization is shown in Fig. \ref{fig:fig_1}. Using an in-house implicit time integration magnetoquasistatic code, this simulation with the implicit Euler method requires 5.38 days. For benchmarking the proposed mrhs techniques for the (semi-)explicit time integration scheme, a model with a coarse spatial discretization yielding about 30,000 dofs and the explicit Euler method is used. For this spatial discretization, the resulting maximum stable time step size according to (\ref{eq:CFL}) is $\Delta t_{\mathrm{CFL}}=1.2\,\mu s$. Both meshes are presented in Fig. \ref{fig:meshes}. The results for the average magnetic flux density are compared with the results obtained using the same discretization in space and the implicit Euler method for time integration and show good agreement, as depicted in Fig. \ref{fig:fig_1}. The resulting field plots for both spatial discretizations are shown in Fig. \ref{fig:fieldPlots}. The simulation time for the implicit time integration method is still 2.58 h. The effect of computing improved start vectors using POD or CSPE on the average number of PCG iterations and on the simulation time is compared to using the solution from the previous time step $\mathbf{a}^{m-1}_{\mathrm{p}}$ as start vector for the PCG method. An overview is presented in Table \ref{table1} and shows that both the CSPE and the POD start vector generation methods significantly reduce the number of PCG iterations. 
When using CSPE the number of column vectors in the operator $\mathbf{U}_{\mathrm{SPE}}$ in (\ref{eq:SPE_x0}) is increased during the simulation to improve the spectral information content of $\mathbf{U}_{\mathrm{SPE}}$. This number remains below 20. Thus, only small systems have to be solved for the inversion in (\ref{eq:SPE_x0}) and the effort to perform all computations of the CSPE method is low. This is also confirmed by the simulation time which is shortest when using CSPE. The simulation time resulting from using explicit time integration and CSPE for start vector generation is $63\,\%$ of the simulation time of the implicit reference simulation. A bar plot showing the reduced simulation time by using the explicit Euler scheme and CSPE compared to using the standard formulation and the implicit Euler method for time integration is depicted in Fig. \ref{fig:simTimes}. In case of the POD, the amount of information kept according to (\ref{eq:keepInfo}) is $>0.99$ during the entire simulation. However, the computational effort for performing the SVD and the computations in (\ref{eq:POD_x0}) is higher than the effort for CSPE. Although the number of PCG iterations is further decreased, the simulation time resulting from using POD for start vector generation is higher than when using $\mathbf{a}^{m-1}_{\mathrm{p}}$ as start vector for the PCG method due to the costs of the repeated SVD. 
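The benefit of projected start vectors can be reproduced on a small stand-in problem. In the following Python sketch, a random SPD matrix plays the role of $\mathbf{K}_{\mathrm{n}}$ and the right-hand sides vary slowly from step to step; the snapshot count and the threshold are illustrative. The start vector is built by the Galerkin projection underlying \eqref{eq:POD_x0}, here written for a generic right-hand side $\mathbf{j}_{\mathrm{p}}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snap = 60, 8
B = rng.standard_normal((n, n))
K = B @ B.T + n * np.eye(n)              # SPD stand-in for K_n

def pod_start(K, snapshots, j, eps=1e-4):
    """Galerkin start vector x0 = U_k (U_k^T K U_k)^{-1} U_k^T j."""
    X = np.column_stack(snapshots)       # snapshot matrix of previous solutions
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    k = max(1, int(np.sum(s / s[0] > eps)))   # keep singular values above threshold
    Uk = U[:, :k]
    return Uk @ np.linalg.solve(Uk.T @ K @ Uk, Uk.T @ j)

# right-hand sides varying slowly in "time", as in the mrhs problem
base, drift = rng.standard_normal(n), rng.standard_normal(n)
rhs = [base + 0.01 * t * drift for t in range(n_snap + 1)]
snaps = [np.linalg.solve(K, j) for j in rhs[:n_snap]]

j_new = rhs[-1]
x0 = pod_start(K, snaps, j_new)
res_pod = np.linalg.norm(K @ x0 - j_new)
res_prev = np.linalg.norm(K @ snaps[-1] - j_new)   # start vector a^{m-1}
assert res_pod < 1e-6 * res_prev         # far smaller initial residual for PCG
```

Because the new solution lies (up to roundoff) in the span of the snapshots in this toy setting, the projected start vector is almost exact; in the FEM setting this mechanism translates into the reduced PCG iteration counts reported in Table \ref{table1}.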
\begin{figure}[!t] \centering \includegraphics[width=3.6in]{figure1.pdf} \caption{Comparison of results for the average magnetic flux density evaluated at position S1 and model geometry as inset.} \label{fig:fig_1} \end{figure} \begin{figure}[!t] \hspace*{-0.5cm} \includegraphics[width=3.6in]{figure2.pdf} \caption{Meshes resulting in about 700,000 dofs (left) and in about 30,000 dofs (right).} \label{fig:meshes} \end{figure} \begin{figure}[!t] \hspace*{-0.5cm} \includegraphics[width=3.6in]{figure3.pdf} \caption{Field plots of the magnetic flux density $\vec{B}$ for the spatial discretization with about 700,000 dofs (left) and with about 30,000 dofs (right).} \label{fig:fieldPlots} \end{figure} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Simulation Time and Average Number of PCG Iterations Using Different Start Vectors $\mathbf{x}_{\mathrm{0}}$} \label{table1} \centering \begin{tabular}{|c||c||c|} \hline Start vector & Avg. Number of PCG Iterations & Simulation Time\\ \hline $\mathbf{x}_{\mathrm{0}}:=\mathbf{a}^{m-1}_{\mathrm{p}}$ & 3.16 & \, 2.35 h\\ \hline $\mathbf{x}_{\mathrm{0}}:=\mathbf{x}_{\mathrm{0,POD}}$ & 2.18 & 17.35 h\\ \hline $\mathbf{x}_{\mathrm{0}}:=\mathbf{x}_{\mathrm{0,CSPE}}$ & 1.02 & \, 1.62 h\\ \hline \end{tabular} \end{table} \begin{figure}[!t] \centering \includegraphics[width=3.4in]{figure4.pdf} \caption{Comparison of simulation times.} \label{fig:simTimes} \end{figure} \section{Conclusion} \label{sec:conclusion} The magnetic vector potential formulation of eddy current problems was transformed into an ODE system of finite stiffness using a generalized Schur complement. The resulting ODE system was integrated in time using the explicit Euler method. A pseudo-inverse of the singular curl-curl matrix in nonconducting material was evaluated using the PCG method. Improved start vectors for the PCG method were calculated using the POD and the CSPE method. 
Although both reduce the number of PCG iterations needed, the computational effort of the CSPE is significantly lower than that of the POD method. Reducing the computational effort of the POD, e.g. by accelerating the computation of the SVD, is subject to further investigation. Using the CSPE method, the overall simulation time was reduced by $37\,\%$ compared to the simulation time of the implicit reference simulation. \section*{Acknowledgment} This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant numbers CL143/11-1 and SCHO1562/1-1. The third author is supported by the ``Excellence Initiative'' of the German Federal and State Governments and the Graduate School of CE at TU Darmstadt.
\section{Introduction} Let $\mathbb C[Z_1,\ldots,Z_n]$ denote the set of all complex valued polynomials in $n$ variables. For any continuous function $f:X\to \mathbb C$ on a compact set $X,$ we let $\|f\|_{X,\infty}$ denote its supremum norm, namely, \[\|f\|_{X,\infty}=\sup\{|f(x)|:x\in X\}.\] Let $\mathbb{D}^n$ denote the unit polydisc in $\mathbb{C}^n.$ The von Neumann inequality \cite{vN} states that $\|p(T)\|\leq \|p\|_{\mathbb{D},\infty}$ for all $p\in\mathbb{C}[Z]$ and for any contraction $T$ on a complex Hilbert space. For any pair of commuting contractions $T_1,T_2$, a generalization of the von Neumann inequality: $\|p(T_1,T_2)\|\leq \|p\|_{\mathbb{D}^2,\infty},$ $p\in\mathbb{C}[Z_1,Z_2],$ follows from a deep theorem of Ando \cite{ando} on unitary dilation of a pair of commuting contractions. For $n\in\mathbb{N},$ let $\mathscr{C}_n$ denote the set of all $n$-tuples $\boldsymbol T=(T_1,\ldots,T_n)$ of commuting contractions on some Hilbert space $\mathbb{H}.$ In the paper \cite{V1}, Varopoulos showed that the von Neumann inequality fails for $\boldsymbol T$ in $\mathscr{C}_n,$ $n > 2.$ He, along with Kaijser \cite{V1}, and simultaneously Crabb and Davie \cite{CD}, produced an explicit example of three commuting contractions $T_1,T_2,T_3$ and a polynomial $p$ for which $\|p(T_1,T_2,T_3)\| > \|p\|_{\mathbb{D}^3,\infty}.$ For a fixed $k\in\mathbb{N},$ define (see \cite{V2} and \cite[page 24]{Pisier}): \[C_k(n)=\sup\big\{\|p(\boldsymbol T)\|:\|p\|_{\mathbb{D}^n,\infty}\leq 1, p\in \mathbb C_k[Z_1,\ldots,Z_n],\,\boldsymbol T \in\mathscr C_n \big\}\] and let $C(n)$ denote $\lim_{k\to \infty} C_k(n).$ Since the counterexample to the von Neumann inequality in three variables, due to Varopoulos and Kaijser \cite{V1}, involves an explicit homogeneous polynomial of degree two, it follows that $C_2(3) >1.$ From the von Neumann inequality and its generalization to two variables, it follows that $C(1)=C(2)=1.$ In the paper \cite{V2}, Varopoulos shows that 
\begin{equation}\label{bound for C_2} K_G^\mathbb C \leq \lim_{n\to \infty} C_2(n) \leq 2 K_G^\mathbb C, \end{equation} where $K_G^\mathbb C$ is the complex Grothendieck constant defined below. \begin{defi}[Grothendieck Constant] For a complex $($real$)$ array $A:=\big (\!\!\big ( a_{j k} \big ) \!\!\big )_{n\times n},$ define the following norm \begin{equation}\label{hypothesis} \|A\|_{\infty \to 1} := \sup\big\{|\langle A v , w\rangle| :\|v\|_{\ell^\infty(n)}\leq 1, \|w\|_{\ell^\infty(n)} \leq 1\big\}, \end{equation} where $v$ and $w$ are vectors in $\mathbb C^n$ $($resp. $\mathbb R^n)$. There exists a finite constant $K>0$ such that for any choice of unit vectors $(x_j)_1^n$ and $(y_k)_1^n$ in a complex $($resp. real$)$ Hilbert space $\mathbb{H}$, we have \begin{equation}\label{conclusion} \Big |\sum_{j,k=1}^na_{jk}\langle x_j,y_k \rangle\Big | \leq K \|A\|_{\infty \to 1} \end{equation} for all $n\in\mathbb{N}$ and $A= \big (\!\!\big ( a_{j k} \big ) \!\!\big ).$ The least such constant is denoted by $K_G$ and is known as the Grothendieck constant. Note that the definition of $K_G$ depends on the underlying field. When it is the field of complex numbers $\mathbb C$ $(\mbox{\rm resp. }\mathbb{R})$, this constant is known as the complex (resp. real) Grothendieck constant and is denoted by $K_G^\mathbb C$ $($resp. $K_G^\mathbb{R})$. It is known that $1.338 < K_G^\mathbb C\leq 1.4049$ and $1.66\leq K_G^\mathbb{R}\leq \frac{\pi}{2\log(1+\sqrt{2})}$; see \cite[\S {4}]{GrPis}. Recently, it has been proved in \cite{Naor} that this upper bound for $K_G^\mathbb{R}$ is strict, which settles a long-standing conjecture of Krivine \cite{Kr1,Kr2}. We refer the reader to \cite{BBGM} for some explicit computations of this constant for small values of $n.$ For more on the Grothendieck constant, we refer the reader to \cite{GrPis}. 
\end{defi} Since it is known that $K_{G}^{\mathbb C}> 1,$ the inequality \eqref{bound for C_2} is yet another way to see that the von Neumann inequality fails eventually. We refer to the inequality \eqref{bound for C_2} as {\tt the Varopoulos inequality}. In the paper \cite{V2}, Varopoulos had implicitly asked whether $\lim_{n\to\infty}C_2(n)=K_G^\mathbb{C}.$ Recently, the first named author of this paper improved the Varopoulos inequality \eqref{bound for C_2}: \begin{equation}\label{Improved bound for C_2} K_G^\mathbb C \leq \lim_{n\to \infty} C_2(n) \leq \frac{3\sqrt{3}}{4} K_G^\mathbb C. \end{equation} This inequality is proved by first obtaining a bound on the second derivative of any holomorphic map $f:\mathbb{D}^n\to \mathbb{D},$ namely, $\|D^2f(0)\|_{\infty\to 1} \leq \tfrac{3\sqrt{3}}{2}.$ In this paper, we answer the question of Varopoulos in the negative by improving the lower bound in the inequality \eqref{bound for C_2}. Indeed, we prove that $\lim_{n\to \infty}C_2(n)\geq 1.118 K_{G}^{\mathbb C}.$ In what follows, for each $p\in [1,\infty]$ and $n\in\mathbb{N},$ we denote the normed linear space $(\mathbb{C}^n,\|\cdot\|_p)$ by $\ell^p(n)$, and when the space is $(\mathbb{R}^n,\|\cdot\|_p)$ we denote it by $\ell^p_\mathbb{R}(n).$ Let $\mathbb C_2^s[Z_1,\ldots , Z_n]$ denote the set of all homogeneous polynomials of degree two in $n$ variables. A homogeneous polynomial of degree two in $n$ variables is of the form \[p(z_1,\ldots,z_n)=\sum_{j,k=1}^{n}a_{jk}z_jz_k,\] where $A_p:=\big (\!\!\big ( a_{j k} \big ) \!\!\big )$ is a symmetric matrix associated to $p$. 
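For real matrices, the supremum in \eqref{hypothesis} is attained at sign vectors $v,w\in\{-1,+1\}^n$, so $\|A\|_{\infty\to 1}$ can be computed by brute force for small $n$. The following Python sketch uses the classical $2\times 2$ matrix $\big(\begin{smallmatrix}1&1\\1&-1\end{smallmatrix}\big)$ (an illustrative example, not one taken from this paper) to exhibit a Grothendieck-type gap between the bilinear form over unit vectors and $\|A\|_{\infty\to 1}$.

```python
import numpy as np
from itertools import product

def norm_inf_to_1(A):
    """||A||_{oo -> 1} of a real matrix: the supremum in (hypothesis) is
    attained at sign vectors v, w in {-1,+1}^n, so brute force suffices
    for small n."""
    n = A.shape[0]
    signs = [np.array(s) for s in product((-1.0, 1.0), repeat=n)]
    return max(abs(v @ A @ w) for v in signs for w in signs)

A = np.array([[1.0, 1.0], [1.0, -1.0]])
assert norm_inf_to_1(A) == 2.0

# Unit vectors can do better than sign vectors: with x_1 = e_1, x_2 = e_2 and
# y_1 = (e_1 + e_2)/sqrt(2), y_2 = (e_1 - e_2)/sqrt(2), the bilinear form in
# (conclusion) equals 2*sqrt(2).
x = np.eye(2)
y = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
val = sum(A[j, k] * (x[j] @ y[k]) for j in range(2) for k in range(2))
assert abs(val - 2.0 * np.sqrt(2.0)) < 1e-12
assert val > norm_inf_to_1(A)      # gap of ratio sqrt(2)
```

The ratio $\sqrt{2}$ obtained here is already a lower bound for $K_G^{\mathbb{R}}$; the sharper bounds quoted above require far more delicate constructions.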
Define the map $\mathscr{A}_n:\mathbb C_2^s[Z_1,\ldots , Z_n] \to M^s_n$ by the rule $\mathscr{A}_n(p)=A_{p},$ where $M_n^s$ is the set of all symmetric matrices of order $n.$ Equip $\mathbb C_2^s[Z_1,\ldots , Z_n]$ with the supremum norm $\|\cdot\|_{\mathbb D^n,\infty}$ and $M_n^s$ with the norm $\|\cdot\|_{\infty \to 1}.$ For each $n\in \mathbb{N},$ we have $\|\mathscr{A}_n^{-1}\|\leq 1$ and therefore $\|\mathscr{A}_n\|\geq 1.$ In \cite{RG}, it is shown that $\lim_{n\to \infty}\|\mathscr{A}_n\|\leq 3\sqrt{3}/4.$ In this paper, we prove $\lim_{n\to \infty}\|\mathscr{A}_n\|\geq \pi^2/8,$ improving the bound $\lim_{n\to \infty}\|\mathscr{A}_n\|\geq 1.2323$ obtained earlier in an unpublished article by Holbrook and Schoch (see their related work \cite{HS10}). In Section \ref{MaximizingLemma}, we investigate, in some detail, the constant $C_2(n)$. We exhibit a large class of examples of Varopoulos--Kaijser type and show that the original Varopoulos--Kaijser example is extremal, in an appropriate sense, in this class of polynomials. Let $M_{n}^{+}(\mathbb{C})$ $(\mbox{\rm resp. } M_{n}^{+}(\mathbb{R}))$ denote the set of all $n\times n$ complex (real) non-negative definite matrices. \begin{defi}[Positive Grothendieck Constant] Suppose $A:=\big ( \!\! \big (a_{jk}\big )\!\!\big )_{n\times n}$ is a complex $($real$)$ non-negative definite array. Then there exists $K>0,$ independent of $n$ and $A,$ such that \eqref{conclusion} holds. The least such constant is denoted by $K_G^+(\mathbb C)$ $(\mbox{\rm resp. }K_G^+(\mathbb{R}))$ and is called the complex $($resp. real$)$ positive Grothendieck constant. The values of $K_G^+(\mathbb C)$ and $K_G^+(\mathbb{R})$ are exactly $4/\pi$ and $\pi/2$ respectively, see \cite[page 259-260]{GrPis} and \cite[Remark following Theorem 5.4]{FactorizationPisier}. We also refer the reader to \cite[Theorem 1.3]{PL} for these constants. \end{defi} The non-negative definite Grothendieck constant plays an important role in operator theory. 
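For small real matrices, the norm $\|A\|_{\infty\to 1}$ restricted to real vectors can be computed by brute force: the objective $|\langle Av,w\rangle|$ is convex in each coordinate of $v$ and $w$, so the supremum over $[-1,1]^n$ is attained at sign vectors. The following sketch is our own illustration (over complex scalars the supremum in \eqref{hypothesis} can be strictly larger):

```python
import itertools
import numpy as np

def norm_inf_to_one(A):
    """Real (infinity -> 1) norm of A: sup |<Av, w>| over v, w in [-1,1]^n.
    By convexity in each coordinate, the sup is attained at sign vectors,
    so enumerating the 2^n vertices suffices for small n."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    verts = [np.array(s, dtype=float) for s in itertools.product([-1, 1], repeat=n)]
    return max(abs(v @ A @ w) for v in verts for w in verts)

# Varopoulos--Kaijser coefficient matrix, used later in the paper
A_vk = np.array([[1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
print(norm_inf_to_one(np.eye(3)))  # 3.0
print(norm_inf_to_one(A_vk))       # 5.0
```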
We refer to \cite[Theorem 1.9]{BM} for some important connections. For any $n\times n$ complex matrix $A:=\big (\!\! \big (a_{jk}\big )\!\!\big ),$ we associate a homogeneous polynomial of degree two denoted by $p_{\!_A}$ and defined by $p_{\!_A}(z_1,\ldots,z_n)=\sum_{j,k=1}^n a_{jk} z_j z_k.$ Suppose $p$ is a polynomial of degree at most two in $n$ variables, say \[p(z_1,\ldots,z_n)=a_0+\sum_{j=1}^{n}a_jz_j + \sum_{j,k=1}^{n}a_{jk}z_jz_k\] with $a_{jk}=a_{kj}$ (which can be assumed without loss of generality) for all $j,k=1,\ldots,n.$ Corresponding to $p,$ one can define the following $(n+1)\times (n+1)$ symmetric complex matrix \begin{eqnarray}\label{Homogenization} A(p)= \left( \begin{array}{ccccc} a_0 & \frac{1}{2} a_1 & \frac{1}{2}a_2 & \cdots & \frac{1}{2}a_n\\ \frac{1}{2}a_1 & a_{11} & a_{12} & \cdots & a_{1n}\\ \frac{1}{2}a_2 & a_{12} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \vdots & & \vdots\\ \frac{1}{2}a_n & a_{1n} & a_{2n} & \cdots & a_{nn}\\ \end{array} \right). \end{eqnarray} It can be seen that $\|p\|_{\mathbb{D}^n,\infty}=\|p_{\!_{A(p)}}\|_{\mathbb{D}^{n+1},\infty}.$ We define the following quantity: \[C_{2}^{+}(n)=\sup\Big\{\frac{\!\!\!\|p_{\!_A}(\boldsymbol T)\|}{\ \ \ \ \, \|p_{\!_A}\|_{\mathbb{D}^n,\infty}}:0\neq A\in M_n^+(\mathbb{C}),\, \boldsymbol T\in \mathscr C_n \Big\}\] and let $C_{2}^{+}$ denote the quantity $\lim_{n\to \infty} C_{2}^{+}(n).$ It is clear from the definitions that $C_2(n)\geq C_{2}^{+}(n)$ for each $n.$ In this paper, we prove that $\lim_{n\to \infty}C_{2}^{+}(n)=\pi/2.$ Since $K_{G}^{\mathbb{C}}\leq 1.4049,$ it follows that $\lim_{n\to \infty}C_2(n)\geq \pi/2\geq 1.118K_{G}^{\mathbb C}.$ \section{Improvement of the Lower Bound in the Varopoulos inequality}\label{Improvement} In this section we improve the lower bound in the Varopoulos inequality and, as a consequence, answer a question posed by Varopoulos in \cite{V2} in the negative. 
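Before proceeding, the homogenization \eqref{Homogenization} admits a quick numerical sanity check: by construction $p(z)=p_{\!_{A(p)}}(1,z),$ and homogeneity then yields the equality of supremum norms. A minimal sketch of ours, with randomly chosen coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
a0 = rng.standard_normal()                      # constant term
a_lin = rng.standard_normal(n)                  # linear coefficients a_1, ..., a_n
A2 = rng.standard_normal((n, n))
A2 = (A2 + A2.T) / 2                            # symmetric quadratic part

# the (n+1) x (n+1) homogenization matrix A(p)
Ap = np.zeros((n + 1, n + 1))
Ap[0, 0] = a0
Ap[0, 1:] = Ap[1:, 0] = a_lin / 2
Ap[1:, 1:] = A2

def p(z):
    """The inhomogeneous polynomial of degree at most two."""
    return a0 + a_lin @ z + z @ A2 @ z

def p_hom(w):
    """The homogeneous polynomial attached to A(p)."""
    return w @ Ap @ w

z = rng.standard_normal(n)
print(np.isclose(p(z), p_hom(np.concatenate(([1.0], z)))))  # True
```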
The following theorem (see \cite{RG}) concerns an improvement in the upper bound of the Varopoulos inequality. \begin{thm} Suppose $p$ is a polynomial of degree at most 2 in $n$ variables and $\boldsymbol T\in\mathscr{C}_n.$ Then $\|p(\boldsymbol T)\| \leq \frac{3\sqrt{3}}{4} K_{G}^{\mathbb C}\|p\|_{\mathbb{D}^n,\infty}.$ \end{thm} Throughout this paper, the vectors are assumed to be row vectors unless specified otherwise. The following series of lemmas contains the key ingredients of this paper. \begin{lemm}\label{UpperQuantity} For any symmetric $n\times n$ matrix $A=\big(\!\!\big(a_{jk}\big)\!\!\big),$ we have the equality: \[\sup_{\substack{\|x_j\|_{\ell^{2}}\leq 1}}\Big|\sum_{j,k=1}^{n}a_{jk}\langle x_j, x_k\rangle\Big| = \sup_{\substack{\|R_j\|_{\ell^{2}_{\mathbb R}}\leq 1}}\Big|\sum_{j,k=1}^{n}a_{jk}\langle R_j, R_k\rangle\Big|.\] \end{lemm} \begin{proof} Fix $x_j\in \mathbb C^m$ with $\|x_j\|_{\ell^{2}(m)}\leq 1$ for $j=1,\ldots,n.$ Define $R_j=\big(\frac{\overline{x}_j+x_j}{2},i\frac{\overline{x}_j-x_j}{2}\big)$ for $j=1,\ldots, n.$ We see that \begin{eqnarray*} \langle R_j,R_k \rangle &=& \sum\limits_{p=1}^{m}\Big(\frac{\overline{x}^{(p)}_j+x^{(p)}_{j}}{2}\Big)\Big(\frac{\overline{x}^{(p)}_k+x^{(p)}_{k}}{2}\Big)\\ &&+\, i^2\sum\limits_{p=1}^{m}\Big(\frac{ \overline{x}^{(p)}_j-x^{(p)}_{j}}{2}\Big)\Big(\frac{\overline{x}^{(p)}_k-x^{(p)}_{k}}{2}\Big)\\ &=& \sum\limits_{p=1}^{m}(\Re x_j^{(p)} \Re x_k^{(p)} + \Im x_j^{(p)} \Im x_k^{(p)})\\ &=& \Re \Big(\sum\limits_{p=1}^{m}x_j^{(p)}\overline{x}^{(p)}_k\Big)\\ &=& \frac{\langle x_j, x_k\rangle + \langle x_k,x_j \rangle}{2}, \end{eqnarray*} where $\Re z$ and $\Im z$ denote the real and imaginary parts of $z$ respectively. 
In particular, we get that $\|R_j\|_{\ell^2_{\mathbb{R}}(2m)}=\|x_j\|_{\ell^2(m)}$ for each $j=1,\ldots,n.$ Since $A$ is a symmetric matrix, we have \begin{eqnarray*} \sum_{j,k=1}^{n} a_{jk}\langle x_j ,x_k \rangle &=& \sum_{j,k=1}^{n} a_{jk}\langle x_k ,x_j \rangle\\ &=& \sum_{j,k=1}^{n} a_{jk}\frac{\langle x_j ,x_k \rangle + \langle x_k ,x_j \rangle}{2}\\ &=& \sum_{j,k=1}^{n} a_{jk}\langle R_j ,R_k \rangle. \end{eqnarray*} This shows that for each $m\in\mathbb{N}$ and symmetric matrix $A,$ one gets the following identity \[\sup_{\|x_j\|_{\ell^2(m)}\leq 1}\Big|\sum a_{jk}\langle x_j ,x_k \rangle\Big|=\sup_{\|R_j\|_{\ell^2_{\mathbb{R}}(2m)}\leq 1}\Big|\sum a_{jk}\langle R_j ,R_k \rangle\Big|.\] This completes the proof. \end{proof} \begin{rema}\label{S(A)} For any matrix $A,$ one can associate a symmetric matrix $S(A)=(A+A^{\rm t})/{2},$ which has the property $\|p_{\!_{A}}\|_{\mathbb{D}^n,\infty}=\|p_{\!_{S(A)}}\|_{\mathbb{D}^n,\infty}.$ Moreover, if $A$ is a non-negative definite matrix then $S(A)$ is a real non-negative definite matrix. \end{rema} \begin{lemm}\label{LowerQuantity} Suppose $A=\big(\!\!\big(a_{jk}\big)\!\!\big)$ is a non-negative definite $n\times n$ matrix. Then \[\sup_{z_j\in \mathbb{T}}\Big|\sum_{j,k=1}^{n}a_{jk}z_jz_k\Big|=\sup_{s_j=\pm 1}\sum_{j,k=1}^{n}a_{jk}s_js_k.\] \end{lemm} \begin{proof} Suppose $A$ is an $n\times n$ complex non-negative definite matrix. From Remark \ref{S(A)}, we know that $S(A)$ is a real non-negative definite matrix with $\|p_{\!_{A}}\|_{\mathbb{D}^n,\infty}=\|p_{\!_{S(A)}}\|_{\mathbb{D}^n,\infty}.$ Thus, to prove this lemma, it suffices to work with real non-negative definite matrices only. Let $(z_1^{0},\ldots, z_n^{0})$ be a maximizing vector for $\|p_{\!_{A}}\|_{\mathbb{D}^n,\infty},$ i.e. 
$(z_1^{0},\ldots, z_n^{0})$ satisfies \[\sup_{z_j\in \mathbb{T}}\Big|\sum_{j,k=1}^{n}a_{jk}z_jz_k\Big|= \Big|\sum_{j,k=1}^{n}a_{jk}z_j^{0}z_k^{0}\Big|.\] Define $\tilde{z}_j=e^{-i\phi/2}z_j^{0}$ for $j=1,\ldots, n,$ where $\phi=\arg\big(\sum_{j,k=1}^{n}a_{jk}z_j^{0}z_k^{0}\big).$ Then, we have the following \[\sum_{j,k=1}^{n}a_{jk}\tilde{z}_j\tilde{z}_k=\Big|\sum_{j,k=1}^{n}a_{jk}z_j^{0}z_k^{0}\Big|.\] Therefore \[\sum_{j,k=1}^{n}a_{jk}\tilde{z}_j\tilde{z}_k+ \overline{\sum_{j,k=1}^{n}a_{jk}\tilde{z}_j\tilde{z}_k}= 2\Big|\sum_{j,k=1}^{n}a_{jk}z_j^{0}z_k^{0}\Big|\] and hence one concludes the following identity \[\sup_{z_j\in \mathbb{T}}\Big|\sum_{j,k=1}^{n}a_{jk}z_jz_k+ \overline{\sum_{j,k=1}^{n}a_{jk}z_jz_k}\Big|= 2\sup_{z_j\in \mathbb{T}}\Big|\sum_{j,k=1}^{n}a_{jk}z_jz_k\Big|.\] Since $A$ is a real non-negative definite matrix, we observe the following \begin{eqnarray*} \frac{1}{2}\sup_{z_j\in \mathbb{T}}\Big|\sum_{j,k=1}^{n}a_{jk}z_j z_k &+& \overline{\sum_{j,k=1}^{n}a_{jk}z_jz_k}\Big|=\sup_{\theta_j\in \mathbb{R}}\Big|\sum_{j,k=1}^{n}a_{jk}\cos(\theta_j+ \theta_k)\Big|\\ &=&\sup_{\theta_j\in \mathbb{R}}\Big|\sum_{j,k=1}^{n}a_{jk}(\cos\theta_j \cos\theta_k-\sin\theta_j \sin\theta_k)\Big|\\ &=& \sum_{j,k=1}^{n}a_{jk}\cos\delta_j \cos\delta_k-\sum_{j,k=1}^{n}a_{jk} \sin\delta_j \sin\delta_k\\ &\leq & \sum_{j,k=1}^{n}a_{jk}\cos\delta_j \cos\delta_k\\ &\leq & \sup_{s_j=\pm 1}\sum_{j,k=1}^{n}a_{jk}s_js_k, \end{eqnarray*} where $\delta_j,\, j=1,\ldots, n,$ are chosen such that $\sum_{j,k=1}^{n}a_{jk}\cos(\theta_j+ \theta_k)$ is positive and attains its maximum modulus at $\theta_j=\delta_j$ for $j=1,\ldots,n.$ The last inequality in the above computation can be deduced from the fact that ``for any convex subset $\Omega$ of $\mathbb{R}^n$ and any non-negative definite matrix $A$, the function $f:\Omega\to\mathbb{R}$ defined by $f(x)=\langle Ax,x\rangle$ is convex'' (see \cite[Corollary 3.9.5]{Hessian}). 
Thus we get the identity \[\sup_{z_j\in \mathbb{T}}\big|\sum_{j,k=1}^{n}a_{jk}z_jz_k\big|=\sup_{s_j=\pm 1}\sum_{j,k=1}^{n}a_{jk}s_js_k.\] This proves the claim. \end{proof} We now prove the main theorem as a corollary of Lemma \ref{UpperQuantity} and Lemma \ref{LowerQuantity}. For the proof, it will be convenient to introduce what we call Varopoulos operators. Let $\mathbb{H}$ be a separable Hilbert space and $\{e_j\}_{j\in\mathbb{N}}$ be an orthonormal basis for $\mathbb{H}$. For any $x\in\mathbb{H}$, define $x^{\sharp}:\mathbb{H}\to\mathbb C$ by $x^{\sharp}(y)=\sum_{j}x_jy_j,$ where $x=\sum_j x_je_j$ and $y=\sum_j y_je_j$. For $x,y\in\mathbb{H}$, we set $\left[x^{\sharp},y\right]=x^{\sharp}(y).$ Then $\mathbb{H}^{\sharp}:=\left\{x^{\sharp}:x\in\mathbb{H}\right\}$ is a Hilbert space when equipped with the operator norm. Since the map $\phi:\mathbb{H}\to\mathbb{H}^{\sharp}$ defined by $\phi(x)=x^{\sharp}$ is a surjective linear isometry, $\mathbb{H}^{\sharp}$ is linearly (as opposed to the usual anti-linear identification) isometrically isomorphic to $\mathbb{H}$. The following definition is taken from the Ph.D. thesis \cite{GR} of the first named author of this paper, submitted to the Indian Institute of Science in 2015. \begin{defi}[Varopoulos Operator] Let $\mathbb{H}$ be a separable Hilbert space. 
For $x,y\in\mathbb{H}$, define the Varopoulos operator $T_{x,y}:\mathbb C \oplus \mathbb{H} \oplus \mathbb C \to \mathbb C \oplus \mathbb{H} \oplus \mathbb C$, corresponding to the pair $(x,y)$, to be the linear transformation with the matrix representation: \[T_{x,y}= \left( \begin{array}{ccc} 0 & x^{\sharp} & 0\\ 0 & 0 & y\\ 0 & 0 & 0\\ \end{array} \right).\] \end{defi} Notice that for any pairs $(x_1,y_1)$ and $(x_2,y_2)$ in $\mathbb{H}\oplus \mathbb{H},$ the corresponding Varopoulos operators $T_{x_1,y_1}$ and $T_{x_2,y_2}$ commute if and only if $[x_1^{\sharp},y_2]=[x_2^{\sharp},y_1].$ When $x=y,$ we set $T_x:=T_{x,x}.$ Since for any $x_1,x_2\in\mathbb{H},$ we have $[x_1^\sharp,x_2]=[x_2^\sharp,x_1],$ the corresponding Varopoulos operators $T_{x_1}$ and $T_{x_2}$ commute. \begin{thm}\label{MainTheorem} $C_{2}^{+}=\pi/2.$ \end{thm} \begin{proof} Suppose $A$ is an $n\times n$ non-negative definite matrix and $(T_1,\ldots,T_n)$ is a tuple in $\mathscr{C}_n.$ Then \[p_{\!_A}(T_1,\ldots,T_n)=p_{\!_{S(A)}}(T_1,\ldots,T_n).\] If $b_{jk}$ denotes the $(j,k)$ entry of the matrix $S(A),$ then we have the following \begin{eqnarray*} \sup_{\|x\|\leq 1,\,\|y\|\leq 1}| \langle p_{\!_{S(A)}}(T_1,\ldots,T_n)x,y \rangle | &=& \sup_{\|x\|\leq 1,\,\|y\|\leq 1}\big| \sum_{j,k=1}^{n} b_{jk} \langle T_j x, T_k^* y\rangle \big|\\ & \leq & \sup_{\substack{\|x_j\|\leq 1,\,\|y_k\|\leq 1}}\big| \sum_{j,k=1}^{n} b_{jk}\langle x_j, y_k\rangle \big|\\ & \leq & \sup_{\substack{\|x_j\|\leq 1}}\big| \sum_{j,k=1}^{n} b_{jk}\langle x_j, x_k\rangle \big|. \end{eqnarray*} The last inequality is explained by the following computation, which can essentially be found in \cite{AN}. 
Since $(\!(b_{jk})\!)$ is a non-negative definite matrix, there exist $V_1,\ldots,V_n\in\mathbb{C}^n$ such that $b_{jk}=\langle V_j, V_k\rangle$ for each $j,k=1,\ldots,n.$ \begin{eqnarray*} \sup_{\substack{\|x_j\|\leq 1\\ \|y_k\|\leq 1}}\,\Big| \sum_{j,k=1}^{n} b_{jk} \langle x_j, y_k\rangle \Big| &=& \sup_{\substack{\|x_j\|\leq 1\\ \|y_k\|\leq 1}}\,\Big| \sum_{p=1}^{m}\big\langle\sum_{j=1}^{n}x_{jp}V_j,\sum_{k=1}^{n}{y}_{kp}V_k\big\rangle\Big|\\ &\leq & \sup_{\substack{\|x_j\|\leq 1\\ \|y_k\|\leq 1}} \sum_{p=1}^{m}\Big|\big\langle\sum_{j=1}^{n}x_{jp}V_j,\sum_{k=1}^{n}{y}_{kp}V_k\big\rangle\Big|\\ &\leq & \sup_{\substack{\|x_j\|\leq 1}} \Big(\sum_{p=1}^{m}\big\|\sum_{j=1}^{n}x_{jp}V_j\big\|^2\Big)^{1/2}\\ &&\sup_{\|y_k\|\leq 1} \Big(\sum_{p=1}^{m}\big\|\sum_{k=1}^{n}{y}_{kp}V_k\big\|^2\Big)^{1/2}\\ &=& \sup_{\|x_j\|\leq 1}\sum_{p=1}^{m}\big\|\sum_{j=1}^{n}x_{jp}V_j\big\|^2\\ &=& \sup_{\|x_j\|\leq 1} \sum_{p=1}^{m}\big\langle\sum_{j=1}^{n}x_{jp}V_j,\sum_{k=1}^{n}{x}_{kp}V_k\big\rangle\\ &=& \sup_{\|x_j\|\leq 1}\big| \sum_{j,k=1}^{n} b_{jk} \langle x_j, x_k\rangle \big|.\\ \end{eqnarray*} Therefore, we get the following inequality \[\sup_{\|x\|\leq 1,\,\|y\|\leq 1}| \langle p_{\!_{S(A)}}(T_1,\ldots,T_n)x,y \rangle |\leq \sup_{\|x_j\|\leq 1}\big| \sum_{j,k=1}^{n} b_{jk} \langle x_j, x_k\rangle \big|.\] Also note that \begin{eqnarray*} \|(\!(b_{jk})\!)\|_{\ell^\infty_\mathbb{R}(n)\to \ell^1_\mathbb{R}(n)}&=& \sup_{s_j,t_k \in [-1,1]}\Big| \sum_{j,k=1}^{n} b_{jk} s_j t_k \Big|\\ &=& \sup_{s_j,t_k \in [-1,1]}\Big| \big\langle\sum_{j=1}^{n}s_{j}V_j,\sum_{k=1}^{n}{t}_{k}V_k\big\rangle\Big|\\ &\leq & \sup_{s_j,t_k \in [-1,1]} \Big(\big\|\sum_{j=1}^{n}s_{j}V_j\big\|^2\Big)^{1/2} \Big(\big\|\sum_{k=1}^{n}{t}_{k}V_k\big\|^2\Big)^{1/2}\\ &=& \sup_{s_j \in [-1,1]}\big\|\sum_{j=1}^{n}s_{j}V_j\big\|^2\\ &=& \sup_{s_j \in [-1,1]}\Big| \sum_{j,k=1}^{n} b_{jk} s_j s_k \Big|. 
\end{eqnarray*} Thus we get the identity $\|(\!(b_{jk})\!)\|_{\ell^\infty_\mathbb{R}(n)\to \ell^1_\mathbb{R}(n)}=\|p_{\!_{S(A)}}\|_{[-1,1]^n,\infty}.$ By this identity and Lemma \ref{LowerQuantity} we get the following \begin{eqnarray}\label{preVIandGI} \sup_{\substack{A\in M_n^+(\mathbb{C})\setminus \{0\}}}&&\!\!\!\!\!\!\!\!\!\!\!\frac{\sup_{\boldsymbol T\in \mathscr{C}_n}\|p_{\!_A}(T_1,\ldots,T_n)\|}{\|p_{\!_A}\|_{\mathbb{D}^n,\infty}}\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\leq \sup_{\substack{B\in M_n^+(\mathbb{R})\setminus \{0\}}}\frac{\sup_{\substack{\|x_j\|_{\ell^2}\leq 1}}\big| \sum_{j,k=1}^{n} b_{jk}\langle x_j, x_k\rangle \big|}{\|B\|_{\ell^{\infty}_{\mathbb{R}}(n)\to \ell^1_{\mathbb{R}}(n)}}\nonumber. \end{eqnarray} Using the definition of $C_{2}^{+},$ Lemma \ref{UpperQuantity} and the inequality \eqref{preVIandGI}, we get the following \begin{eqnarray*} C_{2}^{+} \leq \!\!\!\!\!\sup_{\substack{B\in M_n^+(\mathbb{R})\setminus \{0\}}}\frac{\sup_{\substack{\|R_j\|_{\ell^2_{\mathbb{R}}}\leq 1}}\big| \sum_{j,k=1}^{n} b_{jk}\langle R_j, R_k\rangle \big|}{\|B\|_{\ell^{\infty}_{\mathbb{R}}(n)\to \ell^1_{\mathbb{R}}(n)}}=K_G^+(\mathbb{R})=\pi/2. 
\end{eqnarray*} Fix an $n\times n$ non-negative definite matrix $A$ and $x_j\in \mathbb C^m,$ $j=1,\ldots,n,$ for some $m\in \mathbb{N}.$ Define the Varopoulos operator $T_j\,(=T_{R_j})$ corresponding to the vector $R_j\in\mathbb{R}^{2m}\,(\subset \mathbb{C}^{2m}),$ where $R_j=(\frac{\overline{x}_j+x_j}{2},i\frac{\overline{x}_j-x_j}{2})$ for $j=1,\ldots,n.$ Then, taking the form of $p_{\!_A}(T_1,\ldots,T_n)$ into account, we get \[\|p_{\!_A}(T_1,\ldots,T_n)\|=\sum_{j,k=1}^{n} a_{jk}\langle R_j, R_k\rangle=\sum_{j,k=1}^{n} \big(\frac{a_{jk}+a_{kj}}{2}\big)\langle R_j, R_k\rangle.\] We use Lemma \ref{LowerQuantity} to subsequently obtain \begin{eqnarray*} \sup_{\substack{A\in M_n^+(\mathbb{C})\setminus \{0\}}}&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac{\sup_{\boldsymbol T\in \mathscr{C}_n}\|p_{\!_A}(T_1,\ldots,T_n)\|}{\ \ \ \ \ \ \ \ \ \, \|p_{\!_A}\|_{\mathbb{D}^n,\infty}}\\ &&\geq \sup_{\substack{B\in M_n^+(\mathbb{R})\setminus \{0\}}}\frac{\sup_{\substack{\|R_j\|_{\ell^2_{\mathbb{R}}}\leq 1}}\big| \sum_{j,k=1}^{n} b_{jk}\langle R_j, R_k\rangle \big|}{\|B\|_{\ell^{\infty}_{\mathbb{R}}(n)\to \ell^1_{\mathbb{R}}(n)}}. \end{eqnarray*} This proves the theorem. 
\end{proof} Theorem \ref{MainTheorem} shows that if we restrict ourselves to the set of homogeneous polynomials of degree two coming from the real non-negative definite matrices, then we obtain an analogue of the Varopoulos inequality in which the factor $2$ on the right disappears and the constant $K_G^{\mathbb C}$ is replaced by $K_G^{+}(\mathbb{R}),$ that is, \[\sup_{n, p_A}\|p_{\!_A}(T_1,\ldots,T_n)\|=K_G^{+}(\mathbb{R}),\] where the supremum is taken over all homogeneous polynomials $p_{\!_A}$ of supremum norm at most $1$ with $A$ a non-negative definite matrix, the tuple $(T_1,\ldots,T_n)$ being arbitrary in $\mathscr{C}_n$ and $n\in\mathbb{N}.$ The next corollary shows that $\lim_{n\to\infty} C_2(n)/K^\mathbb C_G > 1.$ This, in turn, answers in the negative a long-standing question of Varopoulos raised in \cite{V2}. \begin{coro} \label{Ratio > 1} For some $\epsilon>0,$ we have the inequality $$\lim_{n\to \infty}C_2(n)\geq (1+\epsilon) K_{G}^{\mathbb C}.$$ \end{coro} \begin{proof} We know that $C_2(n)\geq C_{2}^{+}(n)$ for each $n\in \mathbb{N}$ and that $K_G^\mathbb C\leq 1.4049.$ Therefore we have the following \[\lim_{n\to \infty}C_2(n) \geq \lim_{n\to \infty}C_{2}^{+}(n)=\pi/2> K_{G}^{\mathbb C}.\] To complete the proof, one can take $\epsilon=0.118,$ since $\pi/2\geq 1.118\times 1.4049\geq 1.118\, K_{G}^{\mathbb C}.$ \end{proof} A variant of Corollary \ref{Ratio > 1} on $L^p$-spaces can be found in \cite{SRK}. In the next theorem, we compute a lower bound for the norm of $\mathscr{A}_n$ as $n$ tends to infinity. 
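The Varopoulos operators used in the proof above admit a simple finite-dimensional realization. The following sketch is our own numerical illustration (real vectors, hypothetical helper names): it builds $T_x$ for three unit vectors in $\mathbb{R}^2$ at mutual angles of $120^\circ$ and evaluates the Varopoulos--Kaijser polynomial on them, reproducing the classical value $6>5=\|p\|_{\mathbb{D}^3,\infty}$.

```python
import numpy as np

def varopoulos_op(x):
    """T_x on C (+) R^m (+) C: the strictly upper-triangular block matrix
    [[0, x^t, 0], [0, 0, x], [0, 0, 0]]; a contraction whenever ||x|| <= 1."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    T = np.zeros((m + 2, m + 2))
    T[0, 1:m + 1] = x        # the functional x^sharp, as a row
    T[1:m + 1, m + 1] = x    # the vector x, as a column
    return T

# three unit vectors in R^2 with pairwise inner product -1/2
xs = [np.array([np.cos(2 * np.pi * j / 3), np.sin(2 * np.pi * j / 3)]) for j in range(3)]
Ts = [varopoulos_op(x) for x in xs]

# Varopoulos--Kaijser polynomial: sum_j z_j^2 - 2 sum_{j<k} z_j z_k
A = np.array([[1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
pT = sum(A[j, k] * Ts[j] @ Ts[k] for j in range(3) for k in range(3))

print(np.linalg.norm(Ts[0], 2))   # approximately 1: each T_x is a contraction
print(np.linalg.norm(pT, 2))      # approximately 6: exceeds the sup norm 5
```

Here $T_jT_k$ has a single nonzero entry $\langle x_j,x_k\rangle$ in the top-right corner, so $\|p_{\!_A}(T_1,T_2,T_3)\|=\big|\sum_{j,k}a_{jk}\langle x_j,x_k\rangle\big|=6.$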
This improves a bound obtained earlier in an unpublished paper of Holbrook and Schoch where they had proved that $\lim_{n\to \infty}\|\mathscr{A}_n\|\geq 1.2323.$ \begin{thm} $\lim_{n\to \infty}\|\mathscr{A}_n\|\geq \frac{\pi^2}{8}.$ \end{thm} \begin{proof} Fix a natural number $l.$ Since $\lim_{n\to \infty}C_{2}^{+}(n)=\pi/2,$ there exist $n\in\mathbb{N}$ and a real non-negative definite matrix $A_l=(\!(a^{(l)}_{jk})\!)$ of size $n$ such that \[\frac{\sup_{\boldsymbol T\in \mathscr{C}_n}\|\sum a^{(l)}_{jk}T_jT_k\|}{\sup_{z_j\in\mathbb{T}}|\sum a^{(l)}_{jk}z_jz_k|}\geq \pi/2 -1/l.\] By Lemma \ref{UpperQuantity} and the fact that $K_G^+(\mathbb C)=4/\pi,$ we get the following for the matrix $A_l$ \begin{eqnarray*} 4/\pi &\geq & \frac{\sup_{\substack{\|x_j\|_{\ell^{2}}\leq 1}}\sum_{j,k=1}^{n}a^{(l)}_{jk}\langle x_j,x_k \rangle}{\|A_l\|_{\ell^\infty(n)\to \ell^1(n)}}\\ &=&\frac{\sup_{\substack{\|R_j\|_{\ell^{2}_{\mathbb{R}}}\leq 1}}\sum_{j,k=1}^{n}a^{(l)}_{jk}\langle R_j,R_k \rangle}{\|A_l\|_{\ell^\infty(n)\to \ell^1(n)}}\\ &=& \frac{\sup_{\boldsymbol T\in\mathscr{C}_n}\|\sum a^{(l)}_{jk}T_jT_k\|}{\sup_{z_j\in\mathbb{T}}|\sum a^{(l)}_{jk}z_jz_k|}\cdot \frac{\sup_{z_j\in\mathbb{T}}|\sum a^{(l)}_{jk}z_jz_k|}{\|A_l\|_{\ell^\infty(n)\to \ell^1(n)}}\\ &\geq & (\pi/2-1/l)\frac{\!\!\!\!\!\!\|p_{\!_{A_l}}\|_{\mathbb{D}^n,\infty}}{\ \ \|A_l\|_{\ell^\infty(n)\to \ell^1(n)}}. 
\end{eqnarray*} Rewriting the inequality appearing above, we see that, for each $l\in \mathbb{N},$ there exists a symmetric matrix $A_l$ such that for the corresponding polynomial $p_{\!_{A_l}},$ we have the following estimate \[\frac{\|A_l\|_{\ell^\infty(n)\to \ell^1(n)}}{\!\!\!\!\!\!\|p_{\!_{A_l}}\|_{\mathbb{D}^n,\infty}}\geq \frac{(\pi/2-1/l)}{4/\pi}.\] Taking the supremum over all natural numbers $l$ on both sides, we get the following inequality \[\sup_{l\in\mathbb{N}}\frac{\|A_l\|_{\ell^\infty(n)\to \ell^1(n)}}{\!\!\!\!\!\!\|p_{\!_{A_l}}\|_{\mathbb{D}^n,\infty}}\geq \frac{\pi^2}{8}\approx 1.2337.\] The result follows immediately. \end{proof} \section{Varopoulos--Kaijser type examples and the constant \texorpdfstring{$C_2(n)$}{C2(n)}}\label{MaximizingLemma} In this section, we focus on the constant $C_2(n)$ in more detail. We discuss the asymptotic behaviour of $C_2(n)$ and exhibit an explicit class of examples of Varopoulos--Kaijser type. For this, we mainly rely on a construction which appeared in \cite{FJ}. Here, our main tool is a rather general maximizing lemma which may also be of independent interest. For $n=3$, we construct a wide class of Varopoulos--Kaijser type examples for which the von Neumann inequality fails and show that the Varopoulos--Kaijser example is extremal in a class of certain $3\times 3$ symmetric matrices including the symmetric sign matrices. In \cite{FJ}, the authors produced an explicit set of real non-negative definite matrices for which the ratio defining the positive Grothendieck constant approaches $1.5$. We briefly describe their matrices as follows: for $l=k(k-1)$, define $F_k=\{v_1,\dots,v_{k(k-1)}\}$, the set of all $k$-dimensional vectors with exactly two non-zero components, either $1$ and $1$ or $1$ and $-1,$ appearing in that order. Define a real $l\times l$ non-negative definite matrix $A_k=(\!(a_{ij})\!)_{1\leq i,j\leq l}$ as $a_{ij}=\langle v_i,v_j\rangle$. 
In \cite{FJ}, the authors showed that $$\frac{\sup_{\|X_i\|_2=1}\sum_{i,j=1}^la_{ij}\langle X_i,X_j\rangle}{{\sup_{\omega_i\in\{1,-1\}}}\sum_{i,j=1}^la_{ij}\omega_i\omega_j}=\frac{3k-3}{2k-1}.$$ Thus, in view of Lemma \ref{LowerQuantity}, we get a large class of Varopoulos--Kaijser like examples for which the von Neumann inequality fails, and $C_2(k(k-1))\geq\frac{3k-3}{2k-1}$. Notice that $\frac{3k-3}{2k-1}$ is increasing in $k$ and increases to $\frac{3}{2}$ as $k$ tends to infinity. Hence this explicit class of matrices yields the lower bound $\lim_{n\to\infty} C_2(n)\geq\frac{3}{2}.$ Though we have already obtained the better lower bound $\frac{\pi}{2}$ for $\lim_{n\to\infty} C_2(n),$ it would be interesting to obtain an estimate of $C_2(n)$ as a function of $n$. Motivated by the example of Varopoulos and Kaijser, provided in \cite{V1}, we develop the following maximizing lemma which enables us to compute the supremum norm of polynomials. \begin{lemm}[Maximizing Lemma]\label{ML} Let $\Omega$ be a bounded domain in $\mathbb{R}^n$. Suppose $F=(f_1,\dots,f_m):\overline{\Omega}\subseteq\mathbb{R}^n\to \mathbb{C}^m$ is continuously differentiable and bounded, with $f_j$ non-vanishing for $j=1,\ldots,m$. Then we have the following $$\big\{x\in\overline{\Omega}:\|F(x)\|_{\ell^\infty(m)}=\|F\|_{\Omega,\infty}\big\}\subseteq\bigcup_{j=1}^m\big\{x\in\overline{\Omega}:\dim_{\mathbb{R}}\bigvee_{k=1}^n\frac{\partial f_j}{\partial x_k}(x)\leq 1\big\},$$ where $\bigvee\{v_1,\ldots,v_n\}$ denotes the subspace spanned by the vectors $v_1,\ldots,v_n$ and $\dim_{\mathbb{R}}$ denotes the dimension of the space over the field of real numbers $\mathbb{R}.$ \end{lemm} \begin{proof} We notice the following inclusion \begin{equation*} \{x\in\overline{\Omega}:\|F(x)\|_{\ell^\infty(m)}=\|F\|_{\Omega,\infty}\}\subseteq \bigcup_{j=1}^m\big\{x\in\overline{\Omega}:|f_j(x)|=\|f_j\|_{\Omega,\infty}\big\}. 
\end{equation*} For $f_j:\overline{\Omega}\subseteq\mathbb{R}^n\to \mathbb{C},$ we have \begin{eqnarray}\label{Modulusfj} |f_j|^2=(\Re f_j)^2+(\Im f_j)^2. \end{eqnarray} Differentiating \eqref{Modulusfj} with respect to $x_k$ on both sides, we get the following \begin{equation}\label{PR} \frac{\partial}{\partial x_k}|f_j|^2=2\Re f_j\Re\frac{\partial f_j}{\partial x_k}+2\Im f_j\Im\frac{\partial f_j}{\partial x_k}. \end{equation} By \eqref{PR} and the fact that, at the point of maximum of $|f_j|,$ all the partial derivatives of $|f_j|^2$ are zero, we obtain \begin{equation}\label{pr1} \langle(\Re f_j,\Im f_j),\big(\Re\frac{\partial f_j}{\partial x_k},\Im\frac{\partial f_j}{\partial x_k}\big)\rangle=0,\ \forall\ 1\leq k\leq n. \end{equation} From Equation \eqref{pr1} and the fact that $f_j$ is non-vanishing, we observe that, at the point of maximum of $|f_j|^2$, the two-dimensional vectors $\big(\Re\frac{\partial f_j}{\partial x_k},\Im\frac{\partial f_j}{\partial x_k}\big),$ $1\leq k\leq n,$ all lie on the line in $\mathbb{R}^2$ passing through the origin and orthogonal to $(\Re f_j,\Im f_j).$ Therefore, \begin{equation} \big\{x\in\overline{\Omega}:|f_j(x)|=\|f_j\|_{\Omega,\infty}\big\}\subseteq\big\{x\in\overline{\Omega}:\dim_\mathbb{R}\bigvee_{k=1}^n\frac{\partial f_j}{\partial x_k}(x)\leq 1\big\}. \end{equation} This completes the proof. \end{proof} \begin{rema} To disprove the von Neumann inequality in three variables, Varopoulos and Kaijser \cite{V1} considered an explicit homogeneous polynomial of degree two in three variables. While the computation of the supremum norm of this particular polynomial is briefly indicated in their paper, we indicate below, using Lemma \ref{ML}, how to compute the supremum norm of the Varopoulos--Kaijser polynomial. Of course, this recipe applies to the entire class of Varopoulos polynomials. 
\end{rema} The Varopoulos--Kaijser polynomial is the following homogeneous polynomial of degree two $$p(z_1,z_2,z_3)=z_1^2+z_2^2+z_3^2-2z_1z_2-2z_2z_3-2z_3z_1.$$ Without loss of generality, we may take the supremum over $\{(z_1,z_2,z_3)=(1,e^{i\theta},e^{i\phi}): \theta,\phi\in \mathbb{R}\}$ in the expression of $p(z_1,z_2,z_3)$ to compute the quantity $\|p\|_{\mathbb{D}^3,\infty}$. If we consider the function $g(\theta,\phi)=1+e^{2i\theta}+e^{2i\phi}-2e^{i\theta}-2e^{i\phi}-2e^{i(\theta+\phi)}$ then $\|p\|_{\mathbb{D}^3,\infty}=\sup_{\theta,\phi\in \mathbb{R}}|g(\theta,\phi)|.$ Taking partial derivatives of $g$ with respect to $\theta$ and $\phi,$ we get the following expressions \begin{eqnarray*} \frac{\partial g}{\partial\theta}(\theta,\phi)&=&2ie^{2i\theta}-2ie^{i\theta}-2ie^{i(\theta+\phi)},\\ \frac{\partial g}{\partial\phi}(\theta,\phi)&=&2ie^{2i\phi}-2ie^{i\phi}-2ie^{i(\theta+\phi)}. \end{eqnarray*} Applying Lemma \ref{ML}, we obtain that, at the point of maximum of $|g|,$ the vectors $\frac{1}{2i}\frac{\partial g}{\partial\theta}$, $\frac{1}{2i}\frac{\partial g}{\partial\phi}$ and $g(\theta,\phi)-\frac{1}{2i}(\frac{\partial g}{\partial\theta}+\frac{\partial g}{\partial\phi})$ lie on a line passing through the origin in $\mathbb{R}^2.$ Note that \begin{eqnarray*} g(\theta,\phi)-\frac{1}{2i}\Big(\frac{\partial g}{\partial\theta}+\frac{\partial g}{\partial\phi}\Big)&=&1-e^{i\theta}-e^{i\phi},\\ \frac{1}{2i}\frac{\partial g}{\partial\theta}-\frac{1}{2i}\frac{\partial g}{\partial\phi}&=&(e^{i\theta}-e^{i\phi})(e^{i\theta}+e^{i\phi}-1). \end{eqnarray*} Therefore, at the point of maximum, $1-e^{i\theta}-e^{i\phi}$ and $(e^{i\theta}-e^{i\phi})(e^{i\theta}+e^{i\phi}-1)$ lie on a line passing through the origin in $\mathbb{R}^2$. Since $1-e^{i\theta}-e^{i\phi}$ and $e^{i\theta}+e^{i\phi}-1$ are always collinear, one must have $1-e^{i\theta}-e^{i\phi}=0$ or $\arg(e^{i\theta}-e^{i\phi})\in\{0,\pi\}$. 
From this, it can be concluded that $\|p\|_{\mathbb{D}^3,\infty}=5.$ We would also like to bring to the reader's attention \cite{HJ}, where the computation of the supremum norm of this polynomial has been carried out. \subsection{Extremal behaviour of the Varopoulos--Kaijser example} In this subsection, we show that the Varopoulos--Kaijser example is extremal, in a certain sense, if we restrict ourselves to a class of symmetric $3\times 3$ matrices which includes the symmetric sign matrices. Sign matrices are matrices each of whose entries is either $1$ or $-1$. The Varopoulos--Kaijser example gives a lower bound for the quantity $C_2(\boldsymbol \delta_3)$ and, using an extreme point method, we establish an upper bound for it. For this we need the following definitions. \begin{defi}[Correlation Matrix] A correlation matrix is a complex non-negative definite matrix all of whose diagonal entries are equal to $1$. We denote the set of all $n\times n$ correlation matrices by $\mathscr{C}(n)$. \end{defi} For every natural number $n,$ define the set $\boldsymbol{\delta}_n$ by \[ \boldsymbol{\delta}_n=\big\{(T_{x_1},\ldots,T_{x_n}):x_j\in \ell^2_\mathbb{R} \mbox{ with }\|x_j\|\leq 1\mbox{ for }j=1,\ldots,n \big\}.\] Define $C_2(\boldsymbol{\delta}_n):=\sup\big\{ \|p(\boldsymbol T)\|:\boldsymbol T\in\boldsymbol\delta_n,\,p\in\mathbb C_2^s[Z_1,\ldots,Z_n]\mbox{ with }\|p\|_{\mathbb{D}^n,\infty}\leq 1\big\}.$ For the rest of this section, we consider only the tuples of commuting and contractive Varopoulos operators in $\boldsymbol{\delta}_n$. 
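The value $\|p\|_{\mathbb{D}^3,\infty}=5$ obtained above for the Varopoulos--Kaijser polynomial can also be confirmed by a direct numerical search over the torus (a brute-force sanity check of ours, not a proof; by the maximum principle and homogeneity the supremum reduces to $z_1=1$ and $z_2,z_3$ on the unit circle):

```python
import numpy as np

# g(theta, phi) = p(1, e^{i theta}, e^{i phi}) for the Varopoulos--Kaijser polynomial
theta = np.linspace(0, 2 * np.pi, 201)   # grid containing theta = pi exactly
t, f = np.meshgrid(theta, theta)
g = (1 + np.exp(2j * t) + np.exp(2j * f)
     - 2 * np.exp(1j * t) - 2 * np.exp(1j * f) - 2 * np.exp(1j * (t + f)))
print(np.abs(g).max())   # approximately 5, attained e.g. at theta = phi = pi
```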
From the definition, it follows that $C_2(\boldsymbol{\delta}_n)\leq C_2(n).$ We prove the following bound on $C_2(\boldsymbol \delta_3).$ \begin{rema}\label{RestrictedVaropoulos} Given $x_1,\ldots,x_n\in \ell^2,$ we can define the vector $R_j=\big(\frac{\overline{x}_j+x_j}{2},i\frac{\overline{x}_j-x_j}{2}\big)$ for $j=1,\ldots, n.$ As noticed in Lemma \ref{UpperQuantity}, we know that $\langle R_j,R_k\rangle=\frac{\langle x_j, x_k\rangle + \langle x_k,x_j \rangle}{2}$. Hence for any degree two homogeneous polynomial $p(z_1,\ldots,z_n)=\sum_{j,k=1}^{n}a_{jk}z_jz_k$, where $(\!(a_{jk})\!)$ is a symmetric matrix, we get that \[\sup_{\boldsymbol T\in \boldsymbol\delta_n}\|p(\boldsymbol T)\|=\sup_{\|x_j\|_{\ell^2}\leq 1}\big|\sum_{j,k=1}^{n}a_{jk}\langle x_j,x_k\rangle\big|.\] \end{rema} \begin{thm}\label{HS} $1.2\leq C_2(\boldsymbol \delta_3)\leq\frac{3\sqrt{3}}{4}.$ \end{thm} \begin{proof} The fact that $C_2(\boldsymbol \delta_3)\geq 1.2$ follows from \cite{HJ}. Given a complex $n\times n$ matrix $A,$ we define the following quantity $$\beta(A):=\sup_{B\in\mathscr{C}(n)}|\langle A,B\rangle|.$$ Every correlation matrix $B$ can be written as $(\!(\langle x_i,x_j\rangle)\!)$ for some unit vectors $x_i,$ $1\leq i\leq n,$ and vice versa. Let $U$ denote the unit ball of $\mathbb{C}_2^s[Z_1,\ldots, Z_n]$ with respect to the supremum norm over the polydisc $\mathbb{D}^n.$ Let $A_p$ denote the symmetric matrix corresponding to $p\in \mathbb{C}_2^s[Z_1,\ldots, Z_n]$. Then, using Remark \ref{RestrictedVaropoulos}, we get \begin{eqnarray*} \sup_{p\in U}\beta(A_p)&=&\sup_{p\in U}\sup_{\|x_i\|_{\ell^2}=1}\big|\sum_{i,j=1}^na_{ij}\langle x_i,x_j \rangle\big|\\ &=& C_2(\boldsymbol \delta_n). 
\end{eqnarray*} Since the map $B\mapsto \langle A,B\rangle$ is linear in $B$ and $\mathscr{C}(n)$ is a compact convex set, we conclude that \begin{eqnarray}\label{LinearInB} \beta(A_p)=\sup_{B\in E(\mathscr{C}(n))}|\langle A_p,B\rangle|, \end{eqnarray} where $E(\mathscr{C}(n))$ is the set of all extreme points of $\mathscr{C}(n)$. Since all the elements of $E(\mathscr{C}(n))$ have rank at most $\sqrt{n}$ (\cite{LCB}), for $n=3$ we conclude that the extreme correlation matrices have rank one. If the correlation matrix $(\!(\langle x_i,x_j\rangle)\!)$ is of rank $1$, then $\bigvee\{x_i;1\leq i \leq n\}$ is one dimensional. Using Equation \eqref{LinearInB} and \cite{RG}, for $n=3$, we obtain the following \begin{eqnarray*} \beta(A_p)&=&\sup_{|z_i|=1}\Big|\sum_{i,j=1}^{n}a_{ij}z_i\bar{z}_{j}\Big|\\ &\leq &\|A_p\|_{\infty\to 1}\\ &\leq &\frac{3\sqrt{3}}{4}\sup_{|z_i|=1}\Big|\sum_{i,j=1}^{n}a_{ij}z_i{z}_{j}\Big|. \end{eqnarray*} This completes the proof of the theorem. \end{proof} \begin{rema} In an unpublished work, Holbrook and Schoch have shown that $C_2(\boldsymbol \delta_3)\geq 1.2323$. Their method relies on an explicit construction of a degree two homogeneous polynomial as in Varopoulos--Kaijser (replacing the coefficients $-2$ in the Varopoulos--Kaijser polynomial by approximately $-2.5959$). In view of this and Theorem \ref{HS}, we have $1.2323\leq C_2(\boldsymbol \delta_3)\leq\frac{3\sqrt{3}}{4}\approx 1.2990.$ \end{rema} The following table shows that the Varopoulos--Kaijser polynomial is extremal among the set of all symmetric sign matrices of order $3$ as far as the ratio $\|p(T_1,T_2,T_3)\|/\|p\|_{\mathbb{D}^3,\infty}$ is concerned, where $T_1,T_2$ and $T_3$ are commuting Varopoulos operators. 
The total number of symmetric sign matrices of order $3$ is $2^6.$ To compute $\|p\|_{\mathbb{D}^3,\infty}$ and $\|p(T_1,T_2,T_3)\|,$ without loss of generality, we can assume that every entry in the first row and first column of $A_p,$ the symmetric matrix corresponding to $p,$ is one. Since $\|p\|_{\mathbb{D}^3,\infty}$ and $\|p(T_1,T_2,T_3)\|$ are invariant under $SA_pS^{-1}$ for every permutation matrix $S$ of order $3,$ we are left with the following $6$ inequivalent matrices, for which we use Lemma \ref{ML} to compute the supremum norm of the polynomials. \begin{center} \begin{tabular}{|c|c|c|} \hline $ A_p$ & $\|p\|_{\mathbb{D}^3,\infty}$ & $\|p(T_1,T_2,T_3)\|$\\ \hline $\left( \begin{array}{rrr} 1 & 1 & 1\\ 1 & -1 & 1\\ 1 & 1 & -1\\ \end{array} \right) $ & $5$ & $5$\\ \hline $\left( \begin{array}{rrr} 1 & 1 & 1\\ 1 & 1 & -1\\ 1 & -1 & 1\\ \end{array} \right) $ & $5$ & $6$\\ \hline $\left( \begin{array}{rrr} 1 & \ \ 1 & 1\\ 1 & \ \ 1 & 1\\ 1 & \ \ 1 & -1\\ \end{array} \right) $ & $5\sqrt{2}$ & $7$\\ \hline $\left( \begin{array}{rrr} 1 & 1 & 1\\ 1 & 1 & -1\\ 1 & -1 & -1\\ \end{array} \right) $ & $7$ & $5$\\ \hline $\left( \begin{array}{rrr} 1 & 1 & 1\\ 1 & -1 & -1\\ 1 & -1 & -1\\ \end{array} \right) $ & $\sqrt{41}$ & $4$\\ \hline $\left( \begin{array}{rrr} 1 & \ \ 1 & \ \ 1\\ 1 & \ \ 1 & \ \ 1\\ 1 & \ \ 1 & \ \ 1\\ \end{array} \right) $ & $9$ & $9$\\ \hline \end{tabular} \end{center} We show that the Varopoulos--Kaijser polynomial is also extremal among the following matrices \[\mathcal{S}:=\Big\{\mathscr{B}_\alpha=\left( \begin{smallmatrix} 1 & 1 & 1\\ 1 & 1 & \alpha\\ 1 & \alpha & 1\\ \end{smallmatrix} \right):\alpha\in \mathbb{R}\Big\}.\] For $\alpha\geq 0,$ the ratio $\|p(T_1,T_2,T_3)\|/\|p\|_{\mathbb{D}^3,\infty}$ is always $1.$ Hence we consider the case $\alpha<0.$ Explicit computation shows that, for every symmetric matrix $\mathscr{B}_\alpha$ in $\mathcal{S},$ the corresponding homogeneous polynomial $p_{\!_{\mathscr{B}_\alpha}}$ of degree two
satisfies the following $$\|p_{\!_{\mathscr{B}_\alpha}}\|_{{\mathbb{D}^3,\infty}}=\sup_{\theta,\phi\in \mathbb{R}}\big\{|1+e^{i\theta}+e^{i\phi}+e^{i\theta}(1+e^{i\theta}+\alpha e^{i\phi})+e^{i\phi}(1+e^{i\phi}+\alpha e^{i\theta})|\big\}.$$ Suppose $f(\theta,\phi)=|1+e^{i\theta}+e^{i\phi}+e^{i\theta}(1+e^{i\theta}+\alpha e^{i\phi})+e^{i\phi}(1+e^{i\phi}+\alpha e^{i\theta})|.$ Using Lemma \ref{ML}, at a point of maximum of $f$, we get that $1+e^{i\theta}+e^{i\phi}, \ e^{i\theta}(1+e^{i\theta}+\alpha e^{i\phi})\ \text{and}\ e^{i\phi}(1+e^{i\phi}+\alpha e^{i\theta})\ \text{are collinear}.$ Subtracting the third vector from the second vector, we get that $1+e^{i\theta}+e^{i\phi}$ and $(e^{i\theta}-e^{i\phi})(1+e^{i\theta}+e^{i\phi})$ are collinear. We break the computation into the following two cases. \textbf{Case 1:}\label{Case1} If $1+e^{i\theta}+e^{i\phi}$ is zero, then the maximum of $f$ is $2-2\alpha.$ \textbf{Case 2:} Suppose $1+e^{i\theta}+e^{i\phi}\neq 0.$ Then $1$ and $e^{i\theta}-e^{i\phi}$ are collinear, and therefore at a point of maximum of $f$ we get that $\theta=\phi$ or $\theta=\phi+\pi.$ We deal with this case in the form of the following two subcases. \begin{itemize} \item[1.] When $\theta=\phi$, we need to maximize $f(\theta,\theta)=|1+4e^{i\theta}+2(1+\alpha)e^{2i\theta}|$ over $\theta\in \mathbb{R}.$ We observe that $f(\theta,\theta)=(17+4(1+\alpha)^2+8(3+2\alpha)\cos\theta +4(1+\alpha)\cos 2\theta)^{1/2}.$ At a critical point $\theta_0$ of $f,$ we get that \[\cos \theta_0=-\frac{3+2\alpha}{2(1+\alpha)}\mbox{ or }\sin\theta_0=0.\] If $\alpha> -5/4$ then the only possibility is $\sin\theta_0=0$, i.e., $e^{i\theta_0}=\pm 1.$ In this case, the maximum of $f$ is either $|7+2\alpha|$ or $1-2\alpha.$ As Case 1 suggests, the quantity $1-2\alpha$ cannot be the maximum.
In this subcase, if $\alpha\leq -5/4$, then at $\cos \theta_0=-\frac{3+2\alpha}{2(1+\alpha)},$ $$f(\theta_0,\theta_0)=\Big(17+4(1+\alpha)^2-2\frac{(3+2\alpha)^2}{1+\alpha}-4(1+\alpha)\Big)^{1/2}.$$ \item[2.] When $\theta=\phi+\pi$, the maximum of $f$ is $3-2\alpha.$ This subcase proves the redundancy of Case 1 as far as the maximum of $f$ is concerned. \end{itemize} Comparison of all the possible cases and explicit computation tells us that, for $\alpha<0,$ the norm of the homogeneous polynomial $p_{\!_{\mathscr{B}_\alpha}}$ of degree two is the following continuous function: \[\|p_{\!_{\mathscr{B}_\alpha}}\|_{\mathbb{D}^3,\infty}= \begin{cases} 7+2\alpha, & \alpha>-1\\ 3-2\alpha, & \alpha\leq -1\\ \end{cases} \] We define the quantity $M_{\mathscr{B}_\alpha}=\sup_{|z_j|=1}\big|\sum a_{jk}z_j\overline{z}_k\big|.$ By Remark \ref{RestrictedVaropoulos} and from \cite{LCB}, we see that $M_{\mathscr{B}_\alpha}=\sup_{\boldsymbol T\in \boldsymbol\delta_3}\|p_{\!_{\mathscr{B}_\alpha}}(\boldsymbol T)\|.$ Corresponding to every matrix $\mathscr{B}_\alpha$ in the class $\mathcal{S},$ we get \begin{eqnarray*} M_{\mathscr{B}_\alpha}&=&\sup_{|z_j|=1}\big|3+2(\Re z_1\overline{z}_2+\alpha\Re z_2\overline{z}_3+\Re z_3\overline{z}_1)\big|\\ &=& \sup_{\theta,\phi\in\mathbb{R}}\big|3+2(\cos\theta + \cos\phi + \alpha \cos(\theta-\phi))\big| \end{eqnarray*} Define the function $h(\theta,\phi)=3+2(\cos\theta + \cos\phi + \alpha \cos(\theta-\phi)).$ Then the critical points of the function $h$ are the solutions of $\sin\theta +\alpha \sin(\theta-\phi)=0$ and $\sin\phi -\alpha \sin(\theta-\phi)=0.$ Therefore the critical points $(\theta_0,\phi_0)$ satisfy $\sin\theta_0 + \sin\phi_0=0.$ Thus, we get that $\theta_0=-\phi_0\mbox{ or }\theta_0=-\phi_0+\pi \mbox{ or }\theta_0=\phi_0+\pi.$ \textbf{Case 1:} If $\theta_0=-\phi_0$ then \begin{eqnarray*} h(\theta_0,\phi_0)&=&3+2(2\cos \theta_0 + \alpha \cos 2\theta_0)\\ &=&4\alpha \cos^2\theta_0 + 4\cos \theta_0 +3-2\alpha\\ &=& g(t),\mbox{ 
say,} \end{eqnarray*} where $t=\cos \theta_0.$ If $\alpha> -1/2$ then there is no critical point of $g$ in $(0,1).$ Hence the maximum is $3-2\alpha$ or $|7+2\alpha|$; in the case when $\alpha\in (-1/2,0)$, the value $7+2\alpha$ is the bigger of the two. If $\alpha\leq -1/2$ then the maximum of $g(t)$ is among $3-2\alpha-1/\alpha$, $|7+2\alpha|$ and $3-2\alpha.$ A straightforward computation shows that $3-2\alpha-1/\alpha$ is bigger than the other two values. Therefore we have the following \[M_{\mathscr{B}_\alpha}= \begin{cases} 7+2\alpha, & \alpha> -\frac{1}{2}\\ 3-2\alpha-1/\alpha, & \alpha\leq -\frac{1}{2}.\\ \end{cases} \] The computations done so far lead to the following expression for the ratio $\mathcal{Q}:=$ $\frac{M_{\mathscr{B}_\alpha}}{\|p_{\!_{\mathscr{B}_\alpha}}\|_{\mathbb{D}^3,\infty}}$ \[\mathcal{Q}= \begin{cases} 1, & \alpha \in (-1/2,0)\\ \frac{3-2\alpha-1/\alpha}{7+2\alpha}, & \alpha \in [-1,-1/2]\\ \frac{3-2\alpha-1/\alpha}{3-2\alpha}, & \alpha \in (-\infty,-1). \end{cases} \] The ratio $\mathcal{Q}$ is increasing on $(-\infty,-1)$, decreasing on $(-1,-1/2)$, and equal to $1$ on $(-1/2,0).$ Thus the maximum of the ratio $\mathcal{Q}$ is attained at $\alpha=-1$, and hence the Varopoulos--Kaijser polynomial is the best in the class we have specified. \textbf{Acknowledgement:} We are very grateful to Prof. Gadadhar Misra and Prof. Gilles Pisier for several fruitful discussions and suggestions. The authors also express their sincere gratitude to Prof. Sameer Chavan and Prof. Parasar Mohanty for their constant support. We also thank the referee and the editor for several constructive suggestions which significantly improved the presentation of the paper. \bibliographystyle{amsalpha}
\section{Introduction} An `inflaton' is a scalar field that can drive a period of acceleration in the early universe. Such a finite period of inflation \cite{Aref1,guth} can solve long-standing problems about the structure of the universe that would otherwise require special initial conditions \cite{planck2013,planck2015}. An inflaton provides a matter source that can display antigravitating behavior, and so it could also be a candidate for the so-called `dark energy' that drives cosmological acceleration today. It is possible that these two eras of cosmological acceleration are connected, but so far there is no compelling theory about how that link might arise between two such widely separated energy scales. Various inflationary self-interaction potentials for the inflaton have been proposed in the literature. Since they lead to different inflationary scenarios, particularly in respect of the density fluctuations produced, they have different observational consequences for the cosmic microwave background radiation, and this permits them to be finely constrained by observational data. Various inflaton potentials in general relativistic scalar field cosmology have been proposed in \cite{ref1a,ref1,ref2,ref3,newinf,ref4,ref5a,ref5,ref6a,ref6,ref7,ref8,ref9,charters}, while for inflationary models in other gravity theories, where there are more possibilities, see \cite{Aref1,Aref2,Aref3,Aref3b,Aref5,Aref6,Aref6b,Aref7a,Aref7,Aref8,Aref9,Aref10,Aref11} and references therein. The construction of the inflaton scalar field potential from observational data is an open problem of special interest. It provides critical information about the details of the allowed inflationary models and might provide clues as to the identity of the inflaton.
In \cite{cp90,cp93,turner1,Adfre,Cos95a,hoi,bpl}, the perturbative reconstruction approach was applied: the inflaton self-interaction potential, $V\left( \phi\right) $, of the scalar field, $\phi$, was reconstructed by considering a series expansion around a point $\phi=\phi_{0}$, where the coefficients of the series expansion for the potential are determined from the observable values of the scalar spectral index and the usual slow-roll parameters; for more details see \cite{lidsey}. Alternative approaches to the reconstruction of the scalar field potential include a stochastic perturbative approach in \cite{easther} and another perturbative approach in \cite{urenalopez}. Two further methods for the reconstruction of the scalar field potential have been proposed in \cite{star} and \cite{Wohns}. Specifically, in the latter work, an exponential of the scalar field's Hubble function was considered and found to offer an efficient way to derive and constrain the power-spectrum observables \cite{Wohns}. By contrast, in \cite{star}, the scalar field potential was reconstructed for the Harrison-Zeldovich spectrum by solving the gravitational field equations along with the equation for the adiabatic scalar perturbations. The slow-roll parameters and their relations to the spectral indices have been reconstructed in closed form \cite{Vallinotto,Chiba,Lin}. This is the approach that we will follow here to find the equation of state for the effective perfect fluid which corresponds to the scalar field with a self-interaction potential. While this approach is not as accurate as the previous approaches (because it depends on approximate relations between the spectral indices and the slow-roll parameters \cite{lidsey}), it can more easily provide closed-form solutions for the inflationary potential and the expansion scale factor.
Furthermore, as we shall see in the first approximations for the models that we study, there exist mappings which transform the models to other equivalent models and their linearised fluctuations to the Harrison-Zeldovich spectrum. The plan of this paper is as follows. In Section \ref{field}, we review scalar field cosmology in a spatially flat Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) universe and introduce the basic quantities and notation. In Section \ref{section3}, we assume that the spectral index for the density perturbations, $n_{s}$, and the tensor-to-scalar ratio, $r$, are related by a function such that $n_{s}-1=h\left( r\right) $. For the defining function, $h\left( r\right) ,$ we assume that it is either constant, linear or quadratic in $r$. Moreover, using the slow-roll expressions for these indices, we find ordinary differential equations whose solutions provide us with the inflationary scalar field potentials and the equation of state for the energy density and the pressure of the scalar field, while the density-perturbation versus tensor-to-scalar-ratio diagrams are presented for the analytical solutions that we derive. In Section \ref{escape}, the values of the free parameters of the models are determined so that a late-time attractor exists and the universe can escape from the inflationary phase. Furthermore, a transformation which relates the different models that we study is presented in Section \ref{section4}. We show that our master equations are all maximally symmetric. This ensures that maps exist which can transform the solution of one inflationary model into another. This can be used to determine new inflationary solutions from known ones. A discussion of the results presented and our conclusions are given in the concluding Section \ref{conc}.
\section{Underlying equations and definitions} \label{field} We take the gravitational field equations to be (with units $8\pi G=c=\hslash=1$) \begin{equation} G_{\mu\nu}=T_{\mu\nu}^{\left( \phi\right) }+T_{\mu\nu}^{\left( m\right) }, \label{in.01} \end{equation} where $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R$ is the Einstein tensor, $T_{\mu\nu}^{\left( \phi\right) }$ is the energy-momentum tensor for the scalar field, \begin{equation} T_{\mu\nu}^{\left( \phi\right) }=\phi_{;\mu}\phi_{;\nu}-g_{\mu\nu}\left( \frac{1}{2}\phi^{;\sigma}\phi_{;\sigma}-V\left( \phi\right) \right) , \label{in.01a} \end{equation} and $T_{\mu\nu}^{\left( m\right) }$ denotes the energy-momentum tensor of the other matter sources. Now, we will assume that the universe contains only the scalar field, so $T_{\mu\nu}^{\left( m\right) }=0$. In addition, we have the propagation equation for the scalar field, $\phi$, from the Bianchi identity $T^{\left( \phi\right) \mu\nu}{}_{;\nu}=0,$ which is \begin{equation} -g^{\mu\nu}\phi_{;\mu\nu}+V_{,\phi}=0. \label{in.02} \end{equation} For a spatially-flat FLRW universe, with scale factor $a\left( t\right) $, the field equations (\ref{in.01})-(\ref{in.02}) are \begin{equation} 3H^{2}=\frac{1}{2}\dot{\phi}^{2}+V(\phi), \label{in.03} \end{equation} \begin{equation} 2\dot{H}+3H^{2}=-\frac{1}{2}\dot{\phi}^{2}+V(\phi), \label{in.04} \end{equation} and \begin{equation} \ddot{\phi}+3H\dot{\phi}+V_{,\phi}=0, \label{in.05} \end{equation} where $H=\frac{\dot{a}}{a}$ is the Hubble function and overdots denote differentiation with respect to comoving proper time, $t$. The comoving observers have $u^{\mu}=\delta_{0}^{\mu}$, so $u^{\mu}u_{\mu}=-1$. The FLRW symmetries ensure $\phi=\phi\left( t\right) $.
From (\ref{in.01a}), we find that the energy density of the scalar field for the comoving observer is \begin{equation} \rho_{\phi}\equiv\frac{1}{2}\dot{\phi}^{2}+V(\phi); \label{in.06} \end{equation} the pressure is $P_{\phi}=w_{\phi}\rho_{\phi}$, where $w_{\phi}$ is the equation of state (EoS) parameter: \begin{equation} w_{\phi}=\frac{\dot{\phi}^{2}-2V(\phi)}{\dot{\phi}^{2}+2V(\phi)}. \label{in.07} \end{equation} The deceleration parameter, $q$, is given by the formula $q=\frac{1}{2}\left( 1+3w_{\phi}\right) $ because, as the only matter source is the scalar field, we have $w_{tot}=w_{\phi}$. The expansion of the universe is accelerated when $q<0$, that is, $w_{\phi}<-\frac{1}{3}$. Since $V\left( \phi\right) >0$, a negative EoS parameter means that the potential dominates the kinetic term, i.e., $\frac{\dot{\phi}^{2}}{2}<V(\phi)$. Furthermore, in the limit $\dot{\phi}\rightarrow0$ expression (\ref{in.07}) gives $w_{\phi}\rightarrow-1$, and the scalar field mimics the cosmological constant. The so-called potential slow-roll parameters (PSR), \begin{equation} \varepsilon_{V}=\left( \frac{V_{,\phi}}{2V}\right) ^{2}~,~\eta_{V}=\frac{V_{,\phi\phi}}{2V}, \label{in.08} \end{equation} have been introduced \cite{slp01} in order to study the existence of the inflationary phase of the universe. Specifically, the condition for an inflationary universe is $\varepsilon_{V}\ll1$, while in order for the inflationary phase to last long enough we require the second PSR parameter also to be small, $\eta_{V}\ll1$. Similarly, the Hubble slow-roll parameters (HSR) have been defined by \cite{slpv,slp} \begin{equation} \varepsilon_{H}=-\frac{d\ln H}{d\ln a}=\left( \frac{H_{,\phi}}{H}\right) ^{2}, \label{in.09} \end{equation} and \begin{equation} \eta_{H}=-\frac{d\ln H_{,\phi}}{d\ln a}=\frac{H_{,\phi\phi}}{H}. \label{in.10} \end{equation} It has been shown that the HSR slow-roll parameters are more accurate descriptors of inflation than the PSR parameters.
However, the PSR and HSR parameters are related and, when $\varepsilon_{H}$ and $\eta_{H}$ are small, these relations become \begin{equation} \varepsilon_{V}\simeq\varepsilon_{H}~\text{and~}\eta_{V}\simeq\varepsilon_{H}+\eta_{H}. \label{in.11} \end{equation} In the following we choose to work with the HSR parameters. \subsection{Analytical solution} Recently, in ref. \cite{dimakis}, it was found that the field equations, (\ref{in.03})-(\ref{in.05}), under the transformation $dt=\exp\left( \frac{F\left( \omega\right) }{2}\right) d\omega$ with $\omega=6\ln a,$ where $a(t)$ is the cosmic scale factor, can be solved for the scalar field, $\phi$, and the potential, $V\left( \phi\right) ,$ by the following formulae\footnote{Here a prime ``$^{\prime}$'' denotes the total derivative with respect to $\omega$.} \begin{equation} \phi(\omega)=\pm\frac{\sqrt{6}}{6}\int\!\!\sqrt{F^{\prime}(\omega)}\,d\omega \label{in.12} \end{equation} and \begin{equation} V(\omega)=\frac{1}{12}e^{-F(\omega)}\left( 1-F^{\prime}(\omega)\right) , \label{in.13} \end{equation} so the line-element for the FLRW spacetime is now \begin{equation} ds^{2}=-e^{F\left( \omega\right) }d\omega^{2}+e^{\omega/3}(dx^{2}+dy^{2}+dz^{2}). \label{in.14} \end{equation} For the latter line element, the Hubble function is defined as $H\left( \omega\right) =\frac{1}{6}e^{-\frac{F}{2}}$, whence, with the use of (\ref{in.12}), it follows that $\frac{dH}{d\phi}=\pm\frac{\sqrt{6}}{12}e^{-\frac{F}{2}}\sqrt{F^{\prime}}\,;$ then expression (\ref{in.13}) reduces to the Hamilton-Jacobi-like equation for $H\left( \phi\right) $, \[ 2\left( \frac{dH\left( \phi\right) }{d\phi}\right) ^{2}-3\left( H\left( \phi\right) \right) ^{2}+V\left( \phi\right) =0, \] which was studied in \cite{mos1,mos2}.
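The statement that (\ref{in.12})--(\ref{in.13}) solve the field equations for an \emph{arbitrary} $F(\omega)$ can be verified symbolically; a minimal sketch with \texttt{sympy}, using $d/dt=e^{-F/2}\,d/d\omega$ and $\dot{\phi}^{2}=F^{\prime}e^{-F}/6$, which follow from the transformation above:

```python
import sympy as sp

w = sp.symbols('omega')
F = sp.Function('F')(w)

H = sp.exp(-F / 2) / 6                          # H(omega) = e^{-F/2}/6
d_dt = lambda X: sp.exp(-F / 2) * X.diff(w)     # dt = e^{F/2} d omega
phidot2 = F.diff(w) * sp.exp(-F) / 6            # phi-dot squared, from eq. (in.12)
V = sp.exp(-F) * (1 - F.diff(w)) / 12           # eq. (in.13)

# residuals of the Friedmann equation (in.03) and the acceleration equation (in.04)
friedmann = sp.simplify(3 * H**2 - phidot2 / 2 - V)
acceleration = sp.simplify(2 * d_dt(H) + 3 * H**2 + phidot2 / 2 - V)
print(friedmann, acceleration)
```

Both residuals simplify to zero identically in $F$, confirming that the transformation solves (\ref{in.03})--(\ref{in.04}) for any choice of $F(\omega)$.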
In the new variables, the effective fluid components for the scalar field are \begin{equation} \rho_{\phi}(\omega)=\frac{1}{12}e^{-F(\omega)}~,~P_{\phi}(\omega)=\frac{1}{12}e^{-F(\omega)}\left( 2F^{\prime}(\omega)-1\right) , \label{in.15} \end{equation} and the effective EoS parameter takes the simple form \begin{equation} w_{\phi}\left( \omega\right) =2F^{\prime}(\omega)-1. \label{in.16} \end{equation} These expressions hold for an arbitrary scalar field potential. The field equations have been reduced to a single first-order differential equation which can be viewed as a form of the equation of state, $P_{\phi}=P_{\phi}\left( \rho_{\phi}\right) $, for the scalar field. This approach was applied in \cite{jdband1} in order to construct inflationary potentials from specific linear and non-linear equations of state. We can use this solution to express the slow-roll parameters, PSR or HSR, in terms of the new variable $\omega\equiv\ln(a^{6})$. The HSR parameters are found to be \cite{jdband1} \begin{equation} \varepsilon_{H}=3F^{\prime}~,~\eta_{H}=3\frac{\left( F^{\prime}\right) ^{2}-F^{\prime\prime}}{F^{\prime}}, \label{in.17} \end{equation} or, equivalently, in terms of the effective EoS parameter, \begin{equation} \varepsilon_{H}=\frac{3}{2}\left( 1+w_{\phi}\left( \omega\right) \right) , \label{in.18} \end{equation} and \begin{equation} \eta_{H}=\frac{3}{2}\frac{\left( w_{\phi}+1\right) ^{2}-2w_{\phi,\omega}}{1+w_{\phi}}. \label{in.19} \end{equation} The number of e-folds is defined to be $N_{e}=\int_{t_{i}}^{t_{f}}H\left( t\right) dt=\ln\frac{a_{f}}{a_{i}}=\frac{1}{6}\left( \omega_{f}-\omega_{i}\right) ,$ which means that $N_{e}$ is linearly related to the function $\omega$. Hence, the slow-roll parameters can be expressed in terms of $N_{e}$. Lastly, using expression (\ref{in.18}), all the slow-roll parameters can be expressed in terms of the parameter $\varepsilon_{H}$ and its derivatives.
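Expressions (\ref{in.18})--(\ref{in.19}) can be recovered from (\ref{in.16})--(\ref{in.17}) by a short symbolic computation; a sketch:

```python
import sympy as sp

w = sp.symbols('omega')
F = sp.Function('F')(w)
Fp = F.diff(w)

eps_H = 3 * Fp                                  # eq. (in.17)
eta_H = 3 * (Fp**2 - F.diff(w, 2)) / Fp         # eq. (in.17)

w_phi = 2 * Fp - 1                              # eq. (in.16)
eps_w = sp.Rational(3, 2) * (1 + w_phi)         # eq. (in.18)
eta_w = sp.Rational(3, 2) * ((w_phi + 1)**2 - 2 * w_phi.diff(w)) / (1 + w_phi)  # eq. (in.19)

print(sp.simplify(eps_H - eps_w), sp.simplify(eta_H - eta_w))
```

Both differences vanish identically, so the two parametrizations of the HSR parameters are equivalent for any $F(\omega)$.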
\section{Reconstruction of the inflationary potential} \label{section3} From the recent data analysis by the Planck 2015 collaboration \cite{planck2015}, the value of the spectral index for the density perturbations is $n_{s}=0.968\pm0.006$, while the range of the running of the spectral index is $n_{s}^{\prime}=-0.003\pm0.007$. The tensor-to-scalar ratio, $r$, has a value smaller than $0.11$, i.e., $r<0.11$. The mathematical expression which relates the HSR parameters to the spectral index $n_{s}$ in the first approximation is \begin{equation} n_{s}\equiv1-4\varepsilon_{H}+2\eta_{H}, \label{in.22} \end{equation} while the tensor-to-scalar ratio is $r=10\varepsilon_{H}$. Moreover, in the second approximation the spectral index, $n_{s}$, becomes \begin{equation} n_{s}\equiv1-4\varepsilon_{H}+2\eta_{H}-8\left( \varepsilon_{H}\right) ^{2}\left( 1+2C\right) +\varepsilon_{H}\eta_{H}\left( 10C+6\right) -2C\xi_{H}, \label{in.24} \end{equation} where $C=\gamma_{E}+\ln2-2=-0.7296$. It then follows that the running index is \begin{equation} n_{s}^{\prime}\equiv2\varepsilon_{H}\eta_{H}-2\xi_{H}. \label{in.23} \end{equation} From the analysis of the previous section, the spectral indices for the FLRW spacetime can be written in terms of $\varepsilon_{H}$ and its derivatives, or in terms of the unknown function, $F\left( \omega\right) $, and its derivatives. Recall that the above expressions for the spectral indices are definitions and not deductions. However, if we assume that the left-hand side satisfies some functional expression, i.e., $n_{s}=h\left( \varepsilon_{H},...\right) $, for a function $h$, then we define a differential equation, which can be used to construct the exact form of the FLRW spacetime (\ref{in.14}), i.e., to determine the $F\left( \omega\right) $ that satisfies the spectral index conditions. Hence, with the use of the solution presented in the previous section, the scalar field potential can also be derived.
In the following, we consider that \begin{equation} n_{s}-1=h\left( r\right) , \label{in.23a} \end{equation} and we work with the expression (\ref{in.22}) in the first-order approximation. Moreover, we assume that we are close to the $n_{s}=1$ spectrum, so that we can treat $h\left( r\right) $ as a small correction term to the spectrum. Hence, the Taylor expansion of the function $h\left( r\right) $ close to a constant value of the scalar ratio, that is, $r=r_{0}$, yields \begin{equation} h\left( r\right) =h\left( r_{0}\right) +h^{\prime}\left( r_{0}\right) \left( r-r_{0}\right) +\frac{h^{\prime\prime}\left( r_{0}\right) }{2!}\left( r-r_{0}\right) ^{2}+... \label{in.23b} \end{equation} For our analysis we select three forms for the function $h(r)$, which include the first three terms of the last Taylor expansion for the function $h\left( r\right) $. Hence, by substituting from (\ref{in.17}) in (\ref{in.23a}), three master equations follow, one for each chosen form of $h\left( r\right) $. \subsection{Constant index: $n_{s}-1=-2n_{0}$} Assume that the spectral index for the density perturbations is constant, with $n_{s}-1=-2n_{0}$, where according to the Planck 2015 data at $1\sigma$, $n_{0}$ should be bounded in the range $0.013\leq n_{0}\leq0.019$. In the case where $n_{0}=0$, i.e., $n_{s}=1$, we have the Harrison--Zeldovich spectrum. These cases were studied before in \cite{Vallinotto,Chiba,Lin}. \subsubsection{Zero $n_{0}$:~Harrison--Zeldovich spectrum} Let $n_{0}=0$, so $n_{s}=1$ and we have the exact Harrison-Zeldovich spectrum. Then, from (\ref{in.22}), it follows that $\eta_{H}=2\varepsilon_{H}$.
Hence, from (\ref{in.17}) the second-order differential equation for $F(\omega)$ is \begin{equation} F^{\prime\prime}+\left( F^{\prime}\right) ^{2}=0, \label{in.25} \end{equation} which has the solution $F\left( \omega\right) =\ln\left( F_{1}\left( \omega-\omega_{0}\right) \right) $, where the effective equation-of-state parameter is now \begin{equation} w_{\phi}\left( \omega\right) =-1+\frac{2}{\omega-\omega_{0}}. \label{in.26} \end{equation} The differential equation (\ref{in.25}) was derived in \cite{jdband1} and it follows from the generalized Chaplygin gas \cite{jdbchg} with $\lambda=2$, that is, for an EoS \begin{equation} p_{\phi}=\gamma\rho_{\phi}^{\lambda}-\rho_{\phi}~\text{with}~\lambda=2, \label{in.27} \end{equation} where $\gamma\varpropto F_{1}$. Therefore, with the use of expressions (\ref{in.12})-(\ref{in.14}), we find that in the proper-time gauge where $N\left( t\right) =1$, the scale factor is that of intermediate inflation (\cite{star,inter,inter2,BLM}), \begin{equation} a\left( t\right) \simeq\exp\left( a_{1}t^{2/3}\right) , \label{in.28a} \end{equation} and the scalar-field potential is \begin{equation} V\left( \phi\right) =\frac{1}{18F_{1}}\left( \phi^{-2}-\frac{2}{3}\phi^{-4}\right) . \label{in.29} \end{equation} Here it is important to mention that the scalar field description of the inflaton is valid only for values of $\phi$ for which the scalar field potential is not negative. Note that the non-essential integration constants have been absorbed, and $\phi$ indicates $\phi-\phi_{0}$, where in (\ref{in.29}), without loss of generality, we have set $\phi_{0}=0$. \subsubsection{Non-zero $n_{0}$} We now assume that $n_{s}-1=-2n_{0}\neq0$.
Then, from (\ref{in.22}), it follows that $\eta_{H}=2\varepsilon_{H}-n_{0}$ and, with the use of (\ref{in.17}), the differential equation for $F(\omega)$ is now \begin{equation} F^{\prime\prime}+\left( F^{\prime}\right) ^{2}-\frac{n_{0}}{3}F^{\prime}=0, \label{in.30} \end{equation} with the closed-form solution \begin{equation} F\left( \omega\right) =\ln\left\{ F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) +F_{0}\right\} . \label{in.31} \end{equation} The latter function has been derived in \cite{jdbchg} as the solution in which the scalar field mimics the generalized Chaplygin gas (or a bulk viscosity) with EoS parameter \begin{equation} p_{\phi}=A\rho_{\phi}^{2}+B\rho_{\phi} \label{in.32} \end{equation} for the specific values of $A,B$ such that $F_{0}=-\frac{A}{1+B}$ and $B=-1+\frac{2}{3}n_{0}$. Furthermore, the effective EoS parameter is calculated to be \begin{equation} w_{\phi}\left( \omega\right) =-1+\frac{2n_{0}}{3}-\frac{2F_{0}n_{0}}{3}\left( F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) +F_{0}\right) ^{-1}, \label{in.33} \end{equation} while the closed-form expression for the scalar field potential is \begin{equation} V\left( \phi\right) =\frac{F_{1}}{9}\frac{e^{\sqrt{\frac{n_{0}}{3}}\phi}}{\left( e^{\sqrt{\frac{n_{0}}{3}}\phi}+F_{0}F_{1}\right) ^{2}}\left( 3-n_{0}\left( \frac{e^{\sqrt{\frac{n_{0}}{3}}\phi}-F_{0}F_{1}}{e^{\sqrt{\frac{n_{0}}{3}}\phi}+F_{0}F_{1}}\right) ^{2}\right) . \label{in.34} \end{equation} The expansion scale factor cannot be written in a closed-form expression in the proper time, $t$. Moreover, for the potential (\ref{in.34}), we have that for large values of $\phi$ the potential becomes exponential, that is, \begin{equation} \lim_{\phi\rightarrow+\infty}V\left( \phi\right) =\frac{\left( 3-n_{0}\right) F_{1}}{9}e^{-\sqrt{\frac{n_{0}}{3}}\phi}, \end{equation} and approximates the solution in which the scalar field mimics a perfect fluid with constant equation of state parameter.
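That the stated closed form indeed solves (\ref{in.30}) is easy to confirm symbolically; a sketch, written for $F(\omega)=\ln\{F_{1}e^{n_{0}\omega/3}+F_{0}\}$ with $F_{0},F_{1},n_{0}>0$ assumed for simplicity:

```python
import sympy as sp

w = sp.symbols('omega', real=True)
n0, F0, F1 = sp.symbols('n_0 F_0 F_1', positive=True)

# candidate closed-form solution of eq. (in.30)
F = sp.log(F1 * sp.exp(n0 * w / 3) + F0)
ode = F.diff(w, 2) + F.diff(w)**2 - n0 * F.diff(w) / 3
print(sp.simplify(ode))
```

The residual simplifies to zero, so the logarithmic expression satisfies the master equation for all values of the integration constants.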
In order to determine the physical properties of the parameter $n_{0}$, but also those of the integration constants $F_{0}$ and $F_{1}$, the indices $n_{s}$ and $r$ are calculated below. \subsubsection{Observational constraints} For the solution (\ref{in.31}), we calculate the slow-roll parameters to be \begin{equation} \varepsilon_{H}=n_{0}\left( 1-\left( 1+\frac{F_{1}}{F_{0}}e^{\frac{n_{0}}{3}\omega}\right) ^{-1}\right) ~,~\eta_{H}=2\varepsilon_{H}-n_{0}, \label{oc.01} \end{equation} which gives $n_{s}-1=-2n_{0}.$ Recall that inflation ends at $\omega_{f},$ where $\varepsilon_{H}\left( \omega_{f}\right) =1$. Hence, we find that \begin{equation} \omega_{f}=\frac{3}{n_{0}}\ln\left[ \frac{F_{0}}{F_{1}\left( n_{0}-1\right) }\right] , \label{oc.02} \end{equation} and \begin{equation} n_{s}\left( n_{0},N_{e}\right) -1=-2n_{0}~,~r\left( n_{0},N_{e}\right) =\frac{10n_{0}}{1+\left( n_{0}-1\right) e^{-2n_{0}N_{e}}}, \label{oc.03} \end{equation} where $N_{e}$ is the number of e-folds; recall that $6N_{e}=\omega_{f}-\omega_{i}.$ From the latter expressions, it follows that while the value of $n_{0}$ fixes the index $n_{s}$, only the tensor-to-scalar ratio $r$ depends on the number of e-folds. Furthermore, the integration constants are non-essential and fix the value of the scale factor at the end of inflation. In Figure \ref{linear01} the $n_{s}-r$ plot is presented for the expressions (\ref{oc.03}), for $n_{0}\in\left( 0,0.02\right) $ and $N_{e}\in\left[ 50,60\right] $. Note that for values of $n_{0}$ for which $n_{s}$ is constrained by the Planck 2015 data, it follows that $r<0.11$ only for very large values of $N_{e}$, while for the number of e-folds that we considered in the figure $r>0.11$.
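The expressions (\ref{oc.03}) are straightforward to evaluate numerically over the Planck-allowed range of $n_{0}$; the sketch below (the grid resolutions are arbitrary choices) reproduces the statement that $r>0.11$ throughout $N_{e}\in[50,60]$:

```python
import numpy as np

def r_of(n0, Ne):
    """Tensor-to-scalar ratio of eq. (oc.03)."""
    return 10.0 * n0 / (1.0 + (n0 - 1.0) * np.exp(-2.0 * n0 * Ne))

n0 = np.linspace(0.013, 0.019, 61)[:, None]   # Planck 2015 1-sigma range for n0
Ne = np.linspace(50.0, 60.0, 41)[None, :]     # number of e-folds in the figure
r = r_of(n0, Ne)                              # broadcast to a (61, 41) grid
print(r.min(), r.max())                       # the whole grid lies above r = 0.11
```

Here `r_of` is just an illustrative helper name; the numerical values (roughly $0.16$--$0.22$ over this grid) all exceed the observational bound $r<0.11$, in agreement with the discussion of Figure \ref{linear01}.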
\begin{figure}[ptb] \includegraphics[height=7cm]{linear01.eps} \caption{Spectral index $n_{s}$ versus tensor-to-scalar ratio $r$, for the scalar field potential in which $n_{s}-1=-2n_{0}.$ The figure is for various values of $n_{0}$ in the range $n_{0}\in\left( 0,0.02\right) $ and for numbers of e-folds $N_{e}\in\left[ 50,60\right] $. The dotted line is for $N_{e}=55$.} \label{linear01} \end{figure} Furthermore, in the case of the Harrison-Zeldovich spectrum, that is, $n_{0}=0$, we calculate $r=\frac{10}{1-2N_{e}}$; hence $r<0.11$ when $N_{e}>50$. As before, the integration constants (now $F_{1}$ and $\omega_{0}$) specify only the value of $\omega_{f}$ at which inflation ends. We calculate that $\omega_{f}=3+\omega_{0}$. \subsection{Linear expression: $n_{s}-1\simeq r$} We continue now by taking the more general ansatz, $n_{s}-1=-2n_{1}\varepsilon_{H}-2n_{0}$; that is, the spectral index $n_{s}$ depends linearly on the tensor-to-scalar ratio, $r$. Recall that $r=10\varepsilon_{H}$; so in the limit in which $n_{1}\rightarrow0$ we recover the situation where $n_{s}-1=\text{const}$. We study two cases: $n_{0}=0$ and $n_{0}\neq0$. \subsubsection{Zero $n_{0}$:} When $n_{0}=0$, so that $n_{s}=1$ and $r=0$, with the use of (\ref{in.17}) the differential equation for $F(\omega)$ is \begin{equation} F^{\prime\prime}+\left( 1-n_{1}\right) \left( F^{\prime}\right) ^{2}=0, \label{in.35} \end{equation} with solution \begin{equation} F\left( \omega\right) =-\frac{1}{n_{1}-1}\ln\left( F_{1}\left( \omega-\omega_{0}\right) \right) ~,~n_{1}\neq1, \label{in.36} \end{equation} or \begin{equation} F\left( \omega\right) =F_{1}\left( \omega-\omega_{0}\right) ,~n_{1}=1, \label{in.37} \end{equation} where $F_{1}$ is a constant. Of course, one should be careful because we have assumed that $n_{s}$ is given in terms of the first approximation, i.e., $\left( \varepsilon_{H}\right) ^{2}\simeq0,$ and second-order approximations may need to be considered. For simplicity, we continue with the first-order approximations.
The ansatz is stronger if $n_{1}$ is of order $O\left( \varepsilon_{H}\right) ^{-1}$. Obviously, for $n_{1}=0$, eqn. (\ref{in.25}) is recovered. Eqn. (\ref{in.36}) corresponds to the solution of the generalized Chaplygin gas, (\ref{in.27}), with $\lambda=2-n_{1}.$ The scalar-field potential is given by the expression \cite{jdbchg} \begin{equation} V\left( \phi\right) \varpropto\phi^{-\frac{2}{1-\lambda}}\left( 1-\frac{2}{3\left( 1-\lambda^{2}\right) }\phi^{-2}\right) \label{in.38} \end{equation} and the scale factor is that of intermediate inflation, $a\left( t\right) \simeq\exp\left( a_{1}t^{n}\right) $ for $n\neq\frac{3}{2},$ and $a\left( t\right) \simeq\exp\left( a_{1}e^{\bar{\gamma}t}\right) $ for $n=\frac{3}{2}$. Moreover, the effective EoS is derived to be \begin{equation} w_{\phi}\left( \omega\right) =-1+\frac{2}{\left( 1-n_{1}\right) }\left( \omega-\omega_{0}\right) ^{-1}. \label{in.39} \end{equation} In the limit where $n_{1}=1,$ from solution (\ref{in.37}) we calculate $w_{\phi}\left( \omega\right) =-1+2F_{1}$, which is a particular solution of the exponential potential $V\left( \phi\right) =\frac{\left( 1-F_{1}\right) }{12}e^{-\sqrt{6F_{1}}\phi}$, and there is a power-law scale factor $a\left( t\right) \varpropto t^{\frac{1}{3F_{1}}}$. \subsubsection{Non-zero $n_{0}$:} Now we assume that $n_{0}\neq0$. The unknown function, $F\left( \omega\right) $, which provides the solution for the spacetime metric, satisfies the second-order nonlinear differential equation \begin{equation} F^{\prime\prime}+\left( 1-n_{1}\right) \left( F^{\prime}\right) ^{2}-\frac{n_{0}}{3}F^{\prime}=0 \label{in.40} \end{equation} with closed-form solution \begin{equation} F\left( \omega\right) =-\frac{1}{n_{1}-1}\ln\left( F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) +F_{0}\right) ,~n_{1}\neq1, \label{in.41} \end{equation} or \begin{equation} F\left( \omega\right) =F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) +F_{0}~,~n_{1}=1.
\label{in.42} \end{equation} As in the case of $n_{0}=0$, when $n_{1}\neq1$ the solution generalizes that of (\ref{in.31}) and the scalar field now satisfies the equation of state of the generalized Chaplygin gas, namely \begin{equation} p_{\phi}=A\rho_{\phi}^{\lambda}+B\rho_{\phi}, \label{in.42a} \end{equation} where, in contrast to (\ref{in.32}) where $\lambda=2$, we now have $\lambda=2-n_{1}$. Again, the scalar-field potential is given in terms of the hyperbolic functions as \cite{jdband1} \begin{equation} V\left( \phi\right) =\frac{1}{36}\left( \frac{e^{-\sqrt{\frac{n_{0}\left( \lambda-1\right) }{3}}\phi}}{4F_{1}}\left( 1+e^{\sqrt{\frac{n_{0}\left( \lambda-1\right) }{3}}\phi}\right) ^{2}\right) ^{\frac{1}{1-\lambda}}\left( 3-\frac{n_{0}}{\lambda-1}\left( \frac{e^{\sqrt{\frac{n_{0}\left( \lambda-1\right) }{3}}\phi}-F_{0}F_{1}}{e^{\sqrt{\frac{n_{0}\left( \lambda-1\right) }{3}}\phi}+F_{0}F_{1}}\right) ^{2}\right) . \label{in.43} \end{equation} Alternatively, for $n_{1}=1,$ the scalar-field potential is \begin{equation} V\left( \phi\right) =\frac{1}{72}\exp\left( -\frac{n_{0}}{2}\phi^{2}-F_{0}\right) \left( 6-\left( n_{0}\phi\right) ^{2}\right) . \label{in.44} \end{equation} The scale factor, $a\left( t\right) ,$ cannot be written as a closed-form expression in either case. However, for the effective EoS parameter we have \begin{equation} w_{\phi}\left( \omega\right) =-1+\frac{2}{3}n_{0}F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) ~,~n_{1}=1 \label{in.45} \end{equation} and \begin{equation} w_{\phi}=-1+\frac{2}{3}\frac{n_{0}F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) }{1-n_{1}}\left( F_{1}\exp\left( \frac{n_{0}}{3}\omega\right) +F_{0}\right) ^{-1}~,~n_{1}\neq1. \label{in.46} \end{equation} So far, the generalized Chaplygin gas which leads to intermediate inflation, and another generalization of the Chaplygin gas which was studied in \cite{jdband1}, have been recovered. For these two inflationary models the scalar-field potentials have similar forms.
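Both branches of the closed-form solution can be checked symbolically against the master equation (\ref{in.40}); the sketch below (our notation) substitutes (\ref{in.41}) and (\ref{in.42}) into $F^{\prime\prime}+(1-n_{1})(F^{\prime})^{2}-\frac{n_{0}}{3}F^{\prime}=0$, noting that for $n_{1}=1$ the $(F^{\prime})^{2}$ term drops out:

```python
import sympy as sp

w, F0, F1, n0, n1 = sp.symbols('omega F_0 F_1 n_0 n_1', positive=True)

# Branch n_1 != 1: eqn (in.41)
F_a = -sp.log(F1 * sp.exp(n0 * w / 3) + F0) / (n1 - 1)
res_a = sp.diff(F_a, w, 2) + (1 - n1) * sp.diff(F_a, w)**2 - n0 * sp.diff(F_a, w) / 3

# Branch n_1 = 1: eqn (in.42); the (F')^2 term of (in.40) drops out
F_b = F1 * sp.exp(n0 * w / 3) + F0
res_b = sp.diff(F_b, w, 2) - n0 * sp.diff(F_b, w) / 3

print(sp.simplify(res_a), sp.simplify(res_b))  # -> 0 0
```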
For one model the potential, $V\left( \phi\right) $, is given by a polynomial function of $\phi$, while for the second model it is given as a function of the hyperbolic trigonometric functions. \subsubsection{Observational constraints} The slow-roll parameters for the solution (\ref{in.36}) are calculated to be \begin{equation} \varepsilon_{H}=\frac{3}{1-n_{1}}\left( \omega-\omega_{0}\right) ^{-1}~,~\eta_{H}=\left( n_{1}-2\right) \varepsilon_{H}. \label{oc.04} \end{equation} Hence, $\omega_{f}=\frac{3}{1-n_{1}}+\omega_{0},$ from which we find \begin{equation} n_{s}\left( n_{1},N_{e}\right) =1+\frac{2n_{1}}{2\left( n_{1}-1\right) N_{e}-1}~,~r\left( n_{1},N_{e}\right) =\frac{10}{1-2\left( n_{1}-1\right) N_{e}}. \label{oc.05} \end{equation} As before, the constants of integration have no effect on the inflationary parameters. Furthermore, we see that we have $n_{s}\left( N_{e}\right) -1<0$, so necessarily $n_{1}>0$ and $n_{1}<1+\frac{1}{2N_{e}}$, while the latter also ensures that $r\left( N_{e}\right) >0.$ The case in which $n_{1}=1$ corresponds to the exponential potential and gives constant slow-roll parameters. The $n_{s}-r$ diagram for the expressions (\ref{oc.05}) is given in Figure \ref{fig02}, for $N_{e}\in\left[ 50,60\right] $ and the free parameter $n_{1},$ with $n_{1}\in\left( 0.01,0.65\right) $. \begin{figure}[ptb] \includegraphics[height=7cm]{linear02.eps} \caption{Spectral index $n_{s}$ versus tensor-to-scalar ratio $r$, for the scalar-field potential in which $n_{s}-1=-2n_{1}\varepsilon_{H}.$ The figure is for various values of $n_{1}$ in the range $n_{1}\in\left( 0.01,0.65\right) $ and for number of e-folds $N_{e}\in\left[ 50,60\right] $. The dotted line is for $N_{e}=55$.
\label{fig02}} \end{figure} For $n_{0}\neq0$, from (\ref{in.41}) it follows that \begin{equation} \varepsilon_{H}=\frac{n_{0}}{n_{1}-1}\left( 1+\frac{F_{0}}{F_{1}}e^{-\frac{n_{0}}{3}\omega}\right) ^{-1}~,~\eta_{H}=\frac{n_{0}}{n_{1}-1}\left( \frac{F_{1}e^{\frac{n_{0}}{3}\omega}+F_{0}\left( n_{1}-1\right) }{F_{1}e^{\frac{n_{0}}{3}\omega}+F_{0}}\right) , \label{oc.06} \end{equation} which shows that inflation ends when \begin{equation} \omega_{f}=\frac{3}{n_{0}}\ln\left( \frac{F_{0}}{F_{1}}\frac{\left( 1-n_{1}\right) }{n_{0}-\left( 1-n_{1}\right) }\right) , \label{oc.07} \end{equation} from which we find \begin{equation} n_{s}\left( n_{0},n_{1},N_{e}\right) =1-\frac{2n_{0}\left( 1+\left( n_{0}+n_{1}-1\right) e^{-2n_{0}N_{e}}\right) }{\left( 1-n_{1}\right) +\left( n_{0}+n_{1}-1\right) e^{-2n_{0}N_{e}}}~, \label{oc.08} \end{equation} \begin{equation} r\left( n_{0},n_{1},N_{e}\right) =\frac{10n_{0}}{\left( 1-n_{1}\right) +\left( n_{0}+n_{1}-1\right) e^{-2n_{0}N_{e}}}. \label{oc.09} \end{equation} The $n_{s}-r$ diagram for the parameters (\ref{oc.08}), (\ref{oc.09}) is given in Figures \ref{fig03} and \ref{fig04}, for the number of e-folds $N_{e}=55$ and for various values of the free parameters $n_{0}$ and $n_{1}$. Figure \ref{fig03} is for $n_{0}\in\left[ 0.001,0.01\right] $ and $n_{1}\in\left[ 0.001,0.5\right] $, while Figure \ref{fig04} is for $n_{0}\in\left[ 0.01,0.03\right] $ and $n_{1}\in\left[ -0.5,-0.001\right] $. \begin{figure}[ptb] \includegraphics[height=7cm]{linear03a.eps} \caption{Spectral index $n_{s}$ versus tensor-to-scalar ratio $r$, for the scalar-field potential in which $n_{s}-1=-2n_{1}\varepsilon_{H}-2n_{0}.$ The figure is for various values of the parameters $n_{0}$ and $n_{1}$, in the range $n_{0}\in\left[ 0.001,0.01\right] $, $n_{1}\in\left[ 0.001,0.5\right] $ and for number of e-folds $N_{e}=55$. The dotted line is for $n_{0}=0.005$.
\label{fig03}} \end{figure} \begin{figure}[ptb] \includegraphics[height=7cm]{linear03b.eps} \caption{Spectral index $n_{s}$ versus tensor-to-scalar ratio $r$, for the scalar-field potential in which $n_{s}-1=-2n_{1}\varepsilon_{H}-2n_{0}.$ The figure is for various values of the parameters $n_{0}$ and $n_{1}$, in the range $n_{0}\in\left[ 0.01,0.03\right] $, $n_{1}\in\left[ -0.5,0.001\right] $ and for number of e-folds $N_{e}=55$. The dotted line is for $n_{0}=0.02$. \label{fig04}} \end{figure} We continue our analysis with a more general case in which the relation between $n_{s}$ and $r$ is parabolic. \subsection{Parabolic: $n_{s}-1\simeq r^{2}$} Consider now the case where the relation $n_{s}-1=h\left( \varepsilon_{H}\right) $ describes a parabola such that \begin{equation} n_{s}-1=2n_{2}\left( \varepsilon_{H}\right) ^{2}-2n_{1}\varepsilon_{H}-2n_{0}. \label{in.47} \end{equation} From the constraint equation above, the nonlinear differential equation for $F\left( \omega\right) $ is \begin{equation} F^{\prime\prime}+3n_{2}\left( F^{\prime}\right) ^{3}+\left( 1-n_{1}\right) \left( F^{\prime}\right) ^{2}-\frac{n_{0}}{3}F^{\prime}=0, \label{in.48} \end{equation} which can be written as a first-order ordinary differential equation in terms of the effective EoS parameter or in terms of the HSR parameter, $\varepsilon_{H}$, as \begin{equation} 3\varepsilon_{H}^{\prime}+n_{2}\left( \varepsilon_{H}\right) ^{3}+\left( 1-n_{1}\right) \left( \varepsilon_{H}\right) ^{2}-n_{0}\varepsilon_{H}=0. \label{in.49} \end{equation} As in the linear case, for completeness, one has to consider the second-order approximation in the definition of $n_{s}\left( \varepsilon_{H},\eta_{H}\right) $. However, we continue just with the first approximation here. The ansatz is consistent if $n_{2}$ is of the order $n_{2}\simeq\left( \varepsilon_{H}\right) ^{-2}.$ The general solution of eqn.
(\ref{in.49}) is \begin{equation} \frac{3}{2n_{0}}\ln\left( n_{2}+\frac{\left( 1-n_{1}\right) }{\varepsilon_{H}}-\frac{n_{0}}{\left( \varepsilon_{H}\right) ^{2}}\right) -\frac{3\left( 1-n_{1}\right) \arctan\left( \frac{2n_{2}\varepsilon_{H}+\left( 1-n_{1}\right) }{\sqrt{4n_{0}n_{2}+\left( 1-n_{1}\right) ^{2}}}\right) }{n_{0}\sqrt{4n_{0}n_{2}+\left( 1-n_{1}\right) ^{2}}}=\left( \omega-\omega_{0}\right) , \label{in.50} \end{equation} which for some specific values of the free parameters can be written in closed form. In the special case of $n_{1}=1$ we find that \begin{equation} \left( \varepsilon_{H}\right) ^{2}=\frac{n_{0}}{c_{1}e^{-\frac{2}{3}n_{0}\omega}+n_{2}}~,~~n_{0}\neq0, \label{in.51} \end{equation} and \begin{equation} \left( \varepsilon_{H}\right) ^{2}=\frac{3}{2n_{2}\omega+c_{1}}~,~n_{0}=0. \label{in.52} \end{equation} Hence, for the function $F\left( \omega\right) $ defining the metric, we have \begin{equation} F\left( \omega\right) =\pm\frac{\text{\textrm{arctanh}}\left( \sqrt{1+\frac{c_{1}}{n_{2}}}e^{-\frac{2}{3}n_{0}\omega}\right) }{\sqrt{n_{0}n_{2}}}~,~~n_{0}\neq0, \label{in.53} \end{equation} and \begin{equation} F\left( \omega\right) =\pm\sqrt{3}\sqrt{\frac{2}{n_{2}}\omega+c_{1}}~,~n_{0}=0. \label{in.54} \end{equation} Therefore, from (\ref{in.53}), the potential is found to be \begin{equation} V\left( \phi\right) \varpropto\frac{1}{12}\exp\left( -V_{1}\phi^{2/3}\right) \left( 1+V_{2}\phi^{-\frac{2}{3}}\right) , \label{in.55} \end{equation} where $V_{1,2}=V_{1,2}\left( n_{2}\right) $ are constants, and it can be seen that it has the form of the potential in (\ref{in.44}). For small values of $\left\vert \phi\right\vert $, the potential (\ref{in.55}) becomes the power-law potential $V\left( \phi\right) \simeq\phi^{-\frac{2}{3}}$, which means that finite-time singularities of the 'generalized sudden' type can follow \cite{graham}.
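The special solutions (\ref{in.51}) and (\ref{in.52}) are easy to verify: writing $y=\left( \varepsilon_{H}\right) ^{2}$ turns (\ref{in.49}) with $n_{1}=1$ into the first-order equation $3y^{\prime}=2n_{0}y-2n_{2}y^{2}$, which both expressions satisfy. A sympy sketch of this check (our notation):

```python
import sympy as sp

w, c1, n0, n2 = sp.symbols('omega c_1 n_0 n_2', positive=True)

# With n_1 = 1, y = eps_H^2 obeys 3 y' - 2 n_0 y + 2 n_2 y^2 = 0
def residual(y, n0_val):
    return 3 * sp.diff(y, w) - 2 * n0_val * y + 2 * n2 * y**2

y51 = n0 / (c1 * sp.exp(-sp.Rational(2, 3) * n0 * w) + n2)  # eqn (in.51), n_0 != 0
y52 = 3 / (2 * n2 * w + c1)                                  # eqn (in.52), n_0 = 0

print(sp.simplify(residual(y51, n0)), sp.simplify(residual(y52, 0)))  # -> 0 0
```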
Moreover, for the EoS of the scalar field it follows that the effective equation of state is \begin{equation} p_{\phi}=\left( \frac{6\rho_{\phi}}{n_{2}\ln\left( 12\rho_{\phi}\right) }-\rho_{\phi}\right) ~,~n_{0}=0. \label{in.56} \end{equation} On the other hand, for $n_{0}\neq0$ we find that the EoS is \begin{equation} p_{\phi}=2\sqrt{\frac{n_{0}}{n_{2}}}\left( \frac{16n_{0}n_{2}\rho_{\phi}^{2}+1}{16n_{0}n_{2}\rho_{\phi}^{2}-1}\right) -\rho_{\phi}~,~n_{0}\neq0. \label{in.57} \end{equation} The scalar-field potential for $n_{0}\neq0$ cannot be written in closed form. However, in terms of $\omega$ it is \begin{equation} V\left( \omega\right) =e^{-F\left( \omega\right) }\left( 1\pm\sqrt{\frac{n_{0}}{n_{2}}\left( 1+\frac{c_{1}}{n_{2}}e^{-\frac{2}{3}n_{0}\omega}\right) ^{-1}}\right) . \label{in.58} \end{equation} \subsubsection{Observational constraints} For the solution (\ref{in.54}), in which $n_{0}=0$ and $n_{1}=1$, we find that inflation ends when $\omega_{f}=\frac{27-c_{1}\left( n_{2}\right) ^{2}}{2n_{2}}$, and the parameters $n_{s}$ and $r$ are given in terms of the number of e-folds by \begin{equation} n_{s}-1=\frac{2n_{2}-6\sqrt{9+4n_{2}N_{e}}}{9+4n_{2}N_{e}}~,~r=\frac{30}{\sqrt{9+4n_{2}N_{e}}}. \label{oc.10} \end{equation} In Fig. \ref{fig05} the $n_{s}-r$ diagram is given for the parameters (\ref{oc.10}) with $n_{2}\in\left( 10^{2},10^{3}\right) $ and $N_{e}\in\left[ 50,60\right] $. Note that in order for this case to differ from the linear one we have assumed that $n_{2}$ has a large value of order $\left( \varepsilon_{H}\right) ^{-1}$. \begin{figure}[ptb] \includegraphics[height=7cm]{parabola1.eps} \caption{Spectral index $n_{s}$ versus tensor-to-scalar ratio $r$, for the scalar-field potential in which $n_{s}-1=2n_{2}\left( \varepsilon_{H}\right) ^{2}-2\varepsilon_{H}.$ The figure is for various values of the parameter $n_{2}\in\left( 10^{2},10^{3}\right) $ and number of e-folds $N_{e}\in\left[ 50,60\right] $. The dotted line is for $n_{2}=2\times10^{2}$.
\label{fig05}} \end{figure} Similarly, for the solution in which $n_{0}\neq0$ but $n_{1}=1$, that is, for the expression (\ref{in.53}), we omit the derivation of the parameters $n_{s}-r$. However, in Fig. \ref{fig06} the $n_{s}-r$ diagram is presented for a number of e-folds given by $N_{e}=55$ and $n_{0}\in\left[ 10^{-4},0.3\right] $ and $n_{2}\in\left[ 2\times10^{2},10^{3}\right] $. \begin{figure}[ptb] \includegraphics[height=7cm]{parabola2.eps} \caption{Spectral index $n_{s}$ versus tensor-to-scalar ratio $r$, for the scalar-field potential in which $n_{s}-1=2n_{2}\left( \varepsilon_{H}\right) ^{2}-2\varepsilon_{H}-2n_{0}.$ The figure is for various values of the parameter $n_{2}\in\left[ 2\times10^{2},10^{3}\right] $ and $n_{0}\in\left[ 10^{-4},0.3\right] $, while for the number of e-folds we selected $N_{e}=55$. The dotted line is for $n_{2}=5\times10^{2}$. \label{fig06}} \end{figure} \section{Conditions to escape from Inflation} \label{escape} It is an open question as to which values of the free parameters of our models determine when inflation ends.
In order to answer this, we consider the master equation (\ref{in.48}) and specifically we choose to rewrite it in terms of the HSR parameter $\varepsilon_{H}\left( \omega\right) $ as \begin{equation} 3\varepsilon_{H}^{\prime}=\left( n_{0}+\left( n_{1}-1\right) \varepsilon_{H}-n_{2}\varepsilon_{H}^{2}\right) \varepsilon_{H}. \label{sc1} \end{equation} This equation has the following critical points: \begin{equation} \varepsilon_{H}^{\left( 0\right) }=0~,~\varepsilon_{H}^{\left( \pm\right) }=\frac{n_{1}-1\pm\sqrt{\left( 1-n_{1}\right) ^{2}+4n_{0}n_{2}}}{2n_{2}}~,~\text{for}~n_{2}\neq0, \label{sc2} \end{equation} or \begin{equation} \varepsilon_{H}^{\left( 0\right) }=0~,~\varepsilon_{H}^{\left( 1\right) }=\frac{n_{0}}{1-n_{1}}~,~\text{when}~n_{2}=0~\text{and }n_{1}\neq1. \label{sc3} \end{equation} Hence, in order for inflation to end in the cosmological models that we studied, the free parameters of the models have to be constrained so that one of the critical points, $\varepsilon_{H}^{\left( \pm\right) }$ or $\varepsilon_{H}^{\left( 1\right) }$, is an attractor, and also that $\varepsilon_{H}^{\left( \pm\right) }\geq1$ or $\varepsilon_{H}^{\left( 1\right) }\geq1$. We note that the point $\varepsilon_{H}^{\left( 0\right) }$ describes a de Sitter universe (that is, $w_{\phi}=-1$), while for the other critical points the equation of state parameter, $w_{\phi}$, is constant. Therefore, from the previous analysis we see that at the critical points the scalar-field potential is described by the exponential function. We proceed by considering the cases (a) $n_{2}=0$ and (b) $n_{2}\neq0$, where the number of critical points differs. \subsection{Subcase $n_{2}=0$} Let us assume the simple case which corresponds to the master equation (\ref{in.40}); that is, $n_{2}=0$ and $n_{1}\neq1$. In this case, the critical points of the system are the points $\varepsilon_{H}^{\left( 0\right) }$ and $\varepsilon_{H}^{\left( 1\right) }$ of (\ref{sc3}).
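The critical points (\ref{sc2}) can be confirmed by substituting them back into the right-hand side of (\ref{sc1}); the following sympy sketch (our symbol names) does exactly that:

```python
import sympy as sp

eps, n0, n1, n2 = sp.symbols('varepsilon n_0 n_1 n_2')

# Right-hand side of eqn (sc1), up to the overall factor of 3
rhs = (n0 + (n1 - 1) * eps - n2 * eps**2) * eps

# Candidate critical points (sc2); note (1 - n_1)^2 = (n_1 - 1)^2
disc = sp.sqrt((1 - n1)**2 + 4 * n0 * n2)
points = [0, (n1 - 1 + disc) / (2 * n2), (n1 - 1 - disc) / (2 * n2)]

residuals = [sp.simplify(rhs.subs(eps, p)) for p in points]
print(residuals)  # -> [0, 0, 0]
```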
As far as the stability of these points is concerned, we find that $\varepsilon_{H}^{\left( 1\right) }$ is the unique attractor of the equation when $n_{0}>0$, and $\varepsilon_{H}^{\left( 1\right) }$ describes a point without acceleration when $n_{1}<1$ and $n_{0}>1-n_{1}$. On the other hand, when $n_{0}<0$, the unique attractor of the system is the de Sitter point $\varepsilon_{H}^{\left( 0\right) },$ although in this case the model does not provide an exit from inflation. \subsection{Subcase $n_{2}\neq0$} For $n_{2}\neq0,$ a necessary condition for an exit from inflation to occur is that the critical points $\varepsilon_{H}^{\left( \pm\right) }$ are real; that is, $4n_{0}n_{2}\geq-\frac{\left( 1-n_{1}\right) ^{2}}{4}$. In the special limit in which $n_{0}=0$, the points $\varepsilon_{H}^{\left( \pm\right) }$ reduce to $\varepsilon_{H}^{\left( 0\right) }$ and $\varepsilon_{H}^{\left( 2\right) }=\frac{n_{1}-1}{n_{2}}$. In that case, the two points are stable when $n_{2}>0$, and $\varepsilon_{H}^{\left( 2\right) }$ is positive for any value of $n_{1}>1$. In the general scenario with $n_{0}\neq0,$ it follows easily that in order for $\varepsilon_{H}^{\left( 0\right) }$ to be an elliptic point we require $n_{0}>0$. Moreover, by imposing the condition $\varepsilon_{H}^{\left( \pm\right) }>1,$ we find that only the point $\varepsilon_{H}^{\left( +\right) }$ can be an attractor outside the inflationary era, and this is possible only when the free parameters satisfy the conditions \begin{equation} (i)~n_{2}<0,~n_{1}<1+2n_{2},~n_{0}>1-n_{1}+n_{2}\text{ \ and \ }4n_{0}n_{2}\geq-\frac{\left( 1-n_{1}\right) ^{2}}{4}, \end{equation} or \begin{equation} (ii)~n_{2}>0,~n_{1}>1+n_{2}~\text{\ and \ }n_{0}>1-n_{1}+n_{2}, \end{equation} or \begin{equation} (iii)~n_{2}>0,~n_{1}\leq1+n_{2}~\text{\ and \ }n_{0}>0\,. \end{equation} Hence, for values of the free parameters in those ranges only the third model, i.e.
where $h\left( r\right) $ is a quadratic function, admits an attractor outside the inflationary era. In Fig. \ref{figww} the qualitative evolution of the equation of state parameter $w\left( a\right) $, given by the solution of equation (\ref{sc1}), is presented for various values of the free parameters. \begin{figure}[ptb] \includegraphics[height=6cm]{n2zero.eps} \includegraphics[height=6cm]{n2pos.eps} \includegraphics[height=6cm]{n2neg.eps} \includegraphics[height=6cm]{n1one.eps} \caption{Qualitative evolution of the equation of state parameter $w\left( a\right) $, given by the solution of equation (\ref{sc1}). The solid lines are for the initial condition $\varepsilon_{H}\left( a_{0}\right) =0.01$, while the dashed lines are for the initial condition $\varepsilon_{H}\left( a_{0}\right) =2$. As far as the parameters $\mathbf{n}\left( X\right) =\left( n_{0},n_{1}\right) $ are concerned, we have $\mathbf{n}\left( A\right) =\left( 0.3,0.5\right) ,~\mathbf{n}\left( B\right) =\left( 0.5,0.5\right) $, $\mathbf{n}\left( C\right) =\left( 1,0.5\right) ,~\mathbf{n}\left( \alpha\right) =\left( -0.1,0.5\right) $, $\mathbf{n}\left( \beta\right) =\left( -0.2,0.5\right) $ and $\mathbf{n}\left( c\right) =\left( -0.5,0.5\right) $. The upper-left figure is for $n_{2}=0$, the upper-right figure is for $n_{2}=0.2$, and the lower-left figure is for $n_{2}=-0.2.$ Furthermore, the free parameters on the lower-right figure are $\mathbf{n}^{\prime}\left( X\right) =\left( n_{0},n_{2}\right) ,$ such that $\mathbf{n}\left( A\right) =\left( 0.3,0.3\right) ,~\mathbf{n}\left( B\right) =\left( 0.5,0.3\right) $, $\mathbf{n}\left( C\right) =\left( 1,0.3\right) ,~\mathbf{n}\left( \alpha\right) =\left( -0.1,0.3\right) $, $\mathbf{n}\left( \beta\right) =\left( -0.2,0.3\right) $ and $\mathbf{n}\left( c\right) =\left( -0.5,0.3\right) $, while $n_{1}=1$. The values of the free parameters have been chosen so as to cover the stability analysis of equation (\ref{sc1}).
\label{figww}} \end{figure} \section{Equivalent transformations} \label{section4} It is interesting that, when we set $n_{s}-1=0$, the scalar field mimics the generalized Chaplygin gas (\ref{in.27}) with $\lambda=2$. Yet, when we assumed that $\lambda\neq2$ in the equation of state of the generalized Chaplygin gas, we found that $n_{s}-1=-2n_{1}\varepsilon_{H}$, where $\lambda=2-n_{1}$. These two models are the solutions of the two different master equations, (\ref{in.25}) and (\ref{in.35}), respectively. Although these two equations are different for $n_{1}\neq0$, we observe that there exists a transformation $F\left( \omega\right) \rightarrow\bar{F}\left( \omega\right) $ allowing equation (\ref{in.25}) to be written in the form of (\ref{in.35}) and vice versa. Suppose that $n_{1}\neq0,1$; then, if in (\ref{in.25}) we substitute \begin{equation} F\left( \omega\right) \rightarrow\left( 1-n_{1}\right) \bar{F}\left( \omega\right) , \label{in.59} \end{equation} equation (\ref{in.25}) becomes \begin{equation} \bar{F}^{\prime\prime}+\left( 1-n_{1}\right) \left( \bar{F}^{\prime}\right) ^{2}=0, \label{in.60} \end{equation} which is just equation (\ref{in.30}). The transformation alters the line element of the FLRW spacetime (\ref{in.14}) to \begin{equation} ds^{2}=-\left( e^{-\bar{F}\left( \omega\right) }\right) ^{\left( 1-n_{1}\right) }d\omega^{2}+e^{\omega/3}(dx^{2}+dy^{2}+dz^{2}). \label{in.61} \end{equation} A similar observation holds for the master equations (\ref{in.30}) and (\ref{in.40}). Under the transformation (\ref{in.59}), these two equations are related, so a known solution for the model with EoS (\ref{in.42a}) for a specific $\lambda$ can be used to construct a solution for another cosmological model with a similar EoS parameter but with some other constant $\lambda$.
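The substitution (\ref{in.59}) can be checked mechanically. Equation (\ref{in.25}) is not reproduced in this excerpt; here we take it to be $F^{\prime\prime}+(F^{\prime})^{2}=0$, as implied by the statement that (\ref{in.35}) reduces to it when $n_{1}=0$. Under that assumption, a sympy sketch of the substitution reads:

```python
import sympy as sp

w, n1 = sp.symbols('omega n_1')
Fbar = sp.Function('Fbar')(w)

# Substitute F = (1 - n_1) Fbar, eqn (in.59), into F'' + (F')^2 = 0
F = (1 - n1) * Fbar
lhs = sp.diff(F, w, 2) + sp.diff(F, w)**2

# After dividing by (1 - n_1), this should match Fbar'' + (1 - n_1)(Fbar')^2 = 0  [eqn (in.60)]
target = sp.diff(Fbar, w, 2) + (1 - n1) * sp.diff(Fbar, w)**2
diff_check = sp.simplify(lhs / (1 - n1) - target)
print(diff_check)  # -> 0
```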
For completeness, note that in the case of $n_{1}=1$, the transformation which relates the different sets of equations is not that of (\ref{in.59}) but $F\left( \omega\right) =\ln\left( \bar{F}\left( \omega\right) \right) $. On the other hand, it is important to mention that equation (\ref{in.35}) can be written in the form of (\ref{in.25}) under the simple change of variable $\omega=\frac{3}{n_{0}}\ln\left( \bar{\omega}\right) $. The same transformation can be applied to the master equation (\ref{in.40}), which is transformed into equation (\ref{in.35}). Moreover, if we also apply the transformation (\ref{in.59}) to (\ref{in.40}), then the latter takes the form of the master equation (\ref{in.25}). These two transformations modify the line element of the FLRW spacetime (\ref{in.14}) to \begin{equation} d\bar{s}^{2}=-\frac{9}{\left( n_{0}\right) ^{2}\omega^{2}}\left( e^{-\bar{F}\left( \bar{\omega}\right) }\right) ^{\left( 1-n_{1}\right) }d\bar{\omega}^{2}+\left( \bar{\omega}\right) ^{1/n_{0}}(dx^{2}+dy^{2}+dz^{2}). \label{in.62} \end{equation} Moreover, in the limit for which $n_{1}=1$ in (\ref{in.35}), the latter becomes the equation of a free particle, while the resulting scalar-field theory is that of the exponential potential where the scalar field has a constant EoS parameter. The existence of transformations of this kind, which transform one model into another, is not a coincidence. The master equations (\ref{in.25}), (\ref{in.30}), (\ref{in.35}) and (\ref{in.40}) are maximally symmetric. In particular, they are invariant under the action of one-parameter point transformations (Lie point symmetries) which form the $SL\left( 3,R\right) $ Lie algebra.\footnote{According to Lie's Theorem, any second-order equation which admits the elements of the $SL\left( 3,R\right) $ algebra as symmetries is equivalent to the equation of a free particle and all the maximally symmetric equations commute \cite{sLie}.
The map is the one which transforms the admitted $SL\left( 3,R\right) $ Lie algebra among the different representations of the admitted equations; for more details see \cite{lie1}.} Consider now the classical Newtonian analogue of a free particle and an observer whose measuring instruments for time and distance are not linear. By using the measured data of the observer, we reach the conclusion that it is not a free particle. On the other hand, in the classical system of the harmonic oscillator, an observer with nonlinear measuring instruments can conclude that the system observed is that of a free particle, or that of a damped oscillator, or another system. From the different observations, various models can be constructed. However, all these different models describe the same classical system, and the master equations are invariant under the same group of point transformations but in different parametrizations. In the master equations that we studied there are neither position nor time variables: the independent variable is the scale factor, $\omega=6\ln a$, and the Hubble function, $H\left( a\right) $, is the dependent variable. Therefore, we can say that, at the level of the first-order approximation for the spectral indices, various representations of the variables $\left\{ a,H\left( a\right) \right\} $ provide different observable values for the spectral indices. This property is violated when we consider the second-order approximation. Transformations of this kind are well-known in physics. For instance, the Darboux transformation for the Schr\"{o}dinger equation \cite{darboux} is just a point transformation that relates linear equations with maximal symmetry; that is, it belongs to exactly the same category of transformations that we discuss here. A special characteristic of the Darboux transformation is that it preserves the form of the equation but the potential in the Schr\"{o}dinger equation changes.
An application of the Darboux transformation for the determination of exactly solvable cosmological models can be found in \cite{darboux22}. Transformations which keep the form of our master equation exist. We do not have potential terms in the master equations, but there are transformations which change the constant coefficients appearing there while retaining the form of the master equations. In order to demonstrate this, consider the master equation (\ref{in.40}). The application of the first transformation, $F\left( \omega\right) \rightarrow\frac{1-\bar{n}_{1}}{1-n_{1}}\bar{F}\left( \omega\right) ,$ in (\ref{in.40}) preserves the form of the master equation, but the constant $\lambda$ in the equation of state for the generalized Chaplygin gas (\ref{in.42a}) shifts from $\lambda=2-n_{1}$ to $\bar{\lambda}=2-\bar{n}_{1}$. Moreover, the application of the second transformation, $\omega\rightarrow\frac{\bar{n}_{0}}{n_{0}}\bar{\omega},$ in (\ref{in.40}) gives \begin{equation} \frac{d^{2}\bar{F}}{d\bar{\omega}^{2}}+\left( 1-\bar{n}_{1}\right) \left( \frac{d\bar{F}}{d\bar{\omega}}\right) ^{2}-\frac{\bar{n}_{0}}{3}\frac{d\bar{F}}{d\bar{\omega}}=0, \label{in.63} \end{equation} which is exactly the same master equation, just with different coefficients. Furthermore, for the more general case that we studied (the master equation (\ref{in.48})), it is easy to see that for $n_{2}n_{0}\neq0$, eqn. (\ref{in.48}) admits eight Lie point symmetries; that is, it is maximally symmetric. Hence, there exists a mapping $\left\{ \omega,F\left( \omega\right) \right\} \rightarrow\left\{ \Omega,\Phi\left( \Omega\right) \right\} $ which transforms the master equation (\ref{in.48}) to that of a free particle, or to any other maximally symmetric equation -- such as the other master equations we studied above. Of course, this result can be used to derive closed-form solutions in other models with a maximally symmetric master equation.
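The rescaling $\omega\rightarrow\frac{\bar{n}_{0}}{n_{0}}\bar{\omega}$ can also be verified by the chain rule; the sketch below (our symbol names; only $\omega$ is rescaled, so $\bar{n}_{1}=n_{1}$ here) checks that (\ref{in.40}), multiplied by $\left( \bar{n}_{0}/n_{0}\right) ^{2}$, takes the form (\ref{in.63}):

```python
import sympy as sp

wb, n0, n0b, n1 = sp.symbols('omega_bar n_0 bar_n_0 n_1', positive=True)
Fb = sp.Function('Fbar')(wb)

# omega = (bar_n_0 / n_0) * omega_bar, so d/domega = (n_0 / bar_n_0) d/domega_bar
dF = (n0 / n0b) * sp.diff(Fb, wb)
d2F = (n0 / n0b)**2 * sp.diff(Fb, wb, 2)

# Left-hand side of (in.40) in the new variable, rescaled by (bar_n_0 / n_0)^2
lhs = (n0b / n0)**2 * (d2F + (1 - n1) * dF**2 - n0 * dF / 3)

# Should coincide with (in.63), where the coefficient n_0 has become bar_n_0
target = sp.diff(Fb, wb, 2) + (1 - n1) * sp.diff(Fb, wb)**2 - n0b * sp.diff(Fb, wb) / 3
residual = sp.simplify(lhs - target)
print(residual)  # -> 0
```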
Recall that a map in the space of the variables which transforms one solution to any other solution was also found in \cite{new}. However, while both maps transform solutions into solutions, the one that we have discussed here transforms not only solutions into solutions but systems of dynamical equations into equivalent systems\footnote{Other transformations which belong to these families of transformations are presented in \cite{charters}.}. In order to reflect the latter property, the map is called an equivalent point transformation. The elements of the $SL\left( 3,R\right) $ algebra -- except for the transformations which relate algebraically equivalent equations -- provide us with important physical information about the system under study. One of these properties, which arises from equation (\ref{in.25}), is the well-known scale invariance of the Harrison-Zeldovich spectrum, regarding which it can easily be seen that equation (\ref{in.25}) is invariant under the transformations $\omega=\omega^{\prime}+\omega_{0}$ or $\omega=\omega^{\prime}e^{\bar{\omega}_{0}}$, where these two transformations are related with the symmetry vectors $\partial_{\omega}$ and $\omega\partial_{\omega}$. In particular, every element of $SL\left( 3,R\right) $ is related to a point transformation which leaves the differential equation, and consequently the solution, invariant. Moreover, with a different reparameterization of $SL\left( 3,R\right) $, for equivalent models, the physical interpretation of the invariant point transformations can change between the different models. \section{Conclusions} \label{conc} In scalar-field cosmology, the dark-energy EoS and the inflationary scalar-field potential have been reconstructed from the spectral index, $n_{s}$. From the Planck 2015 data analysis, it is known that the observable variables -- the tensor-to-scalar ratio, $r$, and the spectral index for the density perturbations, $n_{s}$ -- form a surface in the $n_{s}-r$ plane.
Furthermore, these two observable variables can be expressed in terms of the slow-roll parameters and their derivatives. Therefore, the ansatz that the spectral index for the density perturbations is related to the tensor-to-scalar ratio, $\left( n_{s}-1\right) =h\left( r\right) $, provides a differential (master) equation whose solution defines the corresponding cosmological model. In this paper, we assumed $n_{s}$ to be given in the first approximation by a function $h\left( r\right) $ that is: (a) constant, (b) linear, and (c) quadratic, respectively. In order for the first-order approximation to be valid, the free parameters which have been introduced by the function $h\left( r\right) $ have to satisfy some consistency conditions. We work with the HSR parameters. The case in which $h\left( r\right) $ is constant, that is, $n_{s}-1=-2n_{0},$ is one that has been studied before in the literature and, in the limit $n_{0}=0$, corresponds to the Harrison--Zeldovich spectrum. The differential equation which follows provides the scale factor of a specific intermediate inflation, $a\left( t\right) \simeq\exp\left( a_{1}t^{2/3}\right) $, while the corresponding perfect fluid satisfies the equation of state (\ref{in.27}). On the other hand, for nonzero $n_{0}$, we found that the scalar field satisfies an EoS given by expression (\ref{in.42a}) for $\lambda=2,$ which includes expression (\ref{in.27}). For the scalar-field potential, the construction looks similar: for $n_{0}=0$ the potential is given in terms of polynomials of the field $\phi$, and for $n_{0}\neq0$ in terms of hyperbolic trigonometric functions. As a second generalization, we assumed $h\left( r\right) $ to be the linear function, $h\left( r\right) =-\frac{n_{1}}{5}r-2n_{0}$.
Now, the models derived from the differential equation $n_{s}-1=h\left( r\right) ,$ in the first-order approximation, are the generalized Chaplygin gases, (\ref{in.27}) and (\ref{in.42a}), for $n_{0}=0$ and $n_{0}\neq0,$ respectively, where now the power $\lambda$ in the equations of state is related to the value of $n_{1}$ by $\lambda=2-n_{1}$. Finally, the case in which $h\left( r\right) $ is a quadratic polynomial was considered and two new equations of state which generalize the Chaplygin gas were derived. Exact examples displaying a generalised sudden singularity of the type identified by Barrow and Graham \cite{graham} for inflationary scalar fields with fractional potentials were found here. Lastly, we determined the ranges of values of the free parameters of the models which permit the universe to escape from the inflationary phase. It is important to mention that in this work we have assumed that we are in the inflationary epoch, and so the equation of state parameters, or equivalently the scalar-field potentials that we reconstructed, can be seen as the leading-order terms, or attractors, of a more general equation of state which describes the whole evolution of the universe. It is particularly interesting that the master equations we derived in our study are second-order differential equations of maximal symmetry. Hence, they are invariant under the action of point transformations with generators given by the elements of the $SL\left( 3,R\right) $ algebra. Every master equation defines a representation of the $SL\left( 3,R\right) $ algebra, and the map which changes the representation transforms the master equation to the corresponding master equation of another model. This relates explicitly the form of the line elements for the various cosmological models.
The transformation which performs the change is a projective transformation in the jet-space of the master equation; that is, a map in the space of the dependent variable $F\left( \omega\right) $ and the spacetime variable $\omega$ -- we recall that $dt=e^{-F\left( \omega\right) /2}d\omega$ and $a\left( t\right) =e^{\omega/6}$. In a forthcoming work we will investigate whether the latter result can be extended to the case in which the master equation,~$n_{s}-1=h\left( r\right) $, is defined by higher-order approximations for the spectral indices. \begin{acknowledgments} JDB is supported by the Science and Technology Facilities Council of the United Kingdom (STFC). AP acknowledges the financial support of FONDECYT grant no. 3160121. AP thanks the Durban University of Technology for the hospitality provided while part of this work was performed. \end{acknowledgments}
\section{Introduction} Fractional differential equations have attracted great attention among researchers due to their wide range of applications to meaningful phenomena in fluid mechanics, electrical networks, signal processing, diffusion, reaction processes, and other fields of science and engineering \cite{MR93,Kilbas06,CM71}. Among these applications, the nonlinear oscillation of an earthquake can be modeled with fractional derivatives \cite{He98a}; the fluid-dynamic traffic model with fractional derivatives \cite{He99a} can eliminate the deficiency arising from the assumption of continuum traffic flow; and a fractional nonlinear complex model describes seepage flow in porous media \cite{He98}. Indeed, it is very difficult to find exact solutions for a wide class of such differential equations. Keeping this in mind, various robust techniques have been developed to find approximate solutions of fractional differential equations, among others, the generalized differential transform method \cite{LH11}, the variational iteration method \cite{SE15}, the local fractional variational iteration method \cite{YBKM14}, the reproducing kernel Hilbert space method \cite{GC12}, the Adomian decomposition method \cite{MO06}, the homotopy analysis method \cite{SE13}, and the fractional reduced differential transform method \cite{SAS14a,SKAS14,SS15,SM16}. Recently, several techniques coupled with the Laplace transform have been developed; see, e.g., \cite{KKAR14,GKO13,KH11,KG12,KGB13,KGK12,KGHV12,KKA13}. In particular, the homotopy perturbation transform method (HPTM) has been employed for solving a fractional model of the Navier--Stokes equation \cite{KSK15}, optimal control problems \cite{GR16}, fractional coupled sine-Gordon equations \cite{SSahoo15}, Falkner--Skan wedge flow \cite{MAAR16}, time- and space-fractional coupled Burgers' equations \cite{SKS16}, strongly nonlinear oscillators \cite{MEA09}, nonlinear boundary value problems \cite{NINO16,KW11}, and non-homogeneous partial differential equations with a variable coefficient \cite{MFKY11}. 
The reader may also refer to \cite{SUE16}. Partial functional differential equations with proportional delays, a special class of delay partial differential equations, arise especially in biology, medicine, population ecology, control systems and climate models \cite{Wu96}, and in complex economic macrodynamics \cite{Keller10}. In this paper, we obtain the numerical solution of the initial valued autonomous system of time-fractional partial differential equations (TFPDEs) with proportional delay \cite{SUE16} defined by \begin{equation*} \left\{ \begin{array}{ll} \mathcal{D}_t^{\alpha} \(u(x, t)\) = f\left(x, u(a_0 x, b_0 t), \frac{\partial }{\partial x}u(a_1 x, b_1 t), \ldots, \frac{\partial^m }{\partial x^m}u(a_m x, b_m t) \right), \\ u^{k}(x, 0)=\psi_k(x), \end{array} \right. \end{equation*} where $a_i, b_i \in (0, 1)$ for all $i\in N \cup \{0\}$, $\psi_k$ is the initial data and $f$ is the differential operator. Of the independent variables $(x,t)$, $t$ denotes time and $x$ the space variable (position in space, size of cells, maturation level, etc.), while the solution may represent voltage, temperature, or densities of different particles, for instance chemicals or cells. One significant example of the model is the Korteweg-de Vries (KdV) equation with proportional delay, arising in the study of shallow water waves: \begin{equation*} \mathcal{D}_t^{\alpha} \(u(x, t)\) = b u \frac{\partial }{\partial x} u(a_0x, b_0 t)+ \frac{\partial^3 }{\partial x^3} u(a_1 x, b_1 t), \quad 0< \alpha < 1, \end{equation*} where $b$ is a constant. Another well-known model, the time-fractional nonlinear Klein--Gordon equation with proportional delay, arises in quantum field theory to describe nonlinear wave interaction: \begin{equation*} \mathcal{D}_t^{\alpha} \(u(x, t)\) = u \frac{\partial^2 }{\partial x^2} u(a_0x, b_0 t)- b u(a_1 x, b_1 t)-F(u(a_2 x, b_2 t))+ h(x, t), \quad 1< \alpha < 2, \end{equation*} where $b$ is a constant, $h(x, t)$ is a known analytic function, and $F$ is a nonlinear operator of $u(x, t)$. 
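The Caputo operator $\mathcal{D}_t^{\alpha}$ appearing in these models obeys the power rule $\mathcal{D}_t^{\alpha}t^{\gamma}=\frac{\Gamma(1+\gamma)}{\Gamma(1+\gamma-\alpha)}t^{\gamma-\alpha}$, which can be sanity-checked numerically. The sketch below (plain Python with only the standard library; the helper name, the choice $\alpha=1/2$, and the tolerance are our own, not from the paper) evaluates the Caputo integral for $f(t)=t^{2}$ after the substitution $\tau=t-s^{2}$, which removes the endpoint singularity of the kernel:

```python
import math

def caputo_half_derivative(df, t, n=20000):
    """Numerically evaluate the Caputo derivative D_t^{1/2} f(t), given f' = df.

    The substitution tau = t - s**2 turns
        (1/Gamma(1/2)) * integral_0^t (t - tau)**(-1/2) * f'(tau) dtau
    into the smooth integral
        (2/sqrt(pi)) * integral_0^{sqrt(t)} f'(t - s**2) ds,
    which is evaluated with the composite Simpson rule (n must be even).
    """
    b = math.sqrt(t)
    h = b / n
    total = df(t) + df(t - b * b)          # endpoint values (s = 0 and s = b)
    for i in range(1, n):
        s = i * h
        total += (4 if i % 2 else 2) * df(t - s * s)
    return (2.0 / math.sqrt(math.pi)) * (h / 3.0) * total

# Power-rule check for f(t) = t^2 (so f'(t) = 2t) at t = 0.8:
# D^{1/2} t^2 = Gamma(3)/Gamma(5/2) * t^{3/2}.
t = 0.8
numeric = caputo_half_derivative(lambda tau: 2.0 * tau, t)
exact = math.gamma(3) / math.gamma(2.5) * t ** 1.5
assert abs(numeric - exact) < 1e-9
```

Composite Simpson integration is essentially exact here because the transformed integrand is a polynomial in $s$; for a general $f$ one would refine $n$ until the result stabilizes.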
For details of various types of models, we refer the reader to \cite{SUE16,Wu96} and the references therein. To the best of our knowledge, there is only a little literature on numerical techniques for TFPDEs with delay, among it: the Chebyshev pseudospectral method for linear differential and differential-functional parabolic equations by Zubik-Kowal \cite{ZK2000}, the spectral collocation and waveform relaxation methods by Zubik-Kowal and Jackiewicz \cite{ZJ06}, an iterated pseudospectral method \cite{MZ05} for nonlinear delay partial differential equations, the two-dimensional differential transform method (2D-DTM) and RDTM for partial differential equations with proportional delay by Abazari and Ganji \cite{AG11}, the DTM for nonlinear integro-differential equations with proportional delay by Abazari and Kilicman \cite{AK11}, the group analysis method for the nonhomogeneous Burgers equation with proportional delay due to Tanthanuch \cite{T12}, the homotopy perturbation method for TFPDEs with proportional delay by Sakar et al. \cite{SUE16}, Shakeri and Dehghan \cite{SD08}, and Biazar and Ghanbari \cite{BG12}, the variational iteration method (VIM) for solving a neutral functional-differential equation with proportional delays by Chen and Wang \cite{CW10}, and the functional constraints method for exact solutions of nonlinear delay reaction-diffusion equations by Polyanin and Zhurov \cite{PZ14}. In this paper, our main goal is to propose an alternative numerical solution of the initial valued autonomous system of time-fractional partial differential equations with proportional delay \cite{SUE16}. The paper is organized into five more sections, which follow this introduction. Specifically, Section \ref{sec-fracks} revisits fractional calculus. Section \ref{sec-impliment} is devoted to the implementation of the HPTM for problem \eqref{eqn-PD}. 
Section \ref{sec-numeric-stdy} is concerned with three test problems, with the main aim of establishing the convergence and effectiveness of the HPTM. Finally, Section \ref{sec-conclu} concludes the paper with critical analysis and research perspectives. \section{Preliminaries} \label{sec-fracks} This section revisits some basic definitions of fractional calculus \cite{MR93} which are needed throughout the paper. \begin{definition} Let $ \mu \in \BBR$ and $m \in \BBN$. A function $f: \BBR^{+} \to \BBR$ belongs to $\BBC_{\mu}$ if there exist $k \in \BBR$, $k > \mu$, and $g \in C[0, \infty)$ such that $f(x)=x^k g(x)$, $ \forall x \in \BBR^+$. Moreover, $f\in\BBC_{\mu}^m$ if $f^{(m)}\in \BBC_{\mu}$. \end{definition} \begin{definition} Let $\mathcal{J}_t^\alpha~ (\alpha \geq 0)$ be the Riemann--Liouville fractional integral operator and let $f \in \BBC_{\mu}$; then \begin{flushleft} \begin{description} \item[(*)] $ \mathcal{J}_t^{\alpha} f\left(t \right){\rm{ = }}\frac{{\rm{1}}}{{\Gamma \left( \alpha \right)}}\int\limits_{\rm{0}}^{\rm{t}} {\left( {{\rm{t - \tau}}} \right)^{\alpha - 1} } f\left( \tau \right){\rm{d\tau,}} ~~ \mbox { if } \alpha >0, $ \item[(**)]$\mathcal{J}_t^0 f\left(t \right){\rm{ = }} f(t)$, \quad where $\Gamma \left( z \right) := \int\limits_0^\infty {e^{ - t} t^{z - 1} dt},\ z \in \BBC $. 
\end{description} \end{flushleft} \end{definition} For $f \in \BBC_{\mu}$, $\mu \geq -1$, $\alpha, \beta \geq 0 $ and $\gamma >-1$, the operator $\mathcal{J}_t^{\alpha}$ satisfies the following properties: \begin{itemize} \item [i)] $\mathcal{J}_t^{\alpha} \mathcal{J}_t^{\beta} f(x) = \mathcal{J}_t^{\alpha+\beta} f(x)=\mathcal{J}_t^{\beta} \mathcal{J}_t^{\alpha} f(x)$; \item [ii)] $\mathcal{J}_t^{\alpha} x^{\gamma} = \frac{\Gamma {(1+\gamma)}}{\Gamma {(1+\gamma+\alpha)}}x^{\alpha+\gamma}$. \end{itemize} The Caputo fractional differentiation operator $\mathcal{D}^\alpha_{t}$ is defined as follows: \begin{definition} Let $f \in \BBC_{\mu}$, $\mu \geq -1$ and $m - 1 < \alpha \le m, {\rm{ }}m \in \BBN$. Then \begin{equation}\label{eqn-d2} \mathcal{D}_t^\alpha f\left( t \right){\rm{ = }}\mathcal{J}_t^{m - \alpha } \mathcal{D}_t^m f\left( t \right) = \frac{{\rm{1}}}{{\Gamma \left( {m - \alpha } \right)}}\int\limits_{\rm{0}}^{\rm{t}} {\left( {{\rm{t - \tau}}} \right)^{m - \alpha - 1} } f^{\left( m \right)} \left( \tau \right){\rm{d\tau.}} \end{equation} \end{definition} Moreover, the operator $\mathcal{D}_t^\alpha$ satisfies the following basic properties. \begin{lemma} \label{lem-caput} Let $m -1 < \alpha \le m,{\rm{ }}m \in \BBN,$ $f \in \BBC_\mu ^m ,{\rm{ }}\mu \ge {\rm{ - 1}}$, and $\gamma > \alpha -1$; then \begin{flushleft} $\begin{array}{ll} (a)~ \mathcal{D}_t^{\alpha}\mathcal{D}_t^{\beta} f(x) = \mathcal{D}_t^{\alpha+\beta} f(x); \qquad \quad (b)~~\mathcal{D}_t^{\alpha} x^{\gamma} = \frac{\Gamma {(1+\gamma)}}{\Gamma {(1+\gamma-\alpha)}}x^{\gamma-\alpha},\\ (c)~ \mathcal{D}_t^\alpha \mathcal{J}_t^\alpha f\left( t \right){\rm{ = }}f\left( t \right), \qquad (d) ~ \mathcal{J}_t^\alpha \mathcal{D}_t^\alpha f\left( t \right){\rm{ = }}f\left( t \right) - \sum\limits_{k = 0}^{m-1} {f^{\left( k \right)} \left( {0^ + } \right)\frac{{t^k }}{{k!}}} ,& \mbox{ for }~~ {\rm{ t > 0.}} \\ \end{array}$ \end{flushleft} \end{lemma} For more details on fractional derivatives, one can refer to
\cite{MR93,Kilbas06,CM71}. \begin{definition} The Laplace transform of a piecewise continuous function $u(t)$ in $(0, \infty)$ is defined by \begin{equation}\label{eqn-d03} \mathcal{U}(s)=\mathcal{L}\{u(t)\}=\int_{0}^{\infty} u(t)\exp(-st)dt, \end{equation} where $s$ is a parameter. Moreover, for the Caputo derivative $\mathcal{D}_t^\alpha u\left( t \right)$ and the Riemann--Liouville fractional integral $ \mathcal{J}_t^\alpha u\left( t \right)$ of a function $u \in \BBC_{\mu} ~~(\mu\geq-1)$, the Laplace transforms \cite{MR93,Kilbas06} are given by \begin{equation}\label{eqn-d04} \begin{split} & \mathcal{L}\{\mathcal{J}_t^\alpha u\left(t\right)\}=s^{- \alpha} \mathcal{U}(s),\\ & \mathcal{L}\{\mathcal{D}_t^\alpha u\left(t\right)\}=s^{\alpha} \mathcal{U}(s) -\sum_{r=0}^{m-1}s^{\alpha-r-1}u^{(r)}(0+), \quad (m-1 < \alpha \leq m).\\ \end{split} \end{equation} \end{definition} \section{Implementation: HPTM for TFPDEs with proportional delay }\label{sec-impliment} \noindent This section describes the implementation of the HPTM for the initial valued autonomous system of TFPDEs with proportional delay, defined below: \begin{equation}\label{eqn-PD} \left\{ \begin{array}{ll} \mathcal{D}_t^{\alpha} \(u(x, t)\) = f\left(x, u(a_0 x, b_0 t), \frac{\partial }{\partial x}u(a_1 x, b_1 t), \ldots, \frac{\partial^m }{\partial x^m}u(a_m x, b_m t) \right), \\ u(x, 0)=\psi(x). \end{array} \right. \end{equation} Taking the Laplace transform of Eq. 
\eqref{eqn-PD}, we get \begin{equation}\label{eqn-RDT-SG-LT} \mathcal{U}(x, s) = \frac{u(x, 0)}{s} +\frac{1}{s^{\alpha}} \mathcal{ L} \left[ f\left(x, u(a_0 x, b_0 t), \frac{\partial }{\partial x}u(a_1 x, b_1 t), \ldots, \frac{\partial^m }{\partial x^m}u(a_m x, b_m t) \right)\right]. \end{equation} The inverse Laplace transform of Eq. \eqref{eqn-RDT-SG-LT} leads to \begin{equation}\label{eqn-RDT-SG-ILT} u(x, t) = \psi(x) + \mathcal{L}^{-1}\left[\frac{1}{s^\alpha} \mathcal{L}\left[ f\left(x,u(a_0 x, b_0 t), \frac{\partial }{\partial x}u(a_1 x, b_1 t), \ldots, \frac{\partial^m }{\partial x^m}u(a_m x, b_m t) \right)\right]\right], \end{equation} where $\psi(x)$ denotes the prescribed initial condition. Following the homotopy perturbation method, we seek the solution of Eq. \eqref{eqn-RDT-SG-ILT} as a power series: \begin{equation}\label{eqn-basic} u^*(x,t)=\sum_{\imath =0}^{\infty}p^\imath u_\imath(x,t). \end{equation} From Eqs. \eqref{eqn-basic} and \eqref{eqn-RDT-SG-ILT}, we get \begin{equation}\label{eqn-RDT-SG-ILT-hpm1} \begin{split} &\sum_{r=0}^{\infty}p^r u_r(x,t)=u(x,0)+\\ &p \left[\mathcal{L}^{-1}\left\{\frac{1}{s^\alpha} \mathcal{L}\left\{ f\left(x, t,\sum_{r=0}^{\infty} u_r(a_0 x, b_0 t), \frac{\partial }{\partial x}\sum_{r=0}^{\infty} u_r(a_1 x, b_1 t), \ldots \frac{\partial^m }{\partial x^m} \sum_{r=0}^{\infty}u_r(a_m x, b_m t) \right)\right\} \right\} \right]. \end{split} \end{equation} On equating like powers of $p$, we get \begin{equation} \label{eqn-RDT-SG-ILT-hpm11} \left.\begin{split} & p^0 : u_0(x,t)=\psi(x),\\ &p^1 :u_1(x,t)=\mathcal{L}^{-1}\left[\frac{1}{s^\alpha} \mathcal{L}\left[ f\left(x, t, u_0(a_0 x, b_0 t), \frac{\partial }{\partial x}u_0(a_1 x, b_1 t), \ldots, \frac{\partial^m }{\partial x^m}u_0(a_m x, b_m t) \right)\right]\right],\\ & \qquad \qquad \qquad \vdots \qquad \qquad \qquad \qquad \vdots \end{split} \right\} \end{equation} For $p=1$, an approximate solution is given by \begin{equation}\label{eqn-approx} 
u(x,t)=\sum_{\imath =0}^{\infty} u_\imath(x,t). \end{equation} \subsection{Convergence analysis and error estimate}\label{sec-conver.} This section studies the convergence of the HPTM solution and the error estimate. \begin{theorem}\label{conv} Let $0< \gamma <1$ and let $u_n(x, t), u(x, t)$ be in the Banach space $(\mathcal{C}[0, 1], ||\cdot||)$. Then the series solution $\sum_{n=0}^{\infty} u_n(x, t)$ generated by the sequence $\{u_n(x,t)\}_{n=0}^\infty$ converges to the solution of Eq. \eqref{eqn-PD} whenever $||u_n(x,t)||\leq \gamma ||u_{n-1}(x,t)||$ for all $n\in \BBN$. Moreover, the maximum absolute truncation error of the series solution \eqref{eqn-approx} of Eq. \eqref{eqn-PD} is estimated as \begin{equation}\label{eqn-max-error} \left|\left|u(x,t)-\sum_{\imath =0}^{\ell} u_\imath(x,t)\right|\right|\leq \frac{\gamma^{\ell+1}}{1-\gamma}||u_0(x,t)||<\frac{||u_0(x,t)||}{1-\gamma}. \end{equation} \end{theorem} \begin{proof} The proof is similar to \cite[Theorems 4.1, 4.2]{SUE16}. \end{proof} \section{Application of HPTM for TFPDEs with proportional delay}\label{sec-numeric-stdy} In this section, the effectiveness and validity of the HPTM are illustrated on three test problems for the initial valued autonomous system of TFPDEs with proportional delay. \begin{example}\label{ex1} Consider the initial value system for the time-fractional generalized Burgers equation with proportional delay \cite{SUE16}: \begin{equation} \label{eqn-ex1} \left\{\begin{split} & \mathcal{D}_t^{\alpha} u(x, t) = \frac{\partial^2}{\partial x^2} u(x, t)+u\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u\left(x,\frac{t}{2}\right)+ \frac{1}{2}u(x, t), \\ & u(x, 0) = x. \end{split} \right. \end{equation} Taking the Laplace transform of Eq. 
\eqref{eqn-ex1}, we get \begin{equation}\label{eqn-ex1-LT} \mathcal{U}(x, s)=\frac{x}{s} +\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2}{\partial x^2} u(x, t)+u\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u\left(x,\frac{t}{2}\right)+ \frac{1}{2}u(x, t)\right]. \end{equation} Now, the inverse Laplace transform together with the expansion \eqref{eqn-basic} leads to \begin{equation}\label{eqn-ex1-ILT-HPM} \begin{split} &\sum \limits_{r = 0}^\infty p^r u_r(x,t)=x+ \\ &p \left[\mathcal{L}^{-1}\left\{\frac{1}{s^\alpha}\mathcal{L}\left( \frac{\partial^2}{\partial x^2} \(\sum\limits_{r = 0}^\infty u_r(x, t)\)+\sum\limits_{k = 0}^{\infty}p^k\sum\limits_{r = 0}^{k} u_r\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_{k-r}\left(x,\frac{t}{2}\right)+ \frac{1}{2}\sum\limits_{r = 0}^\infty u_r(x, t)\right)\right\}\right]. \end{split} \end{equation} On comparing the coefficients of like powers of $p$, we get \begin{equation} \label{eqn-ex1-HPTMa} \left.\begin{split} p^0 : u_0(x,t)&=x,\\ p^1:u_1(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2}{\partial x^2} \( u_0(x, t)\)+ u_0\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_0\left(x,\frac{t}{2}\right)+ \frac{1}{2}u_0(x, t)\right]\right]\\ &= \frac{x t^\alpha}{\Gamma{(1+\alpha)}}, \\ p^2: u_2(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left\{ \frac{\partial^2}{\partial x^2} \( u_1(x, t)\)+\frac{1}{2}u_1(x, t)\right.\right.\\ &\left.\left.+ u_0\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_1\left(x,\frac{t}{2}\right)+ u_1\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_0\left(x,\frac{t}{2}\right)\right\}\right]\\ &= \frac{(1+2^{1-\alpha})x t^{2\alpha}}{2\Gamma{(1+2\alpha)}},\\ \end{split} \right. 
\end{equation} \begin{equation} \label{eqn-ex1-HPTMb} \left.\begin{split} p^3:u_3(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2}{\partial x^2} \( u_2(x, t)\)+ \frac{u_2(x, t)}{2} +u_2\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_0\left(x,\frac{t}{2}\right)\right.\right.\\ &\left.\left.+ u_0\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_2\left(x,\frac{t}{2}\right)+ u_1\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_1\left(x,\frac{t}{2}\right)\right]\right]\\ &=\frac{x t^{3\alpha}}{4 \Gamma{(1+3\alpha)}} \left\{1+ 2^{1-\alpha} + 2^{1-2\alpha} + 2^{2-3\alpha} + \frac{ \Gamma{(1+2\alpha)}}{\Gamma{(1+\alpha)}^2} ~~ 2^{1-2\alpha}\right\},\\ p^4:u_4(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2}{\partial x^2} u_3(x, t)+ \frac{u_3(x, t)}{2}+ u_0\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_3\left(x,\frac{t}{2}\right)+ u_1\(\frac{x}{2}, \frac{t}{2}\)\right.\right.\\ &\left.\left. \times\frac{\partial }{\partial x} u_2\left(x,\frac{t}{2}\right) + u_2\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_1\left(x,\frac{t}{2}\right)+u_3\(\frac{x}{2}, \frac{t}{2}\) \frac{\partial }{\partial x} u_0\left(x,\frac{t}{2}\right)\right]\right]\\ &= \frac{{x t^{4\alpha}}}{8\Gamma{(1+4\alpha)}}\left\{1+2^{9-6\alpha}+ 2^{8-5\alpha}+3\times 2^{7-3\alpha}+2^{7-2\alpha} +2^{7-\alpha}+2^{8-4\alpha}\right. \\ & \left.+(2^{8-5\alpha}+2^{7-2\alpha}) \frac{\Gamma{(1+2\alpha)}}{\Gamma{(1+\alpha)^2}}+(2^{9-4\alpha}+2^{8-3\alpha})\frac{\Gamma{(1+3\alpha)}}{\Gamma{(1+\alpha)}\Gamma{(1+2\alpha)}} \right\}\\ & \qquad \qquad \qquad \vdots \qquad \qquad \qquad \qquad \vdots \end{split} \right. \end{equation} Therefore the solution of Eq. \eqref{eqn-ex1} is \begin{equation}\label{eqn-ex1-HP-ILT-SOLN} u(x,t)=u_0(x,t)+u_1(x,t)+u_2(x,t)+u_3(x,t)+u_4(x,t)+\ldots \end{equation} The same solution is obtained by Sakar et al. \cite{SUE16}. 
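The recursion \eqref{eqn-ex1-HPTMa}--\eqref{eqn-ex1-HPTMb} can also be checked mechanically: every iterate of Example \ref{ex1} has the monomial form $u_k=c_k\,x\,t^{k\alpha}$, so the second spatial derivative vanishes, the proportional-delay product contributes a factor $2^{-1-k\alpha}$, and $\mathcal{J}_t^{\alpha}$ acts through the power rule. This reduces the HPTM step to a scalar recursion for the coefficients $c_k$. A short Python sketch (the helper below is our own verification aid, not part of the method as published):

```python
import math

def hptm_coeffs(alpha, n_terms):
    """Coefficients c_k of the HPTM series u_k = c_k * x * t^(k*alpha)
    for the delay Burgers problem of Example 1.  Each step applies
    J^alpha via the power rule; the convolution comes from the
    delay product u_r(x/2, t/2) * u_x(x, t/2)."""
    c = [1.0]                                  # c_0, from u_0 = x
    for k in range(n_terms - 1):
        conv = sum(c[r] * c[k - r] for r in range(k + 1))
        rhs = 0.5 * c[k] + 2.0 ** (-1 - k * alpha) * conv
        c.append(rhs * math.gamma(1 + k * alpha) / math.gamma(1 + (k + 1) * alpha))
    return c

# At alpha = 1 the coefficients reduce to 1/k!, i.e. u = x*exp(t).
c = hptm_coeffs(1.0, 8)
assert all(abs(ck - 1.0 / math.factorial(k)) < 1e-12 for k, ck in enumerate(c))

# General alpha: c_2 equals (1 + 2^(1-alpha)) / (2*Gamma(1+2*alpha)).
a = 0.8
c = hptm_coeffs(a, 3)
assert abs(c[2] - (1 + 2 ** (1 - a)) / (2 * math.gamma(1 + 2 * a))) < 1e-12
```

At $\alpha=1$ the recursion returns $c_k=1/k!$, consistent with the closed form $u(x,t)=x\exp(t)$; for general $\alpha$ it reproduces, e.g., the printed coefficient of $u_2$.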
In particular, for $\alpha=1$, the seventh-order solution is \begin{equation}\label{eqn-ex1-HP-ILT-exact-SOLN} u(x,t)=x\left( 1+t+\frac{t^2}{2}+\frac{t^3}{6}+\frac{t^4}{24}+\frac{t^5}{120}+\frac{t^6}{720}+\frac{t^7}{5040}\right), \end{equation} which agrees with the solution obtained by DTM and RDTM \cite{AG11}; the full series converges to the exact solution $u(x, t) = x\exp(t)$. The HPTM solution for $\alpha=1$ is reported in Table \ref{tab1.1}. The surface solution behavior of $u(x,t)$ for the values $\alpha =0.8, 0.9, 1.0$ is depicted in Fig. \ref{fig1.1}, and plots of the solution for $x=1$ at times $t\leq 1$ are depicted in Fig. \ref{fig1.11}. It is found that the results agree well with the HPM and DTM solutions and approach the exact solution. \begin{table}[t!] \caption{Approximate HPTM solution of Example \ref{ex1} with first six terms at $\alpha =1.0$} \label{tab1.1} \centering \begin{tabular}{llllllllll} \toprule $x$ &{}& $t$ &{}& Exact &{}& HPTM &{}& $E_{abs}$ \\ \midrule $0.25$ &{}&$0.25$ &{}& 0.321006 &{}& 0.321004 &{}& 2.122401E-06 \\ &{}&$0.50$ &{}& 0.412180 &{}& 0.412109 &{}& 7.094268E-05 \\ &{}&$0.75$ &{}& 0.529250 &{}& 0.528686 &{}& 5.634807E-04 \\ &{}&$1.00$ &{}& 0.679570 &{}& 0.677083 &{}& 2.487124E-03 \\ $0.50$ &{}&$0.25$ &{}& 0.642012 &{}& 0.642008 &{}& 4.244802E-06 \\ &{}&$0.50$ &{}& 0.824361 &{}& 0.824219 &{}& 1.418854E-04 \\ &{}&$0.75$ &{}& 1.058500 &{}& 1.057373 &{}& 1.126961E-03 \\ &{}&$1.00$ &{}& 1.359141 &{}& 1.354167 &{}& 4.974248E-03 \\ $0.75$ &{}&$0.25$ &{}& 0.963019 &{}& 0.963012 &{}& 6.369688E-06 \\ &{}&$0.50$ &{}& 1.236541 &{}& 1.236328 &{}& 2.128250E-04 \\ &{}&$0.75$ &{}& 1.587750 &{}& 1.586060 &{}& 1.690020E-03 \\ &{}&$1.00$ &{}& 2.038711 &{}& 2.031250 &{}& 7.461370E-03 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[!t] \includegraphics[width=5.30cm,height=5.20 cm]{fig1a=a8} 
\includegraphics[width=5.30cm,height=5.20 cm]{fig1b=a9} \includegraphics[width=5.530cm,height=5.20 cm]{fig1c=a10} \caption{The surface HPTM solution behavior of $u$ of Example \ref{ex1} for (a) $\alpha =0.8$; (b) $\alpha =0.9$; (c) $\alpha =1.0$} \label{fig1.1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=9.30cm,height=6.20 cm]{fig1=2D} \caption{Plots of the HPTM solution $u(x,t)$ of Example \ref{ex1} for $\alpha=0.8, 0.9, 1.0$, $t\in (0,1)$, $x=1$} \label{fig1.11} \end{figure} \end{example} \begin{example}\label{ex2} Consider the initial value TFPDE with proportional delay \cite{AG11,SUE16}: \begin{equation} \label{eqn-ex2} \left\{\begin{split} & \mathcal{D}_t^{\alpha} u(x, t) = u\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u\left(x,\frac{t}{2}\right) - u(x, t), \\ & u(x, 0) = x^2. \end{split} \right. \end{equation} Taking the Laplace transform of Eq. \eqref{eqn-ex2}, we get \begin{equation}\label{eqn-ex2-LT} \mathcal{U}(x, s)=\frac{x^2}{s} +\frac{1}{s^\alpha}\mathcal{L}\left[u\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u\left(x,\frac{t}{2}\right) - u(x, t)\right]. \end{equation} The inverse Laplace transform of Eq. \eqref{eqn-ex2-LT} leads to \begin{equation}\label{eqn-ex2-ILT} u(x,t)=x^2 +\mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left\{u\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u\left(x,\frac{t}{2}\right) - u(x, t)\right\}\right]. \end{equation} Eq. 
\eqref{eqn-ex2-ILT} with the expansion \eqref{eqn-basic} leads to \begin{equation}\label{eqn-ex2-ILT-HPM} \sum\limits_{n = 0}^\infty p^n u_n(x,t)=x^2+p\left[\mathcal{L}^{-1}\left\{\frac{1}{s^\alpha}\mathcal{L}\left( \sum\limits_{k = 0}^{\infty}p^k\sum\limits_{n = 0}^{k} u_n\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{k-n}\left(x,\frac{t}{2}\right) - \sum\limits_{n = 0}^\infty u_n(x, t)\right)\right\}\right]. \end{equation} On comparing the coefficients of like powers of $p$, we get \begin{equation} \label{eqn-ex2-HP-LTM} \begin{split} p^0 : u_0(x,t)&=x^2,\\ p^1 :u_1(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ u_0\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{0}\left(x,\frac{t}{2}\right) - u_0(x, t)\right]\right]\\ &= \frac{x^2 t^\alpha}{\Gamma{(1+\alpha)}}, \\ p^2:u_2(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ u_0\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{1}\left(x,\frac{t}{2}\right) + u_1\(x, \frac{t}{2}\) \times\right.\right.\\ &\left.\left. 
\frac{\partial^2 }{\partial x^2} u_{0}\left(x,\frac{t}{2}\right)- u_1(x, t)\right]\right]= \frac{x^2 t^{2\alpha}(2^{2-\alpha}-1)}{\Gamma{(1+2\alpha)}}\\ p^3:u_3(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ u_0\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{2}\left(x,\frac{t}{2}\right) + u_2\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{0}\left(x,\frac{t}{2}\right)\right]\right]\\ &+ \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ u_1\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{1}\left(x,\frac{t}{2}\right)- u_2(x, t)\right]\right]\\ &=\frac{x^2 t^{3\alpha}}{ \Gamma{(1+3\alpha)}} \left\{1- 2^{2-\alpha} - 2^{2-2\alpha} + 2^{4-3\alpha} + \frac{ \Gamma{(1+2\alpha)}}{\Gamma{(1+\alpha)}^2} ~~ 2^{1+\alpha}\right\}\\ p^4:u_4(x,t)&= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left\{ u_0\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{3}\left(x,\frac{t}{2}\right) + u_2\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{1}\left(x,\frac{t}{2}\right)\right.\right.\\ &+ \left.\left. u_1\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{2}\left(x,\frac{t}{2}\right)+ u_3\(x, \frac{t}{2}\) \frac{\partial^2 }{\partial x^2} u_{0}\left(x,\frac{t}{2}\right)- u_3(x, t)\right\}\right]\\ &= \frac{{x^2 t^{4\alpha}}}{\Gamma{(1+4\alpha)}}\left(1-2^{2-\alpha}- 2^{2-2\alpha}+3\times2^{2-3\alpha}+2^{4-4\alpha} +2^{4-5\alpha}-2^{6-6\alpha}\right) \\ &+\frac{{x^2 t^{4\alpha}}}{\Gamma{(1+4\alpha)}}\left( \frac{(2^{1+4\alpha}-2^{3+2\alpha})\Gamma{(1+2\alpha)}}{\Gamma{(1+\alpha)^2}}+\frac{(2^{2+3\alpha}-2^{4+2\alpha})\Gamma{(1+3\alpha)}}{\Gamma{(1+\alpha)}\Gamma{(1+2\alpha)}} \right)\\ & \qquad \qquad \qquad \vdots \qquad \qquad \qquad \qquad \vdots \end{split} \end{equation} Thus, the solution for Eq. 
\eqref{eqn-ex2} is given by \begin{equation}\label{eqn-ex2-HP-ILT-SOLN} u(x,t)=u_0(x,t)+u_1(x,t)+u_2(x,t)+u_3(x,t)+u_4(x, t)+\ldots, \end{equation} which agrees with the series solution obtained by Sakar et al. \cite{SUE16}. In particular, for $\alpha=1$, the seventh-order solution is obtained as \begin{equation}\label{eqn-ex2-HP-ILT-exact-SOLN} u(x,t)=x^2\left( 1+t+\frac{t^2}{2}+\frac{t^3}{6}+\frac{t^4}{24}+\frac{t^5}{120}+\frac{t^6}{720}+\frac{t^7}{5040}\right), \end{equation} which agrees with the solution obtained by DTM and RDTM \cite{AG11}; the full series converges to the exact solution $u(x, t) = x^2 \exp(t)$. The HPTM solution for $\alpha=1$ is reported in Table \ref{tab2.1}. The surface solution behavior of $u(x,t)$ for the values $\alpha =0.8, 0.9, 1.0$ is depicted in Fig. \ref{fig2.1}, and plots of the solution for $x=1$ at times $t\leq 1$ are depicted in Fig. \ref{fig2.11}. It is found that the proposed HPTM results agree well with the HPM and DTM solutions and approach the exact solution. \begin{figure}[!t] \includegraphics[width=5.30cm,height=5.20 cm]{fig2a=a8} \includegraphics[width=5.30cm,height=5.20 cm]{fig2b=a9} \includegraphics[width=5.530cm,height=5.20 cm]{fig2c=a10} \caption{The surface solution behavior of $u$ of Example \ref{ex2} for (a) $\alpha =0.8$; (b) $\alpha =0.9$; (c) $\alpha =1.0$} \label{fig2.1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=9.30cm,height=6.20 cm]{fig2=2D} \caption{Plots of the HPTM solution $u(x,t)$ of Example \ref{ex2} for $\alpha=0.8, 0.9, 1.0$, $t\in(0,1)$, $x=1$} \label{fig2.11} \end{figure} \begin{table}[t!] 
\caption{Approximate HPTM solution of Example \ref{ex2} with first six terms at $\alpha =1.0$} \label{tab2.1} \centering \begin{tabular}{llllllllll} \toprule $x$ &{}& $t$ &{}& Exact &{}& HPTM &{}& $E_{abs}$ \\ \midrule $0.25$ &{}&$0.25$ &{}& 0.0802516 &{}& 0.0802516 &{}& 7.812108E-10 \\ &{}&$0.50$ &{}& 0.1030451 &{}& 0.1030450 &{}& 1.032903E-07 \\ &{}&$0.75$ &{}& 0.1323125 &{}& 0.1323107 &{}& 1.824464E-06 \\ &{}&$1.00$ &{}& 0.1698926 &{}& 0.1698785 &{}& 1.414206E-05 \\ $0.50$ &{}&$0.25$ &{}& 0.3210064 &{}& 0.3210064 &{}& 3.124843E-09 \\ &{}&$0.50$ &{}& 0.4121803 &{}& 0.4121799 &{}& 4.131611E-07 \\ &{}&$0.75$ &{}& 0.5292500 &{}& 0.5292427 &{}& 7.297854E-06 \\ &{}&$1.00$ &{}& 0.6795705 &{}& 0.6795139 &{}& 5.656823E-05 \\ $0.75$ &{}&$0.25$ &{}& 0.7222643 &{}& 0.7222643 &{}& 7.030897E-09 \\ &{}&$0.50$ &{}& 0.9274057 &{}& 0.9274048 &{}& 9.296126E-07 \\ &{}&$0.75$ &{}& 1.1908130 &{}& 1.1907963 &{}& 1.642017E-05 \\ &{}&$1.00$ &{}& 1.5290340 &{}& 1.5289062 &{}& 1.272785E-04 \\ \bottomrule \end{tabular} \end{table} \end{example} \begin{example}\label{ex3} Consider the initial value TFPDE with proportional delay \cite{AG11,SUE16}: \begin{equation} \label{eqn-ex3} \left\{\begin{split} & \mathcal{D}_t^{\alpha} u(x, t) = \frac{\partial^2 }{\partial x^2} u\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u\(\frac{x}{2}, \frac{t}{2}\)-\frac{1}{8} \frac{\partial }{\partial x} u\(x, t\) - u(x, t), \\ & u(x, 0) = x^2. \end{split} \right. \end{equation} \begin{table}[t!] 
\caption{Approximate HPTM solution of Example \ref{ex3} with first six terms at $\alpha =1.0$} \label{tab3.1} \centering \begin{tabular}{llllllllll} \toprule $x$ &{}& $t$ &{}& Exact &{}& HPTM &{}& $E_{abs}$ \\ \midrule $0.25$ &{}&$0.25$ &{}& 4.867505E-02 &{}& 4.867505E-02 &{}& 7.338727E-10 \\ &{}&$0.50$ &{}& 3.790817E-02 &{}& 3.790826E-02 &{}& 9.114643E-08 \\ &{}&$0.75$ &{}& 2.952291E-02 &{}& 2.952442E-02 &{}& 1.512146E-06 \\ &{}&$1.00$ &{}& 2.299247E-02 &{}& 2.300347E-02 &{}& 1.100715E-05 \\ $0.50$ &{}&$0.25$ &{}& 1.947002E-01 &{}& 1.947002E-01 &{}& 2.935491E-09 \\ &{}&$0.50$ &{}& 1.516327E-01 &{}& 1.516330E-01 &{}& 3.645857E-07 \\ &{}&$0.75$ &{}& 1.180916E-01 &{}& 1.180977E-01 &{}& 6.048582E-06 \\ &{}&$1.00$ &{}& 9.196986E-02 &{}& 9.201389E-02 &{}& 4.402860E-05 \\ $0.75$ &{}&$0.25$ &{}& 4.380754E-01 &{}& 4.380754E-01 &{}& 6.604854E-09 \\ &{}&$0.50$ &{}& 3.411735E-01 &{}& 3.411743E-01 &{}& 8.203179E-07 \\ &{}&$0.75$ &{}& 2.657062E-01 &{}& 2.657198E-01 &{}& 1.360931E-05 \\ &{}&$1.00$ &{}& 2.069322E-01 &{}& 2.070313E-01 &{}& 9.906434E-05 \\ \bottomrule \end{tabular} \end{table} Taking the Laplace transform of Eq. 
\eqref{eqn-ex3}, we get \begin{equation}\label{eqn-ex3-LT} \mathcal{U}(x, s)=\frac{x^2}{s} +\frac{1}{s^\alpha}\mathcal{L}\left[\frac{\partial^2 }{\partial x^2} u\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u\(\frac{x}{2}, \frac{t}{2}\)-\frac{1}{8} \frac{\partial }{\partial x} u\(x, t\) - u(x, t)\right]. \end{equation} The inverse Laplace transform leads to \begin{equation}\label{eqn-ex3-ILT} u(x,t)=x^2 +\mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left\{\frac{\partial^2 }{\partial x^2} u\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u\(\frac{x}{2}, \frac{t}{2}\)-\frac{1}{8} \frac{\partial }{\partial x} u\(x, t\) - u(x, t)\right\}\right]. \end{equation} Applying the homotopy perturbation transform method to Eq. \eqref{eqn-ex3-ILT} leads to \begin{equation}\label{eqn-ex3-ILT-HPM} \begin{split} &\sum\limits_{n = 0}^\infty p^n u_n(x,t)=x^2+\\ & p\left(\mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \sum\limits_{k = 0}^{\infty}p^k\sum\limits_{n = 0}^{k}\frac{\partial^2 u_n }{\partial x^2}\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial u_{k-n}}{\partial x} \(\frac{x}{2}, \frac{t}{2}\) -\frac{1}{8} \frac{\partial }{\partial x}\sum\limits_{n = 0}^\infty u_n\(x, t\) - \sum\limits_{n = 0}^\infty u_n(x, t)\right]\right]\right). \end{split} \end{equation} On comparing the coefficients of like powers of $p$, we get \begin{equation} \label{eqn-ex3-HP-LTM} \begin{split} p^0 :& u_0(x,t)=x^2,\\ p^1 :&u_1(x,t)= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2 }{\partial x^2} u_0\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u_{0}\(\frac{x}{2}, \frac{t}{2}\) -\frac{1}{8} \frac{\partial }{\partial x} u_0\(x, t\) -u_0(x, t)\right]\right] \\&= \frac{-x^2 t^\alpha}{\Gamma{(1+\alpha)}}, \\ p^2:&u_2(x,t)= \mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2 }{\partial x^2} u_0\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u_{1}\(\frac{x}{2}, 
\frac{t}{2}\)+u_1\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u_{0}\(\frac{x}{2}, \frac{t}{2}\) \right.\right.\\ &\left.\left. -\frac{1}{8} \frac{\partial }{\partial x} u_1\(x, t\) -u_1(x, t)\right]\right]\\ &= t^{2\alpha}x\frac{(2^{1-\alpha}+2^{2}x+1)}{2\Gamma{(1+2\alpha)}}\\ p^3:&u_3(x,t)= L^{-1}\left[\frac{1}{s^\alpha}L\left[ \frac{\partial^2 }{\partial x^2} u_0\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u_{2}\(\frac{x}{2}, \frac{t}{2}\)+\frac{\partial^2 }{\partial x^2} u_1\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u_{1}\(\frac{x}{2}, \frac{t}{2}\)\right.\right.\\ &+ \left.\left. \frac{\partial^2 }{\partial x^2} u_2\left(\frac{x}{2},\frac{t}{2}\right) ~\frac{\partial }{\partial x} u_{0}\(\frac{x}{2}, \frac{t}{2}\) -\frac{1}{8} \frac{\partial }{\partial x} u_2\(x, t\) -u_2(x, t)\right]\right]\\ &=\frac{ t^{3\alpha}}{ 2\Gamma{(1+3\alpha)}} \left\{-1-2x^2-2^4+ 2^{-\alpha} + 2^{-2\alpha} + 2^{-3-\alpha}+2^{-3-2\alpha}+2^{-2-3\alpha}\right. \\ &\left.+2^{-1-2\alpha}x\frac{\Gamma(1+2\alpha)}{\Gamma(1+\alpha)^2} \right\}\\ p^4:&u_4(x,t)=\mathcal{L}^{-1}\left[\frac{1}{s^\alpha}\mathcal{L}\left[ \frac{\partial^2 u_0}{\partial x^2}\left(\frac{x}{2},\frac{t}{2}\right) \frac{\partial u_{3}}{\partial x}\(\frac{x}{2}, \frac{t}{2}\)+\frac{\partial^2 u_1}{\partial x^2}\left(\frac{x}{2},\frac{t}{2}\right)\frac{\partial u_{2}}{\partial x}\(\frac{x}{2}, \frac{t}{2}\)\right.\right.\\ &+\left.\left. \frac{\partial^2 u_2}{\partial x^2} \left(\frac{x}{2},\frac{t}{2}\right) \frac{\partial u_{1}}{\partial x} \(\frac{x}{2}, \frac{t}{2}\)+\frac{\partial^2 u_3}{\partial x^2}\left(\frac{x}{2},\frac{t}{2}\right)\frac{\partial u_{0}}{\partial x} \(\frac{x}{2}, \frac{t}{2}\)\right.\right.\\ &\left.\left. 
-\frac{1}{8} \frac{\partial }{\partial x} u_3\(x, t\) -u_3(x, t)\right]\right]\\ &=\frac{ t^{4\alpha}}{ \Gamma{(1+4\alpha)}} \left\{(3\times2^{-5}-2^{-3-\alpha}-2^{-3-2\alpha}+2^{-3-4\alpha}+2^{-3-5\alpha})\right\}\\ &+\frac{ t^{4\alpha}}{ \Gamma{(1+4\alpha)}}\left\{((-2^{-2-3\alpha}-2^{-2-2\alpha}- 2^{-2-\alpha}+3\times2^{-3})x+x^2)\right\}\\ & +\frac{ t^{4\alpha}}{ \Gamma{(1+4\alpha)}}\left\{ -(-2^{-4-\alpha}+2^{-5-2\alpha}+2^{-2-2\alpha}x)\frac{\Gamma(1+2\alpha)}{\Gamma(1+\alpha)^2}\right\}\\ &-\frac{ t^{4\alpha}}{ \Gamma{(1+4\alpha)}}\left\{ 2^{-4-4\alpha}(-2+2^\alpha+2^{3+\alpha}x)\frac{\Gamma(1+3\alpha)} {\Gamma(1+2\alpha)\times \Gamma(1+4\alpha)} \right\}\\ & \qquad \qquad \qquad \vdots \qquad \qquad \qquad \qquad \vdots \end{split} \end{equation} The required solution of Eq. \eqref{eqn-ex3} is \begin{equation}\label{eqn-ex3-HP-ILT-SOLN} u(x,t)=u_0(x,t)+u_1(x,t)+u_2(x,t)+u_3(x,t)+\ldots \end{equation} which converges to the exact solution and coincides with the solution obtained by Sarkar et al.~\cite{SUE16}. In particular, for $\alpha=1$, the seventh-order solution is obtained as \begin{equation} \begin{split} u(x, t)&= x^2 \left(1-t + \frac{t^2}{2}-\frac{t^3}{6}+\frac{t^4}{24}-\frac{t^5}{120}+\frac{t^6}{720}-\frac{t^7}{5040}+\ldots \right) \end{split} \end{equation} which is the same as that obtained by DTM and RDTM \cite{AG11}, and is the series form of the exact solution $u(x, t) = x^2 \exp(-t)$. The HPTM solution for $\alpha =1.0$ is reported in Table \ref{tab3.1}. The solution behavior of $u$ for different values of $\alpha =0.8, 0.9, 1.0$ is depicted in Fig. \ref{fig3.1}, while the plots for $x=1$ at different time levels $t\leq 1$ are depicted in Fig. \ref{fig3.11}. This shows that the HPTM solutions agree well with the HPM and DTM solutions, and approach the exact solution.
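For $\alpha=1$ the operator $\mathcal{L}^{-1}\left[s^{-\alpha}\mathcal{L}[\,\cdot\,]\right]$ reduces to an ordinary time integral, so the recursion above can be checked symbolically. The following sketch is our own illustration (not part of the original computation); it assumes the $x$-derivatives act on the composed functions $u_n(x/2, t/2)$:

```python
import sympy as sp

x, t, s = sp.symbols('x t s', real=True)

def nonlinear(u, k):
    """Convolution term of the HPM expansion:
    sum_{n=0}^{k} d2/dx2[u_n(x/2,t/2)] * d/dx[u_{k-n}(x/2,t/2)]."""
    total = sp.S(0)
    for n in range(k + 1):
        a = sp.diff(u[n].subs({x: x/2, t: t/2}), x, 2)
        b = sp.diff(u[k - n].subs({x: x/2, t: t/2}), x)
        total += a * b
    return total

u = [x**2]                       # u_0(x, t) = x^2 (initial condition)
for k in range(3):
    integrand = nonlinear(u, k) - sp.Rational(1, 8)*sp.diff(u[k], x) - u[k]
    # For alpha = 1 the Caputo-type integral is a plain time integral.
    u.append(sp.integrate(integrand.subs(t, s), (s, 0, t)))

print([sp.expand(term) for term in u])  # partial sums of x^2 * exp(-t)
```

The successive terms reproduce the Taylor coefficients of $x^2 \exp(-t)$, in agreement with the series above for $\alpha = 1$.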
\begin{figure}[!t] \includegraphics[width=5.30cm,height=5.20 cm]{fig3a=a8} \includegraphics[width=5.30cm,height=5.20 cm]{fig3b=a9} \includegraphics[width=5.30cm,height=5.20 cm]{fig3c=a10} \caption{The solution behavior of the HPTM solution $u$ of Example \ref{ex3} for (a) $\alpha =0.8$; (b) $\alpha =0.9$; (c) $\alpha =1.0$.} \label{fig3.1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=9.30cm,height=6.20 cm]{fig3=2D} \caption{Plots of the HPTM solution $u(x,t)$ of Example \ref{ex3} for $\alpha=0.8, 0.9, 1.0$, $t\in(0,1)$, $x=1$.} \label{fig3.11} \end{figure} \end{example} \section{Conclusion}\label{sec-conclu} In this paper, the \texttt{homotopy perturbation transform method} is successfully employed for the numerical computation of initial-value problems for the autonomous time-fractional model of TFPDE with proportional delay, where the fractional derivative is taken in the Caputo sense. Three test problems are carried out in order to validate and illustrate the efficiency of the method. The proposed solutions agree excellently with HPM \cite{SUE16} and DTM \cite{AG11}. The proposed approximate series solutions are obtained without any discretization, perturbation, or restrictive conditions, and they converge very fast. Moreover, the calculations show that the described method requires a smaller amount of computation in comparison to HPM \cite{SUE16} and DTM \cite{AG11}; this small computational cost is the main strength of the scheme. \section*{Acknowledgment} \noindent The authors are grateful to the anonymous referees for their time, effort, and extensive comments, which improved the quality of the paper. Pramod Kumar is thankful to Babasaheb Bhimrao Ambedkar University, Lucknow, INDIA for financial assistance to carry out the work.
\section{Training RBMs on MNIST} \subsection{Dataset preparation and initial conditions} \begin{itemize} \item In MNIST, each pixel has a value between 0 and 255. We binarize it by thresholding $\geq 128$. The $28 \times 28$ binary images are flattened to an $N = 784$-dimensional binary vector. \item The dataset is split into a training set (60,000 instances) and a test set (10,000 instances). \item The weights $w_{i\mu}$ are randomly initialized at $\pm W$, where $W = \sqrt{\frac{0.1}{N}}$; this choice corresponds to initial temperature and weight sparsity: $T(0) = 10$ and $p(0) = 1$ (see Section III). \item The initial field values are $g_i^0 = \log \left[ \frac{\langle v_i\rangle^{MNIST}}{1-\langle v_i\rangle^{MNIST}} \right]$, where $\langle v_i\rangle^{MNIST}$ denotes the average of pixel $i$ over the training data. \item For ReLU, the thresholds $\theta_\mu$ are all initially set to $0$. \end{itemize} \subsection{Learning algorithms} An RBM is associated with a probability distribution $P[{\bf v},{\bf h}] = \frac{e^{-E[{\bf v},{\bf h}]}}{Z}$, where the energy $E$ is defined in the main text. The marginal distribution, $P[{\bf v}] = \int \prod_\mu dh_\mu P[{\bf v},{\bf h}]$, is fitted to the data by likelihood maximization. Given data instances ${\bf x}^{r}, r \in \{1,\dots,D\}$, the log-likelihood is: \begin{equation} \log \mathcal{L}_{{\bf W}, {\bf g}, \bm{\theta}} = \frac{1}{D} \sum_{r=1}^D \log \left[ P[{\bf x}^r | {\bf W}, {\bf g}, \bm{\theta}] \right] \end{equation} where ${\bf W}$ is the matrix of weights, ${\bf g}$ is the vector of visible-layer fields and $\bm{\theta}$ is the vector of hidden-unit thresholds. Likelihood maximization is implemented by stochastic gradient descent, with the difficulty that extensive Monte Carlo simulations are required to compute the gradient \cite{igel,training1}. For the RBM of Fig.
2 in the main text, we used Persistent Contrastive Divergence \cite{training2} with \begin{itemize} \item mini-batch size of 20 \item 100 persistent chains \item 1 Gibbs step between each update \item 200 epochs (600 000 updates in total) \item Initial learning rate of $\lambda_i = 5 \; 10^{-3}$, decaying geometrically (decay starts after 60 epochs) to $\lambda_f = 5 \;10^{-4}$ \end{itemize} PCD is known to be inaccurate toward the end of learning, because the parameters evolve too fast with respect to the mixing rate of the Markov chains. The regularized RBMs of Fig. 4(b,c) in the main text were trained with a more efficient algorithm, a variant of Adaptive Parallel Tempering \cite{training3,long}, with \begin{itemize} \item mini-batch size of 100 \item 100 persistent chains \item 10 replicas \item 1 Gibbs step between each update \item 150 epochs (90 000 updates in total) \item Initial learning rate of $\lambda_i = 10^{-2}$, decaying geometrically (decay starts after 90 epochs) to $\lambda_f = 10^{-4}$ \end{itemize} \subsection{Monitoring the learning} We monitor the evolution of the likelihood and of the pseudo-likelihood of the train and test data sets throughout learning, see Fig. 1(a). The choice of parameters made learning slow, but ensured that the likelihood increased steadily throughout training. The likelihood requires approximate computation of the model partition function; Annealed Importance Sampling \cite{AIS} was used. Parameters: $n_\beta = 10000$ inverse temperatures with an adaptive spacing \cite{long}, $n_{runs} =1$. Additionally, we can look at the probability landscape $P_W(v)$ throughout learning. For each of the 70k MNIST samples, a gradient ascent on $P_W(v)$ is performed until convergence to a local maximum; the number of distinct local maxima of $P_W(v)$ and the distance to the original sample are measured.
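In outline, one PCD update combines a positive phase on a mini-batch with a negative phase on the persistent chains. The sketch below is our own minimal Bernoulli-Bernoulli illustration (shapes and rates are placeholders; the RBMs above use ReLU hidden units and the schedules listed):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 784, 100            # visible / hidden layer sizes
B, C = 20, 100             # mini-batch size, number of persistent chains

W = rng.choice([-1.0, 1.0], size=(N, M)) * np.sqrt(0.1 / N)
g = np.zeros(N)            # visible fields
c = np.zeros(M)            # hidden fields (Bernoulli stand-in for thresholds)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v):
    """h ~ P[h|v] then v ~ P[v|h], using conditional independence."""
    h = (rng.random((len(v), M)) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random((len(h), N)) < sigmoid(h @ W.T + g)).astype(float)
    return v, h

def pcd_gradient(batch, chains):
    """Positive phase on data, negative phase on the persistent chains
    (1 Gibbs step per update, as in the settings above)."""
    ph_data = sigmoid(batch @ W + c)            # E[h|v] on the data
    chains, h_model = gibbs_step(chains)
    grad_W = batch.T @ ph_data / len(batch) - chains.T @ h_model / len(chains)
    return grad_W, chains

chains = (rng.random((C, N)) < 0.5).astype(float)
batch = (rng.random((B, N)) < 0.5).astype(float)  # stand-in for MNIST digits
grad_W, chains = pcd_gradient(batch, chains)
W += 5e-3 * grad_W        # gradient ascent on the log-likelihood
```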
As training proceeds, more local maxima appear, and they get closer to the training samples; local maxima also appear close to the test set, which shows that the RBM generalizes well. \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.35]{supplementary_figure1a.png} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.35]{supplementary_figure1b.png} \end{subfigure} \caption{ {\bf (a)} Evolution throughout training of the data loglikelihood (left scale) and pseudo-loglikelihood (right scale) computed over the training and test sets. {\bf (b)} Evolution of the number of distinct local maxima of $P_W(v)$ (left scale) and the distance to the original sample (right scale, for train and test set) are displayed.} \end{figure} \subsection{Controling weight sparsity with regularization} To control the weight sparsity $p$, a regularization penalty is added to the log likelihood $\log \mathcal{L}_{{\bf W}, {\bf g}, \bm{\theta}}$: \begin{equation} \begin{split} \text{Cost} &= -\log \mathcal{L}_{{\bf W}, {\bf g}, \bm{\theta}}+ L^{(x)} \\ L^{(x)} &= \frac{\lambda_x}{x} \sum_\mu \left[\sum_i |w_{i \mu} | \right]^x \\ -\frac{\partial \text{Cost}}{\partial w_{i \mu}} &= \frac{\partial}{\partial w_{i \mu}} \log \mathcal{L}_{{\bf W}, {\bf g}, \bm{\theta}} - \lambda_x \left[ \sum_j |w_{j \mu}| \right]^{x-1} \text{sign}(w_{i \mu}) \end{split} \end{equation} The case $x = 1$ is the usual $L_1$ penalty, and performing gradient descent with $\lambda_1>0$ is known to reduce the number of non-zero weights $w_{i\mu}$. However, experiments show that the outcome is inhomogeneous with respect to the hidden units: some hidden units are weakly affected by the penalty, whereas others end up completely disconnected from the visible layer, making them useless, see Fig. 2. To maintain homogeneity among the hidden units, we pick $x = 2$ or $x=3$.
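The extra gradient term of the $L^{(x)}$ penalty above is one line of code per hidden unit (a sketch with arbitrary shapes, columns indexing the hidden units $\mu$):

```python
import numpy as np

def penalty_gradient(W, lam, x):
    """Gradient of L^(x) = (lam/x) * sum_mu (sum_i |w_imu|)^x w.r.t. W.
    The per-unit factor (sum_i |w_imu|)^(x-1) makes the L1-like
    decay rate adaptive to each hidden unit."""
    col_norms = np.abs(W).sum(axis=0)              # sum_i |w_imu|, one per mu
    return lam * col_norms**(x - 1) * np.sign(W)   # broadcast over rows

W = np.array([[0.5, -2.0],
              [-0.5, 1.0]])
g1 = penalty_gradient(W, lam=1.0, x=1)   # plain L1: lam * sign(W)
g2 = penalty_gradient(W, lam=1.0, x=2)   # decay scales with column norm
```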
As can be seen from the expression of the gradient, it is equivalent to a usual $L_1$ penalty, but with a decay rate adaptive to each hidden unit: hidden units strongly (resp. weakly) coupled to the visible layer (large $\sum_i |w_{i \mu}|$) are strongly (resp. weakly) regularized, thus increasing the homogeneity among hidden units. \begin{figure} \includegraphics[scale=0.7]{supplementary_figure2.png} \caption{Subset of 12 weight features produced by training on MNIST, regularized with $L^1, \lambda_1 = 10^{-3}$ (top panel), and $L^2, \lambda_2 = 3 \; 10^{-5}$ (bottom panel). Both have overall sparsity $p \sim 0.036$, but the latter is more homogeneously distributed across hidden units.} \end{figure} \section{Sampling from RBMs} RBMs can be sampled by Markov Chain Monte Carlo. Due to the conditional independence property, Gibbs sampling can be performed by alternately sampling $\bf h$ from $P[{\bf h}|{\bf v}]$, then $\bf v$ from $P[{\bf v}|{\bf h}]$ \cite{training1,igel}. \subsection{MCMC Videos} The two videos in the Supplementary Material visualize MCMC runs from RBMs trained on MNIST with Bernoulli, Gaussian, and ReLU hidden units. Each square depicts a Markov chain started from a random initial condition. 20 Gibbs steps are performed between each image, and each chain is 500 images long. See Fig. 3 for a snapshot. \begin{figure} \includegraphics[scale=0.7]{supplementary_figure3.png} \caption{Six independent Monte Carlo Markov chain realizations for an RBM trained on MNIST, extracted from the attached videos; see text.} \end{figure} \subsection{Estimating thermal averages with MCMC} Sampling at thermal equilibrium is required to estimate the values of order parameters ($L$, $S$, $q$, $\tilde{m}$,...). Since RBMs trained on MNIST effectively operate at low temperature (entropy of 0.1 bits/pixel), the MCMC mixing rate is poor, and long simulations would be required for each of the $\sim 100$ RBMs trained.
To overcome this issue, we use an Adaptive Parallel Tempering (also known as Replica Exchange) sampling algorithm, with 10 replicas \cite{training3,long}. Observables are averaged over 100 independent Markov chains, each being first thermalized for 500 Gibbs updates, then run for another 100 Gibbs updates (10K samples in total). \begin{figure} \includegraphics[scale=0.7]{supplementary_figure4.png} \caption{Ten Monte Carlo Markov chain realizations at different inverse temperatures, coupled by replica exchange. The plots show the conditional expectations of visible units, $\mathbb{E} \left[ {\bf v} | {\bf h} \right]$, for thermalized hidden-unit activities, ${\bf h}$.} \end{figure} \subsection{Estimating order parameters of R-RBM with zero temperature MCMC} R-RBM are studied analytically in the zero-temperature limit; this limit can be simulated as well. The energy $E[{\bf v},{\bf h}]$ of a configuration ${\bf v},{\bf h}$ is given by Eqn. (1) in the main text, and defines the Gibbs distribution $P^\beta[{\bf v},{\bf h}] = \exp(- \beta E[{\bf v},{\bf h}])/Z(\beta)$, where $\beta = \frac{1}{T}$ is the inverse temperature. As $\beta$ increases, $P^\beta[{\bf v},{\bf h}]$ is more and more peaked around the minimum of $E$. In the limit $\beta \rightarrow \infty$, a dynamical Gibbs step becomes deterministic: \begin{equation} \begin{split} &h_\mu \leftarrow \arg\min_h \left[ U_\mu(h) - h \sum_i w_{\mu i} v_i \right] \equiv \Phi_\mu \left[ \sum_i w_{i\mu } v_i \right] \\ &v_i \leftarrow \arg\min_v \left[ - g_i v - v \sum_\mu w_{\mu i} h_\mu \right] \equiv \Theta \left[ g_i + \sum_\mu w_{i\mu } h_\mu \right]\ , \end{split} \end{equation} where $\Theta$ is the Heaviside function, and $\Phi_\mu$ is the response function (Fig.~1(b) in the main text). Starting from a configuration, such a zero-temperature Markov chain runs until convergence to a local minimum of $E$.
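For ReLU hidden units, with $U_\mu(h)=\frac{h_\mu^2}{2}+\theta_\mu h_\mu$ on $h_\mu \geq 0$, the first $\arg\min$ is simply $\Phi_\mu(I)=\max(0, I-\theta_\mu)$, so the deterministic dynamics can be sketched as follows (our own illustration; names are hypothetical):

```python
import numpy as np

def zero_T_step(v, W, g, theta):
    """One deterministic (T -> 0) Gibbs sweep for a ReLU RBM:
    h_mu <- max(0, I_mu - theta_mu) with I_mu = sum_i w_imu v_i,
    v_i  <- Heaviside(g_i + sum_mu w_imu h_mu)."""
    h = np.maximum(0.0, v @ W - theta)
    v = (g + h @ W.T > 0).astype(float)
    return v, h

def descend(v, W, g, theta, max_iter=100):
    """Iterate the deterministic sweep until the visible configuration
    stops changing, i.e. a local minimum of E[v, h] is reached."""
    for _ in range(max_iter):
        v_new, h = zero_T_step(v, W, g, theta)
        if np.array_equal(v_new, v):
            return v_new, h
        v = v_new
    return v, h
```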
In practice, to make finite-size corrections to our mean-field theory small, we considered RBMs with up to $N \sim 10^4$ visible units. Such large R-RBM were simulated using an NVIDIA Tesla K40 GPU, programmed with Theano \cite{theano}. \subsection{Finding local maxima of $P[{\bf v}]$} Given an RBM with energy defined as above, the marginal $P[ {\bf v}]$ is characterized by a Gibbs distribution and a free energy: \begin{equation} \begin{split} P[{\bf v}] =\int \prod_\mu dh_\mu \frac{1}{Z} e^{-E[{\bf v}, {\bf h}]} = \frac{1}{Z} e^{-F[{\bf v}]} \\ F[{\bf v}] = - \sum_i g_i v_i + \sum_\mu U_\mu^{eff}\left( \sum _i w_{i\mu} v_i \right) \\ U_\mu^{eff}(x) = - \log \left[ \int dh \; e^{-U_\mu(h) + x h} \right] \end{split} \end{equation} In order to find the local maxima of $P[{\bf v}]$ (\textit{i.e.}, the local minima of $F[{\bf v}]$), we modify it by introducing an inverse temperature $\beta$: \begin{equation} \begin{split} P^\beta[{\bf v}] =\frac{1}{Z(\beta)} e^{-\beta F[{\bf v}]} \\ Z(\beta) = \sum_{\bf v} e^{-\beta F[{\bf v}]} \end{split} \end{equation} Sampling from this distribution at $\beta \neq 1$ is not trivial, as $P^\beta [{\bf v}]$ is not the marginal distribution of $P^\beta[{\bf v},{\bf h}]$. While sampling from $P^1[{\bf v}]$ is easy, as one can simply sample from the joint distribution $P^1[{\bf v}, {\bf h}]$ using Gibbs steps, this is not true for $\beta \ne 1$; in particular, the local maxima of $P^\beta[{\bf v},{\bf h}]$ are not equivalent to those of $P^\beta[{\bf v}]$. We notice however that when $\beta \geq 1$ is an integer, $P^\beta [{\bf v}]$ can be interpreted as the $\beta = 1$ distribution of another RBM $P'^1[{\bf v}]$ with $\beta M$ hidden units (each hidden unit is replicated $\beta$ times) and visible fields $g' = \beta g$.
Sampling from such an RBM can be done as follows: \begin{itemize} \item Compute the hidden layer inputs $\sum _iw_{i\mu} v_i$ \item Sample independently $\beta$ replicas $h_\mu^r$ of $h_\mu$ from $P^1[ h_\mu | {\bf v}]$ \item Compute the visible layer inputs $I_i =\sum_{r=1}^\beta \sum _\mu w_{i\mu} h^r_\mu $ \item Sample $v_i$ from the Bernoulli distribution $Bern \left[ \beta ( g_i + \frac{1}{\beta}I_i ) \right] $ \end{itemize} When $\beta \rightarrow \infty$, $\frac{1}{\beta}\sum_{r=1}^\beta h^r_\mu$ coincides with the conditional average of ${\bf h}$ given ${\bf v}$, $\mathbb{E}\left[ {\bf h} | {\bf v} \right]$. Therefore, the zero-temperature Gibbs step for the free energy is equivalent to: \begin{equation} \begin{split} &h_\mu \leftarrow \mathbb{E}\left[ h_\mu | v \right] \\ &v_i \leftarrow \Theta \left[ g_i + \sum_\mu w_{i\mu} h_\mu \right] \end{split} \end{equation} \section{Numerical proxies for control and order parameters} Several control and order parameters are well defined for R-RBM in the thermodynamical limit, but not for typical RBMs trained on data. For R-RBM instances, the average weight sparsity $p$ is well defined because the weights take only three distinct values $\{-\frac{1}{\sqrt{N}},0,\frac{1}{\sqrt{N}} \}$, but for RBMs trained on data, the weights $w_{i\mu}$ are never exactly equal to zero. Similarly, the number of strongly activated hidden units $L$ is well-defined for R-RBM in the thermodynamic limit $N \rightarrow \infty$ because their activity scales as $\sqrt{N}$; but in general, all hidden units have finite activation. Proxies are therefore required to compare theoretical and numerical results. We consider ``consistent'' proxies, giving back (in the large size limit) the original parameters for RBMs drawn from the R-RBM ensemble. \subsection{Participation Ratios $PR$} Participation ratios are used to estimate numbers of nonzero components in a vector, while avoiding the use of arbitrary thresholds.
The Participation Ratio $(PR_a)$ of a vector ${\bf x}=\{x_i \}$ is $$PR_a(\bf x) = \frac{(\sum_{i} |x_i|^a)^2}{\sum_{i} |x_i|^{2a} }$$ If $\bf x$ has $K$ nonzero and equal (in modulus) components, PR is equal to $K$ for any $a$. In practice we use the values $a=2$ and 3: the higher $a$ is, the more small components are discounted against strong components in $\bf x$. \subsection{Number $L$ of active hidden units} In both numerical simulations of R-RBM and in RBMs trained on MNIST, we estimate $L$, for a given hidden-unit configuration $\bf{h}$, through $$\hat{L} = PR_3(\bf{h})$$ To understand the choice $a=3$, consider a typical activation configuration $\bf{h}$ for an R-RBM: \begin{equation} h_\mu = \left\{ \begin{array}{c c c} m \sqrt{N} & \text{if } &1\le \mu\le L \ , \\ \sqrt{r}\; x_\mu & \text{if }& L+1\le \mu \le M\ , \end{array} \right. \end{equation} where the magnetization $m$ and mean square activity $r$ are $\mathcal{O}(1)$, and $x_\mu$ are random variables with zero mean, and even moments of the order of unity. The first $L$ hidden units are strongly activated ($\mathcal{O}(\sqrt{N})$ activity), whereas the remaining $M-L$ others are not (activations of the order of 1). Here, we assume $L$ to be finite as $N\to\infty$. One computes: \begin{equation} \begin{split} PR_2(h) \sim \frac{(L m^2 N + (N-L) r)^2}{L m^4 N^2 + (N-L) r^2} = L \times \frac{(1+\frac{N-L}{N} \frac{r}{L m^2})^2}{1 + \frac{N-L}{N^2} \frac{r^2}{Lm^4}}\underset{N \rightarrow \infty}{\longrightarrow} L (1 + \frac{r}{L m^2})^2 \ ,\\ PR_3(h) \sim \frac{(L m^3 N^{3/2} + (N-L) r^{3/2})^2}{L m^6 N^{3} + (N-L) r^3} = L \times \frac{(1+\frac{N-L}{N^{3/2}} \frac{r^{3/2}}{L m^3})^2}{1 + \frac{N-L}{N^3} \frac{r^3}{Lm^6}} \underset{N \rightarrow \infty}{\longrightarrow} L\ .
\end{split} \end{equation} Hence, choosing the coefficient $a=3$ ensures that the participation ratio (a) does not take into account the weak activations in the thermodynamical limit, and (b) converges to the true value $L$ if all magnetizations are equal. \subsection{Normalized Magnetizations $\tilde{m}$} Given an RBM and a visible layer configuration, we define the normalized magnetization of hidden unit $\mu$ as the normalized overlap between the configuration and the weights attached to the unit: $$\tilde{m}_\mu = \frac{\sum_i (2 v_i -1) w_{i\mu}}{\sum_i | w_{i\mu} | } \in [-1,1]$$ This definition is consistent with the Hopfield model. For R-RBM, we also have, in the thermodynamical limit, $\tilde{m}_\mu = \frac{2I_\mu}{p \sqrt{N}} $, where $I_\mu$ is the input received by the hidden unit from the visible layer; $\tilde{m}_\mu$ is $\mathcal{O}(1)$ for strongly activated hidden units, and $\mathcal{O}(\frac{1}{\sqrt{N}} )$ for the others. For a given configuration $\bf{v}$, with $\hat{L}$ activated hidden units, the normalized magnetization of the activated hidden units $\tilde{m} = \frac{m}{p/2 }$ can be estimated as the average of the $\hat{L}$ highest magnetizations $\tilde{m}_\mu$. \subsection{Weight sparsity $p$} A natural way to estimate the fraction of non-zero weights $w_{i\mu}$ would be to count the number of weights with absolute value above some threshold $t$. However, there is no simple satisfactory choice for $t$. Indeed, the fraction of non-zero weights should not depend on the scale of the weights, {\em i.e.}, it should be invariant under the global rescaling transformation $\{w_{i\mu}\} \rightarrow \{\lambda\, w_{i\mu}\}$. As the scale of the weights varies from RBM to RBM and, for each RBM, across training, it appears difficult to select an appropriate value for $t$. A possibility would be to use a threshold adapted to each RBM, {\em e.g.}, $t \propto \kappa \sqrt{\frac{W_2}{M}}$, where $\kappa$ would be some small number.
Our experiments show that it is not accurate enough, due to the scale disparities across the hidden-unit weight vectors ${\bf w}_\mu$. Rather than adapting thresholds to each hidden unit of each RBM, we use Participation Ratios, which naturally enjoy the scale invariance property. We estimate the fraction of nonzero weights through $$ \hat{p} = \frac{1}{MN} \sum_\mu PR_2({\bf w}_\mu )$$ For R-RBM with $w_{i \mu} \in \{-W_0, 0, W_0\}$ with corresponding probabilities $\{\frac p2 , 1-p,\frac p2\}$, the estimator is consistent: $\hat{p} = p$. \subsection{Weight heterogeneities} As seen from the features of Fig.~2 in the main text, not all visible units are equally connected to the hidden layer. To better capture this effect, one can study R-RBM with any arbitrary distribution of $p_i$. Analogously to the homogeneous case, a high-sparsity limit is obtained when the average sparsity, $p=\frac 1N \sum _i p_i$, vanishes. We define the distribution of the ratios $\tilde p_i=\frac{p_i}{p}$ in the $p\to 0$ limit. In practice the ratios are estimated through \begin{equation} \tilde p_i = \frac{\sum_\mu w_{i\mu}^2}{\frac{1}{N} \sum_{j,\mu} w_{j\mu}^2}\ . \end{equation} For a heterogeneous R-RBM, we have consistently $\tilde p_i = \frac{\hat p_i}{p} = \frac{p_i}{p}$. Looking at the histogram of values of $\tilde p_i$ across all RBMs inferred on MNIST, we find a non-negligible spread around one, see Fig.~\ref{distpt}. We also display for each visible unit $i$ the average of $\tilde p_i$ across all RBMs inferred; we can see that the visible units at the border are indeed the least connected (smaller $\tilde p_i$), whereas the ones at the center are strongly connected (larger $\tilde p_i$). \begin{figure} \includegraphics[scale=0.7]{supplementary_figure6.png} \caption{(a) Histogram of $\tilde p_i = \frac{p_i}{p}$ values, across all visible units and RBMs inferred on MNIST.
(b) Average across all RBMs of $\tilde p_i =\frac{p_i}{p}$, for each visible unit.} \label{distpt} \end{figure} \subsection{Effective Temperature $T$} \begin{figure} \includegraphics[scale=0.6]{supplementary_figure5.png} \caption{Conditional means $\mathbb{E}\left[{\bf v} | {\bf h} \right]$ for two hidden-unit configurations sampled at equilibrium. Most pixels $v_i$ are frozen, with $\mathbb{E}\left[v_i | {\bf h} \right] \in \{0,1\}$.} \end{figure} Although RBM distributions are always defined at temperature $T=1$, the effective temperature need not be $1$. This is very much like in the Ising model: the behavior of the system depends on an effective temperature $\hat{T} = \frac{T}{J}$, where $J$ is the coupling strength; a low-effective-temperature phase corresponds to high values of $J$. For ReLU RBMs, the probability distribution of configurations at temperature $T$ is defined as: \begin{equation} P_{{\bf w}}[{\bf v},{\bf h}]= e^{-\frac{E[{\bf v},{\bf h}]}{T}} \quad \text{with}\quad \frac{E[{\bf v},{\bf h}]}{T} = -\sum_i \frac{g_i}{T} v_i + \sum_\mu \left( \frac{h_\mu^2}{2 T} + \frac{h_\mu \theta_\mu}{T} \right) - \sum_{i,\mu} \frac{w_{i \mu}}{T} v_i h_\mu\ . \end{equation} Let ${\bf \bar{h}} = \frac{{\bf h}}{\sqrt{T}}$. The probability can be rewritten as $P[{\bf v}, {\bf \bar{h}}]=e^{-\bar{E}[{\bf v},{\bf \bar{h}}]}$ with \begin{equation} \bar{E}({\bf v},{\bf \bar{h}}) = -\sum_i \frac{g_i}{T} v_i + \sum_\mu \left( \frac{\bar{h}_\mu^2}{2} + \bar{h}_\mu \frac{\theta_\mu}{\sqrt{T}} \right) - \sum_{i,\mu} \frac{w_{i \mu}}{\sqrt{T}} v_i \bar{h}_\mu\ . \end{equation} Since the marginal $P[\bf v]$ is not affected by the change of variable, a ReLU RBM at temperature $T$ is therefore equivalent to another ReLU RBM at temperature $T=1$, with new fields, thresholds and weights: $\bar{\bf{g}} = \frac{\bf{g}}{T}$, $\bar{\bf{\theta}} = \frac{\bf{\theta}}{\sqrt{T}}$, $\bar{\bf{w}} = \frac{\bf{w}}{\sqrt{T}}$.
Therefore, changing the temperature is equivalent to rescaling the parameters, and in turn, the effective temperature of a given RBM can be deduced from the amplitude of its weights. For an R-RBM at temperature $T$: $$ W_2 = \frac{1}{M} \sum_{\mu,i} \bar{w}_{i\mu}^2 \underset{N \rightarrow \infty}\sim \frac{p}{T} \ .$$ We therefore estimate the temperature of a given RBM through $$ \hat{T} = \frac{\hat{p}}{\frac{1}{M} \sum_{\mu,i} w_{i\mu}^2}\ .$$ From this definition, it can be seen that the low-temperature limit of the compositional regime, ${T} \ll {p} $, is equivalent to $W_2 \gg 1$. In RBMs trained on MNIST, we typically find $W_2 \sim 7$. \subsection{Fields $g$} Similarly to the weights, the fields $g_i$ and the normalized fields can be estimated respectively as: \begin{equation} \begin{split} \hat{g_i} &= \hat{T} \bar{g}_i \\ \hat{\tilde{g_i}} &= \frac{\hat{T}}{\hat{p}} \bar{g}_i = \frac{\bar{g}_i}{\frac{1}{M} \sum_{\mu,i} w_{i\mu}^2} \end{split} \end{equation} A naive estimate for the normalized field $\tilde{g}$ would be to average the fields: $\hat{\tilde{g}} = \frac{1}{N} \sum_i \hat{\tilde{g_i}}$. It is, however, not really meaningful, as the $\hat{\tilde{g_i}}$ are extremely heterogeneous: for instance, the mean value over the sites $i$ of a single RBM is equal to $-0.48$, and is comparable to the standard deviation, $0.40$. This range of variation spans all the phases of R-RBM. To achieve quantitative predictions, we instead adjust the R-RBM parameter $g$ so that $q$, the mean value of $v_i$ in the visible layer, averaged over thermal fluctuations and quenched disorder, matches the value $0.132$ obtained from MNIST data. For the plots of Figure 4 in the main text, this gives $\frac{\hat{g}}{\hat{p}} = -0.1725$ for homogeneous R-RBM, and $\frac{\hat{g}}{\hat{p}} = -0.21$ for heterogeneous R-RBM.
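The proxies defined in this section reduce to a few lines of linear algebra; a sketch (our own illustration of the definitions of $PR_a$, $\hat{L}$, $\hat{p}$ and $\hat{T}$, with $W$ of shape visible $\times$ hidden):

```python
import numpy as np

def PR(x, a):
    """Participation ratio PR_a(x) = (sum |x_i|^a)^2 / sum |x_i|^(2a);
    equals K for a vector with K equal-modulus nonzero entries."""
    xa = np.abs(x)**a
    return xa.sum()**2 / (xa**2).sum()

def L_hat(h):
    """Number of strongly activated hidden units (a = 3 discounts the
    O(1) activations against the O(sqrt(N)) ones)."""
    return PR(h, a=3)

def p_hat(W):
    """Fraction of nonzero weights: p_hat = (1/(MN)) sum_mu PR_2(w_mu)."""
    N, M = W.shape
    return sum(PR(W[:, mu], a=2) for mu in range(M)) / (M * N)

def T_hat(W):
    """Effective temperature: T_hat = p_hat / W2, W2 = (1/M) sum w^2."""
    W2 = (W**2).sum() / W.shape[1]
    return p_hat(W) / W2
```

On an R-RBM-like weight matrix with entries in $\{-W_0, 0, W_0\}$, `p_hat` returns exactly the fraction of nonzero weights, as required by consistency.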
\subsection{Thresholds $\theta$} The thresholds and normalized thresholds can be estimated as \begin{equation} \begin{split} \hat{\theta}_\mu &= \sqrt{\hat{T}} \, \bar{\theta}_\mu \\ \hat{\tilde{\theta}}_\mu &= \sqrt{\frac{ \hat{T}}{\hat{p}}} \, \bar{\theta}_\mu = \frac{\bar{\theta}_\mu}{\sqrt{\frac{1}{M} \sum_{\mu,i} w_{i\mu}^2}} \end{split} \end{equation} Again, a naive estimate for the normalized threshold $\tilde{\theta}$ would be the average $\hat{\tilde{\theta}} = \frac{1}{M} \sum_\mu \hat{\tilde{\theta}}_\mu$, but this estimate is not meaningful. Indeed, contrary to the R-RBM case, the inputs $I_\mu$ of the hidden units $\mu$ are not evenly distributed around zero: $\mathbb{E} \left[ I_\mu \right] \neq 0$. Hence, even if the threshold is equal to zero, the activation probability can be different from 0.5. We take this effect into account by subtracting the average value of the inputs from the average of $\theta$, and find that the difference is equal to $0.33$, with standard deviation $1.11$. This range of values for $\theta$ again spans all phases. In order to use a well-defined value, we choose $\theta$ such that the critical capacity $\alpha_c^{R-RBM}(\ell_{max}) = 0.5$, where $\ell_{max} \sim 1.5 $ is the maximum average index number observed across all RBMs trained for Fig. 4 in the main text. This estimation gives $ \hat{\tilde{\theta}} \sim 1.5$.
\section{Introduction} We focus here on switched control systems, a class of hybrid systems recently used with success in various domains such as the automotive industry and power electronics. These systems are simply described by piecewise dynamics, periodically sampled with a given period. At each period, the system is in one and only one mode, decided by a control rule \cite{fribourg2014finite,liberzon2012switching}. Moreover, the considered systems can switch between any two modes instantaneously. This simplification can be easily bypassed by the addition of intermediate fictitious modes. In this paper, we consider that these modes are represented by nonlinear ODEs. In order to compute the control of a switched system, we need the solutions of differential equations. In the general case, differential equations cannot be integrated formally, and a numerical integration scheme is used to approximate the state of the system. With the objective of computing a guaranteed control, we base our approach on validated simulation (also called ``reachability analysis''). The \emph{guaranteed} or \emph{validated} solution of ODEs using interval arithmetic is mainly based on two kinds of methods: i) those based on Taylor series \cite{Moore66,Nedialkov,LiSt07,Dzetkulic:2015fk}; ii) those based on Runge-Kutta schemes \cite{BM06,Gajda:2008fk,BCD13,report}. The former is the oldest method used in the interval analysis community because the expression of the bound of a Taylor series is simple to obtain. Nevertheless, the family of Runge-Kutta methods is very important in the field of numerical analysis. Indeed, Runge-Kutta methods have several interesting stability properties which make them suitable for an important class of problems. Our tool \cite{dynibex} implements Runge-Kutta based methods which prove efficient at low order for short simulations (whose duration is fixed by the sampling period of the controller).
In the methods of symbolic analysis and control of hybrid systems, the way of representing sets of state values and computing reachable sets for systems defined by autonomous ordinary differential equations (ODEs) is fundamental (see, e.g., \cite{girard2005reachability,althoff2013reachability}). Many tools using, e.g., linearization or hybridization of these dynamics are now available (e.g., Spacex \cite{frehse2011spaceex}, Flow* \cite{chen2013flow}, iSAT-ODE~\cite{eggers2008sat}). An interesting approach appeared recently, based on the propagation of reachable sets using guaranteed Runge-Kutta methods with adaptive step size control (see \cite{BCD13,immler2015verified}). An originality of the present work is to use such guaranteed integration methods in the framework of switched systems. This notion of guaranteed results is very important, because we are mainly interested in critical domains, such as the aeronautical, military and medical ones. Other symbolic approaches for control synthesis of switched systems include the construction of a discrete abstraction of the original system on a grid of the state space. This can be done by computing symbolic models that are approximately bisimilar \cite{girard2010approximately} or approximately alternatingly similar \cite{zamani2012symbolic} to the original system. Another recent symbolic approach relies on feedback refinement relations \cite{reissig2015feedback}. We compare our work with the last two approaches, which are the closest related methods since the associated tools (respectively PESSOA \cite{Mazo2010} and SCOTS \cite{SCOTS}) are used to perform control synthesis on switched systems without any stability assumptions, as does the present method. The paper is divided as follows. In Section~\ref{sec:switched}, we introduce some preliminaries on switched systems and some notation used in the following. In Section~\ref{sec:simulation}, the guaranteed integration of nonlinear ODEs is presented.
In Section~\ref{sec:minimator}, we present the main state-space bisection algorithm used for control synthesis. In Section~\ref{sec:experimentations}, the whole approach is tested on three examples from the literature. We give some performance tests and compare our approach with state-of-the-art tools in Section \ref{sec:comparison}. We conclude in Section \ref{sec:conclu}. \section{Switched systems} \label{sec:switched} Let us consider the nonlinear switched system \begin{equation} \dot x(t) = f_{\sigma (t)}(x(t),d(t)) \label{eq:sys} \end{equation} defined for all $t \geq 0$, where $x(t) \in \mathbb{R}^n$ is the state of the system, $\sigma(\cdot) : \mathbb{R}^+ \longrightarrow U$ is the switching rule, and $d(t) \in \mathbb{R}^m$ is a bounded perturbation. The finite set $U = \{ 1, \dots , N \}$ is the set of switching modes of the system. We focus on sampled switched systems: given a sampling period $\tau >0$, switchings will occur at times $\tau$, $2\tau$, \dots{} The switching rule $\sigma(\cdot)$ is thus piecewise constant; we will consider that $\sigma(\cdot)$ is constant on the time interval $\lbrack (k-1) \tau , k \tau )$ for $k \geq 1$. We call ``\emph{pattern}'' a finite sequence of modes $\pi = (i_1,i_2,\dots,i_k) \in U^k$. With such a control input, and under a given perturbation $d$, we will denote by $\mathbf{x}(t; t_0, x_0,d,\pi)$ the solution at time $t$ of the system \begin{equation} \begin{aligned} \dot x(t) & = f_{\sigma (t)}(x(t),d(t)), \\ x(t_0) & = x_0, \\ \forall j \in \{1,\dots,k\}, \ \sigma(t) & = i_j \in U \ \text{for} \ t \in \lbrack (j-1) \tau , j \tau ). \end{aligned} \label{eq:sampled-sys} \end{equation} We address the problem of synthesizing a state-dependent switching rule $\tilde \sigma(x)$ for~\eqref{eq:sampled-sys} in order to verify some properties. The problem is formalized as follows: \begin{problem}[Control Synthesis Problem] Let us consider a sampled switched system~\eqref{eq:sampled-sys}.
Given three sets $R$, $S$, and $B$, with $R \cup B \subset S$ and $R \cap B = \emptyset$, find a rule $\tilde \sigma(x)$ such that, for any $x(0)\in R$: \begin{itemize} \item \textit{$\tau$-stability}\footnote{This definition of stability is different from stability in the Lyapunov sense.}: $x(t)$ returns in $R$ infinitely often, at some multiples of the sampling time $\tau$; \item \textit{safety}: $x(t)$ always stays in $S \backslash B$. \end{itemize} \label{prob:nl_control} \end{problem} With the above notation, we propose a procedure which solves this problem by constructing a law $\tilde \sigma(x)$ such that, for all $x_0 \in R$ and under the unknown bounded perturbation $d$, there exists $\pi = \tilde \sigma(x_0) \in U^k$ for some $k$ such that: \begin{equation*} \left\{ \begin{aligned} \mathbf{x}(t_0 + k\tau; t_0, x_0,d,\pi) \in R \\ \forall t \in [t_0,t_0 + k\tau], \quad \mathbf{x}(t; t_0, x_0,d,\pi) \in S \\ \forall t \in [t_0,t_0 + k\tau], \quad \mathbf{x}(t; t_0, x_0,d,\pi) \notin B \end{aligned} \right. \end{equation*} Such a law makes it possible to perform infinite-time state-dependent control. The synthesis algorithm is described in Section~\ref{sec:minimator} and involves the guaranteed set-based integration presented in the next section; the main underlying tool is interval analysis \cite{Moore66}. To tackle this problem, we introduce some definitions. In the following, we often use the notation $\lbrack x \rbrack \in \mathbb{IR}$ (the set of intervals with real bounds), where $\lbrack x \rbrack = \lbrack\underline{x}, \overline{x}\rbrack=\{ x \in \Rset \mid \underline{x} \leqslant x \leqslant \overline{x} \}$ denotes an interval. By an abuse of notation, $[x]$ will also denote a vector of intervals, \emph{i.e.}, a Cartesian product of intervals, a.k.a. a \emph{box}. In the following, the sets $R$, $S$ and $B$ are given under the form of boxes.
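Since $R$, $S$ and $B$ are boxes, the set tests used throughout the paper (inclusion, intersection with $B$, bisection along the longest dimension) reduce to componentwise interval comparisons. The following sketch is our own illustration, not the authors' DynIBEX-based implementation; it represents a box as a tuple of (lower, upper) pairs.

```python
# Minimal sketch (not the paper's implementation): boxes as tuples of
# (lower, upper) intervals, with the containment, intersection and
# bisection operations used by the synthesis algorithm.

def contains(outer, inner):
    """True if box `inner` is included in box `outer` (componentwise)."""
    return all(lo_o <= lo_i and hi_i <= hi_o
               for (lo_o, hi_o), (lo_i, hi_i) in zip(outer, inner))

def intersects(a, b):
    """True if boxes a and b overlap (used for the B-avoidance test)."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(a, b))

def bisect(box):
    """Split a box in two along its longest dimension."""
    widths = [hi - lo for lo, hi in box]
    k = widths.index(max(widths))
    lo, hi = box[k]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[k] = (lo, mid)
    right[k] = (mid, hi)
    return tuple(left), tuple(right)

R = ((1.55, 2.15), (1.0, 1.4))   # the box R of the DC-DC converter example
V1, V2 = bisect(R)
```

The two sub-boxes share the bisection hyperplane, so both remain included in the original box.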
\begin{definition}[Initial Value Problem (IVP)] Consider an ODE with a given initial condition \begin{equation} \label{eq:ivp} \dot{x}(t) = f(t, x(t), d(t))\quad\text{with} \quad x(0) \in X_0, \ d(t) \in \lbrack d \rbrack, \end{equation} with $f:\Rset^+\times\Rset^n\times \Rset^m\rightarrow\Rset^n$ assumed to be continuous in $t$ and $d$ and globally Lipschitz in $x$. We assume that the parameters $d$ are bounded (they are used to represent a perturbation, a modeling error, an uncertainty on measurement,~\dots). An \emph{IVP} consists in finding a function $x(t)$ satisfying the ODE~\eqref{eq:ivp} for all $d(t)$ lying in $\lbrack d \rbrack$ and for all initial conditions in $X_0$. \end{definition} \begin{definition} Let $X \subset \mathbb{R}^n$ be a box of the state space. Let $\pi = (i_1,i_2,\dots,i_k) \in U^k$. The \emph{successor set} of $X$ via $\pi$, denoted by $Post_{\pi}(X)$, is the (over-approximation of the) image of $X$ induced by application of the pattern $\pi$, \emph{i.e.}, the solution at time $t = k \tau$ of \begin{equation} \label{eq:ivp_post} \begin{aligned} \dot x(t) &= f_{\sigma (t)}(x(t),d(t)), \\ x(0) & = x_0 \in X, \\ \forall t \geq 0, & \quad d(t) \in \lbrack d \rbrack, \\ \forall j \in \{1,\dots,k\},& \quad \sigma(t) = i_j \in U \ \text{for} \ t \in \lbrack (j-1) \tau , j \tau ). \end{aligned} \end{equation} \label{def:post} \end{definition} \begin{definition} Let $X \subset \mathbb{R}^n$ be a box of the state space. Let $\pi = (i_1,i_2,\dots,i_k) \in U^k$. We denote by $Tube_{\pi}(X)$ the union of boxes covering the trajectories of IVP~\eqref{eq:ivp_post}, whose construction is detailed in Section~\ref{sec:simulation}. \label{def:tube} \end{definition} \section{Validated simulation} \label{sec:simulation} In this section, we describe our approach for validated simulation based on Runge-Kutta methods \cite{BCD13,report}.
A numerical integration method computes a sequence of approximations $(t_n, x_n)$ of the solution $x(t;x_0)$ of the IVP defined in Equation~\eqref{eq:ivp} such that $x_n \approx x(t_n;x_{n-1})$. The simplest method is Euler's method, in which $t_{n+1}=t_n+h$ for some step-size $h$ and $x_{n+1}=x_n+h\times f(t_n,x_n, d)$; the derivative of $x$ at time $t_n$, $f(t_n,x_n, d)$, is thus used as an approximation of the derivative on the whole time interval to perform a linear interpolation. This method is very simple and fast, but requires small step-sizes. More advanced methods, coming from the Runge-Kutta family, use a few intermediate computations to improve the approximation of the derivative. The general form of an explicit $s$-stage Runge-Kutta formula, that is, one using $s$ evaluations of $f$, is \begin{equation} \begin{aligned} x_{n+1} = x_n + h \sum_{i=1}^s b_i k_i\enspace, \\ k_1 = f\big(t_n,\, x_n, d\big)\enspace,\\ k_i = f \Big(t_n + c_i h,\, x_n + h \sum_{j=1}^{i-1} a_{ij}k_j, d\Big) , \ i = 2,3,\dots,s\enspace. \label{eq:ki} \end{aligned} \end{equation} The coefficients $c_i$, $a_{ij}$ and $b_i$ fully characterize the method. To make Runge-Kutta methods validated, the challenging question is how to compute a bound on the distance between the true solution and the numerical solution, defined by $x(t_n;x_{n-1}) - x_n$. This distance is associated to the \emph{local truncation error} (LTE) of the numerical method. To bound the LTE, we rely on the \textit{order condition}~\cite{HNW93} satisfied by all Runge-Kutta methods. This condition states that a method of this family is of order $p$ if and only if the first $p+1$ coefficients of the Taylor expansion of the exact solution and of the Taylor expansion of the numerical method are equal. Consequently, the LTE is proportional to the Lagrange remainders of the Taylor expansions.
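To make the structure of the formula above concrete, here is a plain (non-validated) Python sketch of an explicit $s$-stage Runge-Kutta step parameterized by its coefficients $c_i$, $a_{ij}$, $b_i$. The classical RK4 tableau used below is just one possible instance; the validated version additionally bounds the local truncation error, as explained next.

```python
# Sketch of one explicit s-stage Runge-Kutta step, parameterized by a
# Butcher tableau (c, a, b).  Non-validated: no truncation-error bound.

def rk_step(f, t, x, h, c, a, b, d=0.0):
    s = len(b)
    k = [f(t, x, d)]                                  # k_1
    for i in range(1, s):                             # k_2 .. k_s
        xi = x + h * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, xi, d))
    return x + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical RK4 tableau (order p = 4), one possible choice of coefficients.
c = [0.0, 0.5, 0.5, 1.0]
a = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
b = [1/6, 1/3, 1/3, 1/6]

# Example: x' = -x with x(0) = 1; ten steps of size h = 0.1.
x = 1.0
for n in range(10):
    x = rk_step(lambda t, x, d: -x, 0.1 * n, x, 0.1, c, a, b)
```

After ten steps, `x` approximates $e^{-1} \approx 0.3679$ with an error of order $h^4$ per unit time, illustrating why higher-order members of the family allow much larger steps than Euler's method.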
Formally, the LTE is defined by (see \cite{BCD13}): \begin{multline} \label{eq:truncation-error} x(t_n;x_{n-1}) - x_n = \\ \hspace{5mm}\frac{h^{p+1}}{(p+1)!} \left( f^{(p)}\left(\xi,x(\xi; x_{n-1}), d \right) - \frac{d^{p+1}\phi}{dt^{p+1}}(\eta) \right) \\ \xi\in]t_n, t_{n+1}[ \text{ and } \eta\in]t_n, t_{n+1}[\enspace. \end{multline} The function $f^{(n)}$ stands for the $n$-th derivative of function $f$ w.r.t. time $t$, that is, $\frac{d^n f}{dt^n}$, and $h=t_{n+1}-t_n$ is the step-size. The function $\phi:\Rset\to\Rset^n$ is defined by $\phi(t)= x_n + h \sum_{i=1}^s b_i k_i(t)$, where the $k_i(t)$ are defined as in Equation~\eqref{eq:ki}. The challenge in making Runge-Kutta integration schemes safe w.r.t. the true solution of the IVP is then to bound the result of Equation~\eqref{eq:truncation-error}. In other words, we have to bound the value of $f^{(p)}\left(\xi, x(\xi;x_{n-1}), d\right)$ and the value of $\frac{d^{p+1}\phi}{dt^{p+1}}(\eta)$. The latter expression is straightforward to bound because the function $\phi$ only depends on the value of the step-size $h$, and so does its $(p+1)$-th derivative. The bound is then obtained using affine arithmetic \cite{AffineA97,alexandre2016validated}. However, the expression $f^{(p)}\left(\xi, x(\xi;x_{n-1}), d\right)$ is not so easy to bound, as it requires evaluating $f$ for a particular value of the IVP solution $x(\xi;x_{n-1})$ at an unknown time $\xi \in ]t_n, t_{n+1}[$. The solution used is the same as the one found in~\cite{Nedialkov,BM06}, and it requires bounding the solution of the IVP on the interval $[t_n, t_{n+1}]$. This bound is usually computed using Banach's fixed-point theorem applied with the Picard-Lindel\"of operator, see \cite{Nedialkov}. This operator is used to compute an enclosure $[\tilde{x}]$ of the solution of the IVP over a time interval $[t_n, t_{n+1}]$, that is, for all $t \in [t_n, t_{n+1}]$, $x(t; x_{n-1}) \in [\tilde{x}]$. We can hence bound $f^{(p)}$ by substituting $x(\xi;x_{n-1})$ with $[\tilde{x}]$.
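The Picard-Lindel\"of step can be illustrated on a scalar ODE: an interval is a valid a priori enclosure over $[t_n, t_n + h]$ as soon as the Picard operator maps it into itself. The sketch below is our simplification (plain floating-point arithmetic without the outward rounding a real validated tool performs, and scalar dynamics only); it inflates a candidate interval until this contraction test succeeds.

```python
# Sketch of the Picard-Lindelof enclosure test for a scalar ODE x' = f(x):
# an interval cand is a valid a priori enclosure over [t_n, t_n + h] if
#     x0 + [0, h] * f(cand)  is included in  cand
# (Banach fixed-point argument).  Intervals are plain (lo, hi) pairs;
# a real tool additionally rounds outward.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def i_in(a, b):
    return b[0] <= a[0] and a[1] <= b[1]

def picard_enclosure(f_itv, x0, h, inflate=1.5, it_max=20):
    """Inflate a candidate interval until the Picard operator maps it into itself."""
    cand = x0
    for _ in range(it_max):
        image = i_add(x0, i_mul((0.0, h), f_itv(cand)))
        if i_in(image, cand):
            return cand      # guaranteed enclosure of x(t) on [t_n, t_n + h]
        w = cand[1] - cand[0] + 1e-9
        m = 0.5 * (cand[0] + cand[1])
        cand = (m - inflate * w, m + inflate * w)
    raise RuntimeError("no enclosure found; reduce the step size h")

# Example: x' = -x, x(0) in [0.9, 1.1], step h = 0.1.
enc = picard_enclosure(lambda x: i_mul((-1.0, -1.0), x), (0.9, 1.1), 0.1)
```

On this example the first inflation already yields a self-mapping interval, which then bounds $x(\xi;x_{n-1})$ for every $\xi$ in the step.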
For a given pattern of switched modes $\pi = (i_1,\dots,i_k) \in U^k$ of length $k$, we are able to compute, for $j \in \{1,\dots,k\}$, the enclosures: \begin{itemize} \item $[x_j] \ni x(t_j)$; \item $[\tilde{x}_j] \ni x(t), \ \text{for} \ t \in \lbrack (j-1)\tau,j\tau\rbrack$; \end{itemize} with respect to the system of IVPs: \begin{equation} \left\{ \begin{array}{c} \dot x(t) = f_{\sigma (t)}(t,x(t),d(t)),\\ \nonumber x(t_0=0) \in [x_0] , d(t) \in [d],\\ \sigma(t) = i_1, \forall t \in [0,t_1], t_1=\tau\\ \vdots\\ \dot x(t) = f_{\sigma (t)}(t,x(t),d(t)),\\ \nonumber x(t_{k-1}) \in [x_{k-1}], d(t) \in [d],\\ \sigma(t) = i_k, \forall t \in [t_{k-1},t_k], t_k=k\tau \end{array} \right. \end{equation} Thereby, the enclosure $Post_{\pi}(\lbrack x_0 \rbrack)$ is included in $[x_k]$, and $Tube_{\pi}(\lbrack x_0 \rbrack)$ is included in $\bigcup_{j=1,\dots,k} [\tilde{x}_j]$. This applies for all initial states in $\lbrack x_0 \rbrack$ and all disturbances $d(t) \in [d]$. A view of the enclosures computed by the validated simulation for one solution obtained for Example~\ref{ex2} is shown in Figure~\ref{fig:post_tube}. \begin{figure}[ht] \centering \includegraphics[trim = 4cm 3cm 4cm 4cm, clip, width=0.45\textwidth]{tube.pdf} \caption{Functions $Post_{\pi}(X)$ and $Tube_{\pi}(X)$ for the initial box $X=[-0.69,-0.64] \times [1,1.06]$, with a pattern $\pi = (1,3,0)$.} \label{fig:post_tube} \end{figure} \section{The state-space bisection algorithm} \label{sec:minimator} \subsection{Principle of the algorithm} We describe here the algorithm solving the control synthesis problem (see Problem \ref{prob:nl_control}, Section \ref{sec:switched}).
Given the input boxes $R$, $S$, $B$, and given two positive integers $K$ and $D$, the algorithm provides, when it succeeds, a decomposition $\Delta$ of $R$ of the form $\{ V_i, \pi_i \}_{i \in I}$, with the properties: $\bigcup_{i \in I} V_i = R$, $\forall i \in I, \ Post_{\pi_i}(V_i) \subseteq R$, $\forall i \in I, \ Tube_{\pi_i}(V_i) \subseteq S$, $\forall i \in I, \ Tube_{\pi_i}(V_i) \bigcap B = \emptyset$. The sub-boxes $\{ V_i \}_{i \in I}$ are obtained by repeated bisection. At first, function $Decomposition$ calls sub-function $Find\_Pattern$, which looks for a pattern $\pi$ of length at most $K$ such that $Post_{\pi}(R) \subseteq R$, $Tube_{\pi}(R) \subseteq S$ and $Tube_{\pi}(R) \bigcap B = \emptyset$. If such a pattern $\pi$ is found, then a uniform control over $R$ is found (see Figure~\ref{fig:scheme}(a)). Otherwise, $R$ is divided into two sub-boxes $V_1$, $V_{2}$, by bisecting $R$ w.r.t. its longest dimension. Patterns are then searched to control these sub-boxes (see Figure~\ref{fig:scheme}(b)). If, for each $V_i$, function $Find\_Pattern$ manages to get a pattern $\pi_i$ of length at most $K$ verifying $Post_{\pi_i}(V_i) \subseteq R$, $Tube_{\pi_i}(V_i) \subseteq S$ and $Tube_{\pi_i}(V_i) \bigcap B = \emptyset$, then the decomposition succeeds. If, for some $V_j$, no such pattern is found, the procedure is recursively applied to $V_j$. It ends with success when every sub-box of $R$ has a pattern verifying the above conditions, or fails when the maximal degree of decomposition $D$ is reached. The algorithmic form of functions $Decomposition$ and $Find\_Pattern$ is given in Figures \ref{fig:decomposition} and \ref{fig:findpattern} (see \cite{fribourg2014finite,ulrich} for the linear case).
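The recursion just described can be sketched as follows (our Python transcription; the callable `find_pattern` abstracts Find\_Pattern and returns a pattern or `None`, with the $R$, $S$, $B$ tests folded into it):

```python
# Sketch of the Decomposition recursion: try to control box W with one
# pattern; otherwise bisect along the longest dimension and recurse,
# up to depth D.  find_pattern(W) returns a pattern or None.

def decomposition(W, find_pattern, D):
    pi = find_pattern(W)
    if pi is not None:
        return [(W, pi)]                    # uniform control over W
    if D == 0:
        return None                         # failure: depth budget exhausted
    widths = [hi - lo for lo, hi in W]
    k = widths.index(max(widths))
    lo, hi = W[k]
    mid = 0.5 * (lo + hi)
    pieces = []
    for half in ((lo, mid), (mid, hi)):
        V = tuple(half if i == k else W[i] for i in range(len(W)))
        sub = decomposition(V, find_pattern, D - 1)
        if sub is None:
            return None                     # one sub-box fails => whole call fails
        pieces += sub
    return pieces                           # the decomposition {(V_i, pi_i)}

# Toy 1-D instance: pretend a pattern exists only for boxes of width <= 0.25.
delta = decomposition(((0.0, 1.0),),
                      lambda W: "pi" if W[0][1] - W[0][0] <= 0.25 else None,
                      D=3)
```

On this toy instance the unit interval is bisected twice, yielding four controlled sub-boxes whose union is the initial box.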
\begin{figure}[ht] \centering \includegraphics[trim = 2cm 6cm 4cm 5.5cm, clip, width=0.45\textwidth]{bisect.pdf} \caption{Principle of the bisection method.} \label{fig:scheme} \end{figure} Having defined the control synthesis method, we now introduce the main result of this paper, stated as follows: \begin{proposition} The algorithm of Figure \ref{fig:decomposition} with input $(R,R,S,B,D,K)$ outputs, when it successfully terminates, a decomposition $\{ V_i,\pi_i \}_{i \in I}$ of~$R$ which solves Problem~\ref{prob:nl_control}. \end{proposition} \begin{proof} Let $x_0 = x(t_0=0)$ be an initial condition belonging to~$R$. If the decomposition has terminated successfully, we have $\bigcup_{i \in I} V_i = R$, and $x_0$ thus belongs to $V_{i_0}$ for some $i_0\in I$. We can thus apply the pattern $\pi_{i_0}$ associated with $V_{i_0}$. Let us denote by $k_0$ the length of $\pi_{i_0}$. We have: \begin{itemize} \item $\mathbf{x}(k_0\tau;0,x_0,d,\pi_{i_0}) \in R$ \item $\forall t \in [0, k_0\tau], \quad \mathbf{x}(t;0,x_0,d,\pi_{i_0}) \in S$ \item $\forall t \in [0, k_0\tau], \quad \mathbf{x}(t;0,x_0,d,\pi_{i_0}) \notin B$ \end{itemize} Let $x_1 = \mathbf{x}(k_0\tau;0,x_0,d,\pi_{i_0}) \in R$ be the state reached after application of $\pi_{i_0}$ and let $t_1 = k_0 \tau$. State $x_1$ belongs to $R$; it thus belongs to $V_{i_1}$ for some $i_1 \in I$, and we can apply the associated pattern $\pi_{i_1}$ of length $k_1$, leading to: \begin{itemize} \item $\mathbf{x}(t_1 + k_1\tau;t_1,x_1,d,\pi_{i_1}) \in R$ \item $\forall t \in [t_1, t_1 + k_1\tau], \quad \mathbf{x}(t;t_1,x_1,d,\pi_{i_1}) \in S$ \item $\forall t \in [t_1, t_1 + k_1\tau], \quad \mathbf{x}(t;t_1,x_1,d,\pi_{i_1}) \notin B$ \end{itemize} We can then iterate this procedure from the new state $x_2 = \mathbf{x}(t_1 + k_1\tau;t_1,x_1,d,\pi_{i_1}) \in R$.
This can be repeated infinitely, yielding a sequence of points $x_0,x_1,x_2,\dots$ belonging to $R$, attained at times $t_0,t_1,t_2,\dots$, at which the patterns $\pi_{i_0},\pi_{i_1},\pi_{i_2},\dots$ are applied. We furthermore have that all the trajectories stay in $S$ and never cross $B$: $ \forall t \in \mathbb{R}^+, \exists k \geq 0, \ t \in \lbrack t_k , t_{k+1} \rbrack$ and $ \forall t \in \lbrack t_k , t_{k+1} \rbrack,\ \mathbf{x} ( t; t_k, x_k, d, \pi_{i_k}) \in S,\ \mathbf{x} (t;t_k, x_k, d, \pi_{i_k}) \notin B $. The trajectories thus return infinitely often in $R$, while always staying in $S$ and never crossing $B$. \end{proof} \begin{remark} Note that it is possible to perform reachability from a set $R_1$ to another set $R_2$ by computing $Decomposition(R_1,R_2,S,B,D,K)$. The set $R_1$ is thus decomposed with the objective to send its sub-boxes into $R_2$, i.e., for a sub-box $V$ of $R_1$, patterns $\pi$ are searched with the objective $Post_{\pi}(V) \subseteq R_2$ (see Example~\ref{ex2}).
\end{remark} \begin{figure} \fbox{ \begin{minipage}{0.42\textwidth} \begin{algorithmic} \STATE{\textbf{Function:} $Decomposition(W,R,S,B,D,K)$} \STATE{\begin{center}\line(1,0){150}\end{center}} \STATE{\quad \textbf{Input:} A box $W$, a box $R$, a box $S$, a box $B$, a degree $D$ of bisection, a length $K$ of input pattern}\STATE{\quad \textbf{Output:} $\langle\{(V_i,\pi_i)\}_{i},True\rangle$ or $\langle\_ ,False\rangle$} \STATE{\begin{center}\line(1,0){150}\end{center}} \STATE{ $(\pi,b) := Find\_Pattern(W,R,S,B,K)$} \IF{$b=True$}{ \STATE{\textbf{return} $\langle\{(W,\pi)\},True\rangle$} } \ELSE \IF{$D = 0$} \RETURN{$\langle \_,False\rangle$} \ELSE \STATE{Divide equally $W$ into $(W_1, W_{2})$ \FOR{$i=1,2$}\STATE{\small{$(\Delta_i,b_i)$ := $Decomposition(W_i,R,S,B,D - 1,K)$}}\ENDFOR \RETURN $(\bigcup_{i=1,2} \Delta_i,\bigwedge_{i=1,2} b_i)$ } \ENDIF \ENDIF \end{algorithmic} \end{minipage} } \caption{Algorithmic form of Function $Decomposition$.} \label{fig:decomposition} \end{figure} \begin{figure} \fbox{ \begin{minipage}{0.42\textwidth} \begin{algorithmic} \STATE{\textbf{Function:} $Find\_Pattern(W,R,S,B,K)$} \STATE{\begin{center}\line(1,0){150}\end{center}} \STATE{\quad \textbf{Input:} A box $W$, a box $R$, a box $S$, a box $B$, a length $K$ of input pattern} \STATE{\quad \textbf{Output:} $\langle \pi,True\rangle$ or $\langle\_, False\rangle$} \STATE{\begin{center}\line(1,0){150}\end{center}} \FOR{$i=1\dots K$} \STATE{$\Pi :=$ set of input patterns of length $i$} \WHILE{$\Pi$ is non-empty} \STATE{Select $\pi$ in $\Pi$} \STATE{$\Pi:= \Pi\setminus \{\pi\}$} \IF{$Post_{\pi}(W) \subseteq R$ \AND $Tube_{\pi}(W) \subseteq S$ \AND $Tube_{\pi}(W) \bigcap B = \emptyset$ }{\RETURN{$\langle \pi,True\rangle$}} \ENDIF \ENDWHILE \ENDFOR \RETURN{$\langle \_,False \rangle$} \end{algorithmic} \end{minipage} } \caption{Algorithmic form of Function $Find\_Pattern$.} \label{fig:findpattern} \end{figure} \subsection{The search for patterns} We propose here an improvement of the function
$Find\_Pattern$ given in \cite{NL_minimator,fribourg2014finite,ulrich}, which naively tests all the patterns of increasing length (up to $K$). The improved function, denoted here by $Find\_Pattern2$, exploits heuristics to prune the search tree of patterns. The algorithmic form of $Find\_Pattern2$ is given in Figure \ref{fig:findpattern2}. It relies on a new data structure consisting of a list of triplets containing: \begin{itemize} \item An initial box $V \subset \mathbb{R}^n$, \item A {\em current} box $Post_{\pi}(V)$, image of $V$ by the pattern $\pi$, \item The associated pattern $\pi$. \end{itemize} For any element $e$ of a list of this type, we denote by $e.Y_{init}$ the initial box, by $e.Y_{current}$ the {\em current} box, and by $e.\Pi$ the associated pattern. We denote by $e_{current} = takeHead(\mathcal{L})$ the element on top of a list $\mathcal L$ (this element is removed from list $\mathcal L$). The function $putTail(\cdot,\mathcal{L})$ adds an element at the end of the list $\mathcal L$. Let us suppose one wants to control a box $X \subseteq R$. The list $\mathcal{L}$ of Figure \ref{fig:findpattern2} is used to store the intermediate computations leading to possible solutions (patterns sending $X$ into $R$ while never crossing $B$ or $\mathbb{R}^n \setminus S$). It is initialized as $\mathcal{L} = \{ \left(X,X, \emptyset \right) \}$. First, all the control modes are tested (a set simulation starting from $X$ during time $\tau$ is computed for all the modes in $U$). The first level of branches is thus tested exhaustively. If a branch leads to crossing $B$ or $\mathbb{R}^n \setminus S$, the branch is cut. Otherwise, either a solution is found or an intermediate state is added to $\mathcal{L}$. The next level of branches (patterns of length $2$) is then explored from the branches that have not been cut, and so on, iteratively.
At the end, either the tree has been explored up to level $K$ (avoiding the cut branches), or all the branches have been cut at lower levels. List $\mathcal{L}$ is thus of the form $\{ (X,Post_{\pi_i}(X),\pi_i) \}_{i \in {I_X}}$, where for each $i \in {I_X}$ we have $Post_{\pi_i}(X) \subseteq S$ and $Tube_{\pi_i}(X) \bigcap B = \emptyset$. Here, $I_X$ is the set of indexes associated with the stored intermediate solutions, and $\vert I_X \vert$ is thus the number of stored intermediate solutions for the initial box $X$. The number of stored intermediate solutions grows as the search tree of patterns is explored, then decreases as solutions are validated, branches are cut, or the maximal level $K$ is reached. The storage of the intermediate solutions $Post_{\pi_i}(X)$ allows reusing the computations already performed. Even if the search tree of patterns is visited exhaustively, this already leads to much better computation times than with Function $Find\_Pattern$. A second list, denoted by $Solution$ in Figure \ref{fig:findpattern2}, is used to store the validated patterns associated with $X$, i.e., a list of patterns of the form $\{ \pi_j \}_{j \in I_X'}$, where for each $j \in I_X'$ we have $Post_{\pi_j}(X) \subseteq R$, $Tube_{\pi_j}(X) \bigcap B = \emptyset$ and $Tube_{\pi_j}(X) \subseteq S$. Here, $I_X'$ is the set of indexes associated with the stored validated solutions, and $\vert I_X' \vert$ is thus the number of stored validated solutions for the initial box $X$. The number of stored validated solutions can only increase, and we hope that at least one solution is found; otherwise, the initial box $X$ is split into two sub-boxes. Note that several solutions can be returned by $Find\_Pattern2$; further optimizations could thus be performed, such as returning the pattern minimizing a given cost function.
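The pruned breadth-first exploration just described can be sketched as follows (our Python transcription, with the validated $Post$/$Tube$ computations abstracted behind the callables `post` and `tube_ok`; these names are ours):

```python
# Sketch of Find_Pattern2: breadth-first exploration of patterns with
# pruning.  post(i, Y) is the successor box of Y under one step of mode i;
# tube_ok(i, Y) is True when the corresponding tube neither leaves S nor
# meets B.  The first validated pattern is returned.

from collections import deque

def find_pattern2(X, modes, post, tube_ok, in_R, K):
    queue = deque([(X, ())])              # pairs (current box, pattern so far)
    while queue:
        Y, pi = queue.popleft()           # takeHead
        for i in modes:
            if not tube_ok(i, Y):
                continue                  # branch cut: tube hits B or leaves S
            Y2 = post(i, Y)
            if in_R(Y2):
                return pi + (i,)          # first validated pattern
            if len(pi) + 1 < K:
                queue.append((Y2, pi + (i,)))   # putTail: intermediate state
    return None

# Toy 1-D instance: mode 1 shifts a box by -0.3, mode 2 by +0.1;
# the target R is [0, 0.2] and there is no obstacle.
post = lambda i, Y: (Y[0] + (-0.3 if i == 1 else 0.1),
                     Y[1] + (-0.3 if i == 1 else 0.1))
pat = find_pattern2((0.55, 0.65), (1, 2), post,
                    lambda i, Y: True,
                    lambda Y: 0.0 <= Y[0] and Y[1] <= 0.2, K=4)
```

On this toy instance the search returns the pattern $(1,1,2)$, the shortest mode sequence driving the box into the target.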
In practice, and in the examples given below, we return the first validated pattern and stop the computation as soon as it is obtained (see the commented line in Figure \ref{fig:findpattern2}). Compared to \cite{fribourg2014finite,ulrich}, this new function greatly improves the computation times, even though the worst-case complexity of the two functions is theoretically the same, at most in $O(N^K)$. A comparison between functions $Find\_Pattern$ and $Find\_Pattern2$ is given in Section \ref{sec:comparison}. \begin{figure*}[t] \fbox{ \begin{minipage}{0.9\textwidth} \begin{algorithmic} \STATE{\textbf{Function:} $Find\_Pattern2(W,R,S,B,K)$} \STATE{\begin{center}\line(1,0){150}\end{center}} \STATE{\quad \textbf{Input:} A box $W$, a box $R$, a box $S$, a box $B$, a length $K$ of input pattern} \STATE{\quad \textbf{Output:} $\langle \pi,True\rangle$ or $\langle\_, False\rangle$} \STATE{\begin{center}\line(1,0){150}\end{center}} \STATE{$Solution = \{ \emptyset \}$} \STATE{$\mathcal{L} = \{ \left(W,W, \emptyset \right) \}$} \WHILE{$\mathcal{L} \neq \emptyset$} \STATE{$e_{current}$ = takeHead($\mathcal{L}$)} \FOR{$i \in U$} \IF{$Post_{i}(e_{current}.Y_{current}) \subseteq R$ \AND $Tube_{i}(e_{current}.{Y_{current}}) \bigcap B = \emptyset$ \AND $Tube_{i}(e_{current}.Y_{current}) \subseteq S$} \STATE{$\text{putTail}(Solution,e_{current}.\Pi + i)$ \quad\quad {\color{blue} /*can be replaced by: ``{\bf return} $\langle e_{current}.\Pi + i,True \rangle$'' */ } } \ELSIF{$Tube_{i}(e_{current}.{Y_{current}}) \bigcap B \neq \emptyset$ \OR $Tube_{i}(e_{current}.{Y_{current}}) \nsubseteq S$} \STATE{ discard the branch $e_{current}.\Pi + i$ } \ELSE \IF{$\text{Length}(e_{current}.\Pi)+1 < K$} \STATE{$\text{putTail}(\mathcal{L},\left(e_{\text{current}}.Y_{\text{init}}, Post_i(e_{current}.Y_{current}),e_{\text{current}}.\Pi + i \right))$ } \ENDIF \ENDIF \ENDFOR \ENDWHILE \RETURN{$\langle
\_,False \rangle$ if no solution is found, or $\langle \pi,True\rangle$, $\pi$ being any pattern validated in $Solution$.} \end{algorithmic} \end{minipage} } \caption{Algorithmic form of Function $Find\_Pattern2$.} \label{fig:findpattern2} \end{figure*} \section{Experimentations} \label{sec:experimentations} In this section, we apply our approach to different case studies taken from the literature. Our solver prototype is written in C++ and based on DynIBEX \cite{dynibex}. The computation times given in the following have been obtained on a 2.80 GHz Intel Core i7-4810MQ CPU with 8 GB of memory. Note that our algorithm is single-threaded, so all the experiments use only one core to perform the computations. The results given in this section have been obtained with Function $Find\_Pattern2$. \subsection{A linear example: boost DC-DC converter} This linear example is taken from \cite{beccuti2005optimal} and has already been treated with the state-space bisection method in a linear framework in \cite{fribourg2014finite}. The system is a boost DC-DC converter with one switching cell. There are two switching modes depending on the position of the switching cell. The dynamics is given by the equation $\dot x (t) = A_{\sigma(t)} x(t) + B_{\sigma(t)}$ with $\sigma(t) \in U = \{ 1,2 \}$. The two modes are given by the matrices: $$ A_1 = \left( \begin{matrix} - \frac{r_l}{x_l} & 0 \\ 0 & - \frac{1}{x_c} \frac{1}{r_0 + r_c} \end{matrix} \right) \quad B_1 = \left( \begin{matrix} \frac{v_s}{x_l} \\ 0 \end{matrix} \right) $$ $$ A_2 = \left( \begin{matrix} - \frac{1}{x_l} (r_l + \frac{r_0 r_c}{r_0 + r_c}) & - \frac{1}{x_l} \frac{r_0}{r_0 + r_c} \\ \frac{1}{x_c}\frac{r_0}{r_0 + r_c} & - \frac{1}{x_c} \frac{r_0}{r_0 + r_c} \end{matrix} \right) \quad B_2 = \left( \begin{matrix} \frac{v_s}{x_l} \\ 0 \end{matrix} \right) $$ with $x_c = 70$, $x_l = 3$, $r_c = 0.005$, $r_l = 0.05$, $r_0 = 1$, $v_s = 1$. The sampling period is $\tau = 0.5$.
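As a sanity check, the two affine modes above can be transcribed and simulated directly. This is a plain, non-validated fixed-step RK4 simulation written for illustration; the validated analysis in the paper is done with DynIBEX.

```python
# Direct transcription of the two converter modes x' = A_s x + B_s
# with the parameter values given in the text (non-validated simulation).

xc, xl, rc, rl, r0, vs = 70.0, 3.0, 0.005, 0.05, 1.0, 1.0

def f1(x):
    return (-(rl / xl) * x[0] + vs / xl,
            -(1.0 / (xc * (r0 + rc))) * x[1])

def f2(x):
    a11 = -(rl + r0 * rc / (r0 + rc)) / xl
    a12 = -(r0 / (r0 + rc)) / xl
    a21 = (r0 / (r0 + rc)) / xc
    a22 = -(r0 / (r0 + rc)) / xc
    return (a11 * x[0] + a12 * x[1] + vs / xl,
            a21 * x[0] + a22 * x[1])

def rk4(f, x, h):
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

def step(mode, x, tau=0.5, n=50):
    """Simulate one sampling period tau in the given mode."""
    f = f1 if mode == 1 else f2
    for _ in range(n):
        x = rk4(f, x, tau / n)
    return x

x1p = step(1, (1.55, 1.4))   # one sampling period in mode 1
x2p = step(2, (1.55, 1.4))   # one sampling period in mode 2
```

In mode 1 the two state variables are decoupled, so the result can be checked against the exact exponential solution.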
The parameters are exact and there is no perturbation. We want the state to return infinitely often to the region $R$, set here to $\lbrack 1.55 , 2.15 \rbrack \times \lbrack 1.0 , 1.4 \rbrack$, while never going out of the safety set $S = \lbrack 1.54 , 2.16 \rbrack \times \lbrack 0.99 , 1.41 \rbrack$. The decomposition was obtained in less than one second with a maximum pattern length set to $K = 6$ and a maximum bisection depth of $D = 3$. A simulation is given in Figure~\ref{fig:NL_0}. \begin{figure}[!ht] \centering \includegraphics[scale=0.4]{simu_boost_safe.png} \caption{Simulation from the initial condition $(1.55,1.4)$. The box $R$ is in plain black. The trajectory is plotted over time for the two state variables on the left, and in the state-space plane on the right.} \label{fig:NL_0} \end{figure} \subsection{A polynomial example} \label{ex2} We consider the polynomial system taken from \cite{liu2013synthesis}: \begin{equation} \left \lbrack \begin{matrix} \dot x_1 \\ \dot x_2 \end{matrix} \right \rbrack = \left \lbrack \begin{matrix} -x_2 - 1.5 x_1 - 0.5 x_1^3 + u_1 + d_1 \\ x_1 + u_2 + d_2 \end{matrix} \right \rbrack. \end{equation} The control inputs are given by $u = (u_1,u_2) = K_{\sigma(t)}(x_1,x_2)$, $\sigma(t) \in U = \{ 1,2,3,4 \}$, which correspond to four different state feedback controllers $K_1(x) = (0,-x_2^2 + 2)$, $K_2(x) = (0,-x_2)$, $K_3(x) = (2,10)$, $K_4(x) = (-1.5,10)$. We thus have four switching modes. The disturbance $d = (d_1,d_2)$ lies in $\lbrack -0.005,0.005 \rbrack \times \lbrack -0.005,0.005 \rbrack$. The objective is to visit infinitely often two zones $R_1$ and $R_2$, without going out of a safety zone $S$, and while never crossing a forbidden zone $B$.
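The closed-loop vector fields obtained by substituting the four feedback laws into the plant can be written down directly. The following is a plain transcription for illustration (the disturbance argument defaults to zero):

```python
# Closed-loop vector fields of the polynomial example: one function per
# switching mode, obtained by substituting u = K_sigma(x) into the plant.

def mode(sigma):
    def u(x1, x2):
        return {1: (0.0, -x2**2 + 2.0),
                2: (0.0, -x2),
                3: (2.0, 10.0),
                4: (-1.5, 10.0)}[sigma]
    def f(x, d=(0.0, 0.0)):
        x1, x2 = x
        u1, u2 = u(x1, x2)
        return (-x2 - 1.5 * x1 - 0.5 * x1**3 + u1 + d[0],
                x1 + u2 + d[1])
    return f

# Vector field of mode 3 evaluated at the corner (0.5, -0.75) of R1.
dx = mode(3)((0.5, -0.75))
```

Each `mode(sigma)` is the right-hand side that the validated integration encloses over one sampling period $\tau = 0.15$.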
Two decompositions are performed: \begin{itemize} \item a decomposition of $R_1$ which returns $\{ (V_i,\pi_i) \}_{i \in I_1}$ with: $\bigcup_{i \in I_1} V_i = R_1$, $\forall i \in I_1, \ Post_{\pi_i}(V_i) \subseteq R_2$, $\forall i \in I_1, \ Tube_{\pi_i}(V_i) \subseteq S$, $\forall i \in I_1, \ Tube_{\pi_i}(V_i) \bigcap B = \emptyset$; \item a decomposition of $R_2$ which returns $\{ (V_i,\pi_i) \}_{i \in I_2}$ with: $\bigcup_{i \in I_2} V_i = R_2$, $\forall i \in I_2, \ Post_{\pi_i}(V_i) \subseteq R_1$, $\forall i \in I_2, \ Tube_{\pi_i}(V_i) \subseteq S$, $\forall i \in I_2, \ Tube_{\pi_i}(V_i) \bigcap B = \emptyset$. \end{itemize} The input boxes are the following: $R_1 = \lbrack -0.5 , 0.5 \rbrack \times \lbrack -0.75 , 0.0 \rbrack$, $R_2 = \lbrack -1.0 , 0.65 \rbrack \times \lbrack 0.75 , 1.75 \rbrack$, $S = \lbrack -2.0 , 2.0 \rbrack \times \lbrack -1.5 , 3.0 \rbrack$, $B = \lbrack 0.1 , 1.0 \rbrack \times \lbrack 0.15 , 0.5 \rbrack$. The sampling period is set to $\tau = 0.15$. The decompositions were obtained in $2$ minutes and $30$ seconds with a maximum pattern length set to $K = 12$ and a maximum bisection depth of $D = 5$. A simulation is given in Figure~\ref{fig:NL_1}, in which the disturbance $d$ is chosen randomly in $\lbrack -0.005,0.005 \rbrack \times \lbrack -0.005,0.005 \rbrack$ at every time step. \begin{figure}[!ht] \centering \includegraphics[scale=0.4]{simu_obstacle5.png} \caption{Simulation from the initial condition $(0.5,-0.75)$. The trajectory is plotted over time on the left, and in the state-space plane on the right. In the state-space plane, the set $R_1$ is in plain green, $R_2$ in plain blue, and $B$ in plain black.} \label{fig:NL_1} \end{figure} \subsection{Building ventilation} We consider a building ventilation application adapted from \cite{meyer:tel-01232640}. The system is a four-room apartment subject to heat transfer between the rooms, with the external environment, with the underfloor, and with human beings.
The dynamics of the system is given by the following equation: \begin{multline} \frac{d T_i}{dt} = \sum_{j \in \mathcal{N}^\text{*}} a_{ij} (T_j - T_i) + \delta_{s_i} b_i (T_{s_i}^4 - T_i ^4 ) \\ + c_i \max\left(0,\frac{V_i - V_i^\text{*}}{\bar{ V_i} - V_i^{\text{*}}}\right)(T_u - T_i). \end{multline} The state of the system is given by the temperatures in the rooms $T_i$, for $i \in \mathcal{N} = \{ 1 , \dots , 4 \}$. Room $i$ is subject to heat exchange with the different entities indicated by the indices $\mathcal{N}^\text{*} = \{1,2,3,4,u,o,c \}$. The heat transfer between the rooms is given by the coefficients $a_{ij}$ for $(i,j) \in \mathcal{N}^2$, and the different perturbations are the following: \begin{itemize} \item The external environment: it has an effect on room $i$ with the coefficient $a_{io}$ and the outside temperature $T_o$, varying between $27^\circ C$ and $30^\circ C$. \item The heat transfer through the ceiling: it has an effect on room $i$ with the coefficient $a_{ic}$ and the ceiling temperature $T_c$, varying between $27^\circ C$ and $30^\circ C$. \item The heat transfer with the underfloor: it is given by the coefficient $a_{iu}$ and the underfloor temperature $T_u$, set to $17^\circ C$ ($T_u$ is constant, regulated by a PID controller). \item The perturbation induced by the presence of humans: it is given in room $i$ by the term $\delta_{s_i} b_i (T_{s_i}^4 - T_i ^4 )$, where the parameter $\delta_{s_i}$ is equal to $1$ when someone is present in room $i$, and $0$ otherwise, and $T_{s_i}$ is a given identified parameter. \end{itemize} The control $V_i$, $i \in \mathcal{N}$, is applied through the term $c_i \max(0,\frac{V_i - V_i^\text{*}}{\bar{ V_i} - V_i^{\text{*}}})(T_u - T_i)$. A voltage $V_i$ is applied to force ventilation from the underfloor to room $i$, and the command of an underfloor fan is subject to a dry friction.
Because we work in a switched control framework, $V_i$ can take only discrete values, which removes the problem of dealing with a ``max'' function in interval analysis. In the experiment, $V_1$ and $V_4$ can take the values $0$~V or $3.5$~V, and $V_2$ and $V_3$ can take the values $0$~V or $3$~V. This leads to a system of the form~\eqref{eq:sys} with $\sigma(t) \in U =\{ 1, \dots, 16 \}$, the $16$ switching modes corresponding to the different possible combinations of voltages $V_i$. The sampling period is $\tau = 10$~s. The parameters $T_{s_i}$, $V_i^\text{*}$, $\bar V_i$, $a_{ij}$, $b_i$, $c_i$ are given in \cite{meyer:tel-01232640} and have been identified with a proper identification procedure detailed in \cite{meyer2014ecc}. Note that here we have neglected the term $\sum_{j \in \mathcal{N}} \delta_{d_{ij}}c_{i,j} \ast h(T_j - T_i)$ of \cite{meyer:tel-01232640}, representing the perturbation induced by the open or closed state of the doors between the rooms. Taking a ``max'' function into account with interval analysis is actually still a difficult task. However, this term could have been taken into account with a proper regularization (smoothing). The decomposition was obtained in $4$ minutes with a maximum pattern length set to $K = 2$ and a maximum bisection depth of $D = 4$. The perturbation due to human beings has been taken into account by setting the parameters $\delta_{s_i}$ equal to the whole interval $\lbrack 0,1 \rbrack$ for the decomposition, and the imposed perturbation for the simulation is given in Figure~\ref{fig:NL_2_perturbation}. The temperatures $T_o$ and $T_c$ have been set to the interval $\lbrack27,30\rbrack$ for the decomposition, and are set to $30^\circ C$ for the simulation. A simulation of the controller obtained with the state-space bisection procedure is given in Figure~\ref{fig:NL_2}, where the control objective is to stabilize the temperature in $\lbrack 20 , 22 \rbrack ^4$ while never going out of $\lbrack 19 , 23 \rbrack ^4$.
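The shape of the right-hand side for one room can be transcribed as follows. All numerical values used in the example call are placeholders chosen for illustration only: the identified coefficients $a_{ij}$, $b_i$, $c_i$, $T_{s_i}$, $V_i^*$, $\bar V_i$ are given in the cited thesis and are not reproduced here.

```python
# Right-hand side of the temperature equation for one room.  The
# numerical values in the example call are PLACEHOLDERS for
# illustration; the identified coefficients come from the cited thesis.

def dTi(Ti, temps, a, delta_s, b, Ts, c, V, Vstar, Vbar, Tu):
    """temps: temperature T_j of each entity j in N*; a: coefficients a_ij."""
    heat = sum(a[j] * (Tj - Ti) for j, Tj in temps.items())
    human = delta_s * b * (Ts**4 - Ti**4)
    vent = c * max(0.0, (V - Vstar) / (Vbar - Vstar)) * (Tu - Ti)
    return heat + human + vent

# Hypothetical call: room at 21 C, one neighbouring room and the outside,
# nobody present, fan at full voltage (so the max(...) term saturates at 1).
dT = dTi(Ti=21.0,
         temps={"room2": 20.0, "outside": 30.0},
         a={"room2": 0.02, "outside": 0.005},
         delta_s=0.0, b=1e-8, Ts=0.0,
         c=0.2, V=3.5, Vstar=1.0, Vbar=3.5, Tu=17.0)
```

Since the $V_i$ take only finitely many values, the `max(...)` factor is a constant per mode, which is precisely why the dry-friction nonlinearity causes no difficulty for interval analysis in the switched setting.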
\begin{figure}[ht] \centering \includegraphics[scale=0.4]{NL_case_2_perturbation.png} \caption{Perturbation (presence of humans) imposed over time in the different rooms.} \label{fig:NL_2_perturbation} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.35]{NL_case_2.png} \caption{Simulation from the initial condition $(22,22,22,22)$. The objective set $R$ is in plain black and the safety set $S$ is in dotted black.} \label{fig:NL_2} \end{figure} \section{Performance tests} \label{sec:comparison} We present a comparison of the computation times obtained with functions $Find\_Pattern$ and $Find\_Pattern2$, and with the state-of-the-art tools PESSOA \cite{Mazo2010} and SCOTS \cite{SCOTS}. \begin{table} \caption{Comparison of $Find\_Pattern$ and $Find\_Pattern2$.} \label{tab:FP1-FP2} \begin{tabular}{|c|c|c|} \hline Example & \multicolumn{2}{c|}{Computation time} \\ \cline{2-3} & $Find\_Pattern$ & $Find\_Pattern2$ \\ \hline DC-DC Converter & $1609$ s & $< 1$ s \\ Polynomial example & Time Out & $150$ s \\ Building ventilation & $272$ s & $228$ s \\ \hline \end{tabular} \end{table} Table \ref{tab:FP1-FP2} compares functions $Find\_Pattern$ and $Find\_Pattern2$, and shows that the new version greatly improves the computation times. The new version is all the more efficient as the length of the patterns increases, and as obstacles prune the search tree of patterns. This is why we observe significant improvements on the examples of the DC-DC converter and the polynomial example, and not on the building ventilation example, which only requires patterns of length $2$ and presents no obstacle.
\begin{table} \caption{Comparison with state-of-the-art tools.} \label{tab:SOTA} \begin{tabular}{|c|c|c|c|} \hline Example & \multicolumn{3}{c|}{Computation time} \\ \cline{2-4} & FP2 & SCOTS & PESSOA \\ \hline DC-DC Converter & $< 1$ s & $43$ s & $760$ s \\ Polynomial example & $150$ s & $131$ s & $\_\_$ \\ Unicycle \cite{zamani2012symbolic,reissig2015feedback} & $3619$ s & $492$ s & $516$ s \\ \hline \end{tabular} \end{table} Table \ref{tab:SOTA} shows a comparison of function $Find\_Pattern2$ with the state-of-the-art tools SCOTS and PESSOA. On the example of the DC-DC converter, our algorithm manages to control the whole state-space $R=\lbrack 1.55 , 2.15 \rbrack \times \lbrack 1.0 , 1.4 \rbrack$ in less than one second, while SCOTS and PESSOA only control a part of $R$, and with greater computation times. Note that these computation times vary with the number of discretization points used in both tools, but even with a very fine discretization, these tools never managed to control the whole box $R$. For the polynomial example, we manage to control the whole boxes $R_1$ and $R_2$, as SCOTS does, and in a comparable amount of time. However, PESSOA does not natively support this kind of nonlinear system. We compared our method on a final case study on which PESSOA and SCOTS perform well (see \cite{zamani2012symbolic,reissig2015feedback} for details of this case study, and see Appendix for a simulation obtained using our method). For this case study, we did not obtain computation times as good as theirs. This comes from the fact that this example requires a high number of switched modes, long patterns, as well as a high number of boxes to tile the state-space. Note that for this case study we used an automated pre-tiling of the state-space, which permits decomposing the reachability problem into a sequence of reachability problems. This is in fact the most difficult case of application of our method.
This reveals that our method is more adapted when either the number of switched modes or the length of patterns is not high (though such cases can be handled at the cost of high computation times). Another advantage is that we do not require a homogeneous discretization of the state space. We can thus tile large parts of the state-space using only a few boxes, and this often permits considering far fewer symbolic states than with discretization methods, especially in high dimensions (see \cite{LeCoent2016}). \section{Conclusion} \label{sec:conclu} We presented a method of control synthesis for nonlinear switched systems, based on a simple state-space bisection algorithm and on validated simulation. The approach permits dealing with stability, reachability, safety and forbidden region constraints. Varying parameters and perturbations can easily be taken into account with interval analysis. The approach has been numerically validated on several examples taken from the literature: a linear one with constant parameters, and two nonlinear ones with varying perturbations. Our approach compares well with the state-of-the-art tools SCOTS and PESSOA. We would like to point out that the exponential complexity of the algorithms presented here, which is inherent to guaranteed methods, is not prohibitive. Two approaches have indeed been developed to overcome this exponential complexity. A first approach is the use of compositionality, which permits splitting the system into two (or more) sub-systems, and performing control synthesis on these sub-systems of lower dimensions. This approach has been successfully applied in \cite{LeCoent2016} to a system of dimension $11$, and we are currently working on applying this approach to the more general context of contract-based design \cite{sangiovanni2012taming}.
A second approach is the use of Model Order Reduction, which allows us to approximate the full-order system \eqref{eq:sys} with a reduced-order system, of lower dimension, on which it is possible to perform control synthesis. The bounding of the trajectory errors between the full-order and the reduced-order systems can be taken into account, so that the induced controller is guaranteed. This approach, described in \cite{le2016control}, has been successfully applied to (space-discretized) partial differential equations, leading to systems of ODEs of dimension up to $100\,000$. The present work is a potential ground for the application of such methods to the control of nonlinear partial differential equations, with the use of proper nonlinear model order reduction techniques. \begin{ack} This work is supported by Institut Farman (project {\scshape SWITCH\-DESIGN}), by the French National Research Agency through the ``iCODE Institute project'' funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02, and by Labex DigiCosme (project ANR-11-LABEX-0045-DIGICOSME). \end{ack} \bibliographystyle{plain}
\section*{Genetic variation} Genetic populations are modelling constructs. As in other scientific endeavours, when developing mathematical models of evolution and statistical tools to analyze data, we have to simplify reality. In such models, we often assume that there are well-defined populations, which are modelled as persisting in particular areas, and that represent somewhat separate and well-defined evolutionary lineages. Such models can be incredibly useful in helping shape our intuition and analysis but can also constrain our thinking and lead to confusion. Well-delimited genetic populations are rarely found within species in the real world. Certainly, such populations are very rare in humans. We can of course talk of “populations” consisting of everyone in the UK, Finland, or whole continents such as Africa, but these are obviously not populations in the sense that is typically meant in evolutionary genetics, in which individuals within a group share closer genealogical relationships to each other than to any individual outside the group. Even if we take subsets of people within these regions, e.g., “people who self-identify as White in the UK” or “people whose four grandparents all come from the UK”, these groups will not form well delimited genetic subsets, as the UK is not a homogeneous population with respect to migration in and out of the UK \citep{leslie2015fine,olalde2018beaker,patterson2022large}. With sparser sampling of a limited number of geographic locations, we can sometimes identify groups that are closer to better defined “genetic populations”, but these simply reflect the limitations of our sampling. The reality is that we’re all related to each other to varying extents, in a complex web of genealogical relations that form an unimaginably complicated family tree. As a result, genetic variation varies fairly smoothly among individuals, often in ways that are correlated with environments.
Patterns of human genetic variation are shaped by geographic distance, geographical barriers, as well as broad-scale population movements. It might be tempting to think that the fairly continuous nature of modern human variation reflects admixture only over the past few hundred years. However, human groups of the past have often been geographically widespread and ephemeral, frequently forming, only to rapidly collapse together with other nearby and sometimes much more distant groups. This is a point that advances in ancient DNA technology have repeatedly made abundantly clear \citep{skoglund2018ancient,liu2021insights}. Much of the common genetic variation found within groups is shared across human groups (\citealp{Lewontin1972}; \citealp[for a visualization see][]{biddanda2020variant}). Rarer genetic variation is usually more localized \citep{gravel2011demographic,nelson2012abundance}. In fact, individual rare variants are better thought of not as properties of groups that bear any resemblance to the ancestry groupings used in genetics research, but rather as features of extended families within such groups. Local adaptation is sometimes highlighted as a reason that particular phenotypic outcomes might be common in some groups and rare in others (i.e. due to combinations of alleles varying substantially in frequencies across groups). Although a growing number of convincing cases of genetic variants shaped by local adaptation have come to light \citep{fan2016going}, these are still a tiny minority of the many loci associated with phenotypic and disease outcomes. As a consequence of the complicated genetic structure of humanity, there’s no single “right” level of granularity to use for description for all questions. Human groups are structured from broad geographic scales to fine-scale patterns well below the level of a country \citep{leslie2015fine,han2017clustering,liu2018genomic,narasimhan2019formation,raveane2019population,bycroft2019patterns,byrne2020dutch}. 
These patterns are generally not well captured by discrete labels. The choice of level of granularity will depend on the specific questions being addressed. For example, the accuracy of polygenic score predictions is lower for people of Italian and Polish ancestry in the UK than for people of United Kingdom ancestry \citep[][the relative contributions of changes in linkage disequilibrium, specific genetic, environmental, and gene-environment interactions to such changes in accuracy are under current investigation and debate]{Prive2022}. Whether the researcher thinks a set of polygenic scores are appropriate for use in “European ancestries” or that they need much more fine-grained GWAS will depend on the questions being pursued and the prediction resolution required by any future uses. Thus, the nature of the sample descriptions needed in this example, and in many other cases in human genetics, will depend on the problem at hand. Given the complexity of human genetic variation, all verbal descriptions of genetic structure will be incomplete and open to miscommunication. While this has often been an issue in human genetics, this issue is coming to the fore as the depth of genetic sampling of humans increases. We need to move towards terminology for genetic sample descriptors that are clear to people across fields and encourage good use. \section*{Why then use descriptors/labels in analysis?} If human genetic structure is relatively continuous why then do we need to use (somewhat) discrete sample descriptors? Lots of valid explanations and uses have been put forward during the presentations at the \citet{NAS_descriptors_1,NAS_descriptors_2,NAS_descriptors_3}. From my perspective, there are two major reasons for using genetic sample descriptors that I want to highlight. 
\subsection*{Data subsetting} Researchers set out to gather large enough samples that will offer sufficient statistical power to study their chosen problem, but in practice the sampling, genotyping, phenotyping, and analysis effort of a given project will always be limited. There are obviously many historical and ongoing reasons why sampling has focused on particular groups, but one technical issue is the limitations of statistical approaches and tools. Many methods in statistical and population genetics fit statistical models that assume relatively homogeneous groups. For example, standard GWAS rely on pseudo-randomization of genotypes at a locus across genomic backgrounds and environmental causes of trait variation, and so researchers often limit themselves to more genetically well-mixed groups to better satisfy these requirements \citep[although issues of residual heterogeneity will remain, ][]{haworth2019apparent,zaidi2020demographic}. On the population genomics side, methods to reconstruct population history are constrained to describing the history of a small number of groups, while methods to describe more realistically continuous histories are daunting and as yet underdeveloped. Because of these limitations, researchers subset their data in various ways, for example restricting GWAS samples to a particular ethnicity and geographic region. But having genotyped the individuals, they often further subset their data to a particular subset of genetic variation, e.g., a particular subregion of principal components analysis (PCA) space where many of their samples are present. Having subset the samples, descriptors are needed for these genetic subsets (“who was the method applied to?”). Some of the limitations of methods reflect historical legacies of working in modelling frameworks designed to analyze smaller genetic datasets and previous computational limitations.
Methods are improving, allowing GWAS to be performed across somewhat more heterogeneous samples and more flexible models of human population genetic history. However, conceptually, it is very hard to analyze even small subsets of human diversity all at once and so it is unlikely that subsetting genomic data will stop in practice any time soon. Thus, genetic sample descriptors will remain in use. \subsection*{Communication} The second and perhaps bigger reason for genetic sample descriptors is that scientists often rely on them to communicate with each other. In many human genomic analyses, assessing patterns of genetic structure is the first step. In such applications, researchers use prior population descriptors to orient results: “we see patterns, what do they reflect?”. For example, what features do the major axes of variation found in a principal components analysis correspond to? We need words to describe these axes and describe our interpretations. Furthermore, as a field we often pool together different datasets, e.g., in GWAS meta-analysis, or combine together different data types, e.g., GWAS effect sizes from one group with genotype data from another. In a practical sense, we need shorthand labels to communicate to each other what we are doing. \section*{Genetic ancestry} One common genetic sample descriptor applied to samples in human genomics is that of “genetic ancestry group”, with terms like “European genetic ancestry” or “East Asian genetic ancestry” frequently appearing in papers to describe the genetics of samples of individuals based on the analysis of their genotypes. “Genetic ancestry” is an evocative concept, as made clear by the success of personal genomics companies, which capitalize on the fact that our views on ancestry and identity long predate the development of genetics. \subsection*{Genetic ancestors} I first want to make the distinction between two related concepts of “genetic ancestors” and “genetic ancestry group”. 
The former is a well-defined concept and a central part of population genetics. Your genealogical ancestors, the people from whom you are biologically descended, are a well-defined set of people. More than a few hundred years back, each of us has tens of thousands of genealogical ancestors, a number that initially grows exponentially backward in time until it stabilizes when you are descended from everyone who left any descendants to the present day. However, with only 2 copies of your genome, which is inherited in big blocks, you can’t inherit genetic material from all of these ancestors. Instead, you inherit genetic material from only a very small subset of your ancestors \citep{donnelly1983probability,Coop_How_many_genetic_anc}. For example, $\sim$450 years ago you may have more than 32,000 living genealogical ancestors, but only $\sim$1000 of them contributed genetic material that ended up in your genome, and the proportion of ancestors who contribute genetic material drops farther back in time. The subset of your genealogical ancestors from whom you inherited genetic material are your “genetic ancestors”, a small fraction of your total ancestry. Several thousand years back, all modern humans share all of their genealogical ancestors. Farther back than that, anyone who left any descendants in the present day (and many did) is an ancestor to all humans living today \citep{manrubia2003genealogy,rohde2004modelling,Coop_genealogical}. What does that mean for our genetic relatedness? Well, that’s complicated. You and I share genealogical common ancestors at least as recently as the time when all modern humans share all their ancestors. However, only a limited subset of these people are our genetic ancestors, and we only inherit small fractions of our genome from any one of these common ancestors. Therefore, at a typical locus, your and my most recent genetic common ancestor lived much farther back in the past.
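The back-of-the-envelope numbers above can be reproduced with a standard approximation \citep{Coop_How_many_genetic_anc}: genealogical ancestors double each generation ($2^k$), while the expected number of genetic ancestors is capped by the number of blocks your genome is broken into, roughly $2(22 + 33(k-1))$ blocks $k$ generations back (22 autosomes plus roughly 33 crossovers per transmitted genome per generation). A minimal Python sketch; the block-count formula is an approximation, not an exact result:

```python
def genealogical_ancestors(k):
    """Number of genealogical ancestors k generations back (ignoring inbreeding)."""
    return 2 ** k

def expected_genetic_ancestors(k):
    """Rough expected number of *genetic* ancestors k generations back:
    capped by the expected number of ancestral genome blocks,
    ~2 * (22 autosomes + 33 crossovers per transmitted genome per generation)."""
    blocks = 2 * (22 + 33 * (k - 1))
    return min(2 ** k, blocks)

# ~450 years back at ~30 years per generation: k = 15
# genealogical: 32768 (the "more than 32,000" above);
# genetic: 968 (the "~1000" above)
```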
Genetic similarity, genealogy, and genetic ancestry are closely related concepts, in ways that are non-trivial to understand \citep{Coop_Genetic_ancs}. All four of my grandparents came from Northern England. As a result, I share slightly more genetic variants in common with other people whose grandparents all came from Northern Europe than I shared with people whose grandparents came from elsewhere in the world. That’s because, while everyone in the world share all of their genealogical common ancestors several thousand years ago, they are not weighted the same in different people’s genetic family trees. A particular ancestor could appear tens or hundreds of times tracing back along different paths in my family tree if they are many generations back. That same ancestor many generations back could appear only once in someone else’s family tree with only a single path to them. While they are an ancestor to both of us, I’m somewhat more likely to have inherited a (small) chunk of my genome from them than the other person is. Even though we all share many genealogical ancestors, I have many more paths back through my family tree to ancestors who are also ancestors many times over for other Northern Europeans than I do with someone from (say) Japan. As a result, I share slightly more genetic variants in common with another Northern European than I do with a person whose grandparents all came from Japan. My genetic resemblance to many Northern Europeans reflects the fact that we share somewhat more of our genealogical ancestry--but not in a way that maps simply onto statements that my ancestors are all European or that Europeans share some set of ancestors that people from elsewhere do not. Our genetic ancestors are a well-defined set of people, who lived in particular places and times. Obviously, in practice, we don’t know who they were, but we could hope to infer things about them. 
Population genomic data can inform us about our relatedness and our shared genetic common ancestors, through summaries of genetic similarity among individuals, which reflects the sharing of genetic material transmitted through meiosis \citep{rosenberg2002genealogical}. A lot of the statistical machinery of population genetics builds on these ideas to learn about evolutionary processes and history. \begin{wrapfigure}{r}{0.4\textwidth} \begin{center} \includegraphics[width=0.39\textwidth]{Figures/Martins_PCA.pdf} \end{center} \caption{{\small Figure from \citet{martin2016population} using 1000 genomes samples: ``Principal components analysis of all samples showing the relative homogeneity of AFR, EUR, EAS, and SAS continental groups and continental mixture of admixed samples from the Americas (ACB, ASW, CLM, MXL, PEL, and PUR).'' Cropped from Figure 1 in preprint (CC BY-NC 4.0), figure S1 in published paper. 1000 genome sample codes given at this \href{https://uswest.ensembl.org/Help/Faq?id=532}{link}.} } \label{Fig:Martin_PCA} \end{wrapfigure} However, much of our interpretation of these patterns comes from combining this information with geographic and sample descriptors of the analyzed genomic data. Thus, our interpretations and genetic sample labels always reflect, at least in part, the social context of how samples were chosen and described, and so they are partially social constructs. Through computational and statistical advances, the field of population genetics is getting much better at describing some properties of our vast number of shared genetic ancestors. A major recent breakthrough is the development of approaches to computationally reconstruct the so-called “ancestral recombination graph”, or approximations of it, for large genomic samples \citep{speidel2019method,kelleher2019inferring}.
The ancestral recombination graph describes the full set of genetic relationships among a set of samples in terms of their shared genetic ancestors \citep{hudson1990gene,marjoram1995ancestral}. Along with this breakthrough has come the hope that approaches building on the ancestral recombination graph will allow a fuller description of “genetic ancestry”. It is doubtless true that these advances will allow us a fuller picture of some of the properties of our vast clouds of genetic ancestors and underscore the point that we are all embedded in the same giant tree of humanity, something other methods can obscure. However, these representations will necessarily often be high dimensional and do not lend themselves easily to verbal summaries. \section*{What then is a genetic ancestry group?} When human genetic researchers use ancestry group terms such as “European ancestry” or “East Asian ancestry” as sample descriptors, these are (nearly) always descriptions of genetic similarity to other present-day individuals by some summary statistic \citep{mathieson2020ancestry}. \begin{wrapfigure}{r}{0.46\textwidth} \begin{center} \includegraphics[width=0.45\textwidth]{Figures/Martins_structure_preprint.pdf} \end{center} \caption[-2]{{\small Figure from \citet{martin2016population}; from their caption: ``ADMIXTURE analysis at K = 3 focusing on admixed Americas samples, with the NAT, CEU, and YRI as reference populations.'' They find that ``[t]he six populations from the Americas demonstrate considerable continental admixture, with genetic ancestry primarily from Europe, Africa, and the Americas''. NAT is a sample of Native Americans from \citet{mao2007genomewide}. Figure cropped from Figure 1 of preprint (CC BY-NC 4.0). }} \label{Fig:Martin_admixture} \end{wrapfigure} To illustrate this, we can work through some common ways in which ancestry groups are assigned in human genetics.
In doing this, I note that I’ve certainly referred to these approaches in these terms in the recent past, and it can be a very convenient way to explain the concepts at play. My discussion of this topic is forward-looking rather than a judgment of past uses, including my own. One of the most common ways that ancestry labels are assigned to samples is on the basis of how a person’s genome clusters with other samples in a genetic PC plot. For example, Figure \ref{Fig:Martin_PCA} shows the 1000 genomes samples positioned on their first two genetic principal components. If you projected my genome onto this plot, my genotype would doubtless cluster with the EUR samples. On that basis, researchers might choose to label me as belonging to the “European ancestry group”. However, the fact that I would fall close to samples labelled “European” is simply a statement that my genotype is similar to the genotypes of those people along the axes of variation captured by the top two PCs. It is also broadly a statement that I share a higher degree of relatedness with these individuals along these axes \citep{mcvean2009genealogical} but it is not a statement that my genetic ancestors form a delimited group with other such individuals. Conversely, imagine a person who comes with a sample descriptor “English”, e.g., based on a self-identified label. If this person has 6 great-grandparents from England and 2 whose ancestors trace recently back to West Africa via the Caribbean, their genome might fall roughly 3/4s of the way between the African and European 1000 genomes panel samples on this plot. On that basis, researchers may choose to exclude them from the “European ancestry” group of individuals. Indeed, researchers usually predefine some set of cutoffs, e.g., if a person falls more than 3 standard deviations from the centroid of the cluster of individuals labelled “European”, then they are not retained in the European ancestry data subset.
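The kind of cutoff just described can be made concrete with a small sketch (Python/NumPy). The per-component z-score criterion below is one common choice among several; the 3-standard-deviation threshold is the one mentioned above:

```python
import numpy as np

def retain_in_cluster(sample_pcs, reference_pcs, n_sd=3.0):
    """Keep samples whose coordinates lie within n_sd standard deviations
    of the reference cluster's centroid on every principal component."""
    centroid = reference_pcs.mean(axis=0)
    spread = reference_pcs.std(axis=0)
    z = np.abs((sample_pcs - centroid) / spread)
    return (z <= n_sd).all(axis=1)
```

A sample failing the test on any one component is dropped from the analysis subset; note that the result depends entirely on which reference samples define the centroid.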
As discussed above, there can be good methodological reasons for wanting a relatively genetically homogeneous group for analysis. However, the decision of whom to include is necessarily a somewhat arbitrary exercise in applying discrete labels to continuous variation. Other methods of assigning ancestry groups allow for a person’s ancestry to be drawn from multiple “ancestral” groups \citep[{\tt STRUCTURE} and {\tt ADMIXTURE} style approaches][]{pritchard2000inference,falush2003inference,alexander2009fast}; see Figure \ref{Fig:Martin_admixture} for an example. These methods would allow a breakdown, for example, of someone’s ancestry into 75\% European and 25\% African ancestry. In this case, alleles are modelled as being drawn from some hypothetical well-mixed populations; however, the “genetic ancestry” labels for those populations are being propagated by investigators from other labelled samples (e.g., a panel of reference samples). \begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.48\textwidth]{Figures/AncestryPaintingEg.pdf} \end{center} \caption{{\small Figure from \citet{moreno2013reconstructing} showing the genome of a Caribbean-descent individual recruited in South Florida, USA. The 22 autosomes are coloured by ``Continental-level local ancestry calls'' \citep{moreno2013reconstructing} using an updated version of PCAdmix \citep{brisbin2012pcadmix}. Reference samples used: YRI, CEU, and Native Americans from Mexico. Cropped from figure published under CC-BY 4.0. } } \label{Fig:moreno_admixture_blocks} \end{wrapfigure} Again, they are really statements about similarity: that 75\% of the genome is most similar to the genotypes of individuals from the CEU samples (CEPH European descent sample, Utah) and 25\% of their genotype looks similar to the genotypes from the YRI sample (Yoruba in Ibadan, Nigeria). Various other finer-grained methods and representations also exist.
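To make the 75\%/25\% example above concrete, here is a minimal sketch of the underlying statistical idea: treat each allele as drawn from a mixture of two reference allele-frequency panels and fit the mixture weight by maximum likelihood (Python/NumPy; a deliberately simplified, supervised two-panel caricature of the {\tt ADMIXTURE} model, not the actual software):

```python
import numpy as np

def estimate_mixture(genotypes, freq_a, freq_b, grid=999):
    """MLE (by grid search) of the fraction q of an individual's alleles
    drawn from panel A, under genotype ~ Binomial(2, q*pA + (1-q)*pB)."""
    qs = np.linspace(0.001, 0.999, grid)
    # allele frequency at each locus under each candidate q: shape (grid, loci)
    p = qs[:, None] * freq_a + (1 - qs[:, None]) * freq_b
    # binomial log-likelihood summed over loci (constants dropped)
    ll = (genotypes * np.log(p) + (2 - genotypes) * np.log(1 - p)).sum(axis=1)
    return qs[np.argmax(ll)]
```

Crucially, the estimate only says which reference frequencies the genotypes resemble; the ancestry label attached to the fitted proportion comes from how the reference panels were named.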
For example, there are approaches where blocks of an individual’s genome (haplotypes) are statistically assigned as coming from some limited set of (often) pre-specified ancestries \citep{falush2003inference,price2009sensitive,maples2013rfmix}, defined by representative samples (see Figure \ref{Fig:moreno_admixture_blocks}). Again, however, regions of the genome labelled as having African ancestry could simply be said to be more similar to YRI samples than to CEU samples. Thus, all these statements about ancestry groups are really statements about the genetic similarity to other samples whose population descriptors we choose to propagate as our “ancestry group” labels. Samples are genetically similar because they share more ancestral paths, and so all of these approaches can be phrased in terms of hypotheses about shared genetic ancestors. Framed in that way, they can be quite a reasonable set of analyses and hypotheses for future investigation to learn about population history. However, in most applications ancestry group labels are used as simple sample descriptors, which raises a large set of issues. \subsection*{Some issues raised by the use of genetic ancestry group labels} A wide variety of issues have been repeatedly raised over the use of genetic ancestry labels. A common and fair criticism is that the genetic ancestry descriptors used are often based on continental geographic labels and so overlap with racial labels \citep{weiss2009non,fujimura2011different,panofsky2017ambiguity,lewis2022getting}. The ancestral populations alluded to by genetic ancestry labels are at most simple statistical modelling constructs, but they can easily become reified into a discrete view of ancestral populations. While this problem is most apparent in the deliberate misuse of genetic ancestry to reinforce racist narratives of human diversity, it also opens a number of pitfalls for researchers themselves.
This is primarily because such labels obscure the inhomogeneity within “ancestry groups” and the continuum of relatedness across them, and in doing so can bias our thinking about genetic and environmental variation. For example, in GWAS individuals of “European genetic ancestry” are often grouped together for analyses across multiple countries. While there can be good methodological justifications for grouping samples, there are also implicit decisions about who shares enough similarity in genetics and environments to be grouped together. A number of technical issues can also be raised about the use of genetic ancestry labels. For example, even if we accept the idea of using ancestry groups based on genetic similarity, the resolution and naming of ancestry groups is a function of the reference samples used. For example, by some of the methods discussed above, a person living in the Middle East might find themselves assigned as having “European” ancestry if Middle Eastern samples are not included in the reference set. As a result of this, changes to the panels used to assess ancestry groups can also lead to confusing changes in ancestry labels; the most obvious setting where this occurs is personal genomics, but in practice it can also occur as datasets are reused across scientific papers. Nor will these issues be resolved by including finer scale sampling in reference panels, as in the limit, there will often be a fairly continuous spread of people’s genotypes, and so there is no natural place to carve human diversity to assign “ancestries”. Another important aspect is the time frame implicitly assumed by descriptions of ancestry groups. Indeed, although statements of ancestry usually bracket genetic ancestors at a specific time period, this aspect is often missing from the descriptions in papers. For example, in analyses of samples in the Americas, many people will have generations of recent ancestors who also lived in the Americas.
However, through the choice of ancestry reference panels the analysis of genetic ancestry is often implicitly targeted at describing the locations of the genetic ancestors of American people $>$600 years in the past. Moreover, it is now clear that there have been large-scale movements of people over the past ten thousand years, which make labels based on current-day sampling locations complicated. For example, my genetic ancestors likely lived in Europe, the Middle East, and the Russian Steppe ten thousand years ago \citep{lazaridis2014ancient,allentoft2015population,haak2015massive,chintalapati2022spatiotemporal}. So these statements of genetic ancestry are best thought of as descriptions of genetic ancestors in a bracketed time period from 600 years ago up to a few thousand years ago. Quite why the field has settled on this time period as a basis for comparison is often unclear. \subsection*{Responses} Partially in response to some of these criticisms, a number of alternative approaches have been put forward for population descriptors based on ancestry. One idea might simply be to switch from using terms like “European ancestry” to “European ancestries” to avoid implying that there is one homogeneous ancestral population underlying patterns of similarity. This proposal seems reasonable if we are to keep using ancestry group labels but does not move away from using broad geographic labels. A second, more substantial proposal is to move towards fine-grained ancestry labels, partially motivated by the reasonable objective of moving away from continental labels. \begin{wrapfigure}{r}{0.31\textwidth} \begin{center} \includegraphics[width=0.3\textwidth]{Figures/Coop_23andMe.pdf} \end{center} \caption{{\small My 23andMe ancestry composition (accessed June 2022).} } \label{Fig:Coop23} \end{wrapfigure} For example, 23andMe breaks down my ancestry into Northwestern European and British and Irish ancestry, and even finer-grained “ancestries” like “Greater London”.
This is based on the similarity of my haplotypes to other present-day people whose grandparents all come from these regions \citep{23andMe_ancestry,durand2021scalable}. These fine-scale approaches are also being used to examine ancestry in medical settings \citep{belbin2021toward}. However, if we have trouble defining what we mean precisely by ancestry at broad geographic scales, we will run into even more difficulties being precise about what we mean by phrases like “British and Irish” genetic ancestry, let alone “Greater London” genetic ancestry. Other approaches to ancestry have been laid out where a person’s ancestry could be reported for various different time epochs, which would remove the need for the implicit choice of time-period. Or we could imagine tracing genetic ancestors back across geographical space over the generations, which would allow a more continuous view of ancestry \citep{osmond2021estimating,wohns2022unified}. Such approaches may well be useful for population geneticists and genetic anthropologists interested in human history. Indeed, advances in population genetics and genetic anthropology combined with ancient DNA are making large inroads into describing human history, and all these fields should work to integrate a better set of descriptors of genetic ancestors. However, the vast amount of research in human genetics (notably that funded by the NIH) is not about selling personal genomics or studying human history. An additional justification often offered for the use of genetic ancestry is that in addition to genetics it also captures socio-environmental factors that can covary with ancestry. On this basis in many cases ancestry group labels are used to subset data or as a covariate in analyses to capture non-genetic effects. However, in doing so, there is an issue of mixing social, geographic, and genetic labels. 
For example, does a set of “East Asian ancestry” polygenic scores refer to people of East Asian ancestry in the US or from the Japanese biobank and why is East Asian the correct ancestry label for what may be a narrower subset of people? Such distinctions appear already to be important in a number of cases \citep{giannakopoulou2021genetic}. Yet too often, studies center the genetic ancestry label and so confound together relatedness, social environment, and sampling location. This conflation sets up a situation where it is all too easy to slip into viewing differences in genetic ancestry as a genetic cause of differences in phenotypic and health outcomes between groups. Along similar lines, within racial or ethnic groupings the proportion of an individual's genome from particular “ancestries” can be correlated with phenotypic outcomes due to environmental factors as well as generations of racism and discrimination. For example, in African Americans in the US the proportion of African ancestry is correlated with geography, socioeconomic outcomes, and patterns of migration out of the American south \citep{micheletti2020genetic,baharian2016great}. But all too often papers reporting correlations with genetic ancestry in recently admixed populations do not acknowledge the potential for socio-environmental confounding and can slip straight to discussions of genetic causes. \section*{Human genetics should move away from the concept of “genetic ancestry groups” and towards “genetic similarity” descriptors. } Arguably, much of human genetics cares about matching for genetic similarity, not ancestry, when making a set of comparisons or assembling a set of controls. 
Consider the following commonly posed questions: “What is the disease risk of someone of this specific genotype with this broader genetic background?”, “Are my biological replicates and controls appropriately matched for this gene expression study?”, or “What set of polygenic score weights should I use for prediction for this person or haplotype?”. In all of these cases, the genetics questions we are asking about are based on matching people based on genetic similarity, not a vague sense of who a person’s ancestors were. Researchers are also often interested in controlling for the environment, and given that in many cases the relevant environmental variables may be unknown or unmeasured, it may be reasonable to use non-genetic sample descriptors in these analyses as a control for unmeasured environmental factors. In some cases, a researcher may lack socially relevant sample descriptors and so want to use a genetic label as a proxy for a social label to account for unmeasured environmental factors. Indeed, whenever we include a genetic similarity variable (e.g., a principal component) in an analysis of traits, it may become a proxy for both genetic and correlated environments and social variables. In all such cases, we should be clear that correlates between trait outcomes and genetic similarity could be the result of both genetic and environmental causes rather than relying on the idea of “genetic ancestry” to telegraph that idea. As a field we should move away from genetic ancestry labels and towards simple statements of genetic similarity: “This sample/haplotype is genetically similar to the XX sample set (in comparisons to YYY samples using ZZZ metric)” is much closer to how population genetic methods can be used to provide genetic sample descriptors. For example, “Graham is genetically similar to the GBR 1000 genome samples (on the first 10 principal components)” rather than “Graham has Northwestern European genetic ancestry”. 
The former sounds a little more awkward, but that awkwardness reflects the truth of how these labels work and comes with many fewer built-in assumptions and pitfalls. From the technical standpoint, moving toward using genetic similarity labels puts a focus on how similar we need to make the match and by what measure we judge similarity. It also directs attention to the panels used to judge similarity and forces us to ask ourselves whether our panels are representative and fine-grained enough for the comparison we wish to make. Importantly, in my view, the term “genetically similar to” also helps to avoid the assumption of homogeneity within labels; “similar to” does not imply “same as”. Similarity-based sample descriptors also move us some way to acknowledging the continuous nature of genetic variation across human groups in our sample descriptions. I am more genetically similar to some samples than I am to others, but that does not imply that there are natural groupings. Nor do similarity-based labels imply how I, as an individual, might choose to identify or what distribution of environments I might experience. For example, a person may be genetically similar to South Asian 1000 genomes samples, yet in itself this similarity does not identify them as South Asian, whereas stating that a person has South Asian genetic ancestry comes much closer to making that linkage in people’s minds. Working out the genealogical history of individuals and of groups of people from around the world is a fascinating area of research, but it should not be the day job of the majority of researchers in the field of human genetics, who instead need accurate sample descriptors of current day genetic diversity that aid clear communication. \subsubsection*{Acknowledgments} Thanks to Vince Buffalo, Doc Edge, Jeff Groh, Emily Josephs, James Kitchens, Magnus Nordborg, Peter Ralph, Alexis Simon, and Silu Wang for comments on an earlier draft. \bibliographystyle{genetics}
\section{Introduction} \subsection{Background} An iterated function system (IFS) is a finite set of contractions $\{ S_i \colon X \to X\}_{i \in I}$, where $X$ is a closed subset of Euclidean space. For each IFS there is an associated limit set (or attractor) $F$ satisfying $F = \cup_{i \in I} S_i(F)$, which will typically be fractal in nature. IFSs and the dimension theory of the associated limit sets have been studied extensively since Hutchinson's important paper~\cite{Hutchinson1981}. In a seminal 1996 paper~\cite{Mauldin1996}, Mauldin and Urbański extended the theory to infinite conformal iterated function systems (CIFSs, defined in Definition~\ref{d:cifs}) consisting of countably many conformal contractions. The maps are assumed to be sufficiently separated, with the contraction ratios uniformly bounded above by some $\xi < 1$. One well-studied family of sets which are generated by CIFSs is the family of sets of numbers which have continued fraction expansions with restricted entries. There are several notions of fractal dimension which give information about the scaling properties of sets. For background on the Hausdorff and box dimensions, which are defined using global covers of the set, we refer the reader to~\cite{Falconer2014}. In this paper, however, we will focus more on the Assouad type dimensions, which are defined using local covers. The Assouad dimension was introduced in~\cite{Assouad1977thesis} and has been the subject of intensive study in the contexts of embedding theory (see~\cite{Mackay2010,Robinson2011assouad}) and fractal geometry (see~\cite{Fraser2020book}) in recent years. Fraser and Yu~\cite{Fraser2018-2} introduced a parameterised family of dimensions, called the \emph{Assouad spectrum}, which lie between box and Assouad dimension and will be a key focus of this paper. These dimensions are defined in Section~\ref{s:notation}.
The dimension theory of infinite IFSs has been studied extensively, for example in~\cite{Urbanski2022,Chousionis2020,Banaji2021,Mauldin1996,Mauldin1999,Mauldin1995:Fractals1995}. There are many similarities between finite and infinite iterated function systems, but also several differences. One notable difference is that the Hausdorff, box and Assouad dimensions coincide for the limit set of any finite CIFS, but can differ for infinite CIFSs, indicating that the presence of infinitely many maps can cause the limit set to have greater inhomogeneity. Mauldin and Urbański showed in~\cite[Theorem~3.15]{Mauldin1996} that for an (infinite) CIFS the Hausdorff dimension $\dim_{\mathrm H} F$ can be determined from a topological pressure function (defined in~\eqref{e:MUpressure}). For notions of dimension $\dim$ which are not countably stable, a na\"ive guess for a formula for the dimension of the limit set might be $\dim F = \max\{\dim_{\mathrm H} F,\dim \{ \, S_i(x) : i \in I \, \} \}$, where $x$ is any given point in the limit set. Mauldin and Urbański~\cite[Theorem~2.11]{Mauldin1999} proved that the upper box dimension does indeed satisfy this formula. As the main result of~\cite{Banaji2021}, we proved that the upper \emph{intermediate dimensions} $\overline{\dim}_{\,\theta}$ also satisfy the na\"ive formula, that is, $\overline{\dim}_{\,\theta} F = \max\{\dim_{\mathrm H} F,\overline{\dim}_{\,\theta} \{ \, S_i(x) : i \in I \, \} \}$ for all $\theta \in [0,1]$. The intermediate dimensions are a family of dimensions (introduced in~\cite{Falconer2020} and studied further in~\cite{Banaji2021moran,Banaji2021bedford,Burrell2021-2,Banaji2020}) which are parameterised by $\theta \in [0,1]$ and interpolate between the Hausdorff and box dimensions. We provided some (often counterintuitive) applications to the dimensions of projections, fractional Brownian images, and general H\"older images.
We refer the reader to~\cite{Fraser2021-1} for the general area of `dimension interpolation', which includes both the intermediate dimensions and Assouad spectrum. One of the most interesting features of the analysis in this paper is that the na\"ive formula does not always hold for the Assouad spectrum, which instead satisfies two bounds which can be sharp in general. In~\cite{Banaji2021}, we also proved bounds for the Hausdorff, box and intermediate dimensions of the limit set without assuming conformality or separation conditions. However, the Assouad dimension can be particularly sensitive to separation conditions even in the case of finite IFSs (see~\cite[Section~7.2]{Fraser2020book}), and in this paper we assume conformality and appropriate separation conditions throughout. \subsection{Structure of paper and summary of results} In Section~\ref{s:setting} we introduce notation and define conformal iterated function systems (CIFSs) and their limit sets. We prove geometric estimates for CIFSs which will be useful when proving dimension results later. We also define the notions of dimension we will work with, in particular the Hausdorff, upper box and Assouad dimensions (denoted $\dim_{\mathrm H}$, $\overline{\dim}_{\mathrm{B}}$ and $\dim_{\mathrm{A}}$ respectively), and the Assouad and upper Assouad spectra at $\theta \in (0,1)$ (denoted $\dim_{\mathrm{A}}^\theta$ and $\overline{\dim}_\mathrm{A}^\theta$ respectively). In Section~\ref{s:asd} we prove that under an appropriate separation condition the Assouad dimension of the limit set $F$ satisfies the expected formula \[ \dim_{\mathrm{A}} F = \max\{\dim_{\mathrm H} F, \dim_{\mathrm{A}} P\},\] where $P$ is the set of fixed points of the contractions.
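To illustrate the role of the fixed points in this formula, consider (as a simple illustrative case) a system of contractions on the line whose fixed points accumulate polynomially, say $p_n \asymp 1/n$. The set $\{ \, 1/n : n \in \mathbb{N} \, \}$ has Assouad dimension $1$ (see~\cite{Fraser2020book}), so any such limit set has full Assouad dimension regardless of how small its Hausdorff dimension may be. This is a genuinely infinite phenomenon: as noted above, for finite CIFSs all of these dimensions coincide.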
In Section~\ref{s:asp} we prove one of our main results, namely that the Assouad spectrum of the limit set is monotonic in $\theta$ and satisfies the bounds \begin{equation}\label{e:introbounds} \max\{h,\overline{\dim}_\mathrm{A}^\theta P \} \leq \dim_{\mathrm{A}}^\theta F \leq \max_{\phi \in [\theta,1]} f(\theta,\phi). \end{equation} Here, for $\theta \in (0,1)$ and $\phi \in [\theta,1]$, the function $f$ is defined by \[ f(\theta,\phi) \coloneqq \frac{(\phi^{-1} - 1) \overline{\dim}_\mathrm{A}^\phi P + (\theta^{-1} - \phi^{-1}) \overline{\dim}_{\mathrm{B}} F}{\theta^{-1} - 1}\] and can be thought of as an appropriately weighted average of the Assouad spectrum of $P$ and the upper box dimension of $F$. As with Mauldin and Urbański's proof for the upper box dimension, our proof uses an induction argument to show that the existence of efficient covers at larger scales implies the existence of efficient covers at smaller scales. We also prove several consequences of these bounds, showing for instance that if the Assouad spectrum of the set of fixed points satisfies a form (which we call the three-parameter form) that is often observed, then the Assouad spectrum of the limit set must too. In Section~\ref{s:sharp} we provide a family of examples which show that the bounds in~\eqref{e:introbounds} are sharp in general. The Assouad spectra of the family of examples which we use to do so display interesting behaviour not seen previously in dynamically generated sets. In particular, the spectra can have two phase transitions, and need not satisfy the three-parameter form. Figure~\ref{f:sharp} shows the graphs of the Assouad spectra of a selection of these sets. Finally, in Section~\ref{s:ctdfracsect} we turn our attention to the Assouad spectra of a well-studied class of infinitely generated self-conformal sets, namely sets of numbers which have continued fraction expansions with restricted entries. 
Many of the interesting properties of the Assouad spectra witnessed in Section~\ref{s:sharp} also hold in this setting. We calculate a precise formula for the Assouad spectrum of the full complex continued fraction set in terms of its Hausdorff dimension, which has in turn been estimated in~\cite{Mauldin1996,Gardner1983,Priyadarshi2016,Falk2018}. We also investigate parabolic IFSs by applying our methods to the `induced' uniformly contracting CIFS. This in particular allows us to calculate the Assouad spectrum of sets of numbers generated by backwards continued fraction expansions. \section{Setting and preliminaries}\label{s:setting} \subsection{Notation and notions of dimension}\label{s:notation} We denote the natural logarithm by $\log$, the cardinality of a set using $\#$, the (Euclidean) diameter of a subset of $\mathbb{R}^d$ by $|\cdot|$, and $d$-dimensional Lebesgue measure on $\mathbb{R}^d$ by $\mathcal{L}^d$. The symbol $\mathbb{N}$ will denote $\{1,2,3,\ldots\}$. In Sections~\ref{s:sharp} and~\ref{s:ctdfracsect} it will be convenient to write $Y \lesssim Z$ (or $Y \gtrsim Z$) to mean $Y \leq CZ$ (or $Y \geq CZ$, respectively). If $Y \lesssim Z$ and $Y \gtrsim Z$ then we write $Y \approx Z$. We will specify which parameters the implicit constant $C>0$ is allowed to depend on. The symbol $||\cdot||$ will denote either the Euclidean norm on $\mathbb{R}^d$ or the supremum norm of a continuous function, depending on context. We write \[ B(x,r) \coloneqq \{ \, y \in \mathbb{R}^d : ||y-x|| < r \, \}\ \] for the open ball of radius $r>0$ centred at $x \in \mathbb{R}^d$. All the sets we consider will be non-empty, bounded subsets of Euclidean space. For $F \subseteq \mathbb{R}^d$ let $N_r(F)$ be the smallest number of balls of radius $r$ needed to cover $F$. 
For $d \in \mathbb{N}$ and $r \geq 1$, we denote by $A_{d,r} \in \mathbb{N}$ the smallest integer such that for all bounded $U \subset \mathbb{R}^d$ there exist $U_1,\ldots,U_{A_{d,r}} \subseteq \mathbb{R}^d$, each of diameter $|U|/r$, such that \begin{equation}\label{e:doublingconst} U \subseteq \bigcup_{k=1}^{A_{d,r}} U_k. \end{equation} Let $F$ be a non-empty, bounded subset of $\mathbb{R}^d$ with the Euclidean metric. The \emph{Hausdorff dimension} is defined by \begin{equation}\label{e:hausdorffdef} \begin{aligned} \dim_\mathrm{H} F = \inf \{ \, s \geq 0 : &\mbox{ for all } \epsilon >0 \mbox{ there exists a finite or countable cover } \\ & \{U_1,U_2,\ldots\} \mbox{ of } F \mbox{ such that } \sum_i |U_i|^s \leq \epsilon \,\}, \end{aligned} \end{equation}% and the \emph{upper box dimension} is \[ \overline{\dim}_{\mathrm{B}} F \coloneqq \limsup_{r \to 0^+} \frac{\log N_r(F)}{-\log r}.\]% Turning now to notions of dimension which describe the local structure of sets, the \emph{Assouad dimension} is defined by \begin{multline}\label{e:assouaddef} \dim_\mathrm{A} F = \inf\left\{ \alpha : \mbox{ there exists }C>0\mbox{ such that for all } x \in F \mbox{ and } \right. \\ \qquad \left. 0<r<R, \mbox{ we have } N_r(B(x,R)\cap F) \leq C(R/r)^\alpha\right\}. \end{multline} For $\theta \in (0,1)$, the \emph{Assouad spectrum} of $F$ at $\theta$ is defined by fixing the scales $R = r^\theta$: \begin{multline*} \dim_{\mathrm{A}}^\theta F = \inf\left\{ \alpha : \mbox{ there exists }C>0\mbox{ such that for all } x \in F \mbox{ and } \right. \\ \qquad \left. 0< R \leq 1, \mbox{ we have } N_{R^{1/\theta}}(B(x,R)\cap F) \leq C R^{\alpha(1 - 1/\theta)}\right\}. \end{multline*}% Clearly $\overline{\dim}_{\mathrm{B}} F \leq \dim_{\mathrm{A}}^\theta F \leq \dim_{\mathrm{A}} F$ for all $\theta \in (0,1)$. The Assouad spectrum is not always monotonic in $\theta$ (see~\cite{Fraser2018-2}).
The \emph{upper Assouad spectrum} at $\theta$, however, \emph{is} monotonic, and is defined by \begin{multline*} \overline{\dim}_\mathrm{A}^\theta F = \inf\left\{ \alpha : \mbox{ there exists }C>0\mbox{ such that for all } x \in F \mbox{ and } \right. \\ \qquad \left. 0<r\leq R^{1/\theta} \leq R \leq 1, \mbox{ we have } N_r(B(x,R)\cap F) \leq C(R/r)^\alpha\right\}. \end{multline*} The Assouad spectrum was introduced in~\cite{Fraser2018-2} and has been calculated for various families of fractals in~\cite{Fraser2018assouadfamilies,Fraser2020-2,Burrell2020-1} and other works. Recently, Rutar~\cite{Rutar2022assouad} has given a complete description of the attainable forms of Assouad spectra of sets, showing that a wide variety of behaviour is possible in general. The Assouad spectrum can be used to give information about dimensions of orthogonal projections of sets~\cite{Falconer2021projections}, and has applications related to spherical maximal functions~\cite{Roos2020assouadspectrum} and conformal geometry~\cite{Garitsis2022conformal}. The \emph{quasi-Assouad dimension}, introduced in~\cite{Lu2016}, can be defined by \begin{equation}\label{e:quasiassouaddef} \dim_\mathrm{qA} F \coloneqq \lim_{\theta \to 1^-} \dim_{\mathrm{A}}^\theta F, \end{equation} or equivalently $\dim_\mathrm{qA} F \coloneqq \lim_{\theta \to 1^-} \overline{\dim}_\mathrm{A}^\theta F$ (see~\cite[Corollary~3.3.7]{Fraser2020book}). We always have \[ \dim_{\mathrm H} F \leq \overline{\dim}_{\mathrm{B}} F \leq \dim_{\mathrm{A}}^\theta F \leq \overline{\dim}_\mathrm{A}^\theta F \leq \dim_\mathrm{qA} F \leq \dim_{\mathrm{A}} F,\] and all inequalities can be strict in general. 
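To see that strict inequalities can occur even for very simple sets, fix $p>0$ and consider the convergent sequence set $F_p \coloneqq \{0\} \cup \{ \, n^{-p} : n \in \mathbb{N} \, \}$. Then $\dim_{\mathrm H} F_p = 0$ and $\overline{\dim}_{\mathrm{B}} F_p = \frac{1}{p+1}$, and Fraser and Yu~\cite{Fraser2018-2} showed that \[ \dim_{\mathrm{A}}^\theta F_p = \min\left\{ \frac{1}{(p+1)(1-\theta)}, \, 1 \right\} \qquad (\theta \in (0,1)), \] so the spectrum interpolates between the box dimension and $\dim_\mathrm{qA} F_p = \dim_{\mathrm{A}} F_p = 1$, with a single phase transition at $\theta = \frac{p}{p+1}$.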
We sometimes write $\dim_{\mathrm A}^1$ or $\overline{\dim}_{\mathrm A}^1$ to mean the quasi-Assouad dimension, and since $\overline{\dim}_\mathrm{A}^\theta F \to \overline{\dim}_{\mathrm{B}} F$ as $\theta \to 0^+$, we sometimes write $\dim_{\mathrm A}^{0} F$ or $\overline{\dim}_{\mathrm A}^0 F$ to mean the upper box dimension of $F$. The Assouad spectrum and upper Assouad spectrum are continuous in $\theta \in (0,1)$, see~\cite{Fraser2018-2,Fraser2019-3}. In~\cite{Fraser2019-3}, Fraser et al. show that we always have $\overline{\dim}_\mathrm{A}^\theta F = \sup_{\theta' \in (0,\theta]} \dim_\mathrm{A}^{\theta'} F$. Therefore if $\theta \mapsto \dim_{\mathrm{A}}^\theta F$ is monotonic (as holds for many sets of interest, see Lemma~\ref{l:monotoniclemma}) then we can use the Assouad spectrum and upper Assouad spectrum interchangeably. For a set $F \subseteq \mathbb{R}^d$ we define the quantity \begin{equation}\label{e:definephasetransition} \rho = \rho_F \coloneqq \inf \{ \, \theta \in [0,1] : \dim_{\mathrm{A}}^\theta F = \dim_\mathrm{qA} F \, \} = \inf \{ \, \theta \in [0,1] : \overline{\dim}_\mathrm{A}^\theta F = \dim_\mathrm{qA} F \, \}, \end{equation} which will often represent a phase transition in the Assouad spectrum of $F$. The two expressions are equal by~\cite[Corollary~3.3.3]{Fraser2020book}. There are also dual notions called the \emph{lower dimension}, \emph{lower spectrum} and \emph{quasi-lower dimension}, which describe the `thinnest' part of the set in question, see~\cite{Fraser2020book}. For infinitely generated self-conformal sets, these share some properties with the Assouad type dimensions. For instance, the lower spectrum is monotonic in $\theta$ by a similar argument to the proof of Lemma~\ref{l:monotoniclemma}. 
One notable feature of the lower type dimensions is that it is straightforward to see that they can take the value zero for infinitely generated self-similar sets whose contraction ratios tend to zero much faster than the rate of accumulation of the fixed points. This is in contrast to finitely generated self-similar sets, whose lower dimensions equal the Hausdorff dimension (see~\cite[Corollary~6.4.4]{Fraser2020book}). It would be possible to calculate formulae for the lower type dimensions of the sets in Lemma~\ref{l:existcifstechnical} using a direct covering argument, but we will not pursue this. \subsection{Conformal iterated function systems} We follow the setup of Mauldin and Urbański~\cite{Mauldin1996,Mauldin1999}, for which we need to introduce some notation. % Given a countable index set $I$, define $I_0 \coloneqq \{\varnothing\}$ and $I^* \coloneqq \bigcup_{n=1}^\infty I^n$. We call elements of $I^*$ \emph{finite words} and elements of $I^\mathbb{N}$ \emph{infinite words}. We usually denote words by the letter $w$, and we write $w = i_1 \cdots i_n$ and $w=i_1i_2\ldots$ instead of $w= (i_1,\ldots,i_n)$ and $w=(i_1,i_2,\ldots)$ respectively. We say that a word in $I^n$ has \emph{length} $n$, and an infinite word has \emph{length} $\infty$. If $w \in I^* \cup I^\mathbb{N}$ and $n \in \mathbb{N}$ does not exceed the length of $w$ then we write $w|_n \coloneqq w_1 \ldots w_n \in I^n$, and $w|_0 \coloneqq \varnothing$. If $w \in I_0 \cup I^* \cup I^\mathbb{N}$ and $v \in I_0 \cup I^*$ then we say that $v$ is a \emph{prefix} of $w$ if there exists $n \in \{0,1,2,\ldots\}$ such that $v = w|_n$. In the following definition, the assumption that the contractions are conformal maps (meaning that they locally preserve angles) is especially important.
In one dimension, conformal maps are functions with a non-vanishing H\"older continuous derivative, in two dimensions they are holomorphic functions with non-vanishing derivative, and in three dimensions they are M\"obius maps (by Liouville's theorem). % \begin{defn}\label{d:cifs} Let $d \in \mathbb{N}$ and let $X$ be a compact, connected subset of $\mathbb{R}^d$, equipped with the Euclidean metric. Consider a collection of maps $S_i \colon X \to X$, $i \in I$, where $I$ is a countable index set. This system forms a \emph{conformal iterated function system (CIFS)} if the following additional properties are satisfied: % \begin{enumerate} \item\label{i:osc} (Open set condition (OSC)) The set $X$ has non-empty interior $U \coloneqq \mathrm{Int}_{\mathbb{R}^d} X$, and $S_i(U) \subset U$ for all $i \in I$, and $S_i(U) \cap S_j(U) = \varnothing$ for all $i,j \in I$ with $i \neq j$. \item\label{i:cone} (Cone condition) $\inf_{x \in X} \inf_{r \in (0,1)} \mathcal{L}^d (B(x,r) \cap \mathrm{Int}_{\mathbb{R}^d} X)/r^d > 0$. \item\label{i:conformal} (Conformality) There exists an open, bounded, connected subset $V \subset \mathbb{R}^d$ such that $X \subset V$ and such that for each $i \in I$, $S_i$ extends to a $C^{1+\epsilon}$ diffeomorphism $\overline{S_i}$ from $V$ to an open subset of $V$. Moreover, $\overline{S_i}$ is \emph{conformal} on $V$, which means that for all $x \in V$ the differential $\overline{S_i}'|_x$ exists, is non-zero, is a similarity map (so $||\overline{S_i}'|_x (y)|| = ||\overline{S_i}'|_x||\cdot||y||$ for all $y \in \mathbb{R}^d$), and is $\epsilon$-H{\"o}lder continuous in $x$. Furthermore, there exists $\xi \in (0,1)$ such that $||\overline{S_i}'|| <\xi$ for all $i \in I$, where $||\cdot||$ is the supremum norm over $V$. \item\label{i:bdp} (Bounded distortion property (BDP)) There exists $K>0$ such that $||S_w'|_y|| \leq K||S_w'|_x||$ for all $x,y \in V$ and $w \in I^*$. 
% \end{enumerate} \end{defn} For $w \in I^n$ we define \[ S_w \coloneqq S_{w_1} \circ \cdots \circ S_{w_n},\] and we define $S_\varnothing$ to be the identity function on $X$. Since $|S_{w|_n}(X)| \leq \xi^n |X|$ by the uniform contractivity, the map \[ \pi \colon I^\mathbb{N} \to X, \qquad \pi(w) \coloneqq \bigcap_{n=1}^\infty S_{w|_n}(X) \] is well-defined and continuous. We are interested in the following set, which will typically be fractal in nature. \begin{defn} The \emph{limit set} or \emph{attractor} of a CIFS is defined by \[ F \coloneqq \pi(I^\mathbb{N}) = \bigcup_{w \in I^\mathbb{N}} \bigcap_{n=1}^\infty S_{w|_n}(X). \] \end{defn} For $w \in I^n$ define $F_w = F_{S_w} \coloneqq S_w(F)$ and $X_w = X_{S_w} \coloneqq S_w(X)$. Now, $F$ is clearly non-empty and satisfies the relation \begin{equation}\label{e:attractor} F = \bigcup_{i \in I} F_i. \end{equation} In fact there are many sets which satisfy~\eqref{e:attractor}, and $F$ is the largest of these by inclusion. If $I$ is finite then $F$ is compact (and is indeed the only non-empty compact set which satisfies~\eqref{e:attractor} by Hutchinson's application of the Banach contraction mapping theorem~\cite{Hutchinson1981}), but if $I$ is infinite then $F$ will not generally be closed. % For $w \in I^n$ define \begin{align}\label{e:definerw} \begin{split} r_w &= r_{S_w} \coloneqq \inf_{x,y \in X, x \neq y} \frac{||S_w(x)-S_w(y)||}{||x-y||}; \\ R_w &= R_{S_w} \coloneqq \sup_{x,y \in X, x \neq y} \frac{||S_w(x)-S_w(y)||}{||x-y||}, \end{split} \end{align} noting that $0 \leq r_w \leq R_w \leq \xi$. The value $R_w$ is the smallest possible Lipschitz constant for $S_w$.
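As a concrete example to keep in mind (included here purely for illustration), take $X \coloneqq [0,1]$ and the countable family of similarities $S_n(x) \coloneqq 4^{-n}x + 2^{-n}$ for $n \in \mathbb{N}$. The images $S_n(X) = [2^{-n}, 2^{-n}+4^{-n}]$ are pairwise disjoint since $2^{-n} + 4^{-n} < 2^{-(n-1)}$ for each $n$, so the OSC holds with $U = (0,1)$; the cone condition is immediate for an interval; and since the maps are similarities with $||S_n'|| = 4^{-n} \leq \xi$ for $\xi = \frac14$, the conformality and bounded distortion conditions hold (with $K = 1$) for the natural affine extensions to a suitable open interval $V \supset X$. For this system $S_w$ is a similarity with $r_w = R_w = ||S_w'||$ for every $w \in I^*$, and the fixed points $2^{-n}/(1-4^{-n})$ accumulate geometrically at $0$.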
Mauldin and Urbański~\cite{Mauldin1996} introduced \[ \psi_n(t) \coloneqq \sum_{w \in I^n} ||S_w'||^t \] for $n \in \mathbb{N}$, $t \in (0,\infty)$, and defined the topological pressure function $\overline{P} \colon (0,\infty) \to [-\infty,\infty]$ by % \begin{equation}\label{e:MUpressure} \overline{P}(t) \coloneqq \lim_{n \to \infty} \frac{1}{n} \log \psi_n (t). \end{equation} Mauldin and Urbański~\cite[Theorem~3.15]{Mauldin1996} proved that the Hausdorff dimension of the limit set is given by \begin{equation}\label{e:hausdorffpressure} h \coloneqq \dim_{\mathrm H} F = \inf\{ \, t \geq 0 : \overline{P}(t) \leq 0 \, \}. \end{equation} Throughout the paper, we reserve the letter $h$ for the quantity in~\eqref{e:hausdorffpressure}. In fact, this formula holds even if the cone condition~\eqref{i:cone} is not assumed, see~\cite[Theorem~19.6.4]{Urbanski2022}. However, the cone condition is used in the proof of~\cite[Lemma~2.11]{Banaji2021}, which is in turn used in the proofs of Theorem~\ref{t:mainasp} below. The condition will in particular be satisfied if $X$ is convex or has smooth boundary. Since the pressure function is decreasing, the \emph{finiteness parameter} of the system \begin{equation}\label{e:finiteness} \Theta \coloneqq \inf\{ \, t > 0 : \overline{P}(t) < \infty \, \} \in [0, \infty] \end{equation} is always a lower bound for the Hausdorff dimension. We will use many properties of CIFSs. It is well known (see~\cite[Lemmas~2.9 and~2.12]{Banaji2021}, for example) that for any CIFS there exists $D \geq 1$ such that for all $w \in I^*$, \begin{equation}\label{e:diameterslemma} D^{-1} ||S_w'|| \leq r_w \leq R_w \leq D||S_w'||; \end{equation} \begin{equation}\label{e:2nddiams} D^{-1} ||S_w'|| \leq |F_w| \leq |X_w| \leq D||S_w'||. 
\end{equation} In~\cite[Lemma~2.11]{Banaji2021} we proved that there exists $M \in \mathbb{N}$ such that for all $z \in \mathbb{R}^d$ and $r>0$, if $w_1,\ldots,w_l$ are distinct words in $I^*$ such that for all $i,j \in \{1,\ldots,l\}$, $w_i$ is not a prefix of $w_j$, and \begin{equation}\label{e:mdefn} B(z,r) \cap S_{w_i}(X) \neq \varnothing \mbox{ and } |S_{w_i}(X)| \geq r/2 \mbox{ for all } i \in \{1,\ldots,l\}, \end{equation} then $l \leq M$. % We will use the following consequences of conformality of the maps when proving results about the Assouad-type dimensions. \begin{lma}\label{l:finitelymanylarge} For any CIFS and any $\lambda > 0$ there are only finitely many $w \in I^*$ with $||S_w'|| \geq \lambda$. \end{lma} \begin{proof} Let $D$ be as in~\eqref{e:diameterslemma},~\eqref{e:2nddiams}. Note that there exists a ball $B(x,r) \subset \mathrm{Int}_{\mathbb{R}^d}(X)$. For $n \in \mathbb{N}$ let $L_n \coloneqq \{ \, w \in I^n : ||S_w'|| \geq \lambda \, \}$. For each $w \in L_n$, $S_w(\mathrm{Int}_{\mathbb{R}^d}(X))$ contains a ball $B_w$ of radius $\lambda r / D$ by~\eqref{e:diameterslemma}. Moreover, if $w, v \in L_n$ are distinct then by the OSC, $B_w \cap B_v = \varnothing$. Therefore by a Lebesgue measure argument, $\# L_n < \infty$. But if $N \in \mathbb{N}$ is large enough that $\xi^N < \lambda$, then $\# L_n = 0$ for all $n \geq N$. Thus $\# \{ \, w \in I^* : ||S_w'|| \geq \lambda \, \} = \sum_{n=1}^\infty \# L_n < \infty$. \end{proof} \begin{lma}\label{l:assouadgeo} There exists $D' > 0$ such that for all $Y \subseteq X$, $w \in I^*$ and $r>0$, \[ (D')^{-1} N_r(Y) \leq N_{||S_w'||r}(S_w(Y)) \leq D' N_r(Y);\] \[ (D')^{-1} N_r(Y) \leq N_{|X_w|r}(S_w(Y)) \leq N_{|F_w|r}(S_w(Y)) \leq D' N_r(Y).\] \end{lma}% \begin{proof} Let $D'$ be the constant $A_{d,2D}$ from~\eqref{e:doublingconst}. Let $Y \subset X$, $w \in I^*$ and $r>0$. For the upper bound note that there exist balls $B_1,\ldots,B_{N_r(Y)}$ of radius $r$ which cover $Y$.
Assume without loss of generality that each of these balls intersects $Y$. Then by the upper bound of~\eqref{e:diameterslemma}, for each $j$, \[ |S_w(B_j \cap X)| \leq D||S_w'||\cdot |B_j \cap X| \leq D||S_w'||\cdot |B_j| = D||S_w'||(2r).\] % The $B_j$ cover $Y$ so $\{S_w(B_j \cap X)\}_j$ covers $S_w(Y)$. For each $j$ we can cover $S_w(B_j \cap X)$ by $D'$ balls of radius $||S_w'||r$, so $N_{||S_w'||r}(S_w(Y)) \leq D' N_r(Y)$. For the lower bound, note that there exist balls $b_1,\ldots,b_{N_{||S_w'||r}(S_w(Y))}$ of radius $||S_w'||r$ which cover $S_w(Y)$, each intersecting $S_w(Y)$. By the lower bound of~\eqref{e:diameterslemma}, for each $k$, \[ |S_w^{-1}(b_k \cap S_w(Y))| \leq D||S_w'||^{-1} \cdot |b_k \cap S_w(Y)| \leq D||S_w'||^{-1} \cdot |b_k| = D||S_w'||^{-1} (2||S_w'||r) = 2Dr. \] The $\{S_w^{-1}(b_k \cap S_w(Y))\}_k$ cover $Y$. We can cover each of these sets by $D'$ balls of radius $r$, so $N_r(Y) \leq D' N_{||S_w'||r}(S_w(Y))$, proving the lower bound. In light of~\eqref{e:2nddiams} we may increase $D'$ further to ensure that the second string of inequalities also holds. \end{proof} The following lemma is similar to~\cite[Proposition~2.9]{Mauldin1999} and~\cite[Lemma~3.3]{Banaji2021}. Recall the notation from~\eqref{e:doublingconst}, and recall that $M$ is defined in~\eqref{e:mdefn}. \begin{lma}\label{l:samewithinlevelasd} Consider a CIFS and fix $n \in \mathbb{N}$. If $P$ and $Q$ are both subsets of $X$ which intersect $S_w(X)$ in exactly one point for each $w \in I^n$ then for all $x \in X$ and $R,r>0$, \begin{equation}\label{e:samewithinbound} N_r(P \cap B(x,R)) \leq (A_{d,6} + M) N_r(Q \cap B(x,R)). \end{equation} Moreover, $\dim_{\mathrm{A}} P = \dim_{\mathrm{A}} Q$, and $\dim_{\mathrm{A}}^\theta P = \dim_{\mathrm{A}}^\theta Q$ and $\overline{\dim}_\mathrm{A}^\theta P = \overline{\dim}_\mathrm{A}^\theta Q$ for all $\theta \in (0,1)$.
\end{lma} \begin{proof} The set of maps corresponding to words of length $n$ forms another CIFS, so we may assume without loss of generality that $n=1$. Suppose $x_1,\ldots,x_N \in X$ are such that the balls $B(x_1,r),\ldots,B(x_N,r)$ cover $Q$. For each $j=1,\ldots,N$ let $y_{j,1},\ldots,y_{j,A_{d,6}} \in \mathbb{R}^d$ be such that \[ B(x_j,3r) \subseteq \bigcup_{l=1}^{A_{d,6}} B(y_{j,l},r).\] Then this union covers $S_i(X)$ for each $i \in I$ such that $|S_i(X)| < r$ and $S_i(X) \cap B(x_j,r) \neq \varnothing$. By~\cite[Lemma~2.11]{Banaji2021} there exist $i_1, \ldots, i_M \in I$, depending on $j$ and not necessarily distinct, such that $S_{i_k}(X) \cap B(x_j,r) \neq \varnothing$ for $k=1,\ldots,M$, and such that if $i \in I \setminus \{i_1,\ldots,i_M\}$ and $|S_i(X)| \geq r$ then $S_i(X) \cap B(x_j,r) = \varnothing$. If $k=1,\ldots,M$ then we can cover the single element of $P \cap S_{i_k}(X)$ by a ball $B_{j,k}$ of radius $r$. Since $\{B(x_j,r)\}_j$ covers $Q$, \[P \subseteq \bigcup_j \left( \bigcup_{l=1}^{A_{d,6}} B(y_{j,l},r) \cup \bigcup_{k=1}^M B_{j,k} \right),\] proving~\eqref{e:samewithinbound}. It follows immediately that $\dim_{\mathrm{A}} P = \dim_{\mathrm{A}} Q$ and $\dim_{\mathrm{A}}^\theta P = \dim_{\mathrm{A}}^\theta Q$ for all $\theta \in (0,1)$. Since $\overline{\dim}_\mathrm{A}^\theta F = \sup_{\phi \in (0,\theta]} \dim_\mathrm{A}^{\phi} F$ for all $F \subseteq \mathbb{R}^d$ (see~\cite{Fraser2019-3}), it also follows that $\overline{\dim}_\mathrm{A}^\theta P = \overline{\dim}_\mathrm{A}^\theta Q$. % \end{proof} \section{Assouad dimension of the limit set}\label{s:asd} Theorem~\ref{t:asd} gives a simple formula for the Assouad dimension of the limit set under an additional separation condition. Recall that $h \coloneqq \dim_{\mathrm H} F$. \begin{thm}\label{t:asd} Consider a CIFS with notation as in Definition~\ref{d:cifs} and assume that $\overline{S_i}(V) \cap \overline{S_j}(V) = \varnothing$ for all distinct $i,j \in I$.
Let $P$ be a subset of $X$ intersecting $S_i(X)$ in exactly one point for each $i$. Then the limit set $F$ satisfies \[ \dim_{\mathrm{A}} F = \max\{h,\dim_{\mathrm{A}} P\}.\] \end{thm} \begin{proof} Let $x \in F$ and $0 < r < R < |F|$, and write $B \coloneqq B(x,R)$. We will cover $B \cap F$ efficiently at scale $r$. By the separation condition, the cylinders whose size is much greater than $R$ and which intersect $B$ must form a nested sequence. Within the deepest of these cylinders, we will count subcylinders of a given size between $r$ and $R$ using the Assouad dimension of $P$ and cover each using the box dimension of $F$, and we use $\dim_{\mathrm{A}} P$ to cover all cylinders which are smaller than $r$. Fix $\delta \in (0,\mbox{dist}(X,\mathbb{R}^d \setminus V))$, and let $X'$ be the closed $\delta$-neighbourhood of $X$. By the proof of~\cite[Lemma~2.9]{Banaji2021}, we may increase $D$ (which is defined near~\eqref{e:diameterslemma}) so that~\eqref{e:diameterslemma} holds even if the infimum and supremum in the definition of $r_w$ and $R_w$ respectively (see~\eqref{e:definerw}) are taken over $X'$ instead of $X$. First observe that by the assumed separation condition, there is a unique $w \in I^*$ such that $F_w \cap B \neq \varnothing$ and $|F_w| \geq D^2 R / \delta$ and such that if $v \in I^*$ satisfies $F_v \cap B \neq \varnothing$ and $|F_v| \geq D^2 R / \delta$ then $v$ is an initial subword of $w$. Then $B \cap F = B \cap F_w$. It is possible that $w$ could be the empty word, in which case recall that $S_{\varnothing}$ is defined to be the identity map. If $i \in I$ is such that $S_w^{-1}(B) \cap F_i \neq \varnothing$, then $|F_i| \leq D^3 R / (\delta ||S_w'||)$. Let $k_0 \geq 0$ be such that $D^3 R 2^{-(k_0+1)} / \delta < r \leq D^3 R 2^{-k_0} / \delta$. For $0 \leq k \leq k_0$, define \[ I_k \coloneqq \{ \, i \in I : F_i \cap S_w^{-1}(B) \neq \varnothing, \, D^3 R 2^{-(k+1)} / ( \delta ||S_w'||) < |F_i| \leq D^3 R 2^{-k} / (\delta ||S_w'|| ) \, \}.
\] Let $t > s > \max\{h,\dim_{\mathrm A} P \}$ and let $C>0$ be the constant from the definition of Assouad dimension~\eqref{e:assouaddef} corresponding to exponent $s$. Since $t > \overline{\dim}_{\mathrm{B}} F$ (recall that $\overline{\dim}_{\mathrm{B}} F = \max\{h,\overline{\dim}_{\mathrm{B}} P\} \leq s$), we may increase $C$ further so that for all $r' \in (0,|F|]$ we have $N_{r'}(F) \leq C (r')^{-t}$. Then \begin{align*} \# I_k &\leq \# \{ \, i \in I : F_i \subset B(S_w^{-1}(x),2D^3 R/(\delta ||S_w'||)), \\ &\phantom{--------} D^3 R 2^{-(k+1)} /( \delta ||S_w'||) < |F_i| \leq D^3 R/(\delta ||S_w'||) \, \} \\ &\leq M N_{D^3 R 2^{-(k+1)} / (\delta ||S_w'||)} (P \cap B(S_w^{-1}(x),2D^3R/(\delta ||S_w'||))) \quad \qquad \mbox{($M$ is from~\eqref{e:mdefn})} \\ &\leq 4^s M C 2^{ks}. \end{align*} If $i \in I_k$ then by Lemma~\ref{l:assouadgeo} and~\eqref{e:2nddiams}, \[ N_{r/||S_w'||}(F_i) \leq D' N_{r/(||S_w'||\cdot ||S_i'||)}(F) \leq C D' (D^4 R 2^{-k} / (\delta r))^t = C D' D^{4t} \delta^{-t} 2^{-kt} (R/r)^t.\] We also define \[ \mathit{SMALL} \coloneqq \{ \, j \in I : F_j \cap B(S_w^{-1}(x),DR/||S_w'||) \neq \varnothing, \, |F_j| \leq D^3 R 2^{-(k_0+1)} / (\delta ||S_w'||) \, \}. \] Then since $s > \dim_{\mathrm{A}} P$, recalling that $A_{d,3}$ is from \eqref{e:doublingconst}, \[ N_{r/||S_w'||}\left( \bigcup_{j \in \mathit{SMALL}} F_j \right) \leq A_{d,3} C \left(\frac{DR/||S_w'||}{r/||S_w'||}\right)^s = A_{d,3} C D^s (R/r)^s. \] Also, $S_w^{-1}(B) \cap F \subseteq B(S_w^{-1}(x),DR/||S_w'||) \cap F$, so by Lemma~\ref{l:assouadgeo}, \begin{align*} N_r(B \cap F) &\leq D' N_{r/||S_w'||} (S_w^{-1}(B) \cap F) \\ &\leq D' \left( \sum_{k = 0}^{k_0} (\# I_k)C D' D^{4t} \delta^{-t} 2^{-kt} (R/r)^t + N_{r/||S_w'||}\left( \bigcup_{j \in \mathit{SMALL}} F_j \right) \right) \\ &\leq D' \left( 4^s M C^2 D' D^{4t} \delta^{-t} \sum_{k = 0}^\infty 2^{-(t-s)k} + A_{d,3} C D^s \right) (R/r)^t . \end{align*} Since $t > \max\{h,\dim_{\mathrm A} P\}$ was arbitrary, the result follows. \end{proof} The additional separation condition that we imposed is satisfied in many natural settings.
For example, it holds for the family of CIFSs in Lemma~\ref{l:existcifstechnical}. It holds for the CIFS in Lemma~\ref{l:ctdfraccifs} (which generates sets of numbers whose continued fraction expansions have digits only in $I \subseteq \mathbb{N}$) if and only if $I$ does not contain a pair of consecutive integers. Nonetheless, it is natural to ask whether the assumption is really needed: \begin{ques} In the statement of Theorem~\ref{t:asd}, can the assumption that ${\overline{S_i}(V) \cap \overline{S_j}(V) = \varnothing}$ for all distinct $i,j \in I$ be removed? \end{ques} The lower bound $\dim_{\mathrm{A}} F \geq \max\{h,\dim_{\mathrm{A}} P\}$ is immediate from Lemma~\ref{l:samewithinlevelasd} even without the assumption, but it is not obvious to us how to prove the other inequality without the assumption. The Assouad dimension is related to \emph{porosity}. A set $F \subseteq \mathbb{R}^d$ is said to be porous if there exists $\alpha \in (0,1/2)$ such that for all $x \in F$ and $r>0$ there exists $y \in B(x,r)$ such that $B(y,\alpha r) \cap F = \varnothing$. \begin{cor} Suppose a CIFS on $\mathbb{R}^d$ satisfies $\overline{S_i}(V) \cap \overline{S_j}(V) = \varnothing$ whenever $i \neq j$. Let $P$ be a subset of $X$ intersecting $S_i(X)$ in exactly one point for each $i$, and assume the limit set $F$ satisfies $\dim_{\mathrm H} F < d$. Then $F$ is porous if and only if $P$ is porous. \end{cor} \begin{proof} A subset of $\mathbb{R}^d$ is porous if and only if its Assouad dimension is less than $d$ (see for example~\cite[Theorem~5.1.5]{Fraser2020book}). Therefore the result follows from Theorem~\ref{t:asd}. \end{proof} \section{Bounds for the Assouad spectrum}\label{s:asp} \subsection{The bounds} The following lemma shows that there is a certain uniformity in the definition of the upper Assouad spectrum.
\begin{lma}\label{l:uniformconst} For all non-empty bounded sets $F \subset \mathbb{R}^d$, $\epsilon > 0$, $0 < \beta < 1$ there exists $R_{F,\beta,\epsilon} \in (0,1)$ such that for all $\theta \in (0,\beta]$ and $x \in F$, if $0<r\leq R^{1/\theta} \leq R \leq R_{F,\beta,\epsilon}$ then \[N_r(B(x,R) \cap F) \leq (R/r)^{\overline{\dim}_\mathrm{A}^\theta F + \epsilon}.\] \end{lma} \begin{proof} Since $\overline{\dim}_\mathrm{A}^\theta F$ is a continuous function of $\theta$, for all $\theta \in (0,1)$ there exists $\eta = \eta_\theta > 0$ small enough that $\eta < \min\{\theta,1-\theta\}$ and $\overline{\dim}_{\mathrm A}^{\theta + \eta} F < \overline{\dim}_{\mathrm A}^{\theta - \eta} F + \epsilon/2$. There exists $\alpha \in (0,\beta)$ small enough that $\overline{\dim}_{\mathrm A}^\alpha F < \overline{\dim}_{\mathrm{B}} F + \epsilon/2$. Since $[\alpha,\beta]$ is compact, there exist finitely many points $\theta_1,\ldots,\theta_n \in [\alpha,\beta]$ such that $[\alpha,\beta] \subset \cup_{i=1}^n (\theta_i - \eta_{\theta_i},\theta_i + \eta_{\theta_i})$. By definition of the upper Assouad spectrum, for each $i$ there exists $C_i > 1$ such that for all $\theta \in (\theta_i - \eta_{\theta_i},\theta_i + \eta_{\theta_i})$, $x \in F$ and $0<r\leq R^{1/\theta} \leq R \leq 1$ (so $r \leq R^{1/(\theta_i + \eta_{\theta_i})}$), we have \[ N_r(B(x,R) \cap F) \leq C_i (R/r)^{\overline{\dim}_{\mathrm A}^{\theta_i - \eta_{\theta_i}} F + \epsilon/2} \leq C_i (R/r)^{\overline{\dim}_{\mathrm A}^{\theta} F + \epsilon/2}. \] There exists $C_0 > 1$ such that for all $\theta \in (0,\alpha]$, $x \in F$, $0<r\leq R^{1/\theta} \leq R \leq 1$ (so $r \leq R^{1/\alpha}$), we have \[ N_r(B(x,R) \cap F) \leq C_0 (R/r)^{\overline{\dim}_{\mathrm{B}} F + \epsilon/2} \leq C_0 (R/r)^{\overline{\dim}_\mathrm{A}^\theta F + \epsilon/2}.\] Let $C_{\beta,\epsilon} \coloneqq \max_{0 \leq i \leq n} C_i$.
Choose $R_{F,\beta,\epsilon} \in (0,1)$ small enough that $R_{F,\beta,\epsilon}^{(\beta^{-1} - 1)\epsilon/2} < C_{\beta,\epsilon}^{-1}$. Suppose $x \in F$, $\theta \in (0,\beta]$ and $0<r\leq R^{1/\theta} \leq R \leq R_{F,\beta,\epsilon} < 1$. Then \[ N_r(B(x,R) \cap F) \leq C_{\beta,\epsilon} (R/r)^{\overline{\dim}_\mathrm{A}^\theta F + \epsilon/2} \leq (R/r)^{\overline{\dim}_\mathrm{A}^\theta F + \epsilon}, \] as required. \end{proof} In~\cite[Section~8]{Fraser2018-2} it was shown that the Assouad and lower spectra of sets are not always monotonic, but the following lemma shows that for limit sets of a CIFS they are in fact monotonic. \begin{lma}\label{l:monotoniclemma} If $F$ is the limit set of a CIFS then the function $\theta \mapsto \dim_{\mathrm{A}}^\theta F$ is increasing in $\theta \in (0,1)$. \end{lma} \begin{proof} Suppose $0<\theta<\phi<1$. The idea is that if a part of $F$ is difficult to cover at a scale corresponding to $\theta$, then the image of this part of $F$ within a cylinder with an appropriately chosen contraction ratio will be difficult to cover at the scale corresponding to $\phi$. Moreover, we can iterate any given map to choose a cylinder with the desired contraction ratio up to a constant multiple. Let $t \in (0,\dim_{\mathrm{A}}^\theta F)$. Then there exist sequences $x_n \in F$ and $r_n \in (0,1)$ such that $r_n \to 0$ as $n \to \infty$ and \begin{equation}\label{e:assmoneqn} r_n^{t(1- \theta)} N_{r_n}(F \cap B(x_n,r_n^\theta)) \to \infty \mbox{ as } n \to \infty. \end{equation} Let $S$ be any map in the IFS and define $S^0$ to be the identity function on $V$, and $S^l \coloneqq S \circ \ldots \circ S$, $l$ times, and $F_l \coloneqq S^l(F)$. Fix $n \in \mathbb{N}$. Noting that $||(S^l)'||$ is decreasing in $l$, let $k = k(n)$ be the smallest natural number such that $||(S^k)'|| \leq (Dr_n^{\theta-\phi})^{\frac{1}{\phi - 1}}$, where $D$ is from~\eqref{e:diameterslemma}.
Then \begin{align*} N_{||(S^k)'||r_n}&(F \cap B(S^k(x_n),(||(S^k)'||r_n)^\phi)) \\ &\geq N_{||(S^k)'||r_n}(F \cap B(S^k(x_n),D||(S^k)'||r_n^\theta)) &&\text{by the definition of } k \\ &\geq N_{||(S^k)'||r_n}(S^k(F \cap B(x_n,r_n^\theta))) &&\text{by \eqref{e:diameterslemma}} \\ &\geq (D')^{-1} N_{r_n}(F \cap B(x_n,r_n^\theta)) &&\text{by Lemma~\ref{l:assouadgeo}}. \end{align*} It follows that \begin{align*} (||(S^k)'||&r_n)^{t(1-\phi)} N_{||(S^k)'||r_n}(F \cap B(S^k(x_n),(||(S^k)'||r_n)^\phi)) \\ &\geq (||(S^{k-1})'||\cdot ||S'|| K^{-1} r_n)^{t(1-\phi)}(D')^{-1} N_{r_n}(F \cap B(x_n,r_n^\theta)) &&\text{by the BDP} \\ &> (((Dr_n^{\theta-\phi})^{\frac{1}{\phi - 1}}) ||S'|| K^{-1} r_n)^{t(1-\phi)} (D')^{-1} N_{r_n}(F \cap B(x_n,r_n^\theta)) &&\text{by the definition of } k \\ &= D^{-t} ||S'||^{t(1-\phi)} K^{t(\phi-1)}(D')^{-1} r_n^{t(1- \theta)} N_{r_n}(F \cap B(x_n,r_n^\theta)) \\ &\to \infty \mbox{ as } n \to \infty &&\text{by \eqref{e:assmoneqn}}. \end{align*} Therefore $\dim_{\mathrm{A}}^\phi F \geq t$, and letting $t \to (\dim_{\mathrm{A}}^\theta F)^-$ gives $\dim_{\mathrm{A}}^\phi F \geq \dim_{\mathrm{A}}^\theta F$, as required. \end{proof} For $\theta \in (0,1)$ and $\phi \in [\theta,1]$ we introduce the continuous function \begin{equation}\label{e:fdefn} f(\theta,\phi) \coloneqq \frac{(\phi^{-1} - 1) \overline{\dim}_\mathrm{A}^\phi P + (\theta^{-1} - \phi^{-1}) \overline{\dim}_{\mathrm{B}} F}{\theta^{-1} - 1}. \end{equation} Recall that $\overline{\dim}_{\mathrm{B}} F = \max\{h,\overline{\dim}_{\mathrm{B}} P\}$ by~\cite[Theorem~3.5]{Banaji2021}. Note in particular that $f(\theta,\theta) = \overline{\dim}_\mathrm{A}^\theta P$ and $f(\theta,1) = \overline{\dim}_{\mathrm{B}} F$. It will be important to note that by the continuity of the upper Assouad spectrum, for fixed $\theta \in (0,1)$, the function $\phi \mapsto f(\theta,\phi)$ is continuous, and hence attains a maximum on the interval $\phi \in [\theta,1]$.
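Since the interplay between the two terms in~\eqref{e:fdefn} drives the bounds below, a quick numerical illustration may be helpful. The following Python sketch uses purely hypothetical parameter values (they are not choices made anywhere in this paper): $p=1$, $h=0.7$, with the upper Assouad spectrum of $P$ assumed to take the form $\min\{1/((1+p)(1-\phi)),1\}$. It checks the endpoint identities $f(\theta,\theta) = \overline{\dim}_\mathrm{A}^\theta P$ and $f(\theta,1) = \overline{\dim}_{\mathrm{B}} F$, and locates the maximising $\phi$ on a grid.

```python
# Numerical sanity check of f(theta, phi) from equation (e:fdefn).
# All parameter values are hypothetical illustrations, not from the paper:
# we take p = 1, h = 0.7, and assume the upper Assouad spectrum of P is
# min{1/((1+p)(1-phi)), 1}, so dim_B P = 1/(1+p), dim_qA P = 1, rho_P = p/(1+p).

p = 1.0
h = 0.7  # plays the role of dim_H F, with dim_B P < h < dim_qA P

def spec_P(phi):
    """Assumed upper Assouad spectrum of P at parameter phi."""
    return min(1.0 / ((1.0 + p) * (1.0 - phi)), 1.0) if phi < 1.0 else 1.0

dim_B_F = max(h, 1.0 / (1.0 + p))  # upper box dimension of F (= max{h, dim_B P})

def f(theta, phi):
    """The function f(theta, phi) of equation (e:fdefn)."""
    return (((1.0 / phi) - 1.0) * spec_P(phi)
            + ((1.0 / theta) - (1.0 / phi)) * dim_B_F) / ((1.0 / theta) - 1.0)

theta = 0.25
# Endpoint identities noted in the text:
assert abs(f(theta, theta) - spec_P(theta)) < 1e-12  # f(theta, theta) = spectrum of P
assert abs(f(theta, 1.0) - dim_B_F) < 1e-12          # f(theta, 1) = box dimension of F

# Maximise phi -> f(theta, phi) over a grid on [theta, 1]; for these values the
# maximum should sit at phi = rho_P = 0.5, with value
# h + ((1 - rho) * theta / ((1 - theta) * rho)) * (dim_qA P - h) = 0.8.
rho = p / (1.0 + p)
grid = [theta + k * (1.0 - theta) / 300 for k in range(301)]
best_phi, best_val = max(((phi, f(theta, phi)) for phi in grid),
                         key=lambda pair: pair[1])
predicted = h + ((1.0 - rho) * theta / ((1.0 - theta) * rho)) * (1.0 - h)
assert abs(best_phi - rho) < 1e-9
assert abs(best_val - predicted) < 1e-9
```

For these (hypothetical) parameters the grid maximum agrees with the closed-form value appearing in Case~4 of the proof of Corollary~\ref{c:special} below.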
The following technical lemma will be used in the proof of Theorem~\ref{t:mainasp}. \begin{lma}\label{l:fbound} If $0 < \theta' \leq \theta < 1$ and $\theta' \leq \phi' \leq 1$ then $f(\theta',\phi') \leq \max_{\phi \in [\theta,1]} f(\theta,\phi)$. \end{lma} \begin{proof} We may assume $\theta' < \theta$. Let $\phi_1 \in [\theta',1]$ be such that $\max_{\phi \in [\theta',1]} f(\theta',\phi) = f(\theta',\phi_1)$. If $\overline{\dim}_{\mathrm{B}} F > \overline{\dim}_{\mathrm A}^{\phi_1} P$ then clearly $\phi_1 = 1$ (so in fact $\overline{\dim}_{\mathrm A}^{\phi_1} P = \dim_\mathrm{qA} P$ by definition), and $f(\theta',\phi_1) = \overline{\dim}_{\mathrm{B}} F = f(\theta,1) = \max_{\phi \in [\theta,1]} f(\theta,\phi)$. Therefore we may henceforth assume that $\overline{\dim}_{\mathrm{B}} F \leq \overline{\dim}_{\mathrm A}^{\phi_1} P$. Using the quotient rule, if $\theta' < \phi_1$ then for all $\theta_1 \in (\theta',\min\{\phi_1,\theta\})$ we have $\frac{\partial f}{\partial \theta}(\theta_1,\phi_1) \geq 0$. Thus \[ f(\theta',\phi') \leq f(\theta',\phi_1) \leq f(\min\{\phi_1,\theta\},\phi_1). \] Therefore if $\phi_1 \leq \theta$, then \[ f(\theta',\phi') \leq f(\phi_1,\phi_1) = \overline{\dim}_{\mathrm A}^{\phi_1} P \leq \overline{\dim}_{\mathrm A}^{\theta} P = f(\theta,\theta) \leq \max_{\phi \in [\theta,1]} f(\theta,\phi).\] If, on the other hand, $\phi_1 > \theta$, then \[ f(\theta',\phi') \leq f(\theta,\phi_1) \leq \max_{\phi \in [\theta,1]} f(\theta,\phi). \] In either case, the required bound holds. \end{proof} Theorem~\ref{t:mainasp} provides bounds for the Assouad spectrum of the limit set of a CIFS, and is the main result of this section. These bounds will be illustrated by examples and figures in Section~\ref{s:sharp}, which also show that the bounds are sharp. Recall the definition of $f(\theta,\phi)$ from~\eqref{e:fdefn}, and recall that $h \coloneqq \dim_{\mathrm H} F$. \begin{thm}\label{t:mainasp} Let $F$ be the limit set of any CIFS with notation as above.
Let $P$ be any subset of $X$ which intersects $S_i(X)$ in exactly one point for each $i$. Then for all $\theta \in (0,1)$, \[ \max\{h,\overline{\dim}_\mathrm{A}^\theta P \} \leq \overline{\dim}_\mathrm{A}^\theta F = \dim_{\mathrm{A}}^\theta F \leq \max_{\phi \in [\theta,1]} f(\theta,\phi). \] \end{thm} \begin{proof} Fix any $\theta \in (0,1)$. By Lemma~\ref{l:samewithinlevelasd} we may assume without loss of generality that $P = \{ \, S_i(x) : i \in I \, \}$ for some $x \in \mathrm{Int}_{\mathbb{R}^d}(X) \cap F$. Since $\overline{\dim}_\mathrm{A}^\theta$ is monotonic for subsets, $\overline{\dim}_\mathrm{A}^\theta P \leq \overline{\dim}_\mathrm{A}^\theta F$, and by~\cite[Theorem~3.15]{Mauldin1996}, $h = \dim_\mathrm{H} F \leq \overline{\dim}_\mathrm{A}^\theta F$, so $\max\{h,\overline{\dim}_\mathrm{A}^\theta P \} \leq \overline{\dim}_\mathrm{A}^\theta F$. Lemma~\ref{l:monotoniclemma} says that the Assouad spectrum of $F$ is monotonic, so since $\overline{\dim}_\mathrm{A}^\theta F = \sup_{\theta' \in (0,\theta]} \dim_\mathrm{A}^{\theta'} F$, we have $\overline{\dim}_\mathrm{A}^\theta F = \dim_{\mathrm{A}}^\theta F$. It remains to show that $\overline{\dim}_\mathrm{A}^\theta F \leq \max_{\phi \in [\theta,1]} f(\theta,\phi)$. Let $t>\max_{\phi \in [\theta,1]} f(\theta,\phi)$ and fix $s \in (\max_{\phi \in [\theta,1]} f(\theta,\phi),t)$. Let \begin{equation}\label{e:defineeps} \epsilon \coloneqq \frac{1}{3}(s - \max_{\phi \in [\theta,1]} f(\theta,\phi)). \end{equation} Let $C_{\epsilon} > 0$ be large enough that for all $r \in (0,1]$, $N_r(F) \leq C_{\epsilon} r^{-(\overline{\dim}_{\mathrm{B}} F + \epsilon)}$. We use the same constants $A_{d,r}, D, D', M$ from~\eqref{e:doublingconst},~\eqref{e:diameterslemma},~\eqref{e:2nddiams}, Lemma~\ref{l:assouadgeo},~\eqref{e:mdefn}.
We introduce the constants: \begin{align*} C_{\mathrm{small}} &\coloneqq A_{d,3} 2^{d + \epsilon}, \\ C_{\mathrm{sum}} &\coloneqq M 2^{2d + 3\epsilon/2} D' C_{\epsilon} \sum_{k=0}^{\infty} 2^{-(k+1)(\epsilon/2)} , \\ C_{\mathrm{med}} &\coloneqq M 2^{d + \epsilon} D' C_{\epsilon}, \\ C_{\mathrm{big}} &\coloneqq M D' D^s, \\ C_{\mathrm{tot}} &\coloneqq 4 \max\{C_{\mathrm{small}}, C_{\mathrm{sum}}, C_{\mathrm{med}}, C_{\mathrm{big}}\}. \end{align*} Note that $C_{\mathrm{tot}} > C_{\mathrm{small}} > 1$. Fix $\lambda \in (0,1)$ small enough that \begin{equation}\label{e:lambdadef} \lambda^{(1-\theta^{-1})(s-t)} < C_{\mathrm{tot}}^{-1}. \end{equation} We now want to make all the contraction ratios small enough so that the constants in the induction argument below do not grow too fast. Suppose $||S_{i_1}'|| = \max_{i \in I}||S_i'|| \geq \lambda$. Then we can form the new CIFS $\{ \, S_i : i \in I \setminus \{i_1\} \, \} \cup \{ \, S_{i_1 j} : j \in I \, \}$. In a similar way, we can replace a map in this new CIFS whose derivative norm is largest, provided that norm is still at least $\lambda$. By Lemma~\ref{l:finitelymanylarge}, after repeating this finitely many times we will obtain a set $\{ \, T_j : j \in J \, \}$ with each $||T_j'|| < \lambda$. By induction, $\{T_j\}$ will form a CIFS with the same limit set as $\{S_i\}$, namely $F$. Moreover,~\eqref{e:diameterslemma},~\eqref{e:2nddiams}, Lemma~\ref{l:assouadgeo} and~\cite[Lemma~2.11]{Banaji2021} will hold with the \emph{same constants} $D, D', M$. Recalling that $P = \{ \, S_i(x) : i \in I \, \}$, define $Q \coloneqq \{ \, T_j(x) : j \in J \, \}$. Then $Q$ is the union of finitely many bi-Lipschitz copies of cofinite subsets of $P$. Thus if $\dim$ is any notion of dimension that is stable under bi-Lipschitz maps, finitely stable, and unchanged by the removal of finitely many points (for example any of the dimensions considered in this paper) then $\dim \{S_i(x) : i \in I\} = \dim \{ T_j(x) : j \in J\}$. The main step in the proof is the following claim.
Inspired by Mauldin and Urba\'nski's proof of~\cite[Lemmas~2.8 and~2.10]{Mauldin1999} for the upper box dimension, this is proved using an inductive argument to construct efficient covers at smaller scales using efficient covers at larger scales. \begin{claim} There exists a large constant $A \in [1,\infty)$ such that for all $n \in \mathbb{N}$, if $x \in X$, $\lambda^n \leq R \leq 1$ and $0 < r \leq R^{1/\theta}$, then \[N_r(F \cap B(x,R)) \leq A C_{\mathrm{tot}}^n (R/r)^s.\] \end{claim} \begin{proof}[Proof of claim] First note that $s > f(\theta,1) + \epsilon = \overline{\dim}_{\mathrm{B}} F + \epsilon$. Therefore we can choose $\beta \in (\theta,1)$ to be close enough to 1 that \begin{equation}\label{e:definebeta} \left( \frac{1}{\beta} - 1\right)(d+\epsilon) \leq \left( \frac{1}{\theta} - 1\right)(s-\overline{\dim}_{\mathrm{B}} F - \epsilon). \end{equation} Let $N \in \mathbb{N}$ be large enough that $\lambda^{N-1} < R_{Q,\beta,\epsilon/2}/2$ and $\lambda^{(N-1)/\theta} < \lambda^{(N-1)/\beta}/10$, where $R_{Q,\beta,\epsilon/2}$ is the constant from Lemma~\ref{l:uniformconst}. Since $s > f(\theta,1) = \overline{\dim}_{\mathrm{B}} F$, by choosing $A$ large enough we may assume the claim holds for $n=1,2,\ldots,N$. Suppose $n > N$ and assume the claim holds for $1,\ldots,n-1$. We may assume that $\lambda^n \leq R < \lambda^{n-1}$. Suppose $x \in X$ and $0 < r \leq R^{1/\theta}$. Define $\theta_r \in (0,\theta]$ and an integer $k_r \geq 0$ by $R=r^{\theta_r}$ and $R^{1/\beta} 2^{-(k_r + 2)} < r \leq R^{1/\beta} 2^{-(k_r + 1)}$. We break up the set of cylinders which intersect $B(x,R)$ into pieces depending on size. We will then bound the number of balls of radius $r$ needed to cover each piece separately.
The following sets depend on $x$ and $n$: \begin{align*} \mathit{SMALL} &\coloneqq \{\, w \in J^* : F_w \cap B(x,R) \neq \varnothing, \, |X_w| \leq R^{1/\beta} 2^{-(k_r + 1)} \, \}, \\ I_k &\coloneqq \{ \, w \in J^* : F_w \cap B(x,R) \neq \varnothing, \, R^{1/\beta} 2^{-(k + 1)} < |X_w| \leq R^{1/\beta} 2^{-k} \, \} &\text{for } 0 \leq k \leq k_r, \\ \mathit{MED} &\coloneqq \{\, w \in J^* : F_w \cap B(x,R) \neq \varnothing, \, R^{1/\beta} < |X_w| < R \, \}, \\ \mathit{BIG} &\coloneqq \{\, w \in J^* : F_w \cap B(x,R) \neq \varnothing, \, |X_w| \geq R \, \}. \end{align*} First consider $\mathit{SMALL}$. The cost of covering elements of $\mathit{SMALL}$ at scale $r$ is comparable to the cost of covering the corresponding elements of $Q$, so we will use the upper Assouad spectrum of $Q$ to obtain a bound. Since $2R \leq R_{Q,\beta,\epsilon/2}$, there exist at most $(2R/r)^{\overline{\dim}_\mathrm{A}^\theta Q + \epsilon}$ balls of radius $r$ which cover $Q \cap B(x,2R)$. If $w \in \mathit{SMALL}$ then $Q \cap X_w \subseteq B(x,2R)$, so the set of balls with the same centres and radii $3r$ will cover $\bigcup_{w \in \mathit{SMALL}} X_w$. By covering each of these larger balls with balls of radius $r$, \begin{equation}\label{e:smallcost} N_r\left( \bigcup_{w \in \mathit{SMALL}} X_w \right) \leq A_{d,3} 2^{d + \epsilon} (R/r)^{\overline{\dim}_\mathrm{A}^\theta Q + \epsilon} \leq C_{\mathrm{small}} (R/r)^s. \end{equation} The last inequality holds since $s > f(\theta,\theta) + \epsilon = \overline{\dim}_\mathrm{A}^\theta Q + \epsilon$. Now consider $I_k$. We will bound the number of such cylinders using the upper Assouad spectrum of $Q$, and bound the cost of each using the upper box dimension of $F$. This is where the form of the function $f(\theta,\phi)$ comes from. Let $\phi_k \in [\theta_r,\beta)$ be such that $R^{1/\beta} 2^{-(k+1)} = R^{1/\phi_k}$.
For $0 \leq k \leq k_r$, \begin{equation}\label{e:mediumcardinality} \begin{aligned} \# I_k &\leq \# \{ \, w \in J^* : X_w \subset B(x,2R), \, R^{1/\phi_k} \leq |X_w| \leq R \, \} \\ &\leq M N_{R^{1/\phi_k}} (Q \cap B(x,2R)) &&\text{($M$ is from \eqref{e:mdefn})} \\ &\leq M \left( \frac{2R}{R^{1/\phi_k}}\right)^{\overline{\dim}_{\mathrm A}^{\phi_k} Q + \epsilon/2} &&\text{by Lemma~\ref{l:uniformconst}}\\ &\leq M 2^{d + \epsilon/2} R^{(1-\phi_k^{-1})(\overline{\dim}_{\mathrm A}^{\phi_k} Q + \epsilon)} 2^{-(k+1)(\epsilon/2)} , \end{aligned} \end{equation} where in the last line we use that $R^{1-\beta^{-1}} \geq 1$. If $w \in I_k$ then by Lemma~\ref{l:assouadgeo}, \[ N_r(F_w) \leq D' N_{r/|X_w|} (F) \leq D' C_{\epsilon} (r/|X_w|)^{-(\overline{\dim}_{\mathrm{B}} F + \epsilon)} \leq D' C_{\epsilon} 2^{d + \epsilon} R^{(\phi_k^{-1} - \theta_r^{-1})(\overline{\dim}_{\mathrm{B}} F + \epsilon)}. \] Therefore \begin{align}\label{e:mediumcost} \begin{split} N_r\left( \bigcup_{w \in \cup_{k=0}^{k_r} I_k} X_w \right) &\leq \sum_{k=0}^{k_r} N_r \left( \bigcup_{w \in I_k} X_w \right) \\ &\leq C_{\mathrm{sum}} \max_{0 \leq k \leq k_r} R^{(1-\phi_k^{-1})(\overline{\dim}_{\mathrm A}^{\phi_k} Q + \epsilon) + (\phi_k^{-1} - \theta_r^{-1})(\overline{\dim}_{\mathrm{B}} F + \epsilon)} \\ &\leq C_{\mathrm{sum}} (R/r)^s. \end{split} \end{align} The last inequality is since $s > \max_{\phi \in [\theta,1]} f(\theta,\phi) + 2\epsilon \geq f(\theta_r,\phi_k) + 2\epsilon$ for each $k$, by the definition of $\epsilon$ in~\eqref{e:defineeps} and Lemma~\ref{l:fbound}. Now consider $\mathit{MED}$. We will use that $\beta$ is close to 1 to show that the cardinality of $\mathit{MED}$ is not too large. We then use the upper box dimension of $F$ to bound the cost of each element of $\mathit{MED}$. As in~\eqref{e:mediumcardinality}, \[ \# \mathit{MED} \leq M \left( \frac{2R}{R^{1/\beta} }\right)^{\overline{\dim}_{\mathrm A}^{\beta} Q + \epsilon} \leq M 2^{d + \epsilon} R^{(1-\beta^{-1})(d + \epsilon)}.
\] If $w \in \mathit{MED}$ then \[N_r(F_w) \leq D' N_{r/R}(F) \leq D' C_{\epsilon} (r/R)^{-(\overline{\dim}_{\mathrm{B}} F + \epsilon)}. \] Using the definition of $\beta$ in~\eqref{e:definebeta}, and since $r \leq R^{1/\theta}$, \begin{equation}\label{e:medcost} N_r\left( \bigcup_{w \in \mathit{MED}} X_w \right) \leq M 2^{d + \epsilon} R^{(1-\beta^{-1})(d + \epsilon)} D' C_{\epsilon} (r/R)^{-(\overline{\dim}_{\mathrm{B}} F + \epsilon)} \leq C_{\mathrm{med}} (R/r)^s. \end{equation} Finally, consider $\mathit{BIG}$. The conformality and OSC give an absolute bound for the cardinality: $\# \mathit{BIG} \leq M$ from~\eqref{e:mdefn}. We now use conformality (through Lemma~\ref{l:assouadgeo}) to compare the cost of each piece with the cost of its (larger) preimage, which can be bounded using the inductive hypothesis. Indeed, if $w \in \mathit{BIG}$ then \begin{align*} N_r( B(x,R) \cap F_w) &\leq D' N_{r/||T_{w}'||}(T_{w}^{-1}(B(x,R) \cap F_w)) &\text{by Lemma~\ref{l:assouadgeo}}\\ &\leq D' A C_{\mathrm{tot}}^{n-1} \left( \frac{DR/||T_{w}'||}{r/||T_{w}'||} \right)^s &\text{by inductive hypothesis} \\ &= A C_{\mathrm{tot}}^{n-1} D' D^s (R/r)^s. \end{align*} We were able to apply the inductive hypothesis at the crucial step because $r \leq R^{1/\theta}$ and $||T_{w}'|| \leq \lambda \leq 1$ so \[ r/||T_{w}'|| \leq (R/||T_{w}'||)^{1/\theta} \leq (RD/||T_{w}'||)^{1/\theta} \] and \[ DR/||T_{w}'|| \geq R/\lambda \geq \lambda^{n-1}.\] Now, \begin{equation}\label{e:bigcost} N_r\left( \bigcup_{w \in \mathit{BIG} } X_w \right) \leq A C_{\mathrm{big}} C_{\mathrm{tot}}^{n-1} (R/r)^s. \end{equation} Putting together~\eqref{e:smallcost},~\eqref{e:mediumcost},~\eqref{e:medcost},~\eqref{e:bigcost} and using the definition of $C_{\mathrm{tot}}$ gives \[ N_r(F \cap B(x,R)) \leq (C_{\mathrm{small}} + C_{\mathrm{sum}} + C_{\mathrm{med}} + A C_{\mathrm{big}} C_{\mathrm{tot}}^{n-1})(R/r)^s \leq A C_{\mathrm{tot}}^n (R/r)^s, \] completing the proof of the claim. 
\end{proof} We now complete the proof of Theorem~\ref{t:mainasp}. Suppose $x \in X$ and $0 < r \leq R^{1/\theta} \leq R \leq 1$, and let $n \in \mathbb{N}$ be such that $\lambda^n \leq R \leq \lambda^{n-1}$. Then \begin{align}\label{e:exponentialgap} N_r(F \cap B(x,R)) \leq A C_{\mathrm{tot}}^n (R/r)^s &\leq A C_{\mathrm{tot}}^n R^{(1-\theta^{-1})(s-t)} (R/r)^t \nonumber \\ &\leq A C_{\mathrm{tot}}^n \lambda^{(n-1) (1-\theta^{-1})(s-t)}(R/r)^t \\ &\leq A C_{\mathrm{tot}} (R/r)^t. \nonumber \end{align} The last inequality is by the definition of $\lambda$ in~\eqref{e:lambdadef}. Thus $\overline{\dim}_\mathrm{A}^\theta F \leq t$, as required. \end{proof} In~\eqref{e:exponentialgap}, we exploited the exponential gap between the scales $r \leq R^{1/\theta}$ and $R$ that is in the definition of the Assouad spectrum but not the Assouad dimension. This allowed us to complete the proof of Theorem~\ref{t:mainasp} without assuming the separation condition that is assumed in Theorem~\ref{t:asd}. \subsection{Consequences of Theorem \ref{t:mainasp}}\label{s:consequences} We now consider what can be deduced from Theorem~\ref{t:mainasp} about the general form of the Assouad spectrum of infinitely generated self-conformal sets. Throughout Section~\ref{s:consequences}, $F$ will denote the limit set of a CIFS, $P$ will denote an arbitrary subset of $X$ intersecting $S_i(X)$ in exactly one point for each $i$, and $h = \dim_{\mathrm H} F$. Recall that the quasi-Assouad dimension is defined in~\eqref{e:quasiassouaddef}. \begin{cor}\label{c:qa} We have $\dim_\mathrm{qA} F = \max\{h,\dim_\mathrm{qA} P\}$. \end{cor} \begin{proof} Since $\overline{\dim}_{\mathrm{B}} F = \max\{h,\overline{\dim}_{\mathrm{B}} P\}$, we have $\max_{\phi \in [\theta,1]} f(\theta,\phi) \leq \max\{h,\dim_\mathrm{qA} P\}$ for all $\theta \in (0,1)$, so the result follows from Theorem~\ref{t:mainasp} upon letting $\theta \to 1^-$. \end{proof} \begin{cor} If $h \geq \dim_\mathrm{qA} P$ then $\dim_{\mathrm{A}}^\theta F = h$ for all $\theta \in (0,1)$.
\end{cor} \begin{proof} Immediate from Corollary~\ref{c:qa}. \end{proof} \begin{cor} If $\dim_{\mathrm{A}}^\theta P = \overline{\dim}_{\mathrm{B}} P$ for all $\theta \in (0,1)$ then $\dim_{\mathrm{A}}^\theta F = \overline{\dim}_{\mathrm{B}} F$ for all $\theta \in (0,1)$. \end{cor} \begin{proof} This follows from Corollary~\ref{c:qa} and the fact that $\overline{\dim}_{\mathrm{B}} F = \max\{h,\overline{\dim}_{\mathrm{B}} P\}$. \end{proof} Recall that the phase transition $\rho_F$ for the Assouad spectrum of a set $F$ is defined in~\eqref{e:definephasetransition}. \begin{cor}\label{c:phase} If $h < \dim_\mathrm{qA} P$ then $\rho_F = \rho_P$. \end{cor} \begin{proof} It follows from Lemma~\ref{l:samewithinlevelasd} and Corollary~\ref{c:qa} that $\rho_F \leq \rho_P$, so it remains to prove the reverse inequality. Fix $\theta \in (0,\rho_P)$, so $f(\theta,\theta) = \overline{\dim}_\mathrm{A}^\theta P < \dim_\mathrm{qA} P$. If $\phi \in (\theta,1]$ then \[ f(\theta,\phi) \leq \frac{(\phi^{-1} - 1) \dim_\mathrm{qA} P + (\theta^{-1} - \phi^{-1}) \max\{h,\overline{\dim}_\mathrm{A}^\theta P\}}{\theta^{-1} - 1} < \dim_\mathrm{qA} P.\] Therefore $\overline{\dim}_\mathrm{A}^\theta F < \dim_\mathrm{qA} P$ by Theorem~\ref{t:mainasp}, so $\rho_F \geq \theta$. It follows that $\rho_F \geq \rho_P$, as required. \end{proof} In light of Corollary~\ref{c:phase}, when there is no confusion we will sometimes write $\rho$ for the common value $\rho_P = \rho_F$. \begin{defn}\label{d:threeparam} If $G \subset \mathbb{R}^d$ is non-empty and bounded then we say that the Assouad spectrum of $G$ has the \emph{three-parameter form} if either $\overline{\dim}_{\mathrm{B}} G = \dim_\mathrm{qA} G$, or else \[ \dim_{\mathrm{A}}^\theta G = \min\left\{ \overline{\dim}_{\mathrm{B}} G + \frac{(1-\rho_G) \theta}{(1-\theta)\rho_G} (\dim_\mathrm{qA} G - \overline{\dim}_{\mathrm{B}} G) , \dim_\mathrm{qA} G \right\} \] for all $\theta \in [0,1)$. 
\end{defn} The three parameters are the upper box dimension, the quasi-Assouad dimension, and the phase transition $\rho_G \in \left[1-\frac{\overline{\dim}_{\mathrm{B}} G}{\dim_\mathrm{qA} G},1\right]$ (defined in~\eqref{e:definephasetransition}). The Assouad spectrum of many natural sets, such as polynomial sequences and spirals, Bedford--McMullen carpets, Kleinian limit sets and Julia sets, happens to take the three-parameter form (see~\cite[Section~17.7]{Fraser2020book} and~\cite{Fraser2020-2}). It is perhaps noteworthy that infinitely generated self-conformal sets do not necessarily satisfy the three-parameter form (as we will see in Theorem~\ref{t:sharp}). However, they sometimes will satisfy this form, as the following result shows. \begin{cor}\label{c:special} Assume that the Assouad spectrum of $P$ is non-constant and has the three-parameter form as in Definition~\ref{d:threeparam}. Then the upper bound of Theorem~\ref{t:mainasp} is the three-parameter form for $F$, namely \begin{equation}\label{e:specialubf} \dim_{\mathrm{A}}^\theta F \leq \max_{\phi \in [\theta,1]} f(\theta,\phi) = \min\left\{ \overline{\dim}_{\mathrm{B}} F + \frac{(1-\rho_F) \theta}{(1-\theta)\rho_F} (\dim_\mathrm{qA} F - \overline{\dim}_{\mathrm{B}} F) , \dim_\mathrm{qA} F \right\}. \end{equation} In particular, if $h \leq \overline{\dim}_{\mathrm{B}} P$ then the bounds in Theorem~\ref{t:mainasp} coincide, with $\dim_{\mathrm{A}}^\theta F = \dim_{\mathrm{A}}^\theta P$ for all $\theta \in [0,1]$. If, on the other hand, we have $\overline{\dim}_{\mathrm{B}} P < h < \dim_\mathrm{qA} P$, then the upper and lower bounds of Theorem~\ref{t:mainasp} differ for all $\theta \in (0,\rho_P)$. \end{cor} \begin{proof} \emph{Case 1:} If $h \geq \dim_\mathrm{qA} P$ then clearly $\max_{\phi \in [\theta,1]} f(\theta,\phi) = f(\theta,1) = h$.
\emph{Case 2:} If $h \leq \overline{\dim}_{\mathrm{B}} P = \dim_\mathrm{qA} P$ then $\max_{\phi \in [\theta,1]} f(\theta,\phi) = f(\theta,\theta) = \overline{\dim}_{\mathrm{B}} P$. \emph{Case 3:} If $h \leq \overline{\dim}_{\mathrm{B}} P < \dim_\mathrm{qA} P$ then $\max_{\phi \in [\theta,1]} f(\theta,\phi) = f(\theta,\phi') = \overline{\dim}_\mathrm{A}^\theta P$ for all $\phi' \in [\theta,\rho_P]$, and the bounds coincide. \emph{Case 4:} If $\overline{\dim}_{\mathrm{B}} P < h < \dim_\mathrm{qA} P$ then a direct calculation shows that for $\theta \in (0,\rho_P)$, \[ \max_{\phi \in [\theta,1]} f(\theta,\phi) = f(\theta,\rho_P) = h + \frac{(1-\rho_P) \theta}{(1-\theta)\rho_P}(\dim_\mathrm{qA} P - h) > \dim_{\mathrm{A}}^\theta P.\] We see that in all cases the upper bound $\max_{\phi \in [\theta,1]} f(\theta,\phi)$ takes the three-parameter form, as claimed in~\eqref{e:specialubf}. \end{proof} One setting where Corollary~\ref{c:special} is applicable is where the maps are concentrated around the sets $F_p \coloneqq \{ \, i^{-p} : i \in \mathbb{N} \, \}$, where $p \in (0,\infty)$ is a constant. Fraser and Yu~\cite[Corollary~6.4]{Fraser2018-2} showed that \begin{equation}\label{e:fpspectrum} \dim_{\mathrm{A}}^\theta F_p = \min\left\{ \frac{1}{(1+p)(1-\theta)},1\right\}. \end{equation} This is the three-parameter form with $\overline{\dim}_{\mathrm{B}} F_p = (p+1)^{-1}$, $\dim_\mathrm{qA} F_p = 1$, $\rho_{F_p} = \frac{p}{p+1}$. \begin{lma}\label{l:existcifstechnical} Let $p \in (0,\infty)$, $t \in [p+1,\infty)$ and $h \in (1/t,1)$. Then there exists a CIFS on $X=[0,1]$ with $I=\{2,3,4,\ldots\}$ and limit set $F$ such that the following hold: \begin{itemize} \item For all $i \in \mathbb{N}$, $i \geq 2$ there exists $c_i \in (0,(i-1)^{-p} - i^{-p})$ such that $S_i(x) = c_i x + i^{-p}$ for all $x \in [0,1]$, so $S_i$ is a similarity map with contraction ratio $c_i$ and $i^{-p} = S_i(0) < S_i(1) \leq (i-1)^{-p}$. \item There exists $N \in \mathbb{N}$ such that $c_i = p i^{-t}$ for all $i \geq N$.
% \item $\dim_\mathrm{H} F = h$ \end{itemize} \end{lma} In fact any CIFS which satisfies the first two conditions will have finiteness parameter $\Theta = 1/t$ and $\dim_\mathrm{H} F \in (1/t,1]$. \begin{proof} By a mean value theorem argument, $(i-1)^{-p} - i^{-p} > p i^{-(p+1)}$ for all $i \in \mathbb{N}$, $i \geq 2$. By~\eqref{e:hausdorffpressure} we have $h = \inf\{\, s \geq 0 : \sum_{i=2}^\infty c_i^s < 1 \, \}$. Therefore since $1/t < h < 1$, we can choose $N \geq 2$ sufficiently large that $p^h \sum_{i=N+1}^\infty i^{-th} < 1$ % and $\sum_{i=2}^N ((i-1)^{-p} - i^{-p})^h \geq 1$. % By an intermediate value theorem argument, there exist $c_j \in (0,(j-1)^{-p} - j^{-p})$ for $2 \leq j \leq N$ such that \[ \sum_{j=2}^N c_j^h + p^h \sum_{i=N+1}^\infty i^{-th} = 1.\] It now follows from~\cite[Theorem~3.15]{Mauldin1996} that $\dim_{\mathrm H} F = h$. \end{proof} We now use Lemma~\ref{l:existcifstechnical} to give an example with $\overline{\dim}_{\mathrm{B}} F = \overline{\dim}_{\mathrm{B}} F_p$ and the Assouad spectrum satisfying the three-parameter form. \begin{cor} Consider a CIFS satisfying the three conditions in Lemma~\ref{l:existcifstechnical} with $t>p+1$ and $h = \dim_\mathrm{H} F \in (1/t,(p+1)^{-1}]$. Then for all $\theta \in [0,1]$, \[\dim_{\mathrm{A}}^\theta F = \dim_{\mathrm{A}}^\theta F_p = \min\left\{ \frac{1}{(1+p)(1-\theta)},1\right\}. \]% \end{cor} \begin{proof} This is immediate from Corollary~\ref{c:special}. \end{proof} \section{Sharpness of the Assouad spectrum bounds}\label{s:sharp} Theorem~\ref{t:sharp} provides a family of examples with $\dim_{\mathrm B} F = \dim_\mathrm{H} F \eqqcolon h$ which show in particular that the bounds in Theorem~\ref{t:mainasp} are sharp in general. % The graph of the Assouad spectrum for a certain choice of parameters is shown in Figure~\ref{f:sharp}. \begin{thm}\label{t:sharp} Consider a CIFS satisfying the three conditions in Lemma~\ref{l:existcifstechnical} with $h \in ((p+1)^{-1},1)$.
There are three different cases depending on the parameter $t$: \begin{enumerate} \item\label{i:sharpupper} If $t=p+1$ then \begin{equation}\label{e:sharpupperspecial} \nonumber \dim_{\mathrm{A}}^\theta F = \left\{\begin{array}{lr} h + \frac{\theta}{p(1-\theta)}(1-h), & \text{for } 0\leq \theta < \frac{p}{1+p} \\ 1, & \text{for } \frac{p}{1+p}\leq \theta \leq 1 \end{array}\right. . \end{equation}% \item\label{i:sharpmiddle} If $p+1 < t < p+h^{-1}$ then \[ \dim_{\mathrm{A}}^\theta F = \left\{\begin{array}{lr} h + \frac{\theta}{p(1-\theta)}(1-h(t-p)), & \text{for } 0\leq \theta \leq \frac{(h+hp-1)p}{(1+p)(ht-1)}\\ \frac{1}{(1+p)(1-\theta)}, & \text{for } \frac{(h+hp-1)p}{(1+p)(ht-1)} < \theta < \frac{p}{1+p}\\ 1, & \text{for } \frac{p}{1+p}\leq \theta \leq 1 \end{array}\right. .\] \item\label{i:sharplower} If $t \geq p + h^{-1}$ then \[ \dim_{\mathrm{A}}^\theta F = \left\{\begin{array}{lr} h, & \text{for } 0\leq \theta \leq \frac{h+hp-1}{h(1+p)}\\ \frac{1}{(1+p)(1-\theta)}, & \text{for } \frac{h+hp-1}{h(1+p)} < \theta < \frac{p}{1+p}\\ 1, & \text{for } \frac{p}{1+p}\leq \theta \leq 1 \end{array}\right. . \] \end{enumerate} \end{thm} By Corollary~\ref{c:special}, the bounds from Theorem~\ref{t:mainasp} (or from Corollary~\ref{c:special}) differ if $0<\theta< \rho_{F_p} = \frac{p}{1+p}$. Moreover, in~\eqref{i:sharpupper} the upper bounds are attained, in~\eqref{i:sharpmiddle} $\dim_{\mathrm{A}}^\theta F$ lies strictly between the bounds for all $\theta \in (0,\frac{p}{1+p})$, and in~\eqref{i:sharplower} the lower bounds are attained. We note that the formula in~\eqref{i:sharplower} does not depend on the precise value of $t \in [p+h^{-1},\infty)$.
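As a quick consistency check (our addition, not part of the original argument), one can verify directly that the piecewise formula in case~\eqref{i:sharpmiddle} is continuous at both phase transitions: multiplying the equation
\[ h + \frac{\theta}{p(1-\theta)}\bigl(1-h(t-p)\bigr) = \frac{1}{(1+p)(1-\theta)} \]
through by $p(1-\theta)$ gives $hp + \theta(1-ht) = \frac{p}{1+p}$, whose unique solution is $\theta = \frac{(h+hp-1)p}{(1+p)(ht-1)}$; and at $\theta = \frac{p}{1+p}$ we have $\frac{1}{(1+p)(1-\theta)} = 1$.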
\begin{figure}[ht] \centering \pgfplotsset{ standard/.style={ axis line style=thick, trig format=rad, axis x line=middle, axis y line=middle, enlarge x limits=0.1, enlarge y limits=0.1, axis equal image, every axis x label/.style={at={(current axis.right of origin)},anchor=north west}, every axis y label/.style={at={(current axis.above origin)},anchor=south east}, ticklabel style={font=\tiny}, legend style={font=\tiny} } } \scalebox{1.5}{ \begin{tikzpicture}[/pgf/declare function={p=1.8;h=0.5;}] \begin{axis}[standard, reverse legend, xtick={p/(1+p),1}, ytick={h,1}, xticklabels={$\frac{p}{1+p}$,$1$}, yticklabels={$h$,$1$}, xlabel={\tiny $\theta$}, ylabel={\tiny $\dim_{\mathrm A}^\theta F$}, samples=100, xmin=-0, xmax=1, ymin=-0, ymax=1]% \path[name path=xAxis] (axis cs:0,0.005) -- (axis cs:1,0.005); \foreach \c in {0.25,0.5}% \addplot[semithick,forget plot,blue!80,domain={0:1}]{min(max( h + x*(1 + h*(p - (p+1+\c*(1/h-1))))/p/(1-x), 1/(1+p)/(1-x) ) ,1) }; \addplot[semithick,forget plot,green!50!black,domain={0:1}]{min(max( h + x*(1 + h*(p - 2*p))/p/(1-x), 1/(1+p)/(1-x) ) ,1) }; \node[anchor=center,label=south west:\tiny $0$] at (axis cs:0,0){}; \addplot[name path=h,brown!80!black,thick,domain={0:1}]{max(h,min( 1/(1+p)/(1-x) ,1) )}; \addplot[name path=xh,thick,black,domain={0:1}]{min(1,h + x*(1-h)/p/(1-x))}; \end{axis} \end{tikzpicture} } \caption{\label{f:sharp} Assouad spectra of the sets in Theorem~\ref{t:sharp} for $p=1.8$ and $h \approx 0.5$. In black: the upper bound (attained when $t=p+1$). In \textcolor{brown!80!black}{brown}: the lower bound (attained when $t \geq p+h^{-1}$). In \textcolor{blue!80}{blue}: some more choices of $t \in (p+1,p+h^{-1})$. In \textcolor{green!50!black}{green}: the case $t=2p$, which also coincides with the Assouad spectrum of the continued fraction set from Proposition~\ref{p:ctdspaced}. % } \end{figure} We now give a technical lemma that forms the main part of the proof of the upper bound of Theorem~\ref{t:sharp}~\eqref{i:sharpmiddle}.
\begin{lma}\label{l:sharplemma} Consider a CIFS satisfying the conditions of Lemma~\ref{l:existcifstechnical} with $(p+1)^{-1} < h < 1$ and $p+1 < t < p + h^{-1}$. Since for each $i \geq 2$, $S_i([0,1])$ and $S_{i+1}([0,1])$ are sufficiently well separated, we can choose $c>0$ large enough that for any interval $B$ (of length $R$, say) there exists at most one $i \in I$ such that $|F_i| \geq c R$ and $F_i \cap B \neq \varnothing$. Let $s$ be any number greater than the claimed formula for $\dim_{\mathrm{A}}^\theta F$ in Theorem~\ref{t:sharp}~\eqref{i:sharpmiddle}. Then if $0 < r < 1$ and $B$ is any interval of length $r^\theta$ such that whenever $i \in I$ is such that $B \cap F_i \neq \varnothing$ we have $|F_i| \leq cr^\theta$, then \[ N_r(B \cap F) \lesssim r^{(\theta - 1)s}, \] where the implicit constant in $\lesssim$ can depend on $p,h,t, F,s$ but not on $r$ or $B$. \end{lma} \begin{proof} By increasing the implicit constant if required, we may assume that $r$ is small enough that if $B \cap X_i \neq \varnothing$ then $X_i = [i^{-p}, i^{-p} + p i^{-t}]$. Since $B \cap F \subseteq \bigcup_{i \in I : B \cap F_i \neq \varnothing} F_i$, it suffices to prove that $N_r(\bigcup_{i \in I : B \cap F_i \neq \varnothing} F_i) \lesssim r^{(\theta-1)s}$. First note that since $s > \dim_{\mathrm{A}}^\theta P$, \[N_r \left(\bigcup_{i \in I : B \cap F_i \neq \varnothing, |F_i| \leq r} F_i \right) \lesssim N_r(B \cap P) \lesssim r^{(\theta - 1)s}.\] Therefore it remains to prove that $N_r(\bigcup_{i \in I : B \cap F_i \neq \varnothing, r < |F_i| \leq cr^\theta} F_i ) \lesssim r^{(\theta - 1)s}$. To do so, we consider three cases depending on the value of $\theta$, corresponding to the location of the point $i^{-p}$ that satisfies $i^{-t} \approx r$. For such an $i$, in Case 1, $i^{-p} \lesssim r^\theta$, in Case 2, $r^\theta \lesssim i^{-p} \lesssim r^{\frac{p\theta}{p+1}}$, and in Case 3, $r^{\frac{p\theta}{p+1}} \lesssim i^{-p}$. 
The significance of $r^{\frac{p\theta}{p+1}}$ is that this is where gaps between consecutive elements of $P$ are $\approx r^\theta$. % \textbf{Case 1:} Assume $0 < \theta \leq p/t$. % Since $s > h = \dim_{\mathrm B} F$, we have \[ N_r\left( \bigcup_{i \in I : B \cap F_i \neq \varnothing, r < |F_i| \leq r^{t\theta/p} } F_i \right) \lesssim \sum_{i = \lfloor r^{-\theta/p} \rfloor}^{\lfloor r^{-1/t} \rfloor} N_{r i^t}(F) \approx \int_{r^{-\theta /p}}^{r^{-1/t}} (r x^t)^{-h} dx \lesssim r^{-h} (r^{1-th})^{-\theta/p} \leq r^{(\theta - 1)s}. \]% Let $\eta$ be such that $\sup B = r^{\eta}$. We may assume without loss of generality that $\eta \leq \theta$. % We now have two subcases depending on the value of $\eta$. \begin{itemize} \item Subcase 1.1: Assume $\eta > \frac{\theta p}{p+1}$. If $th \geq p+1$ then $\frac{\eta(p+1-th) + ph - p\theta}{p(1-\theta)} < h$, % whereas if $th < p+1$ then $\frac{\eta (p+1-th) + ph - p\theta}{p(1-\theta)} \leq h + \frac{\theta}{p(1-\theta)}(1-h(t-p))$, so if we define \[ \epsilon \coloneqq \frac{1-p/t}{2}\left(s - \left( h + \frac{\theta}{p(1-\theta)}(1-h(t-p)) \right) \right) > 0 \] then $\frac{\eta(p+1-th) + ph - p\theta}{p(1-\theta)} + \frac{2}{1-p/t}\epsilon \leq s$. % Then \[ \# \{ \, i \in I : B \cap F_i \neq \varnothing, |F_i| > r \, \} \lesssim r^{-\eta /p} - (r^{\eta} + r^\theta)^{-1/p} = r^{-\eta /p}(1 - (1+r^{\theta - \eta})^{-1/p}) \lesssim r^{-\eta /p + \theta - \eta - \epsilon}, \] where the last bound is by Taylor's theorem. Furthermore, if $F_i \cap B \neq \varnothing$ and $|F_i| > r$ then $|F_i| \approx r^{\eta t/p}$, and $N_r(F_i) \approx N_{r^{1-\eta t/p}}(F) \lesssim r^{-(1-\eta t/p)(h+\epsilon)}$. Putting this all together gives \[ N_r \left( \bigcup_{i \in I : B \cap F_i \neq \varnothing, |F_i| > r} F_i \right) \lesssim r^{-\eta /p + \theta - \eta - \epsilon - (1-\eta t/p)(h+\epsilon)} = r^{(\theta - 1)( \frac{\eta (p+1-th) + ph - p\theta}{p(1-\theta)} + \frac{2}{1-p/t}\epsilon)} \leq r^{(\theta - 1)s}.
\]% \item Subcase 1.2: Assume $\eta \leq \frac{\theta p}{p+1}$. Then since $|B| = r^\theta$, we have $\# \{ \, i \in I : B \cap F_i \neq \varnothing\} \lesssim 1$. Therefore \[ N_r \left( \bigcup_{i \in I : B \cap F_i \neq \varnothing} F_i \right) \lesssim N_{r/(c r^\theta)}(F) \lesssim r^{(\theta - 1)s}. \] \end{itemize} \textbf{Case 2:} Assume $p/t < \theta < (p+1)/t$. Again let $\eta$ be such that $\sup B = r^{\eta}$. If $\eta > \frac{\theta p}{p+1}$ then by a similar argument to the proof of Subcase 1.1 we have \[ N_r \left( \bigcup_{i \in I : B \cap F_i \neq \varnothing, |F_i| > r} F_i \right) \lesssim r^{(\theta - 1)s}.\] If, on the other hand, $\eta \leq \frac{\theta p}{p+1}$, then as in Subcase 1.2 we have $\# \{ \, i \in I : B \cap F_i \neq \varnothing\} \lesssim 1$ so $N_r (\bigcup_{i \in I : B \cap F_i \neq \varnothing} F_i ) \lesssim r^{(\theta - 1)s}$. \textbf{Case 3:} Assume $(p+1)/t \leq \theta < 1$. Then if $r < |F_i| \leq c r^\theta$ and $B \cap F_i \neq \varnothing$ then $r^{\frac{\theta p}{p+1}} \leq r^{p/t} \lesssim \inf B$. % Therefore \[ \# \{ \, i \in I : B \cap F_i \neq \varnothing, r < |F_i| \leq c r^\theta \, \} \lesssim 1. \] Consequently, \[ N_r \left( \bigcup_{i \in I : B \cap F_i \neq \varnothing, r < |F_i| \leq c r^\theta } F_i \right) \lesssim r^{(\theta - 1)s}. \qedhere \] \end{proof} \begin{proof}[Proof of Theorem~\ref{t:sharp}] We begin with the upper bounds. Case~\eqref{i:sharpupper} follows from Theorem~\ref{t:mainasp}. Assume $p + 1 < t < p + h^{-1}$ and let $s$ be larger than the claimed upper bound. The argument now uses a similar trick to the proof of Theorem~\ref{t:asd}. Let $0 < r < 1$ and let $B$ be a ball of radius $r^\theta$ intersecting $F$. Recall the constant $c$ from Lemma~\ref{l:sharplemma}. Let $w \in I^*$ be the unique word such that $F_w \cap B \neq \varnothing$ and $|F_w| \geq cr^\theta$ and such that if $v \in I^*$ satisfies $F_v \cap B \neq \varnothing$ and $|F_v| \geq cr^\theta$ then $v$ is a subword of $w$.
Then $B \cap F = B \cap F_w$. If $c_w$ is the contraction ratio of $S_w$ then \[ N_r(B \cap F) = N_{r/c_w}(S_w^{-1}(B \cap F)) \leq N_{r/c_w}(S_w^{-1}(B) \cap F) \lesssim r^{(\theta - 1)s} \] by Lemma~\ref{l:sharplemma}, since the claimed formula is increasing in $\theta$. The upper bound now follows. Case~\eqref{i:sharplower} can be proved using a similar (but easier) argument. We now consider the lower bounds. Case~\eqref{i:sharplower} follows from Corollary~\ref{c:special}, so assume $t < p+h^{-1}$. The bound for $\theta \geq \frac{(h+hp-1)p}{(1+p)(ht-1)}$ is immediate from Theorem~\ref{t:mainasp}. Since $p/t \geq \frac{(h+hp-1)p}{(1+p)(ht-1)}$, it suffices to prove \[ \dim_{\mathrm{A}}^\theta F \geq h + \frac{\theta}{p(1-\theta)}(1-h(t-p)) \qquad \mbox{ for all }\theta \in (0,p/t). \]% Fix $\epsilon \in (0,(th-1)/t)$. % In the $\lesssim$ notation below, the implicit constant can depend on $p,h,t,\epsilon$ only. Since $\dim_{\mathrm B} F = h$ we have $N_r(S_i(F)) \approx N_{r i^t} (F) \gtrsim (r i^t)^{-(h-\epsilon)}$ % for sufficiently small $r>0$ and all $i \in \mathbb{N}$ such that $r \leq i^{-t}$, by Lemma~\ref{l:assouadgeo}. Therefore \begin{align*} N_r(F \cap [0,r^\theta]) \gtrsim \sum_{i = \lfloor r^{-\theta / p} \rfloor}^{\lfloor r^{-1/t}\rfloor} (ri^t)^{-(h-\epsilon)} &\approx r^{-(h-\epsilon)} \int_{ r^{-\theta / p} }^{r^{-1/t}} x^{-t(h-\epsilon)} dx \\ &\approx r^{-(h-\epsilon)} r^{-\frac{\theta}{p} (1-t(h-\epsilon))} \\ &= r^{(\theta - 1)\left( h + \frac{\theta}{p(1-\theta)}(1-h(t-p)) - \frac{1 - \theta t / p}{1 - \theta } \epsilon \right) }. \end{align*} Since $\epsilon$ was arbitrary, this completes the proof. \end{proof} The Assouad spectra of the sets in Theorem~\ref{t:sharp}~\eqref{i:sharpupper} satisfy the three-parameter formula and so have only one phase transition.
The sets in~\eqref{i:sharpmiddle} and~\eqref{i:sharplower}, on the other hand, are sets which exhibit self-similarity and whose Assouad spectrum displays interesting behaviour in that it \begin{itemize} \item does \emph{not} satisfy the three-parameter formula, \item has two phase transitions. \end{itemize} These are the first examples of dynamically defined fractals whose Assouad spectrum has two phase transitions, though the elliptical polynomial spirals in~\cite{Burrell2020-1} can also have Assouad spectrum with two phase transitions. We will see in Section~\ref{s:ctdfracsect} that this behaviour can also be observed for classes of infinitely generated self-conformal sets defined using continued fraction expansions. We note that the Assouad spectrum can be used to distinguish between different sets in this setting in cases where no other notion of dimension is able to. For any fixed $p \in (0,\infty)$, consider two sets from Theorem~\ref{t:sharp} that are chosen to have the same Hausdorff dimension $h$ but use different values of $t \in [p+1,p+h^{-1}]$, so have different Assouad spectra. Since the Assouad spectrum is stable under bi-Lipschitz maps, we deduce that the two sets cannot be bi-Lipschitz equivalent. In fact,~\cite[Proposition~4.7]{Fraser2018-2} can be used to give quantitative information about the possible exponents of bi-H\"older maps between two such sets. % However, the Hausdorff, box and intermediate dimensions (studied in~\cite{Banaji2021}) of each of the sets will be $h$, and the Assouad and quasi-Assouad dimensions will be 1, independent of $t$, so none of these other dimensions provide information about Lipschitz equivalence or H\"older distortion. \section{Applications: continued fractions and parabolic IFSs}\label{s:ctdfracsect} In this section we apply our methods to calculate the Assouad spectra of several interesting families of fractal sets. 
Sets of real or complex numbers which have continued fraction expansions with restricted entries are especially well-studied in the dimension theory of dynamical systems, see for instance~\cite{Banaji2021,Chousionis2020,Mauldin1999} and~\cite[Section~9.2]{Fraser2020book}. For a non-empty, proper subset $I \subset \mathbb{N}$, define \[ F_I \coloneqq \left\{ \, z \in (0,1) \setminus \mathbb{Q} : z = \frac{1}{b_1 + \frac{1}{b_2 + \frac{1}{\ddots}}}, b_n \in I \mbox{ for all } n \in \mathbb{N} \, \right\}. \] Then $F_I$ is the limit set of the CIFS given by the inverse branches of the Gauss map $\mathcal{G} \colon [0,1] \to [0,1]$ (given by $\mathcal{G}(x) = \{ 1/x \}$ where $\{y\}$ denotes the fractional part of $y \geq 0$) corresponding to the elements of $I$. \begin{lma}\label{l:ctdfraccifs} Working in $\mathbb{R}$, letting $X \coloneqq [0,1]$ and $V \coloneqq (-1/8,9/8)$, \begin{itemize} \item\label{i:ctdfracnot1} If $1 \notin I$ then $\{ \, S_b(x) \coloneqq 1/(b+x) : b \in I \, \}$ is a CIFS with limit set $F_I$. \item\label{i:ctdfrac1} If $1 \in I$ then $\{ \, S_b(x) \coloneqq 1/(b+x) : b \in I, b \neq 1 \, \} \cup \left\{ \, S_{1b}(x) \coloneqq \frac{1}{b+\frac{1}{1+x}} : b \in I \, \right\}$ is a CIFS with limit set $F_I$. \end{itemize} \end{lma} \begin{proof} This is well-known (see~\cite[page~4997]{Mauldin1999}, for example). \end{proof} Lemma~\ref{l:ctdfraccifs} shows why our general results can be applied in this setting, and is one of the reasons why we proved the general bounds above for conformal contractions rather than just for similarities. Recall that by~\cite[Theorem~3.15]{Mauldin1996}, the Hausdorff dimension $h$ can be determined by the topological pressure function. We first show that subsets of $\mathbb{N}$ which satisfy an asymptotic condition give rise to continued fraction sets whose Assouad spectrum satisfies the three-parameter form and attains the upper bounds of Theorem~\ref{t:mainasp} and Corollary~\ref{c:special}. 
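For orientation, we record a standard computation (not stated explicitly above): each inverse branch $S_b(x) = 1/(b+x)$ of the Gauss map satisfies
\[ S_b'(x) = -\frac{1}{(b+x)^2}, \qquad \frac{1}{(b+1)^2} \leq |S_b'(x)| \leq \frac{1}{b^2} \quad \text{for } x \in [0,1], \]
so the contraction ratio of $S_b$ is comparable to $b^{-2}$, uniformly in $b$. This is why sums of the form $\sum_{b \in I} (b^{-2})^s$ govern the pressure and finiteness parameter computations below.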
\begin{prop}\label{t:ctdfracdense} If $I \subseteq \mathbb{N}$ satisfies \begin{equation}\label{e:fulldensity} \limsup_{N \to \infty} \frac{\log \# (I \cap [1,N])}{\log N} = 1 \end{equation} then for all $\theta \in (0,1)$ we have \[ \dim_{\mathrm{A}}^\theta F_I = \left\{\begin{array}{lr} h + \frac{\theta}{(1-\theta)}(1-h), & \text{for } 0\leq \theta < 1/2 \\ 1, & \text{for } 1/2 \leq \theta \leq 1 \end{array}\right. . \] \end{prop}% \begin{proof} If $I \subseteq \mathbb{N}$ satisfies the condition~\eqref{e:fulldensity} then by the proof of~\cite[Lemma~14.1.4]{Fraser2020book}, \[ \dim_{\mathrm{A}}^\theta \{ \, 1/b : b \in I \, \} = \min\left\{\frac{1}{2(1-\theta)},1\right\} \] for all $\theta \in [0,1]$. This will equal the Assouad spectrum of the set of fixed points of the contractions comprising the CIFS from Lemma~\ref{l:ctdfraccifs} even if $1 \in I$, because the Assouad spectrum is finitely stable and unchanged under bi-Lipschitz transformations. Fix an integer $k \geq 10$. By~\eqref{e:fulldensity} there exists a sequence of integers $M_1 < M_2 < \cdots$ such that $M_{n+1} > M_n^{k/(k-2)}$ and $\# (I \cap [1,M_n^{k/(k-2)}]) \geq (M_n^{k/(k-2)})^{1-1/k}$ for all $n \in \mathbb{N}$. % Then $\# (I \cap [M_n,M_n^{k/(k-2)}]) \gtrsim M_n^{\frac{k-1}{k-2}}$, with the implicit constant independent of $n$. Now \[ \sum_{b \in I} (b^{-2})^{\frac{k-1}{2k}} \geq \sum_{n=1}^{\infty} \sum_{\substack{b \in I \\ M_n \leq b \leq M_n^{k/(k-2)}}} (b^{-2})^{\frac{k-1}{2k}} \gtrsim \sum_{n=1}^{\infty} M_n^{\frac{k-1}{k-2}} ((M_n^{k/(k-2)})^{-2})^{\frac{k-1}{2k}} = \infty . \] The finiteness parameter (recall~\eqref{e:finiteness}) of the system therefore satisfies $\Theta \geq \frac{k-1}{2k}$. Letting $k \to \infty$ shows that $\Theta \geq 1/2$, so $\dim_{\mathrm B} F = h \geq 1/2$. % Fix $s<h$ and $\theta \in (0,1/2)$. 
By Lemma~\ref{l:assouadgeo}, \begin{align*} N_{M_n^{-1/\theta}} (F \cap [0,M_n^{-1}]) \geq N_{M_n^{-1/\theta}} (F \cap [M_n^{-k/(k-2)},M_n^{-1}]) &\gtrsim M_n^{\frac{k-1}{k-2}} \left( \frac{M_n^{-2k/(k-2)}}{M_n^{-1/\theta}} \right)^s \\ &= M_n^{-(1-1/\theta)\left( \frac{s(1-2k\theta/(k-2)) + \theta(k-1)/(k-2)}{1-\theta} \right)}. \end{align*} The result follows upon letting $s \to h^-$ and $k \to \infty$. \end{proof} In Propositions~\ref{p:ctdspaced} and~\ref{p:ctdclustered} we consider the Assouad spectra of continued fraction sets for particular families of infinite subsets of $\mathbb{N}$. In Proposition~\ref{p:ctdspaced}, the numbers that we allow to be included in the continued fraction expansions are eventually spaced apart like $\{ \, \lfloor n^p \rfloor : n \in \mathbb{N} \, \}$ (similar to~\cite[Example~4.4]{Banaji2021} for the intermediate dimensions). However, we need to allow an arbitrary choice for the first finitely many digits of $I$ to ensure that the Hausdorff dimension can be in the range that yields non-trivial behaviour for the Assouad spectrum (such as two phase transitions). \begin{prop}\label{p:ctdspaced} Fix $p \in (1,\infty)$. Assume that $I \subseteq \mathbb{N}$ is such that the symmetric difference of $I$ and $\{ \, \lfloor n^p \rfloor : n \in \mathbb{N} \, \}$ is finite and that $h \in (1/(p+1),1/p)$. Then \[ \dim_{\mathrm{A}}^\theta F_I = \left\{\begin{array}{lr} h + \frac{\theta}{p(1-\theta)}(1-ph), & \text{for } 0\leq \theta \leq \frac{(h+hp-1)p}{(1+p)(2ph-1)}\\ \frac{1}{(1+p)(1-\theta)}, & \text{for } \frac{(h+hp-1)p}{(1+p)(2ph-1)} < \theta < \frac{p}{1+p}\\ 1, & \text{for } \frac{p}{1+p}\leq \theta \leq 1 \end{array}\right. .\] In particular, $\dim_{\mathrm{A}}^\theta F$ has two phase transitions and lies strictly between the bounds in Theorem~\ref{t:mainasp} for all $\theta \in (0,\frac{p}{1+p})$. \end{prop} \begin{proof} We start with the appropriate CIFS from Lemma~\ref{l:ctdfraccifs} (depending on whether or not $1 \in I$).
Although the OSC always holds, the separation condition in Theorem~\ref{t:asd} does not hold if $I$ contains consecutive digits. To deal with this, we use the same trick as in the proof of Theorem~\ref{t:mainasp}, namely to iterate finitely many times to form a new CIFS where each conformal copy of $F$ is very small and use an induction argument. The proof is now very similar to the case $t=2p$ of the proof of Theorem~\ref{t:sharp}~\eqref{i:sharpmiddle}, but with the additional technicality that the bound from Lemma~\ref{l:assouadgeo} needs to be used since we are in the self-conformal (rather than strictly self-similar) setting. The details are left to the reader. \end{proof} In Proposition~\ref{p:ctdspaced}, if $h \in (1/(2p),1/(p+1)]$ then the bounds coincide at the Assouad spectrum of the set of fixed points (which equals $\dim_{\mathrm{A}}^\theta F_p = \min\left\{ \frac{1}{(1+p)(1-\theta)},1\right\}$) by Corollary~\ref{c:special}. If, on the other hand, $h \in [1/p,1)$, % then the lower bounds are attained. The Assouad spectrum for the above sets for a certain choice of parameters is shown in green in Figure~\ref{f:sharp}. In Proposition~\ref{p:ctdclustered}, the elements of $I$ are very clustered together (in contrast to Proposition~\ref{p:ctdspaced}), resulting in the upper bound from Theorem~\ref{t:mainasp} and the three-parameter form being satisfied. \begin{prop}\label{p:ctdclustered} Fix $\alpha \in (0,1)$ and let $I \subset \mathbb{N}$ be such that the symmetric difference of $I$ and $\mathbb{N} \cap \bigcup_{k=1}^\infty [2^k,2^k + 2^{k \alpha}]$ is finite. Then the phase transition is $\rho = 1-\alpha/2$, and \[ \dim_{\mathrm{A}}^\theta F_I = \left\{\begin{array}{lr} h + \frac{\alpha \theta}{(1-\theta)(2-\alpha)}(1-h), & \text{for } 0\leq \theta < \rho \\ 1, & \text{for } \rho \leq \theta < 1 \end{array}\right. . 
\] \end{prop} \begin{proof} A direct calculation shows that $\dim_{\mathrm A}^{1-\alpha /2} P = 1$, and combining this with~\cite[Lemma~3.4.4]{Fraser2020book} shows that $\dim_{\mathrm{A}}^\theta P = \min\left\{\frac{\alpha}{2(1-\theta)},1\right\}$ for all $\theta \in (0,1)$. Moreover, $\dim_{\mathrm B} F = h \geq \alpha/2$, since the finiteness parameter is easily calculated to be $\Theta = \alpha /2$ in this case. Furthermore, for all $k \in \mathbb{N}$ we have $2^{-k} - (2^{k} + 2^{k\alpha})^{-1} \approx 2^{-k(2-\alpha)}$ with the implicit constants independent of $k$. Therefore, for all $\epsilon > 0$, by Lemma~\ref{l:assouadgeo}, \begin{align*} N_{2^{-k(2-\alpha )/\theta}}(F \cap [2^{-k} - 2^{-k(2-\alpha)},2^{-k}]) &\gtrsim 2^{k\alpha} \left(\frac{2^{-2k}}{2^{-k(2-\alpha )/\theta}}\right)^{h-\epsilon} \\ &= 2^{k[-(2-\alpha)(1-\theta^{-1})(h + \frac{\alpha \theta}{(1-\theta)(2-\alpha)}(1-h)) - (\frac{2-\alpha }{\theta} -2)\epsilon ]}. \end{align*} Since $\epsilon$ was arbitrary, this completes the proof. \end{proof} We now calculate the Assouad spectrum of a special set related to complex continued fraction expansions. Namely, % define \[ F_{\mathbb{N} + \mathbb{Z} i} \coloneqq \left\{ \, z \in \mathbb{C} : z = \frac{1}{b_1 + \frac{1}{b_2 + \frac{1}{\ddots}}}, b_n \in \mathbb{N} + \mathbb{Z} i \mbox{ for all } n \in \mathbb{N} \, \right\}. \] It is clear from~\cite[Section~6]{Mauldin1996} that \begin{equation}\label{e:complexcifs} \{ \, S_b(z) \coloneqq 1/(b+z) : b \in (\mathbb{N} + \mathbb{Z} i) \setminus \{1\} \, \} \cup \left\{ \, S_{1b}(z) \coloneqq \frac{1}{b+\frac{1}{1+z}} : b \in \mathbb{N} + \mathbb{Z} i \, \right\} \end{equation} is a CIFS with limit set $F_{\mathbb{N} + \mathbb{Z} i}$. % Estimates for the Hausdorff dimension $h = \dim_{\mathrm H} F_{\mathbb{N} + \mathbb{Z} i}$ are given in~\cite[Section~6]{Mauldin1996} and~\cite{Gardner1983,Priyadarshi2016}. To our knowledge, the tightest bounds to date are $1.85574 \leq h \leq 1.85589$ from~\cite{Falk2018}.
The following result states that the Assouad spectrum satisfies a formula in terms of the Hausdorff dimension, given by the upper bound from Theorem~\ref{t:mainasp}. \begin{prop} We have \[ \dim_{\mathrm{A}}^\theta F_{\mathbb{N} + \mathbb{Z} i} = \left\{\begin{array}{lr} h + \frac{\theta}{1-\theta}(2-h), & \text{for } 0 < \theta < 1/2 \\ 2, & \text{for } 1/2 \leq \theta < 1 \end{array}\right. . \] \end{prop} \begin{proof} Let $P$ be the set of fixed points of the CIFS in~\eqref{e:complexcifs}. A similar proof to the proof of the $p=1$ case of~\eqref{e:fpspectrum} in~\cite[Corollary~6.4]{Fraser2018-2} shows that $\dim_{\mathrm{A}}^\theta P = \min\{(1-\theta)^{-1},2\}$. Fix $0 < \theta < 1/2$ and very small $\epsilon \in (0,1)$, and suppose $0 < R < 1$. Let $I_{R,\epsilon} \coloneqq \{ \, b \in \mathbb{N} + \mathbb{Z} i : R^{1+\epsilon} < |z| < R \mbox{ for all } z \in F_b \, \}$. Then $\# I_{R,\epsilon} \gtrsim R^{-2}$, with the implicit constant depending on $\epsilon$ but not on $R$. Using the Koebe distortion theorem we have $|F_b| \gtrsim R^{2(1+\epsilon)}$ for all $b \in I_{R,\epsilon}$. % Therefore by Lemma~\ref{l:assouadgeo}, % \[ N_{R^{1/\theta}} (B(0,R) \cap F_{\mathbb{N} + \mathbb{Z} i}) \gtrsim \sum_{b \in I_{R,\epsilon}} N_{R^{1/\theta}} (F_b) \gtrsim R^{-2} \left( \frac{R^{2(1+\epsilon)}}{R^{1/\theta}}\right)^{h-\epsilon} \geq R^{(1-\theta^{-1})\left( f(\theta,1/2) - C \epsilon \right)}\] for some constant $C > 0$ depending only on $\theta$ and $h$ but not on $R$ or $\epsilon$. Letting $\epsilon \to 0^+$ gives $\dim_{\mathrm{A}}^\theta F_{\mathbb{N} + \mathbb{Z} i} \geq f(\theta,1/2)$, from which the result follows. \end{proof} It would be possible to study the Assouad spectra of the limit sets of appropriately chosen subsystems of the CIFS in~\eqref{e:complexcifs}, but we will not pursue this. Our results also have applications to the dimension theory of parabolic iterated function systems.
In such a system, each map $S \colon X \to X$ still satisfies $||S(x) - S(y)|| < ||x-y||$ for all $x,y \in X$, but may contain a \emph{parabolic fixed point} $p \in X$, meaning that $S(p) = p$ but the derivative of $S$ (or an extension of $S$) at $p$ has norm 1. The theory of parabolic IFSs has been developed by Mauldin and Urbański in~\cite{Mauldin2000parabolic}, and they have also been studied in~\cite{Urbanski1996paraboliccantor,Mauldin2002parabolic},~\cite[Section~9.2]{Fraser2020book}, and many other works. Given a (possibly infinite) parabolic IFS as defined in~\cite[Section~2]{Mauldin2000parabolic}, one can associate an `induced' uniformly contracting infinite CIFS (see~\cite[Theorem~5.2]{Mauldin2000parabolic}). It is clear that if $F$ is the limit set of the parabolic IFS and $F^*$ is the limit set of the induced CIFS then $F^* \subseteq F$ with $F \setminus F^*$ countable, and $F$ and $F^*$ have the same closure. Therefore if $\dim$ is any of the notions of dimension mentioned in this paper then $\dim F = \dim F^*$. In particular,~\cite[Theorem~3.5]{Banaji2021} for the intermediate dimensions and Theorem~\ref{t:mainasp} for the Assouad spectrum can be applied directly to the induced system to give information about the corresponding dimension of $F$. % As an example, we consider a finite parabolic IFS on the line with a single parabolic fixed point of Manneville--Pomeau type at 0. In this setting, the Hausdorff dimension (denoted $h$) equals the box dimension~\cite{Urbanski1996paraboliccantor} and the Assouad dimension is 1~\cite[Theorem~9.2.1]{Fraser2020book}. In Proposition~\ref{p:parabolic} we show that the Assouad spectrum attains the upper bound from Theorem~\ref{t:mainasp}. \begin{prop}\label{p:parabolic} Fix $N \in \mathbb{N}$ and $\epsilon, q > 0$. Suppose $S_1, \ldots, S_N \colon [0,1] \to [0,1]$ satisfy $|S_i(x) - S_i(y)| < |x-y|$ for all $x,y \in [0,1]$ and extend to $C^{1+\epsilon}$ maps on $(-\epsilon, 1+\epsilon)$. 
Assume $S_2,\ldots,S_N$ are $\xi$-Lipschitz for some $\xi < 1$ and that $S_i((0,1)) \cap S_j((0,1)) = \varnothing$ whenever $i \neq j$. Finally, assume $S_1(0) = 0$ and \[ \frac{x - S_1(x)}{x^{1+q}} \to 1 \quad \mbox{as} \quad x \to 0^+.\] Let $F$ denote the limit set of this parabolic IFS. Then for $\theta \in (0,1)$, \begin{equation}\label{e:parabolic} \dim_{\mathrm{A}}^\theta F = \left\{\begin{array}{lr} h + \frac{q \theta}{1-\theta}(1-h), & \text{for } 0\leq \theta < \frac{1}{1+q} \\ 1, & \text{for } \frac{1}{1+q}\leq \theta < 1 \end{array}\right. . \end{equation} \end{prop} \begin{proof} Let $P$ be the set of fixed points of the induced CIFS \[ \{ \, S_1^n \circ S_i : n \in \mathbb{N}, 2 \leq i \leq N \, \} \cup \{ S_2, \ldots, S_N\}.\] Estimates similar to~\cite[(4.2)--(4.4)]{Mauldin2002parabolic} show that $\dim_{\mathrm{A}}^\theta P = \dim_{\mathrm{A}}^\theta (\{ \, i^{-1/q} : i \in \mathbb{N} \, \})$. The result now follows by a similar argument to the proof of Theorem~\ref{t:sharp}~\eqref{i:sharpupper}, using estimates similar to~\cite[(4.1)]{Mauldin2002parabolic}. \end{proof} More generally, one could consider a finite parabolic iterated function system that generates a `parabolic Cantor set' in the sense of~\cite{Urbanski1996paraboliccantor}. Then several of the maps $S_i$ can have a parabolic fixed point $p_i$ with local behaviour $S_i(x) = p_i + a_i (x-p_i)^{1+q_i} + o((x-p_i)^{1+q_i} )$, and the Assouad spectrum will take the form of~\eqref{e:parabolic} with $q = \max_i q_i$. Finally, we apply Proposition~\ref{p:parabolic} to calculate the Assouad spectrum of sets generated by backwards continued fraction expansions. For $I \subseteq \{2,3,4,\ldots\}$ define \[ \mathcal{B}_I \coloneqq \left\{ \, z \in (0,1) \setminus \mathbb{Q} : z = 1 - \frac{1}{b_1 - \frac{1}{b_2 - \frac{1}{\ddots}}}, b_n \in I \mbox{ for all } n \in \mathbb{N} \, \right\}.
\] Recall that the R\'enyi map $\mathcal{R} \colon [0,1] \to [0,1]$ is defined by $\mathcal{R}(x) \coloneqq \left\{ \frac{1}{1-x}\right\}$. Then $\mathcal{B}_I$ is the limit set of the (possibly parabolic, possibly infinite) IFS consisting of the inverse branches of the R\'enyi map corresponding to the elements of $I$. \begin{cor}\label{c:backward} If $I \subseteq \{2,3,4,\ldots\}$ is finite with $2 \in I$ then \[ \dim_{\mathrm{A}}^\theta \mathcal{B}_I = \left\{\begin{array}{lr} h + \frac{\theta}{1-\theta}(1-h), & \text{for } 0\leq \theta < 1/2 \\ 1, & \text{for } 1/2 \leq \theta < 1 \end{array}\right. . \] \end{cor} \begin{proof} The inverse branch of the R\'enyi map corresponding to $2 \in I$ is $x \mapsto x/(1+x)$, which has a parabolic fixed point at 0, and all other branches are uniformly contracting. Therefore using Taylor's theorem, Corollary~\ref{c:backward} follows from the $q=1$ case of Proposition~\ref{p:parabolic}. % \end{proof} \section*{Acknowledgements} AB thanks Mariusz Urbański for a helpful conversation. We thank Alex Rutar for helpful comments on a draft version of this paper. Both authors were financially supported by a Leverhulme Trust Research Project Grant (RPG-2019-034). JMF was also supported by an EPSRC Standard Grant (EP/R015104/1) and an RSE Sabbatical Research Grant (70249). \section*{References} \printbibliography[heading=none]% \end{document}
\section{Introduction} Convolutional codes are widely used in digital communication systems to correct transmission errors, and they have been adopted by almost all advanced wireless communication standards. There are strong demands for high-performance decoding components with good scalability and reconfigurability to support various standards, which can be applied to new radio communication systems such as Software Defined Radio (SDR) and Cognitive Radio (CR). As the Viterbi algorithm \cite{Viterbi1967} is the most popular method for decoding convolutional codes, our discussion focuses on techniques for Viterbi decoder implementation. Traditional communication systems mainly use Field-Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs) as hardware platforms. Numerous studies of Viterbi decoder implementation are based on FPGAs/ASICs, and Gb/s throughput has been achieved \cite{Fettweis1996} \cite{VLSI2010}. However, this high performance comes with high cost and long development cycles, and these techniques cannot provide the flexibility required by SDR or CR systems. Microprocessors such as the Central Processing Unit (CPU) are more flexible than FPGAs/ASICs. Some works on CPU-based software decoding use single-instruction multiple-data (SIMD) instruction sets to achieve parallel decoding \cite{CPU2009} \cite{CPU2010}. However, restricted by their computation resources, their data processing speed and decoding throughput are much lower. High performance computing (HPC) on GPUs has developed rapidly over the last decade. Compared with FPGAs/ASICs, GPU-based implementations have very good flexibility and scalability using high-level programming languages. Compared with CPUs, GPUs have far more ALU cores to support large-scale parallel execution, which can yield higher throughput with appropriate optimization. Many more works have focused on GPU-based decoding in recent years.
\cite{SDR2011} \cite{TVDA2011} \cite{TVDA_WCNC2013} \cite{SDR2010} \cite{TVDA_2014} use CUDA to design Viterbi decoders for SDR systems on NVIDIA GPUs. \cite{OPENCL2014} uses the Open Computing Language (OpenCL) to accelerate Viterbi decoding on an AMD GPU. However, most of these works simply design a parallel decoder for block-coded convolutional codes, and only basic levels of optimization are presented. Compared with these works, a better parallel Viterbi decoding algorithm with lower computational complexity is proposed in this paper. Fine-grained and coarse-grained parallelism optimizations are both presented, to maximize the execution efficiency of mathematical operations, memory transactions and data transfer between the host and the device. Thanks to its good generality, our parallel block-based Viterbi decoder works for most kinds of convolutional codes, and some of the optimizations can also be adopted in implementations of other GPU-based decoders. \section{The Viterbi Decoding Algorithm} The Viterbi algorithm (VA), a maximum likelihood sequence estimator, uses the trellis to exhaustively search for the sequence that is closest to the bits received from the channel. It consists of two procedures in two directions: the forward procedure and the traceback procedure. Three kinds of units are calculated: the path metric (PM), the branch metric (BM) and the survivor path (SP). The BM measures the Hamming/Euclidean distance from the received bits to the legal codewords at each stage. The PM is the accumulated distance formed by adding up BMs. The SP records the path with minimum distance to each state. Forward computing starts at stage 0 with all metrics set to zero. For each state at the current stage, an add-compare-select (ACS) operation is carried out to update its PM at the next stage and rewrite its SP accordingly. The ACS operation can be described by equation (\ref{Eq_ACS}). $PM_n^j$ denotes the path metric of state $j$ at stage $n$.
$BM_n^{i,j}$ denotes the branch metric from state $i$ at stage $n-1$ to state $j$ at stage $n$. $PM_{n - 1}^i$ and $BM_n^{i,j}$ are added up for each state $i$ connected to state $j$, and the minimum result is chosen to update $PM_n^j$. \begin{equation}\label{Eq_ACS} PM_n^j = \mathop {\min }\limits_i \left( {PM_{n - 1}^i + BM_n^{i,j}} \right) \end{equation} When the forward ACS computation finishes at the end of the data stream, a state with minimum PM is estimated as the beginning of the traceback procedure. The selected state $S_E$ is believed, with high probability, to be the true encoding tail state. Therefore, the traceback process goes along the final survivor path $SP_T^E$ to obtain the decoded bits. In the following sections, the $(R, 1, K)$ convolutional code with code rate $1/R$ and constraint length $K$ is considered. The number of states is denoted by $N$. \section{Proposed GPU-based Decoder and Methods for Efficient Decoding} \subsection{Parallel Block-based Viterbi Decoder} The original VA is not suitable for decoding convolutional codes encoded in a stream, as a huge amount of storage would be required and the resulting high decoding latency would not be acceptable. Thus, we propose a parallel block-based Viterbi decoder (PBVD) based on the GPU architecture. Fig.\ref{Fig_SBVD} shows a schematic of the decoding procedures using the PBVD. A real-time constraint is introduced into the decoding procedure. A data segment from stage $t-M$ to $t+D+L$, called a parallel block (PB), consists of a truncated block, a traceback block and a decoding block. Assuming that the block to be decoded starts at stage $t$ and has length $D$, the PBVD starts at stage $t-M$. A forward ACS procedure is carried out with unknown initial state metrics (typically set to zero). The ACS operation goes through stage $t-M$ to $t+D+L$, and survivor paths of length $M+D+L$ are estimated and stored, as are the PMs for each state.
At the end of the interval, a traceback procedure starts from a random state (state $S_0$, for example). After $L$ traceback steps along this randomly picked survivor path, state $S_E$ is reached and regarded as the authentic state at stage $t+D$. Afterwards, the traceback procedure continues and the data segment from stage $t$ to $t+D$ is decoded. \begin{figure}[b] \centering \includegraphics[width=3.5in]{PBVD.pdf} \caption{The diagram of decoding procedures inside a data segment.} \label{Fig_SBVD} \end{figure} Unlike the original VA, there is no state estimation between the forward and backward procedures in the PBVD. This means that the shortest path does not need to be picked out as the unique starting point for backward decoding. This simplification benefits from the traceback block, which provides $L$ stages for all survivor paths to merge to an authentic state at stage $t+D$. The length $L$ is called the decoding depth and is typically equal to $5K$ \cite{SBVD1997}. Similarly, after $M$ iterations on the truncated block, the truncation error due to the unknown initial metrics is negligible. Thus, a high probability of successful decoding for the middle block is guaranteed. \begin{figure}[tb] \centering \includegraphics[width=3.2in]{PBVD_stream3.pdf} \caption{The diagram of parallel decoding for data stream using two individual GPU kernels. (Note that the composition of VP in K1 and K2 are different.)} \label{Fig_SBVDstream} \end{figure} To decode a stream of convolutional codes, the input data can be partitioned into a series of segments of length $D$. Each segment is extended by a length of $L$ on both sides to serve as the truncated block and the traceback block, forming a parallel block ($M$ is set equal to $L$ in the following description), so the overlap between adjacent PBs is $2L$.
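To make the procedure above concrete, the following sketch decodes a single parallel block in plain Python. It is an illustrative reimplementation under simplifying assumptions (hard-decision Hamming branch metrics and a predecessor-table survivor record instead of the bitwise record used on the GPU), not the paper's CUDA code:

```python
# Illustrative PBVD sketch for one parallel block of length M+D+L (M=L).
# State d = (D_{K-2}...D_0) as in the paper; the input bit shifts in from
# the left, so the successor of state s under input x is (x<<(K-2))|(s>>1).
def encode_output(state, x, gens, K):
    """Encoder outputs c^(r) for input bit x at `state` (cf. eq. (2))."""
    reg = (x << (K - 1)) | state            # register bits: x, D_{K-2..0}
    return [bin(reg & g).count("1") & 1 for g in gens]

def pbvd_decode_block(rx, gens, K, D, L):
    """Decode the middle D stages of a parallel block (hard decisions)."""
    N = 1 << (K - 1)
    pm = [0.0] * N                          # unknown initial metrics: zero
    sp = []                                 # predecessor table per stage
    for symbol in rx:                       # forward ACS over M+D+L stages
        new_pm = [float("inf")] * N
        choice = [0] * N
        for s in range(N):
            for x in (0, 1):
                out = encode_output(s, x, gens, K)
                bm = sum(o != r for o, r in zip(out, symbol))
                nxt = (x << (K - 2)) | (s >> 1)
                if pm[s] + bm < new_pm[nxt]:
                    new_pm[nxt] = pm[s] + bm
                    choice[nxt] = s
        sp.append(choice)
        pm = new_pm
    state, bits = 0, []                     # traceback from a random state
    for stage in reversed(range(len(rx))):
        if L <= stage < L + D:              # keep only the decoding block
            bits.append(state >> (K - 2))   # decoded bit = top state bit
        state = sp[stage][state]
    return bits[::-1]
```

With a noiseless channel and $L$ equal to several constraint lengths, the middle $D$ bits are recovered exactly even though the traceback starts from an arbitrary state.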
To achieve high decoding throughput on a GPU, these $N_t$ PBs should be decoded concurrently by two individual GPU kernels (denoted by K1 and K2) with different parallelism, to match the different computational complexity of the procedures in the two directions. Each PB is handled successively by GPU thread clusters in K1 and K2, which are named virtual processors (VPs). After synchronization, the outputs of all VPs in K2 are finally gathered to form the decoded stream. An example of the design for stream decoding using the GPU-based PBVD with $N_t = 4$ is shown in Fig.\ref{Fig_SBVDstream}. \subsection{Optimized Parallelism for Forward ACS Computation} The commonly used schemes for the forward ACS operations are state-based parallel execution \cite{TVDA_WCNC2013} and butterfly-based parallel execution \cite{TVDA_2014}. In this paper, a group-based parallel scheme is proposed by exploiting the characteristics of the trellis, to reduce the amount of branch metric computation in the forward procedure. For a $(R,1,K)$ convolutional code, a state in the trellis is defined by the contents of the $v$ binary memory cells $D_{v-1}{\sim}D_0$ (with $v = K-1$) in the encoder, which can be denoted by $S_d$ with $d=(D_{v-1}D_{v-2} \cdot \cdot \cdot D_1D_0)_2$. There are $R$ filters in the encoder, the $r$th of which has impulse response ${\textbf{\emph{g}}^{(r)}} = [ {g_{K - 1}^{(r)}g_{K - 2}^{(r)} \cdot \cdot \cdot g_1^{(r)}g_0^{(r)}} ]$, called the generator polynomials. $\textbf{\emph{c}}(S_d, x) = [ {{c^{(1)}}{c^{(2)}} \cdot \cdot \cdot {c^{(R)}}}]$ is used to express the encoder output corresponding to input bit $x$ at state $S_d$. $c^{(r)}$, the output of the $r$th filter (0 or 1), can be calculated by: \begin{equation}\label{Eq_cr} {c^{(r)}} = (x \cdot g_{K - 1}^{(r)}) \oplus ({D_{K - 2}} \cdot g_{K - 2}^{(r)}) \oplus \cdot \cdot \cdot \oplus ({D_0} \cdot g_0^{(r)})\\ \end{equation} All operations $\oplus$ are modulo-2 additions in the field GF(2).
Consider a butterfly structure from the trellis: the contiguous states $S_{2j}$ and $S_{2j+1}$ in the $j$th butterfly ($j = 0,1,2,...,N/2-1$) transition to state $S_j$ or $S_{j+2^{v-1}}$ depending on the input bit. $\bm{\alpha}$ and $\bm{\beta}$ denote the encoder output at state $S_{2j}$ with input bit $x=0$ and $x=1$ respectively; $\bm{\gamma}$ and $\bm{\theta}$ denote the corresponding outputs at state $S_{2j+1}$. The $r$th bit ${\alpha}^{(r)}$ in $\bm{\alpha} = [ {{{\alpha}^{(1)}}{{\alpha}^{(2)}} \cdot \cdot \cdot {{\alpha}^{(R-1)}}{{\alpha}^{(R)}}}]$ can be obtained by: \begin{align}\label{Eq_alpha} {{\alpha}^{(r)}} &= c^{(r)}(S_{2j},0) \notag\\ &= (x \cdot g_{K - 1}^{(r)}) \oplus \cdot \cdot \cdot \oplus ({D_1} \cdot g_1^{(r)}) \oplus ({D_0} \cdot g_0^{(r)}) \notag\\ &= (0 \cdot g_{K - 1}^{(r)}) \oplus \cdot \cdot \cdot \oplus ({D_1} \cdot g_1^{(r)}) \oplus (0 \cdot g_0^{(r)}) \notag\\ &= ({D_{K - 2}} \cdot g_{K - 2}^{(r)}) \oplus \cdot \cdot \cdot \oplus ({D_1} \cdot g_1^{(r)}) \end{align} Similarly, ${\beta}^{(r)}$, ${\gamma}^{(r)}$ and ${\theta}^{(r)}$ can be obtained as follows: \begin{align} \label{Eq_beta} {{\beta}^{(r)}} &= c^{(r)}(S_{2j},1) = g_{K - 1}^{(r)} \oplus {\alpha}^{(r)} \\ \label{Eq_gamma} {{\gamma}^{(r)}} &= c^{(r)}(S_{2j+1},0) = {\alpha}^{(r)} \oplus g_0^{(r)}\\ \label{Eq_theta} {{\theta}^{(r)}} &= c^{(r)}(S_{2j+1},1) = g_{K - 1}^{(r)} \oplus {\alpha}^{(r)} \oplus g_0^{(r)} \end{align} From equations (\ref{Eq_alpha}) to (\ref{Eq_theta}) we can conclude that, for given generator polynomials, once $\bm{\alpha}$ is established, the other outputs $\bm{\beta}$, $\bm{\gamma}$ and $\bm{\theta}$ in the butterfly are uniquely derived. Therefore, all $N/2$ butterflies in the $N$-state trellis can be classified into $2^R$ (denoted by $N_c$) groups. The groups are distinguished by $\bm{\alpha}$, which means that butterflies in the same group have the same branch metrics at any given stage.
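The butterfly identities and the resulting grouping are easy to check numerically. The sketch below is our own illustrative Python (not from the paper), using the register convention of equation (\ref{Eq_cr}); it verifies equations (\ref{Eq_beta})--(\ref{Eq_theta}) for every butterfly and groups butterflies by $\bm{\alpha}$:

```python
# Illustrative check of the butterfly output identities (4)-(6), assuming
# the register convention of eq. (2): bits x, D_{K-2}, ..., D_0.
def conv_output(state, x, gens, K):
    """Encoder outputs c^(r) for input bit x at `state`."""
    reg = (x << (K - 1)) | state
    return [bin(reg & g).count("1") & 1 for g in gens]

def check_butterfly(j, gens, K):
    """Verify eqs. (4)-(6) for butterfly j and return its alpha vector."""
    a = conv_output(2 * j, 0, gens, K)          # alpha
    b = conv_output(2 * j, 1, gens, K)          # beta
    c = conv_output(2 * j + 1, 0, gens, K)      # gamma
    d = conv_output(2 * j + 1, 1, gens, K)      # theta
    for r, g in enumerate(gens):
        gk, g0 = (g >> (K - 1)) & 1, g & 1      # g_{K-1}^(r) and g_0^(r)
        assert b[r] == gk ^ a[r]                # eq. (4)
        assert c[r] == a[r] ^ g0                # eq. (5)
        assert d[r] == gk ^ a[r] ^ g0           # eq. (6)
    return a
```

Running this for the rate-1/2, $K=7$ code used later in the paper yields exactly $2^R = 4$ distinct $\bm{\alpha}$ values, i.e., four groups of butterflies.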
As a result, for the $N / N_c$ states in the same group, only four branch metrics need to be calculated to update the $N / N_c$ path metrics. Thus, the total branch metric computation for all the ACS operations at one stage is $2^{R+2}$. For the widely used convolutional codes with $R=2$ and $K=5,7,9$, or $R=3$ and $K=7,9$, the forward ACS operations can be accelerated because the branch metric computation is lower than in the state-based or butterfly-based parallelism schemes ($2^{R+2} < 2^K$). \section{Framework of Kernels and Memory Organization on GPU} \subsection{Kernel Execution and Thread Mapping Strategies} \renewcommand\arraystretch{1.1} \begin{table}[bp] \centering \caption{Thread Dimensions and Execution Parallelism of Two Kernels} \label{Tab_Parallelism} \begin{tabular}{ccccc} \hline \multirow{2}{*}{Kernel} & \multicolumn{2}{c}{Thread dimension} & \multicolumn{2}{c}{Parallelism}\\ \cline{2-5} & BlockDim & \multicolumn{1}{c|}{ThreadDim} & Inter-frame & Intra-frame \\ \hline K1 & $N_{bl}$ & \multicolumn{1}{c|}{$32N_c$} & $32N_{bl}$ & $N_c$ \\ K2 & $N_{bl}/N_c$ & \multicolumn{1}{c|}{$32N_c$} & $32N_{bl}$ & $1$ \\ \hline \end{tabular} \end{table} In our GPU-based implementation, two individual kernels K1 and K2 with different thread dimensions are launched. K1 performs the forward computation, followed by K2, which carries out the traceback and decoding procedures. To describe the thread organization in the kernels, blockDim and threadDim are used to represent the number of threadblocks and the number of threads in each threadblock. In K1, the group-based parallel execution mode is employed. For the forward computation of a PB, all $N$ states are sorted into $N_c$ groups using the criteria given above. Then, for each group, a thread is dispatched to calculate four (or, in special cases, two) branch metrics and to update all the path metrics and survivor paths at each stage. Thus, $N_c$ threads are required to build a virtual processor in K1.
Since 32 CUDA threads are managed cooperatively in a batch called a warp, a threadblock in K1 is arranged to accommodate 32 virtual processors. This means that the threadDim of K1 is $N_c$ times the warp size. \begin{algorithm}[tb] \caption{Parallel block-based Viterbi decoding algorithm} \label{Alg_PBVD} \begin{algorithmic}[1] \item[\algorithmickernelone] \textbf{Forward procedure} \FOR {thread block $b=0$ to $N_{bl} -1$, warp $w=0$ to $N_c-1$ and thread $t=0$ to $31$ \textbf{parallel}} \FOR {stage $s=0$ to $D+2L-1$} \STATE $sp = 0$, $tid= b \times 32 + t$; \STATE Load input symbol and calculate four branch metrics; \FOR {\textbf{all} $j \in Group(w)$} \STATE Load: $pm_1 = {\rm PM}[2j][t]$, $pm_2 = {\rm PM}[2j+1][t]$; \STATE $reg[j] = min( pm_{1} + BM_{\bm{\alpha}}, pm_{2} + BM_{\bm{\gamma}} )$; \STATE take a bitwise record in $sp$ for state $j$; \STATE $reg[j+2^{K-2}] = min( pm_{1} + BM_{\bm{\beta}}, pm_{2} + BM_{\bm{\theta}} )$; \STATE take a bitwise record in $sp$ for state $j+2^{K-2}$; \ENDFOR \STATE Store: ${\rm PM[\ast][t]} = reg[\ast]$, ${\rm SP}[s][w][tid] = sp$; \ENDFOR \ENDFOR \item[\algorithmickerneltwo] \textbf{Backward procedure} \FOR {thread block $b=0$ to $N_{bl} / N_c - 1$, warp $w=0$ to $N_c-1$ and thread $t=0$ to $31$ \textbf{parallel}} \STATE $i=j=g=state=0$, $tid=b \times N_c \times 32 + w \times 32 + t$; \FOR {stage $s=D+2L-1$ to $L$} \STATE Obtain $i$ by $state$ from lookup tables; \FOR {$g=0$ to $N_c-1$} \STATE Load ${\rm SP}[s][g][tid]$ and store into $sp$; \ENDFOR \IF {$s \leq D+L-1$} \STATE Output decoded bit: $(state>>(K-2)) \& 0x01$; \ENDIF \STATE $j = state \% 2^{K-2}$, $sp = (sp >> i) \& 0x01$; \STATE $state = 2 \times j + sp$; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} In the second kernel K2, as the backward procedure is a completely serial process that cannot be executed in parallel, a single thread is enough to constitute a virtual processor in K2.
For convenience of narration, we let the threadDim of K2 equal that of K1, so that each threadblock in K2 contains $32 \times N_c$ virtual processors. If we allocate $N_{bl}$ threadblocks in K1, the total number of PBs $N_t$ is equal to $32 \times N_{bl}$. Thus, to handle the $N_t$ PBs simultaneously, $N_{bl} / N_c$ threadblocks should be allocated in K2. Inter-frame parallelism and intra-frame parallelism are introduced to indicate the number of virtual processors in each kernel and the number of threads each virtual processor contains, respectively. Table \ref{Tab_Parallelism} summarizes the thread dimensions and execution parallelism of K1 and K2. \begin{figure}[tb] \centering \includegraphics[width=3.4in]{Coalesced_access.pdf} \caption{The diagram of coalesced memory accesses for survivor paths.} \label{Fig_Coalesced} \end{figure} \subsection{Memory Organization for Various Information} In the parallel block-based Viterbi decoder, there are several kinds of data: (i) the input/output data streams, which can only be stored in the off-chip global memory, as they need to be exchanged with the host machine; (ii) the cumulative path metrics and the branch metrics, which are only updated in the forward procedure, so on-chip registers and shared memory can be used provided there is enough capacity; (iii) the survivor paths, which are generated in the forward procedure and fetched in the decoding phase; they can only be placed in global memory, and their layout must meet the alignment requirement for coalesced memory access by the two individual kernels. It is a challenge to design a suitable data structure for the survivor paths due to the different intra-frame parallelism in K1 and K2. Once coalesced memory access is satisfied in one of the two kernels, the memory transactions in the other kernel would suffer severe inefficiency. To resolve this inconsistency, an optimized construction is shown in Fig.\ref{Fig_Coalesced}.
At the current stage, states from 32 PBs are gathered and reordered. All states are collected into $N_c$ groups following the state classification criteria. These $N_c$ groups of states are mapped to different warps allocated in K1. Inside a group, the $N/N_c$ states from the same PB are processed in order by the same thread, which means that threads with the same threadIdx.x from these $N_c$ warps make up a virtual processor in K1. As the survivor path is a record of the selected forward branch, which can be represented by a single bit (for example, bit 0 denotes the upper branch and bit 1 denotes the lower branch), these $N/N_c$ results can be stored bitwise in the same unit as: \begin{equation}\label{Eq_bit} {\rm SP}[ x ][ y ][ z ] = \underbrace {1101 \cdot \cdot \cdot 01}_{(N/{N_c})\,{\rm bits}} \notag \end{equation} As a result, the survivor paths are allocated as ${\rm SP}[D+2L][N_c][N_t]$ to ensure coalesced access for contiguous PBs inside a warp. For each backward stage in K2, $N_c$ individual results are merged, because only one warp is needed for the backward phase of these 32 PBs. For a single thread in this warp, all survivor path messages from a PB are loaded with $N_c$ memory requests, but all in the form of aligned transactions. In the end, the memory requests in both K1 and K2 are managed without duplicate transactions or extra time overhead. Shared memory is allocated per thread block, and threads with the same threadIdx.x in different warps need to swap data to jointly accomplish the forward phase for a PB. To avoid bank conflicts in shared memory transactions, the data structure is devised as ${\rm PM}[N][32]$ to ensure that accesses to path metrics with the same state id are aligned and fall into individual shared memory banks. As a result, for each shared memory store/load instruction, no transaction for the same request is replayed and maximum bandwidth utilization is reached.
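As a concrete illustration of the bitwise survivor-path record and of the backward step that consumes it (cf. Algorithm \ref{Alg_PBVD}), consider the following sketch; the function names are ours and the code is illustrative, not the decoder's source:

```python
# Illustrative sketch of the bitwise survivor-path record and its use.
def pack_sp(sel_bits):
    """Pack survivor selection bits into one word, LSB first (bit j is the
    branch chosen for the j-th state handled by this thread)."""
    word = 0
    for i, b in enumerate(sel_bits):
        word |= (b & 1) << i
    return word

def traceback_step(state, sp_word, i, K):
    """One backward step of Algorithm 1: j = state mod 2^(K-2); the stored
    bit selects predecessor S_2j (bit 0) or S_2j+1 (bit 1)."""
    j = state % (1 << (K - 2))
    sp = (sp_word >> i) & 0x01
    return 2 * j + sp
```

Packing $N/N_c$ selection bits into one word is what allows a single aligned transaction per warp in K1 and only $N_c$ aligned loads per stage in K2.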
Note that additional registers are necessary as temporary storage for the updated path metrics; shared memory store transactions are not carried out until all the calculations at a stage are finished. \subsection{Asynchronous Data Transfer and Throughput Analysis} The time overhead of data transfer between host and device must be taken into account when evaluating a GPU-based decoder. CUDA supports an asynchronous streams technique to overlap data transfer tasks and kernel launches in different streams. The decoder should activate a suitable number of CUDA streams and assign tasks to idle streams consecutively to ensure high occupancy of the GPU device. For our GPU-based Viterbi decoder, the H2D messages are the blocked input data streams and the D2H messages are the decoded bits. A kernel throughput $S_k$ is introduced to evaluate the kernel execution efficiency; it is obtained as $\frac{D \times N_{t}}{T_k}$, where $T_k$ is the kernel execution time. For the H2D data transfer, a parameter $U_1$ is defined to indicate the number of bytes used to store an input symbol. Similarly, a parameter $U_2$ is defined to indicate the number of bytes used to store a decoded bit in the D2H data transfer. Thus, the time costs of the H2D and D2H transfers can be calculated by $T_{\rm H2D} = \frac{(D+2L) \times N_{t} \times U_1}{B}$ and $T_{\rm D2H} = \frac{D \times N_{t} \times U_2}{B}$, where $B$ denotes the PCI-E bandwidth. To hide data transfer latency, $N_s$ CUDA streams can be allocated (in each stream, $N_t$ parallel blocks are arranged). Ideally, all the data transfer batches can be completely hidden by the kernel executions, except the first H2D batch and the last D2H batch.
Thus, the decoding throughput can be approximately calculated by: \begin{align}\label{Eq_TP} {\rm T/P} &\approx \frac{D \times N_t \times N_s}{T_{\rm H2D} + \sum{T_k} + T_{\rm D2H}} \notag\\ &\approx \frac{B \times N_s}{(1+2L/D) \times U_1 + B \times N_s/S_k + U_2} \end{align} Notice that the approximation $\sum{T_k} \approx N_s \times T_k$ can be used, even though the concurrent kernel execution (CKE) technique or the Hyper-Q technique in CUDA may be applied. \renewcommand\arraystretch{0.9} \begin{table}[bp] \centering \caption{Classification of states for a (2, 1, 7) convolutional code} \label{Tab_Group} \begin{tabular}{|c|c|c|c|c|c|} \hline Group & $\bm{\alpha}$ & $\bm{\beta}$ & $\bm{\gamma}$ & $\bm{\theta}$ & Index of states\\ \hline \multirow{2}{*}{0} & \multirow{2}{*}{00} & \multirow{2}{*}{11} & \multirow{2}{*}{11} & \multirow{2}{*}{00} & \multicolumn{1}{l|}{0, 1, 4, 5, 24, 25, 28, 29, 42, 43}\\ & & & & & \multicolumn{1}{l|}{46, 47, 50, 51, 54, 55}\\\hline \multirow{2}{*}{1} & \multirow{2}{*}{01} & \multirow{2}{*}{10} & \multirow{2}{*}{10} & \multirow{2}{*}{01} & \multicolumn{1}{l|}{2, 3, 6, 7, 26, 27, 30, 31, 40, 41}\\ & & & & & \multicolumn{1}{l|}{44, 45, 48, 49, 52, 53}\\\hline \multirow{2}{*}{2} & \multirow{2}{*}{11} & \multirow{2}{*}{00} & \multirow{2}{*}{00} & \multirow{2}{*}{11} & \multicolumn{1}{l|}{8, 9, 12, 13, 16, 17, 20, 21, 34}\\ & & & & & \multicolumn{1}{l|}{35, 38, 39, 58, 59, 62, 63}\\\hline \multirow{2}{*}{3} & \multirow{2}{*}{10} & \multirow{2}{*}{01} & \multirow{2}{*}{01} & \multirow{2}{*}{10} & \multicolumn{1}{l|}{10, 11, 14, 15, 18, 19, 22, 23, 32}\\ & & & & & \multicolumn{1}{l|}{33, 36, 37, 56, 57, 60, 61}\\ \hline \end{tabular} \end{table} To improve the decoding throughput, one way is to make the kernels operate efficiently using the approaches in the sections above, achieving a higher $S_k$. Another way is to develop suitable storage methods for the input/output messages, to reduce $U_1$ and $U_2$.
For soft-decision decoding over the AWGN channel, received symbols should be converted to soft messages and stored using several bits. In fact, a $q$-bit fixed-point quantization scheme can be designed so that $\lfloor 32/q \rfloor$ messages are packed and stored into a single integer unit. As a result, the value $U_1$ decreases from $4R$ to $4R/\lfloor 32/q \rfloor$. For the storage of decoded bits, a similar packing scheme can store each decoded bit using bitwise operations. In this case, a character type stores 8 individual decoded bits, which reduces $U_2$ to $1/8$. \renewcommand\arraystretch{1.0} \begin{table*}[htbp] \centering \caption{Time consumption and throughput of the original and optimized decoders on different devices and with various parallelism} \label{Tab_R2} \begin{tabular}{cccccccc|ccccccc} \hline \multirow{2}{*}{Device} & \multirow{2}{*}{$N_{bl}$} & \multirow{2}{*}{$N_t$} & \multicolumn{5}{c|}{Original results} & \multicolumn{7}{c}{Optimized results}\\ \cline{4-15} & & & $T_k$ & $T_{\rm H2D}$ & $T_{\rm D2H}$ & $S_k$ & T/P(1S) & $T_{k1}$ & $T_{k2}$ & $T_{\rm H2D}$ & $T_{\rm D2H}$ & $S_k$ & T/P(1S) & T/P(3S)\\ \hline \multirow{5}{*}{GTX580} & 64 & 2048 & 2.914 & 1.532 & 0.636 & 359.8 & 181.5 & 1.443 & 0.611 & 0.377 & 0.023 & 509.5 & 403.4 & 508.3\\ & 128 & 4096 & 5.811 & 2.968 & 1.280 & 362.9 & 185.4 & 3.046 & 0.859 & 0.747 & 0.043 & 571.4 & 446.4 & 547.7\\ & 192 & 6144 & 8.514 & 4.506 & 1.969 & 368.0 & 189.1 & 4.050 & 1.232 & 1.155 & 0.063 & 594.5 & 472.2 & 571.0\\ & 256 & 8192 & 11.361 & 5.986 & 2.556 & 368.2 & 189.3 & 5.250 & 1.456 & 1.571 & 0.082 & 628.7 & 498.4 & 590.0\\ & 320 & 10240 & 14.224 & 7.502 & 3.192 & 369.6 & 189.4 & 6.513 & 1.807 & 1.893 & 0.101 & 641.8 & 504.9 & 598.3\\ \hline \multirow{5}{*}{GTX980} & 64 & 2048 & 1.681 & 0.865 & 0.325 & 620.6 & 294.7 & 0.591 & 0.377 & 0.261 & 0.012 & 1082.5 & 764.9 & 1243.5\\ & 128 & 4096 & 3.232 & 1.771 & 0.652 & 647.1 & 298.6 & 0.840 & 0.386 & 0.454 & 0.023 & 1575.4 & 1051.4 & 1623.7\\ & 192 & 6144
& 4.831 & 2.684 & 0.981 & 650.8 & 304.9 & 1.172 & 0.392 & 0.678 & 0.032 & 2005.2 & 1253.0 & 1767.5\\ & 256 & 8192 & 6.436 & 3.613 & 1.333 & 652.3 & 308.8 & 1.568 & 0.414 & 0.896 & 0.042 & 2116.8 & 1290.6 & 1785.2\\ & 320 & 10240 & 8.034 & 4.334 & 1.657 & 652.5 & 309.1 & 1.899 & 0.523 & 1.102 & 0.052 & 2122.7 & 1324.7 & 1802.5\\ \hline \end{tabular} \begin{tablenotes} \item $T_k$, $T_{\rm H2D}$ and $T_{\rm D2H}$ are in ms. $S_k$ and T/P are in Mbps. \end{tablenotes} \end{table*} \section{Experimental Results and Discussions} The experiments are carried out on an Intel i7-4790K platform with an NVIDIA GTX580 (1544 MHz, 512 CUDA cores, PCI-E 2.0 supported) and an NVIDIA GTX980 (1126 MHz, 2048 CUDA cores, PCI-E 3.0 supported). The programs are compiled with GCC 4.8.2 and CUDA 6.5. A (2,1,7) convolutional code with generator polynomials $\textbf{\emph{g}}^{(1)}=[1111001]$ and $\textbf{\emph{g}}^{(2)}=[1011011]$ is chosen from the CCSDS standard \cite{CCSDS} for convenient comparison with other works. As the code rate is 1/2, the 64 states can be divided into $2^2=4$ groups using the classification method given above, and the result is shown in Table \ref{Tab_Group}. The BER performance under the AWGN channel for various $L$ is presented in Fig.\ref{Fig_BER} ($D$ is fixed to 512, which is a less important factor). It is shown that as $L$ rises to 42, which is about 6 times the constraint length, the BER approaches the theoretical performance. In the proposed decoder, a larger $L$ yields better error correction performance, but an overly large $L$ reduces the decoding throughput. Thus, $D=512$ and $L=42$ are selected for the parallel block in the following tests. \begin{figure}[tb] \centering \includegraphics[width=3.0in]{BER_black.pdf} \caption{BER performance of the (2,1,7) convolutional code.
($D=512$, 8-bit quantization)} \label{Fig_BER} \end{figure} To demonstrate the improvements from the proposed strategies and methods, experimental results for both the original decoder and the optimized decoder are compared in Table \ref{Tab_R2}, including the kernel execution times, the data transfer times, the kernel throughput and the decoding throughput. The proposed decoder is operated on different GPU devices with various values of $N_{bl}$ and $N_t$. The original parallel block-based Viterbi decoder launches only one kernel to finish the whole decoding procedure; 32-bit floating-point quantization is used for the input soft messages, and decoded bits are stored in integers. In the optimized decoder, two kernels with different numbers of threads are launched, and the execution times $T_{k1}$ and $T_{k2}$ are recorded individually. It can be seen that the total execution times are reduced significantly, by at least 40\%, which results in an improvement of the kernel throughput $S_k$. Input messages are quantized to 8 bits and stored using the packing scheme, and bitwise storage is designed for the decoded bits. As a result, the H2D/D2H data transfer sizes are both cut down, and $T_{\rm H2D}$/$T_{\rm D2H}$ are greatly shortened, improving the decoding throughput (T/P). To hide data transfer latency, the asynchronous transfer technique is adopted, and throughput results with three CUDA streams (3S) are presented. Compared with the performance in synchronous mode, which uses only one CUDA stream (1S), the results show that the more powerful the GPU, the more efficient the overlap and the greater the throughput improvement. Furthermore, as the number of concurrently executed parallel blocks $N_t$ increases, the GPU eventually runs at full capacity and the decoder reaches its peak throughput.
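The two packing schemes behind the shortened transfer times (soft inputs quantized to $q$ bits and packed $\lfloor 32/q \rfloor$ to a 32-bit word; decoded bits stored bitwise, 8 per byte) can be sketched as follows. This is illustrative host-side Python, not the decoder's CUDA source:

```python
# Illustrative sketch of the H2D/D2H packing schemes (names are ours).
def pack_soft(symbols, q):
    """Pack q-bit fixed-point soft messages into 32-bit words, LSB first."""
    per_word = 32 // q
    words = []
    for i in range(0, len(symbols), per_word):
        w = 0
        for k, s in enumerate(symbols[i:i + per_word]):
            w |= (s & ((1 << q) - 1)) << (k * q)
        words.append(w)
    return words

def pack_bits(bits):
    """Pack decoded bits into bytes, 8 per byte (LSB first)."""
    out = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        out[i // 8] |= (b & 1) << (i % 8)
    return bytes(out)
```

With $q=8$ and $R=2$, $U_1$ drops from 8 to 2 bytes per symbol, and the bitwise output storage reduces $U_2$ to $1/8$ byte.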
Table \ref{Tab_R3} shows the decoding throughput comparison between our work and existing works on various GPU platforms, all for convolutional codes with code rate 1/2 and constraint length 7. A metric named TNDC (Throughput under Normalized Decoding Cost), introduced in \cite{TNDC}, is provided in order to make a fair comparison. As the normalized results show, the proposed decoder achieves about a 1.5$\sim$9.2 times speedup compared with existing GPU-based implementations. In addition, compared with the fastest existing x86-CPU work \cite{CPU2010}, which runs a 64-state VA decoder on the Intel Core 2 Extreme X9650 (4 cores, 3.0 GHz) at a speed of 60 Mbps, our results show a significant throughput advantage. Compared with the newest results on FPGA platforms, e.g., 865 Mbps for a 64-state VA decoder on a Stratix III 340 (216 MHz) \cite{FPGA2014} and 10 Gbps for a 32-state VA decoder on a Xilinx Virtex 7 XC7VX690T-2 \cite{FPGA2015}, our results reach a comparable speed, and their good scalability and compatibility make it easy to port our decoder onto future, more powerful GPU devices to achieve higher performance.
\renewcommand\arraystretch{1.1} \begin{table}[htb] \centering \caption{Decoding throughput comparison with existing works} \label{Tab_R3} \begin{tabular}{ccccc} \hline Work & Device & T/P(Mbps) & TNDC & Speedup\\ \hline \cite{SDR2011} & GTX275 & 28.7 & $ \approx $0.085 & $\times$9.20\\ \cite{TVDA2011} & 8800GTX & 29.4 & $ \approx $0.170 & $\times$4.60\\ \cite{TVDA_WCNC2013} & GTX580 & 67.1 & $ \approx $0.085 & $\times$9.20\\ \cite{SDR2010} & 9800GTX & 90.8 & $ \approx $0.420 & $\times$1.86\\ \cite{OPENCL2014} & HD7970 & 391.5 & $ \approx $0.207 & $\times$3.78\\ \multirow{2}{*}{\cite{TVDA_2014}} & Tesla C2050 & 240.9 & $ \approx $0.468 & $\times$1.67\\ & GTX580 & 404.7 & $ \approx $0.512 & $\times$1.53\\\hline \multirow{2}{*}{This work} & GTX580 & 598.3 & $ \approx $0.757 & $\times$1.03\\ & GTX980 & 1802.5 & $ \approx $0.782 & $\times$1.00\\ \hline \end{tabular} \end{table} \section{Conclusion} This paper introduces a parallel block-based Viterbi decoder. The data stream is divided into a series of parallel blocks for concurrent decoding. The GPU implementation uses two individual kernels mapped to the two decoding phases, and optimized parallelism inside the kernels, based on the proposed state classification criteria, is presented. To accelerate the decoding, appropriate GPU memory usage and data structures are developed for the intermediate messages. Storage schemes for the input/output data are designed, and multiple CUDA streams are used to reduce the overhead of data transfer. Experimental results show that the proposed GPU-based decoder achieves about a 1.5 times speedup over the fastest existing GPU implementation. The proposed decoding architecture can be used in software-defined radio systems as a flexible Viterbi decoding unit with strong reconfigurability. \section*{Acknowledgment} This work was supported by the National Natural Science Foundation of China (91438116). \bibliographystyle{IEEEtran}
\section{INTRODUCTION} \label{sec:intro} The Large Binocular Telescope Observatory (LBTO) is situated near the city of Safford in southeastern Arizona in the Pinale\~{n}o Mountains. It is part of the Mount Graham International Observatory (MGIO) located on Emerald Peak on the highest mountain, Mount Graham. LBTO sits at an altitude of 3192 meters. As the name implies, the Large Binocular Telescope (LBT) houses two primary mirrors, separated by 14.4 meters (center-to-center), mounted on a single altitude-azimuth mount. Each mirror is 8.4 meters in diameter, with a combined collecting area equivalent to a single 11.8 meter mirror, or an interferometric baseline of 22.65 meters, edge-to-edge. Although the LBT is comprised of two 8.4 meter mirrors, their fast {\it f/1.14} focal ratio allows for a compact mount and co-rotating enclosure, see Hill et al. (2004)\cite{2004SPIE.5489..603H}, Ashby et al. (2006)\cite{2006SPIE.6274E..23A}, Hill et al. (2006)\cite{2006SPIE.6267E..0YH} and Hill et al. (2010)\cite{2010SPIE.7733E..0CH} for more details. The binocular design is combined with four Bent Gregorian focal stations (three with instrument rotator bearings) and one Direct Gregorian focal station for each side of the telescope. The two mirrors can be used in binocular mode with the same pairs of instruments as well as each independently with different instrumentation. Switching between different optical instrumentation is done by moving various swing arms which hold the prime focus optical cameras, or secondary and tertiary mirrors. The transition between prime focus and Gregorian instruments takes $\sim$ 20 minutes, while transitions between different Gregorian instruments can take $\le$ 10 minutes. This flexibility is advantageous for incorporating a variety of scientific programs during a single night as well as adapting quickly to changes in site and weather conditions. 
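The equivalent single-mirror diameter quoted above follows from simple circular-aperture geometry (ignoring the central obscuration); a quick check:

```python
import math

d_mirror = 8.4  # diameter of each primary mirror, in meters

# Combined collecting area of the two circular primaries
area = 2 * math.pi * (d_mirror / 2) ** 2

# Diameter of a single mirror with the same collecting area: sqrt(2) * 8.4
d_equiv = math.sqrt(2) * d_mirror

print(f"combined collecting area: {area:.1f} m^2")   # ~110.8 m^2
print(f"equivalent single mirror: {d_equiv:.2f} m")  # ~11.88 m, i.e. the quoted 11.8 m
```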
For brevity, the left-side of the telescope is denoted as {\it SX} and the right-side is denoted as {\it DX}.\\ \indent The LBT is an international partnership which includes the University of Arizona (25$\%$), whose share includes access for Arizona State University and Northern Arizona University; Germany (25$\%$) or LBTB (Beteiligungsgesellschaft), which includes the participation of five German institutes (Landessternwarte K\"{o}nigstuhl, Leibniz Institute for Astrophysics Potsdam, Max-Planck-Institut f\"{u}r Astronomie, Max-Planck-Institut f\"{u}r Extraterrestrische Physik, and Max-Planck-Institut f\"{u}r Radioastronomie); Italy (25$\%$) or the Istituto Nazionale di Astrofisica (INAF), which is responsible for providing the Italian community access to LBTO; the Ohio State University (12.5$\%$); and the Research Corporation for Science and Advancement (RC), which coordinates the participation of four universities (Ohio State University, University of Notre Dame, University of Minnesota, and University of Virginia).\\ \indent In 2014, the second Multi-Object Double Spectrograph (MODS-2) was installed on the LBT. This marked the completion of the installation of all facility instrumentation. Beginning with the 2014B semester and up through the 2016A semester, all facility instruments (3 on SX, 3 on DX) were available for on-sky scientific use during partner and LBTO science time or underwent commissioning (or re-commissioning). For the 2016B semester, all six facility instruments will be available for on-sky scientific use by the LBTO and its partners. In this conference proceeding, we present a summary of the LBT scientific facility instruments that are now available for partner science observations. It is an update on the significant changes that have occurred at LBTO since Wagner et al. (2014)\cite{2014SPIE.9147E..05W}. \section{Types of Instrumentation} \label{sec:types} There are three categories of LBT scientific instrumentation.
The first are {\it facility instruments}, which are available for use by anyone within the partnerships. Facility instruments are supported and maintained by LBTO personnel. Although during commissioning phases, facility instruments are still supported by the instrument teams, who work in conjunction with LBTO staff. The second are {\it Principal Investigator instruments} such as the Potsdam Echelle and Polarimetric Spectroscopic Instrument (PEPSI), which uses both primary mirrors, see Strassmeier et al. (2008)\cite{2008SPIE.7014E..0NS} for more information, and has been used on-sky for scientific observations during the 2015B and 2016A semesters. These are maintained and operated solely by the builders, but may be used by LBTO partners for science on a collaborative basis or through time exchanges at the discretion of the instrument principal investigator (PI). LBTO provides limited technical assistance to enable the instruments to interface with the telescope control systems and telescope infrastructure. The third type are {\it Strategic instruments}, which are technically challenging and designed to push the limits of astronomical instrumentation. They may be uniquely suited to the LBT and are designed to have a major impact on astronomy. Strategic instruments may be available to the LBT community on a collaborative basis or through time exchanges. A current strategic instrument is the LBT Interferometer (LBTI), which uses both primary mirrors and comprises LMIRCam (3-5 $\mu$m) and the NOMIC (8-13$\mu$m) camera. They are currently operational for on-sky scientific observations, see Hinz et al. (2008)\cite{2008SPIE.7013E..28H}, Wilson et al. (2008)\cite{2008SPIE.7013E..3AW}, Skrutskie et al. 2010\cite{2010SPIE.7735E..3HS}, Leisenring et al. (2012)\cite{2012SPIE.8446E..4FL}, and Hoffmann et al. (2014)\cite{2014SPIE.9147E..1OH} for more information. 
Recent results include mapping the 5 $\mu$m emission of the Loki Patera volcanoes on Jupiter's moon Io using the SX and DX mirrors in interferometric mode (Conrad et al. 2015\cite{2015AJ....149..175C}) and the LBTI Exozodi Exoplanet Common Hunt (LEECH) survey (Skemer et al. 2016\cite{2016ApJ...817..166S}). Future strategic instruments include: the LBT INterferometric Camera and the Near-IR/Visible Adaptive iNterferometer for Astronomy (LINC-NIRVANA), a multi-conjugate adaptive optics (MCAO) near-IR imaging system that provides both ground-layer and high-layer corrections and is in the first stages of on-telescope testing (Gassler et al. (2004)\cite{2004SPIE.5382..742G}, Herbst et al. (2014)\cite{2014SPIE.9147E..1MH}); and iLocater, a diffraction-limited Doppler spectrometer with high spectral resolution ({\it R} $\sim$ 110,000) operating in the {\it Y}-band that will be used to characterize Earth-like exo-planets orbiting M-dwarf stars. iLocater will use a fiber-fed AO-corrected beam to pass light to a compact spectrograph (Crepp et al. 2014\cite{2014AAS...22334820C}, Veillet et al. 2014\cite{2014SPIE.9149E..16V}). Further discussion of PI and Strategic instruments is beyond the scope of this paper. \section{Facility Instruments} \label{sec:facility} The Large Binocular Cameras (LBCs) are a pair of blue- and red-optimized prime focus imagers with a field of view just shy of the size of the full Moon projected on the sky. The LBC filter suite covers {\it U}-band (0.33 $\mu$m) in the blue through {\it Y}-band (1.1 $\mu$m) in the red. The Multi-Object Double Spectrographs 1 and 2 (MODS-1 and MODS-2) are a pair of optical spectrographs (longslit and custom-designed multi-object slit masks) located at the direct Gregorian foci of the two primary mirrors. The two LBT NIR Spectroscopic Utility with Camera Instruments (LUCIs) are capable of imaging and spectroscopy (longslit and custom designed multi-object slit masks), each located at one of the Bent {\it f/15} Gregorian ports.
As of the 2016A semester, all facility instruments have been used for scientific observations on-sky. Table 1 presents a brief overview of the scientific capabilities of the facility instruments. Specific details will be discussed or cited in subsequent sections. \begin{table}[ht] \caption{Overview of Facility Instruments \& Capabilities} \label{tab:FacCap} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} {\bf Instrument} & {\bf Focal} & {\bf MODES} &{\bf $\lambda$ Range} &{\bf FOV} &{\bf Plate Scale} &{\bf Resolution}\\ \rule[-1ex]{0pt}{3.5ex} & {\bf Station}& &($\mu$m) &(\hbox{$^\prime$}) & (\hbox{$^{\prime\prime}$}/pixel) & \\ \hline \rule[-1ex]{0pt}{3.5ex} LBC Blue & SX Prime & Imaging & 0.33-0.67 & 23\hbox{$^\prime$} $\times$ 25\hbox{$^\prime$} & 0.2255 & ... \\ \hline \rule[-1ex]{0pt}{3.5ex} LBC Red & DX Prime & Imaging & 0.55-1.11 & 23\hbox{$^\prime$} $\times$ 25\hbox{$^\prime$} & 0.2255 & ... \\ \hline \rule[-1ex]{0pt}{3.5ex} MODS-1/-2 & {\it f/15} Direct & Imaging & 0.31-1.1 & 6\hbox{$^\prime$} $\times$ 6\hbox{$^\prime$} & 0.120 {\footnotesize(blue)} & {\footnotesize1850 (blue)} \\ \rule[-1ex]{0pt}{3.5ex} & Gregorian & Spectroscopy & & & 0.123 {\footnotesize(red)} & {\footnotesize2300 (red)}\\ \rule[-1ex]{0pt}{3.5ex} & & & & & &{\footnotesize(140-500 prism)}\\ \hline \rule[-1ex]{0pt}{3.5ex} LUCI-1/-2 &{\it f/15} Bent &Imaging & 0.95-2.5 & 4\hbox{$^\prime$} $\times$ 4\hbox{$^\prime$} & 0.25 {\footnotesize(N1.8)} & {\footnotesize1500-5500 (N1.8)} \\ \rule[-1ex]{0pt}{3.5ex} {\footnotesize(Seeing Limited)} &Gregorian &Spectroscopy & & & 0.12 {\footnotesize(N3.75)}& {\footnotesize3000-11,000 (N3.75)} \\ \hline \end{tabular} \end{center} \end{table} \subsection{Large Binocular Cameras (LBCs)} \label{subsec:LBCs} The LBCs are a pair of wide-field imagers, one blue-optimized at the {\it f/1.14} prime focus on SX, and one red-optimized at the {\it f/1.14} prime focus on DX.
They are each mounted on a spider swing-arm that can be deployed above their respective primary mirror and moved into and out of the telescope beam as required. The LBCs were accepted as facility instruments in October 2011. The two instruments were an in-kind contribution by INAF to the first generation of LBT instruments. Specific details regarding construction, commissioning, and upgrades can be found in Ragazzoni et al. (2006)\cite{2006SPIE.6267E..10R}, Speziali et al. (2008)\cite{2008SPIE.7014E..4TS}, and Giallongo et al. (2008)\cite{2008A&A...482..349G}. The LBCs are the first instruments to make full use of the binocular capabilities of the LBT. Binocular observing has been done with the LBCs since the installation and commissioning of LBC Red was completed.\\ \indent Owing to the fast focal ratio of the primary mirrors and placement at prime focus, a set of refractive corrector lenses is required to deal with geometric distortions that would otherwise affect the large field of view (FOV). Each LBC uses a similar set of five corrective lenses (a 6$^{th}$ lens is the cryostat window, with almost no net power). This is based on the Wynne approach of positive-negative-positive lenses (Wynne 1972\cite{1972MNRAS.160P..13W}), but with the second and third elements each split into two lenses. A filter wheel sits between the 5$^{th}$ and 6$^{th}$ corrective lenses (the first lens is defined as the one closest to the primary mirror). The lenses in LBC Blue are made of fused silica, which permits better transmittance of light at shorter wavelengths (0.3-0.5 $\mu$m). The lenses in LBC Red use borosilicate glass (BK7) and are optimized for longer wavelengths ($\lambda$ $>$ 0.5 $\mu$m). The corrected fields have a diameter of 110 mm and 108.2 mm for LBC Blue and LBC Red (equivalent to 27\hbox{$^\prime$}\ in diameter), respectively. The science detectors cover $\sim$ 75$\%$ of this area.\\ \indent The LBCs each contain six E2V CCD detectors, four of which are used for science.
The four science CCDs are E2V 420-90s with 2048 $\times$ 4608 pixels (13.5 $\mu$m square), arranged in a mosaic with three chips abutted side by side. The fourth CCD is rotated clockwise 90 degrees and centered along the top of the three science CCDs. Each CCD covers 7\hbox{$^\prime$}.8 $\times$ 17\hbox{$^\prime$}.6 with a gap of 70 pixels (18\hbox{$^{\prime\prime}$}\ ) between each CCD. This yields a 23\hbox{$^\prime$}\ $\times$ 25\hbox{$^\prime$}\ FOV. In order to obtain an uninterrupted image, dithering is required to fill the gaps between CCDs (and is recommended to correct for cosmic rays and bad pixel columns). The un-binned readout time for the full array of science CCDs is 27 seconds. The other two CCDs are used for guiding and for tracking collimation and wavefront control. They are E2V 420-90 custom made 512 $\times$ 2048 pixel (13.5 $\mu$m square) CCDs that do not have a shutter mechanism. They are placed on either side of the science CCD mosaic. One is within the focal plane and is used for guiding adjustments; the other is out of the focal plane and uses extra-focal pupil images to maintain collimation and focus. Figure 1 shows the layout of LBC Blue (LBC Red has the same layout with a slight difference in the corrected field size), the computed distortion map, and an example of a deep UV image. \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=6cm]{Figure1.eps} \end{tabular} \end{center} Figure 1: Shown here are: a) the chip layout of LBC Blue (science and technical chips); b) the distortion map computed for LBC Blue; and c) a 2160 sec {\it U}-spec image of a galaxy merger. The project goal is to map the extent of the Globular Cluster (GC) and Young Star Cluster (YSC) populations, which appear as partially resolved or unresolved point sources across the field. Here, the LBC FOV covers $\sim$ 160 $\times$ 150 kiloparsecs, which should cover the entire spatial extent of the GC and YSC populations (Rochais et al.\cite{2016AAS...22724012R}).
\end{figure} \indent Each of the LBCs houses two filter wheels, each with 5 slots, which allows up to 8 filters to be used in each instrument (one slot in each wheel must always remain empty). Although nominally of similar design, the two LBCs use filters of different diameters. LBC Blue filters are 159.8 mm in diameter (155 mm opening), while LBC Red filters are 189.6 mm in diameter (185 mm opening). Currently, LBC Blue houses six filters for scientific use: {\it U}-Bessel, {\it U}-Spectroscopic (broader transmission curve than {\it U}-Bessel), {\it B}-Bessel, {\it V}-Bessel, and Sloan {\it g} and {\it r}. There are now ten filters available for scientific use with LBC Red: {\it V}-Bessel, {\it R}-Bessel, {\it I}-Bessel, Sloan {\it r}, {\it i}, and {\it z}, a {\it Y}-band filter, and three medium-band filters, F972N20, TiO 784, and CN 817. The TiO 784 and CN 817 filters were purchased and tested in semester 2014B by the Landessternwarte K\"{o}nigstuhl (LBTB-Germany). These filters have been available for use by all LBTO partners since semester 2015A. However, they must be requested in advance to allow time to swap them with the filters normally kept in the LBC Red filter wheels.
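As a consistency check, the quoted 23\hbox{$^\prime$}\ $\times$ 25\hbox{$^\prime$}\ field of view can be reproduced from the chip geometry given earlier (2048 $\times$ 4608 pixel chips, a 0.2255\hbox{$^{\prime\prime}$}/pixel plate scale, and $\sim$70-pixel gaps); a rough sketch that ignores optical distortion, so the numbers land within a few percent of the quoted values:

```python
scale = 0.2255          # arcsec per pixel
chip_x, chip_y = 2048, 4608
gap = 70                # pixels between adjacent chips

# Three abutted chips side by side give the short axis; the long axis adds
# the rotated fourth chip (2048 px tall) above the 4608-px chips.
width  = (3 * chip_x + 2 * gap) * scale / 60.0   # arcmin, roughly 23'
height = (chip_y + gap + chip_x) * scale / 60.0  # arcmin, roughly 25'

print(f"mosaic FOV: {width:.1f}' x {height:.1f}'")
```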
\begin{table}[ht] \caption{Overview of Available LBC Filters} \label{tab:LBCfilters} \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} {\bf LBC Blue}& {\bf 50$\%$ Cut-On}& {\bf 50$\%$ Cut-Off}& {\bf LBC Red}& {\bf 50$\%$ Cut-On}& {\bf 50$\%$ Cut-Off}\\ \rule[-1ex]{0pt}{3.5ex} & ($\mu$m)& ($\mu$m)& & ($\mu$m) &($\mu$m)\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it U}-Spectroscopic$^{1}$& 0.332& 0.381& {\it V}-Bessel& 0.493& 0.577\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it U}-Bessel& 0.333& 0.382& {\it r}-sloan& 0.553& 0.686\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it B}-Bessel& 0.375& 0.469& {\it R}-Bessel& 0.572& 0.690\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it g}-sloan& 0.397& 0.550& {\it i}-sloan& 0.697& 0.836\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it V}-Bessel& 0.488& 0.610& {\it I}-Bessel& 0.713& 0.881\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it r}-sloan& 0.552& 0.686& {\it z}-sloan& 0.830& ...\\ \hline \rule[-1ex]{0pt}{3.5ex} ...& ...& ...& {\it Y}& 0.952& 1.110\\ \hline \rule[-1ex]{0pt}{3.5ex} ...& ...& ...& TiO 784$^{2}$& 0.769& 0.788\\ \hline \rule[-1ex]{0pt}{3.5ex} ...& ...& ...& CN 817$^{2}$& 0.802& 0.821\\ \hline \rule[-1ex]{0pt}{3.5ex} ...& ...& ...& F972N20$^{2}$& 0.952& 0.974\\ \hline \end{tabular} \end{center} {\footnotesize $^{1}$ = Broad width (top-hat) filter response designed to mimic the spectroscopic coverage in this wavelength range; $^{2}$ = Medium width filters}\\ \end{table} \indent Steps continue toward the improvement of collimation with the LBCs. The goal is to expand and improve the range of conditions under which collimation can be achieved. Currently, collimation is achieved through a custom IDL program called {\tt DOFPIA} which measures and analyzes the pupil (highly de-focused) images of stars, see Hill et al. (2008)\cite{2008SPIE.7012E..1MH} for more details. 
Using a geometrical method described by Wilson (1999)\cite{1999rto..book.....W}, aberration coefficients are derived by measuring the internal and external borders of the pupils, as well as, in some cases, their illumination profiles. Empirically determined scaling relations based on these measurements are then used to apply the Zernike corrections (Z4, Z5, Z6, Z7, Z8, Z11, and Z22) needed to remove the aberrations. A small region of Chip 2 is read out in order to speed up the process. The process is repeated until the corrections converge.\\ \indent In September 2015, during fall startup, the position of the region used for {\tt DOFPIA} was changed to below the rotator center on Chip 2 for both LBCs in an effort to improve collimation. The rotator center for LBC Blue is (in detector coordinates on Chip 2) {\tt [1035,2924]} and for LBC Red {\tt [1078,2913]}. The new section is at {\it Y} = 1201:2608 (the previous focus region used {\it Y} = 3201:4608) and the entire width of Chip 2. {\tt DOFPIA} needs to be run every 30 minutes or less to maintain effective collimation. In addition, active collimation continues during science exposures, when both technical chips take exposures every 8-32 seconds (guiding and pupil exposures are set to the same exposure time, which depends on the brightness of the guidestar). Corrections from the collimation technical chip (Tek Chip 2) are applied in between science exposures. However, {\tt DOFPIA} does have certain limitations that can affect achieving optimal collimation. Problems most often occur at the start of an observing night, when temperature differentials between the primary mirrors and the ambient air make collimation time-consuming or difficult to achieve. The limitations come, in part, from issues in how {\tt DOFPIA} fits the extra-focal pupils, as well as from the signal-to-noise (S/N) of the data. A new system, the Wavefront Reconstruction Software (WRS), is being developed with INAF.
WRS takes S/N considerations into account, applies a new wavefront reconstruction algorithm, and maintains a more detailed log of the corrections applied (Stangalini et al. 2014\cite{TechnicalReport...INAF...2014}). WRS testing indicates the biggest gains can be made when there is significant coma at the start of the night (Z7 and/or Z8). WRS tests have been carried out during 2015 and 2016, but unfortunately have been hampered by poor weather during engineering nights. In some cases, WRS tests have been performed in parallel with science observations, mostly with LBC Red (DX) while MODS-1 or LUCI-1 is being used (SX). It is expected that a final analysis and a beta version will be made available for more widespread testing in the near future.\\ \indent Improvements to the LBC control hardware and software continue. Software upgrades are regularly rolled out to correct and improve guiding, instrument rotation, non-sidereal guiding, and error handling. As noted in Summers et al. (2014)\cite{2014SPIE.9152E..2ES}, upgrades are planned for the LBC control systems, which have remained largely unchanged since their installation at the LBT. These include replacing the four CCD controllers, one each for the LBC Blue and Red science CCDs and one each for Blue and Red guiding (technical controllers). Each controller is currently handled by a separate PC running Microsoft Windows Server 2003. An ethernet CCD controller upgrade will eliminate the need for the four physical Windows PCs. Currently a single LINUX machine known as the Central Management Unit (CMU) is used to run the LBC software and interface with the CCD controllers. A future upgrade will eliminate the need for the Windows software and port everything to a LINUX-based software system. A BeagleBone board ({\url http://beagleboard.org/}) will be used to run LINUX on a daughter card attached to the CCD controller cards. Step one (completed) was to move image analysis from the Windows PCs to the CMU.
A prototype CCD controller with a BeagleBone board is currently being tested by INAF, as is the ongoing porting of software from Windows to LINUX. \subsection{Multi-Object Double Spectrograph (MODS)} \label{subsec:MODS} The Multi-Object Double Spectrographs (MODS) are a pair of identical optical imagers and spectrographs designed to use longslit and user-designed multi-object slit masks. Each MODS is attached to the straight-through {\it f/15} Gregorian focus of the respective primary mirror (MODS-1 on SX, MODS-2 on DX). The MODS were designed and built by The Ohio State University as part of its contribution to the first generation of LBT instruments. Specific details can be found in Pogge et al. (2006)\cite{2006SPIE.6269E..0IP},(2010)\cite{2010SPIE.7735E..0AP}, and first light results are presented in Pogge et al. (2012)\cite{2012SPIE.8446E..0GP}. The instrument description and capabilities below apply to both MODS-1 and MODS-2. MODS-1 was installed and aligned in 2009 and became available for partner science in semester 2011B. MODS-2 was installed in semester 2014A and commissioned in 2014B-2015. Both MODS have been used for on-sky science since semester 2015B and have been used in binocular mode in 2016.\\ \indent MODS is a double spectrograph and imager that employs reflective optics to achieve high throughput from near-UV (0.32 $\mu$m) through near-IR (1.1 $\mu$m) wavelengths. Both MODS house separate blue- and red-optimized channels that use custom-built E2V CCD231-68 back-side illuminated CCDs with 3072 $\times$ 8192 pixels (15 $\mu$m square). The blue-channel CCD is standard silicon with an E2V Astro-Broadband coating and the red-channel CCD is 40 $\mu$m thick deep-depletion silicon with an extended-red coating (E2V Astro-ER1). This provides increased performance longward of 0.8 $\mu$m, with significantly reduced fringing relative to other optical spectrographs and imagers. The readout time for the un-binned 8K$\times$3K frame is $\sim$ 105 seconds.
For more information see Atwood et al. (2008)\cite{2008SPIE.7021E..08A}. \\ \indent The guiding and wavefront-sensing systems are constructed as part of MODS and located above the instrument focal plane, but within the unit itself. MODS also houses its calibration system internally. It consists of continuum lamps (fixed-intensity quartz-halogen and variable-intensity incandescent) used for imaging and spectroscopic flats, and emission-line (arc) lamps used for wavelength calibration of grating and prism spectroscopy. The optical layout of MODS incorporates a dichroic beam splitter below the focal plane that splits light into separate, individually optimized blue and red channels. There is a cross-over at 0.565 $\mu$m that results in a drop in flux in a small region ($\sim$ 0.005 $\mu$m wide centered on this wavelength). For some science cases, users may choose to employ blue-only or red-only observations. For blue-only mode the dichroic is replaced with an empty position (no optic in the beam), while for red-only mode (imaging and spectroscopy) it is replaced with a flat mirror. MODS uses an infrared laser ($\lambda$ = 1.55 $\mu$m) closed-loop image compensation system (IMCS) to provide flexure compensation for gravitational, mechanical, and temperature effects. The IMCS can null image motion to within an average of $\pm$0.6 pixels for every 15\ifmmode^\circ\else$^\circ$\fi\ change in elevation between 90\ifmmode^\circ\else$^\circ$\fi\ and 30\ifmmode^\circ\else$^\circ$\fi. More information about the IMCS can be found in Marshall et al. (2006)\cite{2006SPIE.6269E..1JM}.\\ \indent MODS has two observing modes: direct imaging, and spectroscopy using curved focal plane masks. These masks include facility longslits and multi-object slit masks that can be custom designed by users and fabricated at the University Research Instrumentation Center (URIC) at the University of Arizona (see Reynolds et al. 2014\cite{2014SPIE.9151E..4BR} for details on the fabrication and materials used for the masks).
Direct imaging is achieved by replacing the grating with a plane mirror and is used for target acquisition for spectroscopy. The standard acquisition reads out a smaller 1K$\times$1K region of the CCD to reduce overheads during the acquisition (readout $\sim$ 40 sec). Direct imaging can also be used for science programs. MODS includes a full complement of Sloan filters: {\it u} and {\it g} for the blue channel; and {\it r}, {\it i}, and {\it z} for the red channel. The usable FOV is 6\hbox{$^\prime$} $\times$ 6\hbox{$^\prime$}\ but with degraded image quality at radii $>$ 4\hbox{$^\prime$}. In the case of direct imaging for science, the CCDs are read out in 3K$\times$3K mode (readout time $\sim$ 68 sec).\\ \indent MODS has two spectroscopic modes: medium-resolution diffraction gratings optimized for the blue and red spectral regions with {\it R} $\sim$ 1850 and 2300, respectively (using a 0\hbox{$^{\prime\prime}$}.6 wide slit, with resolution scaling inversely with increasing slitwidth); and a double-pass 8\ifmmode^\circ\else$^\circ$\fi\ glass prism with a back reflective coating that produces a low-dispersion spectroscopic mode with {\it R} $\sim$ 420-140 in the blue and {\it R} $\sim$ 500-200 in the red. The grating mode uses the full 8K$\times$3K CCD, while the prism mode uses a 4K$\times$3K readout mode. Longslit and multi-object slit masks are made available through a mask cassette system with 24 positions. Each mask is matched to the shape of the Gregorian focal plane. The first 12 positions in the cassette contain permanent facility and testing masks.
The facility science masks include: 0\hbox{$^{\prime\prime}$}.3, 0\hbox{$^{\prime\prime}$}.6, 0\hbox{$^{\prime\prime}$}.8, 1\hbox{$^{\prime\prime}$}.0, and 1\hbox{$^{\prime\prime}$}.2 segmented longslit masks (each containing five 1\hbox{$^\prime$}\ long slits separated by segmented braces); and a 5\hbox{$^{\prime\prime}$}\ wide $\times$ 60\hbox{$^{\prime\prime}$}\ long single-segment longslit mask used primarily for spectro-photometric calibrations. In semester 2016A a new facility 2\hbox{$^{\prime\prime}$}.4 wide segmented longslit was fabricated and is now available to all LBT partners upon request. The remaining 12 mask slots are available for custom-designed MOS masks (discussed in the next section).\\ \indent MODS has excellent sensitivity at both the UV and near-IR extremes, producing high-S/N spectra as far blueward as 0.315 $\mu$m and as far redward as 1.05 $\mu$m. Table 3 provides an overview of the imaging and spectroscopic modes available for MODS-1 and MODS-2. \begin{table}[ht] \caption{Overview of MODS-1/-2 Configurations} \label{tab:MODSCap} \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} {\bf Mode}& {\bf Channel}& {\bf Filters}& {\bf Resolution}& {\bf $\lambda$}& {\bf CCD Size} \\ \rule[-1ex]{0pt}{3.5ex} & & & (0\hbox{$^{\prime\prime}$}.6 slit)& ($\mu$m)& \\ \hline \rule[-1ex]{0pt}{3.5ex} Imaging& Dual& {\it u},{\it g},{\it r},{\it i},{\it z}& ...& 0.33-0.95& 3K$\times$3K\\ \rule[-1ex]{0pt}{3.5ex} & Blue& {\it u},{\it g}& ...& 0.33-0.55& \\ \rule[-1ex]{0pt}{3.5ex} & Red & {\it r},{\it i},{\it z}& ...& 0.55-0.95& \\ \hline \rule[-1ex]{0pt}{3.5ex} Grating& Dual& Clear& 1850-2300& 0.31-1.05& 8K$\times$3K\\ \rule[-1ex]{0pt}{3.5ex} Spectroscopy& Blue& Clear& 1850 (@0.4$\mu$m)& 0.32-0.60& \\ \rule[-1ex]{0pt}{3.5ex} & Red& GG495& 2300 (@0.7$\mu$m)& 0.50-1.05& \\ \hline \rule[-1ex]{0pt}{3.5ex} Prism& Dual& Clear& 500-140& 0.31-1.05& 4K$\times$3K\\ \rule[-1ex]{0pt}{3.5ex} Spectroscopy& Blue& Clear& 420-140& 0.32-0.60&
\\ \rule[-1ex]{0pt}{3.5ex} & Red& GG495& 500-200& 0.50-1.05& \\ \hline \end{tabular} \end{center} \end{table} Figure 2 shows an example spectrum obtained with MODS-1, using the dual-grating mode with high sensitivity at both near-UV and near-IR wavelengths. The target is a z$\sim$1 Ultraluminous Infrared Galaxy (ULIRG) that is suspected of being a late-stage merger between two gas-rich spiral galaxies (Rothberg et al. 2015\cite{2015IAUGA..2257946R}). ULIRGs emit more than 10$^{12}$ {\it L}$_{\odot}$ integrated over 8-1000 $\mu$m and contain anywhere from 10$^{9}$-10$^{10}$ {\it M}$_{\odot}$ of molecular gas, which provides fuel for forming new stars and growing the super-massive central black holes (SMBH) that power Active Galactic Nuclei (AGN). The most powerful AGN are quasars (QSOs) and reside in massive elliptical galaxies (10-100$\times$ more massive than the Milky Way). In the local Universe, ULIRGs are known as the progenitors of QSO host galaxies (e.g. Sanders et al. 1988\cite{1988ApJ...325...74S}, Rothberg et al. 2013\cite{2013ApJ...767...72R}). \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=9.5cm]{Figure2.eps} \end{tabular} \end{center} Figure 2: Shown is a sample spectrum from a project to measure the dynamical and star-formation properties of, and search for AGN in, a sample of Ultraluminous Infrared Galaxy mergers at 0.4 $<$ z $<$ 1.0. The sample selection differs from that of most intermediate-redshift ULIRG surveys (PI Rothberg). Rest-frame UV/optical spectra were obtained with MODS-1/-2 and used in conjunction with archival imaging from the {\it Hubble Space Telescope}. The spectra shown here were obtained with MODS-1, with 4800 sec total integration time, a 0\hbox{$^{\prime\prime}$}.6 slit, and dual-grating mode. Both the observed and rest-frame wavelengths are shown to demonstrate the range of MODS and identify the high-excitation emission lines.
The z=0.92 ULIRG shows evidence of a SMBH with {\it M}$_{\bullet}$ $\sim$ 10$^{8}$ {\it M}$_{\odot}$, consistent with the SMBH masses in nearby Type I AGN, like Mrk 231. \end{figure} \subsubsection{MODS Multi-Object Spectroscopic (MOS) Masks} The multi-object spectroscopy (MOS) masks allow PIs to create masks based on the scientific needs of the targets to be observed. Masks are designed using a software program called {\tt MMS} (MODS Mask Simulator). The software is a modified version of the LUCI mask software, {\tt LMS} (LUCI Mask Simulator). A user's manual can be found at {\url www.astronomy.ohio-state.edu/~martini/mms/}. Both are based upon the European Southern Observatory SkyCat tool. The software allows users to load a {\it fits} image file with a valid world coordinate system (WCS) or access archival images from the Digital Sky Survey or 2MASS (2 Micron All Sky Survey) and place slits of user-defined length and width within the field of view of MODS. The {\tt MMS} software allows users to rotate the image as needed, add multiple slits, and display information to ensure that slits do not overlap. Alignment is done using a minimum of three 4\hbox{$^{\prime\prime}$} $\times$ 4\hbox{$^{\prime\prime}$} alignment boxes that are placed over the positions of stars in the field. Smaller sized boxes may be used if needed, but they should be larger than the upper limits of the required seeing constraints (so the star may be fully measured in the box). The MODS alignment software ({\tt modsAlign}) uses these boxes to determine offset and rotation to align the mask with the target field. The newest version of {\tt modsAlign} used for both MODS auto-detects the alignment boxes using the measured focal-plane to detector geometries and then prompts users to centroid on the alignment stars that should be centered within the box. The more stars used, the more precise the alignment. 
The software compares the centroid positions of the stars and the positions of the alignment boxes to determine the offsets in translation and rotation. The {\tt MMS} software also requires users to select a valid guidestar, and provides an overlay of the Auto-Guiding and Wavefront sensing (AGW) patrol field. Figure 3 shows the {\tt MMS} software with an example target and MOS mask being created ({\it left}) and an example of the final mask output as a Gerber ({\it gbr}) file. Masks are submitted by the partner coordinators to the LBTO Mask Scientist at various mask deadlines during each semester. The MOS scientist (currently B. Rothberg) reviews each mask to ensure it meets the criteria of sufficient alignment boxes, no overlapping slits, a suitable guidestar, etc. Approved masks are then sent to URIC for fabrication. For more information on fabrication and the materials used, see Reynolds et al. (2014)\cite{2014SPIE.9151E..4BR}. Once fabricated, they are sent to LBTO, and mask IDs are used to catalog the masks and place them into inventory for future (and possibly repeated) use. Mask exchanges are typically done just before or during the day of the first partner science block. Unlike LUCI, the mask unit and masks are not cryogenically cooled, allowing each partner science block to use all 12 of the available mask slots. Multiple mask exchanges can be done fairly quickly during partner science blocks if more than 12 MOS masks are needed. \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=5.6cm]{Figure3.eps} \end{tabular} \end{center} Figure 3: MODS {\tt MMS} software used to design MOS masks ({\it left}). The example shown uses Hickson Compact Group 40, a group of five galaxies gravitationally bound to each other, several of which are in the early stages of interaction. {\tt MMS} creates a Gerber file ({\it gbr}) which has the information needed to physically manufacture the mask.
Note the square reference boxes used for alignment, and science slits (0\hbox{$^{\prime\prime}$}.6\ in width and 4\hbox{$^{\prime\prime}$}\ - 10\hbox{$^{\prime\prime}$}\ in length). \end{figure} \subsubsection{Binocular Observing with MODS-1 \& MODS-2} The use of MODS-1 \& MODS-2 together for scientific observations marks the second of the three facility instruments ready for binocular operations on-sky. The first tests of the binocular mode of MODS-1 \& MODS-2 took place on UT January 15, 2016. The instrument PI, Richard Pogge (Ohio State University), developed an interface that takes a single MODS script and ``twins'' it so that the preset (or pointing information) instructs the binocular mount to move to a designated set of celestial coordinates and configures both mirrors to point at the same region of the sky. The observers simply run a shell script for acquisition ({\tt acqBinoMODS}) or for starting science observations ({\tt execBinoMODS}), which automatically twins the single input script. For spectroscopy, after the preset, the observers must then align and place the science target in the longslit for each MODS separately. In the case of imaging, observers execute a shell script that moves the telescope to the science field and begins the science integrations on both MODS.\\ \indent Figure 4 shows the first dual grating spectra (1\hbox{$^{\prime\prime}$}.0 slitwidth) obtained simultaneously from MODS-1 and MODS-2 of the nearby Seyfert 2 AGN host galaxy NGC 1068. The data were processed using quick-look software developed by The Ohio State University (and based upon the modsIDL data reduction package: {\url http://www.astronomy.ohio-state.edu/MODS/Software/modsIDL/}) and currently available for visiting astronomers to use to assess the quality of data obtained in near real time. A total of three targets, including an intermediate redshift ULIRG, were successfully observed that night using the MODS-Binocular mode.
The MODS-Binocular mode has been available on a ``shared-risk'' basis for visiting astronomers since May 2016, allowing all LBT partners an opportunity to use both MODS. As of semester 2016A, only longslit-longslit or imaging-imaging configurations are supported. MOS masks are not currently supported for MODS-Binocular mode, but should be available for use in semester 2016B. This will require two masks to be fabricated from a single {\it gbr} file. The next step is to use mixed configurations, such as different MOS masks for the same field, or a mixture of spectroscopy and imaging. Nevertheless, MODS-Binocular currently provides PIs with a $\sqrt 2$ increase in S/N for the same amount of observing time as before. \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=9cm]{Figure4.eps} \end{tabular} \end{center} Figure 4: First MODS-Binocular observations of a single target, the Seyfert 2 AGN NGC 1068. Both MODS were configured in dual grating mode with a 1\hbox{$^{\prime\prime}$}.0 slitwidth. The total exposure time was 15 minutes per channel and mirror (300 sec $\times$ 3 exposures), or 30 minutes total each for blue and red channels. Thus, including overheads, 30 minutes' worth of data were obtained in $\sim$ 18 minutes. The initial binocular acquisition preset was sent by the instrument PI Richard Pogge. The MODS-Binocular observations that night were executed from the remote observing room in Tucson. \end{figure} \subsection{LBT NIR Spectroscopic Utility with Camera Instruments (LUCI)} \label{subsec:LUCI} The two LBT Utility Camera in the Infrared instruments (LUCI, formerly LUCIFER) are a pair of cryogenic near-IR (NIR) instruments, capable of imaging and spectroscopy (longslit and MOS), each located at a bent Gregorian {\it f/15} focus port of the SX and DX mirrors. The discussion of the LUCIs will focus primarily on seeing-limited operations. Diffraction-limited modes for both LUCIs are still being commissioned.
The LUCIs are rather compact and rely on a series of fold mirrors to bring the light from the tertiary mirror (M3) into the focal plane. The LUCIs are cooled using closed cycle coolers which are monitored to maintain the correct temperatures needed for optimal operation. Currently, flexure compensation is achieved in a passive mode whereby a lookup-table is used based on the elevation and rotation of the instrument and applied before an exposure is taken. The corrections are applied to the last fold mirror in the optical train (FM4), which lies in front of the instrument's internal pupil. An active flexure compensation system is currently in development that will apply corrections during a science exposure.\\ \indent The LUCIs are sensitive from 0.95-2.44 $\mu$m and are designed to be used in both seeing-limited and diffraction-limited (via active optics) modes. More detailed technical information and on-sky performance about LUCI (specifically LUCI-1) can be found in Seifert et al. (2003)\cite{2003SPIE.4841..962S}, Ageorges et al. (2012)\cite{2010SPIE.7735E..1LA}, and Buschkamp et al. (2012)\cite{2012SPIE.8446E..5LB}. LUCI-1 was installed at LBT in September 2008 and has been in service from December 2009 through July 2015 in seeing-limited mode only. It was removed from the telescope during 2015 summer shutdown to replace the detector with a Hawaii2 RG (H2RG) 2K $\times$ 2K detector and install a high resolution camera (N30) which is designed to be used with adaptive optics (AO). These upgrades were designed so LUCI-1 would match the capabilities of LUCI-2. LUCI-2 was made available to the LBT community for on-sky science starting in semester 2015B and continuing through semester 2016A in seeing-limited mode only. Commissioning of the diffraction-limited modes of LUCI-1 and LUCI-2 is ongoing. In addition, both LUCIs are designed to work with ARGOS, a green laser system for wide-field ground-layer adaptive optics (GLAO) corrections (e.g.
Rabien et al. 2010\cite{2010SPIE.7736E..0ER}, Rabien et al. 2014\cite{2014SPIE.9148E..1BR}, and Rahmer et al. 2014\cite{2014SPIE.9149E..2AR}).\\ \indent Both LUCI-1 and LUCI-2 are now equipped with the same 2K $\times$ 2K H2RG detectors. The detectors are controlled by GEIRS (GEneric InfraRed detector Software) developed by MPIA. The LUCIs now have the same set of cameras: an {\it f/1.8} camera with 0\hbox{$^{\prime\prime}$}.25 pixel$^{-1}$ (N1.8), an {\it f/3.75} camera with 0\hbox{$^{\prime\prime}$}.12 pixel$^{-1}$ (N3.75), and an {\it f/30} camera with 0\hbox{$^{\prime\prime}$}.015 pixel$^{-1}$ (N30). Nominally, the N1.8 camera is primarily used for seeing-limited spectroscopy; the N3.75 camera is used for seeing-limited imaging, yielding a 4\hbox{$^\prime$} $\times$ 4\hbox{$^\prime$}\ FOV; and the N30 is used for AO imaging and spectroscopy, providing a 30\hbox{$^{\prime\prime}$} $\times$ 30\hbox{$^{\prime\prime}$}\ FOV. Both LUCIs also house the same complement of broad and narrow-band filters. However, there are differences between the available spectroscopic gratings for the two LUCIs. Tables 4 \& 5 provide an overview of the capabilities available for both LUCIs in seeing-limited mode. Unlike MODS, where the grating tilt is not changeable by the user, the LUCIs offer a wide range of configuration possibilities that can be achieved with various tilts (i.e. central wavelengths or $\lambda$$_{\rm c}$), gratings, slits, and cameras. Using the N1.8 camera, the low resolution grating (G200) permits nearly complete coverage of the near-IR window with only two settings. The high resolution grating (G210) with the N1.8 camera allows for nearly full wavelength coverage of each filter (i.e. {\it z}, {\it J}, {\it H}, and {\it K}-band). Users also have the flexibility to combine cameras, gratings, slits, and $\lambda$$_{\rm c}$ in different ways to achieve a wide range of scientific goals (i.e. higher spectral resolutions over shorter wavelength ranges).
\begin{table}[ht] \caption{LUCI-1 \& LUCI-2 Filters Available for Science} \label{tab:LUCIcap1} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} {\bf Filter}& $\lambda$$_{\rm c}$& {\bf FWHM}& & {\bf Filter}& $\lambda$$_{\rm c}$& {\bf FWHM}\\ \rule[-1ex]{0pt}{3.5ex} & ($\mu$m)& ($\mu$m)& & & ($\mu$m)& ($\mu$m)\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it z}& 0.957& 0.195& & He I& 1.088& 0.015\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it J}& 1.247& 0.305& & Paschen-$\gamma$& 1.097& 0.010\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it H}& 1.653& 0.301& & OH 1190& 1.194& 0.010\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it K}$_{\rm s}$& 2.163& 0.270& & {\it J} low& 1.199& 0.112\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it K}& 2.104& 0.408& & Paschen-$\beta$& 1.283& 0.012\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it zJ} spec& 1.175& 0.405& & {\it J} high& 1.303& 0.108\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it HK} spec& 1.950& 0.981& & FeII& 1.646& 0.018\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it Y}1& 1.007& 0.069& & H$_{\rm 2}$& 2.124& 0.023\\ \hline \rule[-1ex]{0pt}{3.5ex} OH 1060& 1.065& 0.010& & Brackett-$\gamma$& 2.170& 0.024\\ \hline \rule[-1ex]{0pt}{3.5ex} {\it Y}2& 1.074& 0.065& & ...& ...& ...\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \caption{LUCI-1 \& LUCI-2 Orders, Gratings, Valid $\lambda$$_{\rm c}$, and Resolution Values for Seeing-Limited Mode} \label{tab:LUCIcap2} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \rule[-1ex]{0pt}{3.5ex} {\bf Order}& {\bf G210 HiRes}& $\Delta$$\lambda$& Resolution& {\bf G200 LoRes}& $\Delta$$\lambda$& Resolution\\ \rule[-1ex]{0pt}{3.5ex} & Valid $\lambda$$_{\rm c}$ ($\mu$m)& ($\mu$m)& (0\hbox{$^{\prime\prime}$}.5\ slit)& Valid $\lambda$$_{\rm c}$ ($\mu$m)& ($\mu$m)& (0\hbox{$^{\prime\prime}$}.5\ slit)\\ \hline \rule[-1ex]{0pt}{3.5ex} 1& ...& ...& ...& 1.476-2.535 (L1)& 0.880& 1900 (H), 2600 (K)\\ \rule[-1ex]{0pt}{3.5ex} & ...& ...& ...& 1.248-2.491 (L2)& ''& ''\\ \hline \rule[-1ex]{0pt}{3.5ex} 2& 2.098-2.728 (L1)& 0.328&
5000& 0.897-1.358$^{*}$ (L1)& 0.440& 2100 (z), 2400 (J) \\ \rule[-1ex]{0pt}{3.5ex} & 2.061-2.702 (L2)& '' & ''& 0.599-1.246$^{*}$ (L2)& ''& ''\\ \hline \rule[-1ex]{0pt}{3.5ex} 3& 1.398-1.819 (L1)& 0.202& 5900& ...& ...& ...\\ \rule[-1ex]{0pt}{3.5ex} & 1.374-1.801 (L2)& '' & ''& ...& ...& ...\\ \hline \rule[-1ex]{0pt}{3.5ex} 4& 1.084-1.364 (L1)& 0.150& 5800& ...& ...& ...\\ \rule[-1ex]{0pt}{3.5ex} & 1.072-1.351 (L2)& '' & ''& ...& ...& ...\\ \hline \rule[-1ex]{0pt}{3.5ex} 5& 0.839-1.075$^{*}$ (L1)& 0.124& 5400& ...& ...& ...\\ \rule[-1ex]{0pt}{3.5ex} & 0.824-1.066$^{*}$ (L2)& '' & ''& ...& ...& ... \\ \hline \end{tabular} \end{center} {\footnotesize L1 is LUCI-1, L2 is LUCI-2. * $=$ While $\lambda$$_{\rm c}$ $<$ 0.95 $\mu$m are valid, the L1 and L2 entrance windows are now coated to cut off at $\lambda$ $<$ 0.95 $\mu$m. Resolution scales down as slitwidth increases. The {\it zJ} spec and {\it HK} spec filters in Table 4 are primarily used with the G200 grating. The G150 Ks grating is only available on LUCI-1 and the allowable $\lambda$$_{\rm c}$ wavelength range is 1.95-2.4 $\mu$m and $\Delta$$\lambda$ $=$ 0.533 $\mu$m with {\it R} $\sim$ 4150 using a 2 pixel slitwidth (0\hbox{$^{\prime\prime}$}.5). The G040 AO grating is only available on LUCI-2. More information regarding the G040 AO grating will be determined at a later date. The wavelength coverage in this table assumes the N1.8 camera. If using the N3.75 camera multiply $\Delta$$\lambda$ by 0.48, and if using the N30 camera multiply by 0.06. This table should allow one to determine the wavelength range visible for different configurations.}\\ \end{table} \indent The calibration units for both LUCIs are external to the instruments. The units are mounted above the bent Gregorian ports housing LUCI. In stowed position, they are flush with the ports.
When required, they are activated by scripts (or manually from the user interface or WEB/IO interface) and swing out laterally and then downwards so they are directly in front of the entrance window. Three Halogen lamps of varying brightness are used for imaging and spectroscopic flats. In 2016A, a neutral density filter was added to the LUCI-2 calibration unit to better match the intensities of the LUCI-1 calibration unit. Three emission-lamps (arc lamps) are available for wavelength calibration: Neon, Argon, and Xenon. \subsubsection{LUCI Multi-Object Spectroscopic (MOS) Masks} \indent LUCI-1 and LUCI-2 each use a cryogenic MOS unit to house both a set of permanent facility longslit masks and user-designed MOS slit masks (Hofmann et al. 2004\cite{2004SPIE.5492.1243H} and Buschkamp et al. 2010\cite{2010SPIE.7735E..79B}). The MOS units hold up to 33 masks distributed over two cabinets. The permanent cabinets, which are inside the LUCIs, house 10 facility masks, including longslit masks. These include: a wide mask with two long slits, one 2\hbox{$^{\prime\prime}$}\ wide (top) and one 1\hbox{$^{\prime\prime}$}.5\ wide (bottom), both 100\hbox{$^{\prime\prime}$} in length; and longslits of length 3\hbox{$^\prime$}.8 and widths of 1\hbox{$^{\prime\prime}$}.0, 0\hbox{$^{\prime\prime}$}.75\ (currently only available in LUCI-2), 0\hbox{$^{\prime\prime}$}.5, 0\hbox{$^{\prime\prime}$}.25, and 0\hbox{$^{\prime\prime}$}.13 (to be used with AO and the N30 camera). This main unit houses the focal plane unit (FPU) which places the masks in and out of the LUCI focal plane using a robotic grabber arm (see Figure 5). The grabber slides along a set of rails to select the requested mask, place it in the FPU, and later place the mask back in its designated slot once it is no longer needed (and another mask is requested). When imaging mode is used, an empty mask holder is placed in the FPU to allow light to pass unobstructed to the detector.
A second exchangeable cassette, which contains 23 mask slots, is used to house MOS masks custom-designed by science PIs. Secondary cabinet exchanges are executed on a monthly basis to accommodate different partner science programs. The exchanges include masks from multiple partners, with each partner assigned a maximum number of available slots in the secondary cabinet. Mask exchanges are performed at cryogenic temperatures and require the use of two auxiliary cryostats in order to maintain pressure and temperatures at all times. An auxiliary cryostat holding a secondary cabinet is loaded with the next set of masks to be used for science. It is evacuated and cooled over 24-48 hours before a scheduled exchange. During the exchange, one aux cryostat is attached to LUCI using a set of gate valves controlled by software. Rails connect the aux cryostat to LUCI. The currently installed secondary cabinet is moved along the rails into the cryostat. That cryostat is removed and a second cryostat is then attached and a secondary cabinet containing the new masks is placed into LUCI. The cabinet exchange is all done on the telescope infrastructure itself. This requires the cryostats to be lifted up through large doors in the high bay up and over the telescope and then gently placed on a platform on the telescope (located between the SX and DX mirrors where the bent Gregorian foci are located). \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=9cm]{Figure5.eps} \end{tabular} \end{center} Figure 5: The LUCI-1 MOS unit outside of its housing and only with the permanent cabinet. Shown is the grabber arm placing a mask holder (which would normally contain a MOS or longslit mask) into the FPU. The rail system can be seen at the bottom of the image along with the slots holding other masks in place (right side of photo). Photo courtesy of B. Rothberg.
\end{figure} \indent The {\tt LMS} software is used to create MOS mask designs ({\url http://abell.as.arizona.edu/\~{}lbtsci/Instruments/LUCIFER/}). It is the forerunner of {\tt MMS}. The interface and concept are nearly the same as {\tt MMS} (see Figure 3). The main differences are the different guidestar patrol fields (generally above the science FOV for LUCI and generally below the science FOV for MODS), and that LUCI uses reference stars to calculate the rotation and shift needed to align the mask correctly. A previous version of the LUCI control software only required alignment stars to be defined by {\tt LMS} without the need for alignment boxes to be cut in the mask around each star. Currently, the new version of the LUCI software requires both designated reference stars and alignment boxes cut around them, but future updates may return to a system that does not require physical alignment boxes cut into the mask. MOS mask designs are submitted at the same deadline as MODS MOS mask designs. As with MODS, the MOS scientist reviews each mask to ensure it meets the criteria of sufficient alignment boxes, no overlapping slits, a suitable guidestar, etc. Approved masks are then sent to URIC for fabrication. For more information on fabrication and materials used, see Reynolds et al. (2014)\cite{2014SPIE.9151E..4BR}. Once fabricated, they are sent to LBTO and mask IDs are used to catalog the masks and place them into inventory for future (and possibly repeated) use. \subsubsection{LUCI Software Upgrade} As noted above, a series of hardware upgrades were made to LUCI-1 during summer shutdown of 2015. In addition to hardware upgrades, a new LUCI User interface has been developed by MPIA. The software completely replaces the previous version used to run LUCI-1. The new LUCI User interface communicates directly with both LUCI-1 and LUCI-2, and in principle, allows for binocular control of both instruments.
The interface has three major components (see Figures 6 and 7): 1) The Main Observer Graphical User Interface (GUI), which displays information such as target coordinates, name, offsets, instrument configuration, and integration times in the central ``queue'' panel. Users load the script and click ``GO'', which sends the preset information to the telescope control software (TCS) and configures the instrument. Users may Pause, Reset, Abort, and Skip (or skip to) steps; 2) the Real-Time Display (RTD), which is based on the ALADIN software ({\url http://aladin.u-strasbg.fr/}) and is used for longslit and MOS mask acquisition; and 3) the Readout and Instrument Control panels, which give users the ability to manually change parameters such as filters, mask, integration times, cameras, and the calibration unit. The previous LUCI-1 software parsed scripts using ASCII plain text. The new LUCI control software requires observing scripts in XML format. A scripting webpage, SCRIPTOR ({\url http://scriptor.tucson.lbto.org/}), has been created to allow users to generate XML scripts by selecting the instrument, its configuration, integration times, guidestars, etc. \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=12cm]{Figure6.eps} \end{tabular} \end{center} Figure 6: The new LUCI software interface. ({\it Top}) The main Observer GUI where scripts are loaded and information about the status of the observations are presented. ({\it Bottom}) Readout and Instrument panels for both LUCIs which give the users more manual control over the instrument. \end{figure} \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=10cm]{Figure7.eps} \end{tabular} \end{center} Figure 7: The Real Time Display showing example of MOS mask acquisition ({\it Top}) and longslit acquisition ({\it Bottom}). \end{figure} \section{Mixed-Mode Use} \label{subsec:mixed} The goal of LBT is to use the telescope in binocular mode all of the time.
LBTI and LBCs have been observing in binocular mode for some time. In the last few months, MODS have been successfully tested and used on-sky in binocular mode. While the facility instruments have been designed to work in pairs in binocular mode, the telescope can also be configured to use instruments in a ``mixed mode.'' These modes would see configurations such as MODS-1/LBC-R, MODS-2/LBC-B, LUCI-1/LBC-R, LUCI-2/LBC-B, or MODS/LUCI. Mixed-Mode use is desirable as it opens up a much wider wavelength range for scientific study (i.e. simultaneous UV and near-IR observations). As noted in Hill et al. (2014)\cite{2014SPIE.9145E..02H}, the two sides are not required to have precisely the same target or position angle for binocular mode to work. The telescope mount points near the mid-point between the two sides and the telescope software ``knows'' to avoid presets or small offsets that would violate the co-pointing limit (the maximum travel distance between the two mirrors). Currently, the co-pointing limit is set by software to be 40\hbox{$^{\prime\prime}$}\ (radius) apart in any direction. In effect, once the telescope mount has slewed to a set of coordinates, the two mirrors can effectively be treated as independent telescopes. Each side can dither as required by the science, so long as the two sides together don't violate the co-pointing limit.\\ \indent However, a current limitation of using Mixed-Mode is the ability to pass a binocular preset from two different instruments to the TCS. Since 2014, several combinations of Mixed-Mode have been used. Currently, different combinations require somewhat different setups and have different limitations. The first attempts in 2013 used LUCI-1/LBC-R and MODS-1/LBC-R. In the case of the former, the telescope is configured in binocular mode, and the TCS waits to receive a preset sent from each instrument before moving to the field.
Once there, LUCI-1 acquisitions are done normally (now using the RTD interface) and the script can dither as needed by near-IR observations, while LBC-R can either stare or dither. In the case of MODS-1/LBC-R, the telescope is set up in a hybrid configuration called ``pseudo-monocular.'' MODS-1 ``drives'' the mount, i.e. the preset is sent only by MODS-1 while LBC-R is ``along for the ride.'' On the MODS side, the acquisition is sent normally using the {\tt acqMODS} Perl script. Once the preset is successful, and the telescope is at the target field, the LBC-R is collimated using {\tt DOFPIA} and then a modified LBC script is executed that includes a value of -90$^{\circ}$ in the Declination coordinate, which is interpreted by the TCS as a flag to ignore the preset. Alignment and acquisition proceed as normal with {\tt modsAlign} and the observations are started using the Perl script {\tt execMODS}. Both sides can guide independently. Dithering can be done using the primary mirrors. Thus, both the dominant (in this case MODS) and the passive side (in this case LBC-R) can dither independently via the mirror. In normal LBC binocular mode, any dithering is done by the mount (which is faster), and not by the two mirrors. However, it was found that in pseudo-monocular mode when LBC (either Red or Blue) dithers using the mirror (which is slower than the mount), the start of the exposure does not wait for the slower move and collimation update from the primary mirror. Work to correct this is underway. Figure 8 plots the limits of the M1 (primary) and M2 (secondary) on SX (configured with MODS-1) with M1 on DX configured with LBC Red. The plot shows the co-pointing limits and how they are affected by the motions available to M1 and M2 together. 
\begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=11cm]{Figure8.eps} \end{tabular} \end{center} Figure 8: ({\it Left}) The co-pointing limits plot for MODS-1 (SX) and ({\it Right}) LBC Red (DX) with the telescope configured in pseudo-monocular mode averaged over one hour. The circles indicate the ranges of re-pointing available to each side of the telescope. Since LBC Red is at prime focus, only M1 is shown on the {\it Right}. The small blue diamond represents the current pointing. The XY range of M2 and the tip-tilt range of M1 provide the tightest constraints. The constraints shown here must be considered by PIs in designing their science programs when using Mixed-Mode or Binocular Mode with LUCI or MODS (LBCs dither by moving the mount only, while in Mixed-Mode LBC dithers are done by M1). \end{figure} \indent With LUCI-1 off the telescope, and with MODS-1 unavailable in November 2015, Mixed-Mode has been successfully tested with MODS-2/LBC-B and LUCI-2/LBC-B. As of this writing, LBTO has not tested a Mixed-Mode LUCI/MODS combination. Based on current limitations, this combination should, in principle, work in ``pseudo-monocular'' mode. Since LUCI observations (imaging and spectroscopy) require dithering, the most effective combination is LUCI as the dominant instrument and MODS in passive mode. This mode is scheduled to be tested in the near future. \section{Non-Sidereal Guiding} Non-sidereal guiding for all three facility instruments is accomplished using the NSIGUI (Non-Sidereal Instrument Graphical User Interface). Figure 9 shows the NSIGUI panels and all three tabs (left to right). The simplest method is to either load a properly formatted ephemeris, or to search for the non-sidereal target and retrieve the ephemeris using the JPL HORIZONS database (web interface: {\url http://ssd.jpl.nasa.gov/horizons.cgi}).
In cases where an ephemeris does not exist, the middle tab can be used to enter the information on UT time, celestial coordinates, and rates manually. The user then selects the ``SET'' button in the IIF Non-Sidereal Override Control, which sets an override flag in the TCS. Any preset that is sent from LBC, LUCI, or MODS is then overridden or ``hijacked'' by the non-sidereal coordinates from the NSIGUI. Instrument configurations are not affected by the GUI. Guiding and tracking then proceed at a non-sidereal rate. The third tab allows users to update the rates as might be needed during observations. Non-sidereal observations using the LBCs have been done on a regular basis for several semesters. Non-sidereal guiding with LUCI and MODS has been tested at rates up to $\sim$ 100\hbox{$^{\prime\prime}$}\ hour$^{-1}$, thus making it possible to obtain spectra of non-sidereal targets. \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=11.5cm]{Figure9.eps} \end{tabular} \end{center} Figure 9: ({\it Left}) The three tabs for the NSIGUI (Non-Sidereal Instrument Graphical User Interface). This GUI is used to ``hijack'' preset coordinates from the facility instruments and replace them with the coordinates determined from an ephemeris or input along with rates in Tab 2. ({\it Top Right}) Example of non-sidereal guiding with LUCI-1 of the asteroid Janesick (106\hbox{$^{\prime\prime}$}\ hour$^{-1}$, 300 sec total time on target - 20 $\times$ 15 sec exposures at {\it z}-band) obtained by O. Kuhn. ({\it Bottom Right}) LBC Red image of the near-Earth asteroid 2012 ER3 (100\hbox{$^{\prime\prime}$}\ hour$^{-1}$, 100 seconds at Sloan {\it r}) from Hill et al. (2012)\cite{2012SPIE.8444E..1AH}. \end{figure} \section{Summary} \label{sec:concluson} Although it was in 2014 that all of the facility instruments arrived and were installed on the telescope, it has been 2015 and 2016A that have been truly exciting.
For the first time, all facility instruments have been used on-sky to obtain scientific data. Two-thirds of the facility instruments are available and have been used in full binocular mode, and it is expected that full binocular observations will occur for {\it all} facility instruments in 2016B. Full binocular observations will be a paradigm shift for the nightly operations of LBTO. The parameter space of what LBT can do will significantly expand with full binocular mode, especially in the case of Mixed-Mode observations. Planning tools and scripting tools to address this are already in development and testing, led by the development of queue observing at LBT (see Edwards et al. in the companion proceedings 9910, Observatory Operations: Strategies, Processes, and Systems VI). Full binocular mode will also require a shift in how users propose and design observations to take full advantage of the capabilities of LBT. Information regarding the design and use of instrumentation can be found on the Science Operations webpages: {\url http://scienceops.lbto.org/sciops\_cookbook/} and {\url http://abell.as.arizona.edu/\~{}lbtsci/scihome.html} as well as the main LBT webpage: {\url www.lbto.org} which contains the latest updates and information on all that is happening at the observatory. \acknowledgments B.Rothberg would like to acknowledge a NASA Keck PI Data Award (Contract $\#$1462408), administered by JPL and the NASA Exoplanet Science Institute, for support of the work on intermediate redshift ULIRGs.
\section{Introduction} \label{sec:introduction} The primal-dual interior point method (IPM) is one of the most widely used methods for solving large linear programming problems. The method can be analysed and implemented as a path-following algorithm, in which the iterates follow a central trajectory toward the solution set, or as a potential reduction algorithm, which makes progress by systematically reducing a potential function. Most implementations make use of the path-following concept \cite{andersen1996}. This paper analyses a variant of the IPM that works with inexactly computed step directions. Inexact directions occur in implementations which solve the linear equation systems by iterative methods. The analysis given here is closely related to such an implementation and provides criteria to control the level of inexactness in the computation. The paper introduces two interior point algorithms that work with inexact directions. The first algorithm requires a strictly feasible starting point and keeps all iterates feasible. The second algorithm can start from an infeasible point and achieves feasibility in the limit. Both algorithms are formulated and analysed as potential reduction methods. It is proved that in both cases the \emph{inexact} methods retain the convergence and complexity bounds of the \emph{exact} ones. The linear program is stated in standard form of a primal-dual pair \begin{alignat}{2} \label{eq:primal} &\text{minimize }\+c^T\x &\quad& \text{subject to }A\x=\+b,\;\x\ge\0, \\ \label{eq:dual} &\text{maximize }\+b^T\y && \text{subject to }A^T\y+\z=\+c,\;\z\ge\0, \end{alignat} in which $A$ is an $m\times n$ matrix of full row rank. 
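The primal-dual pair above can be illustrated numerically. The sketch below is not part of the text: the problem data are arbitrary, chosen only for illustration, and it assumes SciPy is available. It solves a small standard-form instance and its dual with {\tt scipy.optimize.linprog} and checks that the optimal objective values coincide (strong duality), and that the recovered dual slack $\z = \+c - A^T\y$ is nonnegative.

```python
import numpy as np
from scipy.optimize import linprog

# A small standard-form instance (data chosen only for illustration).
# Columns 3 and 4 act as slacks, so A has full row rank.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

# Primal: minimize c^T x subject to A x = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")

# Dual: maximize b^T y subject to A^T y + z = c, z >= 0,
# i.e. A^T y <= c with y free; linprog minimizes, so negate b.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2, method="highs")

# Dual slack variable recovered from the feasibility equation.
z = c - A.T @ dual.x

print(primal.fun, -dual.fun)
```

By strong duality for feasible bounded linear programs, {\tt primal.fun} and {\tt -dual.fun} agree to solver tolerance.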
An IPM generates a sequence of iterates $\rvec{\xk,\yk,\zk}$ by taking steps along the Newton direction to the nonlinear system \begin{equation} \label{eq:nonlinear} F(\x,\y,\z) := \pmat{A\x-\+b \\ A^T\y+\z-\+c \\ X\z-\mu\e} = \0, \end{equation} in which $X:=\diag(\x)$, $\e$ is the $n$-vector of ones and $\mu>0$ is a parameter that is gradually reduced to zero. The step directions are computed from the linear system \begin{equation} \label{eq:newton} \bmat{A&0&0 \\ 0&A^T&I \\ Z^k&0&X^k} \pmat{\dx^* \\ \dy^* \\ \dz^*} = \pmat{\+b-A\xk \\ \+c-A^T\yk-\zk \\ -X^k\zk+\mu\e}, \end{equation} in which $X^k:=\diag(\xk)$ and $Z^k:=\diag(\zk)$. The step sizes are chosen to keep $\xk$ and $\zk$ positive. The potential reduction method is a particular instance of the IPM. It sets $\mu=(\xk)^T\zk/(n+\nu)$ for a constant $\nu\ge\sqrt{n}$ and chooses a step size to decrease a potential function by at least a certain constant. This paper uses the Tanabe-Todd-Ye potential function \cite{tanabe1988,todd1990} \begin{equation*} \phi(\x,\z) := (n+\nu)\ln(\x^T\z) - \sum_{i=1}^n\ln(x_iz_i) -n\ln n. \end{equation*} The inexact methods work with step directions of the form \begin{equation} \label{eq:inewton} \bmat{A&0&0 \\ 0&A^T&I \\ Z^k&0&X^k} \pmat{\dx \\ \dy \\ \dz} = \pmat{\+b-A\xk \\ \+c-A^T\yk-\zk \\ -X^k\zk+\mu\e+\+\xi_0}, \end{equation} in which a residual $\+\xi_0$ remains in the complementarity equations. The primal and dual feasibility equations must be satisfied exactly. Conditions will be imposed on $\+\xi_0$ to guarantee that the step decreases $\phi$ sufficiently. \section{The Inexact Potential Reduction Method} \label{sec:algorithm} Considering one iterate $\rvec{\xk,\yk,\zk}$, we define diagonal matrices \begin{equation*} D:=(X^k)^{1/2}(Z^k)^{-1/2}, \quad W:=(X^kZ^k)^{1/2}, \end{equation*} and $\w:=W\e$. 
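The potential function and the scaling quantities just defined are straightforward to evaluate in code. The following numpy sketch (the iterate values and the choice $\nu=\sqrt{n}$ are arbitrary, for illustration only) computes $\mu$, $D$, $W$, $\w$, and $\phi$ as defined above, and checks the identities $D\zk=\w$ and $D^{-1}\xk=\w$ that follow directly from the definitions.

```python
import numpy as np

def potential(x, z, nu):
    """Tanabe-Todd-Ye potential phi(x, z) as defined above."""
    n = len(x)
    return (n + nu) * np.log(x @ z) - np.sum(np.log(x * z)) - n * np.log(n)

# An arbitrary strictly positive iterate (for illustration only).
x = np.array([1.0, 2.0, 0.5])
z = np.array([0.5, 1.0, 2.0])
n = len(x)
nu = np.sqrt(n)              # any nu >= sqrt(n) is admissible

mu = (x @ z) / (n + nu)      # target complementarity parameter
D = np.diag(np.sqrt(x / z))  # D = X^{1/2} Z^{-1/2}
W = np.diag(np.sqrt(x * z))  # W = (X Z)^{1/2}
w = np.sqrt(x * z)           # w = W e, so x_i z_i = w_i^2

print(potential(x, z, nu), mu)
```

Since $D\zk = (XZ)^{1/2}\e = \w$ and $D^{-1}\xk = \w$, the scaled step $\du=D^{-1}\dx$, $\dv=D\dz$ places both primal and dual iterates on the common scale $\w$.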
To analyse the step directions it is convenient to write the Newton system \eqref{eq:newton} in the scaled quantities $\du^*:=D^{-1}\dx^*$ and $\dv^*:=D\dz^*$, which is \begin{equation} \label{eq:snewton} \bmat{AD&0&0 \\ 0&DA^T&I \\ I&0&I} \pmat{\du^* \\ \dy^* \\ \dv^*} = \pmat{\+b-A\xk \\ D(\+c-A^T\yk-\zk) \\ -\w+\mu W^{-1}\e} =: \pmat{\+p \\ \+q \\ \+r}. \end{equation} The inexact solution corresponding to a residual $\+\xi$ in the scaled system then satisfies \begin{equation} \label{eq:isnewton} \bmat{AD&0&0 \\ 0&DA^T&I \\ I&0&I} \pmat{\du \\ \dy \\ \dv} = \pmat{\+p \\ \+q \\ \+r+\+\xi}. \end{equation} The inexact potential reduction algorithms make use of the following conditions on the residual, in which $\kappa\in[0,1)$ and $\norm{\cdot}$ is the Euclidean norm: \begin{subequations} \begin{align} \label{eq:cond1} -\+r^T\+\xi &\le \kappa \norm{\+r}^2, \\ \label{eq:cond2} \norm{\+\xi} &\le \kappa \min(\norm{\du},\norm{\dv}), \\ \label{eq:cond3} -\w^T\+\xi &\le \kappa n/(n+\nu) \norm{\w}^2. \end{align} \end{subequations} Algorithm~\ref{alg:feasible} is the inexact version of the feasible potential reduction method described in \cite{kojima1991,wright1997}. All iterates belong to the strictly feasible set \begin{equation*} \mathcal{F}^o := \left\{(\x,\y,\z): A\x=\+b, A^T\y+\z=\+c, (\x,\z)>0\right\}, \end{equation*} which is assumed to be nonempty. The algorithm does not require condition \eqref{eq:cond3}. \begin{algorithm} \label{alg:feasible} Given $\rvec{\x^0,\y^0,\z^0}\in\mathcal{F}^o$ and $\eps>0$. Choose $\nu\ge\sqrt{n}$ and $\kappa\in[0,1)$. Set $\delta:=0.15(1-\kappa)^4$ and $k:=0$. \begin{enumerate} \item If $(\xk)^T\zk\le\eps$ then stop. \item Let $\mu:=(\xk)^T\zk/(n+\nu)$. Compute the solution to \eqref{eq:isnewton} with residual $\+\xi$ that satisfies \eqref{eq:cond1}--\eqref{eq:cond2}. Set $\dx:=D\du$ and $\dz:=D^{-1}\dv$. 
\item Find step size $\alpha^k$ such that \begin{equation} \label{eq:stepsize1} \phi(\xk+\alpha^k\dx, \zk+\alpha^k\dz) \le \phi(\xk,\zk)-\delta. \end{equation} \item Set $\rvec{\x^{k+1},\y^{k+1},\z^{k+1}} := \rvec{\xk,\yk,\zk} + \alpha^k\rvec{\dx,\dy,\dz}$, $k:=k+1$ and go to 1. \end{enumerate} \end{algorithm} The following theorem, which is proved in Section~\ref{sec:proof1}, states that Algorithm~\ref{alg:feasible} retains the complexity bound of the exact version analysed in \cite{kojima1991,wright1997}. \begin{theorem} \label{thm:feasible} Let $\rvec{\x^0,\y^0,\z^0}\in\mathcal{F}^o$ and $L\ge0$ such that $\phi(\x^0,\z^0)=O(\nu L)$. Suppose that $\ln(1/\eps)=O(L)$. Then Algorithm~\ref{alg:feasible} terminates in $O(\nu L)$ iterations provided that $\kappa$ is chosen independently of $n$. \end{theorem} Algorithm~\ref{alg:infeasible} is an infeasible inexact potential reduction method, as its sequence of iterates does not, in general, belong to $\mathcal{F}^o$. It extends Algorithm~1 from \cite{mizuno1995} to work with inexact directions. Given positive constants $\rho$ and $\eps$, it finds $\eps$-accurate approximations to solutions $\x^*$ to \eqref{eq:primal} and $\rvec{\y^*,\z^*}$ to \eqref{eq:dual}, if they exist, such that \begin{equation*} \norm{\rvec{\x^*,\z^*}}_\infty \le \rho. \end{equation*} \begin{algorithm} \label{alg:infeasible} Given $\rho>0$ and $\eps>0$. Set $\rvec{\x^0,\y^0,\z^0}=\rho\rvec{\e,\0,\e}$. Choose $\sqrt{n}\le\nu\le2n$ and $\kappa\in[0,1)$. Set $\delta:=(1-\kappa)^4/(1600(n+\nu)^2)$ and $k:=0$. \begin{enumerate} \item If $(\xk)^T\zk\le\eps$ then stop. \item Let $\mu:=(\xk)^T\zk/(n+\nu)$. Compute the solution to \eqref{eq:isnewton} with residual $\+\xi$ that satisfies \eqref{eq:cond1}--\eqref{eq:cond3}. Set $\dx:=D\du$ and $\dz:=D^{-1}\dv$. 
\item Find step size $\alpha^k$ such that \begin{subequations} \begin{align} \label{eq:step2a} \phi(\xk+\alpha^k\dx, \zk+\alpha^k\dz) &\le \phi(\xk,\zk)-\delta, \\ \label{eq:step2b} (\xk+\alpha^k\dx)^T(\zk+\alpha^k\dz) &\ge (1-\alpha^k) (\xk)^T\zk. \end{align} \end{subequations} If no such step size exists then stop. \item Set $\rvec{\x^{k+1},\y^{k+1},\z^{k+1}} := \rvec{\xk,\yk,\zk} + \alpha^k\rvec{\dx,\dy,\dz}$, $k:=k+1$ and go to 1. \end{enumerate} \end{algorithm} The following theorem, which is proved in Section~\ref{sec:proof2}, states that Algorithm~\ref{alg:infeasible} retains the complexity bound of the exact infeasible potential reduction method \cite{mizuno1995}. \begin{theorem} \label{thm:infeasible} Let $L\ge\ln n$ such that $\ln\rho=O(L)$. Suppose that $\ln(1/\eps)=O(L)$. Then Algorithm~\ref{alg:infeasible} terminates in $O(\nu(n+\nu)^2L)$ iterations provided that $\kappa$ is chosen independently of $n$. If the algorithm stops in step 1 then the iterate is an $\eps$-approximate solution; otherwise it stops in step 3 showing that there are no optimal solutions $\x^*$ to \eqref{eq:primal} and $\rvec{\y^*,\z^*}$ to \eqref{eq:dual} such that $\norm{\rvec{\x^*,\z^*}}_\infty\le\rho$. \end{theorem} The following lemma is key to the analysis of the inexact potential reduction methods given in the next two sections. It exploits the particular form of the scaled Newton system to prove that condition \eqref{eq:cond2} bounds the \emph{relative error} in the inexact solution. \begin{lemma} \label{lem:relerr} Given solutions to \eqref{eq:snewton} and \eqref{eq:isnewton}, suppose that \eqref{eq:cond2} holds for $\kappa\in[0,1)$. Then \begin{equation*} \frac{\norm{\du-\du^*}}{\norm{\du^*}} \le \frac{\kappa}{1-\kappa}, \quad \frac{\norm{\dv-\dv^*}}{\norm{\dv^*}} \le \frac{\kappa}{1-\kappa}.
\end{equation*} \end{lemma} \begin{proof} Denoting $P:=DA^T(AD^2A^T)^{-1}AD$, the solution to \eqref{eq:snewton} is \begin{subequations} \begin{align*} \du^* &= DA^T(AD^2A^T)^{-1}\+p - (I-P)\+q + (I-P)\+r, \\ \dy^* &= (AD^2A^T)^{-1}(\+p + AD\+q - AD\+r), \\ \dv^* &= -DA^T(AD^2A^T)^{-1}\+p + (I-P)\+q + P\+r. \end{align*} \end{subequations} It follows that \begin{align*} \du-\du^* &= (I-P)\+\xi, \\ \dv-\dv^* &= P\+\xi. \end{align*} Because $P$ and $(I-P)$ are projection operators, $\norm{P}\le1$ and $\norm{(I-P)}\le1$. Therefore the absolute errors are bounded by the norm of the residual, \begin{align*} \norm{\du-\du^*} &\le \norm{\+\xi}, \\ \norm{\dv-\dv^*} &\le \norm{\+\xi}. \end{align*} On the other hand, it follows from the triangle inequality and \eqref{eq:cond2} that \begin{subequations} \begin{align} \label{eq:ubound} \norm{\du^*} &= \norm{\du-(I-P)\+\xi} \ge \norm{\du}-\norm{\+\xi} \ge (1-\kappa)\norm{\du}, \\ \label{eq:vbound} \norm{\dv^*} &= \norm{\dv-P\+\xi} \ge \norm{\dv}-\norm{\+\xi} \ge (1-\kappa)\norm{\dv}. \end{align} \end{subequations} Combining both inequalities and \eqref{eq:cond2} gives \begin{align*} \frac{\norm{\du-\du^*}}{\norm{\du^*}} &\le \frac{\norm{\+\xi}}{\norm{\du^*}} \le \frac{\kappa\norm{\du}}{(1-\kappa)\norm{\du}} = \frac{\kappa}{1-\kappa}, \\ \frac{\norm{\dv-\dv^*}}{\norm{\dv^*}} &\le \frac{\norm{\+\xi}}{\norm{\dv^*}} \le \frac{\kappa\norm{\dv}}{(1-\kappa)\norm{\dv}} = \frac{\kappa}{1-\kappa} \end{align*} as claimed. \end{proof} \section{Proof of Theorem~\ref{thm:feasible}} \label{sec:proof1} This and the next section use two technical results from Mizuno, Kojima and Todd \cite{mizuno1995}, which are stated in the following two lemmas. 
\begin{lemma} \label{lem:qbound} For any $n$-vectors $\x>0$, $\z>0$, $\dx$, $\dz$ and $\alpha>0$ such that $\norm{\alpha X^{-1}\dx}_\infty \le \tau$ and $\norm{\alpha Z^{-1}\dz}_\infty \le \tau$ for a constant $\tau\in(0,1)$ it holds true that \begin{equation*} \phi(\x+\alpha\dx,\z+\alpha\dz) \le \phi(\x,\z) + g_1\alpha + g_2\alpha^2 \end{equation*} with coefficients \begin{align*} g_1 &= \lt(\frac{n+\nu}{\x^T\z}\e-(XZ)^{-1}\e\rt)^T (Z\dx+X\dz), \\ g_2 &= (n+\nu)\frac{\dx^T\dz}{\x^T\z} + \frac{\norm{X^{-1}\dx}^2+\norm{Z^{-1}\dz}^2}{2(1-\tau)}. \end{align*} \end{lemma} \begin{lemma} \label{lem:wbound} For any $n$-vector $\w>0$ and $\nu\ge\sqrt{n}$ \begin{equation*} \lt\lVert W^{-1}\e-\frac{n+\nu}{\w^T\w}\w\rt\rVert \ge \frac{\sqrt{3}}{2\wmin}, \end{equation*} where $W:=\diag(\w)$ and $\wmin:=\min_i w_i$. \end{lemma} Applying Lemma~\ref{lem:wbound} to the vector $\+r$ defined in \eqref{eq:snewton} shows that \begin{equation} \label{eq:rbound} \norm{\+r}=\norm{-\w+\mu W^{-1}\e}=\mu\norm{-\frac{1}{\mu}\w+W^{-1}\e} \ge \mu\frac{\sqrt{3}}{2\wmin}. \end{equation} The following lemma extends the analysis of the feasible potential reduction method given in \cite{wright1997}. It shows that Algorithm~\ref{alg:feasible} finds a step size that reduces $\phi$ by at least the prescribed value in each iteration. \begin{lemma} \label{lem:deltaphi1} In the $k$-th iteration of Algorithm~\ref{alg:feasible} \eqref{eq:stepsize1} holds for \begin{equation*} \alpha:=\frac{\wmin}{2\norm{\+r}}(1-\kappa)^3, \end{equation*} where $\wmin:=\min_i\sqrt{x_i^kz_i^k}$. \end{lemma} \begin{proof} It follows from the first two block equations in \eqref{eq:isnewton} and $\+p=\0$, $\+q=\0$ that \begin{equation*} \du^T\dv = -\du^TDA^T\dy = -(AD\du)^T\dy = 0, \end{equation*} and analogously $(\du^*)^T\dv^*=0$ from \eqref{eq:snewton}. 
Therefore $\norm{\du^*}^2+\norm{\dv^*}^2=\norm{\+r}^2$ and from \eqref{eq:ubound}, \eqref{eq:vbound} and the definition of $\alpha$ \begin{align*} \norm{\alpha X^{-1}\dx}_\infty &\le \alpha\norm{W^{-1}}\norm{\du} \le \frac{\alpha}{\wmin} \frac{\norm{\du^*}}{1-\kappa} \le \frac{\alpha}{\wmin} \frac{\norm{\+r}}{1-\kappa} \le \frac{1}{2}, \\ \norm{\alpha Z^{-1}\dz}_\infty &\le \alpha\norm{W^{-1}}\norm{\dv} \le \frac{\alpha}{\wmin} \frac{\norm{\dv^*}}{1-\kappa} \le \frac{\alpha}{\wmin} \frac{\norm{\+r}}{1-\kappa} \le \frac{1}{2}. \end{align*} Therefore $\tau:=1/2$ satisfies the assumptions of Lemma~\ref{lem:qbound}, so that \begin{equation*} \phi(\xk+\alpha\dx,\zk+\alpha\dz) - \phi(\xk,\zk) \le g_1\alpha + g_2\alpha^2 \end{equation*} with coefficients \begin{align*} g_1 &= \lt( \frac{n+\nu}{\w^T\w}\e-W^{-2}\e \rt)^T W(\du+\dv) \\ g_2 &= \norm{W^{-1}\du}^2+\norm{W^{-1}\dv}^2. \end{align*} To show that $\phi$ is sufficiently reduced along the direction $\rvec{\dx,\dz}$ it is necessary to show that $g_1$ is negative and bounded away from zero, while $g_2$ is bounded. From the definition of $\+r$ and condition \eqref{eq:cond1} it follows that \begin{subequations} \begin{align} \label{eq:g1} g_1 &= \lt(\frac{n+\nu}{\w^T\w}\w-W^{-1}\e\rt)^T(\du+\dv) = -\frac{n+\nu}{\w^T\w}\+r^T(\+r+\+\xi) \\ \label{eq:g1bound} &\le -(1-\kappa)\frac{n+\nu}{\w^T\w}\norm{\+r}^2. \end{align} \end{subequations} For the second order term it follows from \eqref{eq:ubound}, \eqref{eq:vbound} that \begin{align*} g_2 &= \norm{W^{-1}\du}^2+\norm{W^{-1}\dv}^2 \le \frac{1}{\wmin^2} \lt(\norm{\du}^2+\norm{\dv}^2\rt) \\ &\le \frac{\norm{\du^*}^2+\norm{\dv^*}^2}{\wmin^2(1-\kappa)^2} = \frac{\norm{\+r}^2}{\wmin^2(1-\kappa)^2}. 
\end{align*} Inserting the bounds on $g_1$ and $g_2$ into the quadratic form and using the definition of $\alpha$ gives \begin{align*} \phi(\xk&+\alpha\dx,\zk+\alpha\dz) - \phi(\xk,\zk) \\ &\le -(1-\kappa)\frac{n+\nu}{\w^T\w}\norm{\+r}^2 \alpha + \frac{\norm{\+r}^2}{\wmin^2(1-\kappa)^2} \alpha^2 \\ &= -(1-\kappa)^4\frac{n+\nu}{\+w^T\+w}\frac{\wmin}{2}\norm{\+r} + \frac{(1-\kappa)^4}{4}. \end{align*} Finally, using the bound on $\norm{\+r}$ from \eqref{eq:rbound} gives \begin{align*} \phi(\xk&+\alpha\dx,\zk+\alpha\dz) - \phi(\xk,\zk) \\ &\le (1-\kappa)^4 \lt(-\frac{\sqrt{3}}{4}+\frac{1}{4}\rt) \\ &\le -0.15(1-\kappa)^4 = -\delta \end{align*} as claimed. \end{proof} The proof of Theorem~\ref{thm:feasible} is immediate. Since $\phi(\x,\z)\ge\nu\ln(\x^T\z)$, the termination condition \begin{equation*} \nu \ln\lt((\xk)^T\zk\rt) \le \nu \ln(\eps) \end{equation*} is satisfied when \begin{equation} \label{eq:phik} \phi(\xk,\zk) \le \phi(\x^0,\z^0)-k\delta \le \nu\ln(\eps). \end{equation} Since under the assumption of the theorem $\phi(\x^0,\z^0)=O(\nu L)$ and $\ln(1/\eps)=O(L)$, and since $\delta$ is independent of $n$, \eqref{eq:phik} holds for $k\ge K=O(\nu L)$. \section{Proof of Theorem~\ref{thm:infeasible}} \label{sec:proof2} The proof of the theorem is based on Mizuno, Kojima and Todd \cite{mizuno1995}. We define a sequence $\{\theta^k\}$ by \begin{equation} \label{eq:theta} \theta^0:=1 \quad \text{and} \quad \theta^{k+1}:=(1-\alpha^k) \theta^k \text{ for } k\ge0. \end{equation} Since the first two block equations in \eqref{eq:nonlinear} are linear and satisfied exactly by a full step of the algorithm \begin{equation*} \rvec{A\xk-\+b,A^T\yk+\zk-\+c}=\theta^k \rvec{A\x^0-\+b,A^T\y^0+\z^0-\+c}. \end{equation*} The following lemma is obtained from Lemma~4 in \cite{mizuno1995} by setting $\gamma_0=1$ and $\gamma_1=1$. 
\begin{lemma} \label{lem:uvbound2} Let $\rho>0$ and suppose that \begin{align} \rvec{\x^0,\y^0,\z^0} &= \rho \rvec{\e,\0,\e}, \notag \\ \rvec{A\xk-\+b,A^T\yk+\zk-\+c} &= \theta^k \rvec{A\x^0-\+b,A^T\y^0+\z^0-\+c}, \notag \\ \label{eq:feas} (\xk)^T\zk &\ge \theta^k (\x^0)^T\z^0. \end{align} If there exist solutions $\x^*$ to \eqref{eq:primal} and $\rvec{\y^*,\z^*}$ to \eqref{eq:dual} such that $\norm{\rvec{\x^*,\z^*}}_\infty\le\rho$ then the solution to \eqref{eq:snewton} at $\rvec{\xk,\yk,\zk}$ satisfies \begin{align*} \norm{\du^*} &\le \frac{5(\xk)^T\zk}{\wmin}, \\ \norm{\dv^*} &\le \frac{5(\xk)^T\zk}{\wmin}, \end{align*} where $\wmin:=\min_i\sqrt{x^k_iz^k_i}$. \end{lemma} The following lemma is based on Lemma~5 in \cite{mizuno1995}. It shows that when optimal solutions to \eqref{eq:primal} and \eqref{eq:dual} exist, then Algorithm~\ref{alg:infeasible} can find a step size in each iteration that satisfies \eqref{eq:step2a} and \eqref{eq:step2b}. \begin{lemma} \label{lem:minstep2} If there exist optimal solutions $\x^*$ to \eqref{eq:primal} and $\rvec{\y^*,\z^*}$ to \eqref{eq:dual} such that $\norm{\rvec{\x^*,\z^*}}_\infty\le\rho$ then \eqref{eq:step2a} and \eqref{eq:step2b} hold for \begin{equation*} \alpha := \frac{(1-\kappa)^3\wmin^2}{200(n+\nu)(\xk)^T\zk} \end{equation*} in the $k$-th iteration, where $\wmin:=\min_i\sqrt{x^k_iz^k_i}$. \end{lemma} \begin{proof} A simple calculation shows that by definition of $\rvec{\x^0,\z^0}$ and because of \eqref{eq:step2b} the assumptions of Lemma~\ref{lem:uvbound2} are satisfied. Combining the lemma with \eqref{eq:ubound}, \eqref{eq:vbound} shows that \begin{align*} \norm{\du} &\le \frac{5(\xk)^T\zk}{(1-\kappa)\wmin}, \\ \norm{\dv} &\le \frac{5(\xk)^T\zk}{(1-\kappa)\wmin}. 
\end{align*} It follows that \begin{align*} \norm{\alpha X^{-1}\dx} &\le \alpha \norm{W^{-1}} \norm{\du} \le \alpha \frac{5(\xk)^T\zk}{(1-\kappa)\wmin^2} = \frac{(1-\kappa)^2}{40(n+\nu)} \le \frac{1}{40}, \\ \norm{\alpha Z^{-1}\dz} &\le \alpha \norm{W^{-1}} \norm{\dv} \le \alpha \frac{5(\xk)^T\zk}{(1-\kappa)\wmin^2} = \frac{(1-\kappa)^2}{40(n+\nu)} \le \frac{1}{40}. \end{align*} Therefore $\tau:=1/40$ satisfies the assumption of Lemma~\ref{lem:qbound}, so that \begin{equation*} \phi(\xk+\alpha\dx,\zk+\alpha\dz) \le \phi(\xk,\zk) + g_1\alpha + g_2\alpha^2 \end{equation*} with coefficients \begin{align*} g_1 &= \lt(\frac{n+\nu}{\w^T\w}\e-W^{-2}\e\rt)^T W(\du+\dv), \\ g_2 &= \lt((n+\nu)\frac{\du^T\dv}{\w^T\w} + \frac{\norm{W^{-1}\du}^2+\norm{W^{-1}\dv}^2}{2(1-\tau)}\rt). \end{align*} It will be shown that $g_1$ is negative and bounded away from zero, while $g_2$ is bounded. Combining \eqref{eq:g1bound} and \eqref{eq:rbound} gives \begin{equation*} g_1 \le -(1-\kappa)\frac{1}{\mu}\norm{\+r}^2 \le -(1-\kappa)\mu\frac{3}{4\wmin^2}. \end{equation*} Next, from the bound on $\norm{\du}$ and $\norm{\dv}$ it follows that \begin{equation} \label{eq:dudv} |\du^T\dv| \le \norm{\du}\norm{\dv} \le \lt( \frac{5\w^T\w}{(1-\kappa)\wmin} \rt)^2, \end{equation} which implies that \begin{equation} \label{eq:g2a} (n+\nu)\frac{\du^T\dv}{\w^T\w} \le \frac{n+\nu}{\w^T\w}\lt( \frac{5\w^T\w}{(1-\kappa)\wmin} \rt)^2 \le \frac{n+\nu}{n} \lt( \frac{5\w^T\w}{(1-\kappa)\wmin^2} \rt)^2, \end{equation} where the last inequality is obtained by multiplying with $\w^T\w/(n\wmin^2)\ge1$. Moreover, the bound on $\norm{\du}$ and $\norm{\dv}$ also implies that \begin{equation} \label{eq:g2b} \frac{\norm{W^{-1}\du}^2+\norm{W^{-1}\dv}^2}{2(1-\tau)} \le \frac{1}{1-\tau} \lt( \frac{5\w^T\w}{(1-\kappa)\wmin^2} \rt)^2. 
\end{equation} Adding up \eqref{eq:g2a} and \eqref{eq:g2b} and using $\nu\le2n$ gives \begin{equation*} g_2 \le \lt(\frac{n+\nu}{n}+\frac{1}{1-\tau}\rt) \lt( \frac{5\w^T\w}{(1-\kappa)\wmin^2} \rt)^2 \le 5 \lt( \frac{5\w^T\w}{(1-\kappa)\wmin^2} \rt)^2. \end{equation*} Inserting $g_1$, $g_2$ and the definition of $\alpha$ into the quadratic form gives \begin{align*} \phi(\xk&+\alpha\dx,\zk+\alpha\dz) - \phi(\xk,\zk) \\ &\le -(1-\kappa)\frac{\w^T\w}{n+\nu}\frac{3}{4\wmin^2} \alpha + 5 \lt( \frac{5\w^T\w}{(1-\kappa)\wmin^2} \rt)^2 \alpha^2 \\ &= \frac{(1-\kappa)^4}{(n+\nu)^2} \lt(-\frac{3}{4\cdot200} + 5 \lt(\frac{5}{200}\rt)^2 \rt) = -\delta, \end{align*} which shows that $\alpha$ satisfies \eqref{eq:step2a}. Finally, to verify that $\alpha$ satisfies \eqref{eq:step2b}, a straightforward calculation shows that \begin{equation*} \dz^T\xk + \dx^T\zk = \dv^T\w + \du^T\w = \w^T(\+r+\+\xi) = \lt(\frac{n}{n+\nu}-1\rt)\w^T\w + \w^T\+\xi \end{equation*} and consequently \begin{multline*} (\xk+\alpha\dx)^T(\zk+\alpha\dz) = (\xk)^T\zk + \alpha(\dz^T\xk+\dx^T\zk) + \alpha^2 \dx^T\dz \\ = (1-\alpha)\w^T\w + \alpha\lt(\frac{n}{n+\nu}\w^T\w + \w^T\+\xi + \alpha\du^T\dv\rt). \end{multline*} Using \eqref{eq:dudv} and \eqref{eq:cond3} it follows for the term in parenthesis that \begin{align*} \frac{n}{n+\nu}\w^T\w + \w^T\+\xi + \alpha\du^T\dv &\ge \frac{(1-\kappa)n}{n+\nu}\w^T\w - \alpha\lt(\frac{5\w^T\w} {(1-\kappa)\wmin}\rt)^2 \\ &= \frac{(1-\kappa)\w^T\w}{n+\nu}\lt(n-\frac{1}{8}\rt) > 0. \end{align*} Therefore $\alpha$ satisfies \eqref{eq:step2b}, which completes the proof. \end{proof} Theorem~\ref{thm:infeasible} follows from the lemma by the same argumentation as in \cite{mizuno1995}. Under the hypothesis of the theorem $\phi(\x^0,\z^0)=O(\nu L)$ and $\ln(1/\eps)=O(L)$. Since $\phi(\x,\z)\ge\nu\ln(\x^T\z)$ and the potential function decreases by at least $\delta$ in each iteration, Algorithm~\ref{alg:infeasible} terminates in $O(\nu L/\delta)=O(\nu(n+\nu)^2L)$ iterations. 
When the algorithm stops in step 1, then $(\xk)^T\zk\le\eps$ and because of \eqref{eq:step2b} \begin{equation*} \norm{\rvec{A\xk-\+b,A^T\yk+\zk-\+c}} \le \eps \norm{\rvec{A\x^0-\+b,A^T\y^0+\z^0-\+c}}/(\x^0)^T\z^0, \end{equation*} so that the final iterate is indeed an $\eps$-approximate solution. On the other hand, if there exist optimal solutions $\x^*$ to \eqref{eq:primal} and $\rvec{\y^*,\z^*}$ to \eqref{eq:dual} such that $\norm{\rvec{\x^*,\z^*}}_\infty\le\rho$, then it follows from Lemma~\ref{lem:minstep2} that a step size exists which satisfies \eqref{eq:step2a} and \eqref{eq:step2b}. Therefore, if the algorithm stops in step 3, then there are no such solutions. \begin{remark} Theorem~\ref{thm:infeasible} imposes the upper bound $\nu\le2n$, which is not needed in the analysis of the exact potential reduction method. The actual value of this bound, however, is not important, and the proof remains valid by adapting $\alpha$ and $\delta$ as long as $\nu=O(n)$. \end{remark} \section{Discussion} \label{sec:discussion} The analysis has given some insight into the conditions \eqref{eq:cond1}--\eqref{eq:cond3}. It has been seen from \eqref{eq:g1} that $-\+r^T\+\xi<\norm{\+r}^2$ is necessary and sufficient for $\rvec{\dx,\dz}$ to be a descent direction for $\phi$, making \eqref{eq:cond1} a necessary condition in a potential reduction method. Condition \eqref{eq:cond2} bounds the curvature of $\phi$ along $\rvec{\dx,\dz}$. When the iterate is feasible this condition can be replaced by $\norm{\+\xi}\le c\norm{\+r}$ for an arbitrary constant $c$, since then \begin{equation*} \norm{\du}^2+\norm{\dv}^2=\norm{\+r+\+\xi}^2 \le (1+c)^2\norm{\+r}^2 \end{equation*} gives the required bound on $g_2$ in Lemma~\ref{lem:deltaphi1}. For an infeasible iterate, however, condition \eqref{eq:cond2} is needed in this form to bound $\norm{\du}$ and $\norm{\dv}$.
Finally, condition \eqref{eq:cond3} guarantees that in the infeasible algorithm the step size restriction \eqref{eq:step2b} can be satisfied. Inexact directions of the form \eqref{eq:inewton} have been used and analysed in \cite{monteiro2003,gondzio2009} in the path-following method, which sets $\mu=\sigma\x^T\z/n$ for $\sigma<1$ and chooses the step size to keep $x_iz_i\ge\gamma\x^T\z/n$ for a constant $\gamma\in(0,1)$. Both papers use a basic-nonbasic splitting of the variables and solve \eqref{eq:isnewton} with residual $\+\xi=\rvec{\+\xi_B,\+\xi_N}=\rvec{\+\xi_B,\0}$. \cite{monteiro2003} imposes the condition \begin{equation} \label{eq:monteiro} \norm{\+\xi_B} \le \frac{(1-\gamma)\sigma}{4\sqrt{n}}\sqrt{\x^T\z/n}, \end{equation} whereas \begin{equation} \label{eq:gondzio} \norm{W_B\+\xi_B}_\infty \le \eta \x^T\z/n \end{equation} is used in \cite{gondzio2009} with $\eta<1$ depending on $\sigma$ and $\gamma$. Both conditions seem to require more effort by an iterative method than the conditions used in this paper. \eqref{eq:monteiro} obviously becomes restrictive for large problems. \eqref{eq:gondzio} is not affected by the problem dimension, but the infinity norm does not tolerate outliers in $W_B\+\xi_B$. Another form of inexact direction has been analysed in \cite{mizuno1999}, which solves the complementarity equations exactly and allows a residual in the primal and dual equations. Due to the form of the Newton system, solving the complementarity equations exactly is trivial, whereas computing directions that satisfy primal feasibility requires particular preconditioning techniques \cite{gondzio2008,monteiro2003,oliveira2005}. The analysis in \cite{mizuno1999} shows, however, that a residual in the feasibility equations must be measured in a norm depending on $A$, which seems not to be accessible in an implementation. Therefore this form of inexact direction is hardly useful in practice.
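In an implementation that computes the directions with an iterative linear solver, conditions \eqref{eq:cond1}--\eqref{eq:cond3} translate into a concrete acceptance test for the current residual. A minimal sketch in Python (illustrative names; the caller supplies the current scaled quantities, and the third condition is only required by the infeasible algorithm):

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def residual_acceptable(xi, r, du, dv, w, nu, kappa):
    """Accept the inexact direction if the residual xi satisfies the three
    conditions of the paper (Euclidean norms throughout)."""
    n = len(w)
    cond1 = -dot(r, xi) <= kappa * norm(r) ** 2
    cond2 = norm(xi) <= kappa * min(norm(du), norm(dv))
    cond3 = -dot(w, xi) <= kappa * n / (n + nu) * norm(w) ** 2
    return cond1 and cond2 and cond3
```

In practice such a test would be evaluated after each inner iteration of the linear solver, with the solver continuing until the test passes.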
\section{Introduction} One of the four large experiments operating at the Large Hadron Collider (LHC) is A Large Ion Collider Experiment (ALICE)~\cite{Aamodt:2008zz}. It is the experiment dedicated to studying the properties and behavior of strongly interacting matter, the Quark-Gluon Plasma (QGP)~\cite{Shuryak:1978ij}, at the very high temperatures and energy densities reached in ultrarelativistic Pb--Pb collisions. In addition to heavy ions, ALICE also studies proton-proton and proton-lead collision systems, to provide the baseline for A--A collisions. The LHC Run 1 results from pp and p--A collisions turned out to be as interesting as the results from A--A data, revealing surprising structures previously attributed to the hydrodynamic expansion of the QGP medium. This has triggered the still ongoing debate on the potential existence of a collective phase in small systems. The most distinctive features of the ALICE detector, allowing measurements of a wide variety of physics phenomena, are the excellent tracking and particle identification (PID) capabilities over a broad momentum range (from just a few MeV/$c$ up to more than 100 GeV/$c$). These capabilities enable studies of both ``soft'' (non-perturbative regime of Quantum Chromodynamics, QCD) and ``hard'' physics (perturbative regime of QCD). In this report we focus on selected particle correlation measurements from the soft sector of QCD, describing the bulk properties of the created systems. \section{Angular correlations} A variety of physical phenomena, like the collective behavior of the medium, conservation laws, jets, quantum statistics, or final-state interactions, result in correlations between particles in the final state. One of the most commonly used experimental techniques is the measurement of two-particle correlations in relative pseudorapidity ($\Delta\eta$) and azimuthal angle ($\Delta\varphi$) space.
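To make the correlation variables concrete, the following Python sketch fills the list of $(\Delta\eta,\Delta\varphi)$ values for all trigger--associated pairs. The names are illustrative, and the per-trigger normalization and mixed-event acceptance correction applied in real analyses are omitted; $\Delta\varphi$ is wrapped into the conventional interval $(-\pi/2, 3\pi/2)$, which places the near side at $\Delta\varphi=0$ and the away side at $\Delta\varphi=\pi$.

```python
import math

def delta_phi(phi_trig, phi_assoc):
    """Azimuthal difference wrapped into (-pi/2, 3*pi/2)."""
    dphi = phi_trig - phi_assoc
    while dphi < -math.pi / 2:
        dphi += 2 * math.pi
    while dphi >= 3 * math.pi / 2:
        dphi -= 2 * math.pi
    return dphi

def correlation_pairs(triggers, associates):
    """(deta, dphi) for every trigger-associated pair; each particle is
    given as an (eta, phi) tuple."""
    return [(eta_t - eta_a, delta_phi(phi_t, phi_a))
            for eta_t, phi_t in triggers
            for eta_a, phi_a in associates]
```

A real analysis would histogram these values and divide by the number of trigger particles and by a mixed-event distribution to correct for the detector acceptance.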
The studies typically involve different momentum ranges of particles in the pair: a ``trigger'' particle in a certain $p_{\rm T,trig}$ interval and an ``associated'' particle in a $p_{\rm T,assoc}$ interval. The final correlation is calculated as a per-trigger yield (or ``associated yield per trigger particle''). At RHIC in Au--Au collisions this type of correlation has proved to be a powerful tool to measure and study the properties of high-energy nucleus-nucleus collisions~\cite{Abelev:2009af,Adams:2005ph,Adare:2006nr,Alver:2009id}. Two distinctive features of these measurements were observed: (i) a pronounced peak around $(\Delta\eta,\Delta\varphi)=(0,0)$ originating mostly from jets, called the ``near-side peak'', and (ii) ridge-like correlation structures at $\Delta\varphi=0$ (``near side'') and $\Delta\varphi=\pi$ (``away side''), elongated over several units of rapidity, usually referred to as the ``ridge''. The ridge structure on the near side, associated with the collective behavior of the medium, has a clear dependence on the centrality of the collision. The same trends have been observed for Pb--Pb collisions at the LHC~\cite{Aamodt:2011by,Chatrchyan:2012wg}. \subsection{Double-ridge in p--Pb collisions} In small systems, like pp or p--A, the shape of the correlation is dominated by the near-side jet peak and the long-range $\Delta\varphi=\pi$ ridge from back-to-back jets. Surprisingly, in high-multiplicity pp collisions at $\sqrt{s}=2.76$~TeV, 7~TeV, and 13~TeV at the LHC a similar long-range near-side ridge structure was observed by both the CMS and ATLAS experiments~\cite{Khachatryan:2010gv,Aad:2015gqa,Khachatryan:2015lva}. Further analyses of p--Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV~\cite{CMS:2012qk,Abelev:2012ola,Aad:2014lta,Adam:2015bka} also showed these structures. Various explanations of these phenomena have been proposed, either solely based on hydrodynamics (e.g.
Refs.~\cite{Bozek:2012gr,Shuryak:2013ke,Bzdak:2013zma}), or originating from the Color Glass Condensate scenario present in the initial state (e.g. Refs.~\cite{Dusling:2012wy,Altinoluk:2015uaa}). In addition to the near-side ridge structure, the p--Pb results from ALICE revealed another surprising effect -- the presence of two similar long-range ridge-like correlations, one on the near side and one on the away side~\cite{Abelev:2012ola}. This double ridge structure can be observed if the per-trigger yield of the low-multiplicity events is subtracted from that of the high-multiplicity events. This procedure approximately removes most of the jet-induced correlations since the near-side yield does not depend on particle multiplicity. The low- and high-multiplicity per-trigger yields are shown in Fig.~\ref{fig:2angcorrs}. The result of the subtraction is shown in the left panel of Fig.~\ref{fig:2dsubt}. A small peak around $(\Delta\eta,\Delta\varphi)\approx(0,0)$ still remains; it corresponds to unsubtracted residual jet correlations. Further projections onto $\Delta\varphi$ therefore exclude the region $|\Delta\eta|<0.8$. \begin{figure}[!hbt] \centering \includegraphics[width=0.49\textwidth]{Figures/2012-Dec-13-fig1a} \hfill \includegraphics[width=0.49\textwidth]{Figures/2012-Dec-13-fig1b} \caption{\label{fig:2angcorrs} The associated per-trigger yields as a function of $\Delta\varphi$ and $\Delta\eta$ for charged-particle pairs with $2<p_{\rm T,trig}<4$~GeV/$c$ and $1<p_{\rm T,assoc}<2$~GeV/$c$ in p--Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV. Left: results for the 60--100\% event class. Right: results for the 0--20\% event class~\cite{Abelev:2012ola}.
} \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=0.39\textwidth]{Figures/2012-Dec-13-fig3a} \hfill \includegraphics[width=0.59\textwidth]{Figures/2013-Oct-30-fig3b} \caption{\label{fig:2dsubt} Left: the associated per-trigger yield as a function of $\Delta\varphi$ and $\Delta\eta$ for charged-particle pairs with $2<p_{\rm T,trig}<4$ GeV/$c$ and $1<p_{\rm T,assoc}<2$ GeV/$c$ in p--Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV, after subtraction of the associated yield of the 60--100\% event class. Right: the associated per-trigger yield after subtraction projected onto $\Delta\varphi$~\cite{Abelev:2012ola}. } \end{figure} The right panel of Fig.~\ref{fig:2dsubt} shows the projection of the subtracted per-trigger yield onto $\Delta\varphi$. A modulated signal is clearly observed. We note that a similar signal extracted from the HIJING Monte Carlo model does not show any significant modulation. The modulation effect, for different $p_{\rm T}$ intervals, can be quantified by fitting the following formula: \begin{equation} 1/N_{\rm trig} \mathrm{d} N_{\rm assoc}/\mathrm{d}\Delta\varphi = a_0 + 2\,a_2 \cos(2\Delta\varphi) + 2\,a_3 \cos(3\Delta\varphi). \label{fitfunction} \end{equation} The $v_{n}$ flow coefficients can then be extracted as \begin{equation} v_n = \sqrt{a_n / b}, \label{vn} \end{equation} where $b$ is the baseline calculated from the high-multiplicity event class. This procedure is only possible when the $p_{\rm T,trig}$ and $p_{\rm T,assoc}$ intervals are the same. The $v_2$ extracted with this procedure is denoted as $v_2\{\rm 2PC,sub\}$. For the details of the procedure we refer to Ref.~\cite{Abelev:2012ola}. A similar subtraction procedure was also applied by ALICE to identified particles (pions, kaons, and protons)~\cite{ABELEV:2013wsa}, and the double ridge structure is present as well.
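On a uniform $\Delta\varphi$ grid covering one full period, the coefficients of Eq.~\eqref{fitfunction} can be obtained by a discrete Fourier projection of the per-trigger yield, after which Eq.~\eqref{vn} gives $v_n$. The following Python fragment sketches this (illustrative names; an actual analysis fits Eq.~\eqref{fitfunction} with statistical uncertainties):

```python
import math

def fourier_coeff(dphi, yields, n):
    """Estimate a_n by projecting the per-trigger yield onto cos(n*dphi);
    dphi must be uniformly spaced bin centres covering one full 2*pi period."""
    m = len(yields)
    return sum(y * math.cos(n * d) for d, y in zip(dphi, yields)) / m

def baseline(yields):
    """a_0, the average per-trigger yield, playing the role of the baseline b."""
    return sum(yields) / len(yields)

def v_n(a_n, b):
    """Flow coefficient v_n = sqrt(a_n / b)."""
    return math.sqrt(a_n / b)
```

Because the cosines are orthogonal on a uniform full-period grid, the projection recovers the input $a_2$ and $a_3$ exactly up to rounding when the yield has the form of Eq.~\eqref{fitfunction}.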
The data allowed for the extraction of the flow $v_{2}$ coefficients as a function of $p_{\rm T}$, which are shown in Fig.~\ref{fig:v2_identified}. A clear mass ordering between pions and protons is observed, qualitatively comparable to A--A measurements~\cite{Abelev:2012di}, which at low $p_{\rm T}$ can be reproduced by models employing hydrodynamic expansion of the medium~\cite{Huovinen:2001cy, Shen:2011eg}. Currently, the observation of a long-range double ridge structure in p--Pb collisions is well established by further measurements at the LHC and in d--Au collisions at RHIC (see, e.g., Refs.~\cite{Aad:2014lta,Khachatryan:2015waa}). \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{Figures/2013-Jul-15-Fig4} \caption{\label{fig:v2_identified} The Fourier coefficient $v_2\{\rm 2PC,sub\}$ for hadrons (black squares), pions (red triangles), kaons (green stars), and protons (blue circles) as a function of $p_{\rm T}$ from the two-particle correlation for high-multiplicity collisions after the subtraction of low-multiplicity collisions~\cite{ABELEV:2013wsa}. } \end{figure} \subsection{Muon-hadron correlations} In order to get more insight into the double ridge structure in p--Pb collisions, the ALICE collaboration extended the correlation measurements to forward rapidities, taking advantage of the muon spectrometer located in the pseudorapidity range $-4<\eta<-2.5$~\cite{Adam:2015bka}. In this study, muons were correlated with tracklets\footnote{Tracklets are short track segments reconstructed only with the Silicon Pixel Detector which constitutes the two innermost layers of the Inner Tracking System.} which are measured in the central pseudorapidity region $|\eta|<1$. In this way particles with $p_{\rm T}$ as low as 50~MeV/$c$ can be detected. The measured muon sample has contributions from decays of pions and kaons, important for $p_{\rm T}<1.5$~GeV/$c$. Above 2~GeV/$c$, muons originate mostly from heavy-flavor decays.
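The event-class subtraction used throughout these analyses is, operationally, a per-bin difference of two per-trigger yields. A trivial Python sketch (illustrative names; the two event classes are assumed to use identical binning):

```python
def subtract_event_classes(y_high, y_low):
    """Per-bin subtraction of the low-multiplicity per-trigger yield from
    the high-multiplicity one; this removes the approximately
    multiplicity-independent jet-dominated component, leaving the
    ridge modulation."""
    if len(y_high) != len(y_low):
        raise ValueError("event classes must use identical binning")
    return [h - l for h, l in zip(y_high, y_low)]
```

In the real analysis the inputs are the acceptance-corrected per-trigger yields of the 0--20\% and 60--100\% event classes.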
The p--Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV were delivered by the LHC in two beam configurations: (i) the proton going towards the muon spectrometer (called ``p-going'') and (ii) the Pb ion going in the direction of the muon spectrometer (called ``Pb-going''). Both beam configurations were studied. As in the case of hadron-hadron correlations, the muon-hadron correlations were measured for the highest-multiplicity, 0--20\%, and low-multiplicity, 60--100\%, event classes. The jet contribution was reduced by subtracting the correlation calculated in the 60--100\% event class from the correlation calculated in the 0--20\% event class. The resulting correlation is shown in Fig.~\ref{fig:2dsubt_3}. \begin{figure}[!hbt] \centering \includegraphics[width=0.49\textwidth]{Figures/2015-Jul-06-subt2} \hfill \includegraphics[width=0.49\textwidth]{Figures/2015-Jul-06-subt3} \caption{\label{fig:2dsubt_3} Muon-hadron correlations in the p-going (left panel) and Pb-going (right panel) direction for high-multiplicity collisions after the subtraction of low-multiplicity collisions~\cite{Adam:2015bka}. } \end{figure} The results show a double ridge structure over 10 units of $\Delta\eta$. Projections of the correlations onto $\Delta\varphi$ are shown in Fig.~\ref{fig:muonProj}. A Fourier decomposition was applied to the projections: \begin{equation} Y_{\rm sub} = a_0 + 2\,a_2 \cos(2\Delta\varphi) + 2\,a_3 \cos(3\Delta\varphi). \label{fitfunction2} \end{equation} The second coefficient dominates in both the p-going and Pb-going directions. The result of the decomposition is also shown in Fig.~\ref{fig:muonProj}. Figure~\ref{fig:v2_muonProj} presents the extracted $v^{\mu}_{2}\{\rm 2PC,sub\}$ as a function of $p_{\rm T}$. The values are found to be $16\pm6\%$ larger for the Pb-going than for the p-going direction. The results from the AMPT model follow the trend at low $p_{\rm T}$; however, at higher $p_{\rm T}$ they underestimate the data.
The higher $v_2$ values at high $p_{\rm T}$, where muons mostly come from heavy-flavour decays, may indicate a non-zero heavy-flavour $v_2$ in the data or a different particle composition in this $p_{\rm T}$ region. \begin{figure}[!hbt] \centering \includegraphics[width=0.49\textwidth]{Figures/2015-Jul-06-proj2} \hfill \includegraphics[width=0.49\textwidth]{Figures/2015-Jul-06-proj3} \caption{\label{fig:muonProj} Projections of muon-hadron correlations to $\Delta\varphi$ in p-going (left panel) and Pb-going (right panel) direction for high-multiplicity collisions after the subtraction of low-multiplicity collisions. The lines indicate the first three Fourier components of the distribution~\cite{Adam:2015bka}. } \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=0.8\textwidth]{Figures/2015-Jul-06-v2_final} \caption{\label{fig:v2_muonProj} $v^{\mu}_{2}\{\rm 2PC,sub\}$ coefficient extracted from muon-hadron correlations after low-multiplicity subtraction (for details see text). The result from the p-going direction is shown by open symbols, while filled symbols are for Pb-going direction. The result is compared to AMPT predictions~\cite{Adam:2015bka}. } \end{figure} \subsection{Correlations of identified particles in pp collisions} The correlation analysis in $\Delta\eta$ and $\Delta\varphi$ of identified particles (pions, kaons, and protons) was also performed in pp collisions at $\sqrt{s}=7$~TeV. The measured correlation functions for like- and unlike-sign pairs are presented in Figs.~\ref{fig:ppUnlike} and~\ref{fig:ppLike}. All correlation functions except those for like-sign proton pairs show the typical near- and away-side structures. While the detailed discussion of the results is available in Ref.~\cite{Graczykowski:2014eqa}, here we would like to focus on the most surprising result -- a wide depression around $(\Delta\eta,\Delta\varphi)=(0,0)$ for like-sign proton pairs.
We note that this effect is strictly limited to the baryon-baryon (or antibaryon-antibaryon) scenario. In the case of proton-antiproton correlations the near-side peak is present. The following conclusion can be drawn from this observation: baryons are produced in mini-jet fragmentation; however, the production of more than one baryon--antibaryon pair is strongly suppressed. A similar analysis performed on Monte Carlo data~\cite{Graczykowski:2014eqa} does not reproduce the ALICE results. Therefore, the mechanism producing this suppression needs further investigation. \begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{Figures/2014-Aug-25-PiKP_noPtDepUnlike} \caption{\label{fig:ppUnlike} Correlation functions for unlike-sign pairs of protons (left), kaons (middle) and pions (right) for pp at $\sqrt{s}=7$~TeV data. } \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{Figures/2014-Aug-25-PiKP_noPtDepLike} \caption{\label{fig:ppLike} Correlation functions for like-sign pairs of protons (left), kaons (middle) and pions (right) for pp at $\sqrt{s}=7$~TeV data. } \end{figure} \section{Femtoscopy} \subsection{Three-dimensional pion femtoscopy} A technique used to measure the volume of the particle-emitting region at freeze-out is femtoscopy~\cite{Lednicky:2005af,Lisa:2005dd}. In particular, two-pion correlations at low relative momentum $k^{\ast}$ (commonly referred to as Hanbury Brown--Twiss, or ``HBT'', correlations) have been developed into a precision tool which can be used to extract detailed information about the system size and its dependence on event multiplicity and pair transverse momentum, $k_{\rm T}$.
Femtoscopy, in general, measures the width of the distribution of relative separation between the emission points of two particles, which is conventionally referred to as the ``radius parameter'' (or the ``HBT radius''), and can be evaluated in three dimensions: \emph{long} along the beam axis, \emph{out} along the pair transverse momentum, and \emph{side} perpendicular to the other two. Moreover, the femtoscopic results are usually interpreted within the hydrodynamic framework as a signature of collective behavior of the strongly interacting medium and provide crucial constraints on the phase transition to hadronic matter. ALICE results on three-dimensional pion femtoscopy in pp, p--Pb and Pb--Pb collisions can be found in~\cite{Aamodt:2011mr,Aamodt:2010jj,Aamodt:2011kd,Adam:2015pya,Adam:2015vna}. In heavy-ion collisions two clear trends can be observed: (i) all three radii scale approximately linearly with the cube root of the charged particle multiplicity density at midrapidity, $\langle {\rm d}N_{\rm ch}/{\rm d}\eta \rangle^{1/3}$, and (ii) they decrease with pair transverse momentum. The ALICE results from Pb--Pb data at $\sqrt{s_{\rm NN}}=2.76$~TeV for different centrality and $k_{\rm T}$ ranges~\cite{Adam:2015vna}, showing both trends, are presented in Fig.~\ref{fig:centralityFemtoPbPb}. \begin{figure}[!hbt] \centering \includegraphics[width=0.49\textwidth]{Figures/2016-Jan-12-canfvalktdep4p} \hfill \includegraphics[width=0.49\textwidth]{Figures/2016-Jan-12-radscalemult} \caption{\label{fig:centralityFemtoPbPb} Left: femtoscopic radii for seven centrality ranges shown as a function of pair transverse momentum $k_{\rm T}$. Right: femtoscopic radii shown as a function of the cube root of charged particle multiplicity density. For better visibility some points were shifted in the $x$ direction~\cite{Adam:2015vna}.
} \end{figure} It has been argued in Ref.~\cite{Lisa:2005dd} that the three-dimensional radii scale with the cube root of the multiplicity density not only for a single collision energy, but also across different energies and colliding systems. Indeed, one can clearly see (Fig.~\ref{fig:WorldDataFemto}) significantly different scaling between A--A and pp systems. The p--Pb radii tend to agree with those in pp at low multiplicities and start to diverge for increasing multiplicities. This finding is confirmed by the three-pion cumulant correlation analysis performed in all three systems~\cite{Abelev:2014pja}. \begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{Figures/WorldData_kT0_pPb-12604} \caption{\label{fig:WorldDataFemto} Femtoscopic radii from various collision systems and energies as a function of cube root of the measured charged-particle multiplicity density~\cite{Adam:2015pya}. } \end{figure} \subsection{$\mathrm{K}^0_{\rm S}\mathrm{K}^{\pm}$ femtoscopy} The femtoscopic formalism is not limited to pions only. Recently, results of identical-kaon (neutral and charged) femtoscopy have been published by the STAR Collaboration for Au--Au collisions at $\sqrt{s_{\rm NN}}=0.2$~TeV~\cite{Abelev:2006gu} as well as for pp data at $\sqrt{s}=7$~TeV and Pb--Pb collisions at $\sqrt{s_{\rm NN}}=2.76$~TeV by the ALICE Collaboration~\cite{Abelev:2012ms,Abelev:2012sq,Adam:2015vja}. The correlation function is a result of the interplay of the following phenomena: quantum statistics (for $\mathrm{K}^{\pm}\mathrm{K}^{\pm}$ and $\mathrm{K}^0_{\rm S}\mathrm{K}^0_{\rm S}$), Coulomb interaction ($\mathrm{K}^{\pm}\mathrm{K}^{\pm}$), and the final-state interaction through the $f_0(980)/a_0(980)$ threshold resonances (for $\mathrm{K}^0_{\rm S}\mathrm{K}^0_{\rm S}$). In addition to the identical-kaon systems, $\mathrm{K}^0_{\rm S}\mathrm{K}^{\pm}$ correlations can also be considered, though no such measurements have been performed before this study.
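For the identical-particle pairs listed above, the quantum-statistics contribution produces a structure in the correlation function at low relative momentum that is commonly parametrized by a Gaussian, $C(q)=1+\lambda e^{-R^2q^2}$. The following numpy sketch illustrates how $R$ and $\lambda$ can be read off such a shape from synthetic input (an illustration only; the analyses cited here use the full fitting formalism, including final-state interactions):

```python
import numpy as np

# Synthetic 1D Gaussian correlation function C(q) = 1 + lam * exp(-(R q)^2);
# illustrative source parameters, with q and 1/R in the same arbitrary units.
R_in, lam_in = 5.0, 0.5
q = np.linspace(0.01, 0.5, 100)
C = 1.0 + lam_in * np.exp(-(R_in * q) ** 2)

# Linearize: ln(C - 1) = ln(lam) - R^2 q^2, then a straight-line fit in q^2.
slope, intercept = np.polyfit(q ** 2, np.log(C - 1.0), 1)
R_out = np.sqrt(-slope)
lam_out = np.exp(intercept)
print(R_out, lam_out)  # recovers the input parameters
```

In the real analyses the radii are extracted in three dimensions (out, side, long), and for the kaon systems above the strong final-state interaction must be included in the fit.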
For these correlations, in addition to the trivial elastic scattering channel, the only allowed final-state pair-wise interaction proceeds through the $a_0(980)$ resonance\footnote{The $\mathrm{K}^0_{\rm S}\mathrm{K}^{\pm}$ pair is in an $I=1$ isospin state, as is the $a_0$, whereas the $f_0$ is in an $I=0$ state, so the isospin would not be conserved.}. Another property of the $\mathrm{K}^0_{\rm S}\mathrm{K}^{\pm}$ interaction through the $a_0$ resonance is the fact that the $a_0$ has strangeness $S=0$. The $\mathrm{K}^0_{\rm S}$ state is a linear combination of $\mathrm{K}^0$ and $\mathrm{\bar{K^0}}$ states. In order to conserve strangeness only the $\mathrm{\bar{K^0}K^+}$ pair from $\mathrm{K^0_SK^+}$ and the $\mathrm{K^0K^-}$ pair from $\mathrm{K^0_SK^-}$ can form the $a_0$. This feature makes it possible to study the $\rm K^0$ and $\rm \bar{K^0}$ sources separately. In addition to the above possibilities, the $\mathrm{K}^0_{\rm S}\mathrm{K}^{\pm}$ final-state interaction allows the study of the properties of the $a_0$ itself. Interest in the $a_0$ comes from the fact that many papers in the literature discuss the possible scenario that the $a_0$ resonance could be a 4-quark state, i.e. a tetraquark, or a ``$\rm \bar{K}-K$ molecule''~\cite{Martin:1976vx,Antonelli:2002ip,Achasov:2001cj,Achasov:2002ir}. Figure~\ref{fig:K0sK+-} shows examples of $\mathrm{K^0_S}\mathrm{K^+}$ and $\mathrm{K^0_S}\mathrm{K^-}$ correlation functions with Lednicky fits~\cite{Lednicky:1981su,Bekele:2007zza} using the ``Achasov2''~\cite{Achasov:2001cj} parameters of the $a_0$. The main feature of the femtoscopic correlation function can be observed: the suppression caused by the strong final-state interactions for small $k^{\ast}$. From this plot we can conclude that the $a_0$ final-state interaction gives an excellent representation of the data, i.e. the suppression of the correlation functions in the $k^{\ast}$ range up to $0.15$~GeV/$c$.
\begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{Figures/2015-Oct-29-Callfit_al_gc} \caption{\label{fig:K0sK+-} Examples of $\mathrm{K^0_S}\mathrm{K^+}$ and $\mathrm{K^0_S}\mathrm{K^-}$ correlation functions and fit with the Lednicky parametrization using ``Achasov2''~\cite{Achasov:2001cj} parameters. } \end{figure} The results of the fit, $R$ (the size of the kaon source) and $\lambda$ (the strength of the correlation) parameters, for all considered $a_0$ parameterizations (in decreasing order of the $a_0$ parameters: ``Achasov2''~\cite{Achasov:2002ir}, ``Achasov1''~\cite{Achasov:2001cj}, ``Antonelli''~\cite{Antonelli:2002ip}, and ``Martin''~\cite{Martin:1976vx}) are presented in Figs.~\ref{fig:RK0sK+-} and~\ref{fig:LamK0sK+-}. Since the $\mathrm{K^0_S}\mathrm{K}^{+}$ and $\mathrm{K^0_S}\mathrm{K}^{-}$ results are consistent with each other, both parameters shown are calculated as their average. The comparison of the radius parameter with the identical kaon results shows clear agreement for the ``Achasov1'', ``Achasov2'', and ``Antonelli'' parameterizations of the $a_0$ resonance. This is expected from the fact that both the $\mathrm{K}^{0}_{\rm S}\mathrm{K}^{0}_{\rm S}$ and $\mathrm{K}^{\pm}\mathrm{K}^{\pm}$ pair combinations probe a similar source geometry and there is no reason for $\mathrm{K}^{0}_{\rm S}\mathrm{K}^{\pm}$ to be different. A clear discrepancy is visible for ``Martin'', which corresponds to the lower values of the $a_0$ parameters. Therefore, the higher values of the $a_0$ parameters are favored. The $\lambda$ parameters of the identical kaon results also agree with $\mathrm{K}^{0}_{\rm S}\mathrm{K}^{\pm}$. This is consistent with the assumption of 100\% of the final-state interaction going through the $a_0$ resonance channel.
\begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{Figures/2015-Oct-29-Ralld2_al_gc} \caption{\label{fig:RK0sK+-} $R$ fit parameters from the averaged $\mathrm{K}^{0}_{\rm S}\mathrm{K}^{\pm}$ analysis compared to identical kaon femtoscopy from ALICE~\cite{Adam:2015vja}. } \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width=\textwidth]{Figures/2016-Apr-04-fig3_prelim_gc} \caption{\label{fig:LamK0sK+-} $\lambda$ fit parameters from the averaged $\mathrm{K}^{0}_{\rm S}\mathrm{K}^{\pm}$ analysis compared to identical kaon femtoscopy from ALICE~\cite{Adam:2015vja}. } \end{figure} \section{Conclusions} Several recent correlation results from ALICE have been presented. The angular correlation analysis in p--Pb collisions revealed the existence of a double ridge structure, similar to the one observed in A--A data and usually interpreted as a signature of collectivity. The measurements were extended with correlations at forward rapidities thanks to the ALICE muon spectrometer. Similar correlation studies in pp reveal a surprising anti-correlation structure for identical proton pairs, which is not reproduced by existing models. This result suggests a strong influence of local conservation laws on the shape of the observed correlation. The pion femtoscopic analysis in Pb--Pb shows clear multiplicity and pair transverse momentum scalings for all three radii. Comparison of A--A, p--A, and pp radii as a function of the cube root of the charged particle multiplicity density across various experiments and collision energies shows a universal trend for A--A data, different from the one observed in pp collisions. The p--Pb results from ALICE tend to agree with those of pp at low multiplicities and start to diverge as the multiplicity increases. \\ The author wishes to acknowledge the financial support of the Polish National Science Centre under decisions no. 2013/08/M/ST2/00598 and no. UMO-2014/13/B/ST2/04054. \bibliographystyle{h-physrev}
\section{Idempotents and Cellular Algebras}\label{sec:cellular} \subsection{Definitions} We review the definition of, and basic results from cellular algebra theory. \begin{definition}\cite[1.1]{graham_lehrer_1996} A $k$-algebra $A$ is \emph{cellular} with \emph{cell data} $(\Lambda, M, C, \iota)$ iff \begin{enumerate}[(C1)] \item The finite set $\Lambda$ is partially ordered and to each $\lambda \in \Lambda$ we have a non-empty finite set of ``$\lambda$-tableaux'' or ``$\lambda$-diagrams'', $M(\lambda)$. \item The set $\{C^\lambda_{S,T} \;:\; \lambda \in \Lambda;\; S,T\in M(\lambda)\}\subseteq A$ forms a $k$-basis of $A$. \item The map $\iota$ is a $k$-linear anti-automorphism of $A$ with $\iota^2 = {\operatorname{id}}$ and $\iota C^{\lambda}_{S,T} = C^\lambda_{T,S}$. \item For each $\lambda \in \Lambda$, $S,T \in M(\lambda)$, and $a \in A$, there is a map $r_a : M(\lambda)\times M(\lambda) \to k$ such that \begin{equation*} aC^\lambda_{S,T} = \sum_{U \in M (\lambda)} r_a(S,U)C^\lambda_{U,T} \mod A^{<\lambda} \end{equation*} where $A^{<\lambda}$ is the linear span of all $C^\mu_{V,W}$ with $\mu < \lambda$. \end{enumerate} \end{definition} We will write $A^{\le \lambda}$ to denote the linear span of all $C^{\mu}_{V,W}$ with $\mu \le \lambda$ in the natural way. We will refer to $k$-linear anti-automorphisms that square to the identity as involutions. \begin{proposition}\cite[1.5]{graham_lehrer_1996}\label{prop:a_lambda_ideal} The linear spaces $A^{\le\lambda}$ and $A^{<\lambda}$ are two sided ideals of $A$ fixed by $\iota$. \end{proposition} From this, and the definitions, it is clear that $A / A^{\le \lambda}$ is cellular with identical cell data to $A$, except that $\Lambda$ is restricted to elements not less than or equal to $\lambda$.
\begin{proposition}\cite[2.2]{graham_lehrer_1996}\label{prop:a_lambda_decomp} As a left module, $A^{\le \lambda}/A^{<\lambda}$ is the direct sum of copies of a module denoted $\Delta(\lambda)$ which has basis $C^\lambda_{S,T}$ as $S$ varies and $T$ is fixed. \end{proposition} We may write this last condition as \begin{equation} A^{\le\lambda}/A^{<\lambda} \simeq \bigoplus_{T \in M(\lambda)} \Delta(\lambda) \end{equation} where $\Delta(\lambda)$ is simply the $k$-span of $M(\lambda)$ and the action of $a\in A$ on $S \in M(\lambda)$ is given (from (C4)) by \begin{equation} a \cdot S = \sum_{U \in M(\lambda)} r_a(S,U) U. \end{equation} These modules, $\Delta(\lambda)$, are known as cell modules or standard modules and are crucial in what follows. An alternative definition is in terms of cell ideals. \begin{definition}\cite[Definition 3.2]{konig_xi_1998} Let $A$ be a $k$-algebra for $k$ a commutative Noetherian domain equipped with involution $\iota$. A two sided ideal $J\subseteq A$ is called a \emph{cell ideal} iff it is fixed by $\iota$ and there is a left ideal $\Delta\subseteq J$ such that $\Delta$ is finitely generated and free over $k$ and $J\simeq \Delta\otimes_k \iota\Delta$ such that the action of $\iota$ sends $x\otimes y \mapsto \iota y \otimes \iota x$. \end{definition} The definition of a cell ideal is motivated as follows. Let $\lambda\in\Lambda$ be minimal, so that $A^{<\lambda} = 0$ and $A^\lambda = A^{\le \lambda}$ is isomorphic to $\Delta(\lambda)\otimes \iota\Delta(\lambda)$. Then $A^\lambda$ is certainly fixed by $\iota$ and the involution acts as we expect it to if the isomorphism is given by $S\otimes \iota T \mapsto C^\lambda_{S,T}$. Hence $A^\lambda$ is a cell ideal.
\begin{definition}\cite[Definition 3.2]{konig_xi_1998} The algebra $A$ is called \emph{cellular} iff there is a $k$-module decomposition $A = J_1'\oplus\cdots\oplus J_n'$ with each $J_i'$ fixed by $\iota$ such that if $J_j = \oplus_{\ell = 1}^j J_\ell'$, we have a chain of two sided ideals \begin{equation*} 0 = J_0 \subset J_1 \subset\cdots \subset J_n = A \end{equation*} such that $J_j / J_{j-1}$ is a cell ideal of $A/J_{j-1}$. \end{definition} The arguments succeeding \cref{prop:a_lambda_ideal,prop:a_lambda_decomp} show that the first definition of a cell ideal implies the second definition. It is not too hard to show the converse. \subsection{Cell modules} Let us return to the important modules $\Delta(\lambda)$. We know from (C4) that (modulo $A^{<\lambda}$), \begin{equation} C^\lambda_{V,W}aC^\lambda_{S,T} = \sum_{U \in M(\lambda)} r_{ C^\lambda_{V,W}a}(S,U) C^\lambda_{U,T}. \end{equation} Similarly \begin{equation} C^\lambda_{T,S}\iota aC^\lambda_{W,V} = \sum_{U \in M(\lambda)} r_{ C^\lambda_{T,S}\iota a}(W,U) C^\lambda_{U,V}. \end{equation} But these equations are simply involutions of each other so in particular, $U=V$ is the only term to survive in the first equation and $U=T$ the only in the second. We may rewrite this as asserting that there is a map $\phi_a : M(\lambda)\times M(\lambda) \to k$ such that \begin{equation}\label{eq:cac_multiply} C^\lambda_{V,W}aC^\lambda_{S,T} = \phi_a(W,S) C^\lambda_{V,T} \mod A^{<\lambda}. \end{equation} A crucial r\^ole is then played by the bilinear form $\langle -,- \rangle_\lambda:\Delta(\lambda)\times\Delta(\lambda) \to k$ defined to be the linear extension of $\phi_{\operatorname{id}}$. When $\lambda$ is understood, it will be dropped from the notation. Let $R(\lambda)\subseteq \Delta(\lambda)$ be the radical of the bilinear form on $\Delta(\lambda)$ and $L(\lambda) = \Delta(\lambda) / R(\lambda)$. We will denote by $\Lambda_0\subseteq\Lambda$ the set of all $\lambda$ such that $L(\lambda) \neq 0$.
\begin{lemma}\cite[3.4]{graham_lehrer_1996} If $k$ is a field, then the set $\{L(\lambda) \;:\; \lambda\in \Lambda_0\}$ is a complete set of pairwise non-isomorphic absolutely irreducible modules for $A$. \end{lemma} Fix $T \in M(\lambda)$. Then $\operatorname{span} \{C^\lambda_{S,T} \;:\; S\in M(\lambda)\} = \Delta(\lambda) \otimes \iota T$ is a left ideal of $A^{\le \lambda}/A^{<\lambda}$. It lifts to the left ideal generated by $\{C^\lambda_{S,T} \;:\; S\in M(\lambda)\}$ of $A$, which we denote $D(\lambda)$. Note that it is not always true that $D(\lambda) \supseteq A^{<\lambda}$. \begin{lemma}\label{lem:linear_comm} The algebra $A$ is commutative if $\dim \Delta(\lambda) = 1$ for all $\lambda \in \Lambda$. \end{lemma} \begin{proof} The algebra $A$ has an involution-fixed $k$-basis since $C^\lambda_{x,x} = \iota C^\lambda_{x,x}$ and so $\iota = {\operatorname{id}}$. But then $ab = \iota(ab) = (\iota b)(\iota a) = b a$. \end{proof} \begin{example}\label{eg:cellular_ks3} If $k$ is any field and $\mathfrak{S}_3$ is generated by the transpositions $\sigma_1 = (12)$ and $\sigma_2 = (23)$, then $k\mathfrak{S}_3$ is cellular with involution given by the linear extension of the inverse operator on $\mathfrak{S}_3$. The set $\Lambda$ is the set of three partitions: \begin{equation} \Yboxdim{5pt} \Yvcentermath1 \Lambda = \left\{ \yng(3)\;>\; \yng(2,1)\;>\; \yng(1^3) \right\} \end{equation} and the sets $M(\lambda)$ consist of standard tableaux: \begin{equation} \Yboxdim{5pt} \Yvcentermath1 M\left(\yng(3)\right) \Yboxdim{9pt} = \left\{ \young(123) \right\}\quad\quad \Yboxdim{5pt} M\left(\yng(2,1)\right) \Yboxdim{9pt} = \left\{ \young(12,3), \young(13,2) \right\}\quad\quad \Yboxdim{5pt} M\left(\yng(1^3)\right) \Yboxdim{9pt} = \left\{ \young(1,2,3) \right\} \end{equation} We will denote these by $\mathbf{123}$, $\mathbf{12}$, $\mathbf{13}$ and $\mathbf{1}$ respectively, representing their first rows.
A possible cellular basis (there are multiple) is given by \begin{align*} \Yboxdim{3pt} \Yvcentermath1 C^{\yng(1^3)}_{\mathbf{1},\mathbf{1}} &= \sigma_1\sigma_2\sigma_1 + \sigma_1\sigma_2 + \sigma_2\sigma_1 + \sigma_1 + \sigma_2 + {\operatorname{id}} & \Yboxdim{3pt} \Yvcentermath1 C^{\yng(3)}_{\mathbf{123},\mathbf{123}} &= {\operatorname{id}} \\ \Yboxdim{3pt} \Yvcentermath1 C^{\yng(2,1)}_{\mathbf{12},\mathbf{12}} &= \sigma_2 + {\operatorname{id}} & \Yboxdim{3pt} \Yvcentermath1 C^{\yng(2,1)}_{\mathbf{12},\mathbf{13}} &= \sigma_2\sigma_1 + \sigma_2 + \sigma_1 + {\operatorname{id}}\\ \Yboxdim{3pt} \Yvcentermath1 C^{\yng(2,1)}_{\mathbf{13},\mathbf{13}} &= \sigma_1 + {\operatorname{id}} & \Yboxdim{3pt} \Yvcentermath1 C^{\yng(2,1)}_{\mathbf{13},\mathbf{12}} &= \sigma_1\sigma_2 + \sigma_1 + \sigma_2 + {\operatorname{id}}, \end{align*} and this satisfies the involutive property. From this we can calculate $\Yboxdim{3pt}\Yvcentermath1 C^{\yng(1^3)}_{\mathbf{1},\mathbf{1}}\cdot C^{\yng(1^3)}_{\mathbf{1},\mathbf{1}} = 6 C^{\yng(1^3)}_{\mathbf{1},\mathbf{1}}$ from which $\langle \mathbf{1},\mathbf{1}\rangle = 6$ and similarly $\Yboxdim{3pt}\Yvcentermath1 C^{\yng(2,1)}_{\mathbf{12},\mathbf{12}}\cdot C^{\yng(2,1)}_{\mathbf{13},\mathbf{12}} = C^{\yng(1^3)}_{\mathbf{1},\mathbf{1}} + C^{\yng(2,1)}_{\mathbf{12},\mathbf{12}}$ so $\langle\mathbf{12},\mathbf{13}\rangle =1$. Using these calculations we find that in the given basis, \begin{equation}\Yboxdim{3pt} \langle-,-\rangle_{\yng(1^3)} = \begin{pmatrix}6\end{pmatrix}\quad\quad\quad \langle-,-\rangle_{\yng(2,1)} = \begin{pmatrix}2&1\\1&2\end{pmatrix}\quad\quad\quad \langle-,-\rangle_{\yng(3)} = \begin{pmatrix}1\end{pmatrix}. \end{equation} Thus if $\operatorname{char} k\not\in\{2,3\}$ then $\Lambda_0 = \Lambda$ and otherwise $\Yboxdim{3pt}\Yvcentermath1\Lambda_0 = \Lambda\setminus\left\{\yng(1^3)\right\}$. In all cases $\Delta(\smallyng{5pt}{3})$ is irreducible as $R(\smallyng{5pt}{3}) = 0$. It is the linear, sign representation. 
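These products can be verified mechanically. A small Python sketch of the integral group algebra of $\mathfrak{S}_3$ (the encoding of the permutations and the helper names are ours):

```python
from collections import Counter

# Permutations of {0,1,2} as tuples, composed by (p*q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def mul(x, y):
    """Product in Z[S_3]; elements are Counters {permutation: coefficient}."""
    out = Counter()
    for p, a in x.items():
        for q, b in y.items():
            out[compose(p, q)] += a * b
    return Counter({g: c for g, c in out.items() if c})

def add(x, y):
    out = Counter(x)
    for g, c in y.items():
        out[g] += c
    return Counter({g: c for g, c in out.items() if c})

def scale(c, x):
    return Counter({g: c * v for g, v in x.items()})

one, s1, s2 = (0, 1, 2), (1, 0, 2), (0, 2, 1)
s1s2, s2s1 = compose(s1, s2), compose(s2, s1)
w0 = compose(s1, s2s1)  # longest element

# Cellular basis elements from the example above.
C_111 = Counter({one: 1, s1: 1, s2: 1, s1s2: 1, s2s1: 1, w0: 1})
C_12_12 = Counter({s2: 1, one: 1})
C_13_12 = Counter({s1s2: 1, s1: 1, s2: 1, one: 1})

# C * C = 6 C for the bottom cell, giving <1,1> = 6.
assert mul(C_111, C_111) == scale(6, C_111)
# C_{12,12} C_{13,12} = C^{(1^3)} + C_{12,12}, giving <12,13> = 1.
assert mul(C_12_12, C_13_12) == add(C_111, C_12_12)
# C_{12,12}^2 = 2 C_{12,12}, giving <12,12> = 2.
assert mul(C_12_12, C_12_12) == scale(2, C_12_12)
```

The Gram determinants $\det\begin{pmatrix}2&1\\1&2\end{pmatrix} = 3$ and $(6)$ then vanish exactly in characteristic $3$, respectively $2$ and $3$, matching the description of $\Lambda_0$ above.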
Similarly $\Delta(\smallyng{5pt}{1^3})$ is the trivial representation. If $\operatorname{char} k = 2$ then $\Lambda_0 = \{\smallyng{5pt}{2,1},\smallyng{5pt}{3}\}$ and $\Delta(\smallyng{5pt}{2,1})$ is the two-dimensional irreducible. Notice then that the sign representation $\Delta(\smallyng{5pt}{3})$ is actually the trivial representation and is isomorphic to $\Delta(\smallyng{5pt}{1^3})$. On the other hand if $\operatorname{char} k = 3$ then $\Lambda_0$ is also $ \{\smallyng{5pt}{2,1},\smallyng{5pt}{3}\}$ but now the trivial module arises as the simple head of $\Delta(\smallyng{5pt}{2,1})$. There is in fact a short exact sequence \begin{equation} 0\to\Delta(\smallyng{5pt}{3}) \to \Delta(\smallyng{5pt}{2,1}) \to \Delta(\smallyng{5pt}{1^3})\to 0. \end{equation} \end{example} \subsection{The Cellular Data of a Hecke Algebra}\label{sec:cell_hecke} Let $A$ be a cellular algebra with cell data $(\Lambda, M, C, \iota)$ and let $e \in A$ be an idempotent fixed by $\iota$. We will show that $eAe = \widetilde A$ is cellular and construct cellular data $(\widetilde \Lambda, \widetilde M, \widetilde C, \iota)$. We will have $\widetilde\Lambda \subseteq \Lambda$ and $\widetilde M(\lambda) \subseteq M(\lambda)$. If $\widetilde M(\lambda) = \emptyset$ we will exclude $\lambda$ from $\widetilde\Lambda$, thus satisfying (C1). As a first step, let $\widetilde C^\lambda_{S,T} = eC^\lambda_{S,T}e$. Then, $\iota$ is an involution of $eAe$ such that $\iota \widetilde C^\lambda_{S,T} = \widetilde C^\lambda_{T,S}$. Hence (C3) holds. It is clear that as $C^\lambda_{S,T}$ span $A$, the $\widetilde C^\lambda_{S,T}$ span $\widetilde A$. Further, $\widetilde A^{\le \lambda} \subseteq \widetilde A \cap A ^{\le\lambda}$ and if $x \in \widetilde A \cap A^{\le\lambda}$ is written in terms of $C^\mu_{S,T}$, then since it is fixed by left and right multiplication by $e$, we see the reverse inclusion holds too.
Now note that, for any $eae \in \widetilde A$, \begin{align} eae\, \widetilde C^\lambda_{S,T} &= \sum_{U\in M(\lambda)} e\,r_{ae}(S,U)\,C^\lambda_{U,T}e \mod A^{<\lambda}\nonumber\\ &= \sum_{U\in M(\lambda)} r_{ae}(S,U)\widetilde C^\lambda_{U,T}\mod A^{<\lambda} \end{align} and so (C4) is satisfied too, if we set $\widetilde r_{eae}(-,-)$ to be $r_{ae}(-,-)$. The final constraint to show is that the $\widetilde C^{\lambda}_{S,T}$ form a $k$-basis of $\widetilde A$. Here is where we remove elements of $M(\lambda)$ to form $\widetilde M(\lambda)$ and, should the result be empty, omit $\lambda$ from $\widetilde\Lambda$. Clearly by induction it will suffice to show that $\widetilde C^\lambda_{S,T}$ is a basis of $\widetilde A^{\le \lambda}$ for minimal $\lambda\in\widetilde\Lambda$. On the other hand, since $\widetilde A^{\le\lambda} = \widetilde A \cap A^{\le\lambda}$, we will require that $\widetilde C^\lambda_{S,T}$ is a basis of $\widetilde A^{\le \lambda}$ for minimal $\lambda$. Thus this is a necessary and sufficient condition. Equivalently, we require that $\{e\cdot S \;:\; S \in \widetilde M(\lambda)\}$ is a linearly independent set in $\Delta(\lambda)$ for each $\lambda$ (it is already spanning). In fact, the preceding remark makes clear that we can pick any subset of $M(\lambda)$ for $\widetilde M(\lambda)$, as long as $\{e\cdot S \;:\; S\in\widetilde M(\lambda)\}$ is a basis of $e\Delta(\lambda)$. Let $N^\Delta_e(\lambda) = \{ S \in M(\lambda) \;:\; e \cdot S = 0\}$ be those tableaux killed by $e$ in $\Delta(\lambda)$ and $N^D_e(\lambda) = \{ S\in M(\lambda) \;:\; e \cdot C^\lambda_{S,T} = 0\}$ in $D(\lambda)$. Clearly $\widetilde M(\lambda) \subseteq M(\lambda) \setminus N^\Delta_e(\lambda)$ and $N^{D}_e(\lambda) \subseteq N^{\Delta}_e(\lambda)$. \begin{definition}\label{def:generous} We will say that the idempotent $e\in A$ is \emph{generous} if $\{e\cdot S \;:\; S \in M(\lambda) \setminus N^\Delta_e(\lambda)\}$ forms a basis for $e\Delta(\lambda)$ for each $\lambda \in \Lambda$.
It will be termed \emph{lavish} if $N^D_e(\lambda) = N^\Delta_e(\lambda)$ for each $\lambda$. \end{definition} Notice that even a generous idempotent does not necessarily have $\widetilde \Lambda = \Lambda$. The above analysis makes it clear that $\widetilde\Delta(\lambda) = e\Delta(\lambda)$. Further, by making the substitution $a \to e$ and multiplying \cref{eq:cac_multiply} by $e$ on both sides, we see that the bilinear form on $\widetilde\Delta(\lambda)$ is exactly the restriction of the bilinear form on $\Delta(\lambda)$. Now, using that $\langle ex, z \rangle = \langle x, ez \rangle$ (as $\iota e = e$), \begin{align}\nonumber ey \in \operatorname{rad} \widetilde \Delta(\lambda) &\iff \langle ey, ez \rangle = 0 \quad\forall ez \in \widetilde \Delta(\lambda)\\\nonumber &\iff \langle ey, z \rangle = 0 \quad\forall z \in \Delta(\lambda)\\ &\iff ey \in \operatorname{rad} \Delta(\lambda) \end{align} Hence $\operatorname{rad} \widetilde\Delta(\lambda) = e\Delta(\lambda)\cap \operatorname{rad} \Delta(\lambda)$. However, since $e$ acts as identity on $\operatorname{rad} \widetilde\Delta(\lambda)$ it must be that \begin{equation}\label{eq:e_rad_is_rad} \operatorname{rad} \widetilde\Delta(\lambda) = e \operatorname{rad} \Delta(\lambda). \end{equation} Recall that the functor ${\rm res}_e : \cmod{A} \to \cmod{\widetilde A}$ sending $M \mapsto eM$ and restricting morphisms is exact. Hence we have the exact sequence \begin{equation}\label{eq:exact_restrict_delta} 0 \to \operatorname{rad} \widetilde\Delta(\lambda) = e\operatorname{rad} \Delta(\lambda) \to \widetilde\Delta(\lambda) \to \widetilde L(\lambda) = eL(\lambda) \to 0 \end{equation} giving that $\{e L(\lambda) : \lambda \in \widetilde \Lambda_0\}$ is a complete set of irreducible modules for $\widetilde A$ (the only thing to prove here was that $e L(\lambda) = \widetilde L(\lambda)$). \begin{corollary}\label{cor:semisimple} If $A$ is semi-simple, then $\widetilde A$ is semi-simple. \end{corollary} \begin{proof} The algebra $A$ (resp. $\widetilde A$) is semi-simple iff $\Delta(m)$ (resp.
$\widetilde\Delta(m)$) is simple for all $m$. That is to say, $\operatorname{rad}\Delta(m)$ (resp. $\operatorname{rad}\widetilde\Delta(m)$) is zero. \end{proof} \begin{example} Recall \cref{eg:cellular_ks3} and suppose $k$ has characteristic 3. Let $e$ be the idempotent $-\sigma_1 - {\operatorname{id}}$. Then $e\cdot \mathbf{1} = \mathbf{1}$ as $e\cdot C^{\smallyng{3pt}{1^3}}_{\mathbf{1},\mathbf{1}} = C^{\smallyng{3pt}{1^3}}_{\mathbf{1},\mathbf{1}}$. On the other hand, in the module $\Delta(\smallyng{5pt}{3})$, we have $e\cdot \mathbf{123} = 0$ as $e\cdot C^{\smallyng{3pt}{3}}_{\mathbf{123},\mathbf{123}} = e = -C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{13}}$. Similar calculations show that \begin{align*} e \cdot C^{\smallyng{3pt}{2,1}}_{\mathbf{12},\mathbf{12}} &= -C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{12}}& e \cdot C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{13}} &= C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{13}}\\ e \cdot C^{\smallyng{3pt}{2,1}}_{\mathbf{12},\mathbf{13}} &= -C^{\smallyng{3pt}{1^3}}_{\mathbf{1},\mathbf{1}} - C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{13}}& e \cdot C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{12}} &= C^{\smallyng{3pt}{2,1}}_{\mathbf{13},\mathbf{12}} \end{align*} from which we deduce that $e\cdot\mathbf{13} = \mathbf{13}$ and $e\cdot\mathbf{12} = -\mathbf{13}$ in $\Delta(\smallyng{5pt}{2,1})$. It is clear then that an appropriate choice of $\widetilde M(\smallyng{5pt}{2,1})$ is $\{\mathbf{13}\}$. However, this idempotent is not generous since $N^\Delta_e(\smallyng{5pt}{2,1}) = \emptyset$ while $e\cdot\mathbf{12}$ and $e\cdot\mathbf{13}$ are linearly dependent. It is also not lavish since $N^\Delta_e(\smallyng{5pt}{3}) = \{\mathbf{123}\}$ but $N^D_e(\smallyng{5pt}{3}) = \emptyset$. \end{example} \subsection{The case of generalised Jones-Wenzl idempotents}\label{sec:lp_hecke} Let us now turn to a central example. Let $A = {\rm TL}_n$ over the pointed ring $(k,\delta)$ with $(\ell, p)$-torsion.
It is well known that $A$ is cellular with $\Lambda = \{ 0 \le m \le n \;:\; m \equiv_2 n\}$, the sets $M(m)$ being monic $(n,m)$-diagrams, the map $\iota$ the natural duality on the Temperley-Lieb category and $C^m_{S,T}$ being the natural image of $\Delta(m)\otimes\iota\Delta(m)$. Let $e$ be the idempotent describing the projective cover of the trivial module. The trivial module is a composition factor of $\Delta(m)$ iff $m \in \operatorname{supp} n$. If so, it appears with multiplicity 1. Notice now that \begin{equation} \operatorname{Hom}_{A}(Ae, \Delta(m)) \cong e\Delta(m) = \widetilde\Delta(m) \subseteq \Delta(m), \end{equation} where a morphism is identified with the image of $e$. Suppose that $m \in \operatorname{supp} n$, and let $S \subset \Delta(m)$ be the largest (by inclusion) submodule of $\Delta(m)$ which does not have the trivial module as a composition factor. Given non-zero $\phi_1, \phi_2 \in \operatorname{Hom}_A(Ae,\Delta(m))$, the images of $e$ under these maps, $v_1 = \phi_1(e)$ and $v_2 = \phi_2(e)$, are non-zero in $\Delta(m)/ S$ (indeed $\operatorname{Hom}_A(Ae, S) \cong eS = 0$, as $S$ has no trivial composition factors, so no non-zero morphism can have image contained in $S$). Both lie in the single trivial submodule of $\Delta(m)/S$ and thus there is a linear combination, $\alpha_1v_1 + \alpha_2v_2$ that vanishes modulo $S$. But then we have that the image of $\alpha_1\phi_1 + \alpha_2\phi_2$ lies in $S$, which has no trivial factors. Thus indeed $\alpha_1\phi_1 + \alpha_2\phi_2 = 0$ so the two were collinear to begin with. Hence we see that \begin{equation}\label{eq:dim_hom_edelta} \dim \operatorname{Hom}_{A}(Ae, \Delta(m)) = \dim \widetilde\Delta(m) = \begin{cases} 1 & m \in \operatorname{supp} n\\ 0 & m \not \in \operatorname{supp} n \end{cases} \end{equation} \begin{remark}\label{rem:lavish_not_eve} Readers may be surprised that this is the case.
Surely if $\Delta(m-2)$ has a non-zero element indexed by diagram $S$\footnote{If $e \Delta(m) \neq 0$, then the non-zero element can be chosen to be $e\cdot S$, by the discussion preceding \cref{def:generous}}, then simply ``popping'' one of the outermost links of the diagram $S$ should give a diagram $S' \in \Delta(m)$ and since $0\neq e\cdot S = e\cdot S' \cdot u$, where $u$ is a simple cup diagram, we must have that $e \cdot S' \neq 0$ and so $e\widetilde\Delta(m)\neq 0$? The key here lies in the fact that these diagram manipulations are taken modulo different conditions. While indeed $0\neq e\cdot S' \cdot u$ as morphisms in ${\mathscr{TL}}$, the map $e\cdot S'$ factors through $m-2$ and thus vanishes in $\widetilde\Delta(m)$. Equivalently, $e$ is not lavish. Now, it is clear that when $m\not\in \operatorname{supp} n$, we have $N^\Delta_e(m) = M(m)$. On the other hand, when $m\in \operatorname{supp} n$, there may be multiple diagrams in $\Delta(m)$ not killed by $e$. Indeed, the diagram ${\rm d}_n^m$ is one such (canonical) example, but other ``ancestor centred'' options exist (see~\cite{tubbenhauer_wedrich_2019}). Hence $e$ is not even generous. \end{remark} We can conclude \begin{corollary} Let $\operatorname{supp} n = \{s_1 < s_2 < \cdots < s_{2^\generation{n}} = n\}$ and let $x_i : \underline{n} \to \underline{s_i}$ be a morphism such that $e\cdot x_i$ has through degree $s_i$. Then for any morphism $u : \underline{s_i} \to \underline{m}$ with through degree less than $s_i$, the morphism $e\cdot x_i \cdot u$ has through degree at most $s_{i-1}$. \end{corollary} Further, a direct application of \cref{cor:semisimple} shows that if $\ell = +\infty$ then $\widetilde A$ is semi-simple. \section{The Projective Endomorphism Algebra}\label{sec:endomorphism} Let ${\boldsymbol{\mu}} = (n)$ be a single-part, not necessarily Eve, composition.
Then elements of the algebra ${\rm TL}_{(n)}$ are of the form \begin{center} \begin{tikzpicture} \foreach \x in {0, 1.5} { \begin{scope}[shift={(\x,0)}] \foreach \i in {6,...,14} { \draw[very thick] (-.2,.2+ \i/5) -- (.5,.2 + \i/5); } \draw[thick,fill=purple] (0,1.3) rectangle (0.3,3.1); \draw[thick,fill=black] (0,3.0) rectangle (0.3,3.1); \end{scope} } \draw[thick] (.5,1.3) rectangle (1.3, 3.1); \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-0.3,1.3) -- (-0.3,3.1) node [black,midway,xshift=-10pt] {\footnotesize $n$}; \end{tikzpicture} \end{center} where the purple boxes are $(\ell,p)$-Jones-Wenzl idempotents and the white box is any element of ${\rm TL}_n$. Recall from \cref{eg:rings_tl_mu} (ii) that \begin{equation}\label{eq:TL_JW_is_poly} {\rm TL}_{\boldsymbol{\mu}} \simeq k[X_1, \ldots, X_{\generation{n}}]/(X_1^2, \ldots, X_{\generation{n}}^2) \end{equation} where $n + 1 = \sum_{i = 1}^{\generation{n}+1} s_i p^{(d_i)}$ is the $(\ell, p)$-adic expansion of $n+1$ with digits $s_i$. In fact, the case studied in~\cite{tubbenhauer_wedrich_2019} is only for $\delta = -2$ and so one must extend their argument to mixed (rather than positive) characteristic. However, the result as stated above still holds. If $S \subseteq \{1, \ldots, \generation{n}\}$, write $X_S$ for $\prod_{i \in S} X_i$. These $S$ can be compared to the ``down-admissible sets'' from~\cite{tubbenhauer_wedrich_2019}. They are in bijection with the numbers $m_S = n - 2\sum_{i \in S} s_i p^{(d_i)}$, which form $\operatorname{supp}(n)$. Further, the set $\operatorname{supp}(n)$ is exactly the set of indices of those cell modules of ${\rm TL}_{n}$ which have a trivial composition factor. \begin{lemma} The set $\operatorname{supp}(n) = \{m_S \;:\; S\subseteq\{1,\ldots, \generation{n}\}\} \subseteq \Lambda$ indexes the cell modules of ${\rm TL}_{\boldsymbol{\mu}}$, all of which are one dimensional. Further, $(\Lambda_{\boldsymbol{\mu}})_0 = \{n\}$.
\end{lemma} \begin{proof} The explicit construction in~\cite{tubbenhauer_wedrich_2019} constructs $X_S = e^{\boldsymbol{\mu}} {\rm d}^{m_S}_{n} {\rm u}^{m_S}_{n}e^{\boldsymbol{\mu}}$. Recall that the morphism ${\rm d}^m_{n} : \underline {n} \to \underline {m}$ is actually a diagram (and not a linear combination of diagrams). Now recall that $e M \simeq \operatorname{Hom}(Ae, M)$ for any idempotent $e$. In this case ${\rm TL}_{n}\cdot e^{\boldsymbol{\mu}}$ is the projective cover of the trivial module and, by the last comment above, the trivial module of ${\rm TL}_{n}$ appears in each of the cell modules $\{\Delta(m_S) \;:\; S \subseteq \{1,\ldots, \generation{n}\}\}$. Hence there is a morphism from the projective cover of the trivial module into each of these, so $e^{\boldsymbol{\mu}}\cdot\Delta(m_S)\neq 0$. The algebra ${\rm TL}_{\boldsymbol{\mu}}$ has $k$-dimension $2^{\generation{n}}$ since a $k$-basis is given by all the $\{X_S : S \subseteq \{1, \ldots, \generation{n}\}\}$. Recall that the dimension of a cellular algebra is the sum of the squares of the dimensions of its cell modules (see \cref{prop:a_lambda_decomp}): \begin{equation} \dim A = \sum_{\lambda\in \Lambda} \big(\dim \Delta(\lambda)\big)^2. \end{equation} However, we have already constructed $2^{\generation{n}}$ non-zero cell modules $\Delta_{\boldsymbol{\mu}}(m_S) = e^{\boldsymbol{\mu}}\cdot\Delta(m_S)$. Hence we deduce that these are all one dimensional. An inductive argument shows that $\Delta_{\boldsymbol{\mu}}(m_S) = \operatorname{span}\{e^{\boldsymbol{\mu}}\cdot {\rm d}^{m_S}_{n}\}$. Note that an inductive proof is necessary as, without knowing that the cell modules are non-zero, we could not rule out the possibility that some $e^{\boldsymbol{\mu}}{\rm d}^{m_S}_n$ was of through degree less than $m_S$. However, the algebra in \cref{eq:TL_JW_is_poly} has a single simple module, on which all the indeterminates act as zero.
Thus we expect $(\Lambda_{\boldsymbol{\mu}})_0 \subseteq \Lambda_{\boldsymbol{\mu}} = \operatorname{supp} n$ to be a singleton. It is clear that $n \in (\Lambda_{\boldsymbol{\mu}})_0$ and so we deduce that the inner product is zero on all cell modules apart from the one indexed by $n$. \end{proof} Note that the final part of this proof indicates that ${\rm u}_n^m e^{(n)} {\rm d}_n^m$ has through degree less than $m$ whenever $m<n$ over positive characteristic. Finally, we note that the decomposition numbers of this algebra are trivial: all the cell modules are isomorphic and one-dimensional. This is demonstrated in the image below. Here each row corresponds to a value of $n$ and each column to a value of $m$. A gray dot indicates that $e^{(n)}\Delta_n(m) = 0$ and an orange dot denotes those $\Delta_{(n)}(m)$ which are one dimensional, i.e. those $m$ lying in $\Lambda_{(n)}$. The dot is circled if that value of $m$ lies in $(\Lambda_{(n)})_0$. \begin{center} \includegraphics[width=0.5\textwidth]{endomorphism.png} \end{center} This diagram (or diagrams similar to it) appears in~\cite[figure 3]{spencer_2020},~\cite[figure 1]{sutton_tubbenhauer_wedrich_zhu_2021} and~\cite[figure 1]{jensen_williamson_2015}. Further enlightenment can be obtained by a subtle change of basis. While not strictly necessary for analysing ${\rm TL}_{\boldsymbol{\mu}}$ for one-part ${\boldsymbol{\mu}}$, this will be very useful in analysing other algebras. Throughout, we have been tacitly imagining $\Delta(\lambda)$ as a quotient of $D(\lambda)$. In terms of our morphisms, we have been considering $\Delta(m)$ as the set of all \emph{monic} morphisms; that is, morphisms in $D(m) \simeq \operatorname{Hom}_{{\mathscr{TL}}}(\underline{n}, \underline{m})$ with propagation number equal to $m$. Thus, when we consider ``the'' element $e^{\boldsymbol{\mu}} {\rm d}_n^m$ as a basis of $\Delta_{\boldsymbol{\mu}}(m)$ we are really considering its image.
While it is sometimes convenient to use this element (as it is an idempotent multiplied by a diagram), an alternative is available. Consider the morphism $\overline{\rm d}_n^m$ over characteristic zero. This is a non-zero element of $\operatorname{Hom}_{{\mathscr{TL}}}(\underline{n},\underline{m})$, but might not descend to mixed characteristic due to the presence of the various Jones-Wenzl idempotents. The important fact about this element is neatly encapsulated by the following lemma. \begin{lemma}\cite[Proposition 3.2]{burrull_libedinsky_sentinelli_2019}\label{lem:mutual_orthogonal} Suppose $m, m' \in \operatorname{supp} n$. Then \begin{equation} \lambda_n^{m'} \; \overline{\rm u}_n^{m'}\cdot \overline{\rm d}_n^m = \delta_{m,m'}{\rm JW}_m. \end{equation} \end{lemma} This tells us several things. Firstly, as an equation over characteristic zero, \begin{equation}\label{eq:idemoptent_not_kill_nice_basis} e^{\boldsymbol{\mu}}\cdot\overline{\rm d}_n^m = \sum_{m'\in \operatorname{supp} n} \lambda_n^{m'} \overline{\rm d}_n^{m'} \overline{\rm u}_n^{m'} \left(\overline{\rm d}_n^m \right) = \overline{\rm d}_n^m. \end{equation} \begin{lemma}\label{lem:suitable_multiples_of_basis} The monic submorphism of $\overline{\rm d}_n^m$ descends to $e^{(n)}{\rm d}_n^m$ over characteristic $(\ell, p)$. \end{lemma} \begin{proof} {\bf Step 1} {\it The coefficient of the diagram ${\rm d}_n^m$ in $\overline{\rm d}_n^m$ is exactly 1.} We can prove this by induction on the generation of $n$. Clearly it is true for all $m$ in the support of an Eve $n$. Then, if it is so for $n$ of generation $g-1$, consider the ladder construction from \cref{def:ladder}. The only terms that contribute to the coefficient of ${\rm d}_n^m$ are those that have the form ${\rm d}_{\mother[m']{n}}$ for some $m'$, followed by an optional Jones-Wenzl idempotent. The diagram ${\rm d}_n^m$ then only occurs from the identity coefficient in ${\rm JW}_i$ and so has unit coefficient as desired.
This is effectively half the proof that the change of basis of $\Delta_{(n)}(m)$ from $\{{\rm d}_n^m\}_m$ to $\{\overline{\rm d}_n^m\}_m$ is upper uni-triangular. {\bf Step 2} {\it The coefficient of the diagram ${\rm d}_n^m$ in $e^{(n)}{\rm d}_n^m$ is exactly 1.} We can consider the morphism $e^{(n)}{\rm d}_n^m$ modulo morphisms of through degree $<m$. In this case, we can write \begin{equation}\label{eq:down_morphism_expansion} e^{(n)} {\rm d}_n^m = \sum_{m\le m' \in \operatorname{supp} n}\lambda_n^{m'}\overline{\rm d}_n^{m'}\overline{\rm u}_n^{m'}{\rm d}_n^m \end{equation} However, if $m' > m$ then $\overline{\rm u}_n^{m'}{\rm d}_n^{m}$ is of the form ${\rm JW}_{m'}$ multiplied by a morphism of through degree at most $m$ and hence vanishes. Thus \cref{eq:down_morphism_expansion} expands to $\lambda_n^m \overline{\rm d}_n^m\overline{\rm u}_n^m{\rm d}_n^m$. Finally, note that by construction $\overline{\rm u}_n^m{\rm d}_n^m = \overline{\rm u}_n^m\overline{\rm d}_n^m = (\lambda_n^m)^{-1}{\rm JW}_m$. Then step 1 finishes the sub-proof. {\bf Step 3} {\it Some multiple of the monic part of $\overline {\rm d}_n^m$ descends to mixed characteristic:} We consider the construction of $\overline{\rm d}_n^m$ from \cref{def:ladder}. If $0\in S$, then all idempotents occurring in the definition are of the form ${\rm JW}_i$ where $i \equiv_\ell -1$. The general theory of Jones-Wenzl idempotents (see \cite{spencer_2020, ridout_saint_aubin_2014}) then tells us that these descend to the ring $\mathbb{Z}[X]_{\mathfrak m}$ where $\mathfrak m$ is the ideal generated by the minimal polynomial of $\delta$. As such, the entire morphism descends to that ring. On the other hand, if $0\not\in S$ we must be more careful. Here, we note that all but the last ${\rm JW}_i$ in the definition of $\overline{\rm d}_n^m$ exist over $\ell$. However, if we consider only monic diagrams, all terms but the identity of the final idempotent disappear.
Hence, if we consider the monic image of $\overline{\rm d}_n^m$, this is defined over $\mathbb{Z}[X]_\mathfrak{m} \subseteq \mathbb{Q}(X)$. If we quotient out by the ideal $\mathfrak{m}\mathbb{Z}[X]_\mathfrak{m}$ we are left with coefficients with denominators in $\mathbb{Z}[X]/\mathfrak{m}$. In this ring, the ideal $(p)$ is principal and maximal. Hence we can multiply by a suitable power of $p$ to obtain an element that descends to our characteristic $(\ell, p)$. {\bf Step 4} {\it This multiple of the monic part of $\overline {\rm d}_n^m$ is fixed (modulo $< m$) by the action of $e^{(n)}$, and hence must be a multiple of $e^{(n)}{\rm d}_n^m$ over mixed characteristic.} This follows since $\overline{\rm d}_n^m$ is fixed by $e^{(n)}$ over characteristic zero, so its multiple is too. If we quotient out by morphisms of through degree less than $m$ we still get a morphism fixed by $e^{(n)}$. Since both this morphism and $e^{(n)}$ exist over the mixed characteristic ring it is still fixed. But the space of fixed morphisms is one dimensional so it must be a scalar multiple of $e^{(n)}{\rm d}_n^m$. {\bf Step 5} {\it By steps 1 and 2 above, the multiple must be exactly 1.} \end{proof} There is an alternative, more direct proof by calculation. However, this proof relies on the much misused evaluation principle~\cite{goodman_wenzl_1993}, which is why we presented the longer proof above, which does not rely on it. \begin{proof} We consider calculating $e^{(n)}{\rm d}_n^m$ in characteristic zero. Now, if $m' > m$ then clearly $\overline {\rm u}_n^{m'}{\rm d}_n^m$ vanishes since ${\rm JW}_{m'}\overline {\rm u}_n^{m'} = \overline {\rm u}_n^{m'}$.
Thus we may write \begin{equation}\label{eq:what_is_down_baby_dont_hurt} e^{(n)}{\rm d}_n^m = \sum_{m'\in\operatorname{supp} n}\lambda_n^{m'}\overline{\rm d}_n^{m'}\overline{\rm u}_n^{m'}{\rm d}_n^m = \sum_{m \ge m'\in\operatorname{supp} n}\lambda_n^{m'}\overline{\rm d}_n^{m'}\overline{\rm u}_n^{m'}{\rm d}_n^m \end{equation} However, if we quotient out by morphisms of through degree less than $m$ we get that \cref{eq:what_is_down_baby_dont_hurt} evaluates to $\overline{\rm d}_n^m$. Now we carefully apply the evaluation principle. Since both $e^{(n)}$ and ${\rm d}_n^m$ are defined over mixed characteristic, so too must their product be. Since their product, up to through degree less than $m$, is $\overline{\rm d}_n^m$, this must hold over mixed characteristic. \end{proof} From this we can obtain information on the scalars $\lambda_n^m$ in the definition of the $(\ell, p)$-Jones-Wenzl idempotents from \cref{sec:notation}. Recall that they were defined such that $\overline{\rm u}_n^m\overline{\rm d}_n^m = (\lambda_n^m)^{-1}{\rm JW}_m$. But by considering this modulo morphisms of through degree $<m$, we see that in fact $(\lambda_n^m)^{-1}$ is simply the inner product on the cell module $\Delta_{(n)}(m)$. This gives a possibly surprising corollary. \begin{corollary} For each $m \in \operatorname{supp}(n)$, $(\lambda_n^m)^{-1}$ is defined over mixed characteristic. Further, over this characteristic, it is zero if $m\neq n$ and one if $m=n$. \end{corollary} \section{Further work}\label{sec:further} \subsection{Other Idempotents} In general, the idempotent $e^{(n)}$ is not the only idempotent describing the projective cover of the trivial module. Though any given decomposition of ${\rm TL}_n$ has exactly one factor isomorphic to $P_n(n)$, multiple decompositions are possible. Indeed, as a first example, consider the idempotent $\bar e^{(n)}$, which is simply the vertical flip of the morphism $e^{(n)}$.
It is clear that ${\rm TL}_n \cdot \bar e^{(n)} \simeq {\rm TL}_n \cdot e^{(n)}$ by the isomorphism sending $x \cdot e^{(n)} \mapsto \bar x \cdot e^{(n)}$. Equivalently, there is a trivial isomorphism of cell data. This stems from the fact that the ``ladder construction'' in \cref{def:ladder} is lopsided. A more general construction would allow for the new strands in \cref{eq:ladder_step} to be placed either above or below the previous morphism. It only takes a minor alteration of the argument in \cite{martin_spencer_2021} to show that the resulting morphisms each describe $P_n(n)$. The root cause of this is the tower of algebras \begin{equation}\label{eq:tl_tower} k = {\rm TL}_0 \subset {\rm TL}_1 \subset {\rm TL}_2\subset \cdots\subset {\rm TL}_n \subset {\rm TL}_{n+1}\subset \cdots. \end{equation} Traditionally, ${\rm TL}_n\subset {\rm TL}_{n+1}$ by ``adding a strand below'', but one might as well add a strand above. By using these different embeddings throughout, one determines different, equivalent idempotents. Since these are equivalent idempotents, the results should be the same. For example, \cref{prop:main_two_part} should describe ${\rm TL}_{(1,k)}$ as well as ${\rm TL}_{(k,1)}$. However, evaluating the traces on the ``wrong side'' of $e^{(k)}$ is difficult, and it is likely to only get worse with non-lopsided idempotents. Finding a method of calculation which works for all possible towers of the form \cref{eq:tl_tower} may provide some insight into dealing with non-lopsided ladder constructions in general. \begin{question} Are there other towers of the form \cref{eq:tl_tower} that do not consist of adding strands above and below in some order? \end{question} This question can be seen as a dual to the construction of Goodman and Wenzl in~\cite{goodman_wenzl_1993} where the authors construct idempotents describing projective covers of all the simple modules in characteristic $(\ell, 0)$ by considering paths that end at $(n,m)$.
In some sense, our observation implies that the projective covers should be indexed by pairs of paths. \subsection{Other algebras} The Temperley-Lieb algebras are only one family out of many families of cellular algebras. Indeed, as endomorphisms of tilting modules for quantum groups as described in~\cite{andersen_stroppel_tubbenhauer_2018}, Temperley-Lieb algebras fall into the greater study of tilting modules for algebraic groups. This has recently been expanded to more general ``standard categories with duals''~\cite{bellamy_thiel_2021}, which allows one to assign cellular structures to endomorphism algebras over categories that are not necessarily highest-weight categories. If the endomorphism algebras sit in a tower similar to \cref{eq:tl_tower}, compatible with a monoidal category structure, it may be possible to construct similar endomorphisms to $e^{\boldsymbol{\mu}}$. If so, the factorisation in \cref{sec:cell_arbitrary} may be effective at studying this problem. In particular, $({\rm TL}_n)_{n\in\mathbb{N}_0}$ is in Schur-Weyl duality to $U_q(\mathfrak{sl}_2)$. The objects in duality to $U_q(\mathfrak{sl}_n)$ for higher $n$ are known as {\it webs} or {\it spiders} and also admit cellular, diagrammatic behaviour. It is hoped that they will admit a similar analysis, as is indicated by the success in describing ``clasps'' (higher order Jones-Wenzl elements) in~\cite{elias_2015}. Additionally, this question has application to Soergel bimodule theory~\cite{elias_williamson_2016}. Here again we are interested in the endomorphism spaces of objects, this time Soergel bimodules. Through the diagrammatic definition of the category, similar techniques may allow one to study the breakdown of these objects (in particular the Bott-Samelson objects) into their indecomposable summands -- a key question in the area.
\section{Cell Data for Arbitrary Compositions}\label{sec:cell_arbitrary} Suppose now that ${\boldsymbol{\mu}} = (\mu_1, \ldots, \mu_r) \vdash n$ is an arbitrary composition. We will now generalise the ideas in the previous sections to determine cell data for ${\rm TL}_{\boldsymbol{\mu}}$. \begin{definition} For a given ${\boldsymbol{\mu}}$, let the set of \emph{associated compositions}, $\operatorname{ass}{\boldsymbol{\mu}}$, be \begin{equation*} \operatorname{ass}{\boldsymbol{\mu}} = \{(a_1, \ldots,a_r) : a_i \in \operatorname{supp}\mu_i\}, \end{equation*} so that $|\operatorname{ass} {\boldsymbol{\mu}}| = 2^{\sum_i \generation{\mu_i}}$. \end{definition} The idea for constructing a cell basis can be summed up by the following diagram. Here, to fit on the page, we have written the morphism from ``bottom to top'' instead of ``left to right''. \input{ideas.tex} Notice we factor the element of the cell module into an idempotent, some elements of cell modules for ${\rm TL}_{(n)}$ and one further diagram. Recall the definition of $E_\mathbf{a}$ from \cref{sec:valenced_cell_data}. \begin{proposition} If ${\boldsymbol{\mu}} \vdash n$, then ${\rm TL}_{\boldsymbol{\mu}}$ is cellular with cell modules described by \begin{equation}\label{eq:subsetsum} \Lambda_{\boldsymbol{\mu}} = \bigcup_{\mathbf{a}\in \operatorname{ass}{\boldsymbol{\mu}}} E_\mathbf{a} \end{equation} and \begin{equation}\label{eq:dim_general_cell} \dim\Delta_{\boldsymbol{\mu}}(m) = \sum_{\mathbf{a}\in\operatorname{ass}{\boldsymbol{\mu}}} C^\mathbf{a}_m. \end{equation} \end{proposition} \begin{proof} In effect, we generalise the factorisation of \cref{sec:seam}. Let $\mathbf{t}$ be a diagram from $\underline{n}$ to $\underline{m}$ such that $e^{\boldsymbol{\mu}}\cdot\mathbf{t}$ is not zero in $\Delta(m)$. Let $B_i = \{\mu_1 + \cdots + \mu_{i-1}+1,\ldots,\mu_1 + \cdots + \mu_i\}$ be the $i$-th ``bucket'' of source sites on $\mathbf{t}$.
Suppose that, in $\mathbf{t}$, $a_i$ of the source sites in bucket $B_i$ are not connected to other source sites in bucket $B_i$ and let $a = \sum_{i} a_i$. Then $\mathbf{t}$ uniquely factorises into \begin{equation} \mathbf{t} = (\mathbf{t}_0^1 \otimes\cdots\otimes\mathbf{t}_0^r) \cdot \mathbf{t}_1 \end{equation} where $\mathbf{t}_0^i$ is a monic diagram $\underline{\mu_i}\to\underline{a_i}$ and $\mathbf{t}_1$ is a monic diagram $\underline{a}\to\underline{m}$ such that no two source sites in the buckets described by the $a_i$ are connected. Let this describe the associated tuple to a diagram, $\operatorname{ass} \mathbf{t} = (a_1,\ldots, a_r)$. Order the $\mathbf{t}$ lexicographically by their associated tuples. Now suppose that $e^{\boldsymbol{\mu}}\cdot \mathbf{t}$ is not in the linear span of $e^{\boldsymbol{\mu}}\cdot\mathbf{t}'$ for smaller $\mathbf{t}'$. If $e^{\mu_i}\cdot \mathbf{t}_0^i$ has through degree less than $a_i$ then $e^{\boldsymbol{\mu}}\cdot \mathbf{t}$ lies in the linear span of $e^{\boldsymbol{\mu}}\cdot\mathbf{t}'$ where $\mathbf{t}' \prec \mathbf{t}$. Indeed, by expanding at bucket $i$ we can express $e^{\mu_i}\cdot\mathbf{t}_0^i$ as a linear combination of $e^{\mu_i}\cdot \mathbf{t}'^i_0$ where the $\mathbf{t}'^i_0$ have through degrees in $\operatorname{supp} \mu_i$ less than $a_i$. Replacing $\mathbf{t}_0^i$ by such $\mathbf{t}'^i_0$ results in diagrams $\mathbf{t}'$ such that $a_i$ is smaller and no $a_j$ increases for $j \neq i$. Such a tuple is lexicographically smaller than $(a_i)$, showing the claim. Thus for this to be a new, linearly independent vector, $\mathbf{t}_0^i$ must be a monic diagram from $\underline{\mu_i}$ to $\underline{a_i}$ with $a_i\in\operatorname{supp}{\mu_i}$. Since, up to scalar, there is a unique such morphism $e^{\mu_i}\cdot\mathbf{t}_0^i$ for each $a_i \in \operatorname{supp}{\mu_i}$, we can calculate the total number of basis morphisms by summing the appropriate $C^\mathbf{a}_m$.
\end{proof} \begin{remark} Note that $C_m^\mathbf{a} = 0$ when $m \not\in E_\mathbf{a}$ so we could have written the sum in \cref{eq:dim_general_cell} over all tuples. However, the formulation of summing over associated tuples makes the factorisation clearer. \end{remark} \begin{remark} Unfortunately, it is unlikely that much more can be said about general compositions without more powerful machinery. The recursive nature of the calculations of $C^\mathbf{a}_m$ and the combinatorial nature of $\operatorname{ass}{\boldsymbol{\mu}}$ mean that the analysis given for select compositions in this paper is unlikely to extend easily to general compositions (though the ideas may well be developed into something more useful). This factorisation, for example, is particularly useful in calculating Gram matrices for certain compositions that are either Eve or almost Eve (first generation idempotents with large cell indices). However, it does not yield much in the general case. For example, it becomes important to know the value of ${\rm u}_n^{m'} e^{(n)}{\rm d}_n^{m'}$ {\it exactly}, and not just modulo lower morphisms, if one wants to calculate inner products on a cell module of a composition including $n$. \end{remark} \section*{Introduction}\label{sec:introduction} Given a well-understood algebra, there are various ways to derive new algebras related to the original but with subtle differences. The simplest is taking quotients, but more exotic constructions exist, such as idempotent truncation, crossed products and quantum deformations. In this paper we will be interested in truncating an algebra by an idempotent. The representation theory of the Temperley-Lieb algebras over positive characteristic has recently received some attention. These algebras are cellular in the sense of Graham and Lehrer~\cite{graham_lehrer_1996} and this gives us a good handle on their structure and representation theory.
Indeed, we are able to give the complete decomposition matrix for these algebras~\cite{spencer_2020} as well as formulae for the idempotents describing the projective covers of the trivial modules~\cite{martin_spencer_2021} (so-called generalised Jones-Wenzl elements). We may ask what we can say about the truncations of these algebras at certain idempotents. In this paper we will investigate a particular class of idempotents, formed out of generalised Jones-Wenzl elements, and the algebras they induce. These algebras inherit some of their cellular structure from the parent Temperley-Lieb algebra, but ``lose'' some of the complexity in interesting ways. We would like to answer a few principal questions about the representation theory of these algebras: \begin{enumerate} \item What are their cell modules? \item What are the cell module dimensions? \item Can we construct a basis of the cell module? \item How many (isomorphism classes of) irreducible modules does the algebra have? \item What are the dimensions of the irreducible modules? \end{enumerate} Answering these questions gives us answers to many others. For example, knowing (1) and (2) gives us the dimension of the algebra and (3) gives a basis. The question of identifying the irreducible modules for an algebra is classical and of its own importance. Their dimensions have applications to higher representation theory as the ranks of intersection forms. If the algebra happens to have only one-dimensional cell modules, then it is commutative. In this paper, we answer these questions for certain classes of idempotent-truncated Temperley-Lieb algebras over mixed characteristic and thus derive a large tranche of their structure. The particular truncation studied is of its own interest, but is motivated by both physical systems and Soergel bimodule theory. We are able to give recursive formulae to answer points (1) and (2) above and give an algorithm for constructing the basis of (3).
In particular cases, we can identify the irreducible modules explicitly and give their dimensions, answering (4) and (5). It should be noted that the positive and mixed characteristic theory is very much more intricate and complicated than the characteristic zero theory, even specialised at a root of unity. However, our results are strict generalisations of these cases and the concerned reader may, if they wish, consider all algebras as occurring over $\mathbb{C}$ (the mantra to hold in mind is that $p = \infty$). \vspace{1.5em}\noindent The Temperley-Lieb algebras arise from the monoidal Temperley-Lieb category, ${\mathscr{TL}}$, as the endomorphism spaces of the objects $\{\underline{n} \;:\; n \in \mathbb{N}_0\}$. To be exact, these spaces consist of $k$-linear combinations of Temperley-Lieb diagrams on $n$ points. This category is equipped with a natural tensor product which acts as addition on the objects and as vertical concatenation on the morphisms: \begin{equation*} \vcenter{\hbox{ \begin{tikzpicture} \node at (0.5, 0.4) {$f_1$}; \draw (0,.0325) rectangle (1,0.75); \foreach \i in {0,...,2} { \draw[very thick] (-.4, \i/4+0.125) -- (0,\i/4+0.125); \draw[very thick] (1.4, \i/4+0.125) -- (1,\i/4+0.125); } \end{tikzpicture} }} \otimes \vcenter{\hbox{ \begin{tikzpicture} \node at (0.5, -0.7) {$f_2$}; \draw (0,-.0325) rectangle (1,-1.25); \foreach \i in {-5,...,-1} { \draw[very thick] (-.4, \i/4+0.125) -- (0,\i/4+0.125); \draw[very thick] (1.4, \i/4+0.125) -- (1,\i/4+0.125); } \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \node at (0.5, 0.4) {$f_1$}; \node at (0.5, -0.7) {$f_2$}; \draw (0,-.0325) rectangle (1,-1.25); \draw (0,.0325) rectangle (1,0.75); \foreach \i in {-5,...,2} { \draw[very thick] (-.4, \i/4+0.125) -- (0,\i/4+0.125); \draw[very thick] (1.4, \i/4+0.125) -- (1,\i/4+0.125); } \end{tikzpicture} }} \end{equation*} If $f_1$ and $f_2$ are each idempotents within their respective Temperley-Lieb algebras, then their tensor product, $e$, is too.
We can then construct the algebra $e\cdot{\rm TL}_n\cdot e$. It is a unital, associative algebra, but not a sub-algebra of ${\rm TL}_n$ since the identities differ. Crucially for what follows, it is cellular as defined by Graham and Lehrer~\cite{graham_lehrer_1996}. More generally, if $(f_i)_{i=1}^r$ is a family of idempotents with $f_i \in {\rm TL}_{n_i}$ and $\sum_i n_i = n$, then $e = \bigotimes_i f_i$ is an idempotent in ${\rm TL}_n$. Of particular importance will be the case when each of the $f_i$ is an idempotent describing the projective cover of the trivial ${\rm TL}_{n_i}$ module --- the various types of Jones-Wenzl idempotents. In this case, we are particularly interested in the representation theory of $e \cdot {\rm TL}_n \cdot e$. The tuple $(n_1,\ldots, n_r)$ will be called the composition of the idempotent. It is ``Eve'' if each projective cover is itself trivial. The question in the Temperley-Lieb category in particular is motivated in~\cite[1.3(5)]{burrull_libedinsky_sentinelli_2019} by the study of indecomposable objects in Hecke categories for Universal Coxeter systems~\cite{elias_libedinsky_2017}. The systems considered there are restricted to Eve compositions over realisations with Cartan matrix elements $\pm 2$. Burrull, Libedinsky and Sentinelli, by constructing $p$-Jones-Wenzl idempotents, open the question to non-Eve compositions. By using the extension in \cite{spencer_2020} we can study more general Cartan matrices using $(\ell, p)$-Jones-Wenzl idempotents. It is in this more general characteristic that this paper works. The question also arises as a natural positive characteristic analogue of mathematics important to physics. Originally conceived of as operators for statistical mechanics~\cite{temperley_lieb_1971}, the Temperley-Lieb algebras' close links to knot theory give them strong ties to physics.
Recently, the representation theory of the boundary seam algebras has been studied~\cite{langlois_remillard_saint_aubin_2020} after finding application to conformal field theory~\cite{morin_duchesne_rasmussen_david_2015}. These are both special cases of valenced Temperley-Lieb algebras~\cite{flores_peltola_2018a, flores_peltola_2018b}. The representation theory of these algebras is well understood over characteristic zero, where we have explicit forms for the Gram matrices, simple modules and semi-simplicity criteria. However, as is to be expected, the results over characteristic zero do not extend simply to positive characteristic. In the case of Temperley-Lieb theory, the most general characteristic considered is \emph{mixed} characteristic $(\ell, p)$. Here, even the representation theory of the Temperley-Lieb algebra is rich and fractal-like~\cite{spencer_2020}. In this paper, we explore the representation theory of valenced Temperley-Lieb algebras over mixed characteristic. We use knowledge of the cellular structure of the algebras, and explicit calculations of certain bilinear forms, to enumerate the simple modules for some specific cases. Key to our analysis is a particular factoring of morphisms in the cell modules of valenced Temperley-Lieb algebras, and the explicit forms of the $(\ell, p)$-Jones-Wenzl idempotents. In concurrent work, Sutton, Tubbenhauer, Wedrich and Zhu~\cite{sutton_tubbenhauer_wedrich_zhu_2021} tackled an essentially identical question, building on earlier work in computing the $p$-Jones-Wenzl idempotents in~\cite{tubbenhauer_wedrich_2019}. In their work, the representation theory of $U_q(\mathfrak{sl}_2)$ is critical in providing the module structures and Clebsch-Gordan rules. The idempotents $\otimes f_i$ are expressed explicitly as sums of mutually orthogonal idempotents. Our work differs in that it builds on~\cite{spencer_2020} and~\cite{martin_spencer_2021} to work solely within the Temperley-Lieb category.
We do not split the idempotents explicitly, but focus on the cellular nature of ${\rm TL}_n$ and its valenced cousins and determine the cell data for these algebras. Further, we consider some additional cases not covered by~\cite{sutton_tubbenhauer_wedrich_zhu_2021}. \vspace{2em} The remainder of this paper is arranged as follows. In \cref{sec:notation} we introduce the notation necessary for describing the representation theory of ${\rm TL}_n$. The reader is directed to~\cite{ridout_saint_aubin_2014} for an overview of the characteristic zero theory,~\cite{spencer_2020} for the mixed characteristic theory and~\cite{tubbenhauer_wedrich_2019, martin_spencer_2021} for the theory of general Jones-Wenzl elements. \Cref{sec:cellular} revises Graham and Lehrer's \cite{graham_lehrer_1996} cellular algebras and makes explicit K\"onig and Xi's observation that Hecke algebras (those of the form $eAe$ for idempotent $e$) are cellular~\cite[Proposition 4.3]{konig_xi_1998}. We determine precisely how the representation theory of $eAe$ is related to that of $A$ and consider the critical example ${\rm JW}_n \cdot {\rm TL}_n \cdot {\rm JW}_n$ (where ${\rm JW}_n$ is the $(\ell, p)$-Jones-Wenzl idempotent of \cite{martin_spencer_2021}). We then introduce the objects of study explicitly in \cref{sec:valenced} and consider some known examples. In \cref{sec:valenced_cell_data} we restrict the idempotents to be ``Eve''. That is, all Jones-Wenzl elements appearing are ``characteristic zero'' as opposed to $(\ell, p)$-Jones-Wenzl elements. This substantially simplifies the analysis, and we construct diagram bases for the cell modules, and so find their dimensions. We consider the Gram matrices for these modules in \cref{sec:gram}. Having laid the ground theory for the study of the valenced algebras, we turn to certain compositions.
The simplest of these, considered in \cref{sec:endomorphism}, is the composition $(n)$, in which case we study ${\rm JW}_n \cdot {\rm TL}_n \cdot {\rm JW}_n$ for some $(\ell, p)$-Jones-Wenzl idempotent ${\rm JW}_n$. This is (for positive characteristic) an object of study in~\cite{tubbenhauer_wedrich_2019} and this section is a recasting of those results into our notation and framework. This is critical for studying more complicated algebras. The seam algebras of~\cite{morin_duchesne_rasmussen_david_2015} (with $\beta_2 = 0$ and $\beta_1 = 1$) are studied over characteristic zero in~\cite{langlois_remillard_saint_aubin_2020} and \cref{sec:seam} can be viewed as the natural extension of that paper to mixed characteristic. We compute the cell indices and cell-module dimensions in the general case, as well as the irreducible modules when the composition is Eve. We also observe that in the special case of a two-part seam partition, we are able to determine all the cell data explicitly (even for non-Eve compositions) and relate this to the action of inducing modules from ${\rm TL}_{n}$ to ${\rm TL}_{n+1}$. \Cref{sec:two-part} considers two-part Eve compositions and \cref{sec:small_tensor} considers two-part non-Eve compositions where the second part is sufficiently small. In both cases, we are able to characterise the cell indices, the bases and dimensions of the cell modules (and hence of the algebra), and which cell modules have non-degenerate bilinear form. Finally, we collate those parts of the preceding analysis that are common into a general result in \cref{sec:cell_arbitrary} and muse on further directions and questions in \cref{sec:further}. \section*{Acknowledgements} The author would like to thank Louise Sutton, Daniel Tubbenhauer, Paul Wedrich and Jieru Zhu for a preview of their work. The author is also grateful to Stuart Martin and Daniel Tubbenhauer for their comments on a draft of this paper.
\input{notation} \input{cellular} \input{valenced} \input{valenced-cell-data} \input{valenced-cell-modules} \input{endomorphism} \input{seam} \input{two-part} \input{small_tensor} \input{general-composition-cells} \input{further} \bibliographystyle{alpha} \section{Notation for Temperley--Lieb Theory}\label{sec:notation} We define some notation for dealing with the modular theory of the Temperley-Lieb algebras. We assume the reader is familiar with the basics of Temperley-Lieb theory, at least over characteristic zero (see~\cite{ridout_saint_aubin_2014}). When discussing the Temperley-Lieb category ${\mathscr{TL}}$, we use underlines to denote the object set $\{\underline{n}\;:\; n\in \mathbb{N}_0\}$. \subsection{Ring Parameters and Natural Numbers} Throughout, implicitly, we will be fixing a ring $R$ which has $(\ell, p)$-torsion. That is to say, $R$ has a distinguished element $\delta = [2]$ such that $[\ell]$ is the first quantum number after $[1]$ to vanish, and $1\in R$ has additive order $p$. Here, the quantum numbers $[n]$ are defined as polynomials in $\delta$ by $[0] = 0$, $[1] = 1$ and the recurrence $[n+1] + [n-1] = [2] [n]$. Our Temperley-Lieb algebras will always be defined over $R$, or sometimes $\mathbb{Q}(\delta)$. The ring in question will always be an integral domain. This fixes a prime $p$ and an integer $\ell > 1$. Either $p$ or $\ell$ may be taken to be $+\infty$. The case of $p = +\infty$ is the characteristic zero case, and $\ell = +\infty$ gives the semi-simple Temperley-Lieb algebra (even when $p < +\infty$). \begin{definition} We write $p^{(i)} = \ell p^{i-1}$ with the understanding that $p^{(0)} = 1$. If $n = \sum_{i=0}^r n_i p^{(i)}$ where $0\le n_0 < \ell$ and $0\le n_i < p$ for $i>0$, we will write $n = \pldigs{n_r, n_{r-1},\ldots, n_0}$. These are the $(\ell,p)$-digits of $n$. We extend this notation so that $\pldigs{n_r, \ldots, n_0} = \sum_i n_i p^{(i)}$ even when the $n_i$ are possibly negative.
\end{definition} We would like to emphasise that the remainder of the paper applies to the characteristic zero case and the semi-simple case equally. That is, if the base ring is $\mathbb{C}$ (or similar) and $\delta$ (or $q$ when $\delta = q + q^{-1}$) is an indeterminate, setting $p,\ell = \infty$ gives that the digits of any $n$ are simply $n = \pldigs{n}$. Similarly, if $q$ is specialised at a root of unity (so $\delta$ satisfies some quantum polynomial), the digits are $n = \pldigs{n_1,n_0}$ where $n = n_1 \ell + n_0$. In these cases we recover exactly the known theory of the Temperley-Lieb algebras (such as is described in~\cite{ridout_saint_aubin_2014}). \begin{definition} If $n + 1 = \pldigs{n_r, \ldots, n_s,0,\ldots, 0}$ with $n_s \neq 0$, then the \emph{mother} of $n$, denoted $\mother{n}$, is such that $\mother{n} + 1 = \pldigs{n_r, \ldots, n_{s+1},0,\ldots, 0}$, i.e. the mother of $n$ is found by adding one, setting the least significant non-zero digit to zero, and subtracting one. If $n+1$ has a single non-zero digit (i.e. $n = n_r p^{(r)}-1$) then we term $n$ \emph{Eve}. It has no mother. The \emph{generation} of $n$ is one less than the number of non-zero digits in $n+1$. We write it $\generation{n}$ and note that $n$ is Eve iff $\generation{n}=0$. The set $A(n) = \{\mother{n},\mother[2]{n},\ldots\}$ is known as the set of \emph{ancestors} of $n$. Here, $\mother[t]{n} = \mother[t-1]{\mother{n}}$. Note that $|A(n)| = \generation{n}$. While ``support'' is the term generally used in the literature for the next definition, ``cousins'' would be a more accurate description. The \emph{support} of $n = \pldigs{n_r, \ldots, n_0} - 1$ is \begin{equation*} \operatorname{supp}(n) = \{\pldigs{n_r, \pm n_{r-1}, \ldots, \pm n_0} - 1\}. \end{equation*} Clearly $n\in \operatorname{supp}(n)$ and $|\operatorname{supp}(n)| = 2^{\generation{n}}$. \end{definition} Usually, for a set $S$, we will write $S + x$ to mean $\{s+x \;:\; s \in S\}$.
Hence we could have written \begin{equation*} \operatorname{supp}(n) = \{\pldigs{n_r, \pm n_{r-1}, \ldots, \pm n_0}\} - 1. \end{equation*} More generally, for sets $S, T$, the set $S + T$ will mean $\{s + t\;:\; s \in S \text{ and }t \in T\}$. We will let $L_n(m)$, $\Delta_n(m)$ and $P_n(m)$ be the simple module, cell module and projective indecomposable module for ${\rm TL}_n$ labelled by $m$ respectively. Recall that $L_n(m)$ is a composition factor of $\Delta_n(m')$ iff $m' \in \operatorname{supp} m$~\cite[Theorem 8.4]{spencer_2020} iff $\Delta_n(m')$ appears in a $\Delta$-filtration of $P_n(m)$ (for $m \in \Lambda_0$)~\cite[Theorems 2.4, 2.7]{xi_2006}. The representation theory of ${\rm TL}_n$ depends on the parity of $n$. We thus define $\mathbb{Z}_2^n$ to be the set of all integers of the same parity as $n$. Another important set is \begin{equation}\label{eq:two_part_e} E_{r,s} = \{|r-s|, |r-s| + 2, \ldots, r + s\}. \end{equation} Note that $t \in E_{r,s} = E_{s,r}$ iff $r \in E_{t,s}$. \subsection{Boxes and Pictures for Temperley-Lieb Morphisms} When drawing morphisms in ${\mathscr{TL}}$ we may replace submorphisms by boxes where the meaning is clear. The boxes will often have the morphism name written in the middle. Where multiple strands run concurrently (a so-called ``ribbon'') we may denote this with a thicker line annotated by its width.
As such, the following three morphisms are all identical: \begin{equation} \vcenter{\hbox{ \begin{tikzpicture}[] \draw (-.4,-.4) rectangle (.4,.4); \node at (0,0) {${\rm JW}_2$}; \draw[very thick] (-.6,-.2) -- (-.4,-.2); \draw[very thick] (-.6,.2) -- (-.4,.2); \draw[very thick] (.4,-.2) arc (90:-90:.2) -- (-.6,-.6); \draw[very thick] (.4,.2) arc (90:-90:.6) -- (-.6,-1); \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture}[] \draw (-.4,-.4) rectangle (.4,.4); \node at (0,0) {${\rm JW}_2$}; \draw[line width=2pt] (-.6,0) -- (-.4,0); \draw[line width=2pt] (.4,0) arc (90:-90:.3) -- (-.6,-.6); \node at (.8,-.6) {\tiny$2$}; \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture}[] \draw[very thick] (.4,-.2) arc (90:-90:.2); \draw[very thick] (.4,.2) arc (90:-90:.6); \end{tikzpicture} }} - \frac 1{[2]} \vcenter{\hbox{ \begin{tikzpicture}[] \draw[very thick] (.4,.2) arc (90:-90:.2); \draw[very thick] (.4,-.6) arc (90:-90:.2); \end{tikzpicture} }} \end{equation} \subsection{Down- and Up-morphisms} Throughout, a critical role will be played by certain morphisms known as up and down morphisms. The mnemonic is that down morphisms decrease the number of strands and up morphisms increase them. For $n + 1 = \pldigs{n_r, \ldots, n_0}$, and $S \subseteq \{0, \ldots, r-1\}$ write $n[S] = n - 2\sum_{i \in S} n_ip^{(i)}$. \begin{definition}\cite[Definition 2.14]{tubbenhauer_wedrich_2019} Suppose $n+1 = [n_j, n_{j-1},\ldots, n_0]_{p, \ell}$. Then for each $0\le i\le j$ consider $w = [n_j, \ldots, n_{i+1}, -n_i,0,\ldots, 0]_{p, \ell}-1$ and $x = [n_{i-1}, \ldots, n_0]_{p, \ell}$. Note that $w + x = n - 2n_ip^{(i)} = n[\{i\}]$. 
Then define diagram ${\rm d}_i : \underline{n} \to \underline{x+w}$ by \begin{equation} {\rm d}_i = \vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw (0,-.3) -- (0,3.3); \draw[very thick] (0,3) -- (1.3,3); \draw[very thick] (0,.8) arc (-90:90:0.7); \draw[very thick] (0,0) -- (1.3,0); \node at (1.9,3) {\small $w$}; \node at (1.9,1.5) {\small $n_i p^{(i)}$}; \node at (1.9,0) {\small $x$}; \end{tikzpicture} }} \end{equation} Now if $S = \{s_k> \ldots > s_0\}$ is down-admissible for $n$ set \begin{equation}\label{eq:chain downs} {\rm d}_{n}^{n[S]} = {\rm d}_{s_k} \ldots {\rm d}_{s_1}{\rm d}_{s_0}. \end{equation} Note that this is a morphism from $\underline n$ to $\underline{n[S]}$. Finally, we denote ${\rm u}_S = \iota {\rm d}_S$ where $\iota$ is the natural involution on the Temperley-Lieb category. \end{definition} These morphisms are a special case of a more general ``ladder construction''~\cite{elias_2015}. \begin{definition}\label{def:ladder} Select morphisms $g_n : \underline{n} \to \underline{n}$ for each $n \in \mathbb{N}_0$. Then for $S$ a down-admissible set for $n$ where $n + 1 = \pldigs{n_j,\ldots, n_0}$ construct morphisms ${}_g\widetilde{\rm d}_i$ for each $0\le i \le j$ as follows. 
Firstly, \begin{equation} {}_g\widetilde{\rm d}_j = \vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,0) -- (.6,0); \draw[very thick] (0.6,-.7) rectangle (1.9,0.7); \draw[very thick] (1.9,0) -- (2.5,0); \node at (4.5,0) {\small $n_j p^{(j)} - 1$}; \node at (1.25,0) {\small $g$}; \end{tikzpicture} }} \end{equation} and then \begin{equation}\label{eq:ladder_step} {}_g\widetilde{\rm d}_i = \begin{cases} \vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[white] (0,-3) -- (0,1); \draw[very thick] (0,0) -- (.6,0); \draw[very thick] (0.6,-.8) rectangle (2.9,0.8); \draw[very thick] (2.9,0) -- (3.5,0); \draw[very thick] (3.5,-2.4) rectangle (4.9,0.8); \draw[very thick] (0,-2.0) -- (3.5,-2.0); \draw[very thick] (4.9,-0.8) -- (5.5,-0.8); \node at (4.2,-0.9) {\small $g$}; \node at (1.75,-1.5) {\tiny $n_i p^{(i)}$}; \node at (1.75,0) {\small ${}_g\widetilde{\rm d}_{i+1}$}; \end{tikzpicture} }} & i \not\in S\\ \vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[white] (0,-3) -- (0,1); \draw[very thick] (0,0) -- (.6,0); \draw[very thick] (0.6,-.8) rectangle (2.9,0.8); \draw[very thick] (2.9,0.4) -- (4.0,0.4); \draw[very thick] (4.0,-.8) rectangle (5.4,0.8); \draw[very thick] (0,-2.0) -- (2.9,-2.0); \draw[very thick] (2.9, -.4) arc (90:-90:0.8); \draw[very thick] (5.4,0) -- (6.0,0); \node at (4.7,0) {\small $g$}; \node at (1.75,-1.5) {\tiny $n_i p^{(i)}$}; \node at (1.75,0) {\small ${}_g\widetilde{\rm d}_{i+1}$}; \end{tikzpicture} }} & i \in S\\ \end{cases}. \end{equation} Then the ladder morphism $\underline{n} \to \underline{n[S]}$ with respect to $g$ is the morphism ${}_g{\rm d}_n^{n[S]} = {}_g{\widetilde{\rm d}}_0$. \end{definition} In almost all examples we will need, the family $\{g_n\}_n$ is ``self absorbing'' in that $(g_m\otimes{\operatorname{id}}_{n-m}) \cdot g_{n} = g_n$ for $n \ge m$. Examples of such families are the identity morphisms, the Jones-Wenzl idempotents and the $(\ell, p)$-Jones-Wenzl idempotents.
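The digit arithmetic above is easy to get wrong by hand. As a sanity check, the following Python sketch (ours, purely illustrative; none of the function names come from the paper) computes $p^{(i)}$, the $(\ell, p)$-digits, the quantities $n[S]$ and the support:

```python
# Illustrative sketch (not part of the paper): (l, p)-adic digit arithmetic.
# Digits are stored least significant first, the reverse of the paper's notation.

def p_bracket(i, l, p):
    """p^{(i)} = l * p^(i-1), with p^{(0)} = 1."""
    return 1 if i == 0 else l * p ** (i - 1)

def digits(n, l, p):
    """(l,p)-digits of n, least significant first: n = sum_i n_i p^{(i)}."""
    ds = [n % l]
    n //= l
    while n:
        ds.append(n % p)
        n //= p
    return ds

def bracket(ds, l, p):
    """Inverse of digits (ds least significant first), allowing negative digits."""
    return sum(d * p_bracket(i, l, p) for i, d in enumerate(ds))

def n_of_S(n, S, l, p):
    """n[S] = n - 2 * sum_{i in S} n_i p^{(i)}, digits taken from n + 1."""
    ds = digits(n + 1, l, p)
    return n - 2 * sum(ds[i] * p_bracket(i, l, p) for i in S)

def support(n, l, p):
    """Cousins of n: flip signs of the digits of n + 1 below the leading one."""
    ds = digits(n + 1, l, p)
    out = {n}
    for mask in range(1 << (len(ds) - 1)):
        signed = [(-d if mask >> i & 1 else d) for i, d in enumerate(ds[:-1])]
        out.add(bracket(signed + [ds[-1]], l, p) - 1)
    return out
```

With $\ell = 4$ and $p = 3$ it reproduces the digits of $279$, the values $n[S]$ and the eight cousins appearing in the example that follows.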
\begin{example} Suppose $\ell =4$ and $p = 3$. Let $n = 278$ so that $n + 1 = [2,1,2,0,3]_{3,4}$. Then $n$ is third generation and has eight cousins: $\operatorname{supp} n = \{278,272,230,224,206,200,158,152\}$. We tabulate the down morphisms from $\underline{n}$ as well as the general form of the ladder morphism with respect to a self absorbing family. \begin{center} \begin{tabular}{llll} \toprule $S$ & $n[S]$ & ${\rm d}_n^{n[S]}$& ${}_g{\rm d}_n^{n[S]}$\\ \midrule $\emptyset$ & 278 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,0) -- (1.3,0); \node at (1.9,0) {\tiny $278$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,0) -- (1.3,0); \draw[very thick, fill=white] (.3,-.2) rectangle (1,.2); \end{tikzpicture}}}$ \\ $\{0\}$ & 272 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,0) arc (-90:90:0.7); \draw[very thick] (0,1.7) -- (1.3,1.7); \node at (1.2,0.7) {\tiny $3$}; \node at (1.9,1.7) {\tiny $272$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick, fill=white] (.3,.8) rectangle (.8,1.9); \end{tikzpicture}}}$ \\ $\{2\}$ & 230 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,2.2) -- (1.3,2.2); \draw[very thick] (0,.4) arc (-90:90:0.7); \draw[very thick] (0,0) -- (1.3,0); \node at (1.9,2.2) {\tiny $227$}; \node at (1.2,1.1) {\tiny $24$}; \node at (1.9,0) {\tiny $3$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick] (0,0.1) -- (1.3,0.1); \draw[very thick, fill=white] (.3,.8) rectangle (.8,1.9); \draw[very thick, fill=white] (1.3,-.1) rectangle (1.8,1.9); \draw[very thick] (1.8,0.9) -- (2.1,0.9); 
\end{tikzpicture}}}$ \\ $\{0,2\}$ & 224 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,0) arc (-90:90:0.7); \draw[very thick] (0,1.7) -- (1.3,1.7); \node at (1.2,0.7) {\tiny $27$}; \node at (1.9,1.7) {\tiny $224$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick, fill=white] (.3,.8) rectangle (.8,1.9); \end{tikzpicture}}}$\\ $\{3\}$ & 206 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,2.2) -- (1.3,2.2); \draw[very thick] (0,.4) arc (-90:90:0.7); \draw[very thick] (0,0) -- (1.3,0); \node at (1.9,2.2) {\tiny $179$}; \node at (1.2,1.1) {\tiny $36$}; \node at (1.9,0) {\tiny $27$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick] (0,0.1) -- (1.3,0.1); \draw[very thick, fill=white] (.3,.8) rectangle (.8,1.9); \draw[very thick, fill=white] (1.3,-.1) rectangle (1.8,1.9); \draw[very thick] (1.8,0.9) -- (2.1,0.9); \end{tikzpicture}}}$ \\ \bottomrule \end{tabular} \hspace{3em} \begin{tabular}{llll} \toprule $S$ & $n[S]$ & ${\rm d}_n^{n[S]}$\\ \midrule $\{0,3\}$ & 200 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,4) -- (1.3,4); \draw[very thick] (0,2.2) arc (-90:90:0.7); \draw[very thick] (0,1.8) -- (1.3,1.8); \draw[very thick] (0,0) arc (-90:90:0.7); \node at (1.9,4) {\tiny $179$}; \node at (1.2,2.9) {\tiny $36$}; \node at (1.9,1.8) {\tiny $21$}; \node at (1.2,0.7) {\tiny $3$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick] (0,0.1) -- (1.3,0.1); \draw[very thick, fill=white] 
(.3,.8) rectangle (.8,1.9); \draw[very thick, fill=white] (1.3,-.1) rectangle (1.8,1.9); \draw[very thick] (1.8,1.0) -- (2.3,1.0); \draw[very thick] (0,-.4) -- (1.8,-.4) arc (-90:90:0.3); \end{tikzpicture}}}$ \\ $\{2,3\}$ & 158 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,2.2) -- (1.3,2.2); \draw[very thick] (0,.4) arc (-90:90:0.7); \draw[very thick] (0,0) -- (1.3,0); \node at (1.9,2.2) {\tiny $155$}; \node at (1.2,1.1) {\tiny $60$}; \node at (1.9,0) {\tiny $3$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick] (0,0.1) -- (1.3,0.1); \draw[very thick, fill=white] (.3,.8) rectangle (.8,1.9); \draw[very thick, fill=white] (1.3,-.1) rectangle (1.8,1.9); \draw[very thick] (1.8,0.9) -- (2.1,0.9); \end{tikzpicture}}}$ \\ $\{0,2,3\}$ & 152 & $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,0) arc (-90:90:0.7); \draw[very thick] (0,1.7) -- (1.3,1.7); \node at (1.2,0.7) {\tiny $63$}; \node at (1.9,1.7) {\tiny $152$}; \end{tikzpicture}}}$& $\vcenter{\hbox{ \begin{tikzpicture}[scale=0.4] \draw[very thick] (0,.5) -- (0.8,.5) arc (-90:90:0.3); \draw[very thick] (0.8,1.7) -- (1.3,1.7); \draw[very thick] (0,1.35) -- (0.8,1.35); \draw[very thick, fill=white] (.3,.8) rectangle (.8,1.9); \end{tikzpicture}}}$\\ \bottomrule \end{tabular} \end{center} Note that the caps of ${\rm d}_n^{n[S]}$ are all ``ancestor centred.'' That is, their centres lie $x$ strands from the top where $x\in A(n) = \{275, 251, 215\}$. \end{example} In particular, if we pick $g_n = {\rm JW}_n$ to be the usual Jones-Wenzl idempotents over characteristic zero, we define $\overline{\rm d}_n^m$ and $\overline{\rm u}_n^m$ in ${\rm TL}_n$ over $\mathbb{Q}(\delta)$. These are also indexed by $m \in \operatorname{supp} n$ and in general do not descend to our pointed ring $(R,\delta)$. 
For each $m$ in $\operatorname{supp} n$, we define \begin{equation} \overline{\rm L}_n^m = \overline{\rm d}_n^m \cdot {\rm JW}_m \cdot \iota\overline{\rm d}_n^m = \overline{\rm d}_n^m\cdot \overline{\rm u}_n^m. \end{equation} The specifics can be found in~\cite{martin_spencer_2021} where the variable $U_n^m$ is used in place of $\overline{\rm L}_n^m$ and $p_n^m$ replaces $\overline{\rm d}_n^m$. Now, let $\lambda_n^m \in \mathbb{Q}(\delta)$ be defined as $\overline{\rm u}_n^m\cdot\overline{\rm d}_n^m = (\lambda_n^m )^{-1}{\rm JW}_m$. The remarkable fact is that $\sum_{m\in \operatorname{supp} n} \lambda_n^m \overline{\rm L}_n^m$ is a sum of orthogonal idempotents which descends to mixed characteristic $(\ell, p)$ (the constants $\lambda_n^m$ as well as the set $\operatorname{supp} n$ are dependent on $\ell$ and $p$) and gives the idempotent describing the projective cover of the trivial ${\rm TL}_n$-module. This element is termed the \emph{$(\ell, p)$-Jones-Wenzl idempotent}. \subsection{Boxes for Distinguished Morphisms} Certain morphisms will be represented by the following diagrams. \begin{equation} {\rm JW}_n = \begin{cases} \vcenter{\hbox{ \begin{tikzpicture}[] \draw[white] (0,0) -- (0,-.3); \draw[thick,fill=purple] (-.2,0) rectangle (0.2,1.6); \foreach \i in {0,...,6} { \draw[very thick] (-.2,.2+ \i/5) -- (-.4,.2 + \i/5); \draw[very thick] (.2,.2+ \i/5) -- (.4,.2 + \i/5); } \end{tikzpicture} }}&\text{if $n$ is Eve}\\ \vcenter{\hbox{ \begin{tikzpicture}[] \draw[thick,fill=purple] (-.2,0) rectangle (0.2,1.6); \draw[thick,fill=black] (-.2,1.5) rectangle (0.2,1.6); \foreach \i in {0,...,6} { \draw[very thick] (-.2,.2+ \i/5) -- (-.4,.2 + \i/5); \draw[very thick] (.2,.2+ \i/5) -- (.4,.2 + \i/5); } \end{tikzpicture} }}&\text{otherwise} \end{cases} \end{equation} The black box in the non-Eve $(\ell, p)$-Jones-Wenzl idempotents indicates the construction from the ``top down'' in \cref{def:ladder}. 
\begin{equation} {\rm d}_n^m = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (1.6,0.2) -- (0.95,0) -- (0.95,1.8) -- (1.6,1.6) -- cycle; \foreach \i in {1,...,8} { \draw[very thick] (0.75,\i/5) -- (0.95,\i/5); } \foreach \i in {2,...,7} { \draw[very thick] (1.6,\i/5) -- (1.8,\i/5); } \draw [decorate,decoration={brace,amplitude=5pt}] (0.6,0.1) -- (.6,1.7) node [black,midway,xshift=-10pt] {\footnotesize $n$}; \draw [decorate,decoration={brace,amplitude=5pt}] (1.9,1.5) -- (1.9,0.3) node [black,midway,xshift=10pt] {\footnotesize $m$}; \end{tikzpicture} }}\quad\quad\quad\quad\quad\quad \overline{\rm d}_n^m = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (1.6,0.2) -- (0.95,0) -- (0.95,1.8) -- (1.6,1.6) -- cycle; \fill[thick] (1.6,1.45) -- (1.6,1.6) -- (0.95,1.8) -- (0.95,1.65) -- cycle; \foreach \i in {1,...,8} { \draw[very thick] (0.75,\i/5) -- (0.95,\i/5); } \foreach \i in {2,...,7} { \draw[very thick] (1.6,\i/5) -- (1.8,\i/5); } \draw [decorate,decoration={brace,amplitude=5pt}] (0.6,0.1) -- (.6,1.7) node [black,midway,xshift=-10pt] {\footnotesize $n$}; \draw [decorate,decoration={brace,amplitude=5pt}] (1.9,1.5) -- (1.9,0.3) node [black,midway,xshift=10pt] {\footnotesize $m$}; \end{tikzpicture} }}\quad\quad\quad \end{equation} To avoid confusion, we may write the name of the morphism in the box where necessary. \section{Questions to Answer} When we study the representation theory of algebras, we typically have a hierarchy of questions we want to know the answers to. These are adapted from Barcelo and Ram's excellent survey on the subject~\cite{barcelo_ram_1997}. \begin{enumerate}[(Q1)] \item What is the dimension of the algebra? \item How many finitely generated, simple modules does it have? \item Can we index the simple modules? \item What is the dimension of the simple modules? \item How do we construct the simple modules? \item What are the indecomposable modules? \item What are the composition factors of the indecomposable modules? 
\item What are the structures of their composition series? \end{enumerate} When we know the algebra is cellular, we have an alternative set of tasks (typically we will know the involution already). \begin{enumerate}[(CQ1)] \item What is the set $\Lambda$? \item What are the sets of tableaux $M(\lambda)$? \item What is the set $\Lambda_0$? \item What are the composition multiplicities of simple modules in cell modules (decomposition numbers)? \end{enumerate} These are related by \begin{align*} \text{CQ1} + \text{CQ2} & \Rightarrow \text{Q1}\\ \text{CQ3} & \Rightarrow \text{Q2} + \text{Q3} + \text{Q5} + \text{Q6}\\ \text{CQ1} + \text{CQ2} +\text{CQ3} + \text{\cref{lem:folklore_1}} & \Rightarrow \text{Q4}\\ \text{CQ4} & \Rightarrow \text{Q7} \end{align*} \section{The Seam Algebra}\label{sec:seam} Here we examine in more detail one of the algebras that has been appearing in our examples. We will change notation slightly in order to maintain parity with the characteristic zero literature, particularly~\cite{langlois_remillard_saint_aubin_2020}. \begin{definition} The \emph{boundary seam algebra} is ${\rm TL}_{\boldsymbol{\mu}}$ where ${\boldsymbol{\mu}} = (k, 1^n)$. \end{definition} This algebra can be imagined diagrammatically as follows. 
\begin{center} \begin{tikzpicture} \foreach \x in {0, 1.5} { \begin{scope}[shift={(\x,0)}] \foreach \i in {0,...,14} { \draw[very thick] (-.2,.2+ \i/5) -- (.5,.2 + \i/5); } \draw[thick,fill=purple] (0,1.3) rectangle (0.3,3.1); \draw[thick,fill=black] (0,3.0) rectangle (0.3,3.1); \end{scope} } \draw[thick] (.5,0.1) rectangle (1.3, 3.1); \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-0.3,1.3) -- (-0.3,3.1) node [black,midway,xshift=-10pt] {\footnotesize $k$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-0.3,0.1) -- (-0.3,1.3) node [black,midway,xshift=-10pt] {\footnotesize $n$}; \end{tikzpicture} \end{center} Here the purple block is $e^k$ and the white rectangle can be any morphism in ${\rm TL}_{k+n}$. Let us calculate the cell data for this algebra. We have already done much of this in examples from the first few chapters when ${\boldsymbol{\mu}}$ is Eve. We will now relax that assumption. We begin by formalising \cref{eg:when_boundary_sites} into a lemma for later use. \begin{lemma}\label{lem:count_dissapearing_morphisms} There is a monic diagram $\underline{a+n} \to \underline{m}$ such that none of the first $a$ sites are connected to each other iff $a-n \le m \le a+n$ and $m \equiv_2 a+n$. If so, there are \begin{equation} \binom{n}{(n + a - m)/2} - \binom{n}{(n - a - m - 2)/2} \end{equation} such diagrams. \end{lemma} Note that this is equivalent to counting the number of defect-$m$ walks over ${\boldsymbol{\mu}} = (a,1^n)$, i.e. the number of walks from $(a,a)$ to $(a+n, m)$ that do not cross the $x$-axis. Now, consider an arbitrary monic diagram $\mathbf{t}:\underline{k + n} \to \underline{m}$. Let $a_{\mathbf{t}}$ be the number of the first $k$ sites not connected to other sites in the first $k$. In terms of the associated walk, ${\boldsymbol{\rho}}$, this is exactly $a_{\mathbf{t}} = \rho_k$. 
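The closed form in \cref{lem:count_dissapearing_morphisms} can be checked against the walk description by brute force. The sketch below (ours, illustrative only; we read ``do not cross the $x$-axis'' as the walk staying non-negative) compares the two counts:

```python
# Illustrative check (not from the paper) of the walk-count formula.
from itertools import product
from math import comb

def C(n, k):
    # Binomial coefficient, extended by zero outside 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def lemma_count(a, n, m):
    """Closed form: C(n,(n+a-m)/2) - C(n,(n-a-m-2)/2), zero off-parity."""
    if m < 0 or (a + n - m) % 2:
        return 0
    return C(n, (n + a - m) // 2) - C(n, (n - a - m - 2) // 2)

def walk_count(a, n, m):
    """Walks of n steps of +-1 from height a to height m, staying >= 0."""
    total = 0
    for steps in product((1, -1), repeat=n):
        h = a
        for s in steps:
            h += s
            if h < 0:
                break
        else:
            total += h == m
    return total
```

Agreement over small ranges of $a$, $n$ and $m$ gives some confidence in the signs inside the second binomial coefficient.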
There is now a unique factorisation into monic diagrams $\mathbf{t} = (\mathbf{t}_0 \otimes {\operatorname{id}}_{n}) \circ \mathbf{t}_1$ where $\mathbf{t}_0 : \underline{k} \to \underline{a_{\mathbf{t}}}$ and $\mathbf{t}_1:\underline{a_{\mathbf{t}}+n}\to\underline{m}$ is a diagram which does not join any of the first $a$ sites. Further, any $a \in \{k, k-2,\ldots\}$ and monic diagrams $\mathbf{t}_0 : \underline{k} \to \underline a$ and $\mathbf{t}_1: \underline{a + n} \to \underline{m}$ with the condition above uniquely describe a diagram from $\underline{k+n}$ to $\underline{m}$ such that exactly $a$ of the first $k$ sites are not connected to each other. \begin{center} \begin{tikzpicture}[scale = 0.5]; \draw (0,-.3) -- (20,-0.3); \draw[fill=yellow!50!white] (.5,-.3) rectangle (10.5,1.7); \draw[fill=blue!30!white] (.5,1.7) rectangle (19.5,3); \foreach \i in {1,...,19} { \draw[very thick] (\i,-.3) -- (\i,0); } \draw[very thick] (6,0) arc (0:180:1.5); \draw[very thick] (5,0) arc (0:180:.5); \draw[very thick] (9,0) arc (0:180:.5); \draw[very thick] (11,1.7) arc (0:180:.5); \draw[very thick] (13,1.7) arc (0:180:.5); \draw[very thick] (18,1.7) arc (0:180:.5); \foreach \i in {1,2,15,16,19} { \draw[very thick] (\i,0) -- (\i,3.3); } \foreach \i in {7,10,11,12,13,14,17,18} { \draw[very thick] (\i,0) -- (\i,1.7); } \draw[very thick] (7+.8,1.7+.8) arc (90:180:.8); \draw[very thick] (14,1.7) arc (0:90:.8); \draw[very thick] (14-.8,1.7+.8) -- (7+.8, 1.7+.8); \end{tikzpicture} \end{center} In the above figure, we have factorised a $\underline{19} \to \underline{5}$ diagram with $k = 10$. This diagram has $a_{\mathbf{t}} = 4$ and $\mathbf{t}_0$ is highlighted in yellow. The diagram $\mathbf{t}_1 : \underline{13} \to \underline{5}$ is shown in blue. 
In terms of walks, we can represent this below: \begin{center} \begin{tikzpicture}[scale=0.5] \foreach \i in {0,2,4,6} { \draw[dotted, thin] (-.2,\i+.2) -- (\i+.2,-.2); \draw[dotted, thin] (20+.2,\i+.2) -- (20-\i-.2,-.2); } \foreach \i in {8,10,12,14,16,18} { \draw[dotted, thin] (\i-6-.2,6+.2) -- (\i+.2,-.2); \draw[dotted, thin] (20-\i+6+.2,6+.2) -- (20-\i-.2,-.2); } \foreach \i in {20,22,24} { \draw[dotted, thin] (\i-6-.2,6+.2) -- (20+.2,\i-20-.2); \draw[dotted, thin] (20-\i+6+.2,6+.2) -- (-.2,\i-20-.2); } \draw (-0.5,0) -- (20.5,0); \draw[very thick,blue!80!black] (6,0)--(10,4)-- (11,3) -- (12,4) -- (14,2) -- (17,5) -- (18,4)--(19,5) ; \draw[very thick,yellow!90!black] (0,0) -- (4,4) -- (6,2) -- (8,4) -- (9,3); \draw[very thick,yellow!90!black,dashed] (9,3) -- (10,4); \fill (1,1) circle (0.1); \fill (2,2) circle (0.1); \fill (3,3) circle (0.1); \fill (4,4) circle (0.1); \fill (5,3) circle (0.1); \fill (6,2) circle (0.1); \fill (7,3) circle (0.1); \fill (8,4) circle (0.1); \fill (9,3) circle (0.1); \fill (10,4) circle (0.1); \fill (11,3) circle (0.1); \fill (12,4) circle (0.1); \fill (13,3) circle (0.1); \fill (14,2) circle (0.1); \fill (15,3) circle (0.1); \fill (16,4) circle (0.1); \fill (17,5) circle (0.1); \fill (18,4) circle (0.1); \fill (19,5) circle (0.1); \end{tikzpicture} \end{center} The yellow walk corresponds to $\mathbf{t}_0$. The blue walk corresponds to $\mathbf{t}_1$. To factorise any walk into $\mathbf{t}_0$ and $\mathbf{t}_1$, we mark the point $(k, \rho_k)$ and colour the walk to the left yellow. This is $\mathbf{t}_0$. We then add a ramp from $(k-\rho_k, 0)$ to $(k,\rho_k)$ and add this to the remainder of the walk. This is $\mathbf{t}_1$. Note in the diagram above that the yellow and blue paths overlap briefly. Fix $k$ and $n$ and consider the algebra ${\rm TL}_{\boldsymbol{\mu}}$. We will now determine a valid set $M_{\boldsymbol{\mu}}(m)$. 
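Before doing so, we note that the factorisation just described can be tested numerically. The sketch below (ours, illustrative only) quotes the standard closed form for the number of monic Temperley-Lieb diagrams $\underline{N} \to \underline{m}$ and checks that it decomposes over the through-degree $a$ of $\mathbf{t}_0$ as the factorisation predicts:

```python
# Illustrative check (not from the paper) that the factorisation
# t = (t_0 (x) id_n) . t_1 is counted correctly.
from math import comb

def C(n, k):
    # Binomial coefficient, extended by zero outside 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def monic(n, m):
    """Standard count of monic diagrams n -> m: C(n,(n-m)/2) - C(n,(n-m)/2 - 1)."""
    if m < 0 or (n - m) % 2:
        return 0
    return C(n, (n - m) // 2) - C(n, (n - m) // 2 - 1)

def constrained(a, n, m):
    """Diagrams a+n -> m joining none of the first a sites to each other
    (the walk count of the earlier lemma)."""
    if m < 0 or (a + n - m) % 2:
        return 0
    return C(n, (n + a - m) // 2) - C(n, (n - a - m - 2) // 2)

def factored(k, n, m):
    """Sum over through-degrees a of (choices of t_0) x (choices of t_1)."""
    return sum(monic(k, a) * constrained(a, n, m)
               for a in range(k % 2, k + 1, 2))
```

For every small $k$, $n$ and $m$ the two sides agree, reflecting that the factorisation is a bijection.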
Recall that a strategy for this is to sort the tableaux in $M(m)$ and progressively take tableaux, skipping those that would result in a linearly dependent set in $\Delta_{\boldsymbol{\mu}}(m)$. Sort the tableaux in $M(m)$ by $\mathbf{t} \prec \mathbf{u}$ if $a_{\mathbf t} < a_{\mathbf u}$. We will consider the tableaux in this order and select some and discard others. Now, suppose that $\mathbf{t}$ factors into $\mathbf{t}_0$ and $\mathbf{t}_1$. Suppose $a'$ is the through degree of $e^k \cdot \mathbf{t}_0$. Then certainly $e^{\boldsymbol{\mu}} \cdot \mathbf{t}$ is in the span of $\{ e^{\boldsymbol{\mu}} \cdot \mathbf u \;:\; a_{\mathbf u} \le a'\}$. If $a' < a_{\mathbf{t}}$ then $e^{\boldsymbol{\mu}} \cdot \mathbf{t}$ is in the span of basis elements we have chosen before. Otherwise $a' = a_{\mathbf{t}}$ and so $e^k \cdot \mathbf{t}_0$ does not vanish in the cell module of ${\rm TL}_{(k)}$ indexed by $a_{\mathbf{t}}$. We know that this cell module is one-dimensional and so there is a ``canonical choice'' (which we will take to be ${\rm d}_{k}^{a_{\mathbf{t}}}$) for the diagram $\mathbf{t}_0$. We can use \cref{lem:count_dissapearing_morphisms} to count the number of diagrams we can choose for $\mathbf{t}_1$. We deduce \begin{proposition}\label{prop:tow_part_mu_dims} Let ${\boldsymbol{\mu}} = (k,1^n)$ be a (possibly not Eve) composition of $k + n$. Then \begin{equation} \Lambda_{\boldsymbol{\mu}} = \bigcup_{a \in \operatorname{supp}(k)}\{ a-n, a-n + 2, \ldots, a+n \}\cap \mathbb{N}_0 \end{equation} and \begin{equation} \dim \Delta_{\boldsymbol{\mu}}(m) = \sum_{a \in \operatorname{supp}(k)} \left(\binom{n}{(n+a-m)/2} - \binom{n}{(n-a-m-2)/2}\right). \end{equation} \end{proposition} Note that in the case that $k$ is Eve, $\operatorname{supp} (k) = \{k\}$ and we recover the results of~\cite{langlois_remillard_saint_aubin_2020}. \subsection{Eve Compositions} Let us suppose that $k$ is Eve. Our next step is to determine the set $(\Lambda_{\boldsymbol{\mu}})_0$.
For a morphism $x : \underline{n} \to \underline{n}$, let $x_{\operatorname{id}}$ be the coefficient of the identity diagram in $x$. We begin with a lemma regarding evaluating fractions of quantum numbers. \begin{lemma}\label{lem:unreasonable_cancellation} If $i > 0$ and $b \neq 0$ then, over $(\ell, p)$-characteristic, \begin{equation}\label{eq:unreasonable_cancellation} \frac{[a p^{(i)}]}{[b p^{(i)}]} = \frac{a}{b}. \end{equation} \end{lemma} \begin{proof} This result is well known, but we would like to highlight a proof that does not depend on the ``$q$-formulation'' of quantum numbers. If we write the left-hand side out in $\mathbb{Q}(\delta)$, we find that \begin{equation} \frac{[a\ell]_{\delta}}{[b\ell]_{\delta}} = \frac{[\ell]_{\delta}[a]_{\delta'}}{[\ell]_{\delta}[b]_{\delta'}} = \frac{[a]_{\delta'}}{[b]_{\delta'}}. \end{equation} Here $\delta'$ is a polynomial in $\delta$ which is 2 modulo $[\ell]$. Recall that $[n]_2 = n$ so that in $\mathbb{Q}(\delta)_{\mathfrak p}/\mathfrak p$ (where $\mathfrak p$ is a minimal integral polynomial of $\delta$), $[a\ell]/[b\ell] = a / b$. Now, if $\delta' = 2$ then $\ell = p$, and so \cref{eq:unreasonable_cancellation} follows by induction on $i$. \end{proof} \begin{lemma}\label{lem:tracing_jw} Let $n = a p^{(r)}$ for $0 \le a \le \ell \vee p$. Then ${\rm JW}_{n-1}$ exists over characteristic $(\ell, p)$, and $\tau^k({\rm JW}_{n-1})_{{\operatorname{id}}}$ vanishes unless $k = b p^{(r)}$ for some $0\le b<a$. \end{lemma} \begin{proof} The proof is an exercise in the evaluation principle. We know that over characteristic zero, \begin{equation}\label{eq:trace_jw_zero} \tau^k({\rm JW}_m) = \frac{[m+1]}{[m+1-k]} {\rm JW}_{m-k}. \end{equation} Hence $\tau^k({\rm JW}_{n-1})_{\operatorname{id}} = [n]/[n-k]$. However, in the morphism $\tau^k({\rm JW}_{n-1})$, the coefficient of the identity diagram is a linear combination of the coefficients of diagrams in ${\rm JW}_{n-1}$. 
This linear combination takes coefficients from $\{1, \delta, \delta^2,\ldots\}$. Thus \cref{eq:trace_jw_zero} states that a combination of the coefficients $c_i$ of ${\rm JW}_{n-1}$, each multiplied by a suitable power of $\delta$, gives $[n]/[n-k]$: \begin{equation} \sum_{i \in I} \delta^{f_i} c_i = \frac{[n]}{[n-k]}. \end{equation} Now, since $n$ is Eve, all the coefficients $c_i$ descend to an element of our base ring, as does $\delta$. We deduce that $[n]/[n-k]$ must too. Indeed, \cref{eq:unreasonable_cancellation} assures us that these fractions {\it exist} even if they are not all non-zero. Finally, let $s$ be the maximal natural number such that $p^{(s)} \mid n - k$, so we can write $[n]/[n-k] = [ap^{(r)}]/[b p^{(s)}] = a\,(p^{(r)}/p^{(s)}) / b$. Note that $s$ is, by definition, at most $r$. Clearly for this coefficient not to vanish in our ring, we must have $s=r$, giving the result. \end{proof} \begin{theorem}\label{prop:eve_seam_mu_0} Let ${\boldsymbol{\mu}} = (k, 1^n)$ be an Eve composition so $k+1 = k_r p^{(r)}$. Then, \begin{equation} (\Lambda_{\boldsymbol{\mu}})_0 = \bigcup_{\lfloor k_r - n/p^{(r)}\rfloor\le b\le k_r}bp^{(r)} -1 + (\Lambda_{\mathbf{n-k+bp^{(r)}-1}})_0. \end{equation} \end{theorem} Recall that $(\Lambda_{\mathbf{n}})_0 = \{ m \in \mathbb{Z}_n^2 \;:\; m \equiv_2 n\}$ unless $\delta=0$ and $n$ is even in which case $(\Lambda_{\mathbf{n}})_0 = \{ m \in \mathbb{Z}_n^2\setminus\{0\} \;:\; m \equiv_2 n\}$. \begin{proof} We want to find all $m \in \Lambda_{{\boldsymbol{\mu}}} = \{k -n \le m \le k + n\;:\; m \equiv_2 k + n\}$ such that there are two diagrams $\mathbf{u}, \mathbf{t} \in \Delta(m)$ such that $(\iota\mathbf{u}\cdot e^{\boldsymbol{\mu}}\cdot \mathbf{t})_{\operatorname{id}}\neq 0$. If such a pair exists, then the bilinear form is not identically zero on $\Delta_{\boldsymbol{\mu}}(m)$ and $m \in (\Lambda_{\boldsymbol{\mu}})_0$. 
If no such pair exists then, since all elements of $\Delta_{\boldsymbol{\mu}}(m)$ are linear combinations of the $e^{\boldsymbol{\mu}} \mathbf{t}$, we deduce that the bilinear form is identically zero and so $m\not\in (\Lambda_{\boldsymbol{\mu}})_0$. Now we examine the morphism $\iota\mathbf{u}\cdot e^{\boldsymbol{\mu}}\cdot \mathbf{t}$. Suppose the first $0\le a\le m$ sites of $\mathbf{t}$ are ``defects'', i.e.\ not connected to other source sites, and, without loss of generality, suppose this quantity is at least that for $\mathbf{u}$. We will be evaluating the morphism up to isotopy without expanding $e^k$. Thus it will remain in the form $\iota\mathbf{u}\cdot e^{\boldsymbol{\mu}}\cdot \mathbf{t}$ up to a factor of $\delta^r$ for some $r$. If $\delta = 0$ then this vanishes identically unless $r = 0$, so we may discard that case. Otherwise we may assume that the $a+1$-st source site of $\mathbf{t}$ is not connected to the target sites of $\iota\mathbf{u}\cdot e^{\boldsymbol{\mu}}\cdot \mathbf{t}$, even after evaluation of the morphism (but not expanding $e^k$). 
\begin{center} \begin{tikzpicture} \draw[thick,fill=purple] (-.15,1.3) rectangle (0.15,3.1); \foreach \i in {0,...,6} \draw[very thick] (-.15,\i *0.2) -- (.15,\i*.2); \draw[thick] (-.15, -.1) -- (-.85,.2) -- (-.85, 2.8) -- (-.15,3.1) -- cycle; \node at (-.5,1.5) {$\iota\mathbf{u}$}; \draw[thick] (.15, -.1) -- (.85,.2) -- (.85, 2.8) -- (.15,3.1) -- cycle; \node at (.5,1.5) {$\mathbf{t}$}; \foreach \i in {0,...,3} \draw[very thick] (.15,2.6-\i *0.2) -- (1,2.6-\i*.2); \draw[very thick] (.15,1.8) edge[out=0,in = 0] (0.15,1); \draw[very thick] (.15,1.6) edge[out=0,in = 0] (0.15,1.2); \draw[very thick] (.15,.8) edge[out=0,in = 180] (1,0.8); \draw[very thick] (.15,.6) edge[out=0,in = 180] (1,0.6); \draw[very thick] (.15,.4) edge[out=0,in = 180] (1,0.4); \draw[very thick] (.15,.2) edge[out=0,in = 0] (.15,0.0); \draw [decorate,decoration={brace,amplitude=5pt},xshift=1pt,yshift=0pt] (1,2.7) -- (1,1.9) node [black,midway,xshift=10pt] {\footnotesize $a$}; \foreach \i in {0,...,6} \draw[very thick] (-.85,2.1-\i *0.2) -- (-1,2.1-\i*.2); \end{tikzpicture} \end{center} {\bf Case $a \ge k$:} In this case, $\mathbf{t}$ and $\mathbf{u}$ can be written ${\operatorname{id}}_k \otimes \mathbf{t}'$ and ${\operatorname{id}}_k \otimes \mathbf{u}'$ respectively, for some monic diagrams $\mathbf{t}', \mathbf{u}' : \underline{n} \to \underline{m-k}$. If so, \begin{equation} (\iota\mathbf{u}\cdot e^{\boldsymbol{\mu}}\cdot \mathbf{t})_{\operatorname{id}} = \Big(({\operatorname{id}}_k\otimes\iota\mathbf{u'})\cdot (e^k\otimes{\operatorname{id}}_{m-k})\cdot ({\operatorname{id}}_k \otimes\mathbf{t'})\Big)_{\operatorname{id}} = (e^k)_{\operatorname{id}} (\iota\mathbf{u'}\cdot\mathbf{t'})_{\operatorname{id}} \end{equation} Since $(e^k)_{\operatorname{id}} = 1$, the problem reduces to asking which $m-k$ are in $(\Lambda_\textbf{n})_0$. Recall that these are all naturals of the same parity as $n$ with the exception of $0$ if $\delta = 0$. 
Hence the set of possible $m$ with such a pair of diagrams is $k + (\Lambda_\mathbf{n})_0$. {\bf Case $a < k$:} We consider which site the $a+1$-st source site of $\mathbf{t}$ is connected to after simplifying all loops and isotopies without expanding $e^k$. Recall that it is not connected to a target site. Suppose it connects to a source site, as in the below diagram. Let the first $a' < a$ sites of $\textbf{u}$ be defects. This must be a strict inequality as $a$ and $a'$ have opposite parity. Then the $a'+1$-st site of $\textbf{u}$ is not a defect so it must connect to some other point on the Jones-Wenzl idempotent. This means that either two of the upper sites or two of the lower sites are connected and the entire morphism vanishes identically. \begin{center} \begin{tikzpicture} \draw[thick,fill=purple] (-.15,1.3) rectangle (0.15,3.1); \foreach \i in {-1,...,6} \draw[very thick] (-.15,\i *0.2) -- (.15,\i*.2); \node at (.4,1.4) {\footnotesize ?}; \node at (-.6,1.2) {\footnotesize ?}; \node at (-.6,2.2) {\footnotesize ?}; \foreach \i in {0,...,1}{ \draw[very thick] (-.85,\i *0.2+0.1) -- (-1,\i*.2+0.1); \draw[very thick] (.85,\i *0.2+0.1) -- (1,\i*.2+0.1); } \draw[very thick] (-.15, 1.6) -- (-.3,1.6) edge[out=180, in=180] (-.3,1.8); \draw[very thick] (-.3,1.8) -- (-.15,1.8); \draw[thick] (-.15, -.3) -- (-.85,0) -- (-.85, 0.4) -- (-.15,0.7) -- cycle; \draw[thick] (.15, -.3) -- (.85,0) -- (.85, 0.4) -- (.15,0.7) -- cycle; \foreach \i in {-2,...,2} \draw[very thick] (.15,2.6-\i *0.2) -- (1,2.6-\i*.2); \foreach \i in {-2,...,0} \draw[very thick] (-.15,2.6-\i *0.2) -- (-1,2.6-\i*.2); \draw[very thick] (.15,2.0) --(.4,2) edge[out=0,in = 0] (0.4,.8); \draw[very thick] (0.4,.8) -- (-1,.8); \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-1,2.5) -- (-1,3.1) node [black,midway,xshift=-10pt] {\footnotesize $a'$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=1pt,yshift=0pt] (1,3.1) -- (1,2.1) node [black,midway,xshift=10pt] {\footnotesize 
$a$}; \draw [decorate,decoration={brace,amplitude=3pt},xshift=1pt,yshift=0pt] (1,0.4) -- (1,0) node [black,midway,xshift=15pt] {\footnotesize $m-a$}; \end{tikzpicture} \end{center} Hence for $\iota\mathbf{u}\cdot e^{\boldsymbol{\mu}}\cdot\mathbf{t}$ not to vanish identically, the $a+1$-st site of $\mathbf{t}$ must connect to another site of the idempotent. This is illustrated in the diagram below; otherwise two upper or two lower sites of the Jones-Wenzl idempotent would be joined, killing the morphism completely. \begin{center} \begin{tikzpicture} \draw[thick,fill=purple] (-.15,1.3) rectangle (0.15,3.1); \foreach \i in {-1,...,6} \draw[very thick] (-.15,\i *0.2) -- (.15,\i*.2); \foreach \i in {0,...,1}{ \draw[very thick] (-.85,\i *0.2+0.1) -- (-1,\i*.2+0.1); \draw[very thick] (.85,\i *0.2+0.1) -- (1,\i*.2+0.1); } \node at (.5,1.6) {\footnotesize ?}; \node at (-.5,1.6) {\footnotesize ?}; \draw[thick] (-.15, -.3) -- (-.85,0) -- (-.85, 0.4) -- (-.15,0.7) -- cycle; \draw[thick] (.15, -.3) -- (.85,0) -- (.85, 0.4) -- (.15,0.7) -- cycle; \foreach \i in {-2,...,1} \draw[very thick] (.15,2.6-\i *0.2) -- (1,2.6-\i*.2); \foreach \i in {-2,...,1} \draw[very thick] (-.15,2.6-\i *0.2) -- (-1,2.6-\i*.2); \draw[very thick] (.15,2.2) --(.5,2.2) edge[out=0,in = 0] (0.5,.8); \draw[very thick] (0.5,.8) -- (.15,.8); \draw[very thick] (-.15,2.2) --(-.5,2.2) edge[out=180,in = 180] (-0.5,.8); \draw[very thick] (-0.5,.8) -- (-.15,.8); \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-1,2.3) -- (-1,3.1) node [black,midway,xshift=-10pt] {\footnotesize $a$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=1pt,yshift=0pt] (1,3.1) -- (1,2.3) node [black,midway,xshift=10pt] {\footnotesize $a$}; \draw [decorate,decoration={brace,amplitude=3pt},xshift=1pt,yshift=0pt] (1,0.4) -- (1,0) node [black,midway,xshift=15pt] {\footnotesize $m-a$}; \draw [decorate,decoration={brace,amplitude=3pt},xshift=-1pt,yshift=0pt] (-1,0) -- (-1,0.4) node [black,midway,xshift=-15pt] 
{\footnotesize $m-a$}; \end{tikzpicture} \end{center} Further, by \cref{lem:tracing_jw}, this vanishes identically unless $a = b p^{(r)}-1$ for some $1\le b\le k_r$. But we also require that $n \ge k - a$ and so $b p^{(r)} \ge k - n+1$. Suppose, after isotopy, that there are $n-k'$ remaining central sites not connected to the Jones-Wenzl idempotent in any way. It is clear that $m \in (\Lambda_{\boldsymbol{\mu}})_0$ if $m-a \in (\Lambda_\mathbf{n-k'})_0$. But $k'$ is of the same parity as $k-a$ and since $(\Lambda_\mathbf{n-k'})_0\subseteq(\Lambda_\mathbf{n-k+a})_0$ we may assume without loss of generality that $k' = k-a$. Recall $a = b p^{(r)}-1$ so $k-a = (k_r-b)p^{(r)} \le n$. Hence $m-a \in (\Lambda_\mathbf{n-k+a})_0$ if the inner product is not zero. Thus the indices covered by this case are \begin{equation} \bigcup_{\lfloor k_r - n/p^{(r)}\rfloor\le b\le k_r}bp^{(r)} -1 + (\Lambda_\mathbf{n-k+bp^{(r)}-1})_0. \end{equation} Note that the consideration $a \ge k$ is actually a special case of this, where $b = k_r$. \end{proof} \begin{example} To elucidate the above, we present \cref{fig:seam dots}. \begin{figure}[htpb] \centering \includegraphics[width=\textwidth]{seam_types.png} \caption{The cell modules of ${\rm TL}_{\boldsymbol{\mu}}$ for ${\boldsymbol{\mu}} = (39, 1^n)$ in mixed characteristic $(2,5)$: light grey circles lie outside $\Lambda_{\boldsymbol{\mu}}$, coloured circles mark non-degenerate indices and black circles degenerate ones.}% \label{fig:seam dots} \end{figure} Here we have picked $\ell = 2$ (so $\delta = 0$) and $p = 5$ with $k= 39 = [4,0,0]_{5,2}-1$ (which is Eve). The $y$ axis indicates $n$, increasing downwards, and the $x$ axis indicates $m$. Note how $m \le n + k$ and that $n+k \equiv_2 m$, giving the offset ``chequerboard'' pattern. The light grey circles indicate elements of $\Lambda _{\mathbf{k+n}}\setminus \Lambda_{{\boldsymbol{\mu}}}$ and the larger black and coloured circles indicate elements of $\Lambda_{{\boldsymbol{\mu}}}$. Circles shaded purple indicate values of $n$ and $m$ that are non-degenerate due to the first case in the proof of \cref{prop:eve_seam_mu_0}. Those shaded in different colours come from the second case, with the different colours indicating different values of $b$. 
The remaining circles, shaded black, represent the elements of $\Lambda_{{\boldsymbol{\mu}}} \setminus (\Lambda_{{\boldsymbol{\mu}}})_0$. We can read \cref{fig:seam dots} as follows. Consider the composition ${\boldsymbol{\mu}} = (39,1^{11})$. This is represented by the row labelled twelve (the first row being $(39, 1^0)$). There are $25 = \lfloor(39 + 11 )/ 2\rfloor$ dots in this row, representing the 25 cell modules of ${\rm TL}_{\mathbf{50}}$. Of those, the smallest thirteen are light grey and thus index cell modules that do not descend to ${\rm TL}_{\boldsymbol{\mu}}$. The remaining twelve consist of five dots that are black (and so the cell modules represented by these are degenerate) and seven that are coloured. We can deduce that the algebra ${\rm TL}_{\boldsymbol{\mu}}$ has seven simple modules in mixed characteristic $(2,5)$. \end{example} \subsection{Two Part Seam Algebras}\label{ssec:two_part_seams} Now let us lift the restriction on $k$ but force $n = 1$. This algebra has been studied in~\cite{sutton_tubbenhauer_wedrich_zhu_2021} where $e^{\boldsymbol{\mu}}$ is split into a sum of idempotents. We can specialise \cref{prop:tow_part_mu_dims} at $n=1$: \begin{proposition} Let ${\boldsymbol{\mu}} = (k,1)$ be a (possibly not Eve) composition of $k + 1$. Then \begin{equation} \Lambda_{{\boldsymbol{\mu}}} = \bigcup_{a \in \operatorname{supp}(k)}\{ a-1, a+1 \} \cap \mathbb{N}_0 \end{equation} and \begin{equation} \dim \Delta_{\boldsymbol{\mu}}(m) = \begin{cases} 2 & m \in (\operatorname{supp} k -1) \cap (\operatorname{supp} k + 1)\\ 1 & \text{ else } \end{cases}. \end{equation} \end{proposition} To calculate the dimension of ${\rm TL}_{\boldsymbol{\mu}}$, it will first be useful to know when two elements of $\operatorname{supp} n$ are separated by exactly 2, as this tells us which cell modules are two-dimensional. \begin{definition}\label{def:tail_length} If $n + 1 = \pldigs{n_r, \ldots, n_0}$, we say $n$ has \emph{tail length} $t$ if $n_0 = \ell -1$ and $n_i = p-1$ for $0<i<t$. 
Equivalently, $n$ has a tail of length $t$ precisely when $p^{(t)} \mid n+2$. The maximum tail length of $n$ is denoted $t_n$. \end{definition} Note that any given $n$ can have multiple tail lengths and that every $n$ has a tail of length 0 and so $t_n \ge 0$. We let $\mother[k]{n}$ be the $k$-th (matrilineal) ancestor of $n$. \begin{lemma} Let $n+1 = \pldigs{n_r, \ldots, n_0}$. If $m, m'$ are elements of $\operatorname{supp}(n)$ such that $m' = m + 2$ then $n$ has a tail of length $t$ with $n_{t} = 1$, and $m + 1 \in \operatorname{supp}(\mother[t+1]{n})$. \end{lemma} \begin{proof} Suppose that $m' = n[S_1]$ and $m = n[S_2]$ (notation from \cref{sec:notation}). Then \begin{equation*}\label{eq:twoooo} 2 = \sum_i n_i p^{(i)} - 2\sum_{i \in S_1} n_i p^{(i)} - \sum_i n_i p^{(i)} + 2\sum_{i \in S_2} n_i p^{(i)} = 2 \sum_{i\in S_2} n_i p^{(i)} - 2 \sum_{i\in S_1} n_i p^{(i)}. \end{equation*} \begin{equation}\label{eq:diff_two_one} \Rightarrow \sum_{i\in S_2\setminus S_1} n_i p^{(i)} - \sum_{i\in S_1\setminus S_2} n_i p^{(i)} = 1. \end{equation} Note that the sets $S_1\setminus S_2$ and $S_2\setminus S_1$ are disjoint. Further, by considering \cref{eq:diff_two_one} modulo $\ell$ we see that one of the two contains $0$. If $0 \in S_2\setminus S_1$ then clearly $n_0 = 1$ and $S_1\setminus S_2 = \emptyset$ and $S_2 \setminus S_1 = \{0\}$. Here, $t = 0$ is a tail of $n$. On the other hand if $0\in S_1\setminus S_2$ then $n_0 = \ell-1$. By considering \cref{eq:diff_two_one} divided by $\ell$ and applying induction, we see that there is a tail length $t$ with $0<t<r$ such that $\{0,1,\ldots, t-1\}\subset S_1\setminus S_2$ but $ S_2\setminus S_1 = \{t\}$ and $n_{t}=1$. Either way, $S_2 \setminus S_1 = \{t\}$ and $S_1\setminus S_2 = \{0,1,\ldots, t-1\}$. If $S_0 = S_1 \cap S_2$ then \begin{equation} n[S_1] - 1 = n[S_2] + 1 = \mother[t+1]{n}[S_0] \end{equation} which lies in $\operatorname{supp}(\mother[t+1]{n})$ as desired. \end{proof} \begin{example} Let $n=513\,398$ so that $n+1 = [4,0,2,3,1,4,4,7]_{5,8}$. 
Then $n$ has maximal tail length $t_n = 3$ and, for example, $513\,000 =[4,0,2,3,1,-4,-4,-7]_{5,8}$ and $512\,998 =[4,0,2,3,-1,4,4,7]_{5,8}$ differ by 2. Now, $\mother[4]{n} = 512\,999$ so that $\mother[4]{n}+1 = [4,0,2,3,0,0,0,0]_{5,8}$. It is clear that the support of $\mother[4]{n}$ has 4 elements so we deduce that there are 4 pairs of elements of $\operatorname{supp} n$ differing by 2. \end{example} If $n_{t} \neq 1$ for any tail length $t$ of $n$, we term $n$ \emph{interior}. We see that two elements of $\operatorname{supp} n$ differ by two iff $n$ is not interior. We deduce: \begin{proposition} Let $k + 1 = \pldigs{n_r,\ldots, n_{0}}$ and $T_n = \{0 \le t \le t_n \;:\; n_t = 1\}$ be the set of all tails that start with the digit 1. Then if ${\boldsymbol{\mu}} = (k,1)$, \begin{equation}\label{eq:dim_of_ind_one} \dim {\rm TL}_{\boldsymbol{\mu}} = \begin{cases} 2^{\generation{n}+1}& n \text{ interior}\\ 2^{\generation{n}+1} + \sum_{t \in T_n}2^{\generation{n}-t}& \text{else}\\ \end{cases}. \end{equation} \end{proposition} \begin{proof} The cell modules of ${\rm TL}_{\boldsymbol{\mu}}$ are one-dimensional, unless they are indexed by a number lying exactly between two elements of $\operatorname{supp} n$ separated by 2, in which case they are two-dimensional. Hence the number of two-dimensional cell modules is $\sum_{t \in T_n}2^{\generation{n}-t-1}$. Recall that the dimension of a cellular algebra is the sum of the squares of the dimensions of its cell modules. The result follows. \end{proof} \begin{remark} The form of \cref{eq:dim_of_ind_one} is typical of what we will see going forward. Note that there is actually only one case, as $T_n = \emptyset$ if $n$ is interior. Further, in the majority of cases, $T_n$ is either empty or a singleton. Indeed, it is only if $\ell = 2$ or $p = 2$ that multiple tail lengths can have $n_t = 1$. \end{remark} Let us be more explicit. 
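As a quick check of the mixed-base arithmetic in the example above, the following sketch evaluates digit strings and tail lengths. The conventions $p^{(0)}=1$, $p^{(1)}=\ell$, $p^{(i)}=p\cdot p^{(i-1)}$, the subscript order $(p,\ell)$, and the off-by-one between signed digit values and support elements are all inferred from the examples; helper names are ours.

```python
# A sketch (conventions inferred from the examples, not stated code from the
# text): digits are listed most significant first; subscript order is (p, l).
def value(digits, p, l):
    """Evaluate [d_r, ..., d_0] with weights p^(0)=1, p^(1)=l, p^(i)=p*p^(i-1)."""
    w, weights = 1, []
    for i in range(len(digits)):
        weights.append(w)
        w = l if i == 0 else w * p     # 1, l, l*p, l*p^2, ...
    return sum(d * wt for d, wt in zip(reversed(digits), weights))

def max_tail(digits, p, l):
    """Maximal t with n_0 = l-1 and n_i = p-1 for 0 < i < t, the digits
    being those of n+1 (least significant last)."""
    rev = list(reversed(digits))
    if rev[0] != l - 1:
        return 0
    t = 1
    while t < len(rev) - 1 and rev[t] == p - 1:
        t += 1
    return t

# n = 513398 with n + 1 = [4,0,2,3,1,4,4,7] in (p, l) = (5, 8):
assert value([4, 0, 2, 3, 1, 4, 4, 7], 5, 8) == 513399
# support elements appear to be one less than the signed digit value:
assert value([4, 0, 2, 3, -1, 4, 4, 7], 5, 8) - 1 == 512998
assert max_tail([4, 0, 2, 3, 1, 4, 4, 7], 5, 8) == 3
# the Eve example k = 39 = [4,0,0]_{5,2} - 1:
assert value([4, 0, 0], 5, 2) - 1 == 39
```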
Recall that $e^{(k)}{\rm d}_k^m$ was a basis vector for the linear module $\Delta_{(k)}(m)$ for each $m \in \operatorname{supp} k$. To find an element of a cell module for $e^{\boldsymbol{\mu}}$ we have two options for each such $m$: \begin{equation} d_k^{m_+} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick,fill=purple] (0,0.1) rectangle (.3,1.8); \draw[thick,fill=black] (0,1.7) rectangle (0.3,1.8); \draw[thick] (.3,.1) -- (0.95,.3) -- (0.95,1.6) -- (.3,1.8); \node at (.65,.9) {${\rm d}_k^m$}; \foreach \i in {1,...,7} { \draw[very thick] (0,\i/4) -- (-.2,\i/4); } \foreach \i in {2,...,6} { \draw[very thick] (0.95,\i/4) -- (1.2,\i/4); } \draw[very thick] (-.2,0) -- (1.2,0); \end{tikzpicture} }} \quad\quad \quad\quad d_k^{m_-} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick,fill=purple] (0,0.1) rectangle (.3,1.8); \draw[thick,fill=black] (0,1.7) rectangle (0.3,1.8); \draw[thick] (.3,.1) -- (0.95,.3) -- (0.95,1.6) -- (.3,1.8); \node at (.65,.9) {${\rm d}_k^m$}; \foreach \i in {1,...,7} { \draw[very thick] (0,\i/4) -- (-.2,\i/4); } \foreach \i in {3,...,6} { \draw[very thick] (0.95,\i/4) -- (1.4,\i/4); } \draw[very thick] (-.2,0) -- (0.95,0) arc (-90:90:0.25); \end{tikzpicture} }} \end{equation} Here, $m_+ = m+1$ and $m_- = m-1$ are the targets of the morphisms. We will use $u_{k}^{m_\pm}$ to denote their vertical involutions. As written, these are elements of $\operatorname{Hom}_{{\rm TL}_{\boldsymbol{\mu}}}(k+1,m_\pm)$. That is, they are not necessarily monic but we are really considering their {\it images} in $\Delta_{\boldsymbol{\mu}}(m_\pm)$. The two-dimensional cell modules are those for which some $m_+$ is the same as some $m'_-$ for $m,m'\in \operatorname{supp} n$. All other cell modules are one-dimensional. \begin{corollary} If $k$ is interior, ${\rm TL}_{(k,1)}$ is commutative. \end{corollary} \begin{proof} If $k$ is interior, then all the $m_+$ and $m_-$ are distinct and hence all the cell modules are one-dimensional. 
Hence by \cref{lem:linear_comm}, ${\rm TL}_{\boldsymbol{\mu}}$ is commutative. We will later show it is the direct sum of the two algebras ${\rm TL}_{(k+1)}\oplus {\rm TL}_{(k-1)}$, each of which is commutative as we know from \cref{sec:endomorphism}. \end{proof} Recall that the goal is to enumerate $(\Lambda_{\boldsymbol{\mu}})_0$. To that end, let us evaluate the morphism $u_k^{m'_\pm}d_k^{m_\pm}$. Firstly, it is clear that \begin{equation} u_k^{m_+}d_k^{m_+} = \left({\rm u}_k^{m}e^{(k)}\otimes{\operatorname{id}}\right)\cdot\left(e^{(k)}{\rm d}_k^{m}\otimes{\operatorname{id}}\right) = {\rm u}_k^me^{(k)}{\rm d}_k^m\otimes{\operatorname{id}} \end{equation} and so $(u_k^{m_+}d_k^{m_+})_{\operatorname{id}} = \delta_{m,k}$ is inherited from the theory of ${\rm TL}_{(k)}$. Next we examine $u_k^{m_-}d_k^{m_-}$: \begin{equation}\label{eq:trace_a_clamp} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick,fill=purple] (0,0.1) rectangle (.3,1.8); \draw[thick,fill=black] (0,1.7) rectangle (0.3,1.8); \draw[thick] (.3,.1) -- (0.95,.3) -- (0.95,1.6) -- (.3,1.8); \draw[thick] (0,.1) -- (-0.65,.3) -- (-0.65,1.6) -- (0,1.8); \node at (.65,.9) {${\rm d}_k^m$}; \node at (-.35,.9) {${\rm u}_k^m$}; \foreach \i in {3,...,6} { \draw[very thick] (0.95,\i/4) -- (1.4,\i/4); \draw[very thick] (-0.65,\i/4) -- (-1.1,\i/4); } \draw[very thick] (-.65,0) -- (0.95,0) arc (-90:90:0.25); \draw[very thick] (-0.65,0.5) arc (90:270:0.25); \end{tikzpicture} }} \end{equation} If we focus on the submorphism ${\rm u}_k^m e^{(k)}{\rm d}_k^m$ we can write \begin{equation}\label{eq:clamping_an_idemp} {\rm u}_k^m e^{(k)}{\rm d}_k^m = \sum_{m' \in \operatorname{supp} k}\lambda_k^{m'} {\rm u}_k^m\overline{\rm d}_k^{m'}\overline{\rm u}_k^{m'}{\rm d}_k^{m} = \sum_{\substack{m' \in \operatorname{supp} k\\ m' \le m}}\lambda_k^{m'} {\rm u}_k^m\overline{\rm d}_k^{m'}\overline{\rm u}_k^{m'}{\rm d}_k^{m}. 
\end{equation} Clearly the value of the identity coefficient in \cref{eq:trace_a_clamp} is unchanged by summands factoring through $\underline{m-4}$ in \cref{eq:clamping_an_idemp}. Indeed, composition by morphisms (on either side) cannot raise the through-degree, and tensoring with a strand raises it by at most one. Now suppose that $m-2\not\in\operatorname{supp} k$. Then almost all the summands of \cref{eq:clamping_an_idemp} vanish up to morphisms factoring through $\underline{m-2}$ and we are left with \begin{equation} (u_k^{m_-}d_k^{m_-})_{\operatorname{id}} = (\lambda_k^{m}{\rm u}_k^{m_-}\overline{\rm d}_k^{m_-}\overline{\rm u}_k^{m_-}{\rm d}_k^{m_-})_{\operatorname{id}} = \frac{[m+1]}{[m]}(\lambda_k^m)^{-1}. \end{equation} If $\ell \nmid m$ then this is non-zero only if $m = k$ and $\ell \nmid m+1$. So let us assume $\ell\mid m$, and write $k+1 = \pldigs{n_r, \ldots, n_0}$ with $m = k[S]$. Then we see $m + 1 = \pldigs{n_r, \widetilde{n}_{r-1}, \ldots, \widetilde{n}_1,1}$. Hence, if $0\not \in S$, we see that $n_0 = 1$ and so $m - k[S\cup\{0\}] = 2$, contradicting $m-2\not\in\operatorname{supp} k$, and so $0\in S$. Let $s$ be maximal such that $\{0,1,\ldots, s\} \subseteq S$, and hence $t_k > 0$. Now, $\left(\lambda_k^{k[S]}\right)^{-1}$ has factor $$ \frac{[\pldigs{n_r,\ldots, n_{s+1}, 0,\ldots, 0}]} {[\pldigs{n_r,\ldots, n_{s+1}, -n_s,\ldots, -n_0}]} $$ and $[m+1] = [\pldigs{n_r,\ldots,n_{s+1}, -n_s,\ldots, -n_0}]$ so $\frac{[m+1]}{[m]}\left(\lambda_k^{k[S]}\right)^{-1}$ has factor $$ \frac{[\pldigs{n_r,\ldots, n_{s+1}, 0,\ldots, 0}]} {[\pldigs{n_r,\ldots, n_{s+1}, -n_s,\ldots, -n_0-1}]}. $$ This descends to zero unless $n_0 = \ell -1$ and $n_1 = \ldots = n_s = p-1$, i.e. $s < t_k$. If this occurs then the factor is $$ \frac{[\pldigs{n_r,\ldots, n_{s+1}, 0,\ldots, 0}]} {[\pldigs{n_r,\ldots, n_{s+1}-1, 0,\ldots, 0}]} = \frac{[n_r,\ldots, n_{s+1}]_p} {[n_r,\ldots, n_{s+1}-1]_p}. $$ Now, since $s+1\not\in S$, $n_{s+1} \neq 0$ and if $n_{s+1} = 1$ then $m-2\in \operatorname{supp} k$. 
Hence this factor does not vanish under specialisation. It is clear that any other factors must vanish, hence we see that it must be the only factor. That is to say $0\le s < t_{n}$, and each such $s$ describes a single $m = k[\{0,\ldots, s\}]$ for which the inner product does not vanish. We now study the case where $m-2\in \operatorname{supp}{k}$. \begin{lemma}\label{lem:id_of_trace_of_clamp} Suppose that $m, m-2 \in \operatorname{supp} k$. Then \begin{equation} \left(\tau\left({\rm u}_k^me^{(k)}{\rm d}_k^m\right)\right)_{\operatorname{id}} = \left(\lambda_k^{m-1}\right)^{-1}\left([p^{(t_k+1)} + 1] - [p^{(t_k+1)} - 1]\right) \end{equation} where $t_k$ is such that $m-1 \in \operatorname{supp} \mother{k}^{t_k+1}$. \end{lemma} \begin{proof} Suppose that $S_1 = S_0 \cup\{t_k\}$ and $S_2 = S_0 \cup\{t_k-1,\ldots, 0\}$ are such that $m = k[S_2] = k[S_1] + 2$. Then, if $k' = \mother{k}^{t_k+1}$, $m-1 = k'[S_0]$. We calculate \begin{equation}\label{eq:lobsided_clamp} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,0) -- (0.95,.2) -- (0.95,1.6) -- (0,1.8) -- cycle; \draw[thick] (0,0) -- (-0.95,.2) -- (-0.95,1.6) -- (0,1.8); \fill[thick] (-0.95,1.5) -- (-0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (.5,.9) {${\rm d}_k^{k[S_2]}$}; \node at (-.45,.9) {$\overline{\rm u}_k^{k[S_1]}$}; \foreach \i in {2,...,7} { \draw[very thick] (-0.95,\i/5) -- (-1.3,\i/5); } \foreach \i in {1,...,7} { \draw[very thick] (0.95,\i/5+0.125) -- (1.3,\i/5+0.125); } \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,0.6) -- (0.95,.8) -- (0.95,1.6) -- (0,1.8) -- cycle; \draw[thick] (0,0.6) -- (-0.95,.8) -- (-0.95,1.6) -- (0,1.8); \fill[thick] (-0.95,1.5) -- (-0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (.5,1.2) {${\rm d}_{k'}^{k'[S_0]}$}; \node at (-.45,1.2) {$\overline{\rm u}_{k'}^{k'[S_0]}$}; \draw[line width=3pt] (0.95, 1.2) -- (2.1, 1.2); \node at (1.5, 1.5) {\footnotesize $k'[S_0]$}; \draw[line width=3pt] (-0.95, 1.3) -- (-2.4, 1.3); \node at (-1.7, 1.6) 
{\footnotesize $k'[S_0]-p^{(t_k)}$}; \draw[line width=2pt] (-0.95, 1.1) arc (90:180:.4) arc (180:270:.4) -- (0,.3); \draw[thick] (0,.32) -- (2.1,.32); \draw[line width=1.5pt] (0,.29) arc (90:0:.3) arc (0:-90:.3) -- (-2.4,-.31); \node at (0.7,-.34) {\footnotesize $p^{(t_k)}-1$}; \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,0.6) -- (0.95,.8) -- (0.95,1.6) -- (0,1.8) -- cycle; \draw[thick] (0,0.6) -- (-0.95,.8) -- (-0.95,1.6) -- (0,1.8); \fill[thick] (-0.95,1.5) -- (-0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (.5,1.2) {${\rm d}_{k'}^{k'[S_0]}$}; \node at (-.45,1.2) {$\overline{\rm u}_{k'}^{k'[S_0]}$}; \draw[line width=3pt] (0.95, 1.2) -- (1.6, 1.2); \draw[line width=3pt] (-0.95, 1.3) -- (-1.6, 1.3); \draw[line width=1pt] (-0.95, 1.1) arc (90:180:.4) arc (180:270:.4) -- (0,.3)-- (1.6,.32); \end{tikzpicture} }} \end{equation} and hence \begin{align*} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick,fill=purple] (0,0.1) rectangle (.3,1.8); \draw[thick,fill=black] (0,1.7) rectangle (0.3,1.8); \draw[thick] (.3,.1) -- (1.15,.3) -- (1.15,1.6) -- (.3,1.8); \draw[thick] (0,.1) -- (-0.85,.3) -- (-0.85,1.6) -- (0,1.8); \node at (.75,.9) {\small ${\rm d}_k^{k[S_2]}$}; \node at (-.4,.9) {\small${\rm u}_k^{k[S_2]}$}; \foreach \i in {2,...,7} { \draw[very thick] (1.15,\i/5+0.07) -- (1.5,\i/5+0.07); \draw[very thick] (-0.85,\i/5+0.07) -- (-1.2,\i/5+0.07); } \end{tikzpicture} }}&=\lambda_{k}^{k[S_2]}\vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0.95,0) -- (0,.2) -- (0,1.6) -- (0.95,1.8) -- cycle; \draw[thick] (-0.95,0) -- (0,.2) -- (0,1.6) -- (-0.95,1.8) -- cycle; \fill[thick] (0,1.5) -- (0,1.6) -- (0.95,1.8) -- (0.95,1.7) -- cycle; \fill[thick] (0,1.5) -- (0,1.6) -- (-0.95,1.8) -- (-0.95,1.7) -- cycle; \node at (.5,.9) {\small$\overline{\rm u}_k^{k[S_2]}$}; \node at (-.45,.9) {\small$\overline{\rm d}_k^{k[S_2]}$}; \node at (-1.35,.9) {\small${\rm u}_k^{k[S_2]}$}; \node at (1.35,.9) {\small${\rm d}_k^{k[S_2]}$}; \draw[thick] (-1.9,0.2) -- 
(-0.95,0) -- (-0.95,1.8) -- (-1.9,1.6) -- cycle; \draw[thick] (1.9,0.2) -- (0.95,0) -- (0.95,1.8) -- (1.9,1.6) -- cycle; \foreach \i in {2,...,7} { \draw[very thick] (-1.9,\i/5) -- (-2.15,\i/5); \draw[very thick] (1.9,\i/5) -- (2.15,\i/5); } \end{tikzpicture} }}+\lambda_{k}^{k[S_1]}\vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0.95,0) -- (0,.2) -- (0,1.6) -- (0.95,1.8) -- cycle; \draw[thick] (-0.95,0) -- (0,.2) -- (0,1.6) -- (-0.95,1.8) -- cycle; \fill[thick] (0,1.5) -- (0,1.6) -- (0.95,1.8) -- (0.95,1.7) -- cycle; \fill[thick] (0,1.5) -- (0,1.6) -- (-0.95,1.8) -- (-0.95,1.7) -- cycle; \node at (.5,.9) {\small$\overline{\rm u}_k^{k[S_1]}$}; \node at (-.45,.9) {\small$\overline{\rm d}_k^{k[S_1]}$}; \node at (-1.35,.9) {\small${\rm u}_k^{k[S_2]}$}; \node at (1.35,.9) {\small${\rm d}_k^{k[S_2]}$}; \draw[thick] (-1.9,0.2) -- (-0.95,0) -- (-0.95,1.8) -- (-1.9,1.6) -- cycle; \draw[thick] (1.9,0.2) -- (0.95,0) -- (0.95,1.8) -- (1.9,1.6) -- cycle; \foreach \i in {2,...,7} { \draw[very thick] (-1.9,\i/5) -- (-2.15,\i/5); \draw[very thick] (1.9,\i/5) -- (2.15,\i/5); } \end{tikzpicture} }}\\ &= \left(\lambda_k^{m}\right)^{-1} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (-.5, 0) rectangle (.5,1.4); \node at (0,0.7) {${\rm JW}_{m}$}; \foreach \i in {1,...,6} { \draw[very thick] (-.5,\i/5) -- (-0.7,\i/5); \draw[very thick] (.5,\i/5) -- (0.7,\i/5); } \end{tikzpicture} }} + \lambda_k^{m-2} \vcenter{\hbox{ \begin{tikzpicture} \begin{scope}[shift={(1.5,0)}] \draw[thick] (0,0.6) -- (0.95,.8) -- (0.95,1.6) -- (0,1.8) -- cycle; \draw[thick] (0,0.6) -- (-0.95,.8) -- (-0.95,1.6) -- (0,1.8); \fill[thick] (-0.95,1.5) -- (-0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (.5,1.2) {${\rm d}_{k'}^{k[S_0]}$}; \node at (-.45,1.2) {$\overline{\rm u}_{k'}^{k[S_0]}$}; \draw[line width=3pt] (0.95, 1.2) -- (1.4, 1.2); \draw[line width=3pt] (-0.95, 1.3) -- (-1.5, 1.3); \draw[line width=1pt] (-0.95, 1.1) arc (90:180:.4) arc (180:270:.4) -- (0,.3)-- (1.4,.32); \end{scope} 
\begin{scope}[shift={(-1.5,0)}] \draw[thick] (0,0.6) -- (-0.95,.8) -- (-0.95,1.6) -- (0,1.8) -- cycle; \draw[thick] (0,0.6) -- (0.95,.8) -- (0.95,1.6) -- (0,1.8); \fill[thick] (0.95,1.5) -- (0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (-.5,1.2) {${\rm d}_{k'}^{k[S_0]}$}; \node at (.45,1.2) {$\overline{\rm u}_{k'}^{k[S_0]}$}; \draw[line width=3pt] (-0.95, 1.2) -- (-1.4, 1.2); \draw[line width=3pt] (0.95, 1.3) -- (1.5, 1.3); \draw[line width=1pt] (-1.4,.32) -- (0.95,.3) arc (-90:90:.4); \end{scope} \end{tikzpicture} }}\\ &= \left(\lambda_k^{m}\right)^{-1} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (-.5, 0) rectangle (.5,1.4); \node at (0,0.7) {${\rm JW}_{m}$}; \foreach \i in {1,...,6} { \draw[very thick] (-.5,\i/5) -- (-0.7,\i/5); \draw[very thick] (.5,\i/5) -- (0.7,\i/5); } \end{tikzpicture} }} + \lambda_k^{m-2}\left(\lambda_{k'}^{m-1}\right)^{-2} \vcenter{\hbox{ \begin{tikzpicture} \begin{scope}[shift={(1.2,0)}] \draw[thick] (-.6, 0) rectangle (.6,1.4); \node at (0,0.7) {${\rm JW}_{m-1}$}; \foreach \i in {1,...,6} { \draw[very thick] (.6,\i/5) -- (0.8,\i/5); } \draw[very thick] (-.6,.2) arc (90:270:.2) -- (0.8,-.2); \end{scope} \begin{scope}[shift={(-0.8,0)}] \draw[thick] (-.6, 0) rectangle (.6,1.4); \node at (0,0.7) {${\rm JW}_{m-1}$}; \foreach \i in {1,...,6} { \draw[very thick] (-.6,\i/5) -- (-0.8,\i/5); } \foreach \i in {2,...,6} { \draw[very thick] (.6,\i/5) -- (1.4,\i/5); } \draw[very thick] (-0.8,-.2) -- (.6,-.2) arc (270:360:.2) arc (0:90:.2); \end{scope} \end{tikzpicture} }}. 
\end{align*} Thus we see that \begin{equation}\label{eq:clampything} \left( \vcenter{\hbox{ \begin{tikzpicture} \draw[thick,fill=purple] (0,0.1) rectangle (.3,1.8); \draw[thick,fill=black] (0,1.7) rectangle (0.3,1.8); \draw[thick] (.3,.1) -- (1.25,.3) -- (1.25,1.6) -- (.3,1.8); \draw[thick] (0,.1) -- (-0.95,.3) -- (-0.95,1.6) -- (0,1.8); \node at (.8,.9) {\small ${\rm d}_k^{k[S_2]}$}; \node at (-.5,.9) {\small${\rm u}_k^{k[S_2]}$}; \foreach \i in {3,...,7} { \draw[very thick] (1.25,\i/5) -- (1.7,\i/5); \draw[very thick] (-0.95,\i/5) -- (-1.4,\i/5); } \draw[very thick] (-.95, .4) arc (90:270:.2) -- (1.25,0) arc (-90:90:.2); \end{tikzpicture} }} \right)_{\rm id} = \frac{[m+1]}{[m]}\left(\lambda_k^m\right)^{-1} + \lambda_k^{m-2}\left(\lambda_{k'}^{m-1}\right)^{-2} \end{equation} Now, recall that \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,.2) -- (0.95,.4) -- (0.95,1.6) -- (0,1.8) -- cycle; \fill (0.95,1.6)--(0,1.8) -- (0,1.7) -- (0.95,1.5)--cycle; \node at (.5,.9) {\small $\overline{\rm d}_k^{k[S_2]}$}; \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,0.6) -- (0.95,.8) -- (0.95,1.6) -- (0,1.8) -- cycle; \fill[thick] (0.95,1.5) -- (0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (.5,1.2) {$\overline{\rm d}_{k'}^{k'[S_0]}$}; \draw[line width=3pt] (0.95, 1.2) -- (1.2, 1.2); \draw (1.2,0) rectangle (3.4, 1.6); \node at (2.3, 0.8) {${\rm JW}_{m+p^{(t_k)}-1}$}; \draw[line width=3pt] (0, 0.3) -- (1.2, 0.3); \node at (0.6, -0.03) {\footnotesize $p^{(t_k)}$}; \draw[line width=2pt] (0, -.4) -- (3.4, -.4) arc(-90:90:.4); \node at (2.2, -0.8) {\footnotesize $p^{(t_k)}-1$}; \draw[line width=3pt] (3.4, 1.1) -- (4.0, 1.1); \end{tikzpicture} }} \end{equation} and so $\lambda_k^m = \lambda_{k'}^{m-1}\frac{[m+1]}{[m+p^{(t_k+1)}]}$. 
Similarly, \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,.2) -- (0.95,.4) -- (0.95,1.6) -- (0,1.8) -- cycle; \fill (0.95,1.6)--(0,1.8) -- (0,1.7) -- (0.95,1.5)--cycle; \node at (.5,.9) {\small $\overline{\rm d}_k^{k[S_1]}$}; \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,0.6) -- (0.95,.8) -- (0.95,1.6) -- (0,1.8) -- cycle; \fill[thick] (0.95,1.5) -- (0.95,1.6) -- (0,1.8) -- (0,1.7) -- cycle; \node at (.5,1.2) {$\overline{\rm d}_{k'}^{k'[S_0]}$}; \draw[line width=3pt] (0.95, 1.2) -- (1.7, 1.2); \draw (1.7,-.8) rectangle (3.0, 1.6); \node at (2.35, 0.4) {${\rm JW}_{m-2}$}; \draw[line width=3pt] (0, 0.3) -- (0.95, 0.3) arc (-90:90:0.33); \node at (0.6, -0.03) {\footnotesize $p^{(t_k)}$}; \draw[line width=2pt] (0, -.4) -- (1.7, -.4); \node at (0.6, -0.8) {\footnotesize $p^{(t_k)}-1$}; \draw[line width=3pt] (3.0, 0.4) -- (3.3, 0.4); \end{tikzpicture} }} \end{equation} giving $\lambda_{k}^{m-2} = \lambda_{k'}^{m-1}\frac{[m-p^{(t_k+1)}]}{[m]}$ so we can simplify \cref{eq:clampything} to \begin{equation} \left(\lambda_{k'}^{m-1}\right)^{-1}\left(\frac{[m-p^{(t_k+1)}] + [m + p^{(t_k+1)}]}{[m]}\right) = \left(\lambda_{k'}^{m-1}\right)^{-1}\left([p^{(t_k+1)}+1] - [p^{(t_k+1)}-1]\right) \end{equation} as desired. \end{proof} As an application, consider the cell module $\Delta_{\boldsymbol{\mu}}(m-1)$ of ${\rm TL}_{{\boldsymbol{\mu}}}$ spanned by $\{d_k^{(m-2)_+}, d_k^{m_-}\}$. Then we can use \cref{lem:id_of_trace_of_clamp} to see that $\langle d_k^{m_-}, d_k^{m_-}\rangle$ vanishes unless $k' = m-1$ or $\ell = 2$. We already know that $\langle d_k^{(m-2)_+}, d_k^{(m-2)_+}\rangle$ is given by $(\lambda_{k}^{m-2})^{-1}$ which always vanishes. 
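The simplification at the end of the proof above rests on the quantum-integer identity $([m-a] + [m+a])/[m] = [a+1] - [a-1]$, applied with $a = p^{(t_k+1)}$. A minimal numerical sanity check at a generic complex value of $q$, with the convention $[n] = (q^n - q^{-n})/(q - q^{-1})$ (the helper name \texttt{qint} is ours, not notation from the text):

```python
import cmath

def qint(n, q):
    """Quantum integer [n] = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q ** n - q ** (-n)) / (q - 1 / q)

# Check ([m-a] + [m+a]) / [m] = [a+1] - [a-1] at a generic point q
# on the unit circle (chosen so that no [m] below vanishes).
q = cmath.exp(0.3j)
for m in range(2, 10):
    for a in range(1, m):
        lhs = (qint(m - a, q) + qint(m + a, q)) / qint(m, q)
        rhs = qint(a + 1, q) - qint(a - 1, q)
        assert abs(lhs - rhs) < 1e-9
```

Both sides reduce to $q^a + q^{-a}$, which is why the identity holds independently of $m$.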
Finally we can use \cref{eq:lobsided_clamp} to show that \begin{equation} {\rm u}_k^{m-2} e^{(k)}{\rm d}_k^m = \sum_{m' \in \operatorname{supp} k}\lambda_k^{m'} {\rm u}_k^m\overline{\rm d}_k^{m'}\overline{\rm u}_k^{m'}{\rm d}_k^{m} = \sum_{\substack{m' \in \operatorname{supp} k\\ m' \le m-2}}\lambda_k^{m'} {\rm u}_k^m\overline{\rm d}_k^{m'}\overline{\rm u}_k^{m'}{\rm d}_k^{m}, \end{equation} which, taken mod ${<}\,m-2$, gives $\lambda_k^{m-2}{\rm u}_k^m\overline{\rm d}_k^{m-2}\overline{\rm u}_k^{m-2}{\rm d}_k^{m}$. It is then clear that after adding the rest of the diagram, we get $\langle d_k^{(m-2)_+}, d_k^{m_-}\rangle = \lambda_{k'}^{m-1} = (\lambda_{k'}^{m-1})^{-1}$. Thus the two-dimensional cell modules are completely degenerate unless $m-1 = k'$, in which case they are simple. We have deduced: \begin{theorem}\label{prop:main_two_part} Suppose that ${\boldsymbol{\mu}} = (k,1)\vdash n = \pldigs{n_r, \ldots, n_0}$. Let $T = \{0\le t \le t_k \;:\; n_t = 1\}$ and $S = \{0\le t < t_k \;:\; n_t >1\}$. Let \begin{align*} \Lambda_{1\rm{a}} &= \begin{cases} \left\{k+1, k-1\right\} & \ell \nmid k \text{ and } \ell \nmid k+1\\ \left\{k+1\right\} & \text{else} \end{cases}\\ \Lambda_{1\rm{b}} &= \left\{\pldigs{n_r, \ldots, n_{t+1}-1, 0,\ldots, 0}-1 \;:\; t \in S\right\}\\ \Lambda_2 &= \left\{\pldigs{n_r, \ldots, n_{t+1},0,\ldots,0}-1 \;:\; t \in T\right\}. \end{align*} Then $\Lambda_0 = \Lambda_{1\rm a}\cup \Lambda_{1\rm b} \cup \Lambda_2$; the sets $\Lambda_{1\rm a}$ and $\Lambda_{1 \rm b}$ index the simple modules of dimension 1, and $\Lambda_2$ those of dimension 2. \end{theorem} This should be compared to its counterpart, \cite[Proposition 4.7]{sutton_tubbenhauer_wedrich_zhu_2021}. \begin{example} We present the case of $\ell = 5$ and $p = 3$. The diagram below differs from previous ones in that it is not a Bratteli diagram. Each row represents the cell modules of ${\rm TL}_{(k,1)}$. Grey dots are cell modules of ${\rm TL}_{k+1}$ that are not cell modules of ${\rm TL}_{(k,1)}$.
Orange dots are one-dimensional cell modules and red dots are two-dimensional ones. Those dots in a black outline are non-degenerate. \begin{center} \includegraphics[width=0.7\textwidth]{two_part_seams.png} \end{center} \end{example} \begin{example} On the other hand, if $\ell = p = 2$ we obtain the following diagram. \begin{center} \includegraphics[width=0.7\textwidth]{two_part_seams_2.png} \end{center} Note that all irreducible modules are two-dimensional (apart from the trivial one). If $k$ is odd then ${\rm TL}_{{\boldsymbol{\mu}}}$ has a single irreducible module and all cell modules are one-dimensional. Indeed, when $\ell = 2$, then in \cref{prop:main_two_part}, $\Lambda_{1\rm a}$ is always exactly $\{k + 1\}$. Further, $T$ is non-empty iff $k$ is even. If $p = 2$ also, then $S = \emptyset$, showing the above claim. \end{example} \subsection{Two Part Seam Algebras as Inductions} Recall that $P_{\mathbf{n}}(m)$ has a $\Delta_{\mathbf{n}}$-filtration. Moreover, by applying the truncation functor, we see that it has the same factors as the projective cover of $L_{\mathbf{m}}(m)$. That is to say, $\Delta_{\mathbf{m}}(r)$ appears in a filtration of $P_{\mathbf{m}}(m)$ iff $\Delta_{\mathbf{n}}(r)$ appears in a filtration of $P_{\mathbf{n}}(m)$. This holds as $r \le m$ for all $\Delta_\mathbf{n}(r)$ appearing in $P_\mathbf{n}(m)$. Since adding a strand corresponds to inducing from $n$ to $n+1$, we see that \begin{equation} {\rm TL}_{{\boldsymbol{\mu}}}= e^{\boldsymbol{\mu}} \cdot {\rm TL}_\mathbf{k+1} \cdot e^{\boldsymbol{\mu}} \simeq \operatorname{End}_{{\rm TL}_\mathbf{k+1}}({\rm TL}_\mathbf{k+1}\cdot e^{\boldsymbol{\mu}}) \simeq \operatorname{End}_{{\rm TL}_\mathbf{k+1}}(P_\mathbf{k}(k)\mathord\uparrow). \end{equation} From the above observation and the known composition factors of $P_\mathbf{k}(k)$, we see that if $k$ has no tail and $n_0 \neq 1$, $P_\mathbf{k}(k)\mathord\uparrow \simeq P_\mathbf{k+1}(k+1) \oplus P_\mathbf{k+1}(k-1)$.
Indeed, we know it must be a direct sum of projective indecomposables and has a $\Delta_{\mathbf{k+1}}$-filtration. Further, inducing $\Delta_{\mathbf k}(m)$ gives a module with a $\Delta$-filtration with factors $\Delta_\mathbf{k+1}(m+1)$ and $\Delta_\mathbf{k+1}(m-1)$. In this case, these lie in different blocks and so \begin{align} {\rm TL}_{{\boldsymbol{\mu}}}\simeq \operatorname{End}_{{\rm TL}_\mathbf{k+1}}(P_\mathbf{k}(k)\mathord\uparrow) &\simeq \operatorname{End}_{{\rm TL}_\mathbf{k+1}}(P_\mathbf{k+1}(k-1)) \oplus \operatorname{End}_{{\rm TL}_\mathbf{k+1}}(P_\mathbf{k+1}(k+1))\\ &\simeq \operatorname{End}_{{\rm TL}_\mathbf{k-1}}(P_\mathbf{k-1}(k-1)) \oplus {\rm TL}_{(k+1)}\nonumber\\ &\simeq {\rm TL}_{(k-1)} \oplus {\rm TL}_{(k+1)}.\nonumber \end{align} This reconfirms that ${\rm TL}_{{\boldsymbol{\mu}}}$ is commutative and lays bare the representation theory in terms of algebras we understand from \cref{sec:endomorphism}. The second equivalence deserves some explanation. While the second summand is plain, the first asserts that $\operatorname{End}_{{\rm TL}_\mathbf{k+1}}(P_\mathbf{k+1}(k-1)) \simeq \operatorname{End}_{{\rm TL}_\mathbf{k-1}}(P_\mathbf{k-1}(k-1))$. This holds because the modules $P_{\mathbf{k+1}}(k-1)$ and $P_\mathbf{k-1}(k-1)$ each have standard filtrations by modules indexed by the same cell indices. Each of these modules has simples indexed by the same indices, and the only additional factors are those of the trivial module, which is distinct from the head of $P_\mathbf{k+1}(k-1)$. A similar character calculation shows that if $k \equiv_\ell - 1$ then $P_\mathbf{k}(k)\mathord\uparrow \simeq P_\mathbf{k+1}(k+1)$. Indeed, certainly $P_\mathbf{k+1}(k+1)$ is a summand of $P_\mathbf{k}(k)\mathord\uparrow$ and by looking at the values of $\operatorname{supp}({\ell})$ it must have the same standard factors.
Thus \begin{align} {\rm TL}_{{\boldsymbol{\mu}}}\simeq \operatorname{End}_{{\rm TL}_{k+1}}(P_k(k)\mathord\uparrow) &\simeq \operatorname{End}_{{\rm TL}_{k+1}}(P_{k+1}(k+1))\\ &\simeq {\rm TL}_{(k+1)}.\nonumber \end{align} Indeed, character arguments as above are used in~\cite{sutton_tubbenhauer_wedrich_zhu_2021} to deduce results similar to \cref{prop:main_two_part}. \section{Tensor with Small Idempotents}\label{sec:small_tensor} Here we consider the composition ${\boldsymbol{\mu}} = (n, k)$ where $1 \le k \le \ell$. This corresponds to tensoring the $n$-th Jones-Wenzl idempotent by the $k$-th, where $k$ lies at or below the first multiple of $\ell$. We will focus on the case where $\ell \mid n+1$. \subsection{Eve Idempotent} Throughout this subsection, suppose $1\le k \le \ell-1$ in the algebra ${\rm TL}_{\boldsymbol{\mu}}$ for ${\boldsymbol{\mu}} = (n, k)$. As such, $k$ is Eve. \begin{proposition} If ${\boldsymbol{\mu}} = (n, k)$ with $1 \le k \le \ell -1$ and $\ell \mid n + 1$, then ${\rm TL}_{\boldsymbol{\mu}}$ is commutative and \begin{equation} \Lambda_{\boldsymbol{\mu}} = \left(\operatorname{supp} n + \{k, k-2, \ldots, -k\}\right) \cap \mathbb{N}_0. \end{equation} Further, \begin{equation} (\Lambda_{\boldsymbol{\mu}})_0 = \{n-k-2i\mid 0\leq i\leq \lfloor k/2\rfloor\}. \end{equation} In particular, ${\rm TL}_{\boldsymbol{\mu}}$ has $\lfloor \frac{k}{2}\rfloor+1$ irreducible modules. \end{proposition} \begin{proof} By performing a similar factorisation to that in \cref{sec:seam} we see that a basis diagram of any cell module is given by the composition of two diagrams $\mathbf{t} = (\mathbf{t}_0 \otimes {\operatorname{id}})\cdot \mathbf{t}_1$ where $\mathbf{t}_0 : \underline{n} \to \underline{m_1}$ can be taken to be ${\rm d}_n^{m_1}$ and $\mathbf{t}_1 : \underline{m_1 + k} \to \underline{m}$ links no two of the last $k$ sites. It is clear then that $\Lambda_{\boldsymbol{\mu}}$ has the desired form.
Now, since $\ell \mid n+1$, all elements of $\operatorname{supp} n$ are at least $2\ell$ apart and hence all the cell modules are one-dimensional. Hence ${\rm TL}_{\boldsymbol{\mu}}$ is commutative. Next we compute the Gram form on these cell modules. Since the modules are one dimensional, this is a simple coefficient. Observe that the chosen basis for each module is spanned by a morphism of the form \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,1.8); \fill (-.3,1.65) rectangle (0,1.8); \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,-.6); \draw[thick] (0,.1) -- (0.6,.3) -- (0.6,1.6) -- (0,1.8) --cycle; \node at (.32,.9) {${\rm d}_n^{m_1}$}; \draw[line width=2pt] (0,-.2) -- (0.6,-.2) arc (-90:90:0.35); \draw[line width=2pt] (0.6,1) -- (1.4, 1); \draw[line width=2pt] (0,-.4) -- (1.4, -.4); \draw[line width=2pt] (0,.95) -- (0,.95); \node at (1.1, 0.1) {\tiny$t$}; \node at (1.0, 1.2) {\tiny$m_1-t$}; \node at (0.7, -.57) {\tiny$k-t$}; \end{tikzpicture} }} \end{equation} and so the Gram constant on $\Delta_{\boldsymbol{\mu}}(m_1 + k -2t)$ can be calculated as the coefficient of the identity diagram in \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (-0.15,.1) -- (-0.75,.3) -- (-0.75,1.6) -- (-0.15,1.8) --cycle; \draw[thick] (0.15,.1) -- (0.75,.3) -- (0.75,1.6) -- (0.15,1.8) --cycle; \node at (.47,.9) {${\rm d}_n^{m_1}$}; \node at (-.45,.9) {${\rm u}_n^{m_1}$}; \draw[line width=2pt] (0.15,-.2) -- (0.75,-.2) arc (-90:90:0.35); \draw[line width=2pt] (-0.15,-.2) -- (-0.75,-.2); \draw[line width=2pt] (-.75, .5) arc (90:270:0.35); \draw[line width=2pt] (0.75,1) -- (1.55, 1); \draw[line width=2pt] (0.15,-.6) -- (1.55, -.6); \draw[line width=2pt] (-0.75,1) -- (-1.55, 1); \draw[line width=2pt] (-0.15,-.6) -- (-1.55, -.6); \node at (1.25, 0.1) {\tiny$t$}; \node at (1.15, 1.2) {\tiny$m_1-t$}; \node at (0.85, -.77) {\tiny$k-t$}; \node at (-1.25, 0.1) {\tiny$t$}; \node at (-1.15, 1.2) {\tiny$m_1-t$}; \node 
at (-0.85, -.77) {\tiny$k-t$}; \draw[thick, fill=purple] (-.15,.1) rectangle (.15,1.8); \fill (-.15,1.65) rectangle (.15,1.8); \draw[thick, fill=purple] (-.15,0) rectangle (.15,-.8); \end{tikzpicture} }} = \sum_{\substack{m' \in \operatorname{supp} n\\m' \le m_1}} \lambda_n^{m'}\vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,.3) -- (-0.6,.1) -- (-0.6,1.8) -- (0,1.6) --cycle; \draw[thick] (0,.3) -- (0.6,.1) -- (0.6,1.8) -- (0,1.6) --cycle; \fill (0,1.45) -- (-0.6,1.65) -- (-0.6,1.8) -- (0,1.6) --cycle; \fill (0,1.45) -- (0.6,1.65) -- (0.6,1.8) -- (0,1.6) --cycle; \node at (.32,.9) {$\overline{\rm u}_n^{m'}$}; \node at (-.32,.9) {$\overline{\rm d}_n^{m'}$}; \draw[thick] (-1.2,.3) -- (-0.6,.1) -- (-0.6,1.8) -- (-1.2,1.6) --cycle; \draw[thick] (1.2,.3) -- (0.6,.1) -- (0.6,1.8) -- (1.2,1.6) --cycle; \node at (.92,.9) {${\rm d}_n^{m_1}$}; \node at (-.92,.9) {${\rm u}_n^{m_1}$}; \draw[line width=2pt] (0.15,-.2) -- (1.2,-.2) arc (-90:90:0.35); \draw[line width=2pt] (-0.15,-.2) -- (-1.2,-.2); \draw[line width=2pt] (-1.2, .5) arc (90:270:0.35); \draw[line width=2pt] (1.2,1) -- (2.0, 1); \draw[line width=2pt] (0.15,-.6) -- (2.0, -.6); \draw[line width=2pt] (-1.2,1) -- (-2.0, 1); \draw[line width=2pt] (-0.15,-.6) -- (-2.0, -.6); \node at (1.7, 0.1) {\tiny$t$}; \node at (1.6, 1.2) {\tiny$m_1-t$}; \node at (1.3, -.77) {\tiny$k-t$}; \node at (-1.7, 0.1) {\tiny$t$}; \node at (-1.6, 1.2) {\tiny$m_1-t$}; \node at (-1.3, -.77) {\tiny$k-t$}; \draw[thick, fill=purple] (-.15,0) rectangle (.15,-.8); \end{tikzpicture} }} \end{equation} Now, if $m' < m_1$, then $m' \le m_1 - 2\ell <m_1 + k - 2t$. 
Hence the only term that contributes is \begin{equation}\label{eq:inner_product_easy_small_diag} \lambda_n^{m_1}\vcenter{\hbox{ \begin{tikzpicture} \draw[thick] (0,.3) -- (-0.6,.1) -- (-0.6,1.8) -- (0,1.6) --cycle; \draw[thick] (0,.3) -- (0.6,.1) -- (0.6,1.8) -- (0,1.6) --cycle; \fill (0,1.45) -- (-0.6,1.65) -- (-0.6,1.8) -- (0,1.6) --cycle; \fill (0,1.45) -- (0.6,1.65) -- (0.6,1.8) -- (0,1.6) --cycle; \node at (.32,.9) {$\overline{\rm u}_n^{m_1}$}; \node at (-.32,.9) {$\overline{\rm d}_n^{m_1}$}; \draw[thick] (-1.2,.3) -- (-0.6,.1) -- (-0.6,1.8) -- (-1.2,1.6) --cycle; \draw[thick] (1.2,.3) -- (0.6,.1) -- (0.6,1.8) -- (1.2,1.6) --cycle; \node at (.92,.9) {${\rm d}_n^{m_1}$}; \node at (-.92,.9) {${\rm u}_n^{m_1}$}; \draw[line width=2pt] (0.15,-.2) -- (1.2,-.2) arc (-90:90:0.35); \draw[line width=2pt] (-0.15,-.2) -- (-1.2,-.2); \draw[line width=2pt] (-1.2, .5) arc (90:270:0.35); \draw[line width=2pt] (1.2,1) -- (2.0, 1); \draw[line width=2pt] (0.15,-.6) -- (2.0, -.6); \draw[line width=2pt] (-1.2,1) -- (-2.0, 1); \draw[line width=2pt] (-0.15,-.6) -- (-2.0, -.6); \node at (1.7, 0.1) {\tiny$t$}; \node at (1.6, 1.2) {\tiny$m_1-t$}; \node at (1.3, -.77) {\tiny$k-t$}; \node at (-1.7, 0.1) {\tiny$t$}; \node at (-1.6, 1.2) {\tiny$m_1-t$}; \node at (-1.3, -.77) {\tiny$k-t$}; \draw[thick, fill=purple] (-.15,0) rectangle (.15,-.8); \end{tikzpicture} }} = \left(\lambda_n^{m_1}\right)^{-1}\vcenter{\hbox{ \begin{tikzpicture} \draw[line width=2pt] (0.15,-.2) arc (-90:90:0.35); \draw[line width=2pt] (-0.15, .5) arc (90:270:0.35); \draw[line width=2pt] (0.15,1) -- (0.95, 1); \draw[line width=2pt] (0.15,-.6) -- (0.95, -.6); \draw[line width=2pt] (-0.95,1) -- (-0.15, 1); \draw[line width=2pt] (-0.15,-.6) -- (-0.95, -.6); \node at (0.65, 0.1) {\tiny$t$}; \node at (0.65, 1.2) {\tiny$m_1-t$}; \node at (0.55, -.77) {\tiny$k-t$}; \node at (-0.65, 0.1) {\tiny$t$}; \node at (-0.65, 1.2) {\tiny$m_1-t$}; \node at (-0.55, -.77) {\tiny$k-t$}; \draw[thick, fill=purple] (-.15,0.1) rectangle 
(.15,1.6); \draw[thick, fill=purple] (-.15,0) rectangle (.15,-.8); \end{tikzpicture} }} \end{equation} For any morphism $f : \underline{m} \to \underline{m}$, the coefficient of the identity diagram is given by the full trace, $\tau^m(f \cdot {\rm JW}_m)/[m+1]$. Hence, the identity coefficient of the morphism in \cref{eq:inner_product_easy_small_diag} is given by \begin{equation}\label{eq:inner_product_easy_small} \left(\lambda_n^{m_1}\right)^{-1} \frac{\Theta(m_1, k, m_1 + k - 2t)}{[m_1+k-2t+1]} = \left(\lambda_n^{m_1}\right)^{-1} (-1)^{m_1+k-t} \frac{\genfrac{[}{]}{0pt}{}{m_1+k-t+1}{t}}{\genfrac{[}{]}{0pt}{}{m_1}{t}\genfrac{[}{]}{0pt}{}{k}{t}}. \end{equation} Now, since $t \le k < \ell$, the denominator of \cref{eq:inner_product_easy_small} does not vanish. Thus the expression is non-zero precisely when $(\lambda_n^{m_1})^{-1}$ is non-zero (i.e.\ $m_1 = n$) and $m_1 + k -t + 1 \triangleright t$. Since $\ell \mid m_1 + 1$, this is equivalent to $k -t \ge t$. Thus $t \in \{0,1, \ldots, \lfloor \frac{k}2\rfloor\}$ as desired. \end{proof} \begin{example} A critical example is when $k = \ell - 1$. Here we present the data for $p = 2$ and $\ell = 6$. Note we are restricting to cases where $6 \mid n + 1$. All cell modules for ${\rm TL}_{(n,5)}$ are one-dimensional (orange). Grey dots are the cell indices of cell modules of ${\rm TL}_\mathbf{n+5}$ and circled dots indicate elements of $(\Lambda_{(n,5)})_0$. \begin{center} \includegraphics[width=0.8\textwidth]{two_part_wall} \end{center} \end{example} \subsection{First non-Eve Idempotent} We now investigate the algebra ${\rm TL}_{(n,\ell)}$ where $\ell \mid n + 1$. We are motivated by the self-similarity of the representation theory of ${\rm TL}_n$ under the $\ell$-dilation $n \mapsto (n+1)/\ell$. \begin{proposition} Suppose that ${\boldsymbol{\mu}} = (n, \ell)$ with $\ell \mid n+1$. Write $n+1 = \pldigs{n_r, \ldots, n_1,0}$ and let $t_k$ be the largest index such that $n_1 = n_2 = \cdots = n_{t_k} = p-1$.
Set $T = \{1 \le t \le t_k\;:\; n_t = 1\}$ and $S = \{0 \le t < t_k \;:\; n_t > 1\}$. Unless $p=2$, $T$ has at most one element. Then ${\rm TL}_{\boldsymbol{\mu}}$ has \begin{equation} \Lambda_{\boldsymbol{\mu}} = \left( \operatorname{supp} n + \left\{ \ell, \ell-2, \ldots, -\ell \right\}\right) \cap \mathbb{N}_0. \end{equation} Write \begin{align*} \Lambda_{2\rm x} &= \left(\operatorname{supp} n + \{\ell-2, \ell-4, \ldots, -\ell+2\}\right)\cap \mathbb{N}_0\\ \Lambda_{2\rm y} &= \bigcup_{t \in T}\operatorname{supp} \mother[t]{n}\\ \Lambda_1 &= \Big(\operatorname{supp} n + \{\ell, -\ell\}\Big) \setminus \Lambda_{2\rm y}. \end{align*} Then $\Lambda_1$ indexes cell modules of dimension 1, and $\Lambda_{2\rm x}$ and $\Lambda_{2 \rm y}$ index those of dimension 2. If \begin{align} \Lambda_{1\rm a} &= \{n + \ell\}\nonumber\\ \Lambda_{1\rm b} &= \left\{\pldigs{n_r, \ldots, n_{t+1} - 1,0,\ldots, 0} - 1 \;:\; t \in S\right\}\nonumber\\ \Lambda_{2\rm a} &= \left\{n+\ell-2t \;:\; 1\le t \le \lfloor \ell / 2\rfloor\right\}\nonumber\\ \Lambda_{2\rm b} &= \left\{\pldigs{n_r, \ldots, n_{t+1},0,\ldots, 0} - 1 \;:\; t \in T\right\} \end{align} then $\Lambda_0 = \Lambda_{1\rm a} \cup\Lambda_{1\rm b} \cup\Lambda_{2\rm a} \cup\Lambda_{2\rm b}$ and $\Lambda_{1\rm a} \cup\Lambda_{1\rm b}$ index the irreducible modules of dimension 1 and $\Lambda_{2\rm a} \cup\Lambda_{2\rm b}$ those of dimension 2. \end{proposition} \begin{proof} Note that $e^{(\ell)} = {\rm JW}_{\ell - 1} \otimes {\operatorname{id}}$ and ${\rm TL}_{(\ell)}$ has two cell modules, indexed by $\{\ell, \ell-2\}$. Hence we see that \begin{equation} \Lambda_{\boldsymbol{\mu}} = \left( \operatorname{supp} n + \{ \ell, \ell-2, \ldots, -\ell \}\right) \cap \mathbb{N}_0. \end{equation} Suppose that $t \in \{1,\ldots, \ell-1\}$.
Then if $m_1 \in \operatorname{supp} n$, the module $\Delta_{\boldsymbol{\mu}}(m_1 + \ell - 2t)$ is spanned by \begin{align} x_1^{m_1+\ell-2t} &= \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,1.8); \fill (-.3,1.65) rectangle (0,1.8); \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,-.6); \fill (-.3,-0.05) rectangle (0,0.1); \draw[thick] (0,.1) -- (0.6,.3) -- (0.6,1.6) -- (0,1.8) --cycle; \node at (.32,.9) {${\rm d}_n^{m_1}$}; \draw[line width=2pt] (0,-.2) -- (0.6,-.2) arc (-90:90:0.35); \draw[line width=2pt] (0.6,1) -- (1.4, 1); \draw[line width=2pt] (0,-.4) -- (1.4, -.4); \draw[line width=2pt] (0,.95) -- (0,.95); \node at (1.1, 0.1) {\tiny$t$}; \node at (1.0, 1.2) {\tiny$m_1-t$}; \node at (0.7, -.57) {\tiny$\ell-t$}; \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,1.8); \fill (-.3,1.65) rectangle (0,1.8); \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,-.45); \draw[thick] (0,.1) -- (0.6,.3) -- (0.6,1.6) -- (0,1.8) --cycle; \node at (.32,.9) {${\rm d}_n^{m_1}$}; \draw[line width=2pt] (0,-.05) -- (0.6,-.05) arc (-90:90:0.3); \draw[line width=2pt] (0.6,1) -- (1.4, 1); \draw[line width=2pt] (0,-.2) -- (1.4, -.2); \draw[line width=2pt] (0,.95) -- (0,.95); \draw[very thick] (-.3,-.55) -- (1.4,-.55); \node at (1.15, 0.2) {\tiny$t$}; \node at (1.1, 1.2) {\tiny$m_1-t$}; \node at (0.9, -.37) {\tiny$\ell-t-1$}; \end{tikzpicture} }} \\ x_2^{m_1+\ell-2t} &= \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,1.8); \fill (-.3,1.65) rectangle (0,1.8); \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,-.6); \fill (-.3,-0.05) rectangle (0,0.1); \draw[thick] (0,.1) -- (0.6,.3) -- (0.6,1.6) -- (0,1.8) --cycle; \node at (.32,.9) {${\rm d}_n^{m_1}$}; \draw[line width=2pt] (0,-.05) -- (0.6,-.05) arc (-90:90:0.3); \draw[line width=2pt] (0.6,1) -- (1.4, 1); \draw[line width=2pt] (0,-.2) -- (1.4, -.2); \draw[line width=2pt] (0,.95) -- (0,.95); 
\draw[very thick] (0,-.5) -- (0.1,-.5) arc(-90:90:0.06) -- (0,-.38); \node at (1.25, 0.2) {\tiny$t-1$}; \node at (1.2, 1.2) {\tiny$m_1-t + 1$}; \node at (0.9, -.37) {\tiny$\ell-t-1$}; \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,1.8); \fill (-.3,1.65) rectangle (0,1.8); \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,-.45); \draw[thick] (0,.1) -- (0.6,.3) -- (0.6,1.6) -- (0,1.8) --cycle; \node at (.32,.9) {${\rm d}_n^{m_1}$}; \draw[line width=2pt] (0,-.05) -- (0.6,-.05) arc (-90:90:0.3); \draw[line width=2pt] (0.6,1) -- (1.4, 1); \draw[line width=2pt] (0,-.2) -- (1.4, -.2); \draw[line width=2pt] (0,.95) -- (0,.95); \draw[very thick] (-.3,-.55) -- (0.1,-.55) arc(-90:90:0.08) -- (0,-.39); \node at (1.25, 0.2) {\tiny$t-1$}; \node at (1.2, 1.2) {\tiny$m_1-t + 1$}; \node at (0.9, -.37) {\tiny$\ell-t-1$}; \end{tikzpicture} }} \end{align} It is clear that $\langle x_1^m, x_1^m \rangle = G_{(n,\ell-1)}^{m-1}$ which vanishes unless $t \in \{ 1, \ldots, \lfloor \frac{\ell}2\rfloor \}$. On the other hand, $\langle x_2^m, x_2^m \rangle = [\ell]/[\ell-1] G_{(n, \ell-2)}^{m - 2}$ which always vanishes. To compute cross terms, we must use the following well known form of the Jones-Wenzl idempotents (see~\cite[Theorem 4.5]{frenkel_khovanov_1997}). 
\begin{equation}\label{eq:single_clasp_jones_wenzl} \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick, fill = purple] (-.45,0) rectangle (0.45,2); \node at (0,1) {${\rm JW}_n$}; \draw[line width=2pt] (-.8,1) -- (-.45,1); \draw[line width=2pt] (.8,1) -- (.45,1); \end{tikzpicture} }} = \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick, fill = purple] (-.65,0.2) rectangle (0.65,2); \node at (0,1.1) {${\rm JW}_{n-1}$}; \draw[line width=2pt] (-1,1.1) -- (-.65,1.1); \draw[line width=2pt] (1,1.1) -- (.65,1.1); \draw[very thick] (-1,0) -- (1,0); \end{tikzpicture} }} + \sum_{i=1}^{n-1} \frac{(-1)^i [i]}{[n]} \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick, fill = purple] (-.65,0.2) rectangle (0.65,2); \node at (0,1.1) {${\rm JW}_{n-1}$}; \draw[line width=2pt] (1,1.1) -- (.65,1.1); \draw[very thick] (1,0) -- (-0.65,0); \draw[very thick] (-.65,0.4) arc (90:270:0.2); \draw[very thick] (-1.6,1.1) arc (-90:90:0.2); \draw[line width=2pt] (-.65, 1.7) edge[out=180,in=0] (-1.4,1.9); \draw[line width=2pt] (-1.4,1.9) -- (-1.6,1.9); \draw[line width=2pt] (-.65, 0.9) edge[out=180,in=0] (-1.4,0.7); \draw[line width=2pt] (-1.4,0.7) -- (-1.6,0.7); \node at (-1.1, 2.1) {\tiny$i-1$}; \node at (-1.2, 0.5) {\tiny$n-1-i$}; \end{tikzpicture} }} \end{equation} This can be verified by noting that the morphism in \cref{eq:single_clasp_jones_wenzl} is killed by the right action of all cups and thus must be ${\rm JW}_{n}$ by uniqueness under that property~\cite[Lemma 1.1]{martin_spencer_2021}. 
Then the inner product $\langle x_1^{m}, x_2^{m}\rangle$ is the identity coefficient in \begin{equation}\label{eq:twisty_inner_product} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,1.8); \fill (-.3,1.65) rectangle (0,1.8); \draw[thick, fill=purple] (-.3,0.1) rectangle (.0,-.45); \draw[thick] (0,.1) -- (0.6,.3) -- (0.6,1.6) -- (0,1.8) --cycle; \draw[thick] (-.3,.1) -- (-.9,.3) -- (-.9,1.6) -- (-.3,1.8) --cycle; \node at (.32,.9) {${\rm d}_n^{m_1}$}; \node at (-.62,.9) {${\rm u}_n^{m_1}$}; \draw[line width=2pt] (0,-.05) -- (0.6,-.05) arc (-90:90:0.3); \draw[line width=2pt] (-.3,-.05) -- (-.9,-.05); \draw[line width=2pt] (-.9, 0.55) arc (90:270:0.3); \draw[line width=2pt] (0.6,1) -- (1.4, 1); \draw[line width=2pt] (-0.9,1) -- (-1.7, 1); \draw[line width=2pt] (0,-.2) -- (1.4, -.2); \draw[line width=2pt] (-.3,-.2) -- (-1.7, -.2); \draw[line width=2pt] (0,.95) -- (0,.95); \draw[very thick] (-1.7,-.55) -- (0.1,-.55) arc(-90:90:0.08) -- (0,-.39); \node at (1.25, 0.2) {\tiny$t-1$}; \node at (-1.50, 0.2) {\tiny$t$}; \node at (1.2, 1.2) {\tiny$m_1-t + 1$}; \node at (-1.5, 1.2) {\tiny$m_1-t$}; \node at (0.9, -.37) {\tiny$\ell-t-1$}; \node at (-1.2, -.37) {\tiny$\ell-t-1$}; \end{tikzpicture} }} = \left(\lambda_{n}^{m_1}\right)^{-1} \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.15,0.2) rectangle (.15,1.8); \draw[thick, fill=purple] (-.15,0.1) rectangle (.15,-.45); \draw[line width=2pt] (.15,-.05) arc (-90:90:0.3); \draw[line width=2pt] (-.15, 0.55) arc (90:270:0.3); \draw[line width=2pt] (0.15,1) -- (1.4, 1); \draw[line width=2pt] (-0.15,1) -- (-1.4, 1); \draw[line width=2pt] (0.15,-.2) -- (1.4, -.2); \draw[line width=2pt] (-.15,-.2) -- (-1.4, -.2); \draw[very thick] (-1.4,-.55) -- (0.25,-.55) arc(-90:90:0.08) -- (0.15,-.39); \node at (0.6, 0.6) {\tiny$t-1$}; \node at (-.45, 0.6) {\tiny$t$}; \node at (0.8, 1.2) {\tiny$m_1-t + 1$}; \node at (-.8, 1.2) {\tiny$m_1-t$}; \node at (1.0, -0) {\tiny$\ell-t-1$}; \node at (-1.05, 
-0) {\tiny$\ell-t-1$}; \end{tikzpicture} }} \end{equation} This identity follows by the same argument as in \cref{eq:inner_product_easy_small_diag}. Applying \cref{eq:single_clasp_jones_wenzl} to the lower Jones-Wenzl projector, and noting that the only remaining term after expanding is the one for which $i = t$, gives the coefficient of the identity as \begin{equation} \left(\lambda_{n}^{m_1}\right)^{-1}\frac{(-1)^t[t]\Theta(m_1, \ell-2, m_1+\ell-2t)}{[\ell-1][m_1+\ell-2t+1]} = \left(\lambda_{n}^{m_1}\right)^{-1}(-1)^t \frac{[t]}{[\ell-1]} \genfrac{[}{]}{0pt}{}{m_1+\ell-t}{t-1}\Big/\genfrac{[}{]}{0pt}{}{m_1}{t-1}\genfrac{[}{]}{0pt}{}{\ell-2}{t-1}. \end{equation} Again, this vanishes unless $t \in \{0, \ldots, \lfloor \frac \ell 2\rfloor\}$ and $m_1 = n$. The final case to consider is when $t = 0$ or $t= \ell$. Here one needs to know if two elements of $\operatorname{supp} n$ differ by $2\ell$. However, the analysis is identical to that in \cref{ssec:two_part_seams} and so is not repeated here. \end{proof} \begin{remark} In the same way that the theta network encodes the inner product on the two-part Eve cell modules: \begin{equation} \left( \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.25,0.1) rectangle (.25,1.2); \draw[thick, fill=purple] (-.25,-0.1) rectangle (.25,-1.2); \draw[line width=2pt] (.25,-.35) arc (-90:90:0.35); \draw[line width=2pt] (-.25, 0.35) arc (90:270:0.35); \draw[line width=2pt] (0.25,0.8) -- (1.4, 0.8); \draw[line width=2pt] (-0.25,0.8) -- (-1.4, 0.8); \draw[line width=2pt] (0.25,-0.8) -- (1.4, -0.8); \draw[line width=2pt] (-0.25,-0.8) -- (-1.4, -0.8); \node at (0.8, 0) {\tiny$i$}; \node at (-.8, 0) {\tiny$i$}; \node at (0.8, 1.0) {\tiny$r-i$}; \node at (-.8, 1.0) {\tiny$r-i$}; \node at (0.8, -1.0) {\tiny$s-i$}; \node at (-0.8, -1.0) {\tiny$s-i$}; \end{tikzpicture} }} \right)_{\operatorname{id}} = \frac{1}{[r + s - 2i + 1]} \vcenter{\hbox{ \begin{tikzpicture}[scale=0.7] \draw[thick, fill=purple] (-.25,0.1) rectangle (.25,1.2);
\draw[thick, fill=purple] (-.25,-0.1) rectangle (.25,-1.2); \draw[thick, fill=purple] (-.25,-1.4) rectangle (.25,-2.5); \draw[line width=2pt] (.25,-.35) arc (-90:90:0.35); \draw[line width=2pt] (-.25, 0.35) arc (90:270:0.35); \draw[line width=2pt] (.25,-1.65) arc (-90:90:0.35); \draw[line width=2pt] (-.25, -0.95) arc (90:270:0.35); \draw[line width=2pt] (.25,0.95-3.1) arc (-90:90:1.55); \draw[line width=2pt] (-.25, 0.95) arc (90:270:1.55); \node at (0.8, 0) {\tiny$i$}; \node at (-.8, 0) {\tiny$i$}; \node at (1.3, -.65) {\tiny$r-i$}; \node at (-1.3, -.65) {\tiny$r-i$}; \node at (1.1, -1.3) {\tiny$s-i$}; \node at (-1.1, -1.3) {\tiny$s-i$}; \end{tikzpicture} }} = \frac{\Theta(r, s, r + s - 2i)}{[r+s-2i+1]} \end{equation} \cref{eq:twisty_inner_product} can be considered as a sort of ``singly twisted'' theta network: \begin{equation} \left( \vcenter{\hbox{ \begin{tikzpicture} \draw[thick, fill=purple] (-.25,0.1) rectangle (.25,1.2); \draw[thick, fill=purple] (-.25,-0.1) rectangle (.25,-1.2); \draw[line width=2pt] (.25,-.35) arc (-90:90:0.35); \draw[line width=2pt] (-.25, 0.35) arc (90:270:0.35); \draw[line width=2pt] (0.25,0.8) -- (1.4, 0.8); \draw[line width=2pt] (-0.25,0.8) -- (-1.4, 0.8); \draw[line width=2pt] (0.25,-0.8) -- (1.4, -0.8); \draw[line width=2pt] (-0.25,-0.8) -- (-1.4, -0.8); \node at (0.9, 0) {\tiny$i-1$}; \node at (-.8, 0) {\tiny$i$}; \node at (0.9, 1.0) {\tiny$r-i+1$}; \node at (-.8, 1.0) {\tiny$r-i$}; \node at (0.8, -0.65) {\tiny$s-i$}; \node at (-0.8, -0.65) {\tiny$s-i$}; \draw[very thick] (-1.4,-1.3) -- (0.25,-1.3) arc (-90:90:0.1); \end{tikzpicture} }} \right)_{\operatorname{id}} = \frac{1}{[r + s - 2i + 2]} \vcenter{\hbox{ \begin{tikzpicture}[scale=0.7] \draw[thick, fill=purple] (-.25,0.1) rectangle (.25,1.2); \draw[thick, fill=purple] (-.25,-0.1) rectangle (.25,-1.2); \draw[thick, fill=purple] (-.25,-1.8) rectangle (.25,-2.9); \draw[line width=2pt] (.25,-.35) arc (-90:90:0.35); \draw[line width=2pt] (-.25, 0.35) arc (90:270:0.35); \draw[line 
width=2pt] (.25,-2.35) arc (-90:90:0.85); \draw[line width=2pt] (-.25, -0.65) arc (90:270:0.85); \draw[line width=2pt] (.25,0.45-3.1) arc (-90:90:1.8); \draw[line width=2pt] (-.25, 0.95) arc (90:270:1.8); \node at (1.0, 0) {\tiny$i-1$}; \node at (-.8, 0) {\tiny$i$}; \node at (1.5, 1.0) {\tiny$r-i+1$}; \node at (-1.5, 1.0) {\tiny$r-i$}; \node at (1.3, -0.8) {\tiny$s-i$}; \node at (-1.3, -0.8) {\tiny$s-i$}; \draw[very thick] (.25, -1.0) arc (90:-90:0.25) -- (-.25, -1.5) arc (90:270:0.25); \end{tikzpicture} }} \end{equation} \end{remark} \section{Two Part Eve Compositions}\label{sec:two-part} Let ${\boldsymbol{\mu}} = (\mu_1,\mu_2) \vdash n$ be Eve and without loss of generality suppose $\mu_1 \ge \mu_2$. This is so because the (Eve) Jones-Wenzl projectors have up-down symmetry. \begin{center} \begin{tikzpicture} \foreach \x in {0, 1.5} { \begin{scope}[shift={(\x,0)}] \foreach \i in {0,...,14} { \draw[very thick] (-.2,.2+ \i/5) -- (.5,.2 + \i/5); } \draw[thick,fill=purple] (0,0.1) rectangle (0.3,1.85); \draw[thick,fill=purple] (0,1.95) rectangle (0.3,3.1); \end{scope} } \draw[thick] (.5,0.1) rectangle (1.3, 3.1); \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-0.3,0.1) -- (-0.3,1.9) node [black,midway,xshift=-10pt] {\footnotesize $\mu_1$}; \draw [decorate,decoration={brace,amplitude=5pt},xshift=-1pt,yshift=0pt] (-0.3,1.9) -- (-0.3,3.0) node [black,midway,xshift=-10pt] {\footnotesize $\mu_2$}; \end{tikzpicture} \end{center} \begin{remark} When we relax the condition that ${\boldsymbol{\mu}}$ is Eve we might find that we no longer have a $(\mu_1, \mu_2) \longleftrightarrow (\mu_2, \mu_1)$ symmetry. Indeed, the ${\rm JW}_{\mu_i}$ are no longer symmetric so the two cases are not related by the vertical symmetry. This is discussed in \cref{sec:further}. \end{remark} \begin{theorem} Let ${\boldsymbol{\mu}} = (\mu_1,\mu_2) \vdash n$ be an Eve composition. Write $\mu_1 = a_1p^{(i_1)}-1 \ge \mu_2 = a_2p^{(i_2)}-1$. 
\begin{enumerate}[(i)] \item The algebra ${\rm TL}_{\boldsymbol{\mu}}$ has $k$-dimension equal to $\min\{\mu_1,\mu_2\}$. \item The set $\Lambda_{\boldsymbol{\mu}} = E_{\mu_1,\mu_2} \subseteq \Lambda_\mathbf n$ and each cell module is one-dimensional. \item Further, $(\Lambda_{\boldsymbol{\mu}})_0 = \{m \in E_{\mu_1,\mu_2}\;:\; (n+m)/2 + 1 \triangleright (n-m)/2\}$. We can also write this as \begin{equation} (\Lambda_{\boldsymbol{\mu}})_0 = \left\{n - 2\sum_{i=0}^{i_2}k_i p^{(i)} \;:\; k_{i_2} \in X\text{ and } k_{i} \le \left\lfloor\frac{\ell \wedge p -1}{2}\right\rfloor \text{ for } 0 \le i < i_2 \right\}, \end{equation} where if $i_1 > i_2$ the set $X = \{0,\ldots,\lfloor (a_2 - 1)/2\rfloor\}$ and if $i_1 = i_2$ with $(a_1 + a_2-1) p^{(i_1)} = c p^{(i_1)} + d p^{(i_1+1)}$, \begin{equation} X = \{0,\ldots, \lfloor c/2\rfloor\} \cup \{c+1,\ldots,\min\{a_2, \lfloor (p\wedge \ell +c)/2\rfloor\}\}. \end{equation} \end{enumerate} \end{theorem} Interestingly, we see that the composition ${\boldsymbol{\mu}}$ controls which cell modules exist, but not whether or not they are degenerate --- that is controlled entirely by $n$. Again, this should be compared to~\cite[Proposition 4.11]{sutton_tubbenhauer_wedrich_zhu_2021}. \begin{proof} Recall from \cref{eg:rings_tl_mu} part (iii) that over characteristic zero (or $(\ell, p)$ where $\ell > \mu_1, \mu_2$) \begin{equation}\label{eq:two_part_iso} {\rm TL}_{\boldsymbol{\mu}} \simeq k[x] \Big/ \left(\prod_{r = 1}^{\mu_2} \left(x - \frac{[r][n-r+1]}{[\mu_1][\mu_2]}\right)\right). \end{equation} Such an algebra has $\mu_2$ cell modules, one for each $1\le r \le \mu_2$. The involution is the identity and each cell module is one-dimensional. It is clear that the cell modules of ${\rm TL}_{\boldsymbol{\mu}}$ are indexed by \begin{equation} \Lambda_{\boldsymbol{\mu}} = \{n, n-2, \ldots, n-2\mu_2\} = E_{\mu_1,\mu_2}.
\end{equation}
Comparing with \cref{eq:two_part_recurse}, we see that the generator $U_1$ acts on the module indexed by $n-2k$ as the scalar $[k][n-k+1]/[\mu_1][\mu_2]$. Thus in fact the isomorphism in \cref{eq:two_part_iso} preserves the given cell data. We have deduced that there are $\mu_2 + 1$ cell modules, each of which is one dimensional. Since the dimension of a cellular algebra is given by the sum of the squares of the dimensions of its cell modules, we have shown both (i) and (ii).

From \cref{eq:useful_gauss}, we can see that the determinant of the Gram matrix for the cell module $\Delta_{\boldsymbol{\mu}}(n-2k)$ is
\begin{align}\label{eq:two_part_gram}
\frac{\Theta(\mu_1,\mu_2,n-2k)}{[n-2k+1]} &= \frac{[n-k+1]}{[n-2k+1]}\genfrac{[}{]}{0pt}{}{n-k}{k}\Big/\genfrac{[}{]}{0pt}{}{\mu_1}{k}\genfrac{[}{]}{0pt}{}{\mu_2}{k}\\
&=\genfrac{[}{]}{0pt}{}{n-k+1}{k}\Big/\genfrac{[}{]}{0pt}{}{\mu_1}{k}\genfrac{[}{]}{0pt}{}{\mu_2}{k}\nonumber.
\end{align}
The determinant is well defined over mixed characteristic since it is a polynomial in the elements of the Gram matrix, which are themselves defined over mixed characteristic. Each cell module is one-dimensional, so the form is degenerate exactly when this determinant vanishes, in which case $n-2k \not\in (\Lambda_{{\boldsymbol{\mu}}})_0$. However, an equivalent condition for $\mu_i$ to be Eve is that all quantum binomials $\genfrac{[}{]}{0pt}{}{\mu_i}{k}$ are non-zero in the ring~\cite[Section 10.3]{spencer_2020}. Hence the denominator in \cref{eq:two_part_gram} is non-zero and so the determinant vanishes precisely when $\genfrac{[}{]}{0pt}{}{n-k+1}{k} = 0$.

Recall the notation that $n \triangleright r$ if $n_i \ge r_i$ for all $i$, where $n = \sum_i n_i p^{(i)}$ and $r = \sum_i r_ip^{(i)}$ are the $(\ell,p)$-adic expansions. Then the quantum binomial $\genfrac{[}{]}{0pt}{}{n}{r}$ is non-zero iff $n \triangleright r$. Hence the determinant vanishes exactly when $n +1 - k \not\triangleright k$.
If $\mu_1 = a_1 p^{(i_1)} - 1$ and $\mu_2 = a_2 p^{(i_2)} - 1$ for $i_1\neq i_2$, then $n+1 = [a_1, 0,\ldots,0, a_2-1,p-1,\ldots,\ell-1]_{\ell,p}$. Since $k \le \mu_2$, when subtracting $k$ from $n+1$, there are no carries. Hence the set of $k$ for which the cell module indexed by $n-2k$ is non-degenerate is
\begin{equation}
\left\{\sum_{i=0}^{i_2} k_i p^{(i)}\;:\; k_{i_2} \le \left\lfloor\frac{a_2-1}{2}\right\rfloor\text{ and } k_{i} \le \left\lfloor\frac{\ell \wedge p -1}{2}\right\rfloor \right\}.
\end{equation}
On the other hand, let $i_1 = i_2$ and $(a_1 + a_2-1) p^{(i_1)} = c p^{(i_1)} + d p^{(i_1+1)}$ for $d \in \{0,1\}$ and $c < a_2$. We can thus write $n+1 = \pldigs{d,c,p-1,\ldots, \ell-1}$. Then again there are no carries when $k$ is subtracted from $n+1$, except possibly at the $i_1$-th place.

Write $k = \sum_{i = 0}^{i_1}k_i p^{(i)}$ and assume that $n+1-k \triangleright k$. Similarly to the first case, $k_i \le \lfloor(\ell\wedge p -1)/2\rfloor$ for $i < i_1$. Then if $k_{i_1} \le c$ we must have that $k_{i_1} \le \lfloor c/2\rfloor$. Otherwise $c < k_{i_1} \le a_2$ and $k_{i_1} \le \lfloor (p\wedge \ell +c)/2\rfloor$.
\end{proof}
\section{Cell Data for Jones-Wenzl Algebras over Eve Compositions}\label{sec:valenced_cell_data}
In general, since ${\rm TL}_n$ is cellular, we may use the results of \cref{sec:cellular} to study ${\rm TL}_{\boldsymbol{\mu}}$. However, this may prove to be complicated, depending on the composition ${\boldsymbol{\mu}}$. Here we determine the cellular data $(\Lambda_{\boldsymbol{\mu}}, M_{\boldsymbol{\mu}}(\lambda), C_{\boldsymbol{\mu}}, \iota)$ for the algebra ${\rm TL}_{\boldsymbol{\mu}}$ when ${\boldsymbol{\mu}}$ is Eve. The task to be done is to determine the appropriate restrictions of $M(\lambda)$ and $\Lambda$, since we will show momentarily that $e^{\boldsymbol{\mu}}$ is both generous and lavish.
\subsection{The Sets of Indices}
We determine which $m\in \Lambda$ we will need to drop --- i.e.\ which $\Delta_{\boldsymbol{\mu}}(m) = 0$.
\begin{proposition}\label{prop:eve_mu_generous_lavish}
If ${\boldsymbol{\mu}}$ is Eve, then $e^{\boldsymbol{\mu}}$ is both lavish and generous in the sense of \cref{def:generous}.
\end{proposition}
\begin{proof}
Fix some $m \in \Lambda$. Let the ``boundary'' be the set of sites
\begin{equation}
\{ 1,2,\ldots,n-1\} \setminus \{\mu_1, \mu_1+\mu_2,\ldots,\mu_1 + \mu_2 +\cdots+\mu_{r-1}\}
\end{equation}
and let $B(m) \subseteq \Delta(m)$ be the linear span of all diagrams in $M(m)$ which have a simple left cap in the boundary. It is clear that $B(m)$ is killed by the action of $e$ as ${\boldsymbol{\mu}}$ is Eve and so $N^\Delta_e(m) \supseteq B(m)$. Moreover, since every non-identity diagram in the support of $e^{\boldsymbol{\mu}}$ has a simple cap on the boundary, $e^{\boldsymbol{\mu}}$ acts as the identity on the vector space $\Delta(m) / B(m)$. Hence $N^\Delta_e(m)= B(m)$.

Now suppose that
\begin{equation}
\sum_{S \in M(m)\setminus N^\Delta_e(m)} \alpha_{S} \;e \cdot S = 0.
\end{equation}
Then certainly $\sum_{S \in M(m)\setminus N^{\Delta}_e(m)}\alpha_{S} S \in N^\Delta_e(m) = B(m)$. But the sum ranges over diagrams not lying in $B(m)$, and distinct diagrams are linearly independent, so every $\alpha_S$ must vanish. Hence $\{ e\cdot S \;:\; S \in M(m) \setminus N^\Delta_e(m)\}$ is a linearly independent set and $e^{\boldsymbol{\mu}}$ is generous.

If $S \in N^\Delta_e(m)$ then $S$ has a left boundary cap. Clearly then $e \cdot C^m_{S,T} = 0$ for any $T$ and so $S \in N^{D}_e(m)$. Hence $e^{\boldsymbol{\mu}}$ is lavish.
\end{proof}
If ${\boldsymbol{\mu}}$ is not Eve, then $e^{\boldsymbol{\mu}}$ is no longer generous or lavish, as shown in \cref{rem:lavish_not_eve}. When ${\boldsymbol{\mu}}$ is Eve, we have shown that $\Delta_{\boldsymbol{\mu}}(m)$ has a basis consisting of all diagrams without a simple cap on the boundary. This allows us to enumerate which $m$ lie in $\Lambda_{\boldsymbol{\mu}}$.
\begin{example}\cite[eq.
3.8]{langlois_remillard_saint_aubin_2020}\label{eg:when_boundary_sites}
If ${\boldsymbol{\mu}} = (k,1,1,\ldots,1)\vdash n$ is Eve, then
\begin{equation}
\Lambda_{\boldsymbol{\mu}} = \{m \in \mathbb{N}_0 \;:\; 2k-n \le m \le n, \;\text{and}\; m \equiv_2 n \;\}.
\end{equation}
Indeed, in this case, the boundary consists of the sites $\{1, \ldots, k-1\}$. It is clear that a diagram with $r$ links (so with $m = n-2r$) is possible for each $0\le r \le n-k$. Indeed if $r < k$, simply connect sites $\{k-r+1,\ldots, k\}$ to $\{k+1,\ldots, k+r\}$. Otherwise, connect $\{1,\ldots,k\}$ to $\{k+1,\ldots,2k\}$ and then pair up remaining sites until $r$ links are made. On the other hand, if $n-k < r$, by the pigeonhole principle, there is a boundary site connected to another, and so a simple link on the boundary. Hence such a diagram is killed by $e^{\boldsymbol{\mu}}$.
\end{example}
\begin{example}
If ${\boldsymbol{\mu}} = (\mu_1, \mu_2)$ is Eve, then $\Lambda_{\boldsymbol{\mu}} = E_{\mu_1,\mu_2}$, as defined in \cref{eq:two_part_e}.
\end{example}
Suppose that the diagram $x \in \Delta(m)$ is not killed by $e^{\boldsymbol{\mu}}$. If $m < n$ then $x = x' u$ where $u$ is a simple cap diagram $\underline{m+2} \to \underline{m}$. It is then the case that $e^{\boldsymbol{\mu}} x'$ is not zero in $D(m+2)$ and, because $e^{\boldsymbol{\mu}}$ is lavish, this means that it is not zero in $\Delta(m+2)$. As such, for each Eve composition ${\boldsymbol{\mu}}\vdash n$, there is an $s_{\boldsymbol{\mu}}$ such that the set $\Lambda_{\boldsymbol{\mu}}$ is of the form
\begin{equation}\label{eq:Lambda_bmu}
\Lambda_{\boldsymbol{\mu}} = \{s_{\boldsymbol{\mu}}, s_{\boldsymbol{\mu}}+2,\ldots, n\}.
\end{equation}
\begin{lemma}\cite[Lemma 2.3]{flores_peltola_2018b}\label{lem:eve_composition_valids}
For an Eve composition ${\boldsymbol{\mu}} =(\mu_1,\ldots,\mu_r)\vdash n$,
\begin{equation}\label{eq:recurse_smin}
s_{(n)} = n\quad\quad ; \quad\quad s_{{\boldsymbol{\mu}}} = \begin{cases}
s_{\hat{\boldsymbol{\mu}}} - \mu_r, & \mu_r \le s_{\hat{\boldsymbol{\mu}}}\\
s_{\hat{\boldsymbol{\mu}}} - \mu_r\mod 2, & s_{\hat{\boldsymbol{\mu}}} < \mu_r < |\hat{\boldsymbol{\mu}}|\\
\mu_r - |\hat{\boldsymbol{\mu}}|, & |\hat{\boldsymbol{\mu}}| \le \mu_r\\
\end{cases}
\end{equation}
\end{lemma}
\begin{example}
Suppose that ${\boldsymbol{\mu}} = (5,3,4,7,22)\vdash 41$. Then we can calculate, using the above,
\begin{equation}
s_{(5)} = 5\quad\quad s_{(5,3)} = 2\quad\quad s_{(5,3,4)} = 0\quad\quad s_{(5,3,4,7)} = 1\quad\quad s_{(5,3,4,7,22)} = 3
\end{equation}
On the other hand, if ${\boldsymbol{\mu}} = (22,7,4,3,5)\vdash 41$, then
\begin{equation}
s_{(22)} = 22\quad\quad s_{(22,7)} = 15\quad\quad s_{(22,7,4)} = 11\quad\quad s_{(22,7,4,3)} = 8\quad\quad s_{(22,7,4,3,5)} = 3.
\end{equation}
This illustrates that \cref{lem:eve_composition_valids} gives the same result for any composition and its reverse --- a fact that does not follow easily from the definition.
\end{example}
This set will also be critical for evaluating non-Eve compositions. Thus if ${\boldsymbol{\mu}}$ is any composition of $n$ (Eve or otherwise), let $E_{\boldsymbol{\mu}}$ be the set given by \cref{eq:Lambda_bmu} with $s_{\boldsymbol{\mu}}$ as in \cref{eq:recurse_smin}. Notice that if ${\boldsymbol{\mu}}$ is two-part, this coincides with the definition of $E_{r,s}$ given in \cref{eq:two_part_e}.
\subsection{The Sets of Tableaux}
Let ${\boldsymbol{\mu}} = (\mu_1, \ldots,\mu_r)\vdash n$ be Eve. We now turn to enumerating the tableaux in $M_{\boldsymbol{\mu}}(m)$ for $m \in \Lambda_{\boldsymbol{\mu}}$.
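The recursion of \cref{lem:eve_composition_valids} is easy to evaluate mechanically. The following is a minimal sketch in Python (the function name \texttt{s\_min} is our own, not notation from the text):

```python
def s_min(mu):
    """Compute s_mu for a composition mu = (mu_1, ..., mu_r),
    following the recursion of the lemma above."""
    s = total = mu[0]                 # base case: s_{(n)} = n
    for part in mu[1:]:
        if part <= s:                 # mu_r <= s_{mu-hat}
            s -= part
        elif part < total:            # s_{mu-hat} < mu_r < |mu-hat|
            s = (s - part) % 2
        else:                         # |mu-hat| <= mu_r
            s = part - total
        total += part
    return s
```

Running this on the two compositions of the example above returns $3$ for both orderings, illustrating the reversal symmetry.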
Recall that diagrams from $\underline{n} \to \underline{m}$ are in bijection with standard $(\frac{n+m}{2}, \frac{n-m}{2})$-Young tableaux. The diagrams in $\Delta_{\boldsymbol{\mu}}(m)$ are then such diagrams without a simple link in the boundary. Let the ``$i$-th bucket'' be the subset of sites
\begin{equation}
B_i = \{\mu_1+\cdots+\mu_{i-1}+1,\,\mu_1+\cdots+\mu_{i-1}+2,\,\ldots,\,\mu_1+\cdots+\mu_{i}\}.
\end{equation}
Then we seek all diagrams without a link between two points in the same bucket. Given such a diagram, it is clear that for each $i$ there is a $t_i$ such that the first $t_i$ sites in bucket $B_i$ are connected to sites of smaller index by the diagram. The remaining sites are either free (defects) or are connected to sites in later buckets. An alternative characterisation of such a diagram is thus by a tuple $\mathbf{t} = (t_1, t_2, \ldots, t_r)$ with $0 \le t_i \le \mu_i$ for each $i$ and $n - 2\sum_{i=1}^{r} t_i = m$, such that
\begin{equation}
\sum_{i=1}^{j-1} (\mu_i - 2t_i) \ge t_j \quad\quad\text{for each } 1 \le j \le r
\end{equation}
(the case $j = 1$ forces $t_1 = 0$). We also construct the tuple ${\boldsymbol{\rho}} = {\boldsymbol{\mu}} - \mathbf{t} = (\mu_1 - t_1, \ldots, \mu_r - t_r)$.

This can also be envisioned as a walk on a planar lattice. The walk begins at $(0,0)$ and ends at $(n, m)$. For each site $1 \le i \le n$, it moves down and right one unit if $i$ is connected to a site of smaller index, and up and right one unit otherwise. At no point may the walk cross the $x$-axis, and each ``bucket'' consists of some number of steps down followed by some number of steps up.
\begin{example}\label{eg:walk1}
Let ${\boldsymbol{\mu}} = (3,2,4,4,2) \vdash 15 = n$ and $m = 3$.
Then the tuple $\mathbf{t}=(0,1,2,1,2)$ corresponds to diagram \begin{center} \begin{tikzpicture}[scale=0.7] \draw (0.5,0) -- (15.5,0); \foreach \i in {1,...,15} { \fill (\i,0) circle (0.1); } \draw[very thick] (1,0) -- (1,2); \draw[very thick] (2,0) edge[out=90,in=90] (7,0); \draw[very thick] (3,0) edge[out=90,in=90] (4,0); \draw[very thick] (5,0) edge[out=90,in=90] (6,0); \draw[very thick] (8,0) -- (8,2); \draw[very thick] (9,0) edge[out=90,in=90] (10,0); \draw[very thick] (11,0) -- (11,2); \draw[very thick] (12,0) edge[out=90,in=90] (15,0); \draw[very thick] (13,0) edge[out=90,in=90] (14,0); \draw[dashed,red] (3.5,-0.15) -- (3.5,1.5); \draw[dashed,red] (5.5,-0.15) -- (5.5,1.5); \draw[dashed,red] (9.5,-0.15) -- (9.5,1.5); \draw[dashed,red] (13.5,-0.15) -- (13.5,1.5); \draw [decorate,decoration={brace,amplitude=5pt}] (3.2,-.2) -- (.7,-.2) node [black,midway,yshift=-10pt] {\footnotesize $B_1$}; \draw [decorate,decoration={brace,amplitude=5pt}] (5.2,-.2) -- (3.7,-.2) node [black,midway,yshift=-10pt] {\footnotesize $B_2$}; \draw [decorate,decoration={brace,amplitude=5pt}] (9.2,-.2) -- (5.7,-.2) node [black,midway,yshift=-10pt] {\footnotesize $B_3$}; \draw [decorate,decoration={brace,amplitude=5pt}] (13.2,-.2) -- (9.7,-.2) node [black,midway,yshift=-10pt] {\footnotesize $B_4$}; \draw [decorate,decoration={brace,amplitude=5pt}] (15.2,-.2) -- (13.7,-.2) node [black,midway,yshift=-10pt] {\footnotesize $B_5$}; \end{tikzpicture} \end{center} has ${\boldsymbol{\rho}} = (3,1,2,3,0)$ and walk \begin{center} \begin{tikzpicture}[scale=0.7] \draw (-0.5,0) -- (16.5,0); \fill (1,1) circle (0.1); \fill (2,2) circle (0.1); \fill (3,3) circle (0.1); \fill (4,2) circle (0.1); \fill (5,3) circle (0.1); \fill (6,2) circle (0.1); \fill (7,1) circle (0.1); \fill (8,2) circle (0.1); \fill (9,3) circle (0.1); \fill (10,2) circle (0.1); \fill (11,3) circle (0.1); \fill (12,4) circle (0.1); \fill (13,5) circle (0.1); \fill (14,4) circle (0.1); \fill (15,3) circle (0.1); \draw[thick] (0,0) -- 
(3,3) -- (4,2) -- (5,3) -- (7,1) -- (9,3) -- (10,2) -- (13,5) -- (15,3);
\foreach \i in {0,2,4,6} {
\draw[dotted, thin] (-.2,\i+.2) -- (\i+.2,-.2);
\draw[dotted, thin] (16+.2,\i+.2) -- (16-\i-.2,-.2);
}
\foreach \i in {8,10,12,14,16} {
\draw[dotted, thin] (\i-6-.2,6+.2) -- (\i+.2,-.2);
\draw[dotted, thin] (16-\i+6+.2,6+.2) -- (16-\i-.2,-.2);
}
\foreach \i in {18,20,22} {
\draw[dotted, thin] (\i-6-.2,6+.2) -- (16+.2,\i-16-.2);
\draw[dotted, thin] (16-\i+6+.2,6+.2) -- (-.2,\i-16-.2);
}
\draw[dashed,red] (3.5,-0.2) -- (3.5,6.2);
\draw[dashed,red] (5.5,-0.2) -- (5.5,6.2);
\draw[dashed,red] (9.5,-0.2) -- (9.5,6.2);
\draw[dashed,red] (13.5,-0.2) -- (13.5,6.2);
\draw[very thick, green!60!black, ->] (0,0.2) -- (3,3.2);
\draw[very thick, orange, ->] (3.6,2.6) -- (4,2.2);
\draw[very thick, green!60!black, ->] (4,2.2) -- (5,3.2);
\draw[very thick, orange, ->] (5.6,2.6) -- (7,1.2);
\draw[very thick, green!60!black, ->] (7,1.2) -- (9,3.2);
\draw[very thick, orange, ->] (9.6,2.6) -- (10,2.2);
\draw[very thick, green!60!black, ->] (10,2.2) -- (13,5.2);
\draw[very thick, orange, ->] (13.6,4.6) -- (15,3.2);
\end{tikzpicture}
\end{center}
Note that the rises in the walk (indicated in green) have lengths given by ${\boldsymbol{\rho}}$ and the falls (indicated in orange) have lengths given by $\mathbf{t}$.
\end{example}
We now wish to count the number of such walks. This will give us the dimension of the cell module $\Delta_{\boldsymbol{\mu}}(m)$. For a composition ${\boldsymbol{\eta}}$ (of any number), let $C^{\boldsymbol{\eta}}_m$ be the number of walks from $(0,0)$ to $(|{\boldsymbol{\eta}}|, m)$ that do not cross the $x$-axis and which obey the ``down-then-up'' rule within each of the buckets described by ${\boldsymbol{\eta}}$. Such a walk is termed a ``walk over ${\boldsymbol{\eta}}$''.
\begin{proposition}\cite[Lemma 2.8]{flores_peltola_2018b}\label{prop:count_Cmun}
If ${\boldsymbol{\mu}} = (n)\vdash n$,
\begin{equation}\label{eq:count_Cmun_1}
C^{(n)}_m = \begin{cases}
1 & n = m\\
0 & \text{else}
\end{cases}
\end{equation}
and if ${\boldsymbol{\mu}} = (\mu_1,\ldots, \mu_r)\vdash n$ for $r > 1$,
\begin{equation}
C^{\boldsymbol{\mu}}_m = \sum_{g = 0}^{\min\{\mu_r, m\}} C^{\hat{\boldsymbol{\mu}}}_{m + \mu_r - 2g}.
\end{equation}
\end{proposition}
\begin{proof}
\Cref{eq:count_Cmun_1} is clear. Consider then some valid walk ending at $(|{\boldsymbol{\mu}}|,m)$. The final bucket of the walk must consist of a fall of length $\mu_r - g$, followed by a rise of length $g$, for some $0\le g \le \mu_r$. The $x$ coordinate of the walk increases by $\mu_r$, while the $y$ coordinate first decreases by $\mu_r - g$ and then increases by $g$, for a net change of $2g - \mu_r$. However, to prevent the walk from sinking below the $x$-axis, we require that $g \le m$. The formula follows.
\end{proof}
Note that this is still an inherently recursive formula: $\hat{\boldsymbol{\mu}}$ is the composition with the last part removed, and we need to know the values of $C^{\hat{\boldsymbol{\mu}}}_{m'}$ for all $m'$ to calculate $C^{\boldsymbol{\mu}}_m$. However, let us reiterate the importance of this number:
\begin{equation}
\dim \Delta_{\boldsymbol{\mu}}(m) = C^{{\boldsymbol{\mu}}}_m,
\end{equation}
and this is non-zero exactly when $m$ lies in the set described by \cref{eq:Lambda_bmu}.
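The recursion of \cref{prop:count_Cmun} can be evaluated by a single forward pass over the buckets, carrying the vector of counts for the truncated composition. A minimal Python sketch (the names here are our own):

```python
def walk_counts(mu):
    """All heights m reachable by walks over the composition mu,
    with multiplicities C^mu_m, computed by the bucket recursion
    above via forward dynamic programming."""
    counts = {mu[0]: 1}                    # C^{(mu_1)}_m = 1 iff m = mu_1
    for part in mu[1:]:
        new = {}
        for m_prev, c in counts.items():
            # a fall of length part - g, then a rise of length g
            for g in range(part + 1):
                if m_prev - (part - g) < 0:    # fall would cross the x-axis
                    continue
                m = m_prev - part + 2 * g
                new[m] = new.get(m, 0) + c
        counts = new
    return counts
```

For ${\boldsymbol{\mu}} = \boldsymbol{n}$ this reproduces the ballot numbers of the Catalan triangle discussed below, and for a two-part composition it returns a single walk for each $m \in E_{\mu_1,\mu_2}$.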
\begin{example}
Setting ${\boldsymbol{\mu}} = \boldsymbol{n}$, we recover the well-known recurrence for the Catalan triangle (which computes the Catalan numbers on the diagonal)
\begin{equation}
C^{\boldsymbol{n}}_m = \begin{cases}
0&m > n\\
1&m = n\\
C^{\boldsymbol{n-1}}_1&m = 0\\
C^{\boldsymbol{n-1}}_{m-1} + C^{\boldsymbol{n-1}}_{m+1} &\text{else}\\
\end{cases}
\end{equation}
The solution to this recursion is that $C^{\boldsymbol{n}}_m$ vanishes if $n$ and $m$ are of different parity, and if they are of the same parity,
\begin{equation}
C^{\boldsymbol{n}}_m = \binom{n}{(n-m)/2} - \binom{n}{(n-m)/2-1}.
\end{equation}
We have recovered, naturally, the dimension of the cell module $\Delta(m)$.
\end{example}
With this, we conclude our analysis of the cellular data of ${\rm TL}_{\boldsymbol{\mu}}$ when ${\boldsymbol{\mu}}$ is Eve. Our choice of diagrams are those corresponding to walks over ${\boldsymbol{\mu}}$, which are counted by \cref{prop:count_Cmun}, and this leaves us with indices described by \cref{eq:Lambda_bmu}.
\section{Gram Matrices for Cell Modules}\label{sec:gram}
Recall the notation of $\Lambda_0$ for all cell indices with non-degenerate cell module forms from \cref{sec:cellular}. We recall and expand a useful lemma.
\begin{lemma}[{\citefirst[Proposition 3.4]{flores_peltola_2018b}[Lemma 7.1]{spencer_2020}}]\label{lem:folklore_1}
Suppose $k$ is a field and fix cell indices $m \in (\Lambda_{\boldsymbol{\mu}})_0$ and $m' \in \Lambda_{\boldsymbol{\mu}}$. For any submodules $M \subseteq \Delta_{\boldsymbol{\mu}}(m)$ and $M'\subseteq \Delta_{\boldsymbol{\mu}}(m')$, let $\theta : \Delta_{\boldsymbol{\mu}}(m)/M \to \Delta_{\boldsymbol{\mu}}(m')/M'$ be a ${\rm TL}_{\boldsymbol{\mu}}$-morphism. Then
\begin{enumerate}[(i)]
\item If $m < m'$ then $\theta = 0$.
\item If $m = m'$ then $\theta(z+M) = \lambda_\theta z + M'$ for some scalar $\lambda_\theta \in k$.
\item If $m \ge m'$ then there is a morphism $v_\theta : \underline{m'} \to \underline{m}$ such that $\theta(z + M) = z \cdot v_\theta + M'$ and $v_\theta$ is in the linear span of monic $\underline m' \to \underline m$ diagrams.
\end{enumerate}
Further, the image is cyclic.
\end{lemma}
\begin{proof}
Since $m \in (\Lambda_{\boldsymbol{\mu}})_0$, there is a $0 \neq x \in \Delta_{\boldsymbol{\mu}}(m) \setminus \operatorname{rad}\Delta_{\boldsymbol{\mu}}(m)$. Then $x \neq 0$ in $\Delta_{\boldsymbol{\mu}}(m) / M$ and $x$ generates $\Delta_{\boldsymbol{\mu}}(m)$. Since $k$ is a field, and $x \not \in \operatorname{rad}\Delta_{\boldsymbol{\mu}}(m)$, there is a $y \in \Delta_{\boldsymbol{\mu}}(m)$ such that $\langle x, y \rangle = 1$.

We may lift $\theta$ to a ${\rm TL}_{\boldsymbol{\mu}}$-map $\phi : \Delta_{\boldsymbol{\mu}}(m) \to \Delta_{\boldsymbol{\mu}}(m')/M'$ along the natural projection $\Delta_{\boldsymbol{\mu}}(m) \to \Delta_{\boldsymbol{\mu}}(m)/M$. But now for any $z \in \Delta_{\boldsymbol{\mu}}(m)$,
\begin{equation}
\phi (|z\rangle) = \phi(|z\rangle\langle x|y\rangle) = |z\rangle\langle x|\phi(|y\rangle).
\end{equation}
This makes it clear that the image of $\theta$ is generated by $\phi(|y\rangle)$ and is thus cyclic.

Now, suppose $m < m'$. Then the algebra element $|z\rangle\langle x|$ lies in ${\rm TL}_{\boldsymbol{\mu}}^{\le m}$ and so kills all quotients of $\Delta_{\boldsymbol{\mu}}(m')$. Hence the map $\theta$ is zero.

If $m \ge m'$ then note that part (ii) is a special case of part (iii). Then $\langle x|\phi(|y \rangle)$ is a morphism from $m\to m'$. Any diagram in $\langle x|\phi(|y \rangle)$ that is not monic factors through $m'' < m'$ and thus does not contribute to the resultant element of $\Delta_{\boldsymbol{\mu}}(m')/M'$. Thus such diagrams can be removed to obtain the morphism $v_\theta$.
\end{proof}
\begin{proposition}
If $k$ is a characteristic zero field (so that $p = \infty$), ${\boldsymbol{\mu}}$ is Eve and $m \in (\Lambda_{\boldsymbol{\mu}})_0$, then the radical of $\Delta_{\boldsymbol{\mu}}(m)$ is either zero or a simple module.
\end{proposition}
\begin{proof}
Recall \cref{eq:e_rad_is_rad}, where it was shown that $\operatorname{rad} \Delta_{\boldsymbol{\mu}}(m) = e^{\boldsymbol{\mu}}\cdot\operatorname{rad}\Delta_\mathbf{n}(m)$. In the characteristic zero case, if $\operatorname{rad}\Delta_\mathbf{n}(m)$ is non-zero it is simple and isomorphic to $L_\mathbf{n}(m')$ for some $m' > m$~\cite[Corollary 7.3]{ridout_saint_aubin_2014}. This means that $m' \in (\Lambda_{\boldsymbol{\mu}})_0$ and so $e^{\boldsymbol{\mu}}\cdot L_\mathbf{n}(m') = L_{\boldsymbol{\mu}}(m')$. However, \cref{eq:exact_restrict_delta} shows that multiplication by $e^{\boldsymbol{\mu}}$ preserves simplicity, showing the result.
\end{proof}
\subsection{Trivalent Link States}
We define open and closed trivalent link states and expound upon some of their properties.
\begin{definition}\label{def:trivalent} Let $r,s,t \in \mathbb{N}$ such that $r \in E_{s,t}$ and let \begin{equation*} i = \frac{r + s - t}{2} \quad\quad\quad j = \frac{r - s + t}{2} \quad\quad\quad k = \frac{-r + s + t}{2} \end{equation*} The open trivalent link state is a shorthand for a diagram of the following form \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick] (0,0) circle (0.2); \foreach \ang/\label in {0/r,120/s,240/t} { \begin{scope}[rotate=\ang] \draw[very thick] (0,0.2) -- (0,1.2); \node at (0,1.4) {$\label$}; \end{scope} } \end{tikzpicture}}} \quad=\quad \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick] (1.03+0.05, -0.6+0.086) to[out=150,in=-90] (0.1,1.2); \draw[very thick] (-1.03-0.05, -0.6+0.086) to[out=30,in=-90] (-.1,1.2); \draw[very thick] (-1.03+0.05, -0.6-0.086) to[out=30,in=150] (1.03-0.05,-0.6-0.086); \node at (0,1.4) {$r$}; \node at (1.2,-.7) {$t$}; \node at (-1.2,-.7) {$s$}; \node at (0.606,0.35) {$j$}; \node at (-0.606,0.35) {$i$}; \node at (0,-0.7) {$k$}; \end{tikzpicture}}}. \end{equation} If $r$, $s$ and $t$ are all Eve, the closed trivalent link state is defined to be \begin{equation} \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick, fill=purple] (0,0) circle (0.2); \foreach \ang/\label in {0/r,120/s,240/t} { \begin{scope}[rotate=\ang] \draw[very thick] (0,0.2) -- (0,1.2); \node at (0,1.4) {$\label$}; \end{scope} } \end{tikzpicture}}} \quad=\quad \vcenter{\hbox{ \begin{tikzpicture} \draw[very thick] (0,0) circle (0.2); \foreach \ang/\label in {0/r,120/s,240/t} { \begin{scope}[rotate=\ang] \draw[very thick] (0,0.2) -- (0,0.5); \draw[very thick] (0,0.8) -- (0,1.2); \draw[very thick, fill=purple] (-.3,0.5) rectangle (0.3,0.8); \node at (0,1.4) {$\label$}; \end{scope} } \end{tikzpicture}}}. \end{equation} where the three boxes represent the classical Jones-Wenzl idempotents ${\rm JW}_r$, ${\rm JW}_s$ and ${\rm JW}_t$ (all defined, since $r$, $s$ and $t$ are Eve). 
\end{definition}
An important morphism is the so-called ``theta network'' of Kauffman and Lins~\cite{kauffman_lins_94}. This is a map $\underline 0 \to \underline 0$ and thus can be identified with a scalar $\Theta(r,s,t)$. It is given by evaluating the morphism
\begin{equation}
\vcenter{\hbox{
\begin{tikzpicture}[scale=.5]
\draw[very thick] (0,0) circle (2);
\draw[very thick] (-2,0) to (2,0);
\draw[very thick, fill=purple] (2,0) circle (0.3);
\draw[very thick, fill=purple] (-2,0) circle (0.3);
\node at (0,2.3) {$r$};
\node at (0,0.3) {$s$};
\node at (0,-1.7) {$t$};
\end{tikzpicture}}}
\end{equation}
For this to be well defined, we require that $r \in E_{s,t}$. If $i,j,k$ are as in \cref{def:trivalent}, then
\begin{equation}
\Theta(r,s,t) = \frac{(-1)^{i+j+k}[i+j+k+1]![i]![j]![k]!}{[i+j]![j+k]![k+i]!}.
\end{equation}
Note that unlike the quantum binomials, this is in general not a polynomial in $\delta$. For example $\Theta(2,2,2) = -\frac{[4][3]}{[2]^2} = -(\delta^4 - 3\delta^2 + 2)/\delta$. Further examples are:
\begin{align*}
\Theta(r, s, r + s) &= (-1)^{r+s}[r+s+1]\\
\Theta(2r', 2r', 2r') &= \frac{(-1)^{r'}[3r'+1]!([r']!)^3}{([2r']!)^3}
\end{align*}
A useful normalisation of the theta value is the following:
\begin{align}\label{eq:useful_gauss}
\Theta(r,s,t) & = \frac{(-1)^{i+j+k}[i+j+k+1]![i]![j]![k]!}{[r]![s]![t]!}\\\nonumber
& = \frac{(-1)^{i+j+k}[i+j+k+1]!}{[k]![i+j]!}\Big/ \genfrac{[}{]}{0pt}{}{s}{k}\genfrac{[}{]}{0pt}{}{t}{k}\\\nonumber
& = (-1)^{i+j+k}[i+j+k+1]\genfrac{[}{]}{0pt}{}{i+j+k}{k} \Big/ \genfrac{[}{]}{0pt}{}{s}{k}\genfrac{[}{]}{0pt}{}{t}{k}
\end{align}
With the language of trivalent nodes, the basis of $\Delta_{\boldsymbol{\mu}}(m)$ given by defect $m$ walks over ${\boldsymbol{\mu}}$ has a new description.
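Before moving on, the closed formula for $\Theta$ can be sanity-checked numerically by evaluating the quantum integers at a generic real value of $q$ (so that $\delta = q + q^{-1}$ is not at a root of unity) and comparing against the special cases listed above. A short Python sketch (the helper names and the sample value $q = 1.3$ are our own choices):

```python
from math import prod

q = 1.3   # a sample generic value of q (not a root of unity)

def qint(n):
    """Quantum integer [n] = (q^n - q^-n)/(q - q^-1)."""
    return (q**n - q**(-n)) / (q - q**(-1))

def qfact(n):
    """Quantum factorial [n]! (the empty product gives [0]! = 1)."""
    return prod(qint(i) for i in range(1, n + 1))

def theta(r, s, t):
    """The Kauffman--Lins theta network, via the closed formula above.
    Assumes r + s + t is even and the triple is admissible."""
    i, j, k = (r + s - t) // 2, (r - s + t) // 2, (-r + s + t) // 2
    return ((-1) ** (i + j + k) * qfact(i + j + k + 1)
            * qfact(i) * qfact(j) * qfact(k)
            / (qfact(i + j) * qfact(j + k) * qfact(k + i)))
```

In particular \texttt{theta(r, s, r + s)} agrees with $(-1)^{r+s}[r+s+1]$, and \texttt{theta(2, 2, 2)} with $-[4][3]/[2]^2$.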
\begin{example} The tableau in \cref{eg:walk1} can further be described by the network \begin{center} \begin{tikzpicture} \draw (-0.5,0) -- (8.5,0); \draw[very thick] (0,0) to (1-0.07, 1-0.07); \node at (0.15,0.5) {3}; \node at (1.35,1.7) {3}; \node at (2.35,2.7) {3}; \node at (3.35,3.7) {5}; \draw[very thick] (2,0) to (1+0.07, 1-0.07); \node at (1.9,0.5) {2}; \draw[very thick] (1,1) circle (0.1); \draw[very thick] (1+0.07,1+0.07) to (2-0.07, 2-0.07); \draw[very thick] (4,0) to (2+0.07, 2-0.07); \node at (3.9,0.5) {4}; \draw[very thick] (2,2) circle (0.1); \draw[very thick] (2+0.07,2+0.07) to (3-0.07, 3-0.07); \draw[very thick] (6,0) to (3+0.07, 3-0.07); \node at (5.9,0.5) {4}; \draw[very thick] (3,3) circle (0.1); \draw[very thick] (3+0.07,3+0.07) to (4-0.07, 4-0.07); \draw[very thick] (8,0) to (4+0.07, 4-0.07); \node at (7.9,0.5) {2}; \draw[very thick] (4,4) circle (0.1); \draw[very thick] (4.,4.07) to (4,5); \node at (4.3,4.5) {3}; \draw[fill=white] (-0.1,0.2) rectangle (0.5,0); \draw[fill=white] (1.5,0.2) rectangle (2.1,0); \draw[fill=white] (3.5,0.2) rectangle (4.1,0); \draw[fill=white] (5.5,0.2) rectangle (6.1,0); \draw[fill=white] (7.5,0.2) rectangle (8.1,0); \end{tikzpicture} \end{center} We will call this the \emph{ladder} form of the tableaux. 
This could, should we wish, be taken to the extreme: \begin{center} \begin{tikzpicture}[scale=0.7] \draw[thick] (1,1) -- (2,2); \draw[thick] (1,0) -- (1,1); \draw[thick] (2,0) -- (2,2); \draw[thick] (3,0) -- (3,3); \draw[thick] (4,0) -- (4,2); \draw[thick] (5,0) -- (5,3); \draw[thick] (6,0) -- (6,2); \draw[thick] (7,0) -- (7,1); \draw[thick] (8,0) -- (8,2); \draw[thick] (9,0) -- (9,3); \draw[thick] (10,0) -- (10,2); \draw[thick] (11,0) -- (11,3); \draw[thick] (12,0) -- (12,4); \draw[thick] (13,0) -- (13,5); \draw[thick] (14,0) -- (14,4); \draw[thick] (15,0) -- (15,3); \draw[very thick] (2,2) --(3,3) -- (4,2) -- (5,3) -- (7,1) -- (9,3) -- (10,2) -- (13,5) -- (15,3) -- (16,3); \draw (-0.5,0) -- (16.5,0); \draw[very thick, fill=white] (1,1) circle (0.1); \draw[very thick, fill=white] (2,2) circle (0.1); \draw[very thick, fill=white] (3,3) circle (0.1); \draw[very thick, fill=white] (4,2) circle (0.1); \draw[very thick, fill=white] (5,3) circle (0.1); \draw[very thick, fill=white] (6,2) circle (0.1); \draw[very thick, fill=white] (7,1) circle (0.1); \draw[very thick, fill=white] (8,2) circle (0.1); \draw[very thick, fill=white] (9,3) circle (0.1); \draw[very thick, fill=white] (10,2) circle (0.1); \draw[very thick, fill=white] (11,3) circle (0.1); \draw[very thick, fill=white] (12,4) circle (0.1); \draw[very thick, fill=white] (13,5) circle (0.1); \draw[very thick, fill=white] (14,4) circle (0.1); \draw[very thick, fill=white] (15,3) circle (0.1); \draw[fill=white] (0.7,0.2) rectangle (3.3,0); \draw[fill=white] (3.7,0.2) rectangle (5.3,0); \draw[fill=white] (5.7,0.2) rectangle (9.3,0); \draw[fill=white] (9.7,0.2) rectangle (13.3,0); \draw[fill=white] (13.7,0.2) rectangle (15.3,0); \node at (1.4,1.8) {1}; \node at (2.4,2.8) {2}; \node at (3.6,2.8) {3}; \node at (4.4,2.8) {2}; \node at (5.6,2.8) {3}; \node at (6.6,1.8) {2}; \node at (7.4,1.8) {1}; \node at (8.4,2.8) {2}; \node at (9.6,2.8) {3}; \node at (10.4,2.8) {2}; \node at (11.4,3.8) {3}; \node at (12.4,4.8) {4}; 
\node at (13.6,4.8) {5};
\node at (14.6,3.8) {4};
\node at (15.6,3.3) {3};
\end{tikzpicture}
\end{center}
Here the link with a ``walk'' in the traditional sense is made explicit. Note that in both these examples, we have drawn links in $\Delta_\mathbf{15}(3)$. The corresponding elements of $\Delta_{(3,2,4,4,2)}(3)$ would have ${\rm JW}$ projectors at the places marked with boxes.
\end{example}
Let us now work over $\mathbb{Q}(\delta)$. The key operation we will be undertaking is to replace open nodes in ladder forms with closed ones. This change of basis introduces a number of Jones-Wenzl idempotents, which may not be defined over $k$ (even if ${\boldsymbol{\mu}}$ is Eve). However, the key result of this section is the form of the determinant of the Gram matrix for $\Delta_{\boldsymbol{\mu}}(m)$, and that is independent of the underlying field.

To be clear, this takes a ``diagram'' element of $\Delta_{\boldsymbol{\mu}}(m)$ (which we will identify with the tableau $\mathbf{t}$ and the walk ${\boldsymbol{\rho}}$) and replaces it by a morphism in $\operatorname{Hom}_{{\mathscr{TL}}}(\underline{n}, \underline{m})$. We will quotient this by any morphisms factoring through objects less than $\underline m$ to get our resulting element of $\Delta_{\boldsymbol{\mu}}(m)$.
\begin{proposition}\cite[Lemma 4.6]{flores_peltola_2018b}
The set of all ladders with filled-in trivalent nodes forms a basis for $\Delta_{\boldsymbol{\mu}}(m)$.
\end{proposition}
\begin{proof}
A sketch of the proof is to introduce a partial order on the tuples $\{{\boldsymbol{\rho}}\}$ (lexicographically) and then to show that the introduction of Jones-Wenzl idempotents along the diagonal gives terms that are smaller.
\end{proof}
\begin{proposition}\cite[Proposition 4.7]{flores_peltola_2018b}
The determinant of the Gram matrix has the form
\begin{equation}
\det G^m_{\boldsymbol{\mu}} = \prod_{{\boldsymbol{\rho}}}\prod_{i=1}^{r-1} \frac{\Theta(\rho_i, \rho_{i+1}, \mu_{i+1})}{[\rho_{i+1}+1]}
\end{equation}
where the product is over all walks ${\boldsymbol{\rho}}$ over ${\boldsymbol{\mu}}$.
\end{proposition}
Unfortunately, in almost all cases, this form of the determinant is unwieldy.
\begin{example}~
\begin{enumerate}[(i)]
\item Recall that the form of the determinant for the cell modules of ${\rm TL}_n = {\rm TL}_{\mathbf{n}}$ is~\cite{ridout_saint_aubin_2014}
\begin{equation}
\det G_n^m = \prod_{j = 1}^{\frac{n-m}{2}} \left( \frac{[m + j + 1]}{[j]} \right) ^ {\dim \Delta_n(m + 2j)}.
\end{equation}
Now, notice that if $\rho_{i+1} = \rho_i + 1$ then $\Theta(\rho_i, \rho_{i+1}, 1)/[\rho_{i+1}+1] = 1$, and if $\rho_{i+1} = \rho_i - 1$ then $\Theta(\rho_i, \rho_{i+1}, 1)/[\rho_{i+1}+1] = [\rho_i+1]/[\rho_i]$. Let $R^+$ be the set of walks over $\mathbf{n-1}$ that end at $m-1$ and $R^-$ be those that end at $m+1$ (note that either of these may be empty). This partitions the set of all walks ending at $m$ into those that finish with $\rho_n = \rho_{n-1}+1$ and those with $\rho_n = \rho_{n-1}-1$. Using induction on $n$,
\begin{align*}
\det G_{\mathbf{n}}^m &= \left( \prod_{{\boldsymbol{\rho}} \in R^+} \prod_{i = 1}^{r-2} \frac{\Theta(\rho_i, \rho_{i+1}, \mu_{i+1})}{[\rho_{i+1}+1]} \right) \left( \prod_{{\boldsymbol{\rho}} \in R^-} \prod_{i = 1}^{r-2} \frac{\Theta(\rho_i, \rho_{i+1}, \mu_{i+1})}{[\rho_{i+1}+1]} \frac{[m+2]}{[m+1]} \right)\\
&= \left( \det G_{n-1}^{m-1} \right) \left( \det G_{n-1}^{m+1} \right) \left( \frac{[m+2]}{[m+1]} \right)^{\dim \Delta_{n-1}(m+1)}
\end{align*}
which we recognise as the recursion relation for the Gram determinants of ${\rm TL}_n$ as shown in~\cite[equation 4.19a]{ridout_saint_aubin_2014}.
\item If ${\boldsymbol{\mu}} = (n)$ then the only path ${\boldsymbol{\rho}}$ over ${\boldsymbol{\mu}}$ is ${\boldsymbol{\rho}} = (n)$ and the product is empty. Note that the only valid $m$ in this situation is $n$ itself.
\item If ${\boldsymbol{\mu}} = (\mu_1, \mu_2) \vdash n$, where without loss of generality $\mu_1 \ge \mu_2$, then again there is a unique ${\boldsymbol{\rho}}$ for any $m$, given by ${\boldsymbol{\rho}} = (\mu_1, m)$. The determinant is thus $\Theta(\mu_1, m, \mu_2)/[m+1]$, as should be expected by evaluating the following morphism modulo diagrams of non-maximal through degree.
\begin{center}
\begin{tikzpicture}
\draw[very thick] (0.5,0.2) arc (0:180:0.5);
\draw[very thick] (0.5,-0.2) arc (0:-180:0.5);
\draw[very thick] (0.75,1) -- (0.75,-1);
\draw[very thick] (-0.75,1) -- (-0.75,-1);
\draw[fill=white] (-1,0.2) rectangle (-.3, -.2);
\draw[fill=white] (1,0.2) rectangle (.3, -.2);
\node at (0,0.4) {$i$};
\node at (0,-0.4) {$i$};
\node at (-.65,0) {$\mu_1$};
\node at (.65,0) {$\mu_2$};
\end{tikzpicture}
\end{center}
This leads to some interesting behaviour. For example, if ${\boldsymbol{\mu}} = (3,3)\vdash 6 = n$, then $[5] \mid \det G_{\boldsymbol{\mu}}^2$ but $[5]\nmid \det G_{\boldsymbol{\mu}}^m$ for any other $m\in \{0,2,4,6\}$. We can evaluate \cref{eq:two_part_iso} (rescaling the generator to clear the denominator $[\mu_1][\mu_2]$):
\begin{equation}
{\rm TL}_{(3,3)}\simeq k[X]\big/\big(X(X-[6])(X-[2][5])(X-[3][4])\big).
\end{equation}
If we specialise to a pointed ring where $\ell = 5$, then this simplifies to $k[X]/\big(X^2(X+[4])(X-[2])\big)$ (using $[5] = 0$, $[6] = -[4]$ and $[3][4] = [2]$), which has three simple modules, in line with the three values of $m$ for which the Gram determinant does not vanish. This highlights a key feature. Though $\Lambda_{\boldsymbol{\mu}}$ may be ``easy'' to calculate and may contain a contiguous stretch of weights, $(\Lambda_{\boldsymbol{\mu}})_0$ will not be so well behaved.
\item If ${\boldsymbol{\mu}} = (k, 1,\ldots, 1) \vdash n$ and $b = n-k$, we recover a recursion formula almost identical to that in part (i) and, after calculating the base case of $b = 0$, find that \begin{equation} \det G_{(k,1^b)}^m = \prod_{j = 1}^{\lfloor k/2\rfloor} \left( \frac{[j]}{[k-j+1]} \right)^{\dim \Delta_{(k-2j,1^{k+2j})}(m)} \prod_{j = 1}^{(n-m)/2}\left( \frac{[m + j + 1]}{[j]} \right)^{\dim \Delta_{(k,1^b)}(m+2j)} \end{equation} \end{enumerate} \end{example} The observation in \cref{eq:exact_restrict_delta} has an interesting consequence. \begin{lemma} If \begin{equation*} \prod_{j = 1}^{\frac{n-m}{2}} \left( \frac{[m + j + 1]}{[j + 1]} \right) ^ {\dim \Delta_n(m + 2j)}\neq 0 \end{equation*} in $(k, \delta)$, then \begin{equation*} \prod_{{\boldsymbol{\rho}}}\prod_{i=1}^{r-1} \frac{\Theta(\rho_i, \rho_{i+1}, \mu_{i+1})}{[\rho_{i+1}+1]} \neq 0 \end{equation*} for all Eve ${\boldsymbol{\mu}}$ such that $m \in (\Lambda_{\boldsymbol{\mu}})_0$. \end{lemma} \section{Valenced Temperley-Lieb or Jones-Wenzl Algebras}\label{sec:valenced} A composition of $n \in \mathbb{N}$ into $r$ parts, written ${\boldsymbol{\mu}}\vdash n$, is a tuple $(\mu_1, \mu_2, \ldots, \mu_r)$ such that $0\le \mu_i$ and $\sum_{i=1}^r \mu_i = n$. The sub-tuple $(\mu_1, \mu_2, \ldots, \mu_{r-1})$ will be written as $\hat{\boldsymbol{\mu}} \vdash n-\mu_r$. We will denote the distinguished composition $(1,1,\ldots, 1) \vdash n$ as $\boldsymbol{n}$. Throughout, fix a pointed ring $(k,\delta)$ of $(\ell,p)$-torsion. Recall the definition of ${\mathscr{TL}}$ and let $e^{n}$ be the $(\ell,p)$-Jones-Wenzl idempotent on $n$ strands over $k$. Note that $e^{n}$ is not ``the'' Jones-Wenzl idempotent, ${\rm JW}_n$, unless $n < \ell$ or $n = a p^{(r)}-1$ for $1\le a \le \ell$ if $r=0$ and $1\le a \le p$ else. Such $n$ will be called Eve.
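As a sanity check on the formulas in example (i) above, the closed product formula and the recursion derived from it can be compared numerically at a generic (non-root-of-unity) value of $q$, with $[n] = (q^n - q^{-n})/(q - q^{-1})$ and the usual ballot-number formula for $\dim \Delta_n(m)$. The following Python sketch (helper names are ours, not from the paper) verifies the recursion for small $n$:

```python
import math

q = 1.2  # generic value of q, away from roots of unity

def qint(n):
    # quantum integer [n] = (q^n - q^{-n}) / (q - q^{-1})
    return (q**n - q**(-n)) / (q - q**(-1))

def dim_delta(n, m):
    # dimension of the cell module Delta_n(m) (ballot-number formula)
    if m < 0 or m > n or (n - m) % 2:
        return 0
    k = (n - m) // 2
    return math.comb(n, k) - (math.comb(n, k - 1) if k >= 1 else 0)

def det_gram(n, m):
    # closed product formula for det G_n^m; a zero-dimensional cell module
    # contributes the empty product 1
    if dim_delta(n, m) == 0:
        return 1.0
    d = 1.0
    for j in range(1, (n - m) // 2 + 1):
        d *= (qint(m + j + 1) / qint(j)) ** dim_delta(n, m + 2 * j)
    return d

# recursion: det G_n^m = det G_{n-1}^{m-1} det G_{n-1}^{m+1} ([m+2]/[m+1])^{dim Delta_{n-1}(m+1)}
for n in range(2, 9):
    for m in range(n % 2, n + 1, 2):
        lhs = det_gram(n, m)
        rhs = (det_gram(n - 1, m - 1) * det_gram(n - 1, m + 1)
               * (qint(m + 2) / qint(m + 1)) ** dim_delta(n - 1, m + 1))
        assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

Setting the determinant of an empty cell module to $1$ implements the empty products that occur when $R^+$ or $R^-$ is empty.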
For any ${\boldsymbol{\mu}} \vdash n$, let $e^{\boldsymbol{\mu}}$ be the idempotent on $n$ strands given by \begin{equation} e^{{\boldsymbol{\mu}}} = e^{\mu_1} \otimes \cdots \otimes e^{\mu_r}. \end{equation} An idempotent $e^{{\boldsymbol{\mu}}}$ will be termed Eve if all $\mu_i$ are Eve. Our interest lies in ${\rm TL}_{\boldsymbol{\mu}} = e^{\boldsymbol{\mu}}\cdot {\rm TL}_n \cdot e^{\boldsymbol{\mu}}$. This is a $k$-subspace of ${\rm TL}_n$, and an algebra with unit $e^{\boldsymbol{\mu}}$. That makes it a (non-unital) subalgebra of ${\rm TL}_n$ but it is not a unital subalgebra (since the units differ). It is isomorphic to the endomorphism ring $\operatorname{End}_{{\rm TL}_n}({\rm TL}_n \cdot e^{\boldsymbol{\mu}})$. \begin{remark} In~\cite{flores_peltola_2018a, flores_peltola_2018b}, the algebra ${\rm TL}_{\boldsymbol{\mu}}$ is called the ``Jones-Wenzl algebra''. The Valenced Temperley-Lieb algebra defined in~\cite{flores_peltola_2018b} is not an associative algebra in general, and in fact, multiplication is only defined when ${\boldsymbol{\mu}}$ is Eve. However, when ${\boldsymbol{\mu}}$ is Eve, the concepts coincide, which is why we consider the terms to be interchangeable whenever the word ``algebra'' is involved. \end{remark} \begin{example}\label{eg:rings_tl_mu} We present some of the rings ${\rm TL}_{\boldsymbol{\mu}}$ for simple ${\boldsymbol{\mu}}$ as well as some of their representation theory. \begin{enumerate}[(i)] \item If ${\boldsymbol{\mu}} = \boldsymbol{n}$ then $e ^{\boldsymbol{\mu}} = {\operatorname{id}}$ so ${\rm TL}_{\boldsymbol n} = {\rm TL}_n$. \item On the other hand, if $n$ is Eve and ${\boldsymbol{\mu}} = (n)$ then ${\rm TL}_{\boldsymbol{\mu}} \simeq k$. The case when $n$ is not Eve is a subject of~\cite{tubbenhauer_wedrich_2019} and \cref{sec:lp_hecke} for $\ell = p$. It is shown that \begin{equation}\label{eq:two_part_eve_simeq} {\rm TL}_{(n)} \simeq k[X_1,\ldots, X_r]/(X_1^2,\ldots, X_r^2). 
\end{equation} This algebra has a single simple module, which is linear, on which the images of the $X_i$ act as zero. \item Suppose that ${\boldsymbol{\mu}} = (\mu_1, \mu_2)\vdash n$ is such that $\mu_i < \ell$. In~\cite[Lemma 3.5]{flores_peltola_2018a} it is shown that this is a quotient of a polynomial algebra with one generator. The proof supplied there determines the form of this algebra but does not state the result, so we show it below. The element $U_1$, defined as $e^{\boldsymbol{\mu}}\cdot u_{\mu_1} \cdot e^{\boldsymbol{\mu}}$ generates the algebra, and if $U_k$ is the element $e^{\boldsymbol{\mu}}\cdot u^{(k)}_{\mu_1}\cdot e^{\boldsymbol{\mu}}$, then $\{U_k \;:\; 0\le k \le \min\{\mu_1,\mu_2\}\}$ gives a $k$-basis. Here $u^{(k)}_i$ is the (possibly not simple) cap of $k$ strands centred just after $i$. It is shown that\todohidden{This isn't quite what is written there, but is what the computer says is true. Check the working in FP18} \begin{equation}\label{eq:two_part_recurse} U_1 \cdot U_k = \frac{[k][n - k + 1]}{[\mu_1][\mu_2]} U_k + \frac{[\mu_1-k][\mu_2-k]}{[\mu_1][\mu_2]} U_{k+1}. \end{equation} Since $\mu_i < \ell$, we see that $[\mu_i]\neq 0$ in our ring $k$ and so we may consider the alternative normalisation $\widetilde U_k = U_k [\mu_1][\mu_2]$ which descends to $k$, so that \begin{equation} \widetilde U_{k+1} =\frac{\widetilde U_1 - [k][n-k+1]}{[\mu_1-k][\mu_2-k]}\cdot \widetilde U_k. \end{equation} It is clear that $\widetilde U_{k}$ is non-zero for all $0\le k < \min\{{\boldsymbol{\mu}}\}$ but that if we extend this definition, $\widetilde U_{k+1} = 0$. Hence a polynomial satisfied by $\widetilde U_1$ is \begin{equation} f(X) = \prod_{r = 0}^{\min\{{\boldsymbol{\mu}}\}} \left(X - [r][n-r+1]\right). \end{equation} Thus ${\rm TL}_{\boldsymbol{\mu}} \simeq k[X]/(f(X))$. There are $\min\{{\boldsymbol{\mu}}\}$ simple, linear modules on which $X$ acts as the scalar $[r][n-r+1]$. We consider this example in more detail in \cref{sec:two-part}.
\item Let ${\boldsymbol{\mu}} = (k,1,\ldots, 1) \vdash n$ and set $b = n - k$. When ${\boldsymbol{\mu}}$ is Eve, this is the ``seam algebra'' studied in~\cite{langlois_remillard_saint_aubin_2020}. \end{enumerate} \end{example} As in \cref{sec:cell_hecke}, we inherit cellular data for ${\rm TL}_{\boldsymbol{\mu}}$ from ${\rm TL}_n$. Following \cref{sec:notation}, we let $L_{\boldsymbol{\mu}}(m)$, $\Delta_{\boldsymbol{\mu}}(m)$ and $P_{\boldsymbol{\mu}}(m)$ be the simple, cell and projective modules indexed by $m$ respectively.
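The two-part computation in (iii) can be checked numerically: under the normalisation above, multiplication by $\widetilde U_1$ acts on the basis $\{\widetilde U_k\}$ through a bidiagonal matrix with diagonal entries $[k][n-k+1]$, so $f$ must annihilate that matrix; and specialising ${\boldsymbol{\mu}} = (3,3)$ at $\ell = 5$, the four roots $0$, $[6]$, $[2][5]$, $[3][4]$ collapse to three distinct values, matching the three simple modules found earlier. A Python sketch of both checks (a consistency check of ours, not part of the paper):

```python
import cmath

def qint(n, q):
    # quantum integer [n] for a given q
    return (q**n - q**(-n)) / (q - q**(-1))

def matmul(A, B):
    d = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(d)) for j in range(d)] for i in range(d)]

# generic q, mu = (3, 4): multiplication by U~_1 is bidiagonal in the basis {U~_k},
# with diagonal [k][n-k+1] and subdiagonal [mu1-k][mu2-k] (the recursion, after
# clearing the [mu1][mu2] denominators)
q, mu1, mu2 = 1.3, 3, 4
n, m = mu1 + mu2, min(mu1, mu2)
M = [[0.0] * (m + 1) for _ in range(m + 1)]
for k in range(m + 1):
    M[k][k] = qint(k, q) * qint(n - k + 1, q)
    if k < m:
        M[k + 1][k] = qint(mu1 - k, q) * qint(mu2 - k, q)

# f(X) = prod_r (X - [r][n-r+1]) annihilates this matrix
f_of_M = [[float(i == j) for j in range(m + 1)] for i in range(m + 1)]
for r in range(m + 1):
    s = qint(r, q) * qint(n - r + 1, q)
    shifted = [[M[i][j] - (s if i == j else 0.0) for j in range(m + 1)] for i in range(m + 1)]
    f_of_M = matmul(f_of_M, shifted)
assert max(abs(x) for row in f_of_M for x in row) < 1e-8

# mu = (3, 3) at l = 5: q a primitive 10th root of unity makes [5] = 0, and the
# roots [r][7-r] for r = 0..3 take only three distinct values
q5 = cmath.exp(1j * cmath.pi / 5)
assert abs(qint(5, q5)) < 1e-12
roots = [qint(r, q5) * qint(7 - r, q5) for r in range(4)]
distinct = {round(z.real, 9) for z in roots}
assert len(distinct) == 3
```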
\section*{Introduction} Over the past decade our understanding of the internal structure of jets has increased tremendously. Thanks to the applications of the methods of perturbative QCD, the field has become mature and jet substructure algorithms that are, at the same time, performant and robust have been developed. Observables originally designed for searching new physics are now the target of unfolded measurements that can be compared to theoretical predictions, at the precision level, see \textit{e.g.}~\citep{Aad:2020zbq,Aad:2019vyi,Aad:2019onw,Aaboud:2017qwh,Aaboud:2019aii,Sirunyan:2018xdh,Sirunyan:2018gct,CMS:2021vsp}. Furthermore, ideas developed by the jet substructure community have found applications in other contexts of particle physics. For instance, after it was realised that the momentum fraction that characterises the splitting identified by the SoftDrop\xspace algorithm~\citep{Larkoski:2014wba} follows a distribution dictated by the QCD splitting functions~\citep{Larkoski:2015lea, Larkoski:2017bvj, Tripathee:2017ybi, Cal:2021fla}, this observable has become a standard way of probing interactions with the quark/gluon plasma, see e.g.~\citep{Chen:2021osv} and references therein. This Letter is part of an incipient effort to find innovative ways of applying jet substructure techniques to the broader LHC physics program and to provide the necessary tools so that this cross-pollination can bear its fruits. In this context, measurements of the internal structure of highly energetic jets produced in association with a boosted electroweak boson offer an ideal playground for such studies, because the leptonic decay of the $Z$ boson offers a valuable trigger.
These events are not only relevant for background studies~\footnote{This process represents the main background for the associated production of a Higgs and an electroweak boson in the boosted regime, where the decay products of the Higgs are reconstructed in one jet.} but, as we shall detail in the following, they open up novel possibilities to employ jet substructure for Standard Model measurements. Substructure observables, such as jet angularities~\cite{Larkoski:2014pca}, measure the pattern of the had\-ronic activity within a jet. For this reason, they are often employed as tagging variables that aim to distinguish jets that have been originated by elementary particles carrying different colour degrees of freedom, \textit{e.g.} colour singlets versus QCD partons or, even, quarks versus gluons. Although there exist more powerful quark/gluon (\emph{q/g}) discriminants than a cut on a jet angularity, this procedure is theoretically well-defined, infra-red and collinear (IRC) safe and, hence, its behaviour can be understood with perturbative methods. In the following, we study $Z$+jet production, requiring that the jet with the highest transverse momentum has been identified as quark-initiated. As preliminarily explored in \citep{Amoroso:2020lgh}, by tagging the final state, we indirectly bias particular partonic sub-processes. If the leading jet is tagged as quark-initiated, then, at leading order, the subprocess that features a quark and gluon in the initial state is enhanced, thus providing a new handle on the determination of the gluon parton distribution function (PDF). The main result of this study is the calculation of the transverse momentum distribution of the $Z$ boson in events where the leading jet is identified as quark-initiated. Because of IRC safety of the tagging procedure, we are able to compute this distribution at a well-defined and, in principle improvable, accuracy.
Therefore, this observable could be directly included in standard fits of PDFs. The results presented in this Letter account for the resummation of the tagging parameter at the \emph{next-to-leading logarithmic} (NLL) accuracy, matched to fixed order predictions at $O\left(\alpha_s^2 \right)$ with respect to the Born contribution, henceforth denoted as next-to-leading order (NLO)~\footnote{The fixed-order counting is slightly different from what is usually employed in standard transverse-momentum distributions. This has to do with the fact that, in order to obtain a non-vanishing value of any angularity, the jet must have at least two constituents. Thus, in what follows we will refer to the lowest-order ($2 \to 2$) scattering as Born approximation, while (N)LO will be reserved for the contributions with one (two) additional emission(s).}. \begin{figure*} \includegraphics[width=0.33\textwidth]{figs/fig_1_alpha05_muNP1_final.pdf}% \hfill% \includegraphics[width=0.33\textwidth]{figs/fig_1_alpha10_muNP1_final.pdf} \hfill% \includegraphics[width=0.33\textwidth]{figs/fig_1_alpha20_muNP1_final.pdf} \caption{Receiver Operating Characteristic (ROC) curves for different values of the angularity exponent $\alpha=0.5,1,2$, from left to right obtained with MC simulation using P\protect\scalebox{0.8}{YTHIA}\xspace. In each plot, different curves refer to ungroomed jets and SoftDrop\xspace jets with $z_{\text{cut}}=0.1$ and $\beta=0,1$. For comparison we also show the ROC curves corresponding to a random classifier (dashed grey) and the CS ones (dashed black). Straight lines represent the target performance, as detailed in the text. At the bottom of each plot we show the ratio between each ROC curve obtained at hadron-level and the corresponding parton-level result. The dashed portion of these lines indicates the region sensitive to splittings with relative transverse momentum below 1~GeV, where NP effects are expected to be sizeable.
This region of phase-space is sensitive to shower cut-off effects, which cause the observed kinks. } \label{fig:roc-curves} \end{figure*} \section*{Enhancing the gluon contribution} We start by considering the channel fractions $f_{ij}$ that measure, to a given order in perturbation theory, the contribution to the $Z$ (or jet) transverse momentum distributions from the subprocess initiated by partons $i$ and $j$, which, for brevity, we indicate as $\sigma_{ij}^a$, where $a$ could be the $Z$ boson or the leading jet $J$: \begin{equation}\label{frac-before-tagging} f_{ij}^a=\frac{\sigma_{ij}^a}{\sigma^a_{qq}+\sigma^a_{q g}+\sigma_{gg}^a}, \quad a=Z, J, \end{equation} where $qq$ and $q g$ include any combination of quarks and anti-quarks. Our aim is to study how tagging a quark-jet in the final-state changes the relative contributions of the various partonic subprocesses. To this purpose, we define \begin{equation}\label{frac-after-tagging} \widetilde{f}_{ij}^a=\frac{\widetilde{\sigma}_{ij}^a}{\widetilde{\sigma}^a_{qq}+\widetilde{\sigma}^a_{q g}+\widetilde{\sigma}_{gg}^a}, \quad a=Z, J, \end{equation} where the tilde indicates ``after tagging''. Furthermore, in what follows we will be interested in the fractions of the event (before and after tagging) which feature at least one gluon in the initial state. To this purpose, we define \begin{equation} \label{gluon-f-before-after} f_g^a= f_{qg}^a +f_{gg}^a, \quad \text{and} \quad \widetilde{f}_g^a= \widetilde{f}_{qg}^a +\widetilde{f}_{gg}^a. \end{equation} We will refer to this fraction as \emph{gluon channel purity}. We begin our discussion in a simplified setting, which is nevertheless enough to capture the essential physics points. We will then validate our conclusions using Monte Carlo (MC) parton shower simulations. If we consider the Born approximation, then we have no $gg$ contribution and, because the $Z$ boson and the jet are back-to-back, we obtain $f_{ij}^Z=f_{ij}^J$. 
Fragmentation of the hard partons can lead to transverse momentum imbalance; however, we can limit ourselves to the leading logarithmic (LL) regime, where all parton splittings happen in the soft and collinear limit, with no recoil.~\footnote{We will lift this approximation when dealing with actual simulations. However, we can always choose a set of kinematical cuts that favours the back-to-back configuration.} Therefore, dropping the superscript $a$, we have \begin{equation}\label{frac-gluon-before-tagging} f_g =\frac{\sigma_{qg}}{\sigma_{qq}+\sigma_{q g}}. \end{equation} We note that, within our approximation, the fraction of events with a (properly defined) final-state quark can be considered as a proxy for the gluon channel purity. We find that, for $p_{t\,J} \ge 100$~GeV, $f_g \simeq 0.85$ and it exhibits a rather mild dependence on the transverse momentum. Next, we note that in our approximation we simply have \begin{align}\label{frac-gluon-after-tagging} \widetilde{f}_{g}&=\frac{\varepsilon_q \sigma_{qg}}{\varepsilon_g \sigma_{qq}+ \varepsilon_q \sigma_{q g}} =\left(1+ \frac{1-f_g}{f_g}\frac{\varepsilon_g}{\varepsilon_q} \right)^{-1}, \end{align} where $\varepsilon_q$ is the efficiency of the tagger to correctly label quark jets and $\varepsilon_g$ the false-positive rate, which measures how often gluon jets are wrongly labelled as quarks. We note that a perfect quark-tagger with $\varepsilon_q=1$ and $\varepsilon_g=0$ returns tagged events that have been originated by one gluon in the initial state, i.e.\ $\widetilde{f}_{g}=1$. On the other hand, with an efficiency of 50\%, which corresponds to tossing a coin, we recover Eq.~(\ref{frac-gluon-before-tagging}).
A rather common class of \emph{q/g}~taggers exhibits at LL a property known as Casimir scaling (CS)~\citep{Larkoski:2014pca}, namely the tagging efficiencies are related by $ \varepsilon_g = \left( \varepsilon_q \right)^{C_A/C_F}, $ where the exponent is given by the ratio of the Casimir operators in the appropriate colour representation, $C_F$ for quarks and $C_A$ for gluons. This property emerges because the LL distributions of both quarks and gluons exhibit the same Sudakov-like behaviour, with a coefficient determined by the appropriate colour factor. Thus, for a CS tagger, we find \begin{equation}\label{frac-gluon-after-tagging-CS} \widetilde{f}_{g}^\text{CS}=\left(1+ \frac{1-f_g}{f_g}{\varepsilon_q }^{\frac{C_A}{C_F}-1} \right)^{-1}. \end{equation} For instance, a CS tagger with $\varepsilon_q=0.65$ yields $\widetilde{f}_{g}\simeq0.9$. This can be pushed to 0.95 if the tighter working point $\varepsilon_q=0.35$ is considered. Taggers that obey CS are not the most performant, but they are interesting for us because they are under good theoretical control. The efficiencies $\varepsilon_i$ can be computed in QCD using resummed perturbation theory to a well-defined and, in principle improvable, theoretical accuracy. Furthermore, the inclusion of higher logarithmic corrections generally leads to an improvement with respect to strict CS, so that, depending on the specifics of the tagging procedure, the target gluon channel purity $\widetilde{f}_{g}\simeq0.95$ can be achieved at a reasonable working point. In the following we construct a CS tagger that is based on a particular class of jet substructure observables known as jet angularities.
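The working points quoted above follow directly from the purity formula and the CS relation. As a quick numerical illustration (variable names are ours; $f_g = 0.85$ is the value quoted in the text):

```python
# gluon-channel fraction before tagging, as quoted in the text
f_g = 0.85
CA, CF = 3.0, 4.0 / 3.0  # QCD colour factors

def purity(eps_q, eps_g):
    # gluon channel purity after tagging
    return 1.0 / (1.0 + (1.0 - f_g) / f_g * eps_g / eps_q)

# limiting cases: a perfect tagger gives purity 1; a coin toss returns f_g
assert abs(purity(1.0, 0.0) - 1.0) < 1e-12
assert abs(purity(0.5, 0.5) - f_g) < 1e-12

def purity_cs(eps_q):
    # Casimir-scaling tagger: eps_g = eps_q^(C_A/C_F)
    return purity(eps_q, eps_q ** (CA / CF))

assert 0.90 < purity_cs(0.65) < 0.92  # roughly 0.9, as quoted
assert 0.94 < purity_cs(0.35) < 0.96  # roughly 0.95, the tighter working point
```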
\section*{Jet angularities as a quark/gluon tagger} Jet angularities~\citep{Larkoski:2014pca} are defined as \begin{equation}\label{eq:ang-def} \lambda_\alpha= \sum_{i \in \text{jet}} \frac{p_{t,i}}{p_{t\,J}} \left(\frac{\Delta_i}{R_0} \right)^\alpha, \end{equation} where the sum runs over the constituents of the hardest jet in the event and $ \Delta_i=\sqrt{(y_i-y_J)^2+(\phi_i-\phi_J)^2}$ is the distance in the azimuth-rapidity plane of particle $i$ from the jet axis. We define jets with the anti-$k_t$ clustering algorithm~\citep{Cacciari:2008gp} with radius $R_0$ and standard $E$-scheme for recombination. IRC safety requires $\alpha>0$, while angularities with $\alpha \le 1$ are sensitive to recoil against soft emissions~\citep{Larkoski:2013eya}. In order to circumvent this issue, when $\alpha \le 1$, the jet axis is obtained using the Winner-Take-All (WTA) recombination scheme~\citep{Larkoski:2014uqa}. We also consider groomed jets. In this case, we recluster the jet with the Cambridge-Aachen algorithm~\citep{Dokshitzer:1997in, Wobisch:1998wt} and apply the SoftDrop\xspace grooming algorithm with parameters $z_{\text{cut}}$ and $\beta$~\citep{Larkoski:2014wba}. The angularity is then computed on the constituents of the groomed jet, with the WTA prescription adopted for angularities with $\alpha \le 1$. Because of the different colour factor, angularity distributions for quark- and gluon-initiated jets peak at different values. We can exploit this separation and define our \emph{q/g}~tagger through a cut on the jet angularity. In particular, a jet with $\lambda_\alpha<\lambda_\text{cut}$ will be labelled as a quark jet: \begin{equation} \widetilde{\sigma}_{ij}^a= \int_0^{\lambda_\text{cut}}d \lambda_\alpha \frac{d {\sigma}_{ij}^a}{d \lambda_\alpha}, \end{equation} where we have introduced the differential angularity distribution.
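The definition above translates directly into code. The sketch below (toy constituents invented for illustration; it measures distances from a fixed axis and ignores the WTA-axis subtlety discussed above) shows the expected behaviour: for constituents within the jet radius, larger $\alpha$ suppresses wide-angle radiation more strongly:

```python
import math

def angularity(constituents, jet_axis, pt_jet, alpha, R0=0.4):
    # lambda_alpha = sum_i (pt_i / pt_J) * (Delta_i / R0)^alpha
    y_J, phi_J = jet_axis
    lam = 0.0
    for pt, y, phi in constituents:
        dphi = (phi - phi_J + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        delta = math.hypot(y - y_J, dphi)
        lam += (pt / pt_jet) * (delta / R0) ** alpha
    return lam

# invented toy jet: a hard core near the axis plus one soft, wide-angle constituent
constituents = [(95.0, 0.01, 0.0), (5.0, 0.25, 0.2)]
values = {a: angularity(constituents, (0.0, 0.0), 100.0, a) for a in (0.5, 1.0, 2.0)}

# with all Delta_i < R0, the angularity decreases as alpha grows
assert values[0.5] > values[1.0] > values[2.0]
# a perfectly collimated jet has vanishing angularity
assert angularity([(100.0, 0.0, 0.0)], (0.0, 0.0), 100.0, 1.0) == 0.0
```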
The implicit dependence of the tagged distribution on the transverse momentum, on the angularity exponent $\alpha$ and, optionally, on the SoftDrop\xspace parameters $z_{\text{cut}}$ and $\beta$, is understood. From the simple CS analysis above, we have concluded that we should work with quark efficiencies $\varepsilon_q\simeq 0.35$, in order to reach a purity of initial-state gluons around 0.95. The value of the angularity cut that is necessary to achieve this working point for the tagger clearly depends on the angular exponent $\alpha$ in Eq.~(\ref{eq:ang-def}) as well as on the parameters $z_{\text{cut}}$ and $\beta$ of the SoftDrop\xspace algorithm, should we wish to employ groomed jets. Different theoretical considerations can guide us with this choice. First of all, we would like to preserve calculability, \textit{i.e.}\ we want to cement our findings in perturbative field theory. Thus, we would like our tagger to be as insensitive as possible to non-perturbative (NP) contributions such as hadronisation corrections and the Underlying Event (UE). Secondly, although we can calculate transverse momentum spectra in resummed perturbation theory~\citep{Kang:2018qra,Kang:2018vgn,Caletti:2021oor}, we aim for perturbative stability. Thus, we favour working points for the tagger for which $\lambda_\text{cut}$ is not too small. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{figs/fig2_HAD_alpha10_Zboson_final.pdf} \cprotect\caption{The initial-state gluon purity before ($f_g$) and after tagging ($\widetilde{f}_g$) obtained with P\protect\scalebox{0.8}{YTHIA}\xspace simulations, as a function of the transverse momentum of the $Z$ boson. } \label{fig:initial-state-fractions} \end{center} \end{figure} In order to turn the above considerations into a quantitative study we use simulated data obtained with the MC event generator P\protect\scalebox{0.8}{YTHIA}\xspace~8.303 \citep{Sjostrand:2014zea}. 
The UE is simulated according to the model presented in~\citep{Sjostrand:1985vv, Sjostrand:1987su,Sjostrand:2004pf} and hadronisation effects according to the Lund string model~\citep{Andersson:1983ia, Sjostrand:1984ic}. Throughout the paper, we use the NNPDF 3.0 NLO set of PDFs \citep{NNPDF:2014otw}. We consider the inclusive production of a pair of oppositely charged muons in proton--proton collisions at $13~\text{TeV}$ centre-of-mass energy, requiring the invariant mass of the muon pair to be between 70 and 110 GeV. Jets are clustered with the anti-$k_t$ algorithm with $R_0=0.4$ and ordered in transverse momentum. Henceforth, the jet will be implicitly considered to be the hardest one and we will refer to the muon-antimuon pair as the $Z$ boson. With this in mind, the fiducial volume is defined following Ref.~\citep{CMS:2021vsp}: $p_{t\,\mu} > 26~\mathrm{GeV}$, $p_{t\, Z}>30\;\text{GeV}$, $ p_{t\,J} > 15~\mathrm{GeV}$, $ |\eta_\mu|<2.4$, and $ |y_{\text{jet}}| < 1.7$. Furthermore, in order to enforce back-to-back configurations, we impose $ \left| \frac{p_{t\,J} - p_{t\, Z}}{ p_{t\,J} + p_{t\, Z}} \right| < 0.3$, $ \left|\phi_{\rm jet}-\phi_Z \right| > 2. $ For event selection and analysis we employ R\protect\scalebox{0.8}{IVET}\xspace~\citep{Buckley:2010ar,Bierlich:2019rhm}. Jet reconstruction is done with F\protect\scalebox{0.8}{AST}J\protect\scalebox{0.8}{ET}\xspace~\citep{Cacciari:2011ma}, and the SoftDrop\xspace implementation in the F\protect\scalebox{0.8}{AST}J\protect\scalebox{0.8}{ET}\xspace~\verb|contrib| is used. We distinguish two different stages of the simulation: ``parton-level'', \textit{i.e.}\ with parton shower effects only, and ``hadron-level'', \textit{i.e.} with hadronisation and UE included. By keeping the two partonic processes of interest separate, we compute Receiver Operating Characteristic (ROC) curves that show the mis-tag rate (gluon efficiency) $\varepsilon_g$ as a function of the signal (quark) efficiency.
They are computed for different values of the angularity exponent $\alpha=0.5, 1, 2$ in the ungroomed case and for SoftDrop\xspace jets with $z_{\text{cut}}=0.1$ and $\beta=0,1$. We show hadron-level results in Fig.~\ref{fig:roc-curves}, as well as the ratios to their parton-level counterparts, which we take as a measure of NP contributions. The dotted portion of the latter indicates that the efficiency $\varepsilon_q$ is dominated by splittings in the non-perturbative region, as determined, for instance, in Ref.~\cite{Caletti:2021oor}. We also show the target line, which fixes the gluon efficiency $\varepsilon_g$ as a function of $\varepsilon_q$, for given $f_g$ and $\widetilde{f}_g$, \begin{equation} \varepsilon_g= \frac{f_g (1-\widetilde{f}_g)}{\widetilde{f}_g(1-f_g)}\varepsilon_q, \end{equation} which is easily derived from Eq.~(\ref{frac-gluon-after-tagging}). The slope shown in Fig.~\ref{fig:roc-curves} is determined by the original $f_g=0.85$ and target gluon purity $\widetilde{f}_g=0.95$. The intersections of each ROC curve with the target line set our tagger working points. Corresponding values of $\lambda_\text{cut}$ are reported in the figure. The choice of the tagger working points is clearly not unique. For instance, we could have optimised the signal-to-background ratio by choosing on each ROC curve the point that is closest to the $(1,0)$ corner. The analysis of Ref.~\citep{Caletti:2021oor} tells us that larger values of $\alpha$ are under better theoretical control. However, as it is clear from Fig.~\ref{fig:roc-curves}, lower values of $\alpha$ have better performance, essentially because of their increased sensitivity to the collinear region. Thus, the choice $\alpha=1$ appears to be a good compromise between performance and robustness. We note that the use of SoftDrop\xspace does not always provide us with improvements on the size of the NP contributions. 
This might be related to the fact that in order to obtain the same efficiency $\varepsilon_q$ we need to cut groomed jets at lower values of $\lambda_\text{cut}$, where NP physics may be more prominent. Furthermore, we note that we are working with a rather small jet radius, which prevents large contributions from the UE. We expect (light) grooming to be beneficial, should one consider larger jet radii. Finally, configurations with aggressive SoftDrop\xspace, i.e.\ $\beta=0$, typically result in worse performance, because important information is groomed away. Thus, in what follows, we shall focus on the $\alpha=1$ case either with no grooming or with $\beta=1$, $z_{\text{cut}}=0.1$. We now quantify the gain in the initial-state gluon purity that we obtain after tagging. Fig.~\ref{fig:initial-state-fractions} shows the fractions $\tilde{f}_g^Z$ as a function of the $Z$ boson transverse momentum for the selected taggers, with $f_g^Z$ also shown for comparison. We first note that the performance of both taggers is very good, leading to gluon channel purities that exceed our 95\% target. Analogous conclusions can also be drawn if we plot our results as a function of $p_{t\,J}$. However, as we will argue shortly, in this context, the transverse momentum distribution of the $Z$ boson is a more robust observable. We also notice that the gluon channel purities decrease with $p_{t\, Z}$. This is due to the fact that our taggers are defined looking at their efficiencies with $p_{t\,J}>100$~GeV and, therefore, we expect them to work better at the lower end of the transverse momentum spectrum. This loss in performance could be cured by adjusting the cut on the angularity as a function of $p_{t\,J}$. However, in this first study, we prefer to keep our framework simple. 
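The target line used to fix these working points is just the inversion of the purity formula at fixed $\widetilde{f}_g$; a short numerical check (names ours) confirms that any working point on the line reaches the target purity exactly:

```python
# pre-tag gluon fraction and target purity, as in the text
f_g, target = 0.85, 0.95

# slope of the target line: eps_g = [f_g (1 - target)] / [target (1 - f_g)] * eps_q
slope = f_g * (1.0 - target) / (target * (1.0 - f_g))

def purity(eps_q, eps_g):
    # gluon channel purity after tagging
    return 1.0 / (1.0 + (1.0 - f_g) / f_g * eps_g / eps_q)

# every point on the target line reaches the target purity, independently of eps_q
for eps_q in (0.2, 0.35, 0.65, 0.9):
    assert abs(purity(eps_q, slope * eps_q) - target) < 1e-12
```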
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{figs/fig3_HAD_NLO.pdf} \caption{Transverse momentum distribution of the $Z$ boson in $Z$+jet events, with the leading jet tagged as quark-initiated, according to our operational definition, detailed in the text. The untagged distribution is also shown for comparison. The NLO+NLL$'$ calculation is supplemented with an NP correction factor, shown at the bottom.} \label{fig:pt-distributions} \end{center} \end{figure} \section*{Transverse momentum distributions} We now provide theoretical predictions for our observables of interest, namely transverse-momentum distributions, in the presence of tagging. Our calculation includes the resummation of logarithms of $\lambda_\text{cut}$ at NLL accuracy and is matched to NLO. Thanks to a flavour-dependent matching procedure, \emph{cf.} also \cite{Banfi:2006hf, Baron:2020xoi}, we are able to achieve NLO+NLL$'$ accuracy. We also include a bin-by-bin NP correction factor obtained with MC simulations.\footnote{See Ref.~\cite{Caletti:2021oor} for details about the calculation and its numerical implementation in the resummation plugin \cite{Gerwick:2014gya, Baberuxki:2019ifp} to the S\protect\scalebox{0.8}{HERPA}\xspace \cite{Gleisberg:2008ta, Sherpa:2019gpd} framework, including perturbative uncertainties, obtained by varying the perturbative (renormalisation, factorisation and resummation) scales and NP corrections. We use COMIX \cite{Gleisberg:2008fv} in conjunction with OpenLoops \cite{Buccioni:2019sur} and Recola \cite{Actis:2016mpe, Biedermann:2017yoi} for the fixed order calculation.
The NP corrections are based on S\protect\scalebox{0.8}{HERPA}\xspace parton shower simulations at MC@NLO accuracy \cite{Frixione:2002ik, Hoeche:2011fd} hadronised with S\protect\scalebox{0.8}{HERPA}\xspace's cluster fragmentation model \cite{Winter:2003tt}.} Our results are reported in Fig.~\ref{fig:pt-distributions}, where we show the $p_{t\, Z}$ distribution for events where the highest-$p_t$ jet is quark initiated, for the taggers selected for this study. We show the $p_{t\, Z}$ distribution with no-tagging, with tagging on standard jets and with tagging on SoftDrop\xspace jets. We note that NP corrections are rather sizeable at low $p_{t\, Z}$, making this observable most reliable in the high transverse momentum region. The latter is actually \emph{per se} interesting because it allows us to probe the proton dynamics described by the PDFs at large values of the momentum fraction $x$, \emph{i.e.} in a kinematic region where they are less constrained. To qualitatively assess the values of $x$ which would be accessible, we show, on the upper horizontal axis, the Born-level momentum fraction computed at central (zero) rapidities: $ \bar x=x_{1,2} = \frac{p_t e^{\pm y_J}+\sqrt{p_t^2+m_Z^2}e^{\pm y_Z}}{\sqrt{S}}\Big|_{y_J=y_Z=0}. $ We have also studied $p_{t\,J}$ distributions. However, as anticipated, these distributions turn out to be less robust, essentially because jet dynamics can be significantly altered by the cut on the angularity, as well as by the grooming procedure~\footnote{Note that if $\beta=0$, the groomed $p_{t\,J}$ distribution is not even IRC safe.}. In contrast, the $p_{t\, Z}$ spectrum is inclusive with respect to the jet activity and thus, for a given working point of the tagger, less dependent on the details of the tagging procedure itself. We can therefore take modifications in such distribution as more directly related to the bias that the tagger induces on the composition of the partonic initial state, which is what we want to achieve. 
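For orientation, the Born-level momentum fraction quoted above is straightforward to evaluate at central rapidity (a rough sketch of ours, with $m_Z \simeq 91.19$~GeV and $\sqrt{S} = 13$~TeV):

```python
import math

def xbar(pt, m_z=91.1876, sqrt_s=13000.0):
    # Born-level momentum fraction at zero rapidity, x = (pt + sqrt(pt^2 + m_Z^2)) / sqrt(S)
    return (pt + math.sqrt(pt**2 + m_z**2)) / sqrt_s

# the high-pt tail of the Z spectrum probes progressively larger x
assert xbar(100.0) < xbar(300.0) < xbar(1000.0)
assert 0.15 < xbar(1000.0) < 0.16  # pt_Z = 1 TeV reaches x of about 0.15
```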
\section*{Conclusions and future developments} We have shown how \emph{q/g}~tagging can be successfully applied to $Z$+jets events in order to significantly enhance the gluon-initiated contributions. In particular, our tagger is realised through a simple cut on a jet angularity, which is an IRC safe observable and therefore can be studied in perturbation theory. Exploiting MC simulations, we have performed a study of their efficiencies and their dependence on NP effects, exploring different angularities and different levels of grooming. We have explicitly studied the transverse momentum of the $Z$ boson, providing theoretical predictions that included both all-order resummation and matching to NLO. We have shown that we can achieve initial-state gluon purities close to 95\%, thus increasing the sensitivity on the modelling of the initial-state gluon. We see several possible directions for future work in this context. First, we would like to assess in more detail the impact of this type of observables on PDF fits. Second, the results presented in this study are based on a calculation of the angularity spectra, and hence of closely related efficiencies, at NLO+NLL$'$. The resummed calculation can be promoted to higher accuracy~\citep{Frye:2016okc,Frye:2016aiz,Kardos:2020gty}. The fixed-order can also be improved by including the two-loop correction to $Z$+1\,jet (see~\citep{Gehrmann-DeRidder:2015wbt} and references therein). An improvement in the description of the angularity distribution away from $\lambda_\alpha=0$ is much more challenging because it requires $Z$+2\,partons at NNLO, which may become available in the near future. Finally, with the aim of enhancing performance while maintaining calculability, it would be interesting to consider more powerful, albeit more sophisticated, taggers.
In this context, the Les Houches multiplicity~\citep{Amoroso:2020lgh} is very promising and, although its theoretical understanding is only in its infancy, we believe that achieving NLL accuracy is within reach. \section*{Acknowledgments} We thank S. Schumann and G. Soyez for collaboration on related topics. We also thank our ATLAS and CMS colleagues: R.~Aggleton, J.~Ferrando, A.~Hinzmann, M.~LeBlanc, B.~Nachman, and F.~Sforza, for useful discussions. This work is supported by Universit\`a di Genova under the curiosity-driven grant ``Using jets to challenge the Standard Model of particle physics'' and by the Italian Ministry of Research (MUR) under grant PRIN 20172LNEEZ. DR further acknowledges funding from the European Union’s Horizon 2020 research and innovation programme as part of the Marie Skłodowska-Curie Innovative Training Network MCnetITN3 (grant agreement no. 722104) and from BMBF (contract 05H18MGCA1). Figures were created with the Matplotlib \citep{Hunter:2007ouj} and NumPy \citep{NumPy} libraries. \bibliographystyle{jhep}
\section{Introduction}\label{sec:intro} For a random variable $X$ with density $f$ its R\'enyi entropy of order $\alpha \in (0,\infty) \setminus \{1\}$ is defined as \[ h_\alpha(X)=h_\alpha(f) = \frac{1}{1-\alpha} \log\left( \int f^\alpha(x) \mathrm{d} x \right), \] assuming that the integral converges, see \cite{R61}. As $\alpha \to 1$ one recovers the usual Shannon differential entropy $h(f)=h_1(f)=-\int f \ln f$. Also, by taking limits one can define $h_0(f)=\log|\supp f|$, where $\supp f$ stands for the support of $f$, and $h_\infty(f)=-\log\|f\|_\infty$, where $\|f\|_\infty$ is the essential supremum of $f$. It is a well-known fact that for any random variable one has \[ h(X) \leq \frac12 \log \var(X) + \frac12 \log(2 \pi e) \] with equality only for Gaussian random variables, see e.g. Theorem 8.6.5 in \cite{CT06}. The problem of maximizing R\'enyi entropy under fixed variance has been considered independently by Costa, Hero and Vignat in \cite{CHV03} and by Lutwak, Yang and Zhang in \cite{LYZ05}, where the authors showed, in particular, that for $\alpha \in (\frac13,\infty) \setminus \{1\}$ the maximizer is of the form \[ f(x) = c_0(1+(1-\alpha)(c_1 x)^2)_+^{\frac{1}{\alpha-1}}, \] which will be called the \emph{generalized Gaussian density}. Any density satisfying $f(x) \sim x^{-3}(\log x)^{-2}$ shows that for $\alpha \leq \frac13$ the supremum of $h_\alpha$ under fixed variance is infinite. One may also ask for reverse bounds. However, the infimum of the functional $h_\alpha$ under fixed variance is $-\infty$, as can be seen by considering $f_n(x)=\frac{n}{2}\mathbf{1} _{[1,1+n^{-1}]}(|x|)$, for which the variance stays bounded whereas $h_\alpha(f_n) \to -\infty$ as $n \to \infty$. Therefore, it is natural to restrict the problem to a certain natural class of densities, in which the R\'enyi entropy remains lower bounded in terms of the variance.
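As a quick numerical illustration of these definitions (the choice of the two-sided exponential test density and the Python implementation are ours; this is a sketch, not part of the argument), one can check that $h_\alpha$ recovers the Shannon entropy as $\alpha\to 1$ and decreases in $\alpha$:

```python
import math

# Renyi entropy of the two-sided exponential density f(x) = exp(-|x|)/2,
# for which int f^alpha = 2**(1 - alpha)/alpha in closed form.
def h_alpha_laplace(alpha):
    return math.log(2.0**(1.0 - alpha)/alpha)/(1.0 - alpha)

# Shannon entropy of the same density: -int f log f = 1 + log 2.
shannon = 1.0 + math.log(2.0)

print(h_alpha_laplace(1.001), shannon)            # nearly equal as alpha -> 1
print(h_alpha_laplace(1.5), h_alpha_laplace(2.0)) # decreasing in alpha
```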
In this context it is natural to consider the class of log-concave densities, namely densities having the form $f=e^{-V}$, where $V:\mathbb{R} \to (-\infty,\infty]$ is convex. In \cite{MNT21} it was proved that for any symmetric log-concave random variable one has \[ h(X) \geq \frac12 \log \var(X) + \frac12 \log 12 \] with equality if and only if $X$ is a uniform random variable. In the present article we shall extend this result to general R\'enyi entropy. Namely, we shall prove the following theorem. \begin{theorem}\label{thm:main} Let $X$ be a symmetric log-concave random variable and $\alpha > 0$, $\alpha \neq 1$. Define $\alpha^*$ to be the unique solution to the equation $\frac{1}{\alpha-1}\log \alpha= \frac12 \log 6$ ($\alpha^* \approx 1.241$). Then \[ h_{\alpha}(X) \geq \frac12 \log \var(X) + \frac12 \log 12 \text{ \qquad for } \alpha \leq \alpha^* \] and \[ h_{\alpha}(X) \geq \frac12 \log \var(X) + \frac12 \log2+\frac{\log\alpha}{\alpha-1}\text{ \qquad for } \alpha \geq \alpha^*. \] For $\alpha < \alpha^*$ equality holds if and only if $X$ is a uniform random variable on a symmetric interval, while for $\alpha > \alpha^*$ the bound is attained only for the two-sided exponential distribution. When $\alpha=\alpha^*$, the two previously mentioned densities are the only cases of equality. \end{theorem} The above theorem for $\alpha <1$ trivially follows from the case $\alpha=1$, as already observed in \cite{MNT21} (see Theorem 5 therein); this is due to the monotonicity of the R\'enyi entropy in $\alpha$. As we can see, the case $\alpha \in [1,\alpha^*]$ of Theorem \ref{thm:main} is a strengthening of the main result of \cite{MNT21}, as in this case $h_\alpha(X) \leq h(X)$ and the right-hand sides are the same. This article is organized as follows. In Section \ref{sec:reduction} we reduce Theorem \ref{thm:main} to the case $\alpha=\alpha^*$.
In Section \ref{sec:degrees-of-freedom} we further simplify the problem by reducing it to \emph{simple} functions via the concept of degrees of freedom. Section \ref{sec:proof} contains the proof for these simple functions. In the last section we derive two applications of our main result. \section{Reduction to the case $\alpha=\alpha^*$}\label{sec:reduction} The following lemma is well known. We present its proof for completeness. The proof of point (ii) is taken from \cite{FMW16}. As pointed out by the authors, it can also be derived from Theorem 2 in \cite{B73} or from Theorem VII.2 in \cite{BM11}. \begin{lemma}\label{lem:log-concavity} Suppose $f$ is a probability density in $\mathbb{R}^n$. \begin{itemize} \item[(i)] The function $p \mapsto \int f^p$ is log-convex on $(0,\infty)$. \item[(ii)] If $f$ is log-concave then the function $p \mapsto p^n \int f^p$ is log-concave on $(0,\infty)$. \end{itemize} \end{lemma} \begin{proof} (i) This is a simple consequence of H\"older's inequality. (ii) Let $\psi(p)=p^n \int f^p(x) \mathrm{d} x$. The function $f$ can be written as $f=e^{-V}$, where $V:\mathbb{R}^n \to (-\infty,+\infty]$ is convex. Changing variables we get $\psi(p)=\int e^{-p V(\frac{z}{p}) }\mathrm{d} z$. For any convex $V$ the so-called \emph{perspective function} $W(z,p)= pV(\frac{z}{p})$ is convex on $\mathbb{R}^n \times (0,\infty)$. Indeed, for $\lambda \in [0,1]$, $p_1,p_2 > 0$ and $z_1, z_2 \in \mathbb{R}^n$ we have \begin{align*} W(\lambda z_1 & + (1-\lambda) z_2, \lambda p_1+(1-\lambda)p_2) = (\lambda p_1+(1-\lambda)p_2) V\left( \frac{\lambda p_1 \frac{z_1}{p_1}+(1-\lambda)p_2 \frac{z_2}{p_2}}{\lambda p_1+(1-\lambda)p_2} \right) \\ & \leq \lambda p_1 V\left(\frac{z_1}{p_1} \right)+(1-\lambda) p_2 V\left(\frac{z_2}{p_2} \right) = \lambda W(z_1,p_1)+(1-\lambda) W(z_2,p_2). 
\end{align*} Since $\psi(p)=\int e^{-W(z,p)} \mathrm{d} z$, the assertion follows from Pr\'ekopa's theorem \cite{P73}, which says that a marginal of a log-concave function is again log-concave. \end{proof} \begin{remark*} The use of the term \emph{perspective function} appeared in \cite{H-UL93}; however, the convexity of this function was known much earlier. \end{remark*} The next corollary is a simple consequence of Lemma \ref{lem:log-concavity}. The right inequality of this corollary appeared in \cite{FMW16}, whereas the left inequality is classical. \begin{corollary}\label{cor:monot-ent} Let $f$ be a log-concave probability density in $\mathbb{R}^n$. Then for any $p \geq q > 0$ we have \[ 0 \leq h_q(f)-h_p(f) \leq n \frac{\log q}{q-1} - n \frac{\log p}{p-1}. \] In fact, the first inequality is valid without the log-concavity assumption. \end{corollary} \begin{proof} To prove the first inequality we observe that due to Lemma \ref{lem:log-concavity} the function defined by $\phi_1(p)=(1-p)h_p(f)$ is convex. From the monotonicity of slopes of $\phi_1$ we get that $\frac{\phi_1(p)-\phi_1(1)}{p-1} \geq \frac{\phi_1(q)-\phi_1(1)}{q-1}$, which together with the fact that $\phi_1(1)=0$ gives $h_p(f) \leq h_q(f)$. Similarly, to prove the right inequality we note that $\phi_2(p) = n \log p + (1-p)h_p(f)$ is concave with $\phi_2(1)=0$. Thus $\frac{\phi_2(p)-\phi_2(1)}{p-1} \leq \frac{\phi_2(q)-\phi_2(1)}{q-1}$ gives $\frac{n \log p}{p-1}-h_p(f) \leq \frac{n \log q}{q-1}-h_q(f)$, which finishes the proof. \end{proof} Having Corollary \ref{cor:monot-ent} we can easily reduce Theorem \ref{thm:main} to the case $\alpha=\alpha^*$. Indeed, the case $\alpha < \alpha^*$ follows from the left inequality of Corollary \ref{cor:monot-ent} ($h_p$ is non-increasing in $p$). The case $\alpha>\alpha^*$ is a consequence of the right inequality of the above corollary, according to which the quantity $h_\alpha(X)-\frac{\log \alpha}{\alpha-1}$ is non-decreasing in $\alpha$.
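The two inequalities of Corollary \ref{cor:monot-ent} can be spot-checked numerically. A minimal Python sketch for the standard Gaussian (our choice of test density, with $n=1$), using the closed form $\int f^p = (2\pi)^{(1-p)/2} p^{-1/2}$:

```python
import math

# Renyi entropy of the standard Gaussian in closed form:
# int f^p = (2*pi)**((1-p)/2) * p**(-1/2), hence
# h_p = (1/2) log(2*pi) + log(p)/(2*(p - 1)).
def h_gauss(p):
    return 0.5*math.log(2.0*math.pi) + math.log(p)/(2.0*(p - 1.0))

p, q = 3.0, 1.5  # p >= q > 0, dimension n = 1
gap = h_gauss(q) - h_gauss(p)
upper = math.log(q)/(q - 1.0) - math.log(p)/(p - 1.0)
print(gap, upper)  # expect 0 <= gap <= upper
```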
\section{Reduction to simple functions via degrees of freedom}\label{sec:degrees-of-freedom} The content of this section is a rather straightforward adaptation of the method from \cite{MNT21}. Therefore, we shall only sketch the arguments. \\ By a standard approximation argument it is enough to prove our inequality for functions from the set $\mc{F}_L$ of all continuous even log-concave probability densities supported on $[-L,L]$. Thus, it suffices to show that \begin{equation}\label{eq:inf} \inf \ \{ h_{\alpha^*}(f): \ f \in \mc{F}_L, \ \var(f)=\sigma^2 \} \geq \log \sigma + \frac12 \log 2 + \frac{\log \alpha^*}{\alpha^*-1}. \end{equation} Take $A=\{f \in \mc{F}_L: \var(f)=\sigma^2\}$. We shall show that $\inf_{f \in A} h_{\alpha^*}(f)$ is attained on $A$. Equivalently, since $\alpha^*>1$ it suffices to show that $M=\sup_{f \in A} \int f^{\alpha^*}$ is attained on $A$. We first argue that this supremum is finite. This follows from the estimate $\int f^{\alpha^*} \leq 2L f(0)^{\alpha^*}$ and from the inequality $f(0) \leq \frac{1}{\sqrt{2 \var(f)}} = \frac{1}{\sqrt{2} \sigma}$, see Lemma 1 in \cite{MNT21}. Next, let $(f_n)$ be a sequence of functions from $A$ such that $\int f_n^{\alpha^*} \to M$. According to Lemma 2 from \cite{MNT21}, by passing to a subsequence one can assume that $f_n \to f$ pointwise, where $f$ is some function from $A$. Since $f_n \leq f_n(0)\leq \frac{1}{\sqrt{2}\sigma}$, by the Lebesgue dominated convergence theorem we get that $\int f_n^{\alpha^*} \to \int f^{\alpha^*}=M$ and therefore the supremum is attained on $A$. Now, we say that $f \in A$ is an \emph{extremal point} in $A$ if $f$ cannot be written as a convex combination of two different functions from $A$, that is, if $f = \lambda f_1 + (1-\lambda) f_2$ for some $\lambda \in (0,1)$ and $f_1, f_2 \in A$, then necessarily $f_1=f_2$. It is easy to observe that if $f$ is not extremal, then it cannot be a maximizer of $\int f^{\alpha^*}$ on $A$. 
Indeed, if $f = \lambda f_1 + (1-\lambda) f_2$ for some $\lambda \in (0,1)$ and $f_1, f_2 \in A$ with $f_1 \ne f_2$, then the strict convexity of $x \to x^{\alpha^*}$ implies \[ \int f^{\alpha^*} = \int(\lambda f_1 + (1-\lambda) f_2)^{\alpha^*} < \lambda \int f_1^{\alpha^*} + (1-\lambda) \int f_2^{\alpha^*} \leq M. \] This shows that in order to prove \eqref{eq:inf} it suffices to consider only the functions $f$ being extremal points of $A$. Finally, according to Steps III and IV of the proof of Theorem 1 from \cite{MNT21} these extremal points are of the form \[ f(x)= c \mathbf{1} _{[0,a]}(|x|)+ce^{-\gamma(|x|-a)}\mathbf{1} _{[a,a+b]}(|x|), \qquad a+b=L, \ c>0, \ a,b,\gamma \geq 0, \] where it is also assumed that $\int f=1$. \section{Proof for the case $\alpha=\alpha^*$} \label{sec:proof} Due to the previous section, we can restrict ourselves to probability densities $f$ of the form \[ f(x)= c \mathbf{1} _{[0,a]}(|x|)+ce^{-\gamma(|x|-a)}\mathbf{1} _{[a,a+b]}(|x|), \qquad a,b,\gamma \geq 0. \] The inequality is invariant under scaling $f(x) \mapsto \lambda f(\lambda x)$ for any positive $\lambda$, so we can assume that $\gamma=1$ (note that in the case $\gamma=0$ we get equality). We have \[ \int_\mathbb{R} f^\alpha = c^\alpha \int_\mathbb{R} \mathbf{1} _{[0,a]}(|x|)+ c^\alpha \int_\mathbb{R} e^{-\alpha x}\mathbf{1} _{[0,b]}(|x|)=2c^\alpha \left(a+\frac{1-e^{-\alpha b}}{\alpha}\right) \] and thus \[h_\alpha(f)=\frac{1}{1-\alpha}\log{\int_\mathbb{R} f^{\alpha}}=\frac{1}{1-\alpha}\log \left(2c^\alpha \left(a+\frac{1-e^{-\alpha b}}{\alpha}\right) \right). 
\] Moreover, \[ \var(f)= 2c \int_\mathbb{R} x^2\mathbf{1} _{[0,a]}(x)\mathrm{d} x + 2c \int_\mathbb{R} (x+a)^2 e^{-x} \mathbf{1} _{[0,b]}\mathrm{d} x =2c\left (\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right), \] so our inequality can be rewritten as \[ \frac{1}{1-\alpha^*}\log \left(2c^{\alpha^*} \left(a+\frac{1-e^{-\alpha^* b}}{\alpha^*}\right) \right) +\frac{\log \alpha^*}{1-\alpha^*} \geq \frac12 \log\left(2c\left (\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right) \right) +\frac12 \log 2, \] which is \[ \frac{1}{1-\alpha^*}\log \left(2c^{\alpha^*} \left(a \alpha^* +1-e^{-\alpha^* b}\right) \right) \geq \frac12 \log\left(2c\left (\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right) \right) +\frac12 \log 2. \] The constraint $\int_\mathbb{R} f=1$ gives $c=\frac12 (a+1-e^{-b})^{-1}$. After multiplying both sides by $2$, exponentiating both sides and plugging the expression for $c$ in, we get the equivalent form of the inequality, $G(a,b,\alpha^*) \geq 0$, where \begin{equation}\label{fundamental} G(a,b,\alpha)= 2 (a\alpha +1-e^{-\alpha b})^{\frac{2}{1-\alpha}}(a+1-e^{-b})^{\frac{1-3\alpha}{1-\alpha}} - \left(\frac{a^3}{3}+\int_0^b(x+a)^2e^{-x}\mathrm{d} x \right). \end{equation} We will also write $G(a,b)=G(a,b,\alpha^*)$. To finish the proof we shall need the following lemma. \begin{lemma}\label{lem:tech} The following holds: \begin{itemize} \item[(a)] $\frac{\partial^4 }{\partial a^4} G(a,b) \geq 0$ holds for every $a,b \geq 0$, \item[(b)] $\lim_{a \to \infty} \frac{\partial^3 }{\partial a^3} G(a,b) = 0$ for every $b \geq 0$, \item[(c)] $\lim_{a \to \infty} \frac{\partial^2 }{\partial a^2} G(a,b) \geq 0$ for every $b \geq 0$, \item[(d)] $\frac{\partial }{\partial a} G(a,b) \big|_{a=0} \geq 0$ for every $b \geq 0$, \item[(e)] $G(0,b) \geq 0$ for every $b \geq 0$. \end{itemize} \end{lemma} \noindent With these claims at hand it is easy to conclude the proof. 
Indeed, one easily gets, one by one, \[ \frac{\partial^3 }{\partial a^3} G(a,b) \leq 0, \qquad \frac{\partial^2 }{\partial a^2} G(a,b) \geq 0, \qquad \frac{\partial }{\partial a} G(a,b) \geq 0, \qquad G(a,b) \geq 0, \qquad b \geq 0. \] The proof of points (d) and (e) relies on the following simple lemma. \begin{lemma}\label{lem:sign} Let $f(x)=\sum_{n=0}^{\infty}a_n x^n$, where the series is convergent for every nonnegative $x$. If there exists a nonnegative integer $N$ such that $a_n \geq 0$ for $n<N$ and $a_n \leq 0$ for $n \geq N$, then $f$ changes sign on $(0,\infty)$ at most once. Moreover, if at least one coefficient $a_n$ is positive and at least one is negative, then there exists $x_0$ such that $f(x) > 0$ on $[0,x_0)$ and $f(x) < 0$ on $(x_0,\infty)$. \end{lemma} \begin{proof} Clearly, the function $f(x)x^{-N}$ is nonincreasing on $(0,\infty)$, so the first claim follows. To prove the second part we observe that for small $x$ the function $f$ must be strictly positive and $f(x)x^{-N}$ is strictly decreasing on $(0,\infty)$. \end{proof} With this preparation we are ready to prove Lemma \ref{lem:tech}. \begin{proof}[Proof of Lemma \ref{lem:tech}] \ (a) This point is the crucial observation of the proof. It turns out that \begin{align*} \frac{\partial^4 G}{\partial a^4}(a,b,\alpha) &= 8\alpha(\alpha+1)(3\alpha-1)(1+a-e^{-b})^{\frac{3\alpha-1}{\alpha-1}}(1+a\alpha-e^{-b\alpha})^{\frac{2}{1-\alpha}} \\ & \qquad \qquad \times \left(\frac{(e^b-\alpha e^{b\alpha}+(\alpha-1)e^{b+b\alpha})}{(\alpha-1)(e^b(a+1)-1)(e^{b\alpha}(a\alpha+1)-1)}\right)^4, \end{align*} which is nonnegative for $\alpha> \frac13$.
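Before continuing, we note that the target inequality $G(a,b,\alpha^*)\geq 0$ of \eqref{fundamental} can be spot-checked numerically (a sketch, independent of the analytic argument; we compute $\alpha^*$ by bisection and use the elementary closed form $\int_0^b (x+a)^2 e^{-x}\,\mathrm{d} x = (a^2+2a+2)-e^{-b}\big((a+b)^2+2(a+b)+2\big)$):

```python
import math

# Bisection for alpha*: log(a)/(a - 1) = 0.5*log(6), so that
# alpha***(2/(1 - alpha*)) = 1/6 holds to machine precision.
lo, hi = 1.0 + 1e-12, 2.0
for _ in range(200):
    mid = 0.5*(lo + hi)
    if math.log(mid)/(mid - 1.0) > 0.5*math.log(6.0):
        lo = mid
    else:
        hi = mid
AL = 0.5*(lo + hi)  # ~1.2412

def G(a, b):
    """G(a, b, alpha*), with the inner integral replaced by its closed form."""
    inner = (a*a + 2*a + 2) - math.exp(-b)*((a + b)**2 + 2*(a + b) + 2)
    lead = 2.0*(a*AL + 1.0 - math.exp(-AL*b))**(2.0/(1.0 - AL)) \
              *(a + 1.0 - math.exp(-b))**((1.0 - 3.0*AL)/(1.0 - AL))
    return lead - (a**3/3.0 + inner)

# G should be nonnegative on the whole quadrant; (a, b) = (0, 0) is excluded
# since the density degenerates there.  The line b = 0 (uniform case) gives
# equality, so values there are zero up to rounding.
grid = [(0.25*i, 0.25*j) for i in range(21) for j in range(21) if i + j > 0]
print(min(G(a, b) for a, b in grid))
```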
\\ (b) By a direct computation we have \begin{align*} \frac{\partial^3 G(a,b,\alpha)}{\partial a^3} &= -2 - \frac{4\alpha}{(1-\alpha)^3}(1+a-e^{-b})^{\frac{2}{\alpha-1}}(1+a\alpha-e^{-b \alpha})^{\frac{1-3\alpha}{\alpha-1}} \\ & \quad \times [ (\alpha+1)(3\alpha-1)(1+a\alpha-e^{-b\alpha})^3-2\alpha^3(\alpha+1)(1+a-e^{-b})^3 \\& \qquad \qquad +3\alpha(\alpha+1)(3\alpha-1)(1+a-e^{-b})^2(1+a\alpha-e^{-b\alpha}) \\ & \qquad \qquad +6\alpha(1-3\alpha)(1+a-e^{-b})(1+a\alpha-e^{-b\alpha})^2 ]. \end{align*} When $a$ tends to infinity with $b$ fixed this converges to \[ -2 - \frac{4\alpha}{(1-\alpha)^3} \alpha^{\frac{1-3\alpha}{\alpha-1}}\left( (\alpha+1)(3\alpha-1)\alpha^3 - 2 \alpha^3 (\alpha+1)+3\alpha^2 (\alpha+1)(3\alpha-1) + 6 \alpha^3(1-3\alpha) \right), \] which is $ -2 + 12 \alpha^3 \alpha^{\frac{1-3\alpha}{\alpha-1}} $. If $\alpha=\alpha^*$, using equality $(\alpha^*)^{\frac{2}{1-\alpha^*}}=\frac16$, we get that this expression is equal to $0$. \\ (c) Again a direct computation yields \begin{align*} \frac{\partial^2 G(a,b,\alpha)}{\partial a^2} &= \frac{4\alpha^2(\alpha+1)(a-e^{-b}+1)\left(1-e^{-b}a^{-1}+a^{-1}\right)^{\frac{2\alpha}{\alpha-1}}\left(\alpha-e^{-\alpha b}a^{-1}+a^{-1}\right)^{-\frac{2\alpha}{\alpha-1}}}{(\alpha-1)^2} \\ & \qquad +\frac{4\alpha(3\alpha-1)(a-e^{-b}+1)\left(1-e^{-b}a^{-1}+a^{-1}\right)^{\frac{2}{\alpha-1}}\left(\alpha-e^{-\alpha b}a^{-1}+a^{-1}\right)^{-\frac{2}{\alpha-1}}}{(\alpha-1)^2} \\ & \qquad +\frac{8\alpha(1-3\alpha)(a-e^{-b}+1)\left(1-e^{-b}a^{-1}+a^{-1}\right)^{\frac{\alpha+1}{\alpha-1}}\left(\alpha-e^{-\alpha b}a^{-1}+a^{-1}\right)^{-\frac{\alpha+1}{\alpha-1}}}{(\alpha-1)^2} \\ & \qquad -2a+2e^{-b}-2. \end{align*} As $a$ tends to infinity, we have \[ (1-e^{-b}a^{-1}+a^{-1})^w=1+w(1-e^{-b}) a^{-1}+o(a^{-1}) \] and \[ (\alpha-e^{-\alpha b}a^{-1}+a^{-1})^w=\alpha^{w}+w(1-e^{-\alpha b})\alpha^{w-1} a^{-1}+o(a^{-1}). 
\] Using these formulas together with the above expression for the second derivative easily gives \[ \frac{\partial^2 G(a,b,\alpha)}{\partial a^2} = h_1(\alpha) \frac{1}{a} + h_2(b,\alpha) + o(a^{-1}), \] where \[ h_1(\alpha) = 12 \alpha^{-\frac{2}{\alpha-1}}-2 \] and \[ h_2(b, \alpha) =2(e^{-b}-1) + \frac{4 \alpha \left(\alpha^{\frac{1}{1-\alpha}}-\alpha^{\frac{\alpha}{1-\alpha}}\right)^2}{(\alpha-1)^3}\left( 2 \left(\alpha-1 -\alpha e^{-b}+e^{-b \alpha}\right)+3 \left(1-e^{-b}\right) \alpha (\alpha-1) \right). \] We have $h_1(\alpha^*)=0$. Moreover, \[ \frac{4 \alpha^* \left((\alpha^*)^{\frac{1}{1-\alpha^*}}-(\alpha^*)^{\frac{\alpha^*}{1-\alpha^*}}\right)^2}{(\alpha^*-1)^3} = \frac{4\alpha^* \left( \frac{1}{\sqrt{6}}- \frac{1}{\sqrt{6} \alpha^*} \right)^2}{(\alpha^*-1)^3} = \frac{2}{3\alpha^*(\alpha^*-1)}. \] Hence, \[ \lim_{a \to \infty} \frac{\partial^2 G(a,b,\alpha)}{\partial a^2} = h_2(b,\alpha^*) = \frac{4}{3\alpha^*(\alpha^*-1)} \left( (1-e^{-b}) \alpha^* - (1-e^{-b \alpha^*}) \right) . \] This expression is nonnegative for $b \geq 0$ since the function $h_3(x) = 1-e^{-x}$ is concave, so we have $\frac{1-e^{-b}}{b} = \frac{h_3(b)}{b} \geq \frac{h_3(\alpha^* b)}{\alpha^* b} = \frac{1-e^{-\alpha^* b}}{\alpha^* b}$ as $\alpha^*>1$ (monotonicity of slopes). \\ (e) To illustrate our method, before proceeding with the proof of (d) we shall prove (e), as the idea of the proof of (d) is similar, but the details are more complicated. Our goal is to show the inequality \begin{equation} (1-e^{-\alpha^* b})^{\frac{2}{1-\alpha^*}}(1-e^{-b})^{\frac{1-3\alpha^*}{1-\alpha^*}} \geq 1-\frac{b^2+2b+2}{2}e^{-b}. \label{zero_b} \end{equation} After taking the logarithm of both sides, the inequality reduces to the nonnegativity of \[ \phi(b) = \frac{2}{1-\alpha^*} \log(1-e^{-\alpha^* b}) +\frac{1-3\alpha^*}{1-\alpha^*} \log(1-e^{-b}) - \log\left(1-\frac{b^2+2b+2}{2}e^{-b}\right).
\] We have \begin{equation*} \phi'(b)=\frac{2\alpha^*}{(1-\alpha^*)(e^{\alpha^* b}-1)} +\frac{1-3\alpha^*}{(1-\alpha^*)(e^{b}-1)}+\frac{b^2}{b^2+2b-2e^{b}+2}. \end{equation*} It turns out that $\phi'(b)$ changes sign on $(0,\infty)$ at most once. To show this, we first clear out the denominators (they have fixed sign on $(0,\infty)$) to obtain the expression \begin{equation} 2\alpha^*(b^2+2b-2e^{b}+2)(e^b-1)+ (1-3\alpha^*)(e^{\alpha^* b}-1)(b^2+2b-2e^b+2)+b^2(1-\alpha^*)(e^b-1)(e^{\alpha^* b}-1). \label{logderivative} \end{equation} Now we will apply Lemma \ref{lem:sign} to \eqref{logderivative}. That expression can be rewritten as \[-4 \alpha^* \left(\sum_{n=3}^{\infty}\frac{b^n}{n!}\right)\left(\sum_{n=1}^{\infty} \frac{b^n}{n!}\right) +\left(6\alpha^*-2\right)\left(\sum_{n=1}^{\infty} \frac{(\alpha^*b)^n}{n!}\right)\left(\sum_{n=3}^{\infty} \frac{b^n}{n!}\right)+b^2(1-\alpha^*)\left(\sum_{n=1}^\infty \frac{b^n}{n!} \right)\left(\sum_{n=1}^\infty \frac{(\alpha^* b)^n}{n!}\right), \] so the $n$-th coefficient $a_{n}$ in the Taylor expansion is equal to \begin{align*} a_{n} &= (6\alpha^*-2)\left(\sum_{j=1}^{n-3} \frac{(\alpha^*)^j}{j!(n-j)!}\right) - 4\alpha^* \left(\sum_{j=1}^{n-3} \frac{1}{j!(n-j)!}\right) +(1-\alpha^*)\left(\sum_{j=1}^{n-3} \frac{(\alpha^*)^j}{j!(n-2-j)!}\right) \\ & \leq \frac{1}{n!} (6\alpha^*-2)(\alpha^*+1)^n +\frac{1-\alpha^*}{(n-2)!}\left( (\alpha^*+1)^{n-2} - 1-(\alpha^*)^{n-2} \right) \\ & \leq \frac{6}{n!}(\alpha^*+1)^n-\frac{n(n-1)}{30n!}(\alpha^*+1)^n+\frac{8n^2}{n!}(\alpha^*)^{n} . \end{align*} When $n \geq 17$, we have $\frac{n(n-1)}{30} > 7$ and $(\frac{\alpha^*+1}{\alpha^*})^n \geq (\frac85)^n \geq 8 n^2$, so $a_n$ is less than zero for $n \geq 17$. It can be checked (preferably using computational software) that the remaining coefficients $a_n$ satisfy the pattern from Lemma \ref{lem:sign}, with $a_n=0$ for $n \leq 4$, $a_n>0$ for $n=5,6,7$ and $a_n<0$ for $n \geq 8$.
This way we have proved that $\phi'(b)$ changes sign at exactly one point $x_0 \in (0, \infty)$. Thus, $\phi$ is first increasing and then decreasing. Since $\phi(0)=0$ and $\lim_{b \to \infty} \phi(b)=0$, the assertion follows. \\ (d) We have to show that \[ \frac{(1-e^{-b})^{\frac{2\alpha^*}{\alpha^*-1}}(1-e^{-b\alpha^*})^{-\frac{1+\alpha^*}{\alpha^*-1}}}{\alpha^*-1}[(3\alpha^*-1)(1-e^{-b\alpha^*})-2\alpha^*(1-e^{-b})] \geq 1-(b+1)e^{-b}. \] Let $\varphi_1(b)$ be the expression on the left side and $\varphi_2(b)$ on the right side. Both $\varphi_1$ and $\varphi_2$ are positive for $b > 0$, so we can take the logarithm of both sides. We will now show that $(\log(\varphi_1))'-(\log(\varphi_2))'$ changes sign at most once on $(0,\infty)$. We have \begin{align*} (\log(\varphi_1))'-(\log(\varphi_2))'&=\frac{2\alpha^*}{\left(e^b-1\right)(\alpha^*-1)}-\frac{(\alpha^*+1)\alpha^*}{(\alpha^*-1)\left(e^{b\alpha^*}-1\right)} \\& \qquad + \frac{\alpha^*(3\alpha^*-1)e^{b}-2e^{b\alpha^*}\alpha^*}{e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b}}-\frac{b}{e^b-b-1}. \end{align*} Multiplying the above expression by the product of the denominators does not affect the claim, since each of the denominators is positive. After this multiplication we get the expression \begin{align*} & [-\left(e^b-1\right)\left(e^b-1-b \right)(\alpha^*+ 1)\alpha^*+2\left(e^b-1-b\right)\alpha^*\left(e^{b\alpha^*}-1\right)-b\left(e^b-1\right)(\alpha^*-1)\left(e^{b\alpha^*}-1\right)] \\ & \qquad \qquad \times \left(e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b}\right) \\ & \qquad + \alpha^* (\alpha^*-1) \left(e^b-1\right)\left(e^b-1-b\right)\left(e^{b\alpha^*}-1\right)\left(e^b(3\alpha^*-1)-2e^{b\alpha^*}\right). \end{align*} Let us consider the Taylor series $\sum_{n \geq 0} a_n b^n$ of this function (it is clear that the series converges to the function everywhere).
It can be shown (again using computational software) that the coefficients of this series up to order $9$ are nonnegative and the coefficients of order greater than $9$ but less than $30$ are negative. Now we will show the negativity of the coefficients of order at least $30$ (our bound will be very crude, so it would not work if we replaced $30$ with a lower number). Firstly we note that \[ e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b} \] has $n$-th Taylor coefficient equal to \[ \frac{1-3\alpha^* + 2(\alpha^*)^{n+1}+(\alpha^*-1)(\alpha^*+1)^n}{n!} \geq \frac{1-3\alpha^* + 2\alpha^*+\alpha^*-1}{n!} = 0, \] so all its coefficients are nonnegative. Thus we can replace the expression in square brackets by $(e^b-1)(e^{b\alpha^*}-1)(5/2- b/5)$ (we discard the first term and bound from above the second and third one) to increase every Taylor coefficient of the main expression. Now we want to show the negativity of the coefficients of order at least $30$ for \[ (e^b-1)(e^{b\alpha^*}-1)[(5/2-b/5)(e^b(1-3\alpha^*)+2\alpha^* e^{b\alpha^*}+(\alpha^*-1)e^{b\alpha^*+b})+\alpha^*(\alpha^*-1)(e^b-b-1)((3\alpha^*-1)e^b-2e^{b\alpha^*})]. \] The expression in square brackets has $n$-th Taylor coefficient $c_n$ equal to zero for $n \in \{0,1\}$, while for $n \geq 2$ it is \begin{align*} c_n &= \frac{5(1-3\alpha^*)}{2n!}+\frac{3\alpha^*-1}{5(n-1)!}+\frac{5(\alpha^*)^{n+1}}{n!}-\frac{2(\alpha^*)^n}{5(n-1)!}+\frac{5(\alpha^*-1)(\alpha^*+1)^n}{2n!}-\frac{(\alpha^*-1)(\alpha^*+1)^{n-1}}{5(n-1)!} \\& \qquad + \alpha^*(\alpha^*-1)(3\alpha^*-1) \frac{2^n-n-1}{n!} -\frac{2\alpha^*(\alpha^*-1)}{n!}((\alpha^*+1)^n-(\alpha^*)^n-n(\alpha^*)^{n-1}) .
\end{align*} Using the bounds \[ \frac{5(1-3\alpha^*)}{2n!} \leq 0, \qquad -\frac{2(\alpha^*)^n}{5(n-1)!} \leq 0, \qquad \alpha^*(\alpha^*-1)(3\alpha^*-1) \frac{2^n-n-1}{n!} \leq \frac{2^n}{n!} \] and \[ \frac{2\alpha^*(\alpha^*-1)}{n!}((\alpha^*)^n+n (\alpha^*)^{n-1}) \leq \frac{(n+1)(\alpha^*)^n}{n!}, \qquad -\frac{(\alpha^*-1)(\alpha^*+1)^{n-1}}{5(n-1)!} \leq -\frac{\frac45 n}{10 n!}(\alpha^*-1)(\alpha^*+1)^n \] we get the following upper bound for $c_n$ for $n \geq 2$ \begin{align*} c_n &\leq \frac{(3\alpha^*-1)}{5(n-1)!}+\frac{5(\alpha^*)^{n+1}}{n!}+\frac{2^n}{n!}+\frac{(n+1)(\alpha^*)^n}{n!} +\frac{(\alpha^*+1)^n(\alpha^*-1)(25-20\alpha^*-4n/5)}{10n!} \\ & \leq \frac{(n+8)(\alpha^*)^n}{n!}+\frac{n+2^{n}}{n!} +\frac{(\alpha^*+1)^n(1-3n)}{200n!}, \end{align*} since $\frac{1}{10}(\alpha^*-1)(25-20 \alpha^*) \leq \frac{1}{200}$ and $\frac{4}{50}(\alpha^*-1) \geq \frac{3}{200}$. This bound works for $n \in \{0,1\}$, too. We have \[ (e^b-1)(e^{b\alpha^*}-1)=\sum_{n=2}^{\infty}b^n \frac{(\alpha^*+1)^n-(\alpha^*)^n-1}{n!}, \] so $(e^b-1)(e^{b\alpha^*}-1)$ has nonnegative coefficients. Now we can bound the Taylor series coefficients $d_n$ of the main expression as follows \begin{equation*} \begin{split} d_n &\leq \frac{1}{n!} \sum_{k=0}^{n-2} \binom{n}{k} \left( (k+8)(\alpha^*)^k + k+2^k +\left(\alpha^*+1\right)^k\frac{1-3k}{200}\right)\left(\left(\alpha^*+1\right)^{n-k}-(\alpha^*)^{n-k}-1\right) \end{split} \end{equation*} Changing the upper limit of the sum from $n-2$ to $n$ increases the sum for $n\geq 30$ -- for $k=n-1$ we have $(\alpha^*+1)^{n-k}-(\alpha^*)^{n-k}-1=0$ and the term for $k=n$ is surely positive for $ n \geq 30$, thus we have \begin{align*} n! 
d_n \leq \sum_{k=0}^{n} &\binom{n}{k} \left( (k+8)(\alpha^*)^k + k+2^k +\left(\alpha^*+1\right)^k\frac{1-3k}{200}\right)\left(\left(\alpha^*+1\right)^{n-k}-(\alpha^*)^{n-k}-1\right) \leq \\ &\leq (n+8)(2\alpha^*+1)^n+n(\alpha^*+2)^n+(\alpha^*+3)^n +\frac{1}{200}(2\alpha^*+2)^n \\ & \qquad - \frac{3n}{400}(2\alpha^*+2)^{n} + \frac{3n}{200}(2\alpha^*+1)^{n} + \frac{3n}{200}(\alpha^*+2)^{n}, \end{align*} where we neglected all the negative terms except for the term $\sum_{k=0}^n {n \choose k}\frac{-3k}{200} (\alpha^*+1)^n = - \frac{3n}{400}(2\alpha^*+2)^{n} $ and bounded $k$ by $n$ in all the positive terms (whenever $k$ appeared linearly). It is clear that the negative term $-\frac{3n}{400}(2\alpha^*+2)^{n}$ dominates, so $d_n$ is negative when $n$ is sufficiently large. In fact, the expression is negative for $n \geq 30$. It is not hard to prove (again by checking some concrete values numerically and using convexity arguments) that for $n \geq 30$ we have \[ n+8+\frac{3}{200} n < 0.104 \left( \frac{\alpha^*+3}{2\alpha^*+1} \right)^n, \qquad \left(1+\frac{3}{200} \right) n < 0.01 \left( \frac{\alpha^*+3}{2\alpha^*+1} \right)^n, \] so for $n \geq 30$ we have \[ n! d_n < 1.114 (\alpha^*+3)^n - \frac{3n-2}{400} (2\alpha^*+2)^n = (2\alpha^*+2)^n\left( 1.114 \left( \frac{\alpha^*+3}{2\alpha^*+2} \right)^n - \frac{3n-2}{400} \right)<0 . \] From Lemma \ref{lem:sign} we get that $(\log(\varphi_1))'-(\log(\varphi_2))'$ on $(0,\infty)$ is first positive and then negative. This means that $(\log(\varphi_1))-(\log(\varphi_2))$ is first increasing and then decreasing. In order to prove that it is everywhere nonnegative it suffices to check that it is nonnegative when $b \to 0^+$ and $b \to \infty$. The limit when $b \to \infty$ is easily seen to be $0$. To check the limit when $b \to 0^+$ it is enough to check the Taylor expansion of $\varphi_1(b)-\varphi_2(b)$.
Note that \begin{align*} \frac{\varphi_1(b)-\varphi_2(b)}{b^2} & = \left(1- \frac{b}{2} + O(b^2) \right)^{\frac{2\alpha^*}{\alpha^*-1}} (\alpha^*)^{-\frac{1+\alpha^*}{\alpha^*-1}}\left( 1- \frac12 b \alpha^* + O(b^2) \right)^{-\frac{1+\alpha^*}{\alpha^*-1}} \\ & \qquad \times \left(3\alpha^* - \frac{\alpha^*(2+3\alpha^*)}{2} b + O(b^2) \right) - \frac12 + \frac{b}{3} + O(b^2). \end{align*} By using the equality $(\alpha^*)^{\frac{2}{1-\alpha^*}} = \frac16$ we see that the constant term vanishes. In fact, \[ \frac{\varphi_1(b)-\varphi_2(b)}{b^2} = \left( \frac13 - (\alpha^*)^{\frac{2}{1-\alpha^*}} \right)b + O(b^2) = \frac16 b + O(b^2). \] \end{proof} \section{Applications} \subsection{Relative $\alpha$-entropy}\label{sec:q-entropy} Recall that if $f_X$ denotes the density of a random variable $X$ then the relative $\alpha$-entropy studied by Ashok Kumar and Sundaresan in \cite{AS15} is defined as \[ I_{\alpha}(X \| Y) = \frac{\alpha}{1-\alpha} \log\left( \int \frac{f_X}{\| f_X\|_\alpha} \left( \frac{f_Y}{\|f_Y\|_\alpha} \right)^{\alpha-1} \right) \] for $\alpha \in (0,1) \cup (1,\infty)$, where $\|f\|_\alpha= (\int |f|^\alpha)^{1/ \alpha}$. We shall derive an analogue of Corollary 5 from \cite{MNT21}. To this end we shall need the following fact. \begin{proposition}[\cite{AS15}, Corollary 13]\label{prop:1} Suppose $\alpha>0$, $\alpha \neq 1$ and let $\mc{P}$ be the family of probability measures such that the mean of the function $T : \mathbb{R} \to \mathbb{R}$ under them is fixed at a particular value $t$. Let the random variable $X$ have a distribution from $\mc{P}$, and let $Z$ be a random variable that maximizes the R\'enyi entropy of order $\alpha$ over $\mc{P}$. Then \[ I_\alpha(X \| Z) \leq h_\alpha(Z) - h_\alpha(X). \] \end{proposition} Combining Proposition \ref{prop:1} with Theorem \ref{thm:main} and using expressions for the R\'enyi entropy and variance of a generalized Gaussian density derived in \cite{LYZ05}, one gets the following corollary.
\begin{corollary}\label{cor:relatice-q-entropy} Suppose $\alpha>1$. Let $X$ be a symmetric log-concave real random variable. Let $Z$ be the random variable having generalized Gaussian density with parameter $\alpha$ and satisfying $\var(X)=\var(Z)$. Then $I_\alpha(X \| Z) \leq C(\alpha)$, where \[ C(\alpha) = \log\left( (2\alpha)^{\frac{1}{1-\alpha}} (3\alpha-1)^{- \frac{1}{1-\alpha}}(\alpha-1)^{-\frac12} B\left(\frac12, \frac{\alpha}{\alpha-1} \right) \right)- \min\left( \frac12 \log12 , \frac12 \log 2 + \frac{\log \alpha}{\alpha-1} \right). \] Here $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ stands for the Beta function. \end{corollary} \subsection{Reverse entropy power inequality}\label{sec:reverse-epi} The R\'enyi entropy power of order $\alpha>0$ of a real random variable $X$ is defined as $N_\alpha(X)=\exp(2h_\alpha(X))$. If we combine our Theorem \ref{thm:main} with Theorem 2 from \cite{LYZ05} we get the following sandwich bound for $\alpha>1$ and a symmetric log-concave random variable $X$, \[ C_-(\alpha) \var(X) \leq N_\alpha(X) \leq C_+(\alpha) \var(X), \] where \[ C_-(\alpha) = \left\{\begin{array}{ll} 12 & \alpha \in (1,\alpha^*) \\ 2 \alpha^{\frac{2}{\alpha-1}} & \alpha \geq \alpha^* \end{array} \right., \qquad C_+(\alpha) = \frac{3\alpha-1}{\alpha-1} \left( \frac{2\alpha}{3\alpha-1} \right)^{\frac{2}{1-\alpha}} B\left(\frac12, \frac{\alpha}{\alpha-1}\right)^2 . \] Note that the case of $\alpha \in (\frac13,1]$ was discussed in \cite{MNT21}. We point out that for the upper bound the log-concavity assumption is not needed. Nevertheless, note that for $\alpha>1$ the so-called generalized Gaussian density, for which the right inequality is saturated, is symmetric and log-concave. We can now easily derive an analogue of Corollary 6 from \cite{MNT21} for $\alpha>1$.
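The constants in the sandwich bound are straightforward to evaluate numerically. A minimal Python sketch (the four-digit value of $\alpha^*$ is our hard-coded approximation) checking $C_-(\alpha) \leq C_+(\alpha)$ for a few values of $\alpha>1$:

```python
import math

ALPHA_STAR = 1.2412  # approximate threshold from Theorem 1 (4-digit assumption)

def beta(x, y):
    # Beta function via gamma: B(x, y) = Gamma(x)Gamma(y)/Gamma(x + y)
    return math.gamma(x)*math.gamma(y)/math.gamma(x + y)

def c_minus(a):
    return 12.0 if a < ALPHA_STAR else 2.0*a**(2.0/(a - 1.0))

def c_plus(a):
    return (3.0*a - 1.0)/(a - 1.0) \
        * ((2.0*a)/(3.0*a - 1.0))**(2.0/(1.0 - a)) \
        * beta(0.5, a/(a - 1.0))**2

# Note: the two branches of C_- agree at alpha*, since
# 2*(alpha*)**(2/(alpha* - 1)) = 2*6 = 12.
for a in (1.1, 1.5, 2.0, 3.0, 10.0):
    print(a, c_minus(a), c_plus(a))  # expect c_minus <= c_plus
```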
\begin{corollary}\label{cor:reverse-epi} For $X,Y$ uncorrelated, symmetric real log-concave random variables one has \[ N_\alpha(X+Y) \leq \frac{C_+(\alpha)}{C_-(\alpha)} \left( N_\alpha(X)+ N_\alpha(Y) \right). \] \end{corollary} \begin{proof} We have \[ N_\alpha(X+Y) \leq C_+(\alpha) \var(X+Y) = C_+(\alpha)(\var(X)+\var(Y)) \leq \frac{C_+(\alpha)}{C_-(\alpha)} \left( N_\alpha(X)+ N_\alpha(Y) \right). \] \end{proof} More information on various forward and reverse forms of the entropy power inequality can be found in the survey article \cite{MMX17}. See also the recent articles \cite{L18} and \cite{MM19}.
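Since the constants above are fully explicit, they lend themselves to a quick numerical sanity check. The following Python snippet (our illustration, not part of the original argument) evaluates $C_-(\alpha)$ and $C_+(\alpha)$ from the formulas of Section \ref{sec:reverse-epi}, locates $\alpha^*$ from its defining equation $(\alpha^*)^{2/(1-\alpha^*)}=\frac16$, verifies that the two branches of $C_-$ match at $\alpha^*$, and then tests the corollary on a concrete pair: for $X,Y$ independent uniform on $[-1,1]$ and $\alpha=2$ one has $N_2(X)=N_2(Y)=4$, $N_2(X+Y)=9$, and the bound $\frac{C_+(2)}{C_-(2)}\left(N_2(X)+N_2(Y)\right)=\frac{125}{9}\approx 13.9$.

```python
import math

def beta(a, b):
    # Beta function B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def alpha_star():
    # alpha* > 1 solves (alpha*)^(2/(1 - alpha*)) = 1/6, equivalently
    # 2*log(a)/(a - 1) = log(6); the left-hand side decreases from 2
    # as a -> 1+, so the root can be found by bisection on (1, 2).
    f = lambda a: 2 * math.log(a) / (a - 1) - math.log(6)
    lo, hi = 1 + 1e-12, 2.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def c_minus(a, a_st):
    return 12.0 if a < a_st else 2.0 * a ** (2.0 / (a - 1.0))

def c_plus(a):
    return ((3 * a - 1) / (a - 1)) \
        * ((2 * a) / (3 * a - 1)) ** (2.0 / (1.0 - a)) \
        * beta(0.5, a / (a - 1)) ** 2

def renyi_power(f, lo, hi, alpha, n=200001):
    # N_alpha(X) = exp(2 h_alpha(X)) with h_alpha = log(int f^alpha)/(1 - alpha);
    # the integral is approximated by a Riemann sum on [lo, hi]
    dx = (hi - lo) / (n - 1)
    integral = sum(f(lo + i * dx) ** alpha for i in range(n)) * dx
    return math.exp(2.0 * math.log(integral) / (1.0 - alpha))

a_st = alpha_star()
# the two branches of C_- agree at alpha*: 2*(alpha*)^{2/(alpha*-1)} = 2*6 = 12
assert abs(2.0 * a_st ** (2.0 / (a_st - 1.0)) - 12.0) < 1e-6
for a in (1.1, 1.5, 2.0, 3.0, 10.0):
    assert c_minus(a, a_st) <= c_plus(a)  # the sandwich is consistent

# X, Y independent uniform on [-1, 1]; X + Y has the triangular density
uniform = lambda x: 0.5 if abs(x) <= 1 else 0.0
triangular = lambda x: (2.0 - abs(x)) / 4.0 if abs(x) <= 2 else 0.0
n_x = renyi_power(uniform, -1, 1, 2.0)        # analytically: 4
n_sum = renyi_power(triangular, -2, 2, 2.0)   # analytically: 9
assert n_sum <= c_plus(2.0) / c_minus(2.0, a_st) * (n_x + n_x)
```

The inequality is far from tight for this pair. Note also that the first branch of $C_-$, namely 12, is exactly the ratio $N_\alpha(X)/\var(X)$ for a uniform density, independently of $\alpha$.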
\section{Conclusions and Outlook} In this overview, we have surveyed the three main types of CISS - in transmission, transport, and chemical reactions. For each, we critically overviewed existing theoretical approaches, while emphasizing advantages and disadvantages with respect to qualitative and quantitative agreement with known experimental results. At present, a unifying scheme that would allow one to interpret all experiments in terms of only a single microscopic effect – the ``CISS effect'' - has not yet been identified. While such a framework cannot be ruled out, chirality-induced spin selectivity may perhaps be thought of as a set of phenomena that have a unifying scheme only in the sense that they all derive from the interplay of spin-orbit interaction and chirality. For example, it has been suggested theoretically that spin-orbit interaction leads to non-conservation of spin currents in a two-terminal junction and consequently to a mechanical torque~\cite{Sasao2019}, which is a different experimental observable than the ones surveyed above. Such a CISS-induced torque has indeed been suggested as an explanation for a recent experiment demonstrating the use of a chiral molecule as a molecular motor~\cite{Wulfhekel2021}. Correspondingly, a large gap remains between the experimental observations and the quantitative estimates from theory. Further experiments are required for guiding the theory and for limiting the possible interpretations for this potentially very important phenomenon. Further exploration of recent suggestions of finite temperature effects, which go beyond pure electronic ones, is also of interest. Finally, theoretical studies have focused on steady-state transport, but very little theory addresses the growing number of experiments that report transient CISS phenomena (e.g.~\cite{Kumar2017}). 
Likewise, the hypothesized role that such transient phenomena may play in CISS in chemical reactions~\cite{Banerjee2018} has not been sufficiently explored yet. We believe that consolidating the field, in the sense of bringing theory and experiment much closer, with the goal of achieving a detailed microscopic understanding of CISS, is an ongoing challenge that provides many research opportunities. To this end, one can perhaps concentrate on model systems, where CISS can be studied in great detail in both experiment and theory. \section{Survey of Experimental CISS Studies} We start our considerations by presenting a brief survey of the status of CISS experimental work. This survey is not meant to be comprehensive. Rather, it aims to provide sufficient context for a meaningful analysis of the advantages and disadvantages of various theoretical approaches to CISS. For additional aspects of CISS studies, the reader is referred to past review articles, and references therein~\cite{Naaman2015,Michaeli2016,Fontanesi2018,Naaman2019,Pop2019,Naaman2020,Waldeck2021,Yang2021}. Relations between chirality and magnetic phenomena have a long history. Magnetically induced optical activity in crystals, as well as natural optical activity in chiral crystals, have been known since the nineteenth century, leading Pasteur himself to search, unsuccessfully, for a link between the two~\cite{Nakanishi1994}. It was not until 1997, however, that this link was finally found in the form of magneto-chiral dichroism, i.e., a difference in the magnetic optical activity of the two enantiomers of a chiral medium~\cite{Rikken1997}. CISS, discovered only two years after that~\cite{Ray1999}, takes a significant step further by establishing a direct link between chirality and spin, even in the absence of an external magnetic field and/or circularly polarized illumination. 
Generally speaking, CISS effects that have been observed since the original discovery can be divided into three broad categories: The first - also historically - involves transmission of unbound electrons (typically photoexcited from an underlying substrate) through a chiral medium to vacuum. The second - where most work to date has been done - involves transport of bound electrons, between leads, through a chiral medium. The third - and relatively new - category concerns relations between electron spin and chemical reactions. \subsection{CISS in electron transmission} CISS in electron transmission was first observed experimentally by Ray {\it et al.}~\cite{Ray1999}, who considered how electrons, emitted from an Au substrate upon excitation with circularly polarized light, are transmitted through an organized adsorbed monolayer comprising chiral molecules. They found that electrons excited using clockwise (cw) light exhibited a significant asymmetry in the transmission probability as compared to those excited with counter-clockwise (ccw) light, with electrons excited using linearly polarized light exhibiting an intermediate transmission probability~\cite{Ray1999}. This is demonstrated here using the later work of Carmeli {\it et al.}~\cite{Carmeli2002}, shown in Fig.\ \ref{fig:transmission}a. In these studies, the illumination (cw or ccw) for which transmission through a polyalanine layer is preferred was found to depend on the handedness of the peptide (L or D, which exhibit left- and right-handed chirality, respectively). Based on the well-known connection between the direction (cw or ccw) of circularly polarized light and the spin polarization (up or down) of the excited electrons, it was inferred that the molecular chirality induces spin selectivity in the transmission, i.e., CISS is observed. This conjecture, while reasonable, took another 12 years to verify.
This was finally achieved by G\"ohler {\it et al.}~\cite{Goehler2011}, who used a Mott polarimeter to measure directly the spin of photoelectrons transmitted from an Au substrate through a monolayer of double-stranded DNA. As shown in Fig.\ \ref{fig:transmission}b, significant spin-polarization was found even when the photoelectrons were generated with linearly polarized light. Further direct confirmation for the CISS effect in transmission came from the work of Ni\~no {\it et al.}~\cite{Nino2014}, who found significant differences in spin polarization between two enantiomers of the same molecule (1,2-diphenyl-1,2-ethanediol). \begin{figure*} \includegraphics[width=8cm]{Figures/Fig1a.pdf} \includegraphics[width=8cm]{Figures/Fig1b.pdf} \caption{(a) Energy distribution for photoelectrons transmitted through (i) L, C-terminus connected, (ii) D, N-terminus connected, and (iii) D, C-terminus connected helical polyalanine films and excited using cw circularly (negative spin polarization; red, solid), ccw circularly (positive spin polarization; blue, dashed), or linearly (no spin polarization; black, dotted) polarized light. Taken from Ref.\ \protect\onlinecite{Carmeli2002}, used with permission. (b) Photoelectron polarization, measured for electrons ejected from a Au-coated substrate with a monolayer of 78-base pair double-stranded DNA, for cw circularly polarized light [(-54.5 $\pm$ 7.0)\%; top, green], linearly polarized light [(-57.2 $\pm$ 5.9)\%; middle, blue], and ccw circularly polarized light [(-60.8 $\pm$ 5.8)\%; bottom, red]. Taken from Ref.\ \protect\onlinecite{Goehler2011}, used with permission.} \label{fig:transmission} \end{figure*} Importantly, the essential effect does not depend on the specifics of the molecule and has been observed, e.g., in DNA of varying lengths, different oligopeptides, and helicenes (see Ref.\ \onlinecite{Nurenberg2019} and references therein).
Interestingly, heptahelicene, composed of only carbon and hydrogen atoms and exhibiting only a single helical turn, was found to already show a longitudinal spin polarization of about 6\% to 8\% in the transmission of initially spin-balanced electrons~\cite{Kettner2018}. Note that, as also shown in Fig.\ \ref{fig:transmission}a for polyalanine, the direction of spin polarization can be inferred to depend not only on the handedness but also on the molecular dipole direction, as it changes sign depending on whether the molecule is bound to the substrate through the N-terminus (amine group side) or C-terminus (carboxyl group side)~\cite{Carmeli2002}. \subsection{CISS in electron transport} The above studies suggest that spin selectivity could be observed also for bound electrons traveling through a chiral medium, i.e., also for electron transport and not just for electron transmission. Experimental verification of this conjecture, however, requires proper contacting of the chiral medium - often an organic molecular layer - to metallic leads, which is not trivial. This was first accomplished by using a ferromagnetic substrate to inject spin (in this case to a DNA layer), with the tip of a conductive-probe atomic force microscope (CP-AFM) serving as the top contact~\cite{Xie2011}. As chiral media are often grown on non-magnetic substrates, more generally a magnetic CP-AFM (mCP-AFM) tip can be used, as shown in Fig.\ \ref{fig:transport}a~\cite{Lu2019}. Many other techniques have been used to detect spin imbalance due to CISS in electron transport. Two notable ones are magnetoresistance in a film where at least one electrode is ferromagnetic~\cite{Ravi2013,BenDor2014} and electrochemical measurements with a ferromagnetic electrode~\cite{Mishra2013}. Several recent observations of CISS in electron transport are directly relevant for the theoretical discussion that follows. 
First, a sizable signal interpreted as being due to the CISS effect has been reported in many articles~\cite{Naaman2015}. The effect has been observed in electron transport through many media, including different types of chiral molecular layers (including biological ones)~\cite{Xie2011,Abendroth2017,Mishra2019,Jia2020}, carbon nanotubes~\cite{Alam2015,Alam2017}, and chiral materials~\cite{Kulkarni2020,Lu2019,Lu2020,DiNuzzo2020,Mondal2020,Waldeck2021}. It has even been reported in single-molecule experiments using a break-junction~\cite{Aragones2017}. Second, it has been repeatedly demonstrated that the effect generally increases with medium length~\cite{Mishra2020} and that it can be very large. For example, a spin-selectivity of up to 80\% was reported for electrons traversing ca.\ 2--6 $\mu$m-long self-assembled superhelical conducting polyaniline micro-fiber channels at room temperature~\cite{Jia2020}. Spin polarization exceeding 85\% was achieved using $\pi$-conjugated molecular materials based on coronene bisimide and tetra-amidated porphyrin cores appended with alkoxyphenyl groups~\cite{Kulkarni2020}. Even higher numbers were reported in recent studies of spin-dependent charge transport through 2D chiral hybrid organic-inorganic perovskite materials. Using (R-/S-)methylbenzylammonium (MBA) lead-iodide (see Fig.\ \ref{fig:transport}), a spin polarization in transport of up to 86\% was obtained~\cite{Lu2019}, and with (R-/S-)MBA-tin-iodide the efficiency was as high as 94\%~\cite{Lu2020}. Third, as in CISS in transmission, also here the sign of the preferred spin depends on the direction of the molecular dipole~\cite{Eckshtain2016}, in addition to the usual dependence on the handedness.
Fourth, from a mechanistic point of view, two observations are important: CISS is repeatedly found to correlate with optical activity~\cite{Kulkarni2020,Mishra2020,Bloom2017} and CISS is prominent and clearly detected in the non-linear current-voltage regime (as also seen in Fig.\ \ref{fig:transport})~\cite{Kiran2017,Naaman2020,Yang2020}. We also note that the spin-selectivity is increasingly explored practically for demonstrating spintronic effects and devices that can be used for logic and memory~\cite{Michaeli2017,Naaman2019}. Two recent examples are a spin filter~\cite{Suda2019} and magnetless Hall voltage measurements~\cite{Eckshtain2016,Mishra2019}. Finally, it is important to note that a constructive and at the same time critical debate scrutinizing all measurements is still ongoing. In particular, CISS is not always observed in transport through chiral media and at present a broad consensus as to the precise experimental conditions under which a specific manifestation of CISS is expected does not exist. \begin{figure*} \centering \includegraphics[width=12cm]{Figures/Fig2.pdf} \caption{(A) Setup for mCP-AFM room-temperature measurements of the chirality dependence in out-of-plane charge transport through chiral $\sim$50 nm two-dimensional hybrid perovskite (MBA)$_2$PbI$_4$ thin films deposited on fluorine-doped tin oxide (FTO) substrates. (B-D) Current-voltage curves for S (left-handed chiral), achiral, and R (right-handed chiral) films, with the tip magnetized north (blue), south (red), or not magnetized (black). The curves for each film were averaged over 100 scans and the shaded region around the lines marks the 95\% confidence limits for the average results. 
Taken from Ref.\ \protect\onlinecite{Lu2019}, used with permission.} \label{fig:transport} \end{figure*} \subsection{CISS in chemical reactions} Beyond electron transmission or transport through a chemically stable medium, a third category of CISS uses the electron spin as an enantio-selective chemical reagent~\cite{Naaman2019b,Metzger2020}. The origins of this idea can be traced back to the work of Rosenberg {\it et al.}~\cite{Rosenberg2008}. They showed that the use of low-energy spin-polarized secondary electrons, produced by irradiation of a magnetic substrate, results in different bond cleavage rates for R and S enantiomers of a chiral molecule adsorbed on the substrate. Based on CISS in transmission, the magnetic substrate was later replaced by a non-magnetic substrate with a chiral DNA overlayer acting as a spin filter, with similar consequences for enantio-selective chemistry~\cite{Rosenberg2015}. In the same manner, one can use CISS in transport, typically in an electrochemical setting, to employ spin in order to promote chemical reactions. For example, it was shown that when electrochemical water splitting occurs with an anode that accepts preferentially one spin owing to CISS, the process is enhanced and the formation of hydrogen peroxide is diminished~\cite{Mtangi2015,Mtangi2017,Ghosh2019,Ghosh2020}. More recently, using Hall voltage measurements it was observed that charge displacement in chiral molecules (in this case L- and D-oligopeptides) creates transient spin polarization~\cite{Kumar2017}, which in turn imparts an enantio-selective inter-molecular interaction even without electron injection. This transient spin-polarization is then also expected to affect properties at a ferromagnet/chiral molecule interface via spin-exchange interactions. The most dramatic demonstration of this principle so far was the separation of enantiomers by their interaction with a magnetic substrate, shown in Fig.\ \ref{fig:enantio}~\cite{Banerjee2018}.
The same idea, with appropriate experimental modifications, was then used for enantioselective crystallization of amino acids~\cite{Tassinari2019}. We also note that an inverse phenomenon, namely magnetization reversal in a thin-film ferromagnet solely by chemisorption of a chiral molecular monolayer, has also been reported~\cite{BenDor2017}, with the effect possibly persisting for extended periods of time (hours)~\cite{Meirzada2021}. \begin{figure*} \centering \includegraphics[width=12cm]{Figures/Fig3.pdf} \caption{Adsorption of a polyalanine oligopeptide [shown in inset of (v)] on ferromagnetic samples (silicon with a 1.8-nm Co film and a 5-nm Au film), magnetized with the magnetic dipole pointing up (H+) or down (H-) relative to the substrate surface. SiO$_2$ nanoparticles were attached to the adsorbed oligopeptides. Panels (i,ii) and (iii,iv) exhibit L-polyalanine and D-polyalanine, respectively, adsorbed for 2 s on a substrate magnetized up (i,iii) or down (ii,iv). Panel (v) summarizes the nanoparticle adsorption densities shown in (i) to (iv), compared with the adsorption density on Au with an applied external magnetic field (red bars). Double-headed arrows represent error bars. The errors are the standard deviation among 10 measurements conducted on each of the 10 samples, hence a total of 100 measurements. Taken from Ref.\ \protect\onlinecite{Banerjee2018}, used with permission.} \label{fig:enantio} \end{figure*} To summarize, this short survey of experimental work shows that CISS is an important fundamental effect, with many manifestations and various practical consequences in a number of areas, from spintronic devices to chemical reactions. There is therefore great merit in theoretical understanding of CISS origins. \section{Status of Theoretical CISS Studies} Inspired by the early experiments on CISS in transmission, theoretical studies have focused initially on scattering theory of photoelectrons off helical potentials.
This was followed by a theoretical and computational focus on CISS in electron transport through helical wires, which constitutes the bulk of theory reported so far. Theory related to the more recent CISS in chemical reactions has been much more limited and is only now starting to emerge. We now provide a critical discussion of these efforts. \subsection{Theory of CISS in transmission} Historically, scattering off asymmetric potentials is a theoretical topic almost as old as quantum mechanics itself. Of special importance has been the scattering of light, where Ref.\ \onlinecite{Condon1937} is an early example. In particular, motivated by the need to provide a theoretical underpinning for the detection of chemical helicity with polarized light, scattering of light by {\it helical} potential shapes has been a longstanding topic, especially in theoretical chemistry and biophysics~\cite{Schmidt1970,Bustamante1980}, but also in the electromagnetic theory of chiral materials~\cite{Psilopoulos2005}. The scattering of electrons off helical - or chiral - obstacles appears to have received much less attention but more recently has been motivated by CISS experiments. An analytical scattering theory, which includes the effect of SOC, has been developed for electrons traversing a helical molecule~\cite{Yeganeh2009} or a self-assembled monolayer (SAM) thereof~\cite{Medina2012,Eremko2013,Varela2014}, or even a chiral molecule that is not necessarily helical~\cite{Ghazaryan2020}. Qualitatively, SOC is indeed found to induce spin-polarization. We note that dissipation can also polarize angular momenta. This can be illustrated within a classical scattering model, where the role of spin is played by classical angular momentum and SOC is replaced by friction~\cite{Hod2020}, leading us to speculate that a similar mechanism could also contribute to CISS.
Unfortunately, for realistic model parameters, in particular for the SOC, the resulting polarization is too small~\cite{Gersten2013}, much smaller in magnitude than that found in some of the above-discussed experiments. \begin{figure} \includegraphics[width=7cm]{Figures/Fig4a.pdf} \includegraphics[width=8cm]{Figures/Fig4b.pdf} \caption{(a) Sketch of a helical scattering potential (black), indicating the pitch (p) and the polar scattering angle, $\theta$. (b) Dependence of the (normalized) scattering cross-section on the angular momentum of the incident particle, for wires of increasing length $L$. Depending on clockwise or counter-clockwise entry ($n\pm 1$), the scattering cross-section differs by two orders of magnitude. Potential parameters have been adjusted to the case of DNA; impinging energy is 0.5 eV. Taken from Ref.\ \protect\onlinecite{Gersten2013}, used with permission. } \label{fig:scattering} \end{figure} In light of this discrepancy, Gersten {\it et al.}~have extended the scattering approach~\cite{Gersten2013} to include the SOC of the substrate that supports the SAM. This introduces a concept of ``induced'' spin filtering, which expands the notion of ``current transfer'', developed earlier by Skourtis {\it et al.}~\cite{Skourtis2008}. The main idea is that an electron migrating through an obstacle retains a ``memory'' of its initial momentum. In the context of spin-filtering, this idea is used for angular momentum, implying that the helical molecule is ``filtering'' angular momentum. This means that spin-filtering occurs if the angular momentum of the impinging electron was at least partly spin-selected to begin with, as is indeed the case with a substrate possessing strong spin-orbit coupling - see Fig.\ \ref{fig:scattering}. 
While the authors expressed a hope that their theory could at least roughly account for the magnitude of the experimental observations~\cite{Gersten2013}, there appears to be no consensus concerning this claim~\cite{Matityahu2017}. Experimentally, CISS has been observed using substrates with negligible SOC~\cite{Mishra2013,Kettner2016,Kettner2018}. Therefore substrate SOC may indeed be an important contribution where it exists, but cannot explain the whole effect. \subsection{Theory of CISS in transport} As discussed above, many \ciss-related transport phenomena have been reported experimentally, often through the measurement of spin-resolved current-voltage (I-V) characteristics of electrons flowing through a chiral medium, as for example in Fig.\ \ref{fig:transport} above. It is therefore only natural that much of the theoretical effort has been focused in the same direction. Before considering any specifics of these efforts, we consider what basic aspects distinguish scattering and transport experiments on a conceptual level. Taking a rough perspective, the passage of electrons from a source to a drain appears similar for bound and unbound particles: in both cases electrons migrate through a region with obstacles and therefore conventional scattering terminology applies in either situation, emphasizing a notion of similarity. However, significant conceptual differences enter upon considering that: (i) The passage of bound particles through a thin wire is quasi-one-dimensional, while scattering of an unbound particle is intrinsically three-dimensional; (ii) The bound particle experiences the properties of the underlying material much more strongly than the unbound particle. For example, the electron dispersion relation will, in general, no longer be parabolic and interactions with other degrees of freedom (e.g., inelastic ``multi-scattering processes'') tend to be much stronger.
We begin by recalling that general conditions under which a two-terminal device can be expected to exhibit spin filtering, even in principle, have been worked out in the field of spintronics. This analysis includes CISS as a special case thereof and therefore provides an important framework for our discussion. We emphasize two basic facts arising from it: (i) It is well known that in single-channel wires SOC can be removed by a gauge transformation~\cite{Meyer2002}. Mathematically, this is because SOC can be written as an SU(2) gauge field. Intuitively, this means that due to the absence of loops (and magnetic fields) the spin rotates in one-to-one correspondence with a position-space shift. Because the spin degree of freedom can be gauged out, a spin-filtering functionality based solely on SOC is not expected~\cite{Meyer2002}. A particular consequence of this is that spin-filtering owing to SOC is not expected to arise in tight-binding models of single-stranded DNA that afford only a single orbital per site. Therefore, Refs.\ \onlinecite{Guo2012, Gutierrez2013, Matityahu2016} have emphasized the importance of two channels for the observation of {\ciss}. (ii) Even in two-terminal molecular junctions supporting several channels, spin-selectivity is still suppressed in the linear regime because of time-reversal symmetry, as emphasized in Ref.\ \onlinecite{Yang2019} (and see Refs.\ \onlinecite{Naaman2020c,Yang2021c} for additional discussion). The proof of this claim uses an Onsager-type argument: Consider a two-terminal device with a non-magnetic source. Let the drain exhibit magnetism, $\bM$, so that it can act as a spin-analyzer, but there is no magnetic field otherwise. The argument proceeds via {\it reductio ad absurdum:} Assume that the device could act as a spin-filter. Then the conductance would be sensitive to the direction of $\bM$ in the analyzer; in particular $G(\bM) \neq G(-\bM)$.
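Point (i) can be made concrete with a minimal sketch (our toy illustration, not taken from the cited works; representing the SOC as a spin-dependent hopping phase is the modeling assumption): in a single-channel open chain with hoppings $t e^{\pm i\theta}$ for spin up/down, the site-local gauge $u_k = e^{-ik\sigma\theta}$ maps both spin sectors onto one and the same spin-independent chain, so the two spin species have identical spectra and transmissions and no spin filtering can occur.

```python
import cmath

def soc_chain(n, t, sigma, theta):
    # Single-channel open chain in which SOC enters only as a spin-dependent
    # hopping phase t * exp(i * sigma * theta), sigma = +1 (up) or -1 (down).
    h = [[0j] * n for _ in range(n)]
    for i in range(n - 1):
        h[i][i + 1] = t * cmath.exp(1j * sigma * theta)
        h[i + 1][i] = t * cmath.exp(-1j * sigma * theta)
    return h

def gauge_transform(h, sigma, theta):
    # Apply the site-local gauge u_k = exp(-i*k*sigma*theta), i.e. return
    # U^dagger H U with U = diag(u_0, ..., u_{n-1}).
    n = len(h)
    u = [cmath.exp(-1j * k * sigma * theta) for k in range(n)]
    return [[u[m].conjugate() * h[m][k] * u[k] for k in range(n)]
            for m in range(n)]

n_sites, t, theta = 8, 1.0, 0.7
plain = soc_chain(n_sites, t, +1, 0.0)  # ordinary chain with real hopping t
for sigma in (+1, -1):
    transformed = gauge_transform(soc_chain(n_sites, t, sigma, theta),
                                  sigma, theta)
    # both spin sectors are gauge-equivalent to the same spin-independent chain
    assert all(abs(transformed[m][k] - plain[m][k]) < 1e-12
               for m in range(n_sites) for k in range(n_sites))
```

Since the transformation is exact, making the SU(2) phases observable requires breaking the premises of the argument, e.g., by adding a second channel, as in the models of Refs.\ \onlinecite{Guo2012, Gutierrez2013, Matityahu2016}.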
However, an Onsager relation protected by time-reversal invariance implies $G(\bM)=G(-\bM)$, leading to a contradiction and thereby proving the claim. We also note that a special case of both of the above statements, for non-interacting single-channel wires in the linear regime, is known as the ``single-channel no-go theorem''~\cite{Kiselev2005,Bardarson2008}. Below we survey many studies that have reported chirality-selective spin transport in models of single-channel wires. In many (but not all~\cite{Guo2012}) cases, violation of these restrictions is reported. A detailed analysis of each individual theoretical/computational model is beyond the scope of this overview. However, it is of utmost importance to assess model predictions against the above general restrictions~\cite{Yang2019,Entin2021}. In some cases, violations may be rationalized in terms of the computed quantities or the fundamental model assumptions (some examples are given below), while in others this may reveal mistakes in the analysis. A considerable number of studies based their analysis on transmission calculations for tight-binding models~\cite{Guo2012,Guo2012b,Gutierrez2012,Gutierrez2013,Guo2014,Guo2014b,Varela2016,Geyer2019,Sierra2020}. In an early study, Gutierrez {\it et al.}~\cite{Gutierrez2012} considered numerically a single-channel tight-binding model with nearest neighbor hopping and \soc. They reported a very large spin polarization near the band edges, reaching up to 100\%. The degree of spin polarization obviously depends on the tight-binding model parameters. The authors motivated their choice based on DNA, with the hopping parameter reported to be in the range of 20--40 meV and with chirality entering the model indirectly via its feedback into the {\soc}. To determine the latter, a heuristic argument was exploited, which yields typical values of about 2 meV for the coupling strength of light atoms (C, B, N, O).
Indeed, scales of meV can be reached with light elements, {\it e.g.}, when promoting a carbon atom in graphene from sp$^2$- to sp$^3$-hybridization~\cite{Gmitra2009}. However, as compared to this promotion, the chirality-induced symmetry breaking should be weaker by a geometric factor that incorporates the helical parameters of pitch and diameter. Clearly, further insight into a quantitative estimate of the effect can come from first-principles studies that do not utilize model parameters. However, these are challenging because full helicity usually implies large molecules that need to be treated at a level of theory that includes proper relativistic corrections to the electronic structure. Facing these difficulties, Maslyuk {\it et al.}~\cite{Maslyuk2018} have attempted a first-principles transport calculation for a DNA molecule. In their calculations, the {\soc} has been implemented employing pseudopotentials and the zeroth-order regular approximation (ZORA)~\cite{vanLenthe1994}. Spin-filtering of an $\alpha$-helix was compared to that of a $\beta$-strand, the latter corresponding to an enforced linear geometry. The authors found that spin-filtering was stronger in the helical conformation, as compared to the linear one, which is expected due to the symmetry reduction. Quantitatively, however, the observed polarization was an order of magnitude below the experimental reports. This observation has been shared by Rebergen and Thijssen~\cite{Rebergen2019}: At a qualitative level, the existence of {\ciss} is confirmed, but quantitatively the native {\soc} in the molecules considered appears to be too small, possibly by an order of magnitude, in order to account for the experimental results. Thus, an additional and important challenge is the same quantitative issue faced by the theory of CISS in transmission.
In light of this difficulty, and inspired by the early work of Gersten {\it et al.}~\cite{Gersten2013}, Liu {\it et al.}~\cite{Liu2021} proposed an orbital-polarization model with SOC from the electrode to interpret CISS in transport. In their model, by going through the chiral molecule, electrons become orbital-polarized (an effect also obtained in model calculations~\cite{Michaeli2019}) and the orbital polarization is converted to spin polarization by the SOC in the electrodes. This leads, in the non-linear regime, to unidirectional magnetoresistance, rationalizing CISS to some extent. Quantitative issues notwithstanding, the vexing problem of the microscopic origins of \ciss-type phenomena in transport has motivated many authors to investigate situations that definitively avoid the Onsager-based no-go theorem. In a recent example, Utsumi {\it et al.} calculated time-reversal symmetric charge and spin transport through a molecule comprising two-orbital channels and connected to two leads. They demonstrated that spin-resolved currents are generated when spin-flip processes are accompanied by a flip of the orbital channels~\cite{Utsumi2020}. Guo {\it et al.}~\cite{Guo2012,Guo2014} (and, in a different context, independently also Ref.\ \onlinecite{Liu2021}) and Ref.\ \onlinecite{Matityahu2013} have suggested a ``symmetry workaround'', further explored in Refs.\ \onlinecite{Matityahu2016,Matityahu2017}. This involves a third bath that the electrons traversing the chiral molecule may couple to. The bath gives rise, in general, to non-unitary effects such as `dephasing' or `leakage', such that time-reversal symmetry is effectively broken, Onsager's theorem no longer applies, and spin polarization ensues. An example is shown in Fig.\ \ref{fig:leakage}. Technically, this effect has been modeled in these studies by introducing an anti-hermitian self-energy ${\mathfrak i} \Gamma_\text{d}$. This was found to bring about spin-selective transport in the presence of {\soc}.
An intuitive understanding of this finding can proceed from the observation that, in the presence of {\soc} and leakage, evanescent waves associated with opposite spins have different decay lengths~\cite{Matityahu2016}. \begin{figure}[ht!] \centering \includegraphics[width=7.5cm]{Figures/Fig5a.pdf} \includegraphics[width=8cm]{Figures/Fig5b.pdf} \caption{(a) Tight-binding model of a single helical molecule with radius $R$, pitch $h$, and twist angle $\varphi$. Electrons can hop between adjacent sites along the helix with hopping amplitude $J$ or vertically to the $N^\mathrm{th}$ neighbor with hopping amplitude $\tilde{J}$. Spin-orbit interaction is assumed to act only between nearest neighbor sites. (i) Schematic view of the helical molecule. (ii) Mapping of the model onto a one-dimensional chain of $M$ unit cells, each containing $N$ sites. (b) Spin polarization (solid blue) in a chain of $M$=6 unit cells (other parameters are $N$=2, $J$=1.5$J_0$, $\tilde{J}$=0.6$J_0$, with a lateral component $J_x$=0.2$J_0$, using a complex self-energy). (i) as a function of energy (in units of $J_0$), with $\theta$, an indicator of SOC strength, given by 0.4$\pi$. (ii) as a function of $\theta$, with $E$=0 (center of the band). The spin polarization vanishes (dashed magenta) if either $J_x$=0 (unitary chain), $\tilde{J}$=0 (nearest neighbor chain), or $\theta$=0 (no SOC). Inset: Reflection (black) and transmission (green) coefficients for spin up (solid) and spin down (dashed). The red line is the sum of these coefficients. Taken from Ref.\ \protect\onlinecite{Matityahu2016}, used with permission. } \label{fig:leakage} \end{figure} Quantitatively, this approach typically uses~\cite{Guo2012,Guo2012b,Guo2014,Guo2014b,Pan2015} model parameters similar to those of Ref.\ \onlinecite{Gutierrez2012}, so that a quantitative uncertainty carries over. Furthermore, the overall magnitude of the filtering effect thus brought about is very sensitive to the leakage rate $\Gamma_\text{d}$.
While the very existence of this rate is physically well motivated, its magnitude is still difficult to establish in realistic terms. The rates employed in the simulations to achieve spin-polarization in the experimentally reported regime are usually not small, typically a few percent of the hopping integral. The overall effect thus achieved still appears to be too small to explain the main experimental features and, since the resulting leakage is very significant, one would further expect sizable resistance effects. A different approach to the idea of a bath has been proposed recently by Volosniev {\it et al.}, who suggested that the {\ciss}-effect is a many-body phenomenon that arises from a bath that manifests in the carrier dynamics as friction~\cite{Volosniev2021}. To illustrate their idea, they adopted a qualitative single-channel model, in which friction enters as an effective electric field that is proportional to the product of friction constant and momentum expectation value. The latter is non-vanishing in the presence of spin-orbit interaction and points in opposite directions for different spin orientations. By construction, the model produces a quasi-stationary state with spins pointing in opposite directions for a wire of finite length. While the main idea is transparent, the relation to the {\ciss}-effect remains uncertain for two reasons: (a) The microscopic source of the friction, and therefore the relevance of the qualitative model, remains an open issue. (b) Spin-separation is brought about by a friction-controlled dynamical process, which requires the supply of a sufficient amount of energy, the source of which remains unspecified. A natural candidate for a physical bath that participates in the carrier dynamics is given by the atomic nuclei~\cite{Fransson2020,Fransson2021,Zhang2020}.
As recently pointed out~\cite{Wu2021,Bian2021}, the effect of the nuclei on electronic spin separation can be enhanced due to conical intersections, so that the effect could indeed contribute to the {\ciss}-phenomenon. Exploring a different approach, Dalum and Hedeg{\aa}rd~\cite{Dalum2019} considered situations in which the source feeds a current into the device, i.e. the helical molecule, with an occupation of incoming scattering states that is out of equilibrium. In fact, such a situation arises very naturally when photo-electrons traverse a helical SAM; it is, however, more difficult to motivate in the context of conventional conductance experiments. Dalum and Hedeg{\aa}rd point out, in addition, that even after a workaround has been implemented, one still faces the problem that {\soc} is small as compared to all other native energy scales, so a sizable polarization is not expected. To overcome this difficulty, the authors invoke degeneracies, the consequences of which they study by adopting helical polyacetylene as a paradigm system. An altogether different approach towards understanding \ciss-type effects has been undertaken by Yang {\it et al.}~\cite{Yang2019}. They take the existence of \ciss-effects for granted and construct, based on this assumption, phenomenological models that describe typical {\ciss} measurements. The model is formulated in terms of different versions of transfer matrices that represent different elements of the measurement circuit, such as magnetic and non-magnetic barriers, the helical molecule, etc. Due to the model's simplicity, analytical calculations are feasible. By construction, Onsager's reciprocity theorems are satisfied by the model. Therefore, an explicit calculation of the two-terminal conductance yields the expected negative result, i.e., no spin-filtering in a linear-response two-terminal calculation. However, a positive result is found in multi-terminal calculations.
A puzzling aspect of Ref.\ \onlinecite{Yang2019} is that on the one hand the existence of {\ciss} is deduced from experiments, which have been performed in two-point geometries, while on the other hand it is found that two-point conductance measurements will not show the {\ciss} effect. Routes escaping this dilemma are proposed in Ref.\ \onlinecite{Yang2020}. Specifically, two natural routes are discussed: (i) The reciprocity theorem makes no statement about non-linear effects; therefore, traces of {\ciss} can exist, and have been identified in Ref.\ \onlinecite{Yang2020}, also in two-terminal measurements if they are operated in the non-linear regime of bias voltages. (ii) Also, the reciprocity theorem no longer applies if time-reversal invariance is broken. Hence, one might expect that aligning two helical molecules in series with a small resistor in between will yield different results, depending on whether the molecules have the same or opposite helicities. Along this idea another set of experiments has been proposed in Ref.\ \onlinecite{Yang2020}, which could show CISS in a two-terminal setup, here even in the linear regime. Finally, we mention a conceptually interesting field-theoretical approach towards {\ciss} that has recently been put forward~\cite{Shitade2020}. It considers the Dirac equation in a one-dimensional curved space-time. In this framework, the usual Foldy-Wouthuysen transformation can be used in order to project into the non-relativistic (low kinetic energy) sector. The authors show that as a result of curvature, the kinetic energy acquires an SU(2)-gauge field, which plays a role analogous to the {\soc} in single-channel wires. Building on this observation, the authors apply their theory to transport in helical molecules, where the (quasi-)one-dimensional molecule is interpreted as a physical realization of a curved space-time for the traversing electron.
The basic idea of this application is conceptually appealing, but the space-time considered in Ref.\ \onlinecite{Shitade2020} carries only a single parameter, the curvature $\kappa$ of the helical path. As a consequence, the theory does not describe the three-dimensional nature of experimental observations, which manifests itself in the emergence of molecular properties as atoms are gradually added one by one. Furthermore, the results depend on the order of limits (first dimensional reduction, then non-relativistic limit), a point also emphasized by Geyer {\it et al.}~\cite{Geyer2020}. \subsection{Theory of CISS in chemical reactions} As explained above, CISS in chemical reactions is a very new field, even experimentally. Accordingly, theory is still limited. First principles calculations have repeatedly shown that if one assumes that chirality indeed begets spin polarization, then the latter can explain chemical enantio-selectivity~\cite{Kumar2017,Banerjee2018}. Recent first principles calculations for a chiral monolayer on a magnetic substrate provided first indications of an emergent electronic structure and emphasized the role of exchange interactions~\cite{Dianat2020}. However, the degree to which the emergent structure is affected by the choice of density functional~\cite{Eckshtain2016} was not investigated. In addition, chiral symmetry breaking was enforced ``manually'' via introduction of an external Ni atom, so that the effect of the intrinsic chirality remains to be clarified. To the best of our knowledge, a more complete theoretical framework that describes how CISS emerges in such scenarios has yet to be provided.
\section{Introduction} \label{sec_intro} Polar codes build on channel polarization to efficiently achieve the capacity of symmetric channels (refer to \cite{arikan_channel_2009,arikan_rate_2009,sasoglu_polarization_2009, korada_polar_2010} for detailed presentations). Channel polarization is a method that takes two independent binary-input discrete memoryless channels $W(y|x)$ to a \textit{bad} channel and a \textit{good} channel, given by \begin{align} W(y_1^2|u_1) &= \sum_{u_2 \in \qty{0,1}}W(y_2 | u_2) W(y_1 | u_1 \oplus u_2), \label{eq:bad_channel} \\ W(y_1^2, u_1 |u_2) &= W(y_2 | u_2) W(y_1 | u_1 \oplus u_2) \label{eq:good_channel} \end{align} respectively, where $x_a^b = (x_a, x_{a+1} \ldots x_b)^\top$. These channels are obtained by combining two copies of $W(y|x)$ with a CNOT gate $(u_1,u_2)\rightarrow (u_1\oplus u_2, u_2)$ and then successively decoding bits $u_1$ and $u_2$. That is, output bit $u_1$ is decoded first assuming that $u_2$ is erased. Then bit $u_2$ is decoded taking into account the previously decoded value of $u_1$. Polar codes are obtained by recursing this process to obtain $2^l$ different channels from the polarization of $2^{l-1}$ pairs of channels (\fig{subfig_circ_2_1}). As the number of polarization steps $l$ goes to infinity, the fraction of channels for which the error probability approaches 0 tends to $I(W)$ and the fraction of channels for which the error probability approaches 1 tends to $1 - I(W)$, where $I(W)$ is the mutual information of the channel with uniform distribution of the inputs \cite{arikan_channel_2009}. Thus, polar codes are capacity achieving for those channels. The above construction can be generalized by replacing the CNOT transformation by a different polarization kernel \cite{korada_polar_2010-1}. See \sec{kernels} for details. The kernel can generally take as input more than two copies of the channel $W(y|x)$ and the {\em breadth} $b$ of a kernel is defined as the number of channels it combines.
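As a concrete numerical illustration of \eq{bad_channel} and \eq{good_channel}, the Python sketch below (a $\frac 12$ normalization factor is included so that each conditional output distribution sums to one, which the equations above omit) verifies, for a binary symmetric channel, that the symmetric mutual information is conserved and polarizes: $I(W^-) < I(W) < I(W^+)$ with $I(W^-)+I(W^+) = 2I(W)$.

```python
import numpy as np

def polarize(W):
    """Given a binary-input channel as a dict W[x] = distribution over outputs,
    return the 'bad' and 'good' channels of Eqs. (bad_channel)-(good_channel),
    with a 1/2 factor so each conditional output distribution sums to one."""
    ny = len(W[0])
    Wm = {u1: np.zeros((ny, ny)) for u1 in (0, 1)}       # output (y1, y2)
    Wp = {u2: np.zeros((ny, ny, 2)) for u2 in (0, 1)}    # output (y1, y2, u1)
    for u1 in (0, 1):
        for u2 in (0, 1):
            joint = 0.5 * np.outer(W[u1 ^ u2], W[u2])    # P(y1, y2 | u1, u2)/2
            Wm[u1] += joint                              # u2 marginalized
            Wp[u2][:, :, u1] = joint                     # u1 part of the output
    return Wm, Wp

def sym_capacity(W):
    """Mutual information I(W) for a uniform input distribution."""
    p0, p1 = W[0].ravel(), W[1].ravel()
    mix = 0.5 * (p0 + p1)
    def term(p):
        mask = p > 0
        return 0.5 * np.sum(p[mask] * np.log2(p[mask] / mix[mask]))
    return term(p0) + term(p1)

eps = 0.25
bsc = {0: np.array([1 - eps, eps]), 1: np.array([eps, 1 - eps])}
Wm, Wp = polarize(bsc)
Im, I0, Ip = sym_capacity(Wm), sym_capacity(bsc), sym_capacity(Wp)
print(Im, I0, Ip)        # bad channel < original < good channel
print(Im + Ip, 2 * I0)   # conservation: I(W-) + I(W+) = 2 I(W)
```

One polarization step therefore redistributes, but does not create or destroy, mutual information; recursing the transform drives the synthesized channels toward the extremes.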
An increasing breadth offers the possibility of a more efficient polarization (i.e. a lower decoding error probability), but has the drawback of an increased decoding complexity. \begin{figure}[!t] \centering \subfloat[]{ \centering \includegraphics[width=1.25in]{polar_codes_circ.pdf} \label{fig:subfig_circ_2_1}} \quad \subfloat[]{ \centering \includegraphics[width=1.83in]{circ_b_3_sub_1.pdf} \label{fig:subfig_circ_3_1}} \quad \\ \subfloat[]{ \centering \includegraphics[width=1.25in]{circ_b_2_sub_2.pdf} \label{fig:subfig_circ_2_2}} \quad \subfloat[]{ \centering \includegraphics[width=1.83in]{circ_b_3_sub_4.pdf} \label{fig:subfig_circ_3_4}} \caption{Examples of regular (depth=1) and convolutional (depth$>1$) polar code circuits. The parameters (breadth, depth, polarization steps) are \textbf{(a)} (2,1,4), \textbf{(b)} (3,1,3), \textbf{(c)} (2,2,4) and \textbf{(d)} (3,3,3).} \label{fig:fig_circuits} \end{figure} Another possible generalization of polar codes is to replace the block-structure polarization procedure by a convolutional structure. See \sec{conv} for details. Note indeed that each polarization step of a polar code consists of independent application of the polarization kernel on distinct blocks of $b$ bits (pairs of bits in the above example with $b=2$). Recently (\cite{ferris_branching_2014,ferris_convolutional_2017}), this idea was extended to a convolutional structure (see \fig{subfig_circ_2_2} and \fig{subfig_circ_3_4}), where each polarization step does not factor into a product of independent transformations on disjoint blocks but instead consists of $d$ layers of shifted block transformations. We refer to the number of layers $d$ as the {\em depth} of a code. An increasing depth offers the advantage of faster polarization and the drawback of an increased decoding complexity. 
The focus of the present work is to compare the trade-off between breadth and depth in terms of the speed at which the decoding error rate goes to zero and the decoding complexity. We focus on codes which have practically relevant sizes using Monte Carlo numerical simulations. \section{Decoding} In this section, the general successive cancellation decoding scheme is defined in terms of tensor networks. This enables a straightforward extension to convolutional polar codes. \subsection{Successive cancellation} \label{sec:SC} Define $G$ as the reversible encoding circuit acting on $N$ input bits and $N$ output bits. $K$ of these input bits take arbitrary values $u_i$ while the $N-K$ others are frozen to the value $u_i=0$. From this input $u_1^N$, the message $x_1^N = G u_1^N$ is transmitted. The channel produces the received message $y_1^N$, resulting in a composite channel \begin{align} W_G(y_1^N| u_1^N) = \prod_{i=1}^N W(y_i | (G u_1^N)_i). \label{eq:comp_channel} \end{align} This composite channel induces a correlated distribution on the bits $u_i$ and is represented graphically on \fig{subfig_comp_channel}. Successive cancellation decoding converts this composite channel into $N$ different channels given by \begin{align} W^{(i)}_G(y_1^N, u_{1}^{i-1}| u_i) = \sum_{u_{i+1}, \ldots u_{N}} W_G(y_1^N | u_1^N), \label{eq:eff_channel} \end{align} for $i = 1,2,\ldots N$. These channels are obtained by successively decoding symbols $u_1$ through $u_N$ (i.e., from right to left on \fig{fig_succ_dec}) by summing over all the bits that are not yet decoded and fixing the value of all the bits $u_1^{i-1}$, either to their frozen value, if the corresponding original input bit was frozen, or to their previously decoded value. This effective channel is represented graphically on \fig{subfig_eff_channel}.
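For small codes, the effective channels of \eq{eff_channel} can be evaluated by brute force directly from \eq{comp_channel}, which is a useful check on any faster implementation. A minimal Python sketch (toy parameters, not from the paper: a two-step, breadth-2 polar code of size $N=4$ on a BSC; each bit is decoded by picking its most likely value):

```python
import itertools
import numpy as np

F = np.array([[1, 0], [1, 1]])        # breadth-2 kernel over F_2
G = np.kron(F, F) % 2                 # encoding matrix of a 2-step polar code
N, p = 4, 0.1                         # BSC with flip probability p

def W_eff(i, y, u_prev):
    """Eq. (eff_channel) by brute force: sum the composite channel of
    Eq. (comp_channel) over the not-yet-decoded bits u_{i+1}, ..., u_N."""
    lik = np.zeros(2)
    for u_i in (0, 1):
        for tail in itertools.product((0, 1), repeat=N - i):
            u = np.array(u_prev + (u_i,) + tail)
            x = G @ u % 2
            lik[u_i] += np.prod(np.where(x == y, 1 - p, p))
    return lik

def sc_decode(y, frozen):
    """Successive cancellation: decode u_1, ..., u_N one bit at a time."""
    u = ()
    for i in range(1, N + 1):
        if i in frozen:
            u += (0,)                          # frozen bits are known to be 0
        else:
            u += (int(np.argmax(W_eff(i, y, u))),)
    return u

frozen = {1, 2, 3}                             # rate 1/4: only u_4 is free
x = G @ np.array([0, 0, 0, 1]) % 2             # encode u = (0, 0, 0, 1)
y = x.copy(); y[0] ^= 1                        # the channel flips one bit
print(sc_decode(y, frozen))                    # -> (0, 0, 0, 1)
```

The exponential cost of the inner sum is exactly what the tensor-network formulation below avoids.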
Given $W^{(i)}_G$, $u_i$ is decoded by maximizing the likelihood of the acquired information: \begin{align} u_i = \mathop{\text{argmax}}_{\tilde u_i\in\qty{0,1}} \, W^{(i)}_G(y_1^N, u_{1}^{i-1}| \tilde u_i). \label{eq_ml_decoder} \end{align} Applying this procedure for all bits from right to left yields the so-called \textit{successive cancellation decoder}. Equation \ref{eq_ml_decoder} can be generalized straightforwardly by decoding not a single bit $u_i$ at a time but instead a $w$-bit sequence $u_i^{i+w-1}$ jointly, collectively viewed as a single symbol from a larger alphabet of size $2^w$. To this effect, the decoding width $w$ is defined as the number of bits that are decoded simultaneously. \begin{figure}[!t] \centering \subfloat[]{ \centering \includegraphics[width=0.95in]{composite_channel.pdf} \label{fig:subfig_comp_channel}} \qquad \subfloat[]{ \centering \includegraphics[width=1.69in]{effective_channel.pdf} \label{fig:subfig_eff_channel}} \caption{Schematic representation of the successive cancellation decoder. \textbf{(a)} A composite channel is obtained from an encoding circuit $G$ and $N$ copies of a channel $W$. Contracting this tensor network for given $y_1^N$ and $u_1^N$ yields \eq{comp_channel}. \textbf{(b)} An effective channel is obtained from the composite channel by summing over all the values of bits $u_{i+1}^N$, graphically represented by the uniform tensor $e = \binom 11$, when decoding bit $u_i$. Contracting this tensor yields \eq{eff_channel} up to a normalization factor.} \label{fig:fig_succ_dec} \end{figure} \subsection{Decoding with tensor networks} \label{sec:TN} Convolutional polar codes were largely inspired by tensor network methods used in quantum many-body physics (see e.g. \cite{orus_practical_2014} and \cite{bridgeman_hand-waving_2017} for an introduction).
Akin to the graphical tools used in information theory (Tanner graph, factor graph, etc.), tensor networks were introduced as compact graphical representations of probability distributions (or amplitudes in quantum mechanics) involving a large number of correlated variables. Moreover, certain computational procedures are more easily cast using these graphical representations. This is the case for the successive cancellation decoding problem described above, where the goal is to compute $W^{(i)}_G(y_1^N, u_{1}^{i-1}| u_i)$ given fixed values of $y_1^N, u_{1}^{i-1}$. While $G$ is a linear transformation on $\mathbb F_2^N$, it is sometimes convenient to view it as a linear transformation on the space of probability distributions over $N$-bit sequences, i.e., the linear space $\mathbb R^{2^N}$ whose basis vectors are labeled by all possible $N$-bit strings. On this space, $G$ acts as a permutation matrix mapping basis vector $u_1^N$ to basis vector $x_1^N = Gu_1^N$. A single bit is represented in the state $0$ by $u=\binom 10$, in the state $1$ by $u=\binom 01$ and a bit string $u_1^N$ is represented by the $2^N$ dimensional vector $u_1^N = u_1\otimes u_2\otimes \ldots \otimes u_N$. A single bit channel is a $2\times 2$ stochastic matrix and a CNOT gate is given by \begin{equation} {\rm CNOT} = \left( \begin{array}{cccc} 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0 \end{array} \right), \label{eq:CNOT} \end{equation} because it permutes the inputs $01$ and $11$ while leaving the other inputs $00$ and $10$ unchanged. In this representation \begin{align} W^{(i)}_G&(y_1^N, u_{1}^{i-1}| u_i) = \nonumber \\ &\frac{1}{Z}[u_1 \otimes \ldots \otimes u_{i-1} \otimes u_i \otimes e^{\otimes(N-i)}]^T G W^{\otimes N} y_1^N, \label{eq:Wi} \end{align} where $e = \binom 11$ and $Z = \sum_{u_i \in \qty{0,1}} W^{(i)}_G(y_1^N, u_{1}^{i-1}| u_i)$ is a normalization factor.
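This representation is easy to verify numerically: building the permutation matrix on $\mathbb R^4$ of the CNOT gate $(u_1,u_2)\rightarrow(u_1\oplus u_2,u_2)$ defined in the introduction, the unnormalized version of \eq{Wi} must reproduce \eq{bad_channel} and \eq{good_channel} for $N=2$. A Python sketch:

```python
import itertools
import numpy as np

e = np.array([1.0, 1.0])                       # uniform marginalization vector
basis = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

# Permutation matrix on R^4 of the gate (u1, u2) -> (u1 XOR u2, u2), built
# directly from the gate; kron convention: u1 is the most significant bit.
G = np.zeros((4, 4))
for u1, u2 in itertools.product((0, 1), repeat=2):
    G[2 * (u1 ^ u2) + u2, 2 * u1 + u2] = 1.0

p = 0.1
W = np.array([[1 - p, p], [p, 1 - p]])         # W[x, y] = W(y | x) for a BSC

def rhs(v1, v2, y1, y2):
    """[v1 (x) v2]^T G W^{(x)2} [y1 (x) y2]: Eq. (Wi) before normalization."""
    return np.kron(v1, v2) @ G @ np.kron(W[:, y1], W[:, y2])

for y1, y2, u1, u2 in itertools.product((0, 1), repeat=4):
    # good channel, Eq. (good_channel): u1 already decoded, u2 being decoded
    assert np.isclose(rhs(basis[u1], basis[u2], y1, y2),
                      W[u2, y2] * W[u1 ^ u2, y1])
    # bad channel, Eq. (bad_channel): u2 marginalized with e = (1, 1)^T
    assert np.isclose(rhs(basis[u1], e, y1, y2),
                      sum(W[v, y2] * W[u1 ^ v, y1] for v in (0, 1)))
print("Eq. (Wi) reproduces the bad and good channels for the CNOT kernel")
```

The same construction extends to any breadth-$b$ kernel by building the corresponding $2^b\times 2^b$ permutation matrix.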
Ignoring normalization, this quantity can be represented graphically as a tensor network (see \fig{subfig_eff_channel}), where each element of the network is a rank-$r$ tensor, i.e., an element of $\mathbb R^{2^r}$. Specifically, a bit $u_i$ is a rank-one tensor, a channel $W$ is a rank-two tensor, and a two-bit gate is a rank-four tensor (two input bits and two output bits). The CNOT gate is obtained by reshaping \eq{CNOT} into a $(2 \times 2 \times 2 \times 2)$ tensor. In this graphical representation, a rank-$r$ tensor $A_{\mu_1,\mu_2,\ldots \mu_r}$ is represented by a degree-$r$ vertex, with one edge associated to each index $\mu_k$. An edge connecting two vertices means that the shared index is summed over \begin{equation} \includegraphics[width=6.5cm]{TNC},\label{eq:TNC} \end{equation} generalizing the notion of vector and matrix product to higher rank tensors. Tensors can be assembled into a network where edges represent input-output relations just like in an ordinary logical circuit representation. Evaluating \eq{Wi} then amounts to summing over edge values. This computational task, named {\em tensor contraction}, generally scales exponentially with the tree-width of the tensor network \cite{arad_quantum_2010}. The graphical calculus becomes valuable when using circuit identities that simplify the tensor network. Specifically, these identities encode two simple facts illustrated on \fig{tn_simp}: a permutation $G$ acting on the uniform distribution returns the uniform distribution $ G e^{\otimes t} = e^{\otimes t}$, and a permutation acting on a basis vector returns another basis vector $ Gx_1^N = y_1^N$. Once these circuit identities are applied to the evaluation of \eq{Wi} in the specific case of polar codes, it was shown in \cite{ferris_branching_2014,ferris_convolutional_2017} that the resulting tensor network is a tree, so it can be efficiently evaluated. 
Convolutional polar codes were introduced based on the observation that \eq{Wi} produces a tensor network of constant tree-width despite not being a tree (see \fig{causal_width}), an observation first made in the context of quantum many-body physics \cite{evenbly_class_2014}, so they can also be decoded efficiently. \begin{figure}[!t] \centering \subfloat[]{ \centering \includegraphics[width=1.57in]{tn_simp_e.pdf} \label{fig:tn_simp_e}} \quad \subfloat[]{ \centering \includegraphics[width=1.57in]{tn_simp_xy.pdf} \label{fig:tn_simp_xy}} \caption{Circuit identities. \textbf{(a)} Any permutation acting on the uniform distribution returns the uniform distribution. \textbf{(b)} Any contraction of a permutation and a basis vector $x_1^t$ gives another basis vector $y_1^t$.} \label{fig:tn_simp} \end{figure} \section{Polar code generalizations} \label{sec:generalizations} In this section, two possible generalizations of polar codes are described and their decoding complexity is analyzed. \subsection{Breadth} \label{sec:kernels} Channel polarization can be achieved using various kernels. In fact, as long as a kernel is not a permutation matrix on $\mathbb F_2^b$, it achieves a non-trivial polarization transform \cite{korada_polar_2010-1}. The CNOT gate is one such example that acts on two bits. However, a general kernel of breadth $b$ can act on $b$ bits (see \fig{subfig_circ_3_1} for an illustration with $b=3$). An increasing breadth can produce faster polarization, i.e. a decoding error probability which decreases faster with the number of polarization steps. Indeed, in the asymptotic regime, Arikan \cite{arikan_channel_2009} showed that provided the code rate is below the symmetric channel capacity and that the locations of the frozen bits are chosen optimally, the asymptotic decoding error probability of the polar code under successive cancellation decoding is $\mathbb P_e \in \mathcal O\qty(2^{-N^{1/2}})$.
A different error scaling exponent $\mathbb P_e \in \mathcal O\qty(2^{-N^{\beta}})$ can be achieved from a broader kernel, but breadth 16 is required to asymptotically surpass $\beta=\frac 12$ \cite{korada_polar_2010-1}. Such a broad polarization kernel has the drawback of a substantially increased decoding complexity. Arikan \cite{arikan_channel_2009} showed that the decoding complexity of polar codes is $\mathcal O\qty(N \log_2 N)$. From a tensor network perspective, this complexity can be understood \cite{ferris_convolutional_2017} by counting the number of elementary contractions required to evaluate \eq{Wi} and by noting that the tensor networks corresponding to \eq{Wi} for $u_i$ and for $u_{i+1}$ differ only on a fraction $1/\log_2 N$ of locations, so most intermediate calculations can be recycled and incur no additional complexity. As discussed previously, a breadth-$b$ polarization kernel can also be represented as a $2^b\times 2^b$ permutation matrix that acts on $\mathbb R^{2^b}$. Applying such a matrix to a $b$-bit probability distribution has complexity $2^b$, and this dominates the complexity of each elementary tensor operation of the successive cancellation decoding algorithm. On the other hand, the total number of bits after $l$ polarization steps with a breadth-$b$ polarization kernel is $N = b^l$, so the overall decoding complexity in this setting is $\mathcal O\qty(2^b N \log_b N)$. \subsection{Depth} \label{sec:conv} The previous section described a natural generalization of polar codes which uses a broader polarization kernel. A further generalization, first explored in \cite{ferris_branching_2014,ferris_convolutional_2017}, is to use a polarization step whose circuit is composed of $b$-local gates and has depth $d>1$ (see \fig{subfig_circ_2_2}), which results in a convolutional transformation.
A $\text{CP}_{b,d}$ code, that is, a convolutional polar code with kernel breadth $b$ and circuit depth $d$, is defined similarly to a polar code with a kernel of size $b$ where each polarization step is replaced by a stack of $d$ polarization layers, each shifted relative to the previous layer. \fig{subfig_circ_2_2} and \fig{subfig_circ_3_4} illustrate two realizations of convolutional polar codes. To analyze the decoding complexity, it is useful to introduce the concept of a causal cone. Given a circuit and a $w$-bit input sequence $u_i^{i+w-1}$, the associated causal cone is defined as the set of gates together with the set of edges of this circuit whose bit value depends on the value of $u_i^{i+w-1}$. Figure \ref{fig:causal_width} illustrates the causal cone of the sequence $u_{11}^{13}$ for the code $\text{CP}_{2,2}$. Given a convolutional code's breadth $b$ and depth $d$, define $m(d,b,w)$ to be the maximum number of gates in the causal cone of any $w$-bit input sequence of a single polarization step. Because a single convolutional step consists of $d$ layers, define $m_s(d,b,w)$ as the number of those gates in the causal cone which are in the $s$-th layer (counting from the top) of the convolution. For the first layer, we have $m_1(d,b,w) = \lceil\frac{w-1}b\rceil +1$. This number can at most increase by one for each layer, i.e., $m_{s+1}(d,b,w) \leq m_{s}(d,b,w)+1$, leading to a total number of gates in the causal cone of a single polarization step \begin{align} m(d,b,w) &= \sum_{s=1}^d m_s(d,b,w) \leq dm_1(d,b,w)+\frac{d(d-1)}2 \nonumber \\ &= d\left\lceil\frac{w-1}b\right\rceil +\frac{d(d+1)}2. \label{eq:max_gates} \end{align} Similarly, define the optimal decoding width $w^*(b,d)$ as the smallest value of $w$ for which the causal cone of any $w$-bit sequence after one step of polarization contains at most $bw$ output bits.
Figure \ref{fig:causal_width} illustrates that $w^* = 3$ for a $\text{CP}_{2,2}$ code since any 3 consecutive input bits affect at most 6 consecutive bits after one polarization step. Choosing a decoding width $w^*(b,d)$ thus leads to a recursive decoding procedure which is identical at all polarization steps. Since the bottom layer contains $m_d(d,b,w) \leq \lceil\frac{w-1}b\rceil +d$ gates, each acting on $b$ bits, we see that there are at most $b\lceil\frac{w-1}b\rceil +db\leq w+db$ output bits in the causal cone of a single polarization step. The optimal decoding width $w^*$ is chosen such that this number does not exceed $bw^*$, thus \begin{equation} w^*(b,d) \leq \frac b{b-1}d. \label{eq:max_w} \end{equation} Using this optimal value in \eq{max_gates} bounds the number of rank-$b$ tensors that are contracted at each polarization layer, and each contraction has complexity $2^b$. Here again, only a fraction $1/\log_b N$ of these contractions differ at each step of successive cancellation decoding, leading to an overall decoding complexity \begin{align} C_{b,d}(N) = 2^b \frac{m(b,d, w^*)}{w^*} N \log_b N\in \mathcal O(2^b d N \log_b N). \label{eq:complexity} \end{align} Ref. \cite{ferris_convolutional_2017} provides analytical arguments that the resulting convolutional polar codes have a larger asymptotic error exponent $\beta>\frac 12$, and presents numerical results showing clear performance improvement over standard polar codes at finite code lengths. These advantages come at the cost of a small, constant increase in decoding complexity. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{causal_width.pdf} \caption{Graphical representation of the causal cone of $u_{11}^{13}$ in the $\text{CP}_{2,2}$ code. Only the gates in the shaded region receive inputs that depend on the sequence $u_{11}^{13}$. Similarly, the edges contained in the shaded region represent bits at intermediate polarization steps whose value depends on sequence $u_{11}^{13}$.
This shows that decoding a $\text{CP}_{2,2}$ code amounts to contracting a constant tree-width graph. The optimal width is $w^* = 3$, and at most $m(2,2,w^*) = 5$ gates are involved per polarization step.} \label{fig:causal_width} \end{figure} \section{Simulation results} \label{sec:results} Numerical simulations were performed to analyze the performance of codes with breadth and depth up to 4. The breadth-2 kernel used was the CNOT, while the breadth-3 and breadth-4 kernels were \begin{align*} &G_3 = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ \end{pmatrix}, &&G_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}, \end{align*} where these are given as representations over $\mathbb F_2^b$. It can easily be verified that these transformations are not permutations, so they can in principle be used to polarize \cite{korada_polar_2010-1}. Also, we choose a convolutional structure where each layer of gates is identical but shifted by one bit to the right (from top to bottom), cf. \fig{subfig_circ_3_4}. Many other kernels and convolutional structures have been simulated, but these gave the best empirical results. The encoding circuit $G$ is used to define the code, but the complete definition of a polar code must also specify the set of frozen bits $\mathcal F$, i.e. the set of bits that are fixed to $u_i=0$ at the input of the encoding circuit $G$. In general, for a given encoding circuit $G$, the set $\mathcal F$ will depend on the channel and is chosen to minimize the error probability under successive cancellation decoding. Here, a simplified channel selection procedure, based on the error detection criterion described in the next section, was used. All the simulations presented focus on the binary symmetric memoryless channel.
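Since the comparisons below are made at roughly fixed code size but varying breadth and depth, it is useful to tabulate the decoding costs implied by \eq{max_gates}, \eq{max_w} and \eq{complexity}. A short Python sketch (the choice $N=b^l\approx 10^3$ mirrors the simulations; the kernels themselves play no role here):

```python
import math

def m_bound(d, b, w):
    """Eq. (max_gates): bound on the number of gates in the causal cone of a
    w-bit input sequence for one polarization step of a CP_{b,d} code."""
    return d * math.ceil((w - 1) / b) + d * (d + 1) // 2

def w_star_bound(b, d):
    """Eq. (max_w): bound on the optimal decoding width w*(b, d)."""
    return b * d / (b - 1)

def complexity(b, d, N):
    """Leading-order decoding cost of Eq. (complexity): 2^b d N log_b N."""
    return 2 ** b * d * N * math.log(N, b)

# CP_{2,2}: the width bound gives w* <= 4 (the actual value is 3), and the
# causal cone of one step then holds at most m(2, 2, 3) = 5 gates.
assert w_star_bound(2, 2) == 4 and m_bound(2, 2, 3) == 5

# Codes of roughly equal size N = b^l ~ 10^3, as in the simulations below:
for b in (2, 3, 4):
    l = {2: 10, 3: 6, 4: 5}[b]          # 2^10 = 1024, 3^6 = 729, 4^5 = 1024
    for d in (1, 2, 3, 4):
        print(f"b={b} d={d} N={b**l:4d}  cost ~ {complexity(b, d, b**l):9.0f}")
```

Doubling the depth doubles the leading-order cost, whereas moving from breadth 2 to breadth 4 at fixed $N$ multiplies the $2^b$ factor by 4 while only halving the number of polarization steps, again a factor-2 net increase; the error-suppression gains of the two options, however, turn out to be very different.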
\subsection{Error detection} \label{sec:ED} Considering an error detection setting enables an important simplification in which the channel selection and code simulation can be performed simultaneously without sampling. In this setting, we consider that a transmission error $x_1^N \rightarrow y_1^N = x_1^N + \vb e$ is not detected if there exists a non-frozen bit $u_i$, $i\in \mathcal F^c$ which is flipped while none of the frozen bits to its right $u_j$, $j<i$, $j\in \mathcal F$ have been flipped. In other words, an error is considered not detected if its first error location (starting from the right) occurs on a non-frozen bit. Note that this does not correspond to the usual definition of an undetectable error which would be conventionally defined as an error which affects no frozen locations. By considering only frozen bits to the right of a given location, the notion used is tailored to the context of a sequential decoder. Empirically, it was observed that this simple notion is a good proxy to compare the performance of different coding schemes under more common settings. Denote by $\mathbb P_U(i)$ the probability that the symbol $u_i$ is the first such undetected error. Then, given a frozen bit set $\mathcal F$, the probability of an undetected error is $\mathbb P_U = \sum_{i\in \mathcal F^c} \mathbb P_U(i)$. This can be evaluated efficiently using the representation of the encoding matrix over $\mathbb R^{2^N}$ as above. For $\vb e \in \mathbb F_2^N$, denote by $\mathbb P(\vb e)$ the probability of a bit-flip pattern $\vb e$, viewed as a vector on $\mathbb R^{2^N}$. At the output of the symmetric channel with error probability $p$, $\mathbb P^T = (1-p,p)^{\otimes N}$. Then \begin{equation} \mathbb P_U(i) = (1-p,p)^{\otimes N} G \binom 10^{\otimes i-1} \otimes \binom 01\otimes e^{\otimes(N-i)}, \label{eq:detection} \end{equation} where here again $e = \binom 11$.
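For a toy code, this quantity can be checked by explicit enumeration rather than by tensor contraction. The Python sketch below does this for a hypothetical two-step, breadth-2 polar code ($N=4$, not one of the simulated code sizes) on the BSC($1/4$): it enumerates input flip patterns, accumulates $\mathbb P_U(i)$, verifies that every error pattern is counted exactly once, and freezes the positions with the largest $\mathbb P_U(i)$ (here with $K=2$ for illustration).

```python
import itertools
import numpy as np

F = np.array([[1, 0], [1, 1]])
G = np.kron(F, F) % 2                 # N = 4 polar encoding matrix; G^{-1} = G
N, p = 4, 0.25                        # BSC(1/4)

P_U = np.zeros(N + 1)                 # P_U[i]: first flipped input bit is u_i
P_clean = 0.0
for flips in itertools.product((0, 1), repeat=N):
    delta = np.array(flips)           # input-bit flip pattern, u -> u + delta
    err = G @ delta % 2               # corresponding channel error e = G delta
    prob = np.prod(np.where(err == 1, p, 1 - p))
    first = next((i for i, d in enumerate(flips, start=1) if d), None)
    if first is None:
        P_clean = prob                # no input bit flipped at all
    else:
        P_U[first] += prob
assert np.isclose(P_U.sum() + P_clean, 1.0)   # each pattern counted once

# freeze the N - K positions with the largest P_U(i); K = 2 here
K = 2
order = sorted(range(1, N + 1), key=lambda i: -P_U[i])
frozen = set(order[:N - K])
print(frozen, P_U[order[N - K:]].sum())       # -> {1, 2} 0.24609375
```

The remaining sum over the non-frozen positions is exactly the undetected-error probability plotted in the results below; the enumeration costs $2^N$, which the tensor-network evaluation reduces as discussed next.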
In terms of tensor networks, this corresponds to the evaluation of the network of \fig{subfig_eff_channel} with $u_i = 1$ and $u_j=0$ for all $j<i$. Thus, this can be accomplished with complexity given by \eq{complexity}. Because the evaluation of \eq{detection} is independent of the set of frozen bits, it can be evaluated for all positions $i$, and the frozen locations can then be selected as the $N-K$ locations $i$ with the largest values of $\mathbb P_U(i)$. Then, the total undetected error probability is the sum of the $\mathbb P_U(i)$ over the remaining locations. This is equivalently the sum of the $K$ smallest values of $\mathbb P_U(i)$. \begin{figure*}[!th] \centering \subfloat[]{ \centering \includegraphics[width=2.25in]{results_decoding.pdf} \label{fig:subfig_erasure_all}} \subfloat[]{ \centering \includegraphics[width=2.25in]{results_erasure.pdf} \label{fig:subfig_erasure_complexity}} \subfloat[]{ \centering \includegraphics[width=2.25in]{results_flip.pdf} \label{fig:subfig_flip_complexity}} \quad \caption{Numerical simulation results. \textbf{(a)} Undetected error probability under successive cancellation decoding for polar codes ($d=1$) and convolutional polar codes ($d>1$) for various kernel breadths $b$, plotted as a function of code size $N=b^l$ by varying the number of polarization steps $l$. The channel is BSC($1/4$) and the encoding rate is $1/3$. \textbf{(b)} Same as (a), but plotted as a function of their decoding complexity, cf. \eq{complexity}. The number of polarization steps $l$ is chosen in such a way that all codes are roughly of equal size $N(b,l)=b^l\approx 10^3$. The dots connected by a line all have the same kernel breadth $b$ but show a progression of depth $d=1,2,3,4$, with $d=1$ appearing on the left, corresponding to regular polar codes. \textbf{(c)} The bit error rate for a BSC($1/20$) with a $1/3$ encoding rate plotted as a function of the decoding complexity. The depth is specified similarly to (b) by the connected dots.
The number of polarization steps is chosen to have roughly $N \approx 250$ bits.} \label{fig:results} \end{figure*} The results are shown on \fig{subfig_erasure_all} for various combinations of kernel breadths $b$ and convolutional depth $d$. The code rate was $\frac 13$, meaning that the plotted quantity is the sum of the $N/3$ smallest values of $\mathbb P_U(i)$. \fig{subfig_erasure_complexity} presents a subset of the same data with parameters $b$ and $l$ resulting in codes of roughly equal size $N = b^l\approx 10^3$. This implies that codes with larger breadth use fewer polarization steps. The undetected error probability $\mathbb P_U$ is then plotted against the decoding complexity, computed from \eq{complexity}. Notice that increasing the depth is a very efficient way of suppressing errors with a modest complexity increase. In contrast, increasing the breadth actually deteriorates the performance of these finite-size codes and increases the decoding complexity. \subsection{Error correction} \label{sec:EC} For the symmetric channel, the frozen bits were chosen using the error detection procedure described in the previous section. This is not optimal, but it is sufficient for the sake of comparing different code constructions. Then, standard Monte Carlo simulations were done by transmitting the all-0 codeword, sampling errors, decoding with successive cancellation, and comparing the decoded message to the input. The results are presented in \fig{subfig_flip_complexity}. The conclusions drawn from the error detection simulations all carry over to this more practically relevant setting. \section{Conclusion} \label{sec:conclusion} We numerically explored a generalization of the polar code family based on a convolutional polarization kernel given by a finite-depth local circuit.
At practically relevant code sizes, it was found that these convolutional kernels offer a very interesting error-suppression {\em vs} decoding-complexity trade-off compared to previously proposed polar code generalizations using broad kernels. Empirically, no incentive was found to increase both the breadth and the depth: increasing the depth alone offers greater noise suppression at a comparable complexity increase. It will be interesting to see what further gains can be achieved, for instance, from list decoding of convolutional polar codes. \section*{Acknowledgment} This work was supported by Canada's NSERC and Québec's FRQNT. Computations were done using Compute Canada and Calcul Québec clusters. \bibliographystyle{IEEEtran}
\section{Introduction} Given a smooth algebraic variety $X$ over a field of characteristic zero, we have the Hodge-to-de Rham spectral sequence $E_1^{p,q}=H^q(X,\Omega^p_X)\Rightarrow H^{p+q}_{DR}(X).$ It is classically known that when $X$ is additionally proper, this spectral sequence degenerates at $E_1,$ that is, all differentials vanish. This follows from the classical Hodge theory for compact K\"ahler manifolds, and can also be proved algebraically \cite{DI}. We recall the following fundamental result of Kaledin \cite{Ka}, see also \cite{M} for a different proof. \begin{theo}\label{th:Kaledin_degen}\cite[Theorem 5.4]{Ka} Let $A$ be a smooth and proper DG algebra. Then the Hochschild-to-cyclic spectral sequence degenerates, so that we have an isomorphism $HP_{\bullet}(A)=HH_{\bullet}(A)((u)).$\end{theo} Here $u$ denotes a variable of degree $2.$ When applied to $\operatorname{Perf}(A)\simeq \operatorname{Perf}(X)$ for a smooth and proper variety $X,$ Theorem \ref{th:Kaledin_degen} gives exactly the classical Hodge-to-de Rham degeneration. In this paper we study some generalizations of Hodge-to-de Rham degeneration to DG categories which are not smooth and proper. Recall that for a proper DG algebra $B$ one has a pairing $HH_{\bullet}(B)\otimes HH_{\bullet}(B^{op})\to \mathrm k,$ introduced by Shklyarov \cite{S}. Kontsevich \cite{Ko} proposed the following generalization of Theorem \ref{th:Kaledin_degen}. \begin{conj}\label{conj:degeneration_for_proper_intro} Let $B$ be a proper DG algebra. Then the composition map \begin{equation}\label{eq:composition_for_proper}(HH_{\bullet}(B)\otimes HC_{\bullet}(B^{op}))[1]\xrightarrow{\operatorname{id}\otimes\delta} HH_{\bullet}(B)\otimes HH_{\bullet}(B^{op})\to\mathrm k\end{equation} is zero.\end{conj} Kontsevich also proposed a ``dual'' version of Conjecture \ref{conj:degeneration_for_proper_intro} for smooth DG algebras. \begin{conj}\label{conj:degeneration_for_smooth_intro} Let $A$ be a smooth DG algebra.
Then the composition $$K_0(A\otimes A^{op})\xrightarrow{\operatorname{ch}} (HH_{\bullet}(A)\otimes HH_{\bullet}(A^{op}))_0\xrightarrow{\operatorname{id}\otimes\delta} (HH_{\bullet}(A)\otimes HC^-_{\bullet}(A^{op}))_1$$ vanishes on the class $[A]$ of the diagonal bimodule.\end{conj} The motivation for Conjectures \ref{conj:degeneration_for_proper_intro} and \ref{conj:degeneration_for_smooth_intro} is explained in Propositions \ref{prop:smooth_comp_implies_smooth} and \ref{prop:cat_res_implies_prop} below. Here we mention that the results of \cite{KL} imply that Conjecture \ref{conj:degeneration_for_proper_intro} holds for proper DG algebras of algebro-geometric origin: that is, for DG algebras of the form $B={\mathbf R}\operatorname{End}({\mathcal F}),$ where ${\mathcal F}\in\operatorname{Perf}_Z(X)$ is a perfect complex on a separated scheme $X$ of finite type over $\mathrm k,$ supported on a {\it proper} closed subscheme $Z\subset X.$ Similarly, the (weak version of the) results of \cite{E2} imply that Conjecture \ref{conj:degeneration_for_smooth_intro} holds for smooth DG algebras of the form ${\mathbf R}\operatorname{End}({\mathcal G}),$ where ${\mathcal G}\in D^b_{coh}(X)$ is a generator of the category $D^b_{coh}(X).$ There is a closely related question formulated by B. To\"en \cite{To1}. \begin{ques}\label{ques:Toen} Is it true that any homotopically finitely presented DG category ${\mathcal B}$ is quasi-equivalent to a quotient ${\mathcal A}/{\mathcal S},$ where ${\mathcal A}$ is smooth and proper, and ${\mathcal S}\subset{\mathcal A}$ is a full subcategory?\end{ques} Such a quotient presentation of ${\mathcal B}$ is called a smooth categorical compactification. In this paper we disprove both Conjectures \ref{conj:degeneration_for_proper_intro} and \ref{conj:degeneration_for_smooth_intro}. As an application, we give a negative answer to Question \ref{ques:Toen}.
The starting point for our counterexamples is to disprove the main conjecture of \cite{E}, see Section \ref{sec:disproving_very_general}. A counterexample to Conjecture \ref{conj:degeneration_for_smooth_intro} is obtained in Section \ref{sec:disproving_version_for_smooth}. It is deduced from the results of Section \ref{sec:disproving_very_general} by a trick. Finally, a counterexample to Conjecture \ref{conj:degeneration_for_proper_intro} is obtained in Section \ref{sec:disproving_for_proper}. It is deduced from our new result on nilpotent elements in the cohomology of a DG algebra (Theorem \ref{th:nilpotence_and_factorization}), which is of independent interest. In particular, we obtain an example of a proper DG algebra $B$ such that the DG category $\operatorname{Perf}(B)$ cannot be fully faithfully embedded into a saturated DG category. That is, it does not have a categorical resolution of singularities in the terminology of \cite{KL}. Section \ref{sec:disproving_for_proper} can be read independently of Sections \ref{sec:disproving_very_general} and \ref{sec:disproving_version_for_smooth}. {\noindent{\bf Acknowledgements.}} I am grateful to Dmitry Kaledin, Maxim Kontsevich and Bertrand To\"en for useful discussions. \section{Preliminaries on DG categories and $A_{\infty}$-algebras} \subsection{DG categories} For an introduction to DG categories, we refer the reader to \cite{Ke}. The references for DG quotients are \cite{Dr, Ke2}. For the model structures on DG categories we refer to \cite{Tab1, Tab2}, and for a general introduction to model categories we refer to \cite{Ho}. Everything will be considered over some base field $\mathrm k.$ Mostly we will consider DG categories up to a quasi-equivalence. By a functor between DG categories we sometimes mean a quasi-functor. In some cases it is convenient for us to choose a concrete DG model or a concrete DG functor.
By a commutative diagram of functors we usually mean the commutative diagram in the homotopy category $\operatorname{Ho}(\operatorname{dgcat}_{\mathrm k}).$ Finally, we denote by $\operatorname{Ho}_M(\operatorname{dgcat}_{\mathrm k})$ the Morita homotopy category of DG categories (with inverted Morita equivalences). All modules are assumed to be right unless otherwise stated. For a small DG category ${\mathcal C}$ and a ${\mathcal C}$-module $M,$ we denote by $M^{\vee}$ the ${\mathcal C}^{op}$-module $\operatorname{Hom}_{{\mathcal C}}(M,{\mathcal C}).$ We denote by $M^*$ the ${\mathcal C}^{op}$-module $\operatorname{Hom}_{{\mathcal C}}(M,\mathrm k).$ Given a small DG category ${\mathcal C},$ we denote by $D({\mathcal C})$ its derived category of DG ${\mathcal C}$-modules. This is a compactly generated triangulated category. We denote by $D_{\operatorname{perf}}({\mathcal C})$ the full triangulated subcategory of perfect ${\mathcal C}$-modules. It coincides with the subcategory of compact objects. Recall from \cite{TV} that a ${\mathcal C}$-module $M$ is pseudo-perfect if for each $x\in{\mathcal C},$ the complex $M(x)$ is perfect over $\mathrm k$ (that is, $M(x)$ has finite-dimensional total cohomology). We denote by $D_{\operatorname{pspe}}({\mathcal C})\subset D({\mathcal C})$ the full triangulated subcategory of pseudo-perfect ${\mathcal C}$-modules. For any DG category ${\mathcal C},$ we denote by $[{\mathcal C}]$ its (non-graded) homotopy category, which has the same objects as ${\mathcal C},$ and the morphisms are given by $[{\mathcal C}](x,y)=H^0({\mathcal C}(x,y)).$ We use the terminology of \cite[Definition 2.4]{TV} by calling ${\mathcal C}$ triangulated if the Yoneda embedding provides an equivalence $[{\mathcal C}]\xrightarrow{\sim}D_{\operatorname{perf}}({\mathcal C}).$ In this case $[{\mathcal C}]$ is a Karoubi complete triangulated category.
We denote by $\operatorname{Mod}_{{\mathcal C}}$ the DG category of cofibrant DG ${\mathcal C}$-modules in the projective model structure (these are exactly the direct summands of semifree DG ${\mathcal C}$-modules). We have $D({\mathcal C})\simeq [\operatorname{Mod}_{{\mathcal C}}].$ We denote by ${\mathbf Y}:{\mathcal C}\hookrightarrow\operatorname{Mod}_{{\mathcal C}}$ the standard Yoneda embedding given by ${\mathbf Y}(x)={\mathcal C}(-,x).$ We write $\operatorname{Perf}({\mathcal C})\subset \operatorname{Mod}_{{\mathcal C}}$ (resp. $\operatorname{PsPerf}({\mathcal C})\subset\operatorname{Mod}_{{\mathcal C}}$) for the full DG subcategory of perfect (resp. pseudo-perfect) ${\mathcal C}$-modules. For a DG functor $\Phi:{\mathcal C}_1\to{\mathcal C}_2$ between small DG categories, we denote by ${\mathbf L}\Phi^*:D({\mathcal C}_1)\to D({\mathcal C}_2)$ the derived extension of scalars functor. Its right adjoint functor (restriction of scalars) is denoted by $\Phi_*:D({\mathcal C}_2)\to D({\mathcal C}_1).$ We also recall from \cite[Definition 3.6]{T} that a ${\mathcal C}$-module is called quasi-representable if it is quasi-isomorphic to a representable ${\mathcal C}$-module. For two DG categories ${\mathcal C},{\mathcal C}',$ a ${\mathcal C}\otimes{\mathcal C}'$-module $M$ is called right quasi-representable if for each object $x\in{\mathcal C},$ the ${\mathcal C}'$-module $M(x,-)$ is quasi-representable. We denote by ${\mathbf R}\underline{\operatorname{Hom}}({\mathcal C},{\mathcal C}')\subset\operatorname{Mod}_{{\mathcal C}^{op}\otimes{\mathcal C}'}$ the full subcategory of right quasi-representable ${\mathcal C}^{op}\otimes{\mathcal C}'$-modules. By \cite[Theorem 6.1]{T}, this DG category (considered up to a quasi-equivalence) is actually the internal Hom in the homotopy category of DG categories $\operatorname{Ho}(\operatorname{dgcat}_{\mathrm k})$ (with inverted quasi-equivalences).
We have a natural quasi-functor $\operatorname{Fun}({\mathcal C},{\mathcal C}')\to {\mathbf R}\underline{\operatorname{Hom}}({\mathcal C},{\mathcal C}'),$ where $\operatorname{Fun}({\mathcal C},{\mathcal C}')$ is the naive DG category of DG functors ${\mathcal C}\to{\mathcal C}',$ as defined in \cite{Ke}. Moreover, if ${\mathcal C}$ is cofibrant, this functor is essentially surjective on the homotopy categories. A small DG category ${\mathcal C}$ is called smooth (resp. locally proper) if the diagonal ${\mathcal C}\mhyphen{\mathcal C}$-bimodule is perfect (resp. pseudo-perfect). Moreover, ${\mathcal C}$ is called proper if it is locally proper and is Morita equivalent to a DG algebra (i.e. the triangulated category $D_{\operatorname{perf}}({\mathcal C})$ has a classical generator). We recall the notion of a short exact sequence of DG categories. \begin{defi}\label{defi:short_exact_dg}A pair of functors ${\mathcal A}_1\xrightarrow{F_1}{\mathcal A}_2\xrightarrow{F_2}{\mathcal A}_3$ is said to be a (Morita) short exact sequence of DG categories if the following conditions hold: $\rm{i)}$ the composition $F_2F_1$ is homotopic to zero; $\rm ii)$ the functor $F_1$ is quasi-fully-faithful; $\rm iii)$ the induced quasi-functor $\overline{F_2}:{\mathcal A}_2/F_1({\mathcal A}_1)\to {\mathcal A}_3$ is a Morita equivalence. \end{defi} In particular, a short exact sequence of DG categories induces a long exact sequence of K-groups, where $K_{\bullet}({\mathcal A})$ is the Waldhausen K-theory \cite{W} of the Waldhausen category of cofibrant perfect ${\mathcal A}$-modules. We will in fact need only the boundary map $K_1({\mathcal A}_3)\to K_0({\mathcal A}_1).$ \subsection{$A_{\infty}$-algebras and $A_{\infty}$-(bi)modules} All the definitions and constructions regarding DG categories which are invariant under quasi-equivalences can be translated into the world of $A_{\infty}$-categories. For an introduction to $A_{\infty}$-categories and algebras see \cite{L-H, Ke3, KS}.
It will be sufficient for us to work with $A_{\infty}$-algebras (that is, $A_{\infty}$-categories with a single object). In order to write down the signs in formulas it is convenient to adopt the following {\noindent {\bf Notation.}} {\it For a collection of homogeneous elements $a_0,\dots,a_n$ of a graded vector space $A,$ and $0\leq p,q\leq n,$ we put $$l_p^q(a)=\begin{cases}|a_p|+\dots+|a_q|+q-p+1 & \text{ if }p\leq q;\\ |a_p|+\dots+|a_n|+|a_0|+\dots+|a_q|+n-p+q & \text{ if }p>q.\end{cases}$$ If the collection starts with $a_1$ (and there is no $a_0$) we only use $l_p^q(a)$ for $1\leq p\leq q\leq n.$} \begin{defi}A non-unital $A_{\infty}$-structure on a graded vector space $A$ is a sequence of multilinear operations $\mu_n=\mu_n^A:A^{\otimes n}\to A,$ where $\deg(\mu_n)=2-n,$ satisfying the following relations: \begin{equation}\label{eq:A_infty_rels}\sum\limits_{i+j+k=n+1}(-1)^{l_1^i(a)}\mu_{i+k+1}(a_1,\dots,a_i,\mu_j(a_{i+1},\dots,a_{i+j}),a_{i+j+1},\dots,a_n)=0,\end{equation} for $n\geq 0.$\end{defi} \begin{remark}In our sign convention, a non-unital DG algebra $B$ can be considered as an $A_{\infty}$-algebra, with $\mu_1(b)=-d(b),$ $\mu_2(b_1,b_2)=(-1)^{|b_1|}b_1b_2,$ and $\mu_{\geq 3}=0.$\end{remark} \begin{defi}A non-unital $A_{\infty}$-morphism $f:A\to B$ is given by a sequence of linear maps $f_n:A^{\otimes n}\to B,$ where $\deg(f_n)=1-n,$ satisfying the following relations: \begin{multline}\label{eq:A_infty_morphism}\sum\limits_{i_1+\dots+i_k=n}\mu_k^B(f_{i_1}(a_1,\dots,a_{i_1}),\dots,f_{i_k}(a_{i_1+\dots+i_{k-1}+1},\dots,a_n))=\\ \sum\limits_{i+j+k=n}(-1)^{l_1^i(a)}f_{i+k+1}(a_1,\dots,a_i,\mu_j^A(a_{i+1},\dots,a_{i+j}),a_{i+j+1},\dots,a_n).\end{multline}\end{defi} Given an $A_{\infty}$-algebra $A,$ one defines the $A_{\infty}$-algebra $A^{op}$ as follows: it is equal to $A$ as a graded vector space, and we have
$$\mu_n^{A^{op}}(a_1,\dots,a_n)=(-1)^{\sigma}\mu_n^A(a_n,\dots,a_1),$$ where $\sigma=\sum\limits_{1\leq i<j\leq n}(|a_i|+1)(|a_j|+1).$ We now define the notion of an $A_{\infty}$-module. \begin{defi}A right $A_{\infty}$-module $M$ over an $A_{\infty}$-algebra $A$ is a graded vector space with a sequence of operations $\mu_{n}^M:M\otimes A^{\otimes n-1}\to M,$ where $n>0,$ $\deg(\mu_n^M)=2-n,$ and the following relations are satisfied: \begin{multline}\sum\limits_{i+j=n}\mu_{j+1}^M(\mu_{i+1}^M(m,a_1,\dots,a_i),a_{i+1},\dots,a_n)+\\ \sum\limits_{i+j+k=n+1}(-1)^{|m|+l_1^i(a)}\mu_{i+k+1}^M(m,a_1,\dots,a_i,\mu_j^A(a_{i+1},\dots,a_{i+j}),a_{i+j+1},\dots,a_n)=0.\end{multline}\end{defi} We also need $A_{\infty}$-bimodules. \begin{defi}Let $A$ and $B$ be non-unital $A_{\infty}$-algebras. An $A_{\infty}$ $A\mhyphen B$-bimodule $M$ is a graded vector space with a collection of operations $\mu_{i,j}=\mu_{i,j}^M:A^{\otimes i}\otimes M\otimes B^{\otimes j}\to M,$ where $i,j\geq 0,$ such that for any $n,m\geq 0$ and homogeneous $a_1,\dots,a_n\in A,$ $b_1,\dots,b_m\in B,$ $m\in M,$ the following relation is satisfied: \begin{multline*}\sum\limits_{i+j+k=n+1}(-1)^{l_1^i(a)}\mu_{i+k+1,m}^M(a_1,\dots,\mu_j^A(a_{i+1},\dots,a_{i+j}),\dots,a_n,m,b_1,\dots,b_m)\\ +\sum\limits_{\substack{1\leq i\leq n+1;\\ 0\leq j\leq m}}\mu_{i-1,m-j}^M(a_1,\dots,a_{i-1},\mu_{n+1-i,j}^M(a_i,\dots,a_n,m,b_1,\dots,b_j),b_{j+1},\dots,b_m)\\ +\sum\limits_{i+j+k=m+1}(-1)^{l_1^n(a)+l_1^i(b)+|m|}\mu_{n,i+k+1}^M(a_1,\dots,a_n,m,b_1,\dots,\mu_j^B(b_{i+1},\dots,b_{i+j}),\dots,b_m)=0.\end{multline*} \end{defi} \begin{remark}1) In our sign convention, a non-unital DG algebra $B$ can be considered as an $A_{\infty}$-algebra, with $\mu_1(b)=-d(b),$ $\mu_2(b_1,b_2)=(-1)^{|b_1|}b_1 b_2,$ and $\mu_{\geq 3}=0.$ 2) If furthermore $M$ is a right DG $B$-module, then the $A_{\infty}$ $B$-module structure on $M$ is given by $\mu_1^M(m)=d(m),$ $\mu_2^M(m,b)=(-1)^{|m|+1}mb,$ and $\mu_{\geq 3}^M=0.$ 3) If $A$ is another non-unital DG
algebra, and $M$ is a DG $A\mhyphen B$-bimodule, then the $A_{\infty}$ $A\mhyphen B$-bimodule structure on $M$ is given by $\mu_{0,0}^M(m)=d(m),$ $\mu_{1,0}^M(a,m)=am,$ $\mu_{0,1}^M(m,b)=(-1)^{|m|+1}mb,$ and $\mu_{i,j}^M=0$ for $i+j\geq 2.$\end{remark} We now recall the strict unitality. \begin{defi}1) A non-unital $A_{\infty}$-algebra $A$ is called strictly unital if there is a (unique) element $1=1_A\in A$ such that $\mu_1(1)=0,$ $\mu_2(1,a)=a=(-1)^{|a|}\mu_2(a,1)$ for any homogeneous element $a\in A,$ and for $n\geq 3$ we have $\mu_n(a_1,\dots,a_n)=0$ if at least one of the arguments $a_i$ equals $1.$ 2) A non-unital $A_{\infty}$-morphism $f:A\to B$ between strictly unital $A_{\infty}$-algebras is called strictly unital if $f_1(1_A)=1_B,$ and for $n\geq 2$ we have $f_n(a_1,\dots,a_n)=0$ if at least one of the arguments $a_i$ equals $1.$ 3) Given a strictly unital $A_{\infty}$-algebra $A,$ an $A_{\infty}$ $A$-module $M$ is called strictly unital if $\mu_2^M(m,1)=(-1)^{|m|+1}m,$ and for $n\geq 3$ we have $\mu_n^M(m,a_1,\dots,a_{n-1})=0$ if at least one of $a_i$'s equals $1.$ 4) Given strictly unital $A_{\infty}$-algebras $A,$ $B,$ an $A_{\infty}$ $A\mhyphen B$-bimodule is called strictly unital if $\mu_{1,0}^M(1_A,m)=m,$ $\mu_{0,1}^M(m,1_B)=(-1)^{|m|+1}m,$ and for $k+l\geq 2$ we have $\mu_{k,l}(a_1,\dots,a_k,m,b_1,\dots,b_l)=0$ if at least one of $a_i$'s equals $1_A$ or at least one of $b_j$'s equals $1_B.$ \end{defi} From now on, all $A_{\infty}$-algebras and (bi)modules will be strictly unital. Given a strictly unital $A_{\infty}$-algebra $A,$ we define the DG category $\operatorname{Mod}^{\infty}\mhyphen A$ whose objects are $A_{\infty}$-modules and the morphisms are defined as follows.
Given $M,N\in \operatorname{Mod}^{\infty}\mhyphen A,$ we put $$\operatorname{Hom}^{\infty}_A(M,N)^{gr}:=\prod\limits_{n\geq 0}\operatorname{Hom}_{\mathrm k}(M\otimes A[1]^{\otimes n},N),$$ and the differential is given by \begin{multline*}d(\varphi)_n(m,a_1,\dots,a_n)=\sum\limits_{i=0}^n \mu_{n-i+1}^N(\varphi_i(m,a_1,\dots,a_i),a_{i+1},\dots,a_n)\\ -\sum\limits_{i=0}^n(-1)^{|\varphi|}\varphi_{n-i+1}(\mu_{i+1}^M(m,a_1,\dots,a_i),a_{i+1},\dots,a_n)\\ -\sum\limits_{1\leq i\leq j\leq n}(-1)^{|\varphi|+|m|+l_1^{i-1}(a)}\varphi_{n+i-j-1}(m,a_1,\dots,\mu_{j-i+1}^A(a_i,\dots,a_j),\dots,a_n).\end{multline*} The composition is given by $$(\varphi\psi)_n(m,a_1,\dots,a_n)=\sum\limits_{i=0}^n\varphi_{n-i}(\psi_{i}(m,a_1,\dots,a_i),a_{i+1},\dots,a_n).$$ Given a unital DG algebra $B,$ we denote by $\operatorname{PsPerf}(B)\subset \operatorname{Mod}^{\infty}\mhyphen B$ the full DG subcategory formed by pseudo-perfect $A_{\infty}$-modules. We have $[\operatorname{PsPerf}(B)]\simeq D_{\operatorname{pspe}}(B).$ \begin{remark}\label{rem:what_is_bimodule}Let $A,B$ be $A_{\infty}$-algebras.
1) An $A_{\infty}$ $A\mhyphen B$-bimodule structure on a graded vector space $M$ is equivalent to the following data: \begin{itemize} \item the right $A_{\infty}$ $B$-module structure on $M;$ \item the $A_{\infty}$-morphism $f:A\to \operatorname{End}^{\infty}_B(M). \end{itemize} Namely, given an $A_{\infty}$-bimodule $M,$ the induced $B$-module structure is given by $\mu_n^M=\mu_{0,n-1}^M,$ and the $A_{\infty}$-morphism is given by $f_n(a_1,\dots,a_n)(m,b_1,\dots,b_l)=\mu_{n,l}(a_1,\dots,a_n,m,b_1,\dots,b_l).$ 2) Also, an $A_{\infty}$-bimodule structure can be encoded by an $A_{\infty}$-bimorphism, as explained in Remark \ref{rem:bimodule_as_bimorphism} below.\end{remark} We finally define a technically useful notion of an $A_{\infty}$-bimorphism of (strictly unital) $A_{\infty}$-algebras $f:(A,B)\to C.$ It is given by the linear maps $f_{r,s}:A^{\otimes r}\otimes B^{\otimes s}\to C,$ where $r,s\geq 0,$ $r+s>0,$ so that the following relations are satisfied: \begin{multline}\sum\limits_{\substack{0=r_0\leq r_1\leq\dots\leq r_k=r;\\ 0=s_0\leq s_1\leq\dots\leq s_k=s}}(-1)^{\sigma}\mu_k^C(f_{r_1,s_1}(a_1,\dots,a_{r_1};b_1,\dots,b_{s_1}),\dots,\\ f_{r-r_{k-1},s-s_{k-1}}(a_{r_{k-1}+1},\dots,a_{r};b_{s_{k-1}+1},\dots,b_s))=\\ \sum\limits_{i+j+k=r}(-1)^{l_1^i(a)}f_{i+k+1,s}(a_1,\dots,a_i,\mu_j(a_{i+1},\dots,a_{i+j}),a_{i+j+1},\dots,a_r;b_1,\dots,b_s)+\\ \sum\limits_{i+j+k=s}(-1)^{l_1^r(a)+l_1^i(b)}f_{r,i+k+1}(a_1,\dots,a_r;b_1,\dots,b_i,\mu_j(b_{i+1},\dots,b_{i+j}),b_{i+j+1},\dots,b_s),\end{multline} where $\sigma=\sum\limits_{1\leq p<q\leq k}l_{r_{q-1}+1}^{r_q}(a)l_{s_{p-1}+1}^{s_p}(b).$ We require that $f_{1,0}(1_A)=1_C=f_{0,1}(1_B),$ and for $k+l\geq 2$ $f_{k,l}(a_1,\dots,a_k,b_1,\dots,b_l)=0$ if at least one of $a_i$'s equals $1_A,$ or at least one of $b_j$'s equals $1_B.$ \begin{remark}One can similarly define $A_{\infty}$ $n$-morphisms $(A_1,\dots,A_n)\to B,$ so that the category of $A_{\infty}$-algebras becomes a (non-symmetric) pseudo-monoidal category.
In particular, $A_{\infty}$ $n$-morphisms can be composed with $A_{\infty}$-morphisms in the natural way.\end{remark} \begin{remark}\label{rem:bimodule_as_bimorphism} If a graded vector space $M$ is given a differential $d,$ then an $A_{\infty}$-bimodule structure on $M$ (with $\mu_{0,0}^M=d$) is equivalent to an $A_{\infty}$-bimorphism $f:(A,B^{op})\to\operatorname{End}_{\mathrm k}(M).$ Given such an $A_{\infty}$-bimorphism, one puts $$\mu_{r,s}^M(a_1,\dots,a_r,m,b_1,\dots,b_s):=(-1)^{l}f_{r,s}(a_1,\dots,a_r,b_s,\dots,b_1)(m),$$ where $l=l_1^s(b)\cdot|m|+\sum\limits_{1\leq p<q\leq s}(|b_p|+1)(|b_q|+1).$\end{remark} The diagonal $A_{\infty}$ $A\mhyphen A$-bimodule is given by $A$ as a graded vector space, and we have $$\mu_{i,j}(a_1,\dots,a_i,b,c_1,\dots,c_j)=(-1)^{l_1^i(a)+1}\mu_{i+j+1}^A(a_1,\dots,a_i,b,c_1,\dots,c_j).$$ Finally, we mention the gluing of $A_{\infty}$-algebras. Let $M$ be an $A_{\infty}$ $A\mhyphen B$-bimodule. We denote by $\begin{pmatrix} B & 0\\ M & A \end{pmatrix}$ the $A_{\infty}$-algebra $C$ which equals $A\oplus B\oplus M$ as a graded vector space, so that the non-zero components of $\mu_n^C$ are given by $\mu_n^A,$ $\mu_n^B,$ and $$(-1)^{l_1^i(a)+1}\mu_{i,j}(a_1,\dots,a_i,m,b_1,\dots,b_j),\quad i+j+1=n,$$ where $a_1,\dots,a_i\in A,$ $b_1,\dots,b_j\in B.$ \section{Preliminaries on the Hochschild complex, pairings and copairings} In this section all $A_{\infty}$-algebras are strictly unital.
For an $A_{\infty}$-algebra $A,$ we put $\overline{A}:=A/\mathrm k\cdot 1_A.$ The mixed Hochschild complex (see \cite{Ke2, KS}) $(C_{\bullet}(A),b,B)$ of an $A_{\infty}$-algebra $A$ is given as a graded vector space by $$C_{\bullet}(A):=\bigoplus\limits_{n\geq 0}A\otimes (\overline{A}[1])^{\otimes n}.$$ For convenience we write $(a_0,\dots,a_n)$ instead of $a_0\otimes\dots\otimes a_n\in C_{\bullet}(A).$ The Hochschild differential is given by \begin{multline}b(a_0,\dots,a_n)=\sum\limits_{0\leq i\leq j\leq n}(-1)^{l_0^{i-1}(a)+1}(a_0,\dots,\mu_{j-i+1}(a_i,\dots,a_j),\dots,a_n)+\\ \sum\limits_{0\leq p<q\leq n}(-1)^{l_0^{q-1}(a)l_q^n(a)+1}(\mu_{n+p+2-q}(a_q,\dots,a_n,a_0,\dots,a_p),a_{p+1},\dots,a_{q-1}).\end{multline} The Connes-Tsygan differential $B$ (see \cite{Co, FT, Ts}) is given by $$B(a_0,a_1,\dots,a_n)=\sum\limits_{0\leq i\leq n}(-1)^{l_0^{i-1}(a)l_i^n(a)+1}(1,a_i,\dots,a_n,a_0,\dots,a_{i-1}).$$ The Hochschild complex can be more generally defined for $A_{\infty}$-categories, and is Morita invariant \cite{KS}. We refer to \cite{KS} for the definitions of cyclic homology $HC_{\bullet},$ negative cyclic homology $HC^{-}_{\bullet}$ and periodic cyclic homology $HP_{\bullet}.$ In this paper we will in fact deal only with the first differential of the Hochschild-to-cyclic spectral sequence, which is the map $B:HH_n(A)\to HH_{n+1}(A)$ induced by the Connes-Tsygan differential. We recall the natural pairings and co-pairings on $HH_{\bullet}(A).$ Let us restrict ourselves to DG algebras for a moment. Given a DG algebra $A,$ we have a Chern character $\operatorname{ch}:K_n(A)\to HH_n(A)$ (see \cite{CT}; the Chern character naturally lifts to $HC^{-}(A),$ but we will not need this).
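As a lowest-degree illustration of the Connes-Tsygan differential defined above (a standard observation, spelled out here for convenience; it is not part of the original argument, and the identification with $d_{dR}$ is only claimed up to normalization conventions), let $R$ be an ordinary commutative unital $\mathrm k$-algebra, viewed as an $A_{\infty}$-algebra with only $\mu_2$ non-zero. A class in $HH_0(R)=R$ is represented by a length-zero chain $(a_0),$ and the formula for $B$ has a single term, namely $i=0$:

```latex
% Lowest-degree instance of the Connes--Tsygan differential (sketch).
B(a_0) = -(1, a_0) \in R \otimes \overline{R}[1], \qquad a_0 \in HH_0(R) = R.
```

Under the Hochschild-Kostant-Rosenberg identification $HH_1(R)\cong\Omega^1_{R/\mathrm k},$ the class of $(1,a_0)$ corresponds (up to sign) to $d_{dR}a_0,$ so the induced map $B:HH_0(R)\to HH_1(R)$ is the de Rham differential; this is the form in which $B$ is used in Section \ref{sec:disproving_very_general}.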
In particular, given DG algebras $A,B$ and an object $M\in D_{\operatorname{perf}}(A\otimes B),$ we have a copairing $$\operatorname{ch}(M)\in (HH_{\bullet}(A)\otimes HH_{\bullet}(B))_0\cong HH_0(A\otimes B).$$ This copairing is used in the formulation of Conjecture \ref{conj:degeneration_for_smooth_intro} for $A=B^{op}$ being smooth, and $M=A.$ Dually \cite{S}, if we have DG algebras $A$ and $B,$ and an object $M\in D_{\operatorname{pspe}}(A^{op}\otimes B^{op}),$ then we have a pairing (of degree zero) $$HH_{\bullet}(A)\otimes HH_{\bullet}(B)\to HH_{\bullet}(A\otimes B)\to HH_{\bullet}(\operatorname{End}_{\mathrm k}(M))\to \mathrm k$$ (the last map is an isomorphism if and only if $M$ is not acyclic). In the formulation of Conjecture \ref{conj:degeneration_for_proper_intro} this pairing is used for $A=B^{op}$ proper, and $M=A.$ In this case we denote the pairing by $\langle \cdot,\cdot\rangle.$ We would like to obtain an explicit formula for the pairing in the $A_{\infty}$-setting. The reader who is not interested in (or is already familiar with) the details can skip to Corollary \ref{cor:corollary_on_m_3} which is essentially all we need. Let $A,B,C$ be $A_{\infty}$-algebras. 
Suppose that we are given an $A_{\infty}$-bimorphism $f:(A,B)\to C.$ We would like to define an explicit map of complexes $$f_*:C_{\bullet}(A)\otimes C_{\bullet}(B)\to C_{\bullet}(C).$$ It is given by \begin{multline}\label{eq:map_on HH_induced by_bimorphism}f_*((a_0,\dots,a_n)\otimes (b_0,\dots,b_m))=\\ \sum\limits_{\substack{0\leq i_0\leq \dots\leq i_k\leq n;\\0\leq j_0\leq\dots\leq j_k\leq m;\\ 0\leq p< q\leq k}}(-1)^{\varepsilon(i_0,\dots,i_k,j_1,\dots,j_k,p,q)}(\mu_{k+p+2-q}(f_{i_{q+1}-i_q,j_q-j_{q-1}}(a_{i_q+1},\dots,a_{i_{q+1}},b_{j_{q-1}+1},\dots,b_{j_q}),\\ \dots, f_{i_{p+1}-i_p,j_p-j_{p-1}}(a_{i_p+1},\dots,a_{i_{p+1}},b_{j_{p-1}+1},\dots,b_{j_p})),\\ f_{i_{p+2}-i_{p+1},j_{p+1}-j_p}(a_{i_{p+1}+1},\dots,a_{i_{p+2}},b_{j_p+1},\dots,b_{j_{p+1}}),\dots,\\ f_{i_q-i_{q-1},j_{q-1}-j_{q-2}}(a_{i_{q-1}+1},\dots,a_{i_q},b_{j_{q-2}+1},\dots,b_{j_{q-1}})),\end{multline} where \begin{multline*}\varepsilon(i_0,\dots,i_k,j_1,\dots,j_k,p,q)=l_0^m(a)+l_{i_q+1}^{n}(a)l_{0}^{i_q}(a)+l_{j_{q-1}+1}^{m}(b)l_{0}^{j_{q-1}}(b)+1+\\ \sum\limits_{s=1}^k l_{i_{q+s}+1}^{i_{q+s+1}}(a)l_{j_{q-1}+1}^{j_{q+s-1}}(b).\end{multline*} In this summation we mean that $i_{s+k+1}=i_s,$ $j_{s+k+1}=j_s,$ $a_{s+n+1}=a_s,$ $b_{s+m+1}=b_s.$ Also, we require that for all $s=1,\dots,k-1$ we have $(i_{s+1}-i_s)+(j_s-j_{s-1})>0,$ so that we don't get the (non-existing) $f_{0,0}$ anywhere.
\begin{remark}Suppose that we are in the special situation when $A,$ $B$ and $C$ are DG algebras, and the $A_{\infty}$-bimorphism $f$ has only two non-zero components $f_{1,0}$ and $f_{0,1}.$ This is equivalent to a DG algebra morphism $A\otimes B\to C,$ which we still denote by $f.$ The map given by \eqref{eq:map_on HH_induced by_bimorphism} is obtained by composing the map $C_{\bullet}(A\otimes B)\to C_{\bullet}(C)$ with the Eilenberg-Zilber map $EZ:C_{\bullet}(A)\otimes C_{\bullet}(B)\to C_{\bullet}(A\otimes B).$ \end{remark} \begin{prop}\label{prop:explicit_pairing_via_str}Let $A_1$ and $A_2$ be strictly unital $A_{\infty}$-algebras, and let $M$ be a finite-dimensional strictly unital $A_{\infty}$ $A_1\mhyphen A_2$-bimodule (we require that $\sum\limits_{n}\dim(M^n)<\infty$). Then the composition map $$\psi:HH_{\bullet}(A_1)\otimes HH_{\bullet}(A_2^{op})\xrightarrow{\operatorname{id}\otimes B}HH_{\bullet}(A_1)\otimes HH_{\bullet}(A_2^{op})\to HH_{\bullet}(\operatorname{End}_{\mathrm k}(M))\to \mathrm k$$ is given by the following explicit formula: \begin{multline*}\psi((a_0,\dots,a_n)\otimes (b_0,\dots,b_m))=\operatorname{str}_M\Big(v\mapsto\\ (-1)^{l_0^m(b)\cdot|v|}\sum\limits_{\substack{0\leq i\leq n;\\ 0\leq j\leq m}}(-1)^{\sigma_{i,j}}\mu_{n+1,m+1}(a_i,\dots,a_n,a_0,\dots,a_{i-1},v,b_j,\dots,b_0,b_m,\dots,b_{j+1})\Big),\end{multline*} where $$\sigma_{i,j}=l_0^n(a)+l_0^{i-1}(a)l_i^n(a)+\sum\limits_{0\leq p<q\leq j}(|b_p|+1)(|b_q|+1)+\sum\limits_{j+1\leq p<q\leq m}(|b_p|+1)(|b_q|+1).$$\end{prop} \begin{proof}Recall that for a finite-dimensional complex $V$ the natural map $HH_{\bullet}(\operatorname{End}_{\mathrm k}(V))\to\mathrm k$ (which is an isomorphism if and only if $V$ is not acyclic) is given by the following morphism of complexes $C_{\bullet}(\operatorname{End}_{\mathrm k}(V))\to \mathrm k:$ $$(a_0,\dots,a_k)\mapsto \begin{cases}\operatorname{str}_V(a_0) & \text{ for }k=0,\,|a_0|=0;\\ 0 & \text{otherwise.}\end{cases}$$ The result follows by applying the formula
\eqref{eq:map_on HH_induced by_bimorphism} and Remark \ref{rem:bimodule_as_bimorphism} (and taking the strict unitality into account).\end{proof} Finally, we mention one particular corollary which we need in this paper. \begin{cor}\label{cor:corollary_on_m_3}Let $A$ be a finite-dimensional non-unital $A_{\infty}$-algebra, and let $a,b\in A$ be closed homogeneous elements such that $|a|+|b|=1.$ Consider $a$ and $b$ as classes in $HH_{\bullet}(A)$ and $HH_{\bullet}(A^{op})$ respectively. Then $$\langle a,B(b)\rangle=(-1)^{|a|+1}\operatorname{str}_A(v\mapsto (-1)^{(|b|+1)\cdot|v|}\mu_3(a,v,b)).$$\end{cor} \begin{proof}This follows immediately from Proposition \ref{prop:explicit_pairing_via_str}.\end{proof} \section{A counterexample to the generalized degeneration conjecture} \label{sec:disproving_very_general} We recall the main conjecture of \cite{E}. \begin{conj}\label{conj:very_general}\cite[Conjecture 1.3 for $n=0$]{E} Let ${\mathcal B}$ and ${\mathcal C}$ be small DG categories over a field $\mathrm k$ of characteristic zero. Then the composition map \begin{equation}\label{eq:map_phi_0}\varphi_0:K_0({\mathcal B}\otimes{\mathcal C})\xrightarrow{\operatorname{ch}} (HH_{\bullet}({\mathcal B})\otimes HH_{\bullet}({\mathcal C}))_0\xrightarrow{\operatorname{id}\otimes\delta} (HH_{\bullet}({\mathcal B})\otimes HC^-_{\bullet}({\mathcal C}))_1\end{equation} is zero.\end{conj} In this section we construct a counterexample to Conjecture \ref{conj:very_general}. We put $\Lambda_1=\mathrm k\langle\xi\rangle/\xi^2,$ where $|\xi|=1,$ and (automatically) $d\xi=0.$ We have a quasi-equivalence $\operatorname{Perf}(\Lambda_1)\simeq\operatorname{Perf}_{\{0\}}({\mathbb A}_{\mathrm k}^1)$ (the free $\Lambda_1$-module of rank $1$ corresponds to the skyscraper sheaf ${\mathcal O}_{0}$).
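The quasi-equivalence $\operatorname{Perf}(\Lambda_1)\simeq\operatorname{Perf}_{\{0\}}({\mathbb A}^1_{\mathrm k})$ is a standard Koszul-type computation; we sketch it here for convenience (this verification is not part of the original argument). The skyscraper sheaf ${\mathcal O}_0$ admits the resolution by multiplication by the coordinate, which computes its Ext-algebra:

```latex
% Sketch: derived endomorphisms of the skyscraper sheaf on the affine line.
0 \to {\mathcal O}_{{\mathbb A}^1} \xrightarrow{\; x \;} {\mathcal O}_{{\mathbb A}^1}
  \to {\mathcal O}_0 \to 0,
\qquad\text{whence}\qquad
\operatorname{Ext}^i({\mathcal O}_0,{\mathcal O}_0) \cong
\begin{cases}
\mathrm k & \text{if } i=0,1;\\
0 & \text{otherwise,}
\end{cases}
```

with the degree $1$ generator $\xi$ squaring to zero. For degree reasons (and by strict unitality) all operations $\mu_n$ with $n\ne 2$ vanish on any $A_{\infty}$-structure on $\mathrm k\oplus\mathrm k\cdot\xi,$ so ${\mathbf R}\operatorname{End}({\mathcal O}_0)\simeq\Lambda_1;$ since ${\mathcal O}_0$ classically generates $\operatorname{Perf}_{\{0\}}({\mathbb A}^1_{\mathrm k}),$ the asserted quasi-equivalence follows.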
In particular, we have a short exact sequence \begin{equation}\label{eq:ex_sec_on_A^1}0\to\operatorname{Perf}(\Lambda_1)\to \operatorname{Perf}({\mathbb A}^1)\to\operatorname{Perf}(\mathbb{G}_m)\to 0.\end{equation} We also denote by $\mathrm k[\varepsilon]:=\mathrm k[t]/t^2$ the algebra of dual numbers ($|\varepsilon|=0,$ $d\varepsilon=0$). Let us denote by $x$ the coordinate on ${\mathbb A}^1,$ and put $T:=\operatorname{Spec}(\mathrm k[\varepsilon]).$ Tensoring \eqref{eq:ex_sec_on_A^1} by $\mathrm k[\varepsilon]$ (and taking perfect complexes), we obtain another short exact sequence: \begin{equation}\label{eq:ex_sec_times_T}0\to \operatorname{Perf}(\Lambda_1\otimes \mathrm k[\varepsilon])\to \operatorname{Perf}({\mathbb A}^1\times T)\to \operatorname{Perf}(\mathbb{G}_m\times T)\to 0.\end{equation} Now let us take the Cartier divisor $D:=\{x+\varepsilon=0\}\subset {\mathbb A}^1\times T.$ This is well-defined since $x+\varepsilon$ is not a zero divisor in $\mathrm k[x]\otimes\mathrm k[\varepsilon].$ Moreover, we have $D\cap (\mathbb{G}_m\times T)=\emptyset,$ since $x+\varepsilon$ is invertible in $\mathrm k[x^{\pm 1}]\otimes\mathrm k[\varepsilon]:$ we have $(x+\varepsilon)(x^{-1}-x^{-2}\varepsilon)=1.$ Therefore, by \eqref{eq:ex_sec_times_T}, we may and will consider ${\mathcal O}_D$ as an object of $\operatorname{Perf}(\Lambda_1\otimes\mathrm k[\varepsilon]).$ \begin{theo}\label{th:disproving_very_general}Conjecture \ref{conj:very_general} does not hold for the DG algebras $\Lambda_1$ and $\mathrm k[\varepsilon].$ Namely, we have $\varphi_0([{\mathcal O}_D])\ne 0,$ where $\varphi_0$ is defined in \eqref{eq:map_phi_0}.\end{theo} \begin{proof}We will prove a stronger statement: $\psi_0([{\mathcal O}_{D}])\ne 0,$ where $\psi_0$ is the composition $$K_0(\Lambda_1\otimes\mathrm k[\varepsilon])\xrightarrow{\operatorname{ch}}(HH_{\bullet}(\Lambda_1)\otimes HH_{\bullet}(\mathrm k[\varepsilon]))_0\xrightarrow{\operatorname{id}\otimes B} (HH_{\bullet}(\Lambda_1)\otimes
HH_{\bullet}(\mathrm k[\varepsilon]))_1.$$ We use the notation $d_{dR}$ for the de Rham differential in order to avoid confusion with differentials in DG algebras. First let us identify the Hochschild homology of $\Lambda_1.$ Applying the long exact sequence in Hochschild homology to \eqref{eq:ex_sec_on_A^1}, we see that $$HH_{-1}(\Lambda_1)=\mathrm k[x^{\pm 1}]/\mathrm k[x],\text{ and }HH_0(\Lambda_1)=\mathrm k[x^{\pm 1}]d_{dR}x/\mathrm k[x]d_{dR}x,$$ and $HH_i(\Lambda_1)=0$ for $i\not\in\{-1,0\}.$ Further, for any commutative $\mathrm k$-algebra $R$ we have $HH_0(R)=R,$ and $HH_1(R)=\Omega^1_{R/\mathrm k}$ (and the Connes differential $B:HH_0(R)\to HH_1(R)$ is given by the de Rham differential). In particular, we have $HH_0(\mathrm k[\varepsilon])=\mathrm k[\varepsilon],$ and $HH_1(\mathrm k[\varepsilon])=\mathrm k\cdot d_{dR}\varepsilon$ (and we do not need $HH_{\geq 2}(\mathrm k[\varepsilon])$ for our considerations). {\noindent{\bf Claim.}} {\it Within the above notation, we have $\operatorname{ch}({\mathcal O}_D)=\frac{d_{dR}x}{x}\otimes 1-\frac{d_{dR}x}{x^2}\otimes \varepsilon+\frac{1}{x}\otimes d_{dR}\varepsilon.$} \begin{proof}As we already mentioned, the function $x+\varepsilon$ is invertible on $\mathbb{G}_m\times T,$ hence it gives an element $\alpha\in K_1(\mathbb{G}_m\times T).$ Moreover, the boundary map $$K_1(\mathbb{G}_m\times T)\to K_0(\Lambda_1\otimes\mathrm k[\varepsilon])$$ sends $\alpha$ to $[{\mathcal O}_D].$ We have $\operatorname{ch}(\alpha)=d_{dR}\log(x+\varepsilon)\in\Omega^1_{\mathbb{G}_m\times T}=HH_1(\mathbb{G}_m\times T).$ Explicitly, we have $$d_{dR}\log(x+\varepsilon)=(x^{-1}-x^{-2}\varepsilon)d_{dR}(x+\varepsilon)=\frac{d_{dR}x}{x}-\frac{\varepsilon d_{dR}x}{x^2}+\frac{d_{dR}\varepsilon}{x}.$$ Applying the boundary map $HH_1(\mathbb{G}_m\times T)\to HH_0(\Lambda_1\otimes \mathrm k[\varepsilon]),$ we obtain the desired formula for $\operatorname{ch}({\mathcal O}_D).$\end{proof} It follows from the Claim that $$(\operatorname{id}\otimes
B)(\operatorname{ch}([{\mathcal O}_D]))=-\frac{d_{dR}x}{x^2}\otimes d_{dR}\varepsilon\ne 0.$$ This proves the theorem. \end{proof} \section{A counterexample to Conjecture \ref{conj:degeneration_for_smooth_intro}} \label{sec:disproving_version_for_smooth} In this section we disprove Conjecture \ref{conj:degeneration_for_smooth_intro}. \begin{prop}\label{prop:smooth_comp_implies_smooth}Let $B$ be a smooth DG algebra and $F:\operatorname{Perf}(A)\to \operatorname{Perf}(B)$ a localization functor, where $A$ is a smooth and proper DG algebra. Then Conjecture \ref{conj:degeneration_for_smooth_intro} holds for $B.$\end{prop} \begin{proof}This is actually explained in \cite[proof of Theorem 4.6]{E}. We explain the argument for completeness. The localization assumption implies that $(F\otimes F^{op})^*(I_A)=I_B.$ In particular, the map $HH_{\bullet}(A)\otimes HH_{\bullet}(A^{op})\to HH_{\bullet}(B)\otimes HH_{\bullet}(B^{op})$ takes $\operatorname{ch}(I_A)$ to $\operatorname{ch}(I_B).$ It remains to apply the commutative diagram $$\begin{CD}HH_{\bullet}(A)\otimes HH_{\bullet}(A^{op})@>\operatorname{id}\otimes\delta >> HH_{\bullet}(A)\otimes HC^{-}_{\bullet}(A^{op})[-1]\\ @VVV @VVV\\ HH_{\bullet}(B)\otimes HH_{\bullet}(B^{op})@>\operatorname{id}\otimes\delta >> HH_{\bullet}(B)\otimes HC^{-}_{\bullet}(B^{op})[-1],\end{CD}$$ and Theorem \ref{th:Kaledin_degen} applied to $A.$ \end{proof} We have the following corollary, mentioned in the introduction. \begin{cor}\label{cor:conj_for_smooth_holds_for_D^b_coh} Let $X$ be a separated scheme of finite type over $\mathrm k,$ and ${\mathcal G}\in D^b_{coh}(X)$ -- a generator.
Then Conjecture \ref{conj:degeneration_for_smooth_intro} holds for the smooth DG algebra $A={\mathbf R}\operatorname{End}({\mathcal G}).$\end{cor} \begin{proof}Indeed, by \cite[Theorem 1.8 1)]{E2}, there is a localization functor of the form $D^b_{coh}(Y)\to D^b_{coh}(X),$ where $Y$ is a smooth projective algebraic variety over $\mathrm k.$ The result follows by Proposition \ref{prop:smooth_comp_implies_smooth}. Note that here we don't even need to apply Theorem \ref{th:Kaledin_degen} since we only use the classical Hodge-to-de Rham degeneration for $Y.$\end{proof} \begin{remark}In fact, in the formulation of Proposition \ref{prop:smooth_comp_implies_smooth} we could weaken the assumption on the functor $F$ to be a localization, requiring it only to be a homological epimorphism, which means that the functor $D(A)\to D(B)$ is a localization, see \cite[Section 3]{E2}. Then in the proof of Corollary \ref{cor:conj_for_smooth_holds_for_D^b_coh} we can apply the corresponding weakened version of \cite[Theorem 1.8 1)]{E2}, which is much easier to prove.\end{remark} Clearly, Conjecture \ref{conj:degeneration_for_smooth_intro} is a special case of Conjecture \ref{conj:very_general}. On the other hand, it was proved in \cite{E} that Conjectures \ref{conj:very_general} and \ref{conj:degeneration_for_smooth_intro} are actually equivalent (more precisely, this follows from the proof of \cite[Theorem 4.6]{E}). However, deducing an explicit counterexample to Conjecture \ref{conj:degeneration_for_smooth_intro} along the lines of \cite{E} would require some computations, which we wish to avoid. Instead, we use a trick. Let us take some elliptic curve $E$ over $\mathrm k,$ with a $\mathrm k$-rational point $p\in E(\mathrm k).$ Choosing a local parameter $x\in{\mathcal O}_{E,p},$ we get an identification $\operatorname{Perf}(\Lambda_1)\simeq\operatorname{Perf}_{\{p\}}(E)\subset \operatorname{Perf}(E).$ Let us choose some generator ${\mathcal F}\in\operatorname{Perf}(E)$ (e.g.
${\mathcal F}={\mathcal O}_E\oplus{\mathcal O}_p$), and put $B_E={\mathbf R}\operatorname{End}({\mathcal F}),$ so that $\operatorname{Perf}(B_E)\simeq\operatorname{Perf}(E).$ We denote by $F:\operatorname{Perf}(\Lambda_1)\hookrightarrow \operatorname{Perf}(B_E)$ the resulting embedding. Further, we denote by $C$ the semi-free DG algebra $\mathrm k\langle t_1,t_2\rangle,$ with $|t_1|=0,$ $|t_2|=-1,$ $dt_1=0,$ and $dt_2=t_1^2.$ We take the object $M\in\operatorname{Perf}(\Lambda_1\otimes C\otimes C)$ whose image in $\operatorname{Perf}(\mathrm k[x]\otimes C\otimes C)$ is given by $$Cone(\mathrm k[x]\otimes C^{\otimes 2}\xrightarrow{x\otimes 1^{\otimes 2}+1\otimes t_1^{\otimes 2}}\mathrm k[x]\otimes C^{\otimes 2}).$$ As in the previous section, we see that $M$ is well-defined since the element $$x\otimes 1^{\otimes 2}+1\otimes t_1^{\otimes 2}\in H^0(\mathrm k[x^{\pm 1}]\otimes C\otimes C)=\mathrm k[x^{\pm 1}]\otimes\mathrm k[\varepsilon]\otimes\mathrm k[\varepsilon]$$ is invertible. Finally, we put $N:=(F\otimes\operatorname{id}_C^{\otimes 2})^*(M)\in \operatorname{Perf}(B_E\otimes C\otimes C).$ \begin{theo}\label{th:disproving_for_smooth} 1) Within the above notation, the DG algebra $$A:=\begin{pmatrix} B_E\otimes C & 0\\ N & C^{op} \end{pmatrix}$$ is homotopically finitely presented (hence smooth), but it does not satisfy Conjecture \ref{conj:degeneration_for_smooth_intro}. 2) The DG category $\operatorname{Perf}(A)$ gives a negative answer to Question \ref{ques:Toen}.\end{theo} \begin{proof}First, by Proposition \ref{prop:smooth_comp_implies_smooth} we see that 2) reduces to 1). We now prove 1). The homotopy finiteness of $A$ follows from \cite[Proposition 5.15]{E2} (gluing of homotopically finite DG algebras by a perfect bimodule is again homotopically finite). The functor $F:\operatorname{Perf}(\Lambda_1)\to \operatorname{Perf}(B_E)\simeq\operatorname{Perf}(E)$ induces a map $HH_F$ in Hochschild homology.
We need the following values of $HH_F.$ First, the morphism $HH_F:HH_0(\Lambda_1)\to HH_0(E)=H^0({\mathcal O}_E)\oplus H^1(\omega_E)\cong\mathrm k\oplus\mathrm k$ is given by $$\frac{d_{dR}x}{x^n}\mapsto \begin{cases}(0,1) & \text{ for }n=1;\\ 0 & \text{ for }n>1.\end{cases}$$ Further, the morphism $HH_F:HH_{-1}(\Lambda_1)\to HH_{-1}(E)=H^1({\mathcal O}_E)$ does not vanish on $x^{-1}$ (because there is no rational function on $E$ having a single simple pole at $p$). We denote the image $HH_F(x^{-1})$ by $[x^{-1}].$ To prove 1), it suffices to show that $(\operatorname{id}\otimes\operatorname{id}\otimes B)(\operatorname{ch}(N))\in (HH_{\bullet}(\Lambda_1)\otimes HH_{\bullet}(C)^{\otimes 2})_{1}$ is non-zero. We have a natural projection $\pi:C\to H^0(C)\cong \mathrm k[\varepsilon].$ Let us put $\bar{N}:=(\operatorname{id}\otimes\pi^*\otimes\pi^*)(N)\in\operatorname{Perf}(E\times T\times T).$ Then $\bar{N}$ is naturally isomorphic to ${\mathcal O}_{D'},$ where $D'\subset E\times T\times T$ is a Cartier divisor, set-theoretically contained in $\{p\}\times T\times T,$ and given locally by the equation $x\otimes 1^{\otimes 2}+1\otimes \varepsilon^{\otimes 2}=0.$ The computation from Section \ref{sec:disproving_very_general} implies that $$\operatorname{ch}(\bar{N})=(0,1)\otimes 1^{\otimes 2}+[x^{-1}]\otimes d_{dR}\varepsilon\otimes\varepsilon+[x^{-1}]\otimes \varepsilon\otimes d_{dR}\varepsilon.$$ Therefore, we obtain $$(\operatorname{id}\otimes \operatorname{id}\otimes B)(\operatorname{ch}(\bar{N}))=[x^{-1}]\otimes d_{dR}\varepsilon\otimes d_{dR}\varepsilon\ne 0.$$ By functoriality, this implies $(\operatorname{id}\otimes \operatorname{id}\otimes B)(\operatorname{ch}(N))\ne 0.$ This proves 1). \end{proof} \section{A counterexample to Conjecture \ref{conj:degeneration_for_proper_intro}} \label{sec:disproving_for_proper} In this section we disprove Conjecture \ref{conj:degeneration_for_proper_intro}.
More precisely, we will construct an example of a minimal finite-dimensional $A_{\infty}$-algebra $B$ and two elements $a,b\in B,$ such that $|a|+|b|=1,$ and $$\operatorname{str}_B(v\mapsto (-1)^{(|b|+1)|v|}\mu_3(a,v,b))\ne 0,$$ thus disproving Conjecture \ref{conj:degeneration_for_proper_intro} (by Corollary \ref{cor:corollary_on_m_3}). We first mention the following observation, which in fact motivates Conjecture \ref{conj:degeneration_for_proper_intro}. \begin{prop}\label{prop:cat_res_implies_prop}Let $B$ be a proper DG algebra and $\operatorname{Perf}(B)\hookrightarrow \operatorname{Perf}(A)$ a quasi-fully-faithful functor, where $A$ is a smooth and proper DG algebra. Then Conjecture \ref{conj:degeneration_for_proper_intro} holds for $B.$\end{prop} \begin{proof}Indeed, this follows from the commutative diagram $$\begin{CD}HH_{\bullet}(B)\otimes HC_{\bullet}(B^{op})[1] @>{\operatorname{id}\otimes\delta}>> HH_{\bullet}(B)\otimes HH_{\bullet}(B^{op})@>>> \mathrm k\\ @VVV @VVV @V\operatorname{id} VV\\ HH_{\bullet}(A)\otimes HC_{\bullet}(A^{op})[1] @>{\operatorname{id}\otimes\delta}>> HH_{\bullet}(A)\otimes HH_{\bullet}(A^{op})@>>> \mathrm k\end{CD}$$ and Theorem \ref{th:Kaledin_degen} applied to $A.$ \end{proof} We have the following corollary, mentioned in the introduction. \begin{cor}\label{cor:conj_for_proper_holds_for_Perf_Z} Let $X$ be a separated scheme of finite type over $\mathrm k,$ and $Z\subset X$ a closed proper subscheme.
For any object ${\mathcal F}\in \operatorname{Perf}_Z(X),$ Conjecture \ref{conj:degeneration_for_proper_intro} holds for the proper DG algebra $B={\mathbf R}\operatorname{End}({\mathcal F}).$\end{cor} \begin{proof}Choosing some compactification $X\subset \bar{X}$ (which exists by Nagata's compactification theorem \cite{N}), we get $\operatorname{Perf}_Z(X)\simeq \operatorname{Perf}_Z(\bar{X}).$ Thus, we may and will assume $X=\bar{X}=Z.$ Then the result follows by applying Proposition \ref{prop:cat_res_implies_prop} together with \cite[Theorem 6.12]{KL}. As in the proof of Corollary \ref{cor:conj_for_smooth_holds_for_D^b_coh}, here we only use the classical Hodge-to-de Rham degeneration.\end{proof} The crucial point is the following theorem, which is of independent interest. \begin{theo}\label{th:nilpotence_and_factorization} 1) Let $A$ be a DG algebra, and $a\in H^0(A)$ a nilpotent element. Then the corresponding morphism $f:\mathrm k[x]\to A$ (where $|x|=0$) in $\operatorname{Ho}(\operatorname{dgalg}_{\mathrm k})$ factors through $\mathrm k[x]/x^n$ for a sufficiently large $n.$ 2) If moreover $a^2=0$ in $H^0(A),$ then it suffices to take $n=6.$\end{theo} Before we prove Theorem \ref{th:nilpotence_and_factorization}, we show how it allows us to construct a counterexample to Conjecture \ref{conj:degeneration_for_proper_intro}. \begin{theo}1) Let us denote by $y$ the variable of degree $1.$ Then there exists an $A_{\infty}$ $\mathrm k[y]/y^3\mhyphen \mathrm k[x]/x^6$-bimodule structure on the $1$-dimensional vector space $V=\mathrm k\cdot z$ (where $|z|=0$) such that $\mu_3^V(x,z,y)=z.$ In particular, in the glued $A_{\infty}$-algebra $$B=\begin{pmatrix} \mathrm k[y]/y^3 & 0\\ V & \mathrm k[x]/x^6 \end{pmatrix}$$ we have $\operatorname{str}(v\mapsto \mu_3(x,v,y))=1.$ Therefore, by Corollary \ref{cor:corollary_on_m_3} this $A_{\infty}$-algebra (and any quasi-isomorphic DG algebra) does not satisfy Conjecture \ref{conj:degeneration_for_proper_intro}.
2) In particular, the proper DG category $\operatorname{Perf}^{\infty}(B)$ does not have a categorical resolution of singularities.\end{theo} \begin{proof}1) An easy computation shows that $\mathrm{Ext}^0_{\mathrm k[y]/y^3}(\mathrm k,\mathrm k)=\mathrm k[\varepsilon]$ (dual numbers). By Theorem \ref{th:nilpotence_and_factorization} 2), we have an $A_{\infty}$-morphism $g:\mathrm k[x]/x^6\to \operatorname{End}^{A_{\infty}}_{\mathrm k[y]/y^3}(\mathrm k),$ such that $\overline{g_1(x)}=\varepsilon\in H^0(\operatorname{End}^{A_{\infty}}_{\mathrm k[y]/y^3}(\mathrm k)).$ This gives the desired $A_{\infty}$-bimodule structure on $V.$ The remaining conclusions are clear. 2) follows from 1) and Proposition \ref{prop:cat_res_implies_prop}. \end{proof} \begin{proof}[Proof of Theorem \ref{th:nilpotence_and_factorization}, part 1)] Let us denote by $A_f$ the $\mathrm k[x]\mhyphen A$-bimodule which equals $A$ as an $A$-module, and whose $\mathrm k[x]$-module structure comes from $f.$ Since the algebra $\mathrm k[x]$ is smooth, we have $A_f\in D_{\operatorname{perf}}(\mathrm k[x]\otimes A).$ Since $a\in H^0(A)$ is nilpotent, we have $\mathrm k[x^{\pm 1}]\Ltens{\mathrm k[x]}A_f=0.$ We conclude that $A_f$ is contained in the essential image of $D_{\operatorname{perf}}(\Lambda_1\otimes A)\hookrightarrow D_{\operatorname{perf}}(\mathrm k[x]\otimes A).$ Now, let us note that in $\operatorname{Ho}(\operatorname{dgcat}_{\mathrm k})$ we have $\operatorname{Perf}(\Lambda_1)\simeq \operatorname{colim}_{n}\operatorname{PsPerf}(\mathrm k[x]/x^n).$ It follows that we have an equivalence of triangulated categories $$D_{\operatorname{perf}}(\Lambda_1\otimes A)\simeq \operatorname{colim}_n D_{\operatorname{perf}}(\operatorname{PsPerf}(\mathrm k[x]/x^n)\otimes A).$$ Therefore, there exists $n>0$ such that $A_f$ is contained in the essential image of $D_{\operatorname{perf}}(\operatorname{PsPerf}(\mathrm k[x]/x^n)\otimes A).$ Let us denote by $\tilde{M}\in
D_{\operatorname{perf}}(\operatorname{PsPerf}(\mathrm k[x]/x^n)\otimes A)$ an object whose image is isomorphic to $A_f.$ We have a natural functor $$\Phi:\operatorname{PsPerf}(\mathrm k[x]/x^n)\otimes\operatorname{Perf}(A)\to {\mathbf R}\underline{\operatorname{Hom}}(\mathrm k[x]/x^n,\operatorname{Perf}(A)).$$ By construction, the $\mathrm k[x]/x^n\mhyphen A$-bimodule $\Phi(\tilde{M})$ is quasi-isomorphic to $A$ as an $A$-module. Choosing an isomorphism $\Phi(\tilde{M})_{\mid A}\xrightarrow{\sim} A$, we obtain the following composition morphism in $\operatorname{Ho}(\operatorname{dgalg}_{\mathrm k}):$ $$g:\mathrm k[x]/x^n\to {\mathbf R}\operatorname{End}_A(\Phi(\tilde{M}))\xrightarrow{\sim} A.$$ By construction, $H^0(g)(x)=a.$ Thus, $f$ factors through $\mathrm k[x]/x^n$ via $g.$ This proves part 1).\end{proof} The proof of part 2) of Theorem \ref{th:nilpotence_and_factorization} requires some computations which we split into several lemmas. First, we may replace the abstract algebra $A$ by the concrete DG algebra $C$ which was used in Section \ref{sec:disproving_version_for_smooth}. Recall that it is freely generated by the elements $t_1,$ $t_2$ with $|t_1|=0,$ $|t_2|=-1,$ and $dt_1=0,$ $dt_2=t_1^2.$ Indeed, choosing a representative $\tilde{a}\in A^0$ of $a,$ and an element $h\in A^{-1}$ such that $dh=\tilde{a}^2,$ we obtain a morphism of DG algebras $C\to A,$ $t_1\mapsto \tilde{a},$ $t_2\mapsto h.$ Thus, we may assume that $A=C$ and $a=\overline{t_1}.$ It will be very useful to introduce an additional $\mathbb{Z}$-grading on $C,$ which can be thought of as a $\mathbb{G}_m$-action.
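Since $C$ is free as a graded algebra, the identity $d^2=0$ reduces to $d(t_1^2)=0$ together with the graded Leibniz rule, and this is easy to verify mechanically. The following Python sketch (our own encoding, not part of the paper: monomials in $t_1,t_2$ are tuples, elements are dictionaries mapping monomials to integer coefficients) checks $d^2=0$ on all words of length up to four:

```python
from itertools import product

# Free DG algebra C = k<t1, t2> with |t1| = 0, |t2| = -1, d(t1) = 0, d(t2) = t1*t1.
DEG = {'t1': 0, 't2': -1}

def add(poly, word, coeff):
    """Accumulate a coefficient on the monomial `word` (a tuple of generators)."""
    poly[word] = poly.get(word, 0) + coeff
    if poly[word] == 0:
        del poly[word]

def d_word(word):
    """Differential of one monomial via the graded Leibniz rule:
    d(g1...gn) = sum_i (-1)^{|g1...g_{i-1}|} g1 ... d(g_i) ... gn."""
    out = {}
    for i, g in enumerate(word):
        if g == 't2':  # d(t1) = 0, d(t2) = t1*t1
            sign = -1 if sum(DEG[h] for h in word[:i]) % 2 else 1
            add(out, word[:i] + ('t1', 't1') + word[i + 1:], sign)
    return out

def d(poly):
    out = {}
    for word, c in poly.items():
        for w, cw in d_word(word).items():
            add(out, w, c * cw)
    return out

# d^2 = 0 on all words of length <= 4 (the Koszul signs make the terms cancel).
for n in range(1, 5):
    for word in product(('t1', 't2'), repeat=n):
        assert d(d({word: 1})) == {}
```

For instance, $d(t_2t_2)=t_1^2t_2-t_2t_1^2,$ and both summands map to $t_1^4$ with opposite signs.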
We will denote this grading by $\mathrm{w},$ putting $\mathrm{w}(t_1)=1,$ $\mathrm{w}(t_2)=2,$ and then extend by the rule $\mathrm{w}(uv)=\mathrm{w}(u)+\mathrm{w}(v).$ Clearly, the differential $d$ has degree zero with respect to $\mathrm{w}.$ We thus have a decomposition of $C$ as a complex: $C=\bigoplus\limits_{n\geq 0}C^{\bullet,n}.$ Let us define $\hat{C}:=\prod\limits_{n\geq 0}C^{\bullet,n}.$ This is also a DG algebra, and we have a map $C\to\hat{C}$. The homogeneous elements of degree $-m$ in $\hat{C}$ are just non-commutative power series in $t_1,t_2$ such that in each monomial there are exactly $m$ copies of $t_2.$ \begin{lemma}\label{lem:cohom_of_hat_C} The cohomology algebra $H^{\bullet}(\hat{C})$ is generated by the elements $u_1=\overline{t_1}$ and $u_2=\overline{[t_1,t_2]},$ with two relations: $u_1^2=0,$ $u_1u_2+u_2u_1=0.$\end{lemma} \begin{proof}Indeed, it is easy to see that the DG algebra $\hat{C}$ is isomorphic to the endomorphism DG algebra $\operatorname{End}^{A_{\infty}}_{\mathrm k[y]/y^3}(\mathrm k).$ Thus, we have an isomorphism of graded algebras $H^{\bullet}(\hat{C})\cong \mathrm{Ext}^{\bullet}_{\mathrm k[y]/y^3}(\mathrm k,\mathrm k).$ To compute this Ext-algebra, we take the semi-free resolution $P\to\mathrm k.$ The underlying graded $\mathrm k[y]/y^3$-module is defined by $$P^{gr}:=\bigoplus_{n=0}^{\infty}e_n\cdot \mathrm k[y]/y^3,$$ where $|e_{n}|=\lfloor \frac{n}{2}\rfloor.$ The differential is given by $d(e_0)=0,$ and $d(e_{2k+1})=e_{2k}y,$ $d(e_{2k+2})=e_{2k+1}y^2$ for $k\geq 0.$ The morphism $P\to \mathrm k$ sends $e_0$ to $1,$ and $e_n$ to $0$ for $n>0.$ Clearly, this is a quasi-isomorphism.
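Indeed, $d^2(e_{2k+2})=e_{2k}y^3=0,$ and $d^2(e_{2k+1})=e_{2k-1}y^3=0$ for $k\geq 1,$ so the relations close up. A minimal Python check of $d^2=0$ on a truncation of $P$ (our own encoding, for illustration only: the pair $(n,j)$ stands for $e_n y^j$ with $y^3=0$):

```python
def d(elt):
    """Differential of the resolution P -> k over k[y]/y^3.
    Elements are dicts {(n, j): coeff} encoding sums of e_n * y^j, 0 <= j <= 2.
    d(e_0) = 0, d(e_{2k+1}) = e_{2k} * y, d(e_{2k+2}) = e_{2k+1} * y^2."""
    out = {}
    for (n, j), c in elt.items():
        if n == 0:
            continue
        shift = 1 if n % 2 == 1 else 2      # right-multiply by y or y^2
        if j + shift <= 2:                  # y^3 = 0 truncates everything higher
            key = (n - 1, j + shift)
            out[key] = out.get(key, 0) + c
    return {k: v for k, v in out.items() if v}

# d^2 = 0 on the truncation e_0, ..., e_11 (d lowers n, so this is self-contained).
for n in range(12):
    for j in range(3):
        assert d(d({(n, j): 1})) == {}
```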
We see that $\mathrm{Ext}^{\bullet}_{\mathrm k[y]/y^3}(\mathrm k,\mathrm k)\cong \operatorname{Hom}_{\mathrm k[y]/y^3}^{\bullet}(P,\mathrm k),$ where the last complex has zero differential, and is equipped with the homogeneous basis $\{v_n\}_{n\geq 0},$ where $|v_n|=\lfloor\frac{n}{2}\rfloor,$ and $v_i(e_j)=\delta_{ij}.$ It is easy to see that the elements $v_1$ and $v_2$ correspond to the classes $u_1,u_2\in H^{\bullet}(\hat{C}),$ mentioned in the formulation of the lemma. Clearly, we have $u_1^2=0.$ It remains to show that $u_1u_2=-u_2u_1,$ and $u_1u_2^k\ne 0$ for $k\geq 0.$ Let us choose the lifts $\widetilde{v_n}\in\operatorname{End}_{\mathrm k[y]/y^3}(P)$ of $v_n,$ putting $$\widetilde{v_{2k}}(e_n)=\begin{cases}(-1)^{nk}e_{n-2k} & \text{for }n\geq 2k,\\ 0 & \text{otherwise};\end{cases}\quad \widetilde{v_{2k+1}}(e_n)=\begin{cases}e_{n-2k-1} & \text{for }n\text{ odd,}n\geq 2k+1,\\ (-1)^k e_{n-2k-1}y & \text{for }n\text{ even,}n\geq 2k+2,\\ 0 & \text{otherwise.}\end{cases}$$ It is easy to check that the $\widetilde{v_n}$ super-commute with the differential, and that $\widetilde{v_1}\widetilde{v_2}+\widetilde{v_2}\widetilde{v_1}=0,$ $\widetilde{v_1}(\widetilde{v_2})^k=(-1)^k\widetilde{v_{2k+1}}.$ This proves the lemma.\end{proof} \begin{lemma}\label{lem:q_iso_with_completion}The natural inclusion $C\to\hat{C}$ is a quasi-isomorphism.\end{lemma} \begin{proof}We already know that $\dim H^n(\hat{C})<\infty$ for all $n\in\mathbb{Z}.$ It remains to observe the following: for any infinite sequence of complexes of vector spaces ${\mathcal K}_{0}^{\bullet},{\mathcal K}_1^{\bullet},\dots$ such that $\dim H^n(\prod\limits_{i\geq 0}{\mathcal K}_i^{\bullet})<\infty$ for all $n\in\mathbb{Z},$ the morphism $\bigoplus\limits_{i\geq 0}{\mathcal K}_i^{\bullet}\to\prod\limits_{i\geq 0}{\mathcal K}_i^{\bullet}$ is a quasi-isomorphism.
Applying this observation to the complexes $C^{\bullet,n},$ we conclude the proof.\end{proof} We now construct a strictly unital $A_{\infty}$-morphism $\mathrm k[x]/x^6\to C,$ using obstruction theory. First, we introduce the weight grading ($\mathbb{G}_m$-action) on $\mathrm k[x]/x^6$ by putting $\mathrm{w}(x)=1.$ Our $A_{\infty}$-morphism will be compatible with the $\mathbb{G}_m$-actions, and its component $f_1$ is given by \begin{equation}\label{eq:formula_for_f_1} f_1(x^k)=t_1^k\text{ for }0\leq k\leq 5.\end{equation} Note that all the cohomology spaces $H^n(C)$ are $H^0(C)\mhyphen H^0(C)$-bimodules, hence also $\mathrm k[x]/x^6\mhyphen\mathrm k[x]/x^6$-bimodules (via $f_1$). Let us also note that for any $\mathrm k[x]/x^6\mhyphen\mathrm k[x]/x^6$-bimodule $M,$ equipped with the compatible $\mathbb{G}_m$-action, the Hochschild cohomology $HH^{\bullet}(\mathrm k[x]/x^6,M)$ also becomes bigraded; the second grading again corresponds to the $\mathbb{G}_m$-action. For a vector space $V$ equipped with a $\mathbb{G}_m$-action, $V=\bigoplus_{n\in \mathbb{Z}}V^n,$ we denote by $V(k)$ the same space with a twisted $\mathbb{G}_m$-action: $V(k)^n=V^{k+n}.$ \begin{lemma}\label{lem:computattion_of_HH_cohom}We have $HH^{2k+2}(\mathrm k[x]/x^6,H^{-2k}(C))\cong \mathrm k[\varepsilon](6)$ for $k\geq 0,$ and $HH^{2k+3}(\mathrm k[x]/x^6,H^{-2k-1}(C))\cong \mathrm k(4)$ ($\mathbb{G}_m$-equivariant isomorphisms).\end{lemma} \begin{proof}We have the following $\mathbb{G}_m$-equivariant resolution of the diagonal bimodule: $$\dots\xrightarrow{d_3}\mathrm k[x]/x^6\otimes \mathrm k[x]/x^6(-6)\xrightarrow{d_2} \mathrm k[x]/x^6\otimes \mathrm k[x]/x^6(-1)\xrightarrow{d_1}\mathrm k[x]/x^6\otimes \mathrm k[x]/x^6\xrightarrow{m}\mathrm k[x]/x^6,$$ where $d_{2k+1}=x\otimes 1-1\otimes x,$ and $d_{2k}=x^5\otimes 1+x^4\otimes x+\dots+1\otimes x^5.$ Further, by Lemmas \ref{lem:cohom_of_hat_C} and \ref{lem:q_iso_with_completion} we know the $\mathbb{G}_m$-equivariant $H^0(C)\mhyphen H^0(C)$-bimodules $H^n(C).$
Namely, $H^{-2k}(C)\cong\mathrm k[\varepsilon](-6k)$ (twisted diagonal bimodule), and $H^{-2k-1}(C)\cong (\mathrm k[\varepsilon])_{\sigma}(-6k-3)$ -- the twisted anti-diagonal bimodule. For the latter, the left and right $H^0(C)$-actions are given respectively by $\varepsilon\cdot 1=\varepsilon,$ $1\cdot \varepsilon=-\varepsilon.$ The result follows by an elementary computation.\end{proof} We are finally able to finish the proof of the theorem. \begin{proof}[Proof of Theorem \ref{th:nilpotence_and_factorization}, part 2).] As we already mentioned, we will construct (or rather show the existence of) a $\mathbb{G}_m$-equivariant strictly unital $A_{\infty}$-morphism $f:\mathrm k[x]/x^6\to C,$ where $f_1$ is given by \eqref{eq:formula_for_f_1}. Since $H^0(f_1)$ is a homomorphism, we can construct $f_2$ such that the required relation is satisfied. Suppose that we have already constructed $\mathbb{G}_m$-equivariant $f_1,\dots,f_n$ (where $n\geq 2$) satisfying all the relations for the $A_{\infty}$-morphism that involve only $f_1,\dots,f_n.$ We want to construct the $(n+1)$ components $f_1,\dots,f_{n-1},f_n',f_{n+1}$ (again, satisfying all the relevant relations) where only $f_n$ is possibly replaced by another map $f_n'.$ The standard obstruction theory tells us that the obstruction to this is given by a class in $HH^{n+1,0}(\mathrm k[x]/x^6,H^{1-n}(C))$ (the $\mathbb{G}_m$-invariant part). Applying Lemma \ref{lem:computattion_of_HH_cohom}, we see that this space vanishes. Thus, proceeding inductively we can construct the desired $A_{\infty}$-morphism $f.$ This proves the theorem.\end{proof}
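As an independent sanity check, the $2$-periodic bimodule resolution of $\mathrm k[x]/x^6$ from the proof of Lemma \ref{lem:computattion_of_HH_cohom} can be verified symbolically. In the Python/sympy sketch below (our own encoding: a simple tensor $a\otimes b$ is written as a polynomial in two commuting variables $x,z$, each truncated at the sixth power) the composites of consecutive differentials vanish:

```python
from sympy import symbols, expand, rem

# Differentials of the 2-periodic bimodule resolution of k[x]/x^6:
# a (x) b  <->  polynomial in x (left factor) and z (right factor).
x, z = symbols('x z')
d_odd = x - z                                      # x (x) 1 - 1 (x) x
d_even = sum(x**i * z**(5 - i) for i in range(6))  # x^5 (x) 1 + ... + 1 (x) x^5

def reduce_mod(p):
    """Reduce modulo x^6 and z^6 (both tensor factors are k[x]/x^6)."""
    p = expand(p)
    p = rem(p, x**6, x)
    p = rem(p, z**6, z)
    return expand(p)

# d_{2k+1} o d_{2k} and d_{2k} o d_{2k+1} are both multiplication by
# (x - z)(x^5 + x^4 z + ... + z^5) = x^6 - z^6 = 0 in k[x]/x^6 (x) k[x]/x^6.
assert expand(d_odd * d_even) == x**6 - z**6
assert reduce_mod(d_odd * d_even) == 0
```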
\section{Introduction} In a recent paper~\cite{Bordone:2017bld} we have proposed a model based on the flavor non-universal gauge group ${\rm PS}^3=[\mathrm{SU(4)}\times \mathrm{SU(2)_L }\times \mathrm{SU(2)_R}]^3$ as an interesting framework to describe the hints of lepton-flavor non-universality observed in $B$ meson decays, both in neutral currents~\cite{Aaij:2014ora,Aaij:2017vbb} and in charged currents~\cite{Lees:2013uzd,Aaij:2015yra,Hirose:2016wfn,Aaij:2017deq}. Besides the phenomenological success, the virtue of this model is the natural link between the pattern of ``anomalies'' observed so far and the hierarchical structure of quark and lepton mass matrices: both structures follow from the same dynamical breaking of the flavor symmetry present in the model. This, together with the unification of quarks and lepton quantum numbers \`a la Pati-Salam~\cite{Pati:1974yy}, makes the model quite interesting and worth being further investigated. The purpose of this paper is to analyze in more detail the rich low-energy phenomenology of the model, which presents several distinctive features with respect to other models proposed so far for a combined explanation of the two sets of anomalies. The link between the anomalies and Yukawa couplings in the ${\rm PS}^3$ model follows from an approximate $\mathrm{U(2)}^5$ flavor symmetry~\cite{Barbieri:2011ci,Barbieri:2012uh,Blankenburg:2012nx} that, as shown in a series of recent papers, provides a natural starting point to address this problem~\cite{Greljo:2015mma,Barbieri:2015yvd,Bordone:2017anc,Buttazzo:2017ixm}. Interestingly enough, in the ${\rm PS}^3$ model the $\mathrm{U(2)}^5$ flavor symmetry is an accidental symmetry of the gauge sector of the theory (below about $100$~TeV) and its breaking is controlled by the spontaneous symmetry breaking ${\rm PS}^3\to{\rm SM}$. 
The main TeV-scale mediator responsible for the $B$ anomalies is a vector leptoquark field, $U\sim(\mathbf{3},\mathbf{1})_{2/3}$, which has already been identified as an excellent single mediator for the anomalies (assuming pure left-handed couplings) in Ref.~\cite{Barbieri:2015yvd,Calibbi:2015kma,Buttazzo:2017ixm}, and has indeed been at the center of a series of explicit model-building attempts~\cite{Barbieri:2016las,DiLuzio:2017vat,Calibbi:2017qbu,Barbieri:2017tuq,Blanke:2018sro,Greljo:2018tuh}.\footnote{Interesting recent attempts to explain the anomalies not based on vector leptoquark mediators have been presented in Ref.~\cite{Marzocca:2018wcf,Greljo:2018ogz,Asadi:2018wea,Dorsner:2017ufx, Megias:2017vdg,Aloni:2017ixa,Descotes-Genon:2017ptp,Fraser:2018aqj,Camargo-Molina:2018cwu, Azatov:2018knx}.} The difference of the ${\rm PS}^3$ model with respect to these previous attempts is twofold: on the one hand, two other TeV-scale fields can mediate flavor-changing processes: a color octet and a $Z'$ (as also in~\cite{DiLuzio:2017vat}); on the other hand, all these TeV fields are not only coupled to left-handed currents, but also to right-handed currents. In this paper we present a systematic analysis of the low-energy phenomenology of the model. We focus mainly on the effects of the TeV-scale gauge mediators in processes involving the transition of the $b$ quark and $\tau$ lepton into lighter fermions, since they are the most directly connected to the anomalies. In particular, we show that if the anomalies were to be confirmed, the model would predict a rather characteristic pattern of correlations among these observables. Processes involving only the light families, such as those in $K$ and $D$ physics and $\mu\to e$ transitions, are controlled by subleading free parameters (more precisely subleading breaking terms of the $\mathrm{U(2)}^5$ symmetry) which are constrained neither by the anomalies nor by the Yukawa couplings and are therefore more model dependent. 
As far as these transitions are concerned, we investigate the consistency of the model and the constraints on these subleading effects arising from neutral meson mixing and $\mu\to e$ Lepton Flavor Violating (LFV) observables. The paper is organized as follows: in Section~\ref{sect:model} we summarize the key features of the model, focusing in particular on the flavor structure of the massive gauge bosons at the TeV scale. In Section~\ref{sect:LEFT} we briefly illustrate the procedure adopted to integrate out the heavy fields and build a corresponding low-energy effective theory. In Section~\ref{sect:pheno} we present a detailed analytical discussion of the most interesting observables, namely $\Delta F=2$ amplitudes, $b\to c \ell \nu$ decays, $b\to s \ell\ell$ decays, and LFV processes. The results of a global fit and a general discussion of the low-energy phenomenology are presented in Section~\ref{sect:Fit}. The results are summarized in the conclusions. A series of technical details about the model, the construction of the low-energy effective theory, and expressions for the observables are reported in the various appendices. \section{The ${\rm PS}^3$ model}\label{sect:model} In this section we briefly summarize the main features of the model, with particular attention to its flavor structure, which plays a key role in low-energy flavor-changing observables, and to the spectrum of exotic gauge bosons at the TeV scale. \subsection{High-scale dynamics} The gauge symmetry holding at high energies is $\mathrm{PS}^3\equiv\mathrm{PS}_1\times\mathrm{PS}_2\times\mathrm{PS}_3$, where $\mathrm{PS}_i = \mathrm{SU(4)}_i\times \mathrm{[SU(2)_L]}_i\times \mathrm{[SU(2)_R]}_i$. The fermion content is the same as in the SM plus three right-handed neutrinos, such that each fermion family is embedded in left- and right-handed multiplets of a given $\mathrm{PS}_i$ subgroup: $(\mathbf{4},\mathbf{2},\mathbf{1})_i$ and $(\mathbf{4},\mathbf{1},\mathbf{2})_i$.
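As an elementary consistency check of this embedding (plain arithmetic, spelled out here for convenience), the dimensions of $(\mathbf{4},\mathbf{2},\mathbf{1})_i+(\mathbf{4},\mathbf{1},\mathbf{2})_i$ match the $15$ SM Weyl fermions of one family plus one right-handed neutrino:

```python
# One family in Pati-Salam: (4,2,1) + (4,1,2) Weyl fermions.
ps_dof = 4 * 2 * 1 + 4 * 1 * 2
# SM Weyl fermions per family: q_L (3x2), u_R (3), d_R (3), l_L (2), e_R (1),
# plus one right-handed neutrino nu_R.
sm_dof = 3 * 2 + 3 + 3 + 2 + 1
assert ps_dof == sm_dof + 1 == 16
```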
At this level the index $i=1,2,3$ can be identified with the generation index. The SM gauge group is a subgroup of the diagonal group, $\mathrm{PS}_{\rm diag} = \mathrm{PS}_{1+2+3}$. The spontaneous symmetry breaking (SSB) $\mathrm{PS}^3 \to {\rm SM}$ occurs in a series of steps at different energy scales, with appropriate scalar fields acquiring non-vanishing vacuum expectation values (VEVs), as described in Ref.~\cite{Bordone:2017bld}. As far as low-energy physics is concerned, we can ignore what happens above the scale where the initial gauge group is spontaneously broken to $\mathrm{SM}_{1+2} \times \mathrm{PS}_3$. This SSB scale ($\Lambda_{12}$) is chosen sufficiently high to neglect the effect of the $d\ge 6$ effective operators generated at this scale, even for rare processes such as $K_L \to \mu e$ or $K$--$\bar K$ mixing. The key aspect of the $\mathrm{SM}_{1+2} \times \mathrm{PS}_3$ {\em local} symmetry is the corresponding accidental $\mathrm{U(2)}^5$ {\em global} flavor symmetry~\cite{Barbieri:2011ci,Blankenburg:2012nx} \be \mathrm{U(2)}^5 = \mathrm{U(2)}_{q}\times \mathrm{U(2)}_{\ell}\times \mathrm{U(2)}_{u}\times \mathrm{U(2)}_{d}\times \mathrm{U(2)}_{e}\,, \ee acting on the first two generations of SM fermions, in the limit where we ignore the scalar sector of the theory. The SSB $\mathrm{SM}_{1+2} \times \mathrm{PS}_3 \to \mathrm{SM}$ occurs below the scale $\Lambda_{23} = {\rm few} \times 10~{\rm TeV}$ via an appropriate set of scalar (link) fields acquiring a non-trivial VEV:\footnote{For simplicity, we classify the link fields according to their transformation properties under $[\mathrm{SU(2)_R}]_{1+2}$, rather than $\mathrm{[U(1)_Y]}_{1+2}$. 
We also changed notation for the link fields with respect to Ref.~\cite{Bordone:2017bld}, given that we focus only on the last step of the breaking chain.} \begin{align} \label{eq:linkfields} \begin{aligned} \Phi_L&\sim(\mathbf{1},\mathbf{2},\mathbf{1})_{1+2} \times(\mathbf{1},\mathbf{\bar 2},\mathbf{1})_3~, \qquad \Phi_R\sim(\mathbf{1},\mathbf{1},\mathbf{2})_{1+2}\times(\mathbf{1},\mathbf{1},\mathbf{\bar 2})_3~,\\ \Omega_1 & \sim(\mathbf{1},\mathbf{2},\mathbf{1})_{1+2} \times(\mathbf{\bar 4} ,\mathbf{\bar 2},\mathbf{1})_3~, \qquad \Omega_3 \sim(\mathbf{3},\mathbf{2},\mathbf{1})_{1+2} \times(\mathbf{\bar 4} ,\mathbf{\bar 2},\mathbf{1})_3~. \end{aligned} \end{align} The VEVs of these fields obey a hierarchical pattern, $\langle \Phi_{L,R} \rangle > \langle \Omega_{1,3} \rangle$, such that the heavy fields with masses proportional to $\langle \Phi_{L,R} \rangle = {\cal O}(10~{\rm TeV})$ can safely be decoupled due to their heavy mass and the $\mathrm{U(2)}^5$ flavor symmetry. The gauge bosons responsible for the flavor anomalies, and potentially relevant in many flavor observables, are those acquiring mass in the last step of the breaking chain, \be \mathrm{SU(4)}_3 \times \mathrm{SU(3)}_{1+2} \times \mathrm{SU(2)_L} \times \mathrm{ U(1)^\prime} \to~{\rm SM}~, \label{eq:4321} \ee triggered by $\langle \Omega_{1,3} \rangle \not=0$ around the TeV scale. The 15 broken generators give rise to the following massive spin-1 fields: a leptoquark, $U\sim(\mathbf{3},\mathbf{1})_{2/3}$, a coloron, $G^\prime\sim(\mathbf{8},\mathbf{1})_0$, and a $Z^\prime\sim(\mathbf{1},\mathbf{1})_0$. As we discuss below, these are not the only TeV-scale fields: the spectrum contains additional scalars and fermions with masses of the order of a few TeV. However, these play no direct role in low-energy observables.
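The counting of the $15$ broken generators in Eq.~\eqref{eq:4321} is straightforward bookkeeping; the lines below (plain arithmetic, with the decomposition into $U$, $G^\prime$ and $Z^\prime$ degrees of freedom as stated above) make it explicit:

```python
def dim_su(n):
    """Dimension of SU(N)."""
    return n * n - 1

# SU(4)_3 x SU(3)_{1+2} x SU(2)_L x U(1)' -> SM: SU(2)_L is untouched, so count
# generators of SU(4) x SU(3) x U(1)' minus those of the unbroken SU(3)_c x U(1)_Y.
broken = (dim_su(4) + dim_su(3) + 1) - (dim_su(3) + 1)
# Massive spin-1 spectrum: complex leptoquark U (3 + 3bar = 6 real generators),
# coloron G' (8) and Z' (1).
assert broken == 15 == 6 + 8 + 1
```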
Finally, the breaking of the electroweak symmetry takes place through the VEV of four SM-like Higgs fields (or two fields transforming as bi-doublets under $\mathrm{SU(2)_L}\times \mathrm{SU(2)_R}$) that, before the breaking of $\mathrm{PS}_3$, are embedded in the following two scalars: \begin{align} H_1\sim(\mathbf{1},\mathbf{2},\mathbf{\bar 2})_3~, \qquad H_{15}\sim(\mathbf{15},\mathbf{2},\mathbf{\bar 2})_3\,, \end{align} with $\langle H_{15} \rangle$ aligned along the $T^{15}$ generator of $\mathrm{SU(4)_3}$. Being singlets of $\mathrm{SM}_{1+2}$, these fields allow us to extend the $\mathrm{U(2)^5}$ symmetry also to the Yukawa sector, which remains exact at the level of renormalizable operators. \subsection{Yukawa couplings and breaking of the $\mathrm{U(2)}^5$ flavor symmetry} \label{sect:MainYuk} The Yukawa couplings for the light generations and, more generally, the breaking of the $\mathrm{U(2)}^5$ symmetry, arise from higher-dimensional operators involving the link fields $\Omega_{1,3}$ and $\Phi_{L,R}$, generated at the scale $\Lambda_{23}$~\cite{Bordone:2017bld}. Taking into account the effect of operators up to $d=7$, quark and charged-lepton Yukawa couplings assume the following general parametric structure \be Y_f \sim \begin{pmatrix} \frac{\langle\Phi_L\rangle \langle\Phi_{R}^\dagger \rangle}{\Lambda_{23}^2}& \frac{\langle\Omega_{a} \rangle}{\Lambda_{23}}\\ \frac{\langle\Phi_L\rangle \langle\Phi_{R}^\dagger \rangle \langle\Omega_{a}\rangle}{\Lambda_{23}^3} \phantom{\Big]^{1}} & y^f_{3} \end{pmatrix}~, \label{eq:Y1} \ee with $a=3\,(1)$ for quarks (leptons). Here, the 11 (12) entry of this matrix should be understood as a $2\times2$ matrix (2-component vector) in flavor space (see appendix~\ref{app:Yukawa}). 
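For orientation, the parametric structure in Eq.~(\ref{eq:Y1}) automatically yields 2--3 mixing of CKM size. A rough illustrative estimate (assuming the benchmark values $\langle\Omega_a\rangle\sim 1~{\rm TeV}$ and $\Lambda_{23}\sim 20~{\rm TeV}$, and omitting $\mathcal{O}(1)$ coefficients) gives

```latex
% Illustrative estimate; benchmark VEV and scale values assumed, O(1) coefficients omitted.
\frac{(Y_f)_{i3}}{y^f_3}\;\sim\;\frac{\langle\Omega_a\rangle}{\Lambda_{23}}
  \;\sim\;\frac{1~\text{TeV}}{20~\text{TeV}}\;=\;0.05\;=\;\mathcal{O}(|V_{ts}|)\,,
```

i.e.~for natural values of the VEVs the left-handed light--heavy mixing comes out of the same size as $|V_{ts}|$, consistent with the natural sizes quoted in Table~\ref{tab:mixingList}.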
The only entries in Eq.~(\ref{eq:Y1}) induced by renormalizable interactions below the scale $\Lambda_{23}$ are the Yukawa couplings for the third generation, which arise from \bea \begin{aligned} \mathcal{L}_{\rm Yuk} &= y_{1} \bar \Psi^3_L H_1 \Psi^3_R + y_{15} \bar \Psi^3_L H_{15} \Psi^3_R + y^\prime_{1} \bar \Psi^3_L H^c_1 \Psi^3_R + y^\prime_{15} \bar \Psi^3_L H^c_{15} \Psi^3_R + {\rm h.c.}~, \label{eq:d4Yuk} \end{aligned} \eea where $(\Psi^3_{L(R)})^\intercal = [(q_{L(R)}^3)^\intercal, (\ell_{L(R)}^3)^\intercal]$ denote the PS multiplets of third-generation fermions. Here $(q_R^3)^\intercal=(t_R, b_R)$, $(\ell_R^3)^\intercal=(\tau_R, \nu^\tau_R)$, and $q^3_L$ and $\ell^3_L$ indicate the SM left-handed doublets.\footnote{In the absence of tuning, this Lagrangian predicts $y_t$ and $y_{\nu_\tau}$ to be of similar size. As pointed out in~\cite{Greljo:2018tuh}, this prediction can be made compatible with realistic light-neutrino masses by means of an appropriate inverse seesaw mechanism.} The $y^f_{3}$ couplings in Eq.~(\ref{eq:Y1}) are combinations of the $y^{(\prime)}_{1(15)}$ weighted by the VEVs of $H_1$ and $H_{15}$ normalised to $v=246~\text{GeV}$. The leading terms controlling the left-handed mixing between third and second generations are generated by the following dimension-five operators \be \mathcal{L}_{\rm \Omega}^{d=5} = \frac{ y_{q3} }{\Lambda_{23}}\, \bar q^2_L H_1 \Omega_3 \Psi^3_R + \frac{ y_{\ell3} }{\Lambda_{23}}\, \bar \ell^{\,2}_L H_1 \Omega_1 \Psi^3_R + \frac{y_{q3}^\prime}{\Lambda_{23}} \bar q^2_L H_1^c \Omega_3 \Psi^3_R + \frac{ y_{\ell3}^\prime }{\Lambda_{23}}\, \bar \ell^{\,2}_L H_1^c \Omega_1 \Psi^3_R + {\rm h.c.} \label{eq:d5spurions} \ee The upper index on the left-handed doublets denotes the second family (in the interaction basis) that, by construction, is defined as the fermion combination appearing in these operators (see appendix~\ref{app:Yukawa}). 
Similarly, operators of $d=6$ and $7$ also involving the link fields $\Phi_{L,R}$ are responsible for the subleading terms in (\ref{eq:Y1}). The dynamical origin of these higher-dimensional operators is not relevant for the analysis of low-energy phenomenology. The only important point is the $\mathrm{U(2)}^5$ symmetry breaking structure they induce. This is highlighted by rewriting each Yukawa matrix in terms of three normalized $\mathrm{U(2)}^5$ breaking spurions $\{V_L$, $V_R$, $X_{LR}\}$, with hierarchically ordered coefficients ($|\epsilon^f_R| \ll |\epsilon^f_{LR}| \ll |\epsilon^f_L| \ll 1$): \be Y_f = y_3^f \begin{pmatrix} \epsilon^f_{LR}\, X_{LR} & \epsilon^{f}_L\, V_L \\[2pt] \epsilon^f_{R}\, V^\intercal_R & 1 \end{pmatrix}~. \label{eq:Y2} \ee Here $V_L$ and $V_R$ are unit vectors in the $\mathrm{U(2)}_{q+\ell}$ and $\mathrm{U(2)}_{u+d+e}$ space, while $X_{LR}$ is a bi-fundamental spurion of $\mathrm{U(2)}^5$. \\ We define the interaction basis for the left-handed doublets as the basis where the second generation is identified by the direction of the leading spurion $V_{L}$ in flavor space (i.e.~in this basis $V_{L}$ is aligned to the second generation). We move from the interaction to the mass basis by means of the rotations \be L_u^\dagger Y_u R_u = {\rm diag}(y_u,y_c,y_t)~ ,\quad L_d^\dagger Y_d R_d = {\rm diag}(y_d,y_s,y_b)~, \quad L_e^\dagger Y_e R_e = {\rm diag}(y_e,y_\mu,y_\tau)~, \label{eq:rotations} \ee where the $y_i$ are real and positive and $V_{\rm CKM}=L_u^\dagger L_d$. The left-handed rotation matrices, generated by the leading spurions, play a prominent role in the phenomenological analysis. As discussed in detail in appendix~\ref{app:Yukawa}, the known structure of the SM Yukawa couplings determines only some of the (complex) coefficients $\epsilon^f_{L,R,LR}$.
In particular, three real parameters and two phases in the quark sector can be expressed in terms of CKM matrix elements, leaving us with the mixing angles and phases listed in Table~\ref{tab:mixingList}. In the left-handed sector we end up with three mixing angles ($s_b$, $s_\tau$, $s_{e}$) and four CP-violating phases, out of which only two play a relevant role ($\phi_b$ and $\alpha_d$). The other two phases ($\phi_\tau$ and $\alpha_e$) are set to zero for simplicity. The left-handed mixing angles, which are nothing but the magnitudes of the $\epsilon^f_L$ parameters in the down and charged-lepton sectors, are expected to be small, with their natural size set by $|V_{ts}|$. The subleading right-handed rotations in the lepton sector, controlled by the parameter $\epsilon^e_R$, play an important role in the rare $B_s\to\mu^+\mu^-$ decay and in LFV transitions. Right-handed rotations in the quark sector, controlled by $\epsilon_R^d$ and $\epsilon_R^u$, do not significantly affect the phenomenology and are thus neglected in the following.
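To make the expected sizes concrete, consider an illustrative numerical estimate of the right-handed lepton mixing (taking $s_\tau \sim |V_{ts}| \approx 0.04$ as an assumed benchmark):

```latex
% Illustrative estimate; s_tau ~ |V_ts| assumed, lepton mass ratio from PDG values.
|\epsilon_R^e| \;=\; \mathcal{O}\!\left(\frac{m_\mu}{m_\tau}\,s_\tau\right)
  \;\approx\; 0.06 \times 0.04 \;\approx\; 2\times 10^{-3}\,,
```

a small number, but one that is nonetheless phenomenologically relevant for $B_s\to\mu^+\mu^-$ and LFV transitions.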
\begin{table}[t] \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{ccccc} \toprule & \multicolumn{3}{c}{Parameters} & Natural size\\ \midrule \multirow{2}{*}{\centering Left-handed mixing} & $\epsilon_{L}^{u,d},\,\epsilon_{LR}^{u,d}$ & $\stackrel{\rm CKM}{\longrightarrow}$ & $s_b$, $\phi_b$, $\alpha_d$ & $s_b = \mathcal{O}(|V_{ts}|)$\\ &$\epsilon_{L}^{e}$, $\epsilon_{LR}^e$ & $\longrightarrow$ & $s_\tau$, $s_e$, $\phi_\tau$, $\alpha_e$ & $s_\tau = \mathcal{O}(|V_{ts}|)$, $s_e \ll s_{\tau}$\\ \midrule \multirow{2}{*}{\centering Right-handed mixing} & \multicolumn{3}{c}{$\epsilon_R^d$, $\epsilon_R^u$} & $|\epsilon_R^d|=\mathcal{O}(\frac{m_{s}}{m_{b}} s_{b})$, $|\epsilon_R^u|=\mathcal{O}(\frac{m_{c}}{m_{t}} |V_{cb}| )$ \\ & \multicolumn{3}{c}{$\epsilon_R^e$} & $|\epsilon_R^e|=\mathcal{O}(\frac{m_{\mu}}{m_{\tau}} s_{\tau})$ \\ \bottomrule \end{tabular} \caption{Flavor mixing parameters arising from the $\mathrm{U(2)}^5$-breaking spurions in the Yukawa sector. The mixing parameters in the left-handed sector ($\epsilon_{L,LR}^f$) are parameterized in terms of mixing angles and phases after removing terms fixed by known CKM elements. The parameters $\phi_\tau$, $\alpha_e$, and $\epsilon_R^{u,d}$ are listed for completeness but are set to zero in the phenomenological analysis since they play a marginal role (see main text). }\label{tab:mixingList} \end{table} \subsubsection{Additional $\mathrm{U(2)}^5$ breaking from non-Yukawa operators}\label{sec:noYukBreak} A further important aspect for the analysis of low-energy physics is that the $\mathrm{U(2)}^5$ breaking is not confined to the Yukawa sector, but appears also in other effective operators.
Among them, those with phenomenological implications at low energies are the $d=6$ operators bilinear in the light fermion fields and in the $\Omega_{1,3}$ link fields: \bea \begin{aligned} \mathcal{L}_{\rm \Omega}^{d=6} &= \frac{ c_{q \ell} }{\Lambda^2_{23}} (X_{q\ell})_{ij} \, {\rm Tr}[ i \Omega_1^\dagger D^\mu \Omega_3 ] ( \bar q^i_L \gamma_\mu \ell^j_L) + \frac{ c_{qq} }{\Lambda^2_{23}} (X_{qq})_{ij} \, {\rm Tr}[ i \Omega_3^\dagger D^\mu \Omega_3] ( \bar q^i_L \gamma_\mu q^j_L) \\ &+ \frac{ c_{\ell\ell} }{\Lambda^2_{23}} (X_{\ell\ell})_{ij} {\rm Tr}[ i \Omega_1^\dagger D^\mu \Omega_1] ( \bar \ell^i_L \gamma_\mu \ell^j_L)~+~{\rm h.c.}\,, \end{aligned} \label{eq:d6eps} \eea (with $i,j=1,2$). These operators introduce three new bi-fundamental spurions of $\mathrm{U(2)}^5$, $X_{q\ell} \sim 2_{q}\times\bar 2_\ell$, $X_{\ell\ell} \sim 2_{\ell} \times \bar 2_\ell$, and $X_{qq}\sim 2_{q}\times\bar 2_{q}$ that, as shown below, modify the couplings of the TeV-scale vectors to the SM fermions. In order to simplify the phenomenological discussion, it is convenient to define a {\em minimal breaking structure} for these additional spurions \be \left. X_{qq} \right|_{\rm min} = \mathbb{1}~, \qquad \left. X_{\ell\ell} \right|_{\rm min} = \left. X_{q\ell} \right|_{\rm min} = {\rm diag}(0,1)~, \label{eq:minU2} \ee corresponding to $\mathrm{U(2)}^5$ symmetric couplings for quark currents, and breaking terms aligned to those appearing in the Yukawa couplings for lepton currents. As we show in Section~\ref{sect:pheno}, this minimal breaking structure helps evade the tight bounds from neutral-meson mixing while maximizing the impact on the $b\to s\ell\ell$ anomalies.
In the limit where we neglect deviations from this structure, the relevant parameters controlling the breaking of $\mathrm{U(2)}^5$ in the coupling of the TeV-scale leptoquark and $Z^{\prime}$ are \be \epsilon_U ~=~ c_{q \ell}\, \frac{ \omega_1 \omega_3 }{\Lambda^2_{23} }\,, \qquad \epsilon_\ell ~=~ c_{\ell \ell}\, \frac{ \omega_1^{2}}{\Lambda^2_{23}}\,, \ee with $\omega_{1,3}$ defined in (\ref{eq:omegadef}). For completeness we also mention the $\mathrm{U(2)}^5$-preserving parameter \be \epsilon_q ~=~ c_{qq}\, \frac{\omega_3^{2}}{\Lambda^2_{23} }\,, \ee which however does not play any role in the phenomenological analysis. Deviations from the minimal $\mathrm{U(2)}^5$ breaking structure of Eq.~(\ref{eq:minU2}) are possible; since they are unavoidably generated only via the product of two or more spurions, they are expected to be small. Leading and sub-leading $\mathrm{U(2)}^5$-breaking parameters are summarized in Table~\ref{tab:epsilonList}, together with their expected relative size (see Eq.~(\ref{eq:DeltaU}) for the definition of the subleading terms). Analogous sub-leading $\mathrm{U(2)}_\ell$ breaking parameters could also be present; however, their effect is irrelevant and thus we do not consider them here. In appendix~\ref{sect:Omegaeffops} we present an explicit dynamical realization of $\mathcal{L}_{\rm \Omega}^{d=5}$ and $\mathcal{L}_{\rm \Omega}^{d=6}$ in terms of heavy fields to be integrated out. In particular, we show how these operators and the minimal breaking structure can be generated by integrating out an appropriate set of TeV-scale vector-like fermions with renormalizable interactions at the scale of unbroken $\mathrm{SM}_{1+2} \times \mathrm{PS}_3$. A discussion of possible deviations from the minimal breaking structure in Eq.~(\ref{eq:minU2}) is also presented.
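As a simple consistency check of the expected size of $\epsilon_U$ (illustrative benchmark values $c_{q\ell}\sim 1$, $\omega_{1,3}\sim 1~{\rm TeV}$, and $\Lambda_{23}\sim 20~{\rm TeV}$ are assumed), one finds

```latex
% Illustrative estimate with assumed benchmark values.
\epsilon_U \;=\; c_{q\ell}\,\frac{\omega_1\omega_3}{\Lambda_{23}^2}
 \;\sim\; \frac{(1~\text{TeV})^2}{(20~\text{TeV})^2} \;=\; 2.5\times 10^{-3}\,,
```

which falls naturally in the range $10^{-3} \lesssim |\epsilon_U| \lesssim 10^{-2}$ for $\Lambda_{23} = {\rm few} \times 10~{\rm TeV}$.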
\begin{table}[t] \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{ccccc} \toprule Breaking & Leading & Sub-leading & Sub-sub-leading & Natural size \\ \midrule $\mathrm{U(2)}_q\times\mathrm{U(2)}_\ell$ & $\epsilon_{U}$ & $\tilde\epsilon^d_U$, $\tilde\epsilon^e_U$ & $\Delta \epsilon_{U}$ & $\tilde\epsilon^{d,e}_U = {\cal O}(\epsilon_U s_{d,e}),\,\Delta\epsilon_U = {\cal O}(\epsilon_U s_e s_d)$ \\ $\mathrm{U(2)}_{q}$ &- & $\tilde \epsilon_{q}$ & $\Delta \epsilon_{q}$ & $ \tilde\epsilon_q = {\cal O}(\epsilon_U s_\tau ), \,\Delta\epsilon_q = {\cal O}(\epsilon^2_U)$ \\ $\mathrm{U(2)}_{\ell}$ & $\epsilon_{\ell}$ & - & - & $\epsilon_{\ell} = {\cal O}(\epsilon_U)$ \\ \bottomrule \end{tabular} \caption{$\mathrm{U(2)}^5$ breaking parameters arising from non-Yukawa operators. Only $\epsilon_U$ is used as free parameter in the fit. All the subleading terms are set to zero after checking that bounds set by present data are less stringent than the expected natural size. } \label{tab:epsilonList} \end{table} In principle, $d=6$ operators involving right-handed light fermion fields could also be relevant at low energies. However, it is easy to conceive ultraviolet completions where such operators are not generated (or are extremely suppressed), as in the example presented in appendix~\ref{sect:Omegaeffops}. As argued in Ref.~\cite{Bordone:2017bld} (see discussion in Sec.~II.B of this reference), all other $\mathrm{U(2)}$-violating operators at $d=6$ either contribute to the Yukawa couplings or have negligible impact at low energies. In particular, given the connection of $\mathrm{U(2)}$-violating terms with the link fields, $\mathrm{U(2)}^5$-violating four-fermion operators are forbidden in our model. \subsection{The model at the TeV scale}\label{sec:modelTeV} Here we focus on the last step of the breaking chain before reaching the SM, namely Eq.~(\ref{eq:4321}).
With an obvious notation, we denote the gauge couplings before the symmetry breaking by $g_{i}$, with $i=1\ldots 4$, and the gauge fields of $\mathrm{SU(4)_3}$, $\mathrm{SU(3)_{1+2}}$, and $\mathrm{U(1)^\prime}$ by $H^a_\mu$, $A^a_\mu$, and $B^\prime_\mu$, respectively. The symmetry breaking in Eq.~(\ref{eq:4321}) occurs via the VEVs of $\Omega_{1,3}$ along the SM direction, which we normalize as \begin{align} \langle\Omega^\intercal_{3}\rangle&=\frac{1}{\sqrt{2}} \begin{pmatrix} \omega_3 & 0 & 0\\ 0 & \omega_3 & 0\\ 0 & 0 & \omega_3\\ 0 & 0 & 0\\ \end{pmatrix} \,,\quad \langle\Omega^\intercal_{1}\rangle=\frac{1}{\sqrt{2}} \begin{pmatrix} 0\\ 0\\ 0\\ \omega_1\\ \end{pmatrix} \,, \label{eq:omegadef} \end{align} with $\omega_{1,3} = \mathcal{O}(\mathrm{TeV})$. These scalar fields can be decomposed under the unbroken SM subgroup as $\Omega_3\sim(\textbf{8},\textbf{1})_0\oplus(\textbf{1},\textbf{1})_0\oplus(\textbf{3},\textbf{1})_{2/3}$ and $\Omega_1\sim(\mathbf{\bar 3},\textbf{1})_{-2/3}\oplus(\textbf{1},\textbf{1})_0$. As a result, after removing the Goldstones, we end up with a real color octet, one real and one complex singlet, and a complex leptoquark. The gauge spectrum, which coincides with the one originally proposed in Ref.~\cite{DiLuzio:2017vat}, contains the following massive fields \bea \begin{aligned} & U_\mu^{1,2,3}=\frac{1}{\sqrt{2}}\left(H_{\mu}^{9,11,13}-iH_{\mu}^{10,12,14}\right)~, \quad G_\mu^{\prime\, a} =\frac{1}{ \sqrt{ g_4^2+g_3^2 } } \left( g_3 A_{\mu}^a- g_4 H_{\mu}^a \right)~, \\ & Z^\prime_\mu = \frac{1}{ \sqrt{g_4^2 + \frac{2}{3}\, g_1^2 } } \left( g_4 H_{\mu}^{15}- \sqrt{\frac{2}{3}} \,g_1 B^\prime_\mu \right)~, \end{aligned} \eea with masses \be M_{U}=\frac{g_4}{2}\sqrt{\omega_1^2+\omega_3^2}\,, \quad M_{G^\prime}= \sqrt{ \frac{ g_4^2+g_3^2 }{2} } \, \omega_3 \,,\quad M_{Z^\prime}= \frac{1}{2} \sqrt{ \frac{3}{2} g_4^2+ g_1^2} \sqrt{\omega_1^2+\frac{\omega_3^2}{3}}\,.
\ee The combinations orthogonal to $G_\mu^{\prime\, a}$ and $ Z^\prime_\mu$ are the SM gauge fields $G_\mu^a$ and $ B_\mu$, whose couplings are $g_c=g_3 g_4/\sqrt{ g_4^2+g_3^2 } $ and $g_Y= g_1 g_4 / \sqrt{g_4^2 + \frac{2}{3}\, g_1^2 }$. For later convenience, we introduce the effective couplings \be g_U \equiv g_4~, \qquad g_{G^\prime} \equiv \sqrt{g_U^2-g_c^2}~, \qquad g_{Z^\prime}\equiv\frac{1}{2\sqrt{6}}\,\sqrt{g_U^2-\frac{2}{3}g_Y^2}\,, \ee that control the strength of the interactions with third-generation fermions. Note that in the limit $g_4\gg g_3$ (hence $g_U \gg g_c$), one has $g_U\approx g_{G^\prime}\approx 2\sqrt{6}\,g_{Z^\prime}$. The interactions of the heavy gauge bosons with SM fermions (and right-handed neutrinos) are described by the following Lagrangian \be {\cal L}_{\rm int} ~\supset~ \frac{g_U}{\sqrt{2}} \left( U_\mu J_U^\mu+{\rm h.c.}\right) -g_{G^\prime}\, G^{\prime \,a}_\mu J_{G^\prime}^{\mu\,a} -g_{Z^\prime}\, Z^\prime_\mu J_{Z^\prime}^\mu \,, \ee where \bea \begin{aligned} J_U^\mu & \supset \overline{q}_L N^L_U \gamma^\mu \ell_L + \overline{u}_R N^R_U \gamma_\mu \nu_R +\overline{d}_R N^R_U \gamma_\mu e_R~, \\ J_{G^\prime}^{\mu\,a} & \supset \overline{q}_L N^L_{G^\prime} \gamma^\mu T^a q_L + \overline{u}_R N^R_{G^\prime} \gamma^\mu T^a u_R +\overline{d}_R N^R_{G^\prime} \gamma^\mu T^a d_R~, \\ J_{Z^\prime}^\mu &\supset 3\,\overline{\ell}_L N^\ell_{Z^\prime} \gamma^\mu \ell_L + 3\, \overline{\nu}_R N^\nu_{Z^\prime} \gamma^\mu \nu_R -\overline{q}_L N^q_{Z^\prime} \gamma^\mu q_L \\ &\quad + 3\, \overline{e}_R N^e_{Z^\prime} \gamma^\mu e_R -\overline{u}_R N^u_{Z^\prime} \gamma^\mu u_R -\overline{d}_R N^d_{Z^\prime} \gamma^\mu d_R~, \end{aligned} \label{eq:UGZcurr0} \eea and the $N$'s are $3\times3$ matrices in flavor space. 
In the absence of $\mathrm{U(2)}^5$ breaking, these matrices assume the following form in the interaction basis \begin{align} \begin{aligned} N^{L,R}_{U}&= N_U \equiv \mathrm{diag}\left(0,0,1\right)~, \\ N_{Z^\prime}^\ell&= N_{Z^\prime}^q = N_{Z^\prime} \equiv \mathrm{diag}\left(-\frac{2}{3} \left(\frac{g_1}{g_4}\right)^2,-\frac{2}{3} \left(\frac{g_1}{g_4}\right)^2,1\right)~,\\ N_{G^\prime}^{L,R} &=N_{G^\prime} \equiv \mathrm{diag}\left(-\left(\frac{g_3}{g_4}\right)^2,-\left(\frac{g_3}{g_4}\right)^2,1\right)~, \\ N_{Z^\prime}^{\nu(e)}&=N_{Z^\prime} \pm \frac{2}{3} \left(\frac{g_1}{g_4}\right)^2\,\mathbb{1}~, \qquad\qquad N_{Z^\prime}^{u(d)}=N_{Z^\prime}\mp 2 \left(\frac{g_1}{g_4}\right)^2 \mathbb{1}~. \end{aligned} \end{align} The inclusion of the effective operators of $\mathcal{L}_{\rm \Omega}^{d=6}$ in Eq.~(\ref{eq:d6eps}) modifies these flavor couplings into \begin{align}\label{eq:CoupMod} \begin{aligned} & N^L_{U}\to \begin{pmatrix} \epsilon_U X_{ql} & 0 \\ 0 & 1 \end{pmatrix}~, \qquad N_{Z^\prime}^\ell \to N_{Z^\prime} +\begin{pmatrix} \epsilon_{\ell} X_{\ell\ell} & 0 \\ 0 & 0 \end{pmatrix}~, \\ & N_{Z^\prime}^q (N^L_{G^\prime}) \to N_{Z^\prime} (N_{G^\prime}) +\begin{pmatrix} \epsilon_q X_{qq} & 0 \\ 0 & 0 \end{pmatrix}~. \end{aligned} \end{align} As discussed in appendix~\ref{sect:Omegaeffops}, the natural size for the $\epsilon_{\ell,q,U}$ parameters is $10^{-3} \stackrel{<}{_\sim} | \epsilon_{\ell,q,U} | \stackrel{<}{_\sim} 10^{-2}$. In the limit where we adopt the minimal breaking structure in Eq.~(\ref{eq:minU2}) the $Z^\prime$ and $G^\prime$ couplings to quarks remain $\mathrm{ U(2)_q}$ symmetric. Additional modifications to the couplings in Eq.~\eqref{eq:CoupMod} arise when considering deviations from the minimal breaking structure (see Table~\ref{tab:epsilonList}). In this case one finds \bea N^L_{G^\prime} (N_{Z^\prime}^q) &\to& \left. 
N^L_{G^\prime} (N_{Z^\prime}^q) \right|_{\mathrm{U(2)_q-symm}} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & \Delta\epsilon_q & \tilde\epsilon_q \\ 0 & \tilde\epsilon^{\,*}_q & 0 \end{pmatrix}, \label{eq:DeltaU} \\ N^L_{U} &\to& \begin{pmatrix} \Delta\epsilon_U & \tilde\epsilon^d_U & 0 \\ \tilde\epsilon^{e}_U & \epsilon_U & 0 \\ 0 & 0 & 1 \end{pmatrix}. \no \eea These subleading effects are especially relevant in two cases: i)~$\mathrm{U(2)_{q}}$-violating terms in the $Z^\prime$ and $G^\prime$ couplings to quarks, which are severely constrained by $\Delta F=2$ amplitudes; ii)~non-vanishing entries of the $U$ couplings involving the first family, which receive important constraints from $K_L \to \mu e$. When discussing low-energy observables, the heavy vectors are integrated out and the overall strength of their interactions is controlled by three effective Fermi-like couplings \be C_U \equiv \frac{g_U^2v^2}{4M_U^2} = \frac{v^2}{ \omega_1^2 + \omega_3^2}~, \qquad C_{G^\prime} \equiv \frac{g_{G^\prime}^2 v^2}{4M_{G^\prime}^2}~, \qquad C_{Z^\prime} \equiv \frac{g_{Z^\prime}^2v^2}{4M_{Z^\prime}^2}~, \label{eq:effFermi} \ee which span a limited range depending on the values of $\omega_1$ and $\omega_3$ and, to a smaller extent, $g_U$. These effective couplings (or, more precisely, $\omega_1$ and $\omega_3$), together with the flavor parameters listed in Tables~\ref{tab:mixingList} and~\ref{tab:epsilonList}, are the free parameters used in the phenomenological analysis of the low-energy observables.
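To give a sense of the scales involved, consider a purely illustrative benchmark with $\omega_1=\omega_3=1~{\rm TeV}$ and $g_4=3$ (assumed values, not a fit result):

```latex
% Illustrative benchmark: omega_1 = omega_3 = 1 TeV, g_4 = 3 assumed; v = 246 GeV.
C_U = \frac{v^2}{\omega_1^2+\omega_3^2}
    \approx \frac{(0.246~\text{TeV})^2}{2~\text{TeV}^2} \approx 0.03\,,
\qquad
M_U = \frac{g_4}{2}\sqrt{\omega_1^2+\omega_3^2} \approx 2.1~\text{TeV}\,,
```

with $C_{G^\prime}$ and $C_{Z^\prime}$ of comparable or smaller size at the same benchmark point.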
\section{Construction of the low-energy EFT}\label{sect:LEFT} The construction of the EFT relevant for low-energy phenomenology occurs in three steps: i)~we integrate out the TeV fields at the tree-level, matching the theory into the so-called SM effective field theory (SMEFT), for which we adopt the Warsaw operator basis~\cite{Grzadkowski:2010es}; ii)~the SMEFT operators are evolved down to the electroweak scale using the one-loop Renormalization Group (RG) equations in Ref.~\cite{Jenkins:2013wua,Jenkins:2013zja,Alonso:2013hga}. At this point, all the ingredients necessary to check possible modifications of the on-shell $W$ and $Z$ couplings are available. For all the other observables a third step is needed: iii)~the heavy SM fields are integrated out and the theory is matched into a low-energy effective field theory (LEFT) containing only light SM fields~\cite{Jenkins:2017jig}. The key points of these three steps are briefly illustrated below. \subsection{Matching heavy gauge boson contributions to the SMEFT}\label{sec:Model2SMEFT} Moving from the interaction basis to the quark down-basis, defined in~\eqref{eq:down_basis}, and the mass-eigenstate basis of charged leptons, the currents in Eq.~(\ref{eq:UGZcurr0}) assume the form \begin{align} \begin{aligned} J_U^\mu &\supset \overline{q}_L \,\cUq \gamma^\mu \ell_L + \overline{u}_R\,\cUu \gamma^\mu \nu_R + \overline{d}_R\, \cUd \gamma^\mu e_R\,,\\ J_{G^\prime}^{\mu\,a} &\supset \overline{q}_L \,\cGq \gamma^\mu T^a q_L + \overline{u}_R \,\cGu \gamma^\mu T^a u_R + \overline{d}_R \,\cGd \gamma^\mu T^a d_R \,, \\ J_{Z^\prime}^\mu &\supset 3 \, \overline{\ell}_L \,\cZl \gamma^\mu \ell_L - \overline{q}_L \,\cZq \gamma^\mu q_L + 3 \, \overline{\nu}_R \,\cZnu \gamma^\mu \nu_R + 3 \,\overline{e}_R \,\cZe \gamma^\mu e_R \\ &\quad-\overline{u}_R \,\cZu \gamma^\mu u_R -\overline{d}_R \,\cZd \gamma^\mu d_R+2\left(\frac{g_1}{g_4}\right)^2\phi^\dagger \,i\overleftrightarrow{D}_{\!\!\!\mu}\,\phi\,, \end{aligned} 
\label{eq:currents} \end{align} where the new flavor structures are expressed in terms of the $N$'s and the unitary rotation matrices that diagonalize the Yukawa couplings: \begin{align} \begin{aligned} \cUq &= L_d^\dagger N_U^L L_\ell\,, & \cGq &= L_d^\dagger N_{G^\prime}^L L_d\,, & \cZq &= L_d^\dagger N_{Z^\prime}^q L_d\,, & \cZl &= L_e^\dagger N_{Z^\prime}^\ell L_e\,,\\ \cUu &= R_u^\dagger N_U^R R_\nu\,, & \cGu &= R_u^\dagger N_{G^\prime}^R R_u\,, & \cZu &= R_u^\dagger N_{Z^\prime}^u R_u \,, & \cZe &= R_e^\dagger N_{Z^\prime}^e R_e \,,\\ \cUd &= -R_d^\dagger N_U^R R_e\,, & \cGd &= R_d^\dagger N_{G^\prime}^R R_d\,, & \cZd &= R_d^\dagger N_{Z^\prime}^d R_d\,, & \cZnu &= R_\nu^\dagger N_{Z^\prime}^\nu R_\nu\,. \end{aligned} \end{align} The relative sign in $\beta_d$ follows from the phase choice discussed in appendix~\ref{app:Yukawa}. This phase choice fixes the sign of the scalar contribution to $\Delta R_{D^{(*)}}$, see Eqs.~\eqref{eq:bctanuLag} and~\eqref{eq:DRDCU}, and therefore it plays a key role in the explanation of the $R_{D^{(*)}}$ anomalies. Also note that, in the case of the $Z'$ current, we have included also the contribution of the SM Higgs ($\phi$), which is obtained combining the four SM-like Higgses of the model. 
By integrating out $U$, $Z^{\prime}$ and $G^{\prime}$ at the tree level we obtain the effective Lagrangians \begin{align} \begin{aligned} \mathcal{L}_{\rm EFT}^U &= -\frac{4\,G_F}{\sqrt{2}}\,C_U\,J_{U}^\mu J_{U\,\mu}^\dagger= -\frac{2}{v^2}\,C_U \sum_k \eftU{k}\, Q_k\,, \\ \mathcal{L}_{\rm EFT}^{G^\prime} &= -\frac{4\,G_F}{\sqrt{2}}\,C_{G^\prime}\,(J_{G^\prime}^\mu)^2=-\frac{2}{v^2}\,C_{G^\prime} \sum_k \eftG{k}\, Q_k\,,\\ \mathcal{L}_{\rm EFT}^{Z^\prime}&= -\frac{4\,G_F}{\sqrt{2}}\,C_{Z^\prime}\,(J_{Z^\prime}^\mu)^2=-\frac{2}{v^2}\,C_{Z^\prime} \sum_k \eftZp{k}\, Q_k\,, \\ \end{aligned} \end{align} where $Q_{k}$ denote the SMEFT operators in the Warsaw basis~\cite{Grzadkowski:2010es}, plus additional dimension six operators involving right-handed neutrinos, reported in Table~\ref{eq:operators_nuR}. More compactly, \begin{align} \begin{aligned} \mathcal{L}_{\rm SMEFT} = - \frac{4\,G_F}{\sqrt{2}} \sum_k \mathcal{C}_{k} Q_{k} \qquad\;\; \mathcal{C}_{k}= C_U \eftU{k} + C_{G^\prime} \eftG{k} + C_{Z^\prime} \eftZp{k} \,. \end{aligned} \label{eq:lag_smeft} \end{align} Tables \ref{tab:no4ferm}, \ref{tab:4ferm}, and \ref{tab:4ferm_nuR} contain the tree level matching results for the SMEFT Wilson coefficients $\mathcal{C}_k$. \subsection{From the SMEFT to the LEFT} After matching, we perform the RG evolution of the resulting Wilson coefficients using DsixTools~\cite{Celis:2017hod}. RG effects are particularly important for the scalar operators and for dimension-six operators in the $\psi^2\phi^2\,D$ category. 
The latter introduce modifications to the $W$ and $Z$ couplings after SSB (see e.g.~\cite{Jenkins:2017jig})\footnote{Contributions to other dimension-six operators that could potentially induce $W$ and $Z$ coupling modifications, such as those of the class $X^2 H^2$ or $Q_{HD}$, are negligible in our model.} which are tightly constrained by electroweak precision data at LEP as well as by universality tests in lepton decays~\cite{Feruglio:2016gvd,Feruglio:2017rjo,Cornella:2018tfd}. NP effects below the electroweak scale are conveniently described in terms of a low-energy effective field theory (LEFT) in which the $W$, the $Z$, the $t$, and the Higgs have been integrated out: \begin{align} \mathcal{L}^{\rm LEFT}=-\frac{4 G_F}{\sqrt{2}}\sum_k\,\eftWC{k}{}\mathcal{O}_k\,. \end{align} We then proceed by matching the SMEFT to the LEFT and provide the expressions for the relevant observables in terms of its Wilson coefficients. We adopt the same operator basis for the LEFT as in Table 7 of Ref.~\cite{Jenkins:2017jig}, where the matching conditions between the SMEFT and the LEFT can also be found. \section{The key low-energy observables}\label{sect:pheno} In what follows we provide simplified expressions for the most relevant low-energy observables, and discuss their role in constraining the model and in offering future tests of this framework. These simplified expressions are mainly for illustrative purposes; for all figures and numerical estimates throughout the paper we use the full expressions quoted in appendix~\ref{app:obs}.
\subsection{$\Delta F=2$ transitions}\label{subsec:DF2} \begin{figure}[p] \centering \includegraphics[width=0.45\textwidth]{./figDF2U2.png}\qquad\quad\includegraphics[width=0.45\textwidth]{./figDF2NoU2.png}\\ \includegraphics[width=0.45\textwidth]{./figDFU2CP.png}\qquad\quad\includegraphics[width=0.45\textwidth]{./figDFNoU2CP.png} \caption{NP effects in $B_{s,d}-\bar B_{s,d}$ mixing as a function of the phase $\phi_b$ for $\Delta\alpha_d=0,\pi$ (left) and $\alpha_d=0,\pi$ (right). The blue and orange bands correspond to the $95\%$ CL experimental bounds for $B_s$ and $B_d$ mixing, respectively. We use the following inputs: $s_b=0.10\,|V_{ts}|$ (solid), $s_b=0.15\,|V_{ts}|$ (dashed), $\epsilon_R^d=0$, $g_4=3.0$, $M_{Z^\prime}=1.75$~TeV, and $M_{G^\prime}=2.5$~TeV.}\label{fig:DF2} \end{figure} As in any extension of the SM with a non-trivial flavor structure, in the ${\rm PS}^3$ framework $\Delta F=2$ amplitudes provide some of the most significant constraints on the model parameters, particularly on the new sources of flavor violation in the quark sector. These amplitudes receive tree-level contributions mediated by the $Z^\prime$ and $G^\prime$, whose strength is controlled by the $\mathrm{U(2)}^5$ breaking spurions.
To a good approximation, the three down-type $\Delta F=2$ amplitudes can be written as \begin{align}\label{eq:M12simp} \begin{aligned} \mathcal{M}(K^0\to\bar K^0)&\approx\left|\mathcal{M}_{\rm SM}^{(tt)}\right|\left[\frac{(V_{td}V_{ts}^*)^2}{\left|V_{td}V_{ts}^*\right|^2}+e^{-2i\alpha_d}\frac{c_d^4\,[s_b^2+2\,s_b\,\Re(\tilde\epsilon_q\,e^{-i\phi_b})+\Delta\epsilon_q]^2}{|V_{ts}|^4}F_0\right]+\mathcal{M}_{\rm SM}^{(tc+cc)}\,,\\[5pt] \mathcal{M}(B_d\to\bar B_d)&\approx\left|\mathcal{M}_{\rm SM}\right|\frac{(V_{td}V_{tb}^*)^2}{\left|V_{td}V_{tb}^*\right|^2}\left[1+\frac{c_d^2\,(s_b\,e^{-i\phi_b}+\tilde\epsilon_q^{*})^2}{|V_{ts}|^2}\,F_0\,e^{-2i \Delta \alpha_d} \right]\,,\\[5pt] \mathcal{M}(B_s\to\bar B_s)&\approx\left|\mathcal{M}_{\rm SM}\right|\frac{(V_{ts}V_{tb}^*)^2}{\left|V_{ts}V_{tb}^*\right|^2}\left[1+\frac{c_d^2\,(s_b\,e^{-i\phi_b}+\tilde\epsilon_q^{*})^2}{|V_{ts}|^{2}}\,F_0\,(1+f(\tbsR))\right]\,, \end{aligned} \end{align} where \begin{align} F_0=\frac{16\pi^2}{\sqrt{2}\,G_FM_W^2\, S_0(x_t)}\,\left(C_{Z^\prime}+\frac{C_{G^\prime}}{3}\right)\,, \end{align} and $S_0(x_t=m_t^2/M_W^2)\approx2.4$ denotes the SM one-loop function (in the $\Delta S=2$ case we normalize the NP amplitude to the short-distance top-quark SM contribution). As far as left-handed flavor-mixing parameters are concerned, $s_b$ and $\phi_b$ arise from the leading $\mathrm{U(2)_q}$ breaking term in the quark sector; $\Delta \alpha_d = \alpha_d- (\pi-\mathrm{Arg}\left\{V_{td}/V_{ts}\right\})$ denotes the phase difference between the leading quark spurion and subleading terms describing light-quark masses (see appendix~\ref{app:Yukawa}); $c_d=1 +{\cal O}(|V_{us}|^2)$; $\Delta \epsilon_q$ and $\tilde\epsilon_q$, defined in Eq.~(\ref{eq:DeltaU}), encode the effect of the subleading breaking terms in the $Z^\prime$ and $G^\prime$ couplings. Finally, $f(\tbsR)$ describes the contributions from the right-handed flavor rotations in~\eqref{eq:RHrot1}. 
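Before discussing the individual observables, it is useful to estimate the typical size of $F_0$. Using $G_F \approx 1.17\times 10^{-5}~{\rm GeV}^{-2}$, $M_W \approx 80.4~{\rm GeV}$, and $S_0(x_t)\approx 2.4$, the prefactor evaluates to

```latex
% Illustrative numerical evaluation of the prefactor in F_0.
F_0 \;\approx\; 6\times 10^{2}\left(C_{Z^\prime}+\frac{C_{G^\prime}}{3}\right)\,,
```

so that effective couplings of $\mathcal{O}(10^{-2})$ correspond to $F_0=\mathcal{O}({\rm few})$, and the relative NP contribution to the $B_s$-mixing amplitude, which scales as $(c_d\,s_b/|V_{ts}|)^2 F_0$, lies at the few-percent level for $s_b\sim 0.1\,|V_{ts}|$ (an illustrative estimate, not a fit result).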
Using the inputs in~\cite{Bazavov:2016nty} for the bag parameters of non-SM operators, we find \begin{align} f(\tbsR)&\approx\frac{16\,C_{Z^\prime}+22\,C_{G^\prime}}{3\,C_{Z^\prime}+C_{G^\prime}}\, \frac{(\tbsR)^*}{c_d\,s_b\,e^{-i\phi_b}} + {\cal O}[(\tbsR)^2]~. \end{align} As shown in appendix~\ref{app:Yukawa}, in the limit where we neglect contributions to the Yukawa couplings from $d=7$ effective operators, i.e.~when we set $\epsilon_R^d=0$, the right-handed rotation angle is unambiguously fixed to $\tbsR=m_s/m_b\,s_b\,e^{i\phi_b}$, that in turn implies $f(\tbsR)\approx0.4$ for typical values of $C_{Z^\prime}/C_{G^\prime}$. \paragraph*{CP violation in Kaon mixing.} The most significant constraints on the subleading parameters $\Delta \epsilon_q$ and $\tilde\epsilon_q$, which describe the deviations from the exact $\mathrm{U(2)_q}$ limit in the $Z^\prime$ and $G^\prime$ left-handed couplings, arise from the CP-violating observable $\epsilon_K \propto \Im[\mathcal{M}(K^0\to\bar K^0)]$, that can be decomposed as \begin{align} \epsilon_K&\approx\epsilon_K^{\rm SM}-\sqrt{2}\,\epsilon_K^{\rm SM,\,(tt)}\,\sin (2\alpha_d)\,\left[s_b^2+2\,s_b\,\,\Re(\tilde\epsilon_q\,e^{-i\phi_b})+\Delta\epsilon_q\right]^2\,\frac{c_d^4\,F_0}{|V_{ts}|^4}\,, \end{align} where $\epsilon_K^{\rm SM,\,(tt)}$ corresponds to the top-mediated SM contribution. The NP contribution to $\epsilon_K$ vanishes for $\alpha_d\to 0$. Setting $\Delta\epsilon_q=\tilde\epsilon_q=0$, and choosing the other parameters in their natural range, we find that $\epsilon_K$ is well within its current bound, irrespective of the value of $\alpha_d$. Allowing for $\Delta\epsilon_q, \tilde\epsilon_q \not =0$, imposing modifications in $|\epsilon_K|$ of up to $\mathcal{O}(15\%)$, and barring accidental cancellations with generic values of $\alpha_d$, we find \be | \Delta\epsilon_q | \lesssim0.1\,|V_{ts}|^2~, \qquad | \tilde\epsilon_q | \lesssim0.3\,|V_{ts}|~. 
\ee Similar limits, although slightly less stringent, are obtained from $B_{s,d}-\bar B_{s,d}$ and $D-\bar D$ mixing. Despite being stringent, these limits lie above the natural size of these subleading breaking terms inferred in Table~\ref{tab:epsilonList} (setting $|\epsilon_U| \leq 10^{-2}$). This result implies that: i)~it is perfectly consistent to focus on the scenario $\Delta\epsilon_q=\tilde\epsilon_q=0$; ii)~once the symmetry-breaking terms assume their natural size, no fine-tuning of the CP-violating phases is necessary in order to satisfy the $\epsilon_K$ constraint. \paragraph{$\Delta B=2$ observables.} Setting $\Delta\epsilon_q=\tilde\epsilon_q=0$, the physical observables sensitive to $\Delta B=2$ amplitudes, namely the mass differences ($\Delta M_q$) and the CP-violating asymmetries $S_{\psi K_S}$ and $S_{\psi \phi}$, can be expressed as \begin{align} \begin{aligned} C_{B_d}&\equiv\frac{\Delta M_d}{\Delta M_d^{\rm SM}}\approx\left|1+\frac{c_d^2\, s^2_b\,e^{-2i(\phi_b+\Delta\alpha_d)}}{|V_{ts}|^2}\,F_0\right|\,,\\ C_{B_s}&\equiv\frac{\Delta M_s}{\Delta M_s^{\rm SM}}\approx\left|1+\frac{c_d^2\, s^2_b\,e^{-2 i\phi_b}}{|V_{ts}|^2}\,F_0\,\left( 1+f(\tbsR) \right)\right|\,,\\ \end{aligned} \end{align} and \begin{align} \begin{aligned} S_{\psi K_S}&=\sin\left(2\beta+\Phi_{B_d}\right)\,, \qquad \Phi_{B_d}\approx\mathrm{Arg}\left(1+\frac{c_d^2 \,s^2_b\,e^{-2i(\phi_b+\Delta\alpha_d)} }{|V_{ts}|^2}\,F_0\right)\,,\\ S_{\psi \phi}&=\sin\left(2|\beta_s|-\Phi_{B_s}\right)\,,~ \quad \Phi_{B_s}\approx\mathrm{Arg}\left(1+\frac{c_d^2\, s^2_b\,e^{-2i\phi_b} }{|V_{ts}|^2}\,F_0\,\left(1+f(\tbsR)\right)\right)\,. \end{aligned} \end{align} Current lattice data~\cite{Bazavov:2016nty} point to a deficit in the experimental values of $\Delta M_{d,s}$ with respect to the SM prediction (or, equivalently, to values of $C_{B_{s,d}}$ smaller than one).
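The role of the phase $\phi_b$ in generating such a deficit can be made explicit with a small numerical sketch (the magnitude $r$ of the NP correction below is a hypothetical illustration, not a fit output):

```python
import cmath
import math

# r stands for the magnitude of the NP correction, r = c_d^2 s_b^2 F_0 / |V_ts|^2.
# The value below is purely illustrative.
r = 0.10

def C_B(phi_b):
    """|1 + r e^{-2 i phi_b}|, the structure appearing in C_{B_d} and C_{B_s}."""
    return abs(1 + r * cmath.exp(-2j * phi_b))

# phi_b = 0: the NP term adds constructively, C_B = 1 + r > 1.
# phi_b = pi/2: e^{-i pi} = -1, so C_B = 1 - r < 1, i.e. a deficit.
```

For $\phi_b$ near $\pi/2$ the NP term interferes destructively with the SM amplitude, which is precisely the configuration preferred by the lattice-based deficit.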
As shown in Figure~\ref{fig:DF2}, the presence of the free phase $\phi_b$ allows the model to accommodate this deficit, even for small departures from $\phi_b=\pi/2$, while satisfying the bounds from CP violation (see Ref.~\cite{DiLuzio:2017fdq} for a similar discussion). The mixing angle $s_b$ is constrained to be at most $0.2\,|V_{ts}|$ (depending on $\phi_b$), indicating a mild alignment of the leading $\mathrm{U(2)_q}$ breaking spurion in the down sector. As we discuss in Section~\ref{subsec:b2sll}, in our framework the vector leptoquark provides a good fit to the semileptonic anomalies irrespective of the value of $\phi_b$ (contrary to the case discussed in Ref.~\cite{DiLuzio:2017fdq}). We thus conclude that the model leads to a good description of $\Delta B=2$ observables, possibly even improved with respect to the SM case. We also note that using previous lattice determinations of the SM prediction for $\Delta M_{d,s}$, consistent with the experimental value but with larger errors (see e.g.~\cite{Artuso:2015swg,Lenz:2011ti,DiLuzio:2017fdq}), does not affect the results of our phenomenological analysis. \paragraph*{CP violation in $D$ mixing.} Last but not least, we analyze the bounds from $\Delta C=2$ amplitudes. Following the analysis from UTfit~\cite{Bona:2017cxr,UTfit2018,Carrasco:2014uya}, the constraint obtained from the non-observation of CP violation in the $D-\bar D$ transition can be expressed as \be \Im (C_1^D) = \frac{4 G_F}{ \sqrt{2} } \Im\left(\eftWCF{uu}{V, LL}{2121}(\mu_t) \right) = (-0.03\pm0.46) \times 10^{-14}\ {\rm GeV}^{-2}~.
\ee Taking into account also the subleading breaking terms, we find the following simplified expression for this Wilson coefficient: \begin{align} \mathrm{Im}\left(C_1^D\right) &\approx \frac{4 G_F}{ \sqrt{2} }\, \Im\left\{(V_{ub}^*V_{cb})^2\left[\left(1+c_d\,(s_b\,e^{-i\phi_b}+\tilde\epsilon_q^{\,*})\,\frac{V_{tb}}{|V_{ts}|}\,\Lambda_u^*\right)\left(1+c_d\,(s_b\,e^{i\phi_b}+\tilde\epsilon_q)\,\frac{V_{tb}^*}{|V_{ts}|}\,\Lambda_c\right)\right.\right. \no\\ &\quad \left.\left.+\,\Delta\epsilon_q\,c_d^2\,\frac{|V_{tb}|^2}{|V_{ts}|^2}\,\Lambda_u^*\,\Lambda_c\right]^2\right\}\left(C_{Z^\prime}+\frac{C_{G^\prime}}{3}\right) \no\\ & = \frac{4 G_F}{ \sqrt{2} }\, \left(C_{Z^\prime}+\frac{C_{G^\prime}}{3}\right)\, \Im\left\{(V_{ub}^*V_{cb})^2 \left[ 1+ {\cal O}(s_b, \tilde\epsilon_q, \Delta\epsilon_q) \right] \right\}\,, \end{align} where we have defined \begin{align} \begin{aligned} \Lambda_{i}&=\frac{V_{is}|V_{ts}|-V_{id}\left|V_{td}\right|e^{i\alpha_d}}{V_{ib}V_{tb}^*} ~= ~ \left\{ \begin{array}{ll} 1 +{\cal O}(\lambda^2) & (i=c) \\ 1-\frac{V_{ud}V_{td}^*}{V_{ub} }\left[1-e^{i\Delta \alpha_d}\right] +{\cal O}(\lambda^2) & (i=u) \end{array} \right.\,, \end{aligned} \end{align} which in the limit $\Delta \alpha_d \to 0$ reduces to the $\mathrm{U(2)}$ symmetric result $\Lambda_c = \Lambda_u=1$. Contrary to down-type observables, in this case non-vanishing NP contributions are generated also in the $s_b\to 0$ limit. Setting to zero the subleading breaking terms ($\Delta\epsilon_q= \tilde\epsilon_q =0$), we find that the experimental bound is satisfied over a wide range of $\{s_b,\phi_b\}$ values compatible with the $\Delta B=2$ constraints. Note in particular that in the limit where $\Delta\alpha_d=\pi$, we have $\Lambda_u=1.1-4.6\,i$.
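This value can be checked with illustrative CKM inputs (the magnitudes and angles below are assumptions chosen for the sketch, not the inputs used in our analysis):

```python
import cmath
import math

# Illustrative CKM inputs (assumptions for this sketch only)
Vud, Vtd_abs, Vub_abs = 0.9743, 0.0087, 0.0037
beta, gamma = math.radians(22.0), math.radians(70.0)

# Standard phase conventions: V_td = |V_td| e^{-i beta}, V_ub = |V_ub| e^{-i gamma}
Vtd = Vtd_abs * cmath.exp(-1j * beta)
Vub = Vub_abs * cmath.exp(-1j * gamma)

# Lambda_u = 1 - (V_ud V_td^* / V_ub) [1 - e^{i Delta alpha_d}] at Delta alpha_d = pi
Lambda_u = 1 - (Vud * Vtd.conjugate() / Vub) * (1 - cmath.exp(1j * math.pi))
```

With these inputs one finds $\Lambda_u \approx 1.2 - 4.6\,i$, reproducing the large imaginary part quoted above; the real part is more sensitive to the precise CKM angles.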
In this case the large imaginary part of $\Lambda_u$, together with the values of $s_b$ and $\phi_b$ introduced to explain the deficit in $\Delta B=2$ transitions, yields a partial cancellation in $C_1^D$, both in the real and in the imaginary part. This is shown in Figure~\ref{fig:DDbar}, where we plot the $Z^\prime$- and $G^\prime$-mediated tree-level contributions to the imaginary part of $C_1^D$ together with the current bound from UTfit. A similar behaviour is also obtained when $\alpha_d=\pi$, in which case $\Lambda_u=0.2-4.4\,i$. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./figDDbar.png} \caption{ Model contributions to $\mathrm{Im}(C_1^D)$ as a function of $\phi_b$. We use the following inputs: $s_b=0.10\,|V_{ts}|$, $g_4=3.0$, $M_{Z^\prime}=1.75$~TeV, and $M_{G^\prime}=2.5$~TeV. The dark- and light-blue bands correspond to the $68\%$ and $95\%$ CL bound from UTfit~\cite{UTfit2018}, respectively.}\label{fig:DDbar} \end{figure} \subsection{LFU tests in charged lepton decays} Besides $\Delta F=2$ observables, another very relevant set of constraints on the model is posed by LFU tests in charged-lepton decays. These provide an important bound on the overall strength of leptoquark interactions, yielding an upper limit on the possible NP contribution to $R_{D^{(*)}}$. Such tests are constructed by taking ratios of the partial widths of a lepton decaying into lighter leptons or hadrons (see appendix~\ref{app:LFU}). In our model, both the $\mu$ vs $e$ and the $\tau$ vs $\mu$ ratios are modified: the former is dominated by the tree-level exchange of a $Z^\prime$, the latter by a leptoquark loop. Setting $M_U=2$~TeV to evaluate the leptoquark loop, we find\footnote{In the $\tau$ vs $\mu$ ratio we include the full RG running from $M_U$ to $m_t$ using DsixTools~\cite{Celis:2017hod}.
Because of the large running effects in the top Yukawa coupling, we find differences of $\mathcal{O}(20\%)$ in the NP contribution when comparing the full RG result to the non-RG-improved one-loop expression. We also include the non-logarithmic terms computed in~\cite{Bordone:2017bld}.} \begin{align} \left(\frac{g_\mu}{g_e}\right)_\ell &\approx 1 + 9 \, C_{Z^{\prime}}\, s_\tau^{2}\,, \\ \left(\frac{g_\tau}{g_\mu}\right)_{\ell,\pi,K}&\approx1 - 0.063\,C_U\,. \end{align} The high-precision measurements of these effective couplings only allow for per-mille modifications of the ratios. This in turn implies a strong bound on the possible value of $C_U$. Taking the HFLAV average for the $\tau$ vs $\mu$ ratio~\cite{Amhis:2016xyh}, \begin{align} \left(\frac{g_\tau}{g_\mu}\right)_{\ell+\pi+K}&=1.0000\pm0.0014\,, \end{align} we find the following limit on $C_U$ at $95\%$~CL: \begin{align} C_U\lesssim0.04\stackrel{M_U=2~\mathrm{TeV}}{\Longrightarrow}g_4\lesssim3.2\,. \label{eq:LFUbound} \end{align} This bound is shown in Figure~\ref{fig:CC} together with the NP enhancement in $b\to c(u)\ell\nu$ transitions. On the other hand, we find that possible modifications in the $\mu$ vs $e$ ratio are of $\mathcal{O}(10^{-4})$ and thus do not yield any relevant constraint. We also find that tests of LFU from precision $Z$- and $W$-pole measurements at LEP do not lead to stringent bounds. In particular, we note that the $Z^\prime$ tree-level contribution to $Z$ anomalous couplings, given in terms of the $\psi^2\phi^2 D$ SMEFT operators in Table~\ref{tab:no4ferm}, is found to be well below the present limits. \subsection{$b\to c(u)\tau\nu$} \label{subsec:b2ctaunu} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./CC1.png}\qquad\quad\includegraphics[width=0.45\textwidth]{./CC2.png} \caption{NP enhancements in $\mathcal{B}(B\to\tau\nu),\, R_D$ and $R_{D^*}$ as a function of $M_U/g_U$.
We use the following inputs: $s_b=0.15\,|V_{ts}|$ (left), $s_b=0.10\,|V_{ts}|$ (right), $\phi_b= \pi/2$. The red and orange bands correspond, respectively, to the 95\% CL exclusion limits from LFU tests in $\tau$ decays and from $\mathcal{B}(B\to\tau\nu)$.}\label{fig:CC} \end{figure} The violation of LFU in $b\to c \ell \nu$ transitions, measured via the ratios $R_D$ and $R_{D^*}$, sets the scale of NP (or the preferred value of $C_U$). In the ${\rm PS}^3$ model NP effects in $b\to c(u)\tau\nu$ transitions are described by the following effective operators \begin{equation}\label{eq:bctanuLag} \mathcal{L}(b\to u_i \tau \bar{\nu})=-\frac{4G_F}{\sqrt{2}}\left(\eftWCF{\nu edu}{V, LL}{333i}^*(\overline{\tau}_L\gamma^\mu\nu_{L3})(\overline{u}_L^{\,i}\gamma_{\mu}b_L)+\eftWCF{\nu edu}{S, RL}{333i}^* (\overline{\tau}_R\,\nu_{L3})(\overline{u}^{\,i}_Lb_R)\right)\, , \end{equation} where $i=1(2)$ for up (charm) quarks. At $\Lambda=M_U$ we have to a good approximation \begin{align} \eftWCFS{\nu edu}{S, RL}{M_U}{333i}=2\,\eftWCFS{\nu edu}{V, LL}{M_U}{333i}\approx\,2\,C_U\,V_{ib}^*\,. \end{align} The RG running (due to QCD) introduces an important correction to the scalar operator contributions. To account for these effects we define the following RG factor \begin{align} \eftWCFS{\nu edu}{S, RL}{m_b}{333i}=\,\etaF\,\eftWCFS{\nu edu}{S, RL}{M_U}{333i}\,. \end{align} Using DsixTools~\cite{Celis:2017hod} (see also~\cite{Aebischer:2017gaw,Gonzalez-Alonso:2017iyc}) we find $\etaF\approx1.8$ for $M_U=2$~TeV. On the other hand, the running of the vector operator (which is a conserved current as far as QCD is concerned) is very small and will be neglected in the following discussion. Due to the presence of a scalar operator, we predict departures from a pure $V-A$ structure, hence different NP contributions to $R_D$ and $R_{D^*}$. We define the relative NP contribution to these observables as \begin{align}\label{eq:DRD} \Delta R_{D^{(*)}}&=\frac{R_{D^{(*)}}}{R_{D^{(*)}}^{\rm SM}}-1\,. 
\end{align} Using the results in~\cite{Fajfer:2012vx} for the scalar form factors, we find the following simplified expressions: \begin{align}\label{eq:DRDCU} \begin{aligned} \Delta R_{D}&\approx 2\,C_U\times (1+1.5\,\etaF)\,,\\ \Delta R_{D^*}&\approx 2\,C_U\times (1+0.12\,\etaF)\,, \end{aligned} \end{align} which imply a $30\%$ ($10\%$) NP effect in $R_{D}$ ($R_{D^*}$) for $C_U \approx 0.04$, i.e.~a value around the upper bound of the LFU constraint in Eq.~(\ref{eq:LFUbound}). The (non-standard) contribution to $\mathcal{B}\left(B_c\to\tau\nu\right)$ induced by the scalar operator is chirally enhanced, resulting in an enhancement of $\mathcal{O}(100\%)$ compared to the SM prediction. However, given the low experimental accuracy in this observable, this does not pose any significant bound on the model. Similarly, the modification of the $B_c$ lifetime, which has been shown to introduce important constraints on explanations of the $b\to c\tau\nu$ anomalies based on pure scalar operators~\cite{Alonso:2016oyd}, is well below the experimental limit. Given the approximate $\mathrm{U(2)_q}$ symmetry, similar NP effects are also expected in $b\to u \ell \nu$. So far, the most relevant measurement involving these transitions is $\mathcal{B}\left(B\to\tau\nu\right)$. In analogy to the $R_{D^{(*)}}$ case, we define \begin{align} \Delta \mathcal{B}\left(B\to\tau\nu\right)&=\frac{\mathcal{B}\left(B\to\tau\nu\right)}{\mathcal{B}\left(B\to\tau\nu\right)^{\rm SM}}-1\,. \end{align} Using the current experimental value~\cite{Patrignani:2016xqp} and the result from UTfit~\cite{Bona:2017cxr} for the SM prediction, we find \begin{align} \Delta \mathcal{B}\left(B\to\tau\nu\right)&=0.35 \pm 0.31\,. \end{align} In our model, we obtain \begin{align} \Delta\mathcal{B}\left(B\to\tau\nu\right)&\approx \left|1+C_U\left[1+c_d\,s_b\,e^{i\phi_b}\,\frac{V_{tb}^*}{|V_{ts}|}\,\Lambda_u\right]\left(1+\etaF\,\frac{2\,m_B^2}{m_\tau(m_b+m_u)}\right)\right|^2-1\,.
\end{align} Also in this case scalar contributions are chirally enhanced and we typically expect large NP effects. However, similarly to $D$--$\bar D$ mixing, in the limit where $\Delta\alpha_d\to\pi$ (and analogously for $\alpha_d\to\pi$) the large phase in $ \Lambda_u$, together with the values of $s_b$ and $\phi_b$ required to explain the deficit in $\Delta B=2$ transitions, yields a significant attenuation of the NP enhancement. The possible range of deviations from the SM is illustrated in Figure~\ref{fig:CC}. Contrary to $B$ decays, LFU breaking effects in charged-current $K$ and $D$ decays are strongly CKM suppressed (relative to the corresponding SM amplitudes) and do not lead to significant constraints. \subsection{$b\to s\ell\ell$ and $b\to s\nu\nu$}\label{subsec:b2sll} The violation of LFU in $b\to s \ell \ell$ transitions, measured via the ratios $R_K$ and $R_{K^*}$, sets the amount of $\mathrm{U(2)}^5$ breaking in the model which is not directly related to the Yukawa couplings. After imposing the constraints from $\Delta F=2$ observables, the $Z^\prime$-mediated contributions to $b\to s\ell\ell$ amplitudes turn out to be well below those mediated by the vector leptoquark. This is because the $\Delta F=2$ constraints require the effective $bsZ^\prime$ coupling to be either very small in size or almost purely imaginary (hence with a tiny interference with the SM contribution). 
As a result, the following approximate relations hold (assuming $\phi_\tau=0$ and $\epsilon_U$ real): \begin{align}\label{eq:bsllWET} \begin{aligned} \mathrm{Re}\,(\Delta {\cal C}_9^{\mu\mu }) &\approx -\, \mathrm{Re}\,(\Delta {\cal C}_{10}^{\mu\mu}) \approx -\frac{2\,\pi}{\alpha_{\rm em} } \, \frac{s_\tau\,\epsilon_U}{|V_{ts}|}\,C_U \,,\\ \mathrm{Re}\,(\Delta {\cal C}_9^{\tau\tau }) &\approx -\, \mathrm{Re}\,(\Delta {\cal C}_{10}^{\tau\tau}) \approx \frac{2\,\pi}{\alpha_{\rm em} } \, \frac{s_\tau\,\epsilon_U}{|V_{ts}|}\,C_U \,, \end{aligned} \end{align} where $\Delta {\cal C}_i^{\alpha\alpha } = {\cal C}_i^{\alpha\alpha } -{\cal C}_i^{\rm SM}$, and $\Delta {\cal C}_9^{ee} \approx \Delta {\cal C}_{10}^{ee} \approx 0$. Hence, the deviations from unity in the LFU ratios $R_K$ and $R_{K^*}$ can be expressed as~\cite{Capdevila:2017bsm,Celis:2017doq} \begin{align} \begin{aligned} \Delta R_K &=& \left.R_K\right|_{[1,\,6]~{\rm GeV}^2} - 1 ~\approx~ & 0.23\,\Delta {\cal C}_9^{\mu\mu} - 0.23\, \Delta {\cal C}_{10}^{\mu\mu} \approx 0.46\, \Delta {\cal C}_9^{\mu\mu}~,\\ \Delta R_{K^*} &=& \left.R_{K^*} \right|_{[1.1,\,6]~{\rm GeV}^2} - 1 ~\approx~ & 0.20\, \Delta {\cal C}_9^{\mu\mu} -0.27\,\Delta {\cal C}_{10}^{\mu\mu} \approx 0.47\, \Delta {\cal C}_9^{\mu\mu} ~. \end{aligned} \end{align} Contrary to other models aiming at a combined explanation of the anomalies, we predict $\mathrm{Re}\,(\Delta {\cal C}_{9,10}^{\mu\mu})$ and $\mathrm{Re}\,(\Delta {\cal C}_{9,10}^{\tau\tau})$ to be of similar size. This is a consequence of the different $\mathrm{U(2)}^5$ breaking structure discussed in Section~\ref{sect:MainYuk}. Another key difference with respect to the existing literature is the presence of right-handed leptoquark currents.
These generate the following scalar and pseudo-scalar contributions:\footnote{Given that the leading RG effects for the scalar operators are dominated by QCD, the RG running factor for ${\cal C}_{S,P}$ and $\eftWC{\nu edu}{S, RL}$ remains the same to a very good approximation.} \begin{align}\label{eq:scalarOp} \begin{aligned} \mathcal{C}_S^{\mu\mu}&=-\,\mathcal{C}_P^{\mu\mu}\approx \frac{4\,\pi}{\alpha_{\rm em}V_{tb}V_{ts}^*}\,C_U\,\etaF\,\epsilon_U\,\theta^R_{\tau\mu}\,,\\[2pt] \mathcal{C}_S^{\tau\tau}&=-\,\mathcal{C}_P^{\tau\tau}\approx -\frac{4\,\pi}{\alpha_{\rm em}V_{tb}V_{ts}^*}\,C_U \,\etaF\,\left[\epsilon_U\, s_\tau\,e^{i\phi_\tau}+s_b\,e^{i\phi_b}\right]\,. \end{aligned} \end{align} While the effect of these operators is negligible in chirally-allowed transitions, this is not the case for $P\to\ell\ell$ decays (see appendix~\ref{app:obs}). In particular, the enhancement of scalar amplitudes is enough to overcome the mass suppression of the right-handed rotation angle $\theta^R_{\tau\mu}$ in $\mathcal{C}_{S,P}^{\mu\mu}$. Setting $\Delta\mathcal{C}_{9}^{\mu\mu}=-0.6$, as required by the central value of the $R_K$ and $R_{K^*}$ anomalies, and using the latest LHCb measurement of $\mathcal{B}(B_s\to\mu\mu)=3.02(65)\times10^{-9}$~\cite{Aaij:2017vad}, we find the following bounds at $95\%$~CL on the right-handed mixing in the lepton sector: \begin{align} \left| \theta^R_{\tau\mu} / s_\tau \right| \leq 0.013~, \qquad\qquad~ 0.04 \leq\theta^R_{\tau\mu} / s_\tau \leq 0.07~. \end{align} The second solution corresponds to a destructive interference between a large NP amplitude and the SM, yielding $\mathcal{B}(B_s\to\mu\mu)$ close to the SM expectation. As we discuss in the following section, this accidental cancellation is disfavored by LFV constraints. 
Therefore, we focus on the first solution, which requires the $\mu$--$\tau$ right-handed mixing angle to be slightly smaller than what we expect in the absence of dimension-7 operators ($| \theta^R_{\tau\mu} / s_\tau | =m_\mu/m_\tau=0.06$), but is still natural. We also expect a relatively large NP enhancement in $\mathcal{B}(B_s\to\tau\tau)$, dominated by the chirally-enhanced scalar contributions in~\eqref{eq:scalarOp}. Setting $\Delta\mathcal{C}_{9}^{\mu\mu}=-0.6$ and $C_U=0.04$, and assuming $\phi_b\approx\pi/2$ and $\phi_\tau \approx0$, we find \begin{align} \frac{\mathcal{B}(B_s\to\tau\tau)}{\mathcal{B}(B_s\to\tau\tau)^{\rm SM}}&\approx 5+45 \left(\frac{s_b}{0.1\,|V_{ts}|}\right)^2\,, \end{align} where $\mathcal{B}(B_s\to\tau\tau)^{\rm SM}=\left(7.73\pm0.49\right)\times10^{-7}$~\cite{Bobeth:2013uxa}. We stress the strong correlation between the possible NP contribution to $\Delta B=2$ amplitudes discussed in Section~\ref{subsec:DF2} (controlled by $|s_b|$) and a large enhancement in $\mathcal{B}(B_s\to\tau\tau)$. Finally, we mention that $b\to s\nu\nu$ transitions are not significantly modified in this framework. On the one hand, due to its coupling structure, the vector leptoquark does not contribute at tree level to such transitions. On the other hand, the $Z^\prime$ contribution is negligible because of the constraints on the $bsZ^\prime$ coupling, as already discussed in the $b\to s\ell\ell$ case. \subsection{LFV processes} \label{subsec:LFV} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./figLFVRad.png}\qquad\quad\includegraphics[width=0.45\textwidth]{./figLFVBK.png} \caption{Left: $\mathcal{B}(\tau\to\mu\gamma)$ as a function of the NP shift in $R_K$ for different values of $\epsilon_U$. Right: $\mathcal{B}(B^+\to K^+\tau^+\mu^-)$ as a function of the NP shift in $R_K$ for different values of $s_\tau$.} \label{fig:LFVpheno} \end{figure} We finally turn to LFV processes.
Given the unambiguous prediction of a large $\tau \to \mu$ effective coupling, they represent a striking signature of the model. In $b\to s\ell\ell^\prime$ transitions the dominant contribution is mediated by the leptoquark, leading to \begin{align} \begin{aligned} \Re({\cal C}_9^{\tau\mu}) &\approx -\Re({\cal C}_{10}^{\tau\mu}) \approx -\frac{\mathrm{Re}\,(\Delta {\cal C}_9^{\mu\mu})}{s_\tau}\,,\quad\qquad \Re(\mathcal{C}_S^{\tau\mu}) =-\Re(\mathcal{C}_P^{\tau\mu}) \approx - \frac{2\,\etaF\,\mathrm{Re}\,(\Delta {\cal C}_9^{\mu\mu})}{s_\tau}\,. \end{aligned} \end{align} Due to the $s_\tau^{-1}$ enhancement, large NP contributions in $\mathcal{B}(B_s\to\tau\mu)$ and in $\mathcal{B}(B\to K\tau\mu)$ are expected. In the former case the effect is further reinforced by the chiral-enhancement of scalar amplitudes, leading to \begin{align} \begin{aligned} \mathcal{B}(B_s\to\tau^+\mu^-) &\approx 2 \times 10^{-4}\, \left(\frac{\Delta R_K}{0.3}\right)^2\left( \frac{0.1}{s_\tau}\right)^2\,,\\[5pt] \mathcal{B}(B\to K^*\tau^+\mu^-)&\approx 1.5 \times 10^{-6}\, \left(\frac{\Delta R_K}{0.3}\right)^2\left( \frac{0.1}{s_\tau}\right)^2\,,\\[5pt] \mathcal{B}(B^+\to K^+\tau^+\mu^-)&\approx 2 \times 10^{-5}\, \left(\frac{\Delta R_K}{0.3}\right)^2\left( \frac{0.1}{s_\tau}\right)^2\,, \end{aligned} \end{align} with $\mathcal{B}(B^-\to K^-\tau^-\mu^+)=\mathcal{B}(B^+\to K^+\tau^+\mu^-)$ and $\mathcal{B}(B^+\to K^+\tau^-\mu^+)\approx\mathcal{B}(B_s\to\tau^-\mu^+)\approx 0$, and similarly for the $K^*$ channel. NP effects in the latter are predicted to be smaller because, contrary to the $K$ channel, the scalar contributions are suppressed in this case. While there are no experimental constraints in $B_s\to\tau\mu$ so far, the model prediction for $B^+\to K^+\tau^+\mu^-$ lies close to the current experimental limit by BaBar: $\mathcal{B}(B^+\to K^+\tau^+\mu^-)<2.8\times10^{-5}$~(90\% CL)~\cite{Lees:2012zz}. 
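The interplay between the $R_K$ shift, $s_\tau$, and the BaBar limit can be sketched numerically from the scaling quoted above (only $|\Delta R_K|$ enters, since the shift appears squared):

```python
def BR_BtoKtaumu(dRK_abs, s_tau):
    """Approximate model scaling for B(B+ -> K+ tau+ mu-), as quoted above."""
    return 2e-5 * (dRK_abs / 0.3) ** 2 * (0.1 / s_tau) ** 2

BABAR_LIMIT = 2.8e-5  # 90% CL upper limit on B(B+ -> K+ tau+ mu-)

# Reference point |Delta R_K| = 0.3, s_tau = 0.10 sits just below the limit,
# while a mildly smaller s_tau already saturates it.
```

For $|\Delta R_K|=0.3$ one finds $2\times10^{-5}$ at $s_\tau=0.10$ (allowed) but about $3.1\times10^{-5}$ at $s_\tau=0.08$ (excluded), illustrating how close the prediction sits to the BaBar bound.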
In Figure~\ref{fig:LFVpheno} (right) we show the predicted values of $\mathcal{B}(B^+\to K^+\tau^+\mu^-)$ as a function of the NP shift in $R_K$ and for different benchmark values of $s_\tau$. We also note that, contrary to other proposed solutions to the anomalies, in our model the $s\tau U$ coupling is very small, resulting in a negligible contribution to the $\tau\to\phi\mu$ decay rate. In purely leptonic decays the most interesting observable is $\tau\to\mu\gamma$. Radiative LFV decays are generated at the one-loop level, both by $Z^\prime$ and $U$ loops. The leptoquark yields the largest contribution due to its larger couplings and the $m_b$-enhancement of the loop function. From the explicit one-loop calculation (see appendix~\ref{app:LFV}), we find a branching ratio just below the current experimental limit set by BaBar: $\mathcal{B}(\tau \to \mu \gamma)<4.4 \times 10^{-8}\;(90\%\,\mathrm{CL})$~\cite{Aubert:2009ag}. In Figure~\ref{fig:LFVpheno} (left) we show the prediction for $\mathcal{B}\left(\tau\to\mu\gamma\right)$ as a function of the NP contribution to $R_K$ for different values of $\epsilon_U$. The model also predicts a sizable NP contribution to $\tau\to3\mu$, mediated by a tree-level $Z^\prime$ exchange. We obtain the following approximate expression: \begin{align} \mathcal{B}\left(\tau\to3\mu\right)&\approx C_{Z^\prime}^2\, s_\tau^2\left[ 28\,(s_\tau^2+\epsilon_\ell)^2-38\left(\frac{g_1}{g_4}\right)^2\left(s_\tau^2+\epsilon_\ell-2\left(\frac{g_1}{g_4}\right)^2\right)\right]\,. \end{align} For typical values of the model parameters, this contribution lies about one order of magnitude below the current experimental limit by Belle: $\mathcal{B}\left(\tau\to3\mu\right)<1.1\times 10^{-8}\;(90\%\,\mathrm{CL})$~\cite{Hayasaka:2010np}. However, this conclusion depends strongly on the precise value of $s_\tau$. Purely leptonic LFV transitions of the type $\mu\to e$ are controlled by the mixing angle $s_e$ in Eq.~\eqref{eq:Le}.
We find that the most stringent constraint on this angle comes, at present, from the experimental bound on $\mu\to3e$ set by the Sindrum Collaboration: $\mathcal{B}\left(\mu\to3e\right)<1.0\times10^{-13}~(90\%\,\mathrm{CL})$~\cite{Bellgardt:1987du}. Similarly to $\tau \to 3\mu$, also $\mu\to3e$ is dominated by the tree-level exchange of the $Z^\prime$, which yields \begin{align} \begin{aligned} \mathcal{B}\left(\mu\to3e\right)&\approx 420\,C_{Z^\prime}^2\left(\frac{g_1}{g_4}\right)^4\,s_e^2\left(\epsilon_\ell+s_\tau^2\right)^2 \\ &\approx (1-10) \times 10^{-14} \left(\frac{s_e}{0.01}\right)^2\left(\frac{\epsilon_\ell+s_\tau^2}{0.02}\right)^2\,, \end{aligned} \end{align} where the range in the second numerical expression reflects the uncertainty on the $Z'$ mass and couplings. Assuming $\epsilon_\ell\sim\epsilon_U\sim\mathcal{O}(10^{-2})$, and taking natural values for the other parameters, we find \begin{align} s_e\lesssim 10^{-2}\,, \label{eq:boundL12} \end{align} consistent with the EFT estimate derived in~\cite{Bordone:2017anc}.\footnote{Despite being stringent, the bound on $s_e$ in \eqref{eq:boundL12} is not unnatural. The benchmark for subleading $\mathrm{U(2)}_\ell$ breaking terms not aligned to the second generation is provided by $(m_e/m_\mu)^{1/2} \approx 7 \times 10^{-2}$.} Another important constraint on $s_e$, which however depends also on $\theta^R_{\tau\mu}$, is provided by $\mu\to e\gamma$. As in $\tau\to\mu\gamma$, contributions to this observable appear in our model at one loop, with the dominant effect being mediated by the leptoquark. We find \begin{align} \mathcal{B}(\mu\to e\gamma)&\approx6 \times 10^{-13}\left(\frac{\Delta R_K}{0.3}\right)^2\left(\frac{0.01}{\epsilon_U}\right)^2\left(\frac{s_e}{0.01}\right)^2\left(\frac{\left|\theta^R_{\tau\mu}\right|}{0.01}\right)^2\,, \end{align} to be compared with the bound by the MEG Collaboration: $\mathcal{B}(\mu\to e\gamma)<4.2\times10^{-13}$~(90\% CL)~\cite{TheMEG:2016wtm}.
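The implied ceiling on $s_e$ for the reference values of the other parameters follows directly from this scaling; a minimal numerical sketch:

```python
import math

def br_mu_egamma(dRK_abs, eps_U, s_e, theta_R):
    """Approximate scaling of B(mu -> e gamma), as quoted above."""
    return 6e-13 * (dRK_abs / 0.3) ** 2 * (0.01 / eps_U) ** 2 \
        * (s_e / 0.01) ** 2 * (theta_R / 0.01) ** 2

MEG_LIMIT = 4.2e-13  # 90% CL upper limit

# Largest s_e compatible with MEG at the reference point of the other parameters
s_e_max = 0.01 * math.sqrt(MEG_LIMIT / br_mu_egamma(0.3, 0.01, 0.01, 0.01))
```

At the reference point the prediction ($6\times10^{-13}$) slightly exceeds the MEG limit, so $s_e$ must lie a bit below $10^{-2}$ ($s_e \lesssim 0.008$ for $|\theta^R_{\tau\mu}|=0.01$), consistent with Eq.~(\ref{eq:boundL12}).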
Other limits on this angle are significantly weaker. In particular, from the $Z'$ contribution to $\bar \mu e \bar d d$ effective operators, which are constrained by $\mu\to e$ conversion~\cite{Carpentier:2010ue,Giudice:2014tma}, we get $s_e\lesssim 10^{-1}$. On the other hand, the leading contribution to $\bar \mu e \bar d d^{(\prime)}$ effective operators is due to the leptoquark exchange, and the dominant constraint is set by $K_L\to\mu e$~\cite{Giudice:2014tma}. In this case the amplitude is (formally) independent of $s_e$, but it depends on the subleading $\mathrm{U(2)}_\ell$ breaking parameter $\Delta\epsilon_U$, defined in Eq.~(\ref{eq:DeltaU}): \begin{align} \mathcal{B}( K_L \to \mu^\pm e^\mp ) & \approx0.8\times 10^{-5} \, (\Delta \epsilon_U)^2\,\left(\frac{\Delta R_K}{0.3}\right)^2\left( \frac{0.1}{s_\tau}\right)^2\,. \end{align} Using the current experimental bound by the BNL Collaboration, $\mathcal{B}( K_L \to \mu^\pm e^\mp )<0.47\times10^{-11}~(90\%\,\mathrm{CL})$~\cite{Ambrose:1998us}, we find \begin{align} \Delta \epsilon_U\lesssim 6\times 10^{-4}\,. \end{align} This bound is consistent with the naive estimate of this parameter, $\Delta\epsilon_U = {\cal O}(\epsilon_U s_e s_d)$, provided $s_e$ satisfies the bound in Eq.~(\ref{eq:boundL12}). \section{Low-energy fit and discussion}\label{sect:Fit} In order to quantify precisely how well the proposed model describes the anomalies, we perform a fit to low-energy data. We work in the minimal breaking scenario presented in Section~\ref{sect:MainYuk} and set $\Delta\alpha_d=\pi$ to minimize undesired NP contributions in $\mathcal{B}(B\to\tau\nu)$ and $\Delta F=2$ transitions, as discussed in Section~\ref{sect:pheno}. We also restrict ourselves to the case $s_e=0$, hence to vanishing LFV in $\mu\to e$ transitions, given that this parameter has no impact on the description of the anomalies.
Under these assumptions, the following model parameters have a relevant impact at low energies: $\omega_1,\omega_3,\,s_\tau,\epsilon_R^e,s_b,\phi_b,\epsilon_U$.\footnote{In order to remove marginally relevant parameters, we fix $\epsilon_q=\epsilon_\ell=\epsilon_U$. We have checked explicitly that departing from this restriction, while keeping $\epsilon_q$ and $\epsilon_\ell$ within their expected range, has no effect on fit results. We also set $\phi_\tau$ to zero and treat $\epsilon_U$ and $\epsilon_R^e$ as real parameters, since these extra phases do not introduce any interesting features. Finally, we conservatively assume $\epsilon_R^d=0$; a non-zero value for this parameter would slightly improve the agreement with $\Delta F=2$ data.} The first two are related to the NP scale, while the other five control the breaking of the $\mathrm{U(2)}^5$ symmetry. We perform a Bayesian estimation of these parameters using the log-likelihood \begin{align} \log\mathcal{L}=-\frac{1}{2}\sum_{i\in \rm obs}\left(\frac{x^{{\rm PS}^3}_i-x^{\rm exp}_i}{\sigma_i}\right)^2\,, \end{align} constructed from the observables listed in Tables~\ref{tab:LFV_exp},~\ref{tab:LFU_exp},~\ref{tab:semileptonic} and~\ref{tab:hadronic} and using the expressions in appendix~\ref{app:obs} for the model predictions. For the CKM matrix elements we take the values reported in the NP fit from UTfit, and for the remaining input parameters we use PDG values~\cite{Patrignani:2016xqp}. For the Bayesian analysis we use the nested sampling algorithm implemented in the public package MultiNest~\cite{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea}. The resulting posterior probabilities are analysed using the Markov Chain sample analysis tool GetDist~\cite{GetDist}.
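For concreteness, the Gaussian log-likelihood used in the fit amounts to the following (a schematic sketch; the actual observable lists and model predictions are those of Tables~\ref{tab:LFV_exp}--\ref{tab:hadronic} and appendix~\ref{app:obs}):

```python
def log_likelihood(x_pred, x_exp, sigma):
    """Gaussian log-likelihood of the fit, up to an additive constant:
    log L = -1/2 * sum_i ((x_i^pred - x_i^exp) / sigma_i)^2
    """
    return -0.5 * sum(((p - e) / s) ** 2 for p, e, s in zip(x_pred, x_exp, sigma))
```

This is the quantity passed to the nested-sampling routine; a prediction matching every measurement exactly gives $\log\mathcal{L}=0$, and each one-sigma deviation costs $0.5$ units.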
In the analysis we consider flat priors on all the parameters over the following ranges\footnote{Since the observables considered in the fit are not sensitive to the individual signs of $\epsilon_U$ and $s_\tau$ but only to their product, there is a double degeneracy in the fit. We remove this degeneracy by considering both $\epsilon_U$ and $s_\tau$ to be positive.} \begin{align} \begin{aligned} \omega_1&\in\left[0.3,1.5\right]~\mathrm{TeV},&\qquad\omega_3&\in\left[0.3,1.5\right]~\mathrm{TeV},&\qquad s_\tau &\in\left[0,0.15\right]\,,\\[5pt] s_b&\in\left[-0.1,0.1\right],&\phi_b&\in\left[0,\pi\right],&\epsilon_R^e &\in\left[-0.01,0.01\right]\,,\\[5pt] \epsilon_U&\in\left[0,0.02\right]\,. \end{aligned} \end{align} \begin{figure}[t] \centering \includegraphics[width=0.325\textwidth]{./sbVSphib.png}~~~\includegraphics[width=0.325\textwidth]{./epsUVSstau.png}~~\includegraphics[width=0.325\textwidth]{./epsRSstau.png} \caption{$68\%$ (dark blue) and $95\%$ (light blue) posterior probabilities of $\phi_b$ and $s_b$ (left), $\epsilon_U$ and $s_\tau$ (center), and of $\epsilon_R^e$ and $s_\tau$ (right).} \label{fig:fitparam} \end{figure} We obtain the following $68\%$ probability ranges for the model parameters, extracted from the marginalized posterior probabilities: \begin{align} \omega_1&=1.0\pm 0.3~\mathrm{TeV},&\qquad\omega_3&=1.2\pm 0.2~\mathrm{TeV},&\qquad s_\tau &=0.11\pm 0.03,\nonumber\\[2pt] s_b&=(0.09\pm 0.06)\,|V_{ts}|,&\phi_b&=\left(0.55\pm 0.15\right)\pi,& \epsilon_R^e &=(0.11\pm 0.03)\,\frac{m_\mu}{m_\tau}\,,\nonumber\\[2pt] \epsilon_U&=(1.2\pm 0.3)\times 10^{-2}\,. \end{align} In Figure~\ref{fig:fitparam} we show the $68\%$ and $95\%$ two-dimensional posterior probabilities for $s_b$ and $\phi_b$, $\epsilon_U$ and $s_\tau$, and for $\epsilon_R^e$ and $s_\tau$. As can be seen, there is a clear correlation between the phase $\phi_b$ and the maximum allowed value of $s_b$. We also find that positive values of $s_b$ are preferred.
This behaviour is expected from the discussion in the previous section: while the size of $s_b$ and the preferred value of $\phi_b$ are connected to the (negative) NP contribution to $\Delta F=2$, the preference for a positive $s_b$ is related to the partial cancellations in $D-\bar D$ mixing and $\mathcal{B}(B\to\tau\nu)$. On the other hand, the anti-correlation between $\epsilon_U$ and $s_\tau$ can be easily understood from the fact that the NP contribution in $b\to s\ell\ell$ transitions is proportional to the product of these two parameters, i.e. $\mathrm{Re}\,(\Delta {\cal C}_9^{\mu\mu }) \approx -\, \mathrm{Re}\,(\Delta {\cal C}_{10}^{\mu\mu}) \propto C_U\,s_\tau\,\epsilon_U$. Finally, we find a significant correlation between $\epsilon_R^e$ and $s_\tau$. As shown in the previous section, a mild cancellation (at the level of $20\%$) between these two parameters is required to ensure a sufficiently small $\theta^R_{\tau\mu}$, as indicated by $\mathcal{B}(B_s\to\mu\mu)$ and $\mathcal{B}(\mu\to e\gamma)$. Note that, besides the smallness of $s_b$ compared to $|V_{ts}|$, the other three mixing parameters ($\epsilon_U$, $s_\tau$, and $\epsilon_R^e$) turn out to have magnitudes in good agreement with their natural parametric size. Concerning low-energy observables, we reach similar conclusions to those already discussed in Section~\ref{sect:pheno} in terms of simplified analytical expressions. In Figure~\ref{fig:anomalies} we show the $68\%$ and $95\%$ posterior probabilities for $\Delta R_{D^{(*)}}$ and $\Delta R_K$. As can be seen, the model can fully accommodate the anomalies in $b\to s\ell\ell$. However, as anticipated in Section~\ref{subsec:b2ctaunu}, the complete explanation of the $R_{D^{(*)}}$ anomalies within this framework is limited by LFU tests in $\tau$ decays. From the fit we obtain an NP enhancement of around $7\%$--$8\%$ for $R_{D^*}$ and $18\%$--$22\%$ for $R_D$.
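These fit-level ranges can be cross-checked against the simplified expressions in Eq.~(\ref{eq:DRDCU}); a sketch assuming $\etaF=1.8$, the value quoted for $M_U=2$~TeV (the benchmark $C_U$ values below are illustrative):

```python
eta_S = 1.8  # QCD running factor for the scalar operator at M_U = 2 TeV

def delta_RD(C_U):
    # Delta R_D ~ 2 C_U (1 + 1.5 eta_S), cf. Eq. (DRDCU)
    return 2 * C_U * (1 + 1.5 * eta_S)

def delta_RDstar(C_U):
    # Delta R_D* ~ 2 C_U (1 + 0.12 eta_S)
    return 2 * C_U * (1 + 0.12 * eta_S)
```

For $C_U=0.04$ (the LFU upper bound) one recovers the $30\%$ ($10\%$) effects quoted in Section~\ref{subsec:b2ctaunu}, while $C_U\approx0.028$, illustrative of the fit output, gives $\Delta R_D\approx21\%$, inside the quoted $18\%$--$22\%$ window.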
\begin{figure}[p] \centering \includegraphics[width=0.45\textwidth]{./RDvsRK.pdf}~~~\includegraphics[width=0.45\textwidth]{./RDsvsRK.pdf} \caption{$68\%$ (dark blue) and $95\%$ (light blue) posterior probabilities of the NP shifts in $R_{D^{(*)}}$ vs.\ $\Delta R_K$. The experimental values at $1\sigma$ ($2\sigma$) are indicated by the dark (light) coloured bands.} \label{fig:anomalies} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{./LFVfit2.png}~~~\includegraphics[width=0.48\textwidth]{./LFVfit1.png} \caption{Left: $68\%$ (dark blue) and $95\%$ (light blue) posterior probabilities of $\mathcal{B}(\tau\to\mu\gamma)$ and $\mathcal{B}(B^+\to K^+ \tau^+\mu^-)$ from the global fit. The black lines denote the $95\%$ posterior probabilities fixing $\Delta R_K=-0.3$ (solid) and $\Delta R_K=-0.2$ (dashed). The red bands show the 90\% CL exclusion limits for these observables. Right: $68\%$ (dark blue) and $95\%$ (light blue) posterior probabilities of $\mathcal{B}(\tau\to3 \mu)$ and $\mathcal{B}(B_s \to \tau^+\mu^-)$ from the global fit. } \label{fig:LFVfit} \end{figure} As already emphasized in Section~\ref{subsec:LFV}, in our setup the explanation of the anomalies implies large LFV effects in $\tau\to\mu$ transitions, in particular in $\tau\to\mu\gamma$, $\tau\to3\mu$, $B\to K\tau\mu$, and $B_s\to \tau\mu$. Interestingly, we find that the NP effects in $\tau\to\mu\gamma$ are anti-correlated with those in $B_s\to \tau\mu$ (and $B\to K\tau\mu$), allowing us to directly connect the product of these LFV rates to the NP enhancement in $R_{D^{(*)}}$ and $b\to s\ell\ell$.
More precisely, we find the following relations among NP observables \begin{align}\label{eq:Anom2LFV} \begin{aligned} \left(\frac{\Delta R_D}{0.2}\right)^2\left(\frac{\Delta R_K}{0.3}\right)^2&\approx3\left[\frac{\mathcal{B}(B\to K\tau^+\mu^-)}{3\times10^{-5}}\right]\left[\frac{\mathcal{B}(\tau\to\mu\gamma)}{5\times10^{-8}}\right] \\[5pt] &\approx\left[\frac{\mathcal{B}(B_s\to \tau^+\mu^-)}{1\times10^{-4}}\right]\left[\frac{\mathcal{B}(\tau\to\mu\gamma)}{5\times10^{-8}}\right]\,, \end{aligned} \end{align} which hold almost independently of the model parameters. This is illustrated in Figure~\ref{fig:LFVfit} (left), where we show the $68\%$ and $95\%$ posterior probabilities for $\mathcal{B}(\tau\to\mu\gamma)$ and $\mathcal{B}(B\to K\tau\mu)$. We see that the model predictions for these two observables are close to their experimental bounds, shown as red bands, as implied by the expressions in \eqref{eq:Anom2LFV}. A partial anti-correlation is also present between $\tau\to3 \mu$ and LFV in $B$ decays, as illustrated in Figure~\ref{fig:LFVfit} (right). However, in this case the effect is diluted by the uncertainty on the $Z^\prime$ mass and couplings, which are not strongly constrained by other observables. As a final comment, it is worth stressing that this low-energy fit does not pose stringent constraints on the masses of the heavy vector bosons. The low-energy observables constrain only the effective Fermi couplings in Eq.~(\ref{eq:effFermi}), or, equivalently, $\omega_{1,3}$. Still, we can derive a well-defined range for the vector boson masses taking into account that $g_U \gg g_c$: setting $2.5 \leq g_U \leq 3.0$, the masses of $Z'$, $U$, and $G'$ range between 2 and 3 TeV.
\section{Conclusions} The main idea behind the ${\rm PS}^3$ model is that the flavor universality of strong, weak, and electromagnetic interactions observed at low energies is only a low-energy property: the ultraviolet completion of the SM is a theory where gauge interactions are completely flavor non-universal, with each fermion family being charged under its own gauge group. The motivation for this hypothesis, and for the explicit construction of the ${\rm PS}^3$ model presented in Ref.~\cite{Bordone:2017bld}, is twofold: it explains the pattern of anomalies recently observed in $B$ meson decays and, at the same time, the well-known hierarchical structure of the quark and lepton mass matrices. These two phenomena turn out to be closely connected: they both follow from the dynamical breaking of the flavor non-universal gauge structure holding at high energies down to the SM. On general grounds, low-energy observables put very stringent constraints on flavor non-universal interactions mediated by TeV-scale bosons, as expected in the ${\rm PS}^3$ model. In this paper we have presented a comprehensive analysis of such constraints, and the corresponding implications for future low-energy measurements. As far as the constraints are concerned, we confirm the main conclusions of Ref.~\cite{Bordone:2017bld}: i)~the model is in very good agreement with all existing bounds, without significant tuning of its free parameters; ii)~the model can account for the $B$ anomalies, reaching the $1\sigma$ range of all the present measurements with the exception of $R_{D^*}$, where the maximal allowed deviation from the SM does not exceed the 10\% level. In addition, we have shown that the model can slightly improve the description of $\Delta F=2$ observables with respect to the SM. The most interesting aspect of this analysis is related to the possible implications of the ${\rm PS}^3$ model in view of future low-energy measurements.
We have shown that a remarkable feature is the prediction of sizeable rates for LFV processes of the type $\tau\to\mu$, both in $B$ decays (such as $B\to K \tau \mu$ and $B_s\to\tau\mu$) and in $\tau$ decays (most notably $\tau \to \mu\gamma$ and $\tau \to 3\mu$). The fact that the $B$ anomalies could naturally imply large LFV effects in $B$ decays was first pointed out in Ref.~\cite{Glashow:2014iga}, on the basis of general considerations. The ${\rm PS}^3$ model provides an explicit realization of this mechanism, predicting in addition a strict anti-correlation between $\tau \to \mu\gamma$ and $b\to s \tau\mu$ transitions, illustrated in Figure~\ref{fig:LFVfit}, which can be viewed as a distinctive signature. As we have shown in Section~\ref{subsec:LFV}, the decays $\mu \to 3 e$, $\mu \to e\gamma$, and $K_L \to \mu e$ could also be close to their present exclusion limits; however, this conclusion is less firm given the uncertainty on the $\mu\to e$ mixing, which is not constrained by the anomalies. Besides LFV processes, we have shown that the model predicts interesting non-standard effects in $\Delta F=1$ and $\Delta F=2$ observables, with non-trivial correlations. Particularly relevant and distinctive are the predictions for the violations of LFU in charged currents illustrated in Figure~\ref{fig:CC}: the presence of right-handed currents implies $\Delta R_{D} \approx 2.6\, \Delta R_{D^*}$ and a possible large enhancement of $\mathcal{B}(B\to \tau\nu)$, ranging from $30\%$ up to $100\%$ of the SM prediction. Most of the predictions for low-energy observables presented in this work differ from what is expected in other models proposed for a combined explanation of the $B$ anomalies. The corresponding measurements would therefore be of great value in shedding light on the dynamics behind the anomalies, if unambiguously confirmed as due to physics beyond the SM, and in clarifying their possible link to the origin of quark and lepton masses.
\subsubsection*{Acknowledgments} We thank L. Di Luzio, M. Nardecchia, A. Greljo and M. K\"onig for useful comments and discussions. This research was supported in part by the Swiss National Science Foundation (SNF) under contract 200021-159720. \bigskip
\section{Introduction} Adaptive BCIs have been shown to improve performance \cite{mladenovic:hal-01542504}; however, thorough adaptation is far from being achieved, and a general and flexible framework to implement adaptive features is still lacking. We appeal to a generic Bayesian approach, called Active Inference (AI), which tightly couples perception and action \cite{friston2006free}. Endowing the machine with AI enables it (1) to infer the user's intentions or states by accumulating observations (e.g. electrophysiological data) in a flexible manner, and (2) to act adaptively in a way that optimizes performance. We illustrate AI applied to BCI using realistic P300-speller simulations. We demonstrate that it can implement new features such as optimizing the sequence of flashed letters, and yield significant bit rate increases. \section{Material, Methods and Results} Active Inference rests on an explicit probabilistic model of the user and task. Key variables include the observed data, the user's hidden states, and the machine's actions, as follows. The observed data, here electroencephalography (EEG) responses to target, non-target (P300 or not) and feedback stimuli (Error Potentials -- ErrPs -- or not), allow the machine to infer the user's hidden states, here the intention to spell a letter or to pause, as well as the recognition of a target/non-target or a correct/incorrect feedback. Depending on the hidden states inferred, the computer has possible actions: here, to flash in order to accumulate confidence about the target letter, to stop flashing and display the chosen letter, or to switch off the screen if it infers an idle state of the user, i.e. no P300 response has been observed for some time. Each hidden state is mapped onto observations through the data likelihood matrix, which can be learned from calibration data. Given the machine's actions, the transitions between hidden states are modeled by a probability (Markov) matrix. We also predefine the preference over all possible outcomes.
Typically, the preferred outcome is to be in the state of observing a correctly spelled letter. Finally, a parameter $\gamma$ sets the exploration-exploitation tradeoff for action selection. We compared AI to two classical approaches: \begin{enumerate} \item P300-spelling with a fixed number of flash repetitions (12) and pseudo-random flashing; \item P300-spelling with pseudo-random flashing but optimal stopping \cite{mattout2015improving}. \end{enumerate} To do so, we used data from 18 subjects from a previous P300-speller experiment \cite{perrin2011detecting}. For each algorithm and subject, we simulated the spelling of 12000 letters. Furthermore, to demonstrate AI's flexibility, we implemented a ``LookAway'' case, in which the machine would infer the user to be in an idle state and would switch the screen off. We also simulated an ErrP classification enabling the automated detection of a wrongly spelled letter. In case of such a detection, AI picks the next most probable letter to spell, or chooses to continue flashing to strengthen its confidence. AI showed a significantly higher bit rate (54.12 bit/min) than the second best strategy (optimal stopping, 45.70 bit/min), see Figure \ref{fig:plot}. Its performance increased even further when a perfect ErrP classifier was used (73 bit/min). Finally, when idle user states were simulated, it accurately switched off the speller $\approx$89\% of the time, after $\approx$24 flashes. \begin{figure}[h!] \vspace{-1em} \begin{center} \includegraphics[width=0.75\textwidth]{ai.png} \vspace{-1em} \caption{\textit{Comparison in bit rate (bit/min) between various flashing methods in a P300 BCI application. Data collected from the simulated spelling of 12000 letters with 18 subjects who were recorded in a previous experiment \cite{perrin2011detecting}.
All methods significantly differ from one another (ANOVA, Tukey post-hoc, $p < 0.01$).} \label{fig:plot}} \end{center} \vspace{-1.5em} \end{figure} \section{Discussion} Our results demonstrate a great potential for implementing adaptive BCIs beyond existing approaches, showing an increase of 18\% and 59\% (with the ErrP classifier) in bit rate. \section{Significance} AI outperforms the other algorithms while offering the possibility of unifying various adaptive implementations within one generic framework. Thanks to such genericity, with only minor tuning of its parameters, AI can incorporate many features, such as automated correction or accounting for an idle user state. It can adjust to signal variability by performing inference about the user, but it can also take into account the influence of its actions on the user. This approach lays the ground for future co-adaptive systems.
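The evidence-accumulation and optimal-stopping logic behind these comparisons can be sketched in a few lines. The following is a hypothetical toy (fixed round-robin flashing and assumed classifier hit/false-alarm rates of 0.7/0.2 — not the classifiers, likelihood matrices, or AI implementation used in the study):

```python
import numpy as np

def p300_posterior_update(prior, flashed, detected, p_hit=0.7, p_false=0.2):
    """One Bayesian update of the posterior over candidate letters, given
    the noisy binary P300-classifier output for a single flash."""
    like = np.where(np.arange(len(prior)) == flashed,
                    p_hit if detected else 1.0 - p_hit,
                    p_false if detected else 1.0 - p_false)
    post = prior * like
    return post / post.sum()

def spell_one_letter(target, n_letters=6, threshold=0.95, max_flashes=60, seed=1):
    """Flash letters round-robin, accumulate evidence, and stop as soon as
    one letter reaches the confidence threshold (the optimal-stopping idea)."""
    rng = np.random.default_rng(seed)
    post = np.full(n_letters, 1.0 / n_letters)
    n_flashes = max_flashes
    for k in range(max_flashes):
        flashed = k % n_letters
        p = 0.7 if flashed == target else 0.2   # simulated noisy classifier
        post = p300_posterior_update(post, flashed, rng.uniform() < p)
        if post.max() >= threshold:
            n_flashes = k + 1
            break
    return int(post.argmax()), n_flashes

choice, n_flashes = spell_one_letter(target=3)
```

Active Inference goes beyond this sketch by also choosing *which* letter to flash next so as to maximize expected information gain, rather than cycling through a fixed sequence.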
\section*{Funding Information} P. G. gratefully acknowledges financial support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement FLATLIGHT No. 639109). J.R.O., H.S.C. and V.H.C. acknowledge the funding support from Agency for Science, Technology and Research - Science and Engineering Research Council for Pharos grant award No. 152-73-00025. \bibliographystyle{abbrv}
\section{Symmetric spaces in characteristic 2} \para{1.1.} Let $V$ be an $N$-dimensional vector space over an algebraically closed field $\Bk$. We assume that $\ch \Bk \ne 2$. Let $\lp \ ,\ \rp$ be a non-degenerate symmetric bilinear form on $V$. The orthogonal group $O(V)$ associated to the form $\lp \ ,\ \rp$ is defined as \begin{equation*} \tag{1.1.1} O(V) = \{ g \in GL(V) \mid \lp gv, gw\rp = \lp v,w\rp \text{ for any } v,w \in V \}. \end{equation*} If we define the quadratic form $Q(v)$ on $V$ by $Q(v) = \lp v, v\rp$, (1.1.1) is equivalent to \begin{equation*} \tag{1.1.2} O(V) = \{ g \in GL(V) \mid Q(gv) = Q(v) \text{ for any } v \in V\}. \end{equation*} \par We fix a basis $e_1, \dots, e_N$ of $V$, and identify $GL(V)$ with $G = GL_N$ by using this basis. If we define the matrix $J \in GL_N$ by $J = (\lp e_i, e_j\rp)$, (1.1.1) is also written as \begin{equation*} \tag{1.1.3} O(V) = \{ g \in G \mid {}^tgJg = J\}. \end{equation*} \par We define a map $\th : G \to G$ by $\th(g) = J\iv ({}^tg\iv)J$. Then $\th$ gives rise to an involutive automorphism of $G$, and we have $G^{\th} = O(V)$. \par In the above discussion, if we replace the symmetric bilinear form by a non-degenerate skew-symmetric bilinear form $\lp \ ,\ \rp$ on $V$ with $N$ even, we can define the symplectic group $Sp(V)$ in a similar way as $O(V)$ by using (1.1.1). The matrix $J \in GL_N$ is defined similarly, and by using the involutive automorphism $\th : G \to G$ defined by $\th(g) = J\iv({}^tg\iv)J$, we obtain an analogue of (1.1.3) for $Sp(V)$. Hence in this case also $G^{\th} = Sp(V)$. \para{1.2.} Hereafter, throughout the paper, we assume that $\ch\Bk = 2$. Put $G = GL_N$.
Consider an involutive automorphism $\th : G \to G$ defined by $\th(g) = J\iv({}^tg\iv)J$, where \begin{align*} \tag{1.2.1} J &= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1_n \\ 0 & 1_n & 0 \end{pmatrix} \quad \text{ if } N = 2n+1, \\ \\ \tag{1.2.2} J &= \begin{pmatrix} 0 & 1_n \\ 1_n & 0 \end{pmatrix} \qquad\quad \text{ if } N = 2n, \end{align*} with $1_n$ the identity matrix of degree $n$. Thus we can consider the fixed point subgroup $G^{\th}$ and the symmetric space $G^{\io\th}$. If $\ch\Bk \ne 2$, then $G^{\th} \simeq O(V)$, and the generalized Springer correspondence with respect to $G^{\io\th}$ was discussed in [SY]. The aim of this paper is to consider a similar problem for $G^{\io\th}$ in the case where $\ch\Bk = 2$. \para{1.3.} We consider $\th : G \to G$ as in 1.2. Since $J = J\iv = {}^t\!J$, we have \begin{align*} \tag{1.3.1} G^{\io\th} &= \{ g \in G \mid J\iv({}^tg\iv)J = g\iv \} \\ &= \{ g \in G \mid {}^t(Jg) = Jg \}. \end{align*} Let $\Fg = \Fg\Fl_N$ be the Lie algebra of $G$, and $\th : \Fg \to \Fg$ be the linear automorphism induced from $\th : G \to G$. Since $\ch\Bk = 2$, $\th$ is given as $x \mapsto - J({}^tx)J = J({}^tx)J$ for $x \in \Fg$. Hence \begin{align*} \tag{1.3.2} \Fg^{\th} &= \{ x \in \Fg \mid J({}^tx)J = x \} \\ &= \{ x \in \Fg \mid {}^t(Jx) = Jx\}. \end{align*} Put $\Fg^{\th}\nil = \Fg^{\th} \cap \Fg\nil$. By comparing (1.3.1) and (1.3.2), we have \begin{lem} The map $g \mapsto g-1$ gives an isomorphism $G^{\io\th}\uni \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \Fg^{\th}\nil$, which is compatible with the conjugate action of $G^{\th}$. \end{lem} \para{1.5.} In order to study $G^{\th}$ and $G^{\io\th}$, we need to consider the orthogonal group over the field of characteristic 2. Since (1.1.1) and (1.1.2) are not equivalent in the case where $\ch \Bk = 2$, we have to define the orthogonal group by using the quadratic form $Q(v)$ on $V$. Let $Q(v)$ be a quadratic form on $V$. 
We define an associated bilinear form $\lp \ , \ \rp$ by \begin{equation*} \tag{1.5.1} \lp v, w \rp = Q(v + w) - Q(v) - Q(w). \end{equation*} \par Here we give the quadratic form $Q(v)$ explicitly as follows. First assume that $N = 2n$, and fix a basis $e_1, \dots, e_n, f_1, \dots, f_n$ of $V$. For $v = \sum_ix_ie_i + \sum_iy_if_i$, define \begin{equation*} Q(v) = \sum_{i=1}^nx_iy_i. \end{equation*} Then the associated bilinear form is given by $\lp v, w\rp = \sum_i(x_iy_i' + x_i'y_i)$ for $v = \sum_ix_ie_i + \sum_iy_if_i, w = \sum_ix_i'e_i + \sum_iy_i'f_i$. Next assume that $N = 2n + 1$, and fix a basis $e_0, e_1, \dots, e_n, f_1, \dots, f_n$ of $V$. For $v = \sum_ix_ie_i + \sum_iy_if_i$, define \begin{equation*} Q(v) = x_0^2 + \sum_{i=1}^nx_iy_i. \end{equation*} Then the associated bilinear form is given by $\lp v, w\rp = \sum_{i\ge 1}(x_iy_i' + x_i'y_i)$ for $v = \sum_{i \ge 0}x_ie_i + \sum_{i \ge 1}y_if_i$, $w = \sum_{i \ge 0}x_i'e_i + \sum_{i \ge 1}y_i'f_i$. \par We define the orthogonal group $O(V)$ as in (1.1.2). If $g \in O(V)$, then $g$ leaves the form $\lp\ ,\ \rp$ invariant by (1.5.1). It follows that \begin{equation*} \tag{1.5.2} O(V) \subset \{ g \in GL(V) \mid \lp gv, gw\rp = \lp v,w \rp \text{ for any $v,w \in V$} \}. \end{equation*} \para{1.6.} Assume that $N = 2n$. Since $\ch\Bk = 2$, the notions of symmetric and skew-symmetric bilinear forms on $V$ coincide. The definition of the symplectic group given in 1.1 makes sense even if $\ch\Bk = 2$, which we denote by $Sp(V)$. Thus the right hand side of (1.5.2) coincides with $Sp(V)$ with respect to this form, and we have \begin{equation*} \tag{1.6.1} O(V) \subset Sp(V). \end{equation*} By using the explicit description of the associated symmetric bilinear form $\lp \ ,\ \rp$ on $V$ given in 1.5, we see that $Sp(V)$ can be written as \begin{equation*} \tag{1.6.2} Sp(V) = \{ g \in G \mid {}^tgJg = J \}, \end{equation*} where $J$ is as in (1.2.2). In particular, we have $G^{\th} = Sp(V)$.
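For the reader's convenience, the identification of $J$ in (1.2.2) with the Gram matrix of the associated form is a direct check from the explicit formula in 1.5:

```latex
\lp e_i, e_j \rp = \lp f_i, f_j \rp = 0, \qquad
\lp e_i, f_j \rp = \lp f_j, e_i \rp = \delta_{ij} \qquad (1 \le i, j \le n),
```

so that the Gram matrix of $\lp\ ,\ \rp$ with respect to the ordered basis $e_1, \dots, e_n, f_1, \dots, f_n$ is precisely $J = \begin{pmatrix} 0 & 1_n \\ 1_n & 0 \end{pmatrix}$, which yields the description (1.6.2) of $Sp(V)$.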
\par Let $\Fs\Fp(V)$ be the Lie algebra of $Sp(V)$. It follows from (1.6.2) that \begin{align*} \Fs\Fp(V) &\subseteq \{ x \in \Fg\Fl(V) \mid \lp xv,w \rp + \lp v, xw \rp = 0 \text{ for any } v,w \in V \} \\ &= \{ x \in \Fg\Fl(V) \mid {}^txJ + Jx = 0\} \\ &=\Fg^{\th}. \end{align*} The last equality follows from (1.3.2). Here $\dim \Fs\Fp(V) = \dim Sp(V) = 2n^2 + n$, while the dimension of the space of symmetric matrices of degree $2n$ is equal to $n(2n + 1) = 2n^2 + n$. Thus equality holds in the above formulas, and we have \begin{equation*} \tag{1.6.3} \Fs\Fp(V) = \Fg^{\th}. \end{equation*} \par Summing up the above arguments, together with Lemma 1.4, we have the following. \begin{prop} Assume that $N = 2n$. Then $G^{\th} = Sp(V)$, and $G^{\io\th}\uni \simeq \Fg^{\th}\nil = \Fs\Fp(V)\nil$. The behaviour of $G^{\th}$-orbits on $G^{\io\th}\uni$ is the same as that of $Sp(V)$-orbits on $\Fs\Fp(V)\nil$. \end{prop} \remark{1.8.} Let $W_n$ be the Weyl group of type $C_n$. It is known by Hesselink [Hes] and Spaltenstein [Spa] that the number of $Sp(V)$-orbits in $\Fs\Fp(V)\nil$ is finite, and that those orbits are parametrized by $W_n\wg$. \remark{1.9.} In the case where $\ch\Bk = 2$ and $N$ is even, it is not known whether $O(V)$ can be realized as $G^{\th}$ for some involutive automorphism $\th : G \to G$. \para{1.10.} Assume that $N = 2n+1$. Changing the notation, we consider the vector space $V'$ with basis $e_0, e_1, \dots, e_n, f_1, \dots, f_n$, and let $V$ be the subspace of $V'$ spanned by $e_1, \dots, e_n, f_1, \dots, f_n$. We identify $Sp(V)$ with $Sp_{2n}$ by using this basis. $Sp_{2n}$ can be written explicitly as \begin{equation*} \tag{1.10.1} Sp_{2n} = \biggl\{ \begin{pmatrix} f & g \\ h & k \end{pmatrix} \in GL_{2n} \big |\ f, g, h, k \in \SM_n, \text{ (*) } \biggr\}, \end{equation*} where $\SM_n$ is the set of square matrices of degree $n$, and the condition (*) is given by \begin{equation*} \tag{1.10.2} {}^thf = {}^tfh, \quad {}^tkg = {}^tgk, \quad {}^thg + {}^tfk = 1.
\end{equation*} We have the following result. \begin{prop} Assume that $N = 2n + 1$, and $G = GL_N$. Then \begin{equation*} \tag{1.11.1} G^{\th} = \biggl\{ \begin{pmatrix} 1 & 0 \\ 0 & y \end{pmatrix} \big | \ y \in Sp_{2n} \biggr\}. \end{equation*} In particular, $G^{\th} = SO(V') \simeq Sp(V)$. \end{prop} \begin{proof} Take $x \in GL_N$, and write it as \begin{equation*} x = \begin{pmatrix} a & \Bb & \Bc \\ {}^t\Bd & f & g \\ {}^t\Be & h & k \end{pmatrix}, \end{equation*} where $\Bb = (b_1, \dots, b_n), \Bc = (c_1, \dots, c_n)$ and $\Bd = (d_1, \dots, d_n), \Be = (e_1, \dots, e_n)$, together with $f,g,h,k \in \SM_n$. Note that $G^{\th} = \{ x \in G \mid {}^txJx =J\}$. The condition ${}^txJx = J$ implies, in particular, that \begin{align*} a^2 + \Be\cdot {}^t\Bd + \Bd\cdot {}^t\Be &= 1, \\ {}^t\Bb \cdot \Bb + {}^thf + {}^tfh &= 0, \\ {}^t\Bc\cdot \Bc + {}^tkg + {}^tgk &= 0. \end{align*} Since $\Be\cdot{}^t\Bd + \Bd\cdot{}^t\Be = 2\sum_{i=1}^nd_ie_i = 0$, the first equality implies that $a = 1$. Since the diagonal entries of ${}^thf + {}^tfh$ are all zero, the $ii$-entry of the matrix ${}^t\Bb\cdot\Bb + {}^thf + {}^tfh$ is equal to $b_i^2$. Hence $\Bb = 0$ by the second equality. A similar argument using the third equality shows that $\Bc = 0$. Now the condition ${}^txJx = J$ is equivalent to the condition that \begin{equation*} \tag{1.11.2} \begin{cases} \Be f + \Bd h = 0, \\ \Be g + \Bd k = 0, \\ {}^thg + {}^tfk = 1, \\ {}^thf = {}^tfh, \\ {}^tkg = {}^tgk. \end{cases} \end{equation*} By comparing (1.11.2) with (1.10.2), we can write \begin{equation*} x = \begin{pmatrix} 1 & 0 & 0 \\ {}^t\Bd & f & g \\ {}^t\Be & h & k \end{pmatrix} \quad\text{ with } y = \begin{pmatrix} f & g \\ h & k \end{pmatrix} \in Sp_{2n}. \end{equation*} Since $y$ is non-degenerate, the relation $(\Be,\Bd)y = 0$ implies that $\Bd = \Be = 0$. This proves (1.11.1). Now by 1.5, one can check that $G^{\th} \subset O(V')$. We have $\dim G^{\th} = \dim Sp_{2n} = \dim SO_{2n+1} = 2n^2 + n$.
Since $Sp(V)$ is connected, we conclude that $G^{\th} = SO(V')$. The proposition is proved. \end{proof} \para{1.12.} We determine $\Fg^{\th}$ in the case where $N = 2n+1$. By (1.3.2), we have $\dim \Fg^{\th} = (n + 1)(2n + 1)$. Since $\dim G^{\th} = \dim Sp_{2n} = 2n^2 + n$, we see that \par\medskip \noindent (1.12.1) \ $\Lie G^{\th} \subsetneq \Fg^{\th}$, namely (0.1.1) does not hold for $G^{\th}$. \par\medskip More precisely, by direct computation, we have the following. \begin{align*} \tag{1.12.2} \Fg^{\th} &= \left\{ x = \begin{pmatrix} a & \Bb & \Bc \\ {}^t\Bc & f & g \\ {}^t\Bb & h & {}^tf \end{pmatrix} \bigg | \ a \in \Bk, f,g,h \in \SM_n, {}^th = h, {}^t g = g \right\}, \\ \Lie G^{\th} &= \{ x \in \Fg^{\th} \mid a = \Bb = \Bc = 0 \} \simeq \Fs\Fp(V). \end{align*} Summing up the above arguments, together with Lemma 1.4, we have the following. \begin{prop} Assume that $N = 2n+1$. Then $G^{\th} \simeq Sp(V)$, and $G^{\io\th}\uni \simeq \Fg^{\th}\nil$. Under the embedding $\Fs\Fp(V)\nil \hra \Fg^{\th}\nil$, $\Fs\Fp(V)\nil$ is a $G^{\th}$-stable subset of $\Fg^{\th}\nil$, and the action of $G^{\th}$ on $\Fs\Fp(V)\nil$ coincides with the conjugate action of $Sp(V)$ on it. \end{prop} \para{1.14.} We write $H = Sp(V)$ and $\Fh = \Fs\Fp(V)$. We identify $H$ with $G^{\th}$, and $\Fh$ with a subspace of $\Fg^{\th}$. Then $\Fg^{\th}$ can be written as $\Fg^{\th} = \Fh \oplus \Fg_{V'}$, where $\Fg_{V'}$ is the subspace of $\Fg^{\th}$ consisting of $x \in \Fg^{\th}$ such that $f = g = h = 0$ (notation in (1.12.2)). We express $x \in \Fg_{V'}$ as $x = (a, \Bb, \Bc)$. Put $\Fz = \{ (a, 0, 0) \mid a \in \Bk \}$, and let $\Fg_V$ be the subspace of $\Fg_{V'}$ consisting of $x = (0, \Bb, \Bc)$. Then $\Fg_{V'} = \Fg_{V} \oplus \Fz$. $\Fg_V$ is $H$-stable, and $H$ acts trivially on $\Fz$. We identify $\Fg_{V}$ with $V$ under the correspondence $(0,\Bb,\Bc) \in \Fg_{V} \lra \sum_{i=1}^nc_ie_i + \sum_{i=1}^nb_if_i \in V$ for $\Bb = (b_1, \dots, b_n), \Bc = (c_1, \dots, c_n)$.
Then we can identify $\Fh \oplus \Fg_{V}$ with $\Fh \times V$, where the natural action of $H$ on $\Fg^{\th}$ preserves $\Fh \oplus \Fg_{V}$, and corresponds to the diagonal action of $H$ on $\Fh \times V$. \remark{1.15.} The above discussion shows that considering the action of $H$ on $\Fg^{\th}$ is the same as considering the diagonal action of $H$ on $\Fh \times V$. Hence the situation is quite similar to the case of the exotic symmetric spaces studied in [K1], [SS]. Recall that the exotic symmetric space (in the Lie algebra case) is defined as $\Fg^{-\th} \times V$ for $G^{\th} = Sp(V)$ with $\ch\Bk \ne 2$, together with the diagonal action of $G^{\th}$. But note that the structure of the nilpotent variety is different. In the exotic case, as the nilpotent variety, we have considered $\Fg^{-\th}\nil \times V$ (Kato's exotic nilpotent cone [K1]). In the present case, we consider $\Fg^{\th}\nil = (\Fh\oplus \Fg_{V}) \cap \Fg\nil$, which corresponds to a certain subset of $\Fh\nil \times V$. \para{1.16.} We follow the notation in 1.14. In the remainder of this section, we shall show that $\Fg^{\th}\nil$ has infinitely many $H$-orbits. Let $B'$ be the subgroup of $G = GL_N$ consisting of elements \begin{equation*} \tag{1.16.1} \begin{pmatrix} a & 0 & \Bc \\ {}^t\Bd & f & g \\ 0 & 0 & k \end{pmatrix}, \end{equation*} where $a \in \Bk$, $\Bc, \Bd \in \Bk^n$ (row vectors), $f, g, k \in \SM_n$, and $f$ is upper triangular, $k$ is lower triangular. Then $B'$ is a $\th$-stable Borel subgroup of $G$. Put $\Fb' = \Lie B'$, and let $\Fn'$ be the nilpotent radical of $\Fb'$. Then $\Fg^{\th} \cap \Fn' \subset \Fg^{\th}\nil$, and we have \begin{equation*} \tag{1.16.2} \Fg^{\th} \cap \Fn' = \biggl\{ \begin{pmatrix} 0 & 0 & \Bc \\ {}^t\Bc & f & g \\ 0 & 0 & {}^tf \end{pmatrix} \mid f \text{ : strictly upper triangular, } {}^tg = g \biggr\}. \end{equation*} The following result gives a counter-example to (0.1.2) in the case where $\ch\Bk = 2$.
\begin{prop} Let $\mathbb O_0$ be the regular nilpotent orbit in $\Fg\Fl_N$. Then $\mathbb O_0 \cap \Fg^{\th}$ has infinitely many $H$-orbits. In particular, $\Fg^{\th}\nil$ has infinitely many $H$-orbits. \end{prop} \begin{proof} Assume that $f \in \SM_n$ corresponds to the transformation on the subspace $\lp e_1, \dots, e_n \rp$ of $V$ defined by \begin{equation*} f : e_n \mapsto e_{n-1} \mapsto \cdots \mapsto e_1 \mapsto 0, \end{equation*} and put $g = \Diag(0, \dots, 0,1) \in \SM_n$. Then $y = \begin{pmatrix} f & g \\ 0 & {}^tf \end{pmatrix}$ gives an element in $\Fh = \Fs\Fp_{2n}$, which is a regular nilpotent element in $\Fs\Fp_{2n}$. Let $\Bc = (0, 0, \dots, 0, \xi)$ with $\xi \in \Bk$, and put $z = (0, 0, \Bc) \in \Fg_V$. Then $x = y + z \in \Fg^{\th}\nil$ by (1.16.2), which we denote by $x(\xi)$. Since the action of $x(\xi)$ on $V'$ is given by \begin{equation*} \begin{cases} f_1 \mapsto f_2 \mapsto \cdots \mapsto f_{n-1} \mapsto f_n, \\ f_n \mapsto e_n + \xi e_0, \\ e_0 \mapsto \xi e_n, \\ e_n \mapsto e_{n-1} \mapsto \cdots \mapsto e_1 \mapsto 0, \end{cases} \end{equation*} we have $x(\xi) \in \mathbb O_0$ for any $\xi \in \Bk^*$. In order to prove the proposition, it is enough to see that \par\medskip\noindent (1.17.1) \ $x(\xi)$ and $x(\xi')$ are not conjugate under $H$ if $\xi \ne \xi'$. \par\medskip We show (1.17.1). Assume that there exists $h \in H$ such that $h(x(\xi)) = x(\xi')$. Write $x(\xi) = y + z, x(\xi') = y + z'$. Since $H$ leaves the decomposition $\Fg^{\th} = \Fh \oplus \Fg_{V'}$ invariant, we must have $h(y) = y$ and $h(z) = z'$. Hence $h \in Z_H(y)$. Since $y \in \Fg^{\th} \cap \Fn'$ is regular nilpotent, $Z_H(y)$ satisfies the property \begin{equation*} \tag{1.17.2} Z_H(y) \subset \left\{ \begin{pmatrix} f_1 & g_1 \\ 0 & {}^tf_1\iv \end{pmatrix} \in Sp_{2n} \mid f_1 : \text{ upper unitriangular }\right\}. \end{equation*} On the other hand, the action of $H$ on $\Fg_V$ is given as in 1.14.
By (1.17.2), we have \begin{equation*} h\cdot {}^t(\underbrace{0, \dots, 0, \xi}_{\text{$n$-times}}, \underbrace{0, 0, \dots, 0}_{\text{$n$-times}}) = {}^t(\underbrace{c_1, \dots, c_n}_{\text{$n$-times}}, \underbrace{0, 0, \dots, 0}_{\text{$n$-times}}) \end{equation*} for some $c_1, \dots, c_n$ with $c_n = \xi$. Since $h(z) = z'$, we have $\xi = \xi'$. Thus (1.17.1) holds. The proposition is proved. \end{proof} \remark{1.18.} In later discussions we use the Jordan decomposition in Lie algebras. It is known (see [Spr, 4.4.20]) that if $\Fg$ is the Lie algebra of an algebraic group $G$, the Jordan decomposition works in $\Fg$. So, in the setting of 1.15, we can apply the Jordan decomposition to $\Fh = \Lie H$, but not to $\Fg^{\th}$. \par\bigskip \section{ Intersection cohomology related to $\Fs\Fp(V)_{\sr}$ } \para{2.1.} Let $H = Sp(V), \Fh = \Fs\Fp(V)$, and we follow the notation in 1.6. Let $W_n$ be the Weyl group of $H$. As pointed out in Remark 1.8, nilpotent orbits in $\Fh$ are parametrized by $W_n\wg$. The Springer correspondence between the set of nilpotent orbits and $W_n\wg$ was first considered by Kato [K2]. Then Xue [X1], [X2] established the general theory of the Springer correspondence for classical Lie algebras in characteristic 2. \par The main difficulty in dealing with $H$ and $\Fh$ lies in the fact that regular semisimple elements do not exist in $\Fh$. In order to overcome this defect, Xue replaced $H$ and $\Fh$ by bigger ones, $\wt H$ and $\wt\Fh$, obtained by extension by a connected center, so that $\wt\Fh$ has regular semisimple elements, and established the Springer correspondence by using the bijection $\wt\Fh\nil \simeq \Fh\nil$. (Actually Xue considers the adjoint group $\wt H_{\ad}$ and its Lie algebra $\wt\Fh_{\ad}$, but the theory of the Springer correspondence for them is essentially the same as that for $\wt H$ and $\wt\Fh$.)
\par In this paper, for later applications to the case where $N = 2n+1$, we discuss the Springer correspondence for $\Fh$ more directly, without using regular semisimple elements (though we need to use $\wt\Fh$). In the discussion below, we borrow the idea of using $\FD$ from [SY]. Note that these discussions have a strong resemblance to the case of the exotic symmetric spaces associated to symplectic groups with $\ch\Bk \ne 2$ ([SS, 3]), as explained in the Introduction. \para{2.2.} Let $T \subset B$ be the subgroups of $G = GL_N$ with $N = 2n$ given by \begin{align*} B &= \biggl\{ \begin{pmatrix} a & b \\ 0 & {}^ta\iv \end{pmatrix} \mid a, b \in \SM_n, a \text{ : upper triangular, } {}^tb = a\iv b\,{}^t\!a \biggr\}, \\ T &= \{ \Diag(t_1, \dots, t_n, t_1\iv, \dots, t_n\iv) \mid t_i \in \Bk^*\}. \end{align*} By (1.10.1), $B$ is a Borel subgroup of $H$ and $T$ is a maximal torus of $H$ contained in $B$. Let $\Ft$ be the Lie algebra of $T$. Since $\ch\Bk = 2$, we have \begin{equation*} \tag{2.2.1} \Ft = \{ \Diag(t_1, \dots, t_n, t_1, \dots, t_n) \mid t_i \in \Bk \}. \end{equation*} We define a subset $\Ft_{\sr}$ of $\Ft$ by \begin{equation*} \tag{2.2.2} \Ft_{\sr} = \{ s = \Diag(t_1, \dots, t_n, t_1, \dots, t_n) \mid t_i \ne t_j \text{ for } i \ne j \}. \end{equation*} $\Ft_{\sr}$ is open dense in $\Ft$, and for any $s \in \Ft_{\sr}$, $Z_H(s) \simeq SL_2 \times \cdots \times SL_2$ ($n$ factors). Put $\Fh_{\sr} = \bigcup_{g \in H}g(\Ft_{\sr})$. Then $\Fh_{\sr}$ is the set of semisimple elements in $\Fh$ such that all the eigenspaces have dimension 2. \par Recall that $s \in \Fh$ is called regular semisimple if $Z_H^0(s)$ is a maximal torus of $H$. For any $s \in \Ft$, $\dim Z_H(s) \ge 3n$. Hence $\Ft$ does not contain regular semisimple elements. This implies that $\Fh$ does not contain regular semisimple elements (see Lemma 2.3). An element $s \in \Fh_{\sr}$ is said to be subregular semisimple. \par Let $U$ be the unipotent radical of $B$.
Let $\Fb$ be the Lie algebra of $B$, and $\Fn = \Lie U$ the nilpotent radical of $\Fb$. We have $\Fb = \Ft \oplus \Fn$. $\Fn$ is written as \begin{equation*} \tag{2.2.3} \Fn = \biggl\{ \begin{pmatrix} a & b \\ 0 & {}^ta \end{pmatrix} \mid a \text{ : strictly upper triangular, } {}^tb = b \biggr\} \end{equation*} \par Let $\Phi^+ \subset \Hom (T, \Bk^*) \simeq \BZ^n$ be the set of positive roots of type $C_n$, \begin{equation*} \Phi^+ = \{ \ve_i - \ve_j, \ve_i + \ve_j \quad (1\le i < j \le n), 2\ve_i \quad (1 \le i \le n) \} \end{equation*} where $\ve_1, \dots,\ve_n$ is a basis of $\Hom(T, \Bk^*)$ given by $\ve_i : \Diag(t_1, \dots,t_n, t_1\iv, \dots, t_n\iv) \mapsto t_i$. The sets of positive long (resp.\ short) roots $\Phi^+_{l}$ (resp.\ $\Phi^+_s$) are given as \begin{align*} \Phi^+_l &= \{ 2\ve_i \mid 1 \le i \le n \}, \\ \Phi^+_s &= \{ \ve_i - \ve_j, \ve_i + \ve_j \mid 1 \le i < j \le n\}. \end{align*} \par\noindent The root space decomposition of $\Fn$ with respect to $T$ is given as \begin{equation*} \Fn = \bigoplus_{\a \in \Phi^+}\Fg_{\a}, \end{equation*} where $\Fg_{\a} = \{ x \in \Fn \mid s\cdot x = \a(s)x \text{ for any } s \in T\}$. Let $d\a \in \Hom (\Ft, \Bk)$ be the differential of $\a \in \Hom (T,\Bk^*)$. Then $\Ft$ acts on $\Fg_{\a}$ by $[s,x] = d\a(s)x$ for $s \in \Ft$. Since $\ch\Bk = 2$, the weight space decomposition of $\Fn$ with respect to $\Ft$ is given as \begin{equation*} \Fn = \FD \oplus \bigoplus_{\a \in \Phi^+_{s}}\Fg_{\a} = \FD \oplus \Fn_{s}, \end{equation*} where $\Fn_s = \bigoplus_{\a \in \Phi^+_s}\Fg_{\a}$, and $\FD = \bigoplus_{\a \in \Phi^+_{l}}\Fg_{\a}$ is the weight space of weight 0. Explicitly, $\FD$ is given as follows: \begin{equation*} \tag{2.2.4} \FD = \biggl\{ \begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix} \mid b \text{ : diagonal }\biggr\}. \end{equation*} In particular, we have \begin{equation*} \tag{2.2.5} [\Ft, \FD] = 0.
\end{equation*} According to (2.2.4), we express an element in $\FD \simeq \Bk^n$ as $\Bb = (b_1, \dots, b_n)\in \FD$ for $b = \Diag(b_1, \dots, b_n)$. For $k = 0, \dots, n$, put $\FD_k = \{ \Bb \in \FD \mid b_i = 0 \text{ for } i > k \}$. Then $\FD_k$ is a $T$-stable subspace of $\FD$. We also put $\FD^0_k = \{ \Bb \in \FD_k \mid b_i \ne 0 \text{ for } 1 \le i \le k \}$, which is an open dense subset of $\FD_k$. \par We need a lemma. \begin{lem} \begin{enumerate} \item Assume that $x \in \Fh$ is semisimple. Then there exists $g \in H$ such that $gx \in \Ft$. \item Assume that $x \in \Fb$ is semisimple. Then there exists $g \in B$ such that $gx \in \Ft$. \end{enumerate} \end{lem} \begin{proof} Assume that $x \in \Fh$ is semisimple. Consider the eigenspace decomposition $V = W_1 \oplus \cdots \oplus W_a$ of $x$. Then $W_1, \dots, W_a$ are mutually orthogonal with respect to the form $\lp\ ,\ \rp$. Hence the restriction of the form to $W_i$ is a non-degenerate skew-symmetric bilinear form. In particular, $\dim W_i$ is even. We can find a basis $e_1', \dots, e_n', f_1', \dots, f_n'$ of $V$ such that $\{ e_j', f_j'\mid j \in I_i\}$ gives a symplectic basis of $W_i$, where $[1,n] = \coprod_{1 \le i \le a}I_i$ is a partition of $[1,n]$. We define a map $g : V \to V$ by $g(e_j) = e_j', g(f_j) = f_j'$ for each $j$. Then $g \in H$, and $x' = g\iv x \in \Ft$. This proves (i). \par Next assume that $x \in \Fb$ is semisimple. Let $s \in \Ft$ be the projection of $x \in \Fb$. We consider the eigenspace decomposition of $s$ on $V$, $V = V_1\oplus\cdots\oplus V_a$, where $V_i$ is the eigenspace of $s$ with respect to the eigenvalue $\mu_i$. This defines a partition $[1,n] = \coprod_iI_i$ such that $\{ e_j, f_j \mid j \in I_i\}$ gives a symplectic basis of $V_i$.
Since $x$ is semisimple and has the same eigenvalues $\mu_1, \dots, \mu_a$ as $s$, we have a decomposition $V = W_1 \oplus\cdots \oplus W_a$ into eigenspaces of $x$, where $W_i$ is the eigenspace with respect to the eigenvalue $\mu_i$. Here $\dim W_i = \dim V_i$. As before, we can find a symplectic basis $e_1', \dots, e_n', f_1', \dots, f_n'$ of $V$, and $g \in H$ associated to this basis such that $x' = g\iv x \in \Ft$. We show that there exists a choice of $e_1', \dots, e_n', f_1', \dots, f_n'$ such that $g \in B$, i.e., the choice such that the subspace spanned by $e_1, \dots, e_k$ coincides with that by $e_1', \dots, e_k'$ for $k = 1, \dots, n$. Since $e_1$ is an eigenvector for $x$, we can put $e_1' = e_1$. We may assume that $e_1' \in W_1$. Let $\ol V = \lp e_1\rp^{\perp}/\lp e_1 \rp$. Then $\ol V$ has a natural symplectic basis $\ol e_j, \ol f_j$ $(2 \le j \le n)$, and $x$ induces $\ol x \in \Fs\Fp(\ol V)$. By induction on $n$, one can find the required basis $\ol e'_j, \ol f'_j$ $(2 \le j \le n)$ of $\ol V$. This produces vectors $e'_1, \dots, e'_n, f'_2, \dots, f'_n$, and finally we can choose $f'_1 \in W_1$ by the condition that $\lp e'_1, f'_1\rp = 1$ and $f'_1$ is orthogonal to all the other vectors $e'_j, f'_j$. Thus we obtain the basis $e_1', \dots, e_n', f_1', \dots, f_n'$ as required, and $g \in B$. (ii) holds. The lemma is proved. \end{proof} \para{2.4.} We consider the varieties \begin{align*} \wt X &= \{ (x, gB) \in \Fh \times H/B \mid g\iv x \in \Fb\}, \\ X &= \bigcup_{g \in H}g(\Fb), \end{align*} and define a map $\pi : \wt X \to X$ by $(x, gB) \mapsto x$. $\pi$ is a proper map onto $X$, and so $X$ is irreducible, closed in $\Fh$. (Later in Lemma 2.9, it will be shown that $X = \Fh$.) \par For $0 \le k \le n$, put $\FN_{k,\sr} = \bigcup_{g \in B}g(\Ft_{\sr} + \FD_k)$.
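The definition of $\FN_{k,\sr}$ mixes the semisimple part $\Ft_{\sr}$ with the nilpotent part $\FD_k$; by (2.2.5) these commute, and this is special to characteristic 2. A minimal numerical illustration (names are ad hoc): over $\BZ$, where the torus Lie algebra takes the shape $\Diag(t_1, \dots, t_n, -t_1, \dots, -t_n)$, the commutator with an element of $\FD$ is a $2tb$-block, which is nonzero in general but vanishes after reduction mod 2, matching (2.2.1).

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
t = np.diag(rng.integers(1, 5, n))   # generic semisimple part (nonzero entries)
b = np.diag(rng.integers(1, 5, n))   # z in D: diagonal top-right block, cf. (2.2.4)
Z0 = np.zeros((n, n), dtype=int)
s = np.block([[t, Z0], [Z0, -t]])    # characteristic-zero torus element
z = np.block([[Z0, b], [Z0, Z0]])
comm = s @ z - z @ s
assert (comm == np.block([[Z0, 2 * t @ b], [Z0, Z0]])).all()  # [s, z] is the 2tb-block
assert comm.any()              # nonzero in characteristic 0
assert not (comm % 2).any()    # but [s, z] = 0 mod 2, i.e. (2.2.5)
```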
We define varieties \begin{align*} \wt Y_k &= \{ (x, gB) \in \Fh \times H/B \mid g\iv x \in \FN_{k,\sr} \}, \\ Y_k &= \bigcup_{g \in H}g(\FN_{k,\sr}) = \bigcup_{g \in H}g(\Ft_{\sr} + \FD_k), \end{align*} and define a map $\psi^{(k)} : \wt Y_k \to Y_k$ by $(x,gB) \mapsto x$. In the case where $k = n$, we write $\wt Y_k, Y_k$ and $\psi^{(k)}$ simply by $\wt Y, Y$ and $\psi$. We have a lemma. \begin{lem} $\wt Y = \pi\iv(Y)$, and $\psi$ coincides with the restriction of $\pi$ on $\pi\iv(Y)$. In particular, $\psi: \wt Y \to Y$ is a proper surjective map. \end{lem} \begin{proof} It is enough to show that $Y \cap \Fb = \FN_{n,\sr}$. Assume that $y \in Y \cap \Fb$. Since $y \in Y$, there exists $g \in H, s \in \Ft_{\sr}, z \in \FD$ such that $y = g(s) + g(z) \in \Fb$, where $g(s)$: semisimple, $g(z)$: nilpotent. Moreover since $[s, z] = 0$ by (2.2.5), we have $[g(s), g(z)] = 0$. By Lemma 2.3, replacing $y$ by its $B$-conjugate, we may assume that $g(s) \in \Ft_{\sr}, g(z) \in \Fn$. Then there exists $\dw \in N_H(T)$ with $w \in S_n \subset W_n$ such that $g\dw (s) = s$. Since $\dw$ leaves $\FD$ invariant, by replacing $g$ by $g\dw$ and $z$ by $\dw\iv z$, we may further assume that $g(s) = s, g(z) \in \Fn$ with $s \in \Ft_{\sr}, z \in \FD$. Hence $g \in Z_H(s) \simeq SL_2 \times \cdots \times SL_2$ ($n$-factors). If we write $g = (g_1, \dots, g_n)$ with $g_i \in SL_2$, and $z = (z_1, \dots, z_n) \in \FD$, the action of $g$ on $z \in \FD$ corresponds to the conjugate action of $g_i$ on the matrix $\begin{pmatrix} 0 & z_i \\ 0 & 0 \end{pmatrix}$ for $i = 1, \dots, n$. Now the condition $g(z) \in \Fn$ implies that, if $z_i \ne 0$ then $g_i$ is upper triangular. It follows that $g(z) \in \FD$, and so $y \in \Ft_{\sr} + \FD$, up to $B$-conjugate. We have $Y \cap \Fb \subset \FN_{n,\sr}$. The other inclusion is obvious. The lemma is proved.
\end{proof} \para{2.6.} For $s \in \Ft_{\sr}$, we have $Z_H(s) = Z_H(\Ft_{\sr}) \simeq SL_2 \times \cdots \times SL_2$ ($n$-factors), and $B \cap Z_H(\Ft_{\sr}) \simeq B_2 \times \cdots \times B_2$, where $B_2$ is the Borel subgroup of $SL_2$ consisting of upper triangular matrices. The action of $Z_H(\Ft_{\sr})$ on $\FD$ is described as in the proof of Lemma 2.5. In particular, $B \cap Z_H(\Ft_{\sr})$ leaves $\FD_k$ invariant for each $k$. Then $\wt Y_k$ can be expressed as \begin{align*} \tag{2.6.1} \wt Y_k \simeq H \times^{B}\FN_{k,\sr} \simeq H\times^{B \cap Z_{H}(\Ft_{\sr})}(\Ft_{\sr} + \FD_k). \end{align*} Hence $\wt Y_k$ is smooth and irreducible. $\wt Y \simeq H \times^{B \cap Z_H(\Ft_{\sr})}(\Ft_{\sr} + \FD)$ is a locally trivial fibration over $H/(B \cap Z_H(\Ft_{\sr}))$ with fibre isomorphic to $\Ft_{\sr} + \FD$, and $\wt Y_k$ is a subbundle of $\wt Y$ corresponding to a closed subset $\Ft_{\sr} + \FD_k$ of $\Ft_{\sr} + \FD$. Hence $\wt Y_k$ is a closed subset of $\wt Y$ for each $k$. The map $\psi^{(k)}$ is the restriction of $\psi$ on $\wt Y_k$. Since $\psi :\wt Y \to Y$ is proper, $Y_k = \psi(\wt Y_k)$ is an irreducible closed subset in $Y$. $\psi^{(k)}: \wt Y_k \to Y_k$ is a proper surjective map. \par The following relation can be verified by an argument similar to that in the proof of Lemma 2.5. \begin{equation*} \tag{2.6.2} Y_k \cap (\Ft_{\sr} + \FD) = \bigcup_{w \in S_n}\dw(\Ft_{\sr} + \FD_k), \quad (0 \le k \le n). \end{equation*} Put $Y_k^0 = Y_k - Y_{k-1}$. By (2.6.2), we have \begin{equation*} \tag{2.6.3} Y_k^0 = \bigcup_{g \in H}g(\Ft_{\sr} + \FD_k^0). \end{equation*} \para{2.7.} For any subset $I \subset [1,n]$, put \begin{equation*} \tag{2.7.1} \FD_I = \{ \Bb \in \FD \mid b_i \ne 0 \text{ for } i \in I, b_i = 0 \text{ for } i \notin I \}. \end{equation*} Note that if $I = [1,k]$, $\FD_I$ coincides with $\FD^0_k$.
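The fibration (2.6.1) reduces the dimension counts used below (Lemma 2.9) to elementary arithmetic. A throwaway numerical check, assuming $\dim Sp_{2n} = 2n^2 + n$ and $\dim B_2 = 2$ (standard facts, not restated in this section):

```python
# Dimension bookkeeping behind Lemma 2.9, for small n.
for n in range(1, 20):
    dim_H = 2 * n * n + n                        # dim Sp_2n (assumed)
    for k in range(n + 1):
        # (2.6.1): dim(B ∩ Z_H(t_sr)) = 2n, and the fibre t_sr + D_k has dim n + k
        dim_Ytilde_k = dim_H - 2 * n + (n + k)
        # the generic fibre of psi^(k) is P_1^{n-k}, of dimension n - k
        dim_Y_k = dim_Ytilde_k - (n - k)
        assert dim_Ytilde_k == dim_H - n + k          # Lemma 2.9 (iii)
        assert dim_Y_k == dim_H - 2 * n + 2 * k       # Lemma 2.9 (iv)
        # semismallness: dim psi^{-1}(x) = n - k equals (dim Y - dim Y_k^0)/2
        assert 2 * (n - k) == dim_H - dim_Y_k         # Lemma 2.9 (v)
```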
Since the action of $B \cap Z_H(\Ft_{\sr})$ on $\FD$ is given by the action of its $T$-part, $\Ft_{\sr} + \FD_I$ is $B \cap Z_H(\Ft_{\sr})$-stable. We define a locally closed subvariety $\wt Y_I$ of $\wt Y$ by \begin{equation*} \tag{2.7.2} \wt Y_I \simeq H\times^{B \cap Z_H(\Ft_{\sr})}(\Ft_{\sr} + \FD_I), \end{equation*} and a map $\psi_I : \wt Y_I \to Y$ by $g*x \mapsto gx$, where $g*x$ is the image of $(g,x) \in H \times (\Ft_{\sr} + \FD_I)$ on its quotient. Then $\Im \psi_I = \bigcup_{g \in H}(\Ft_{\sr} + \FD_I)$ coincides with $Y_k^0$ for $k = |I|$ by (2.6.3), which depends only on $k$. \par For $I \subset [1,n]$, we define a parabolic subgroup $Z_H(\Ft_{\sr})_I$ of $Z_H(\Ft_{\sr})$ by the condition that the $i$-th factor is $B_2$ if $i \in I$, and is $SL_2$ otherwise. Since $Z_H(\Ft_{\sr})_I$ stabilizes $\FD_I$, one can define \begin{equation*} \tag{2.7.3} \wh Y_I = H \times^{Z_H(\Ft_{\sr})_I}(\Ft_{\sr} + \FD_I). \end{equation*} Then $\psi_I$ factors through $\wh Y_I$, \begin{equation*} \tag{2.7.4} \begin{CD} \psi_I : \wt Y_I @>\xi_I>> \wh Y_I @>\eta_I>> Y_k^0, \end{CD} \end{equation*} for $|I| = k$, where $\xi_I$ is the natural surjection, and $\eta_I$ is given by $g*x \mapsto gx$ (similar notation as $\psi_I$). Then $\xi_I$ is a locally trivial fibration with fibre isomorphic to \begin{equation*} Z_H(\Ft_{\sr})_I/(B \cap Z_H(\Ft_{\sr})) \simeq (SL_2/B_2)^{I'} \simeq \BP_1^{I'}, \end{equation*} where $I'$ is the complement of $I$ in $[1,n]$. \par Let $S_I$ be the symmetric group on the letters in $I \subset [1,n]$, hence $S_I \simeq S_k$ for $|I| = k$. Then $\SW_I = N_H(Z_H(\Ft_{\sr})_I)/Z_H(\Ft_{\sr})_I \simeq S_I \times S_{I'}$. $\SW_I$ acts on $\wt Y_I$ and $\wh Y_I$ since $\Ft_{\sr} + \FD_I$ is stable by $N_H(Z_H(\Ft_{\sr})_I)$. Now the map $\eta_I : \wh Y_I \to Y^0_k$ turns out to be a finite Galois covering with Galois group $\SW_I$, \begin{equation*} \tag{2.7.5} \eta_I : \wh Y_I \to \wh Y_I/\SW_I \simeq Y^0_k.
\end{equation*} \para{2.8.} For $0 \le k \le n$, we define $\wt Y^+_k$ as $\psi\iv(Y^0_k)$. Then $\wt Y^+_k = \coprod_I \wt Y_I$, where $I$ runs over all the subsets $I \subset [1,n]$ such that $|I| = k$. (The disjointness follows from (2.6.2)). $\wt Y_I$ is smooth, irreducible by (2.7.2), and $\wt Y_I$ form the connected components of $\wt Y^+_k$. Since $Y = \coprod_{0 \le k \le n}Y^0_k$, we have \begin{equation*} \wt Y = \coprod_{0 \le k \le n}\wt Y^+_k. \end{equation*} In the case where $I = [1,k]$, we denote $\wt Y_I$ by $\wt Y_k^0$. Then $\wt Y^0_k$ is an open dense subset of $\wt Y_k$. By (2.6.1), $S_n \simeq N_H(Z_H(\Ft_{\sr}))/Z_H(\Ft_{\sr})$ acts on $\wt Y$, which leaves $\wt Y^+_k$ stable for any $k$. We have \begin{equation*} \tag{2.8.1} \wt Y^+_k = \coprod_{\substack{ I \subset [1,n] \\ |I| = k}} \wt Y_I = \coprod_{w \in S_n/(S_k \times S_{n-k})}w(\wt Y^0_k). \end{equation*} We have the following lemma. \begin{lem} Let the notations be as before. \begin{enumerate} \item $X = \bigcup_{g \in H}g(\Fb) = \Fh$. \item $Y_k$ is an irreducible closed subset in $Y$. Hence $Y_k^0$ is open dense in $Y_k$. \item $\dim \wt Y_k = \dim H - n + k$. \item $\dim Y_k = \dim \wt Y_k - (n-k) = (\dim H - 2n) + 2k$. \item $Y = \coprod_{0 \le k \le n}Y_k^0$ gives a stratification of $Y$ by smooth strata $Y_k^0$, and the map $\psi : \wt Y \to Y$ is semismall with respect to this stratification. \end{enumerate} \end{lem} \begin{proof} (ii) is already given in 2.6. By (2.6.1), \begin{align*} \dim \wt Y_k &= \dim H - \dim (B \cap Z_H(\Ft_{\sr})) + \dim (\Ft_{\sr} + \FD_k) \\ &= \dim H - 2n + (n + k) \\ &= \dim H - n + k, \end{align*} since $\dim (B \cap Z_H(\Ft_{\sr})) = \dim (B_2 \times \cdots \times B_2) = 2n$. Thus (iii) holds. Since $\wh Y_I$ is smooth, irreducible, and $\eta_I$ is a finite Galois covering, $Y_k^0 = \eta_I(\wh Y_I)$ is smooth, irreducible. By using the decomposition $\psi_I = \eta_I\circ \xi_I$ for $I = [1,k]$, we see that $\dim \wt Y_k = \dim Y_k + (n-k)$. 
Hence (iv) holds. It follows from (iv) that $\dim Y = \dim Y_n = \dim H$. Since $\dim \psi\iv(x) = n - k$ for any $x \in Y_k^0$ by (2.7.1) and by $\psi_I = \eta_I\circ \xi_I$, we have $\dim \psi\iv(x) = (\dim Y - \dim Y_k^0)/2$ by (iv). Thus (v) holds. Since $Y$ is open dense in $X$, $\dim X = \dim H$. Since $X$ is irreducible closed in $\Fh$, we obtain (i). \end{proof} \para{2.10.} Let $\psi_k : \wt Y^+_k \to Y_k^0$ be the restriction of $\psi$ on $\psi\iv(Y_k^0)$. Since $\psi$ is proper, $\psi_k$ is also proper. Let $\Ql$ be the constant sheaf on $\wt Y^+_k$. Since $\wt Y_I$ is a connected component for any $I$ by (2.8.1), we have \begin{equation*} \tag{2.10.1} (\psi_k)_!\Ql \simeq \bigoplus_{\substack{ I \subset [1,n] \\ |I| = k }} (\psi_I)_!\Ql. \end{equation*} \par On the other hand, since $\eta_I : \wh Y_I \to Y^0_k$ is a finite Galois covering with group $\SW_I$, we have \begin{equation*} \tag{2.10.2} (\eta_I)_!\Ql \simeq \bigoplus_{\r \in (\SW_I)\wg}\r \otimes \SL_{\r}, \end{equation*} where $\SL_{\r} = \Hom (\r, (\eta_I)_!\Ql)$ is a simple local system on $Y^0_k$ corresponding to $\r \in (\SW_I)\wg$. \par Since $\psi_k$ is proper, and $\wt Y_I$ is a closed subset of $\wt Y^+_k$, $\psi_I$ is proper. As $\psi_I = \eta_I\circ \xi_I$, $\xi_I$ is also proper. By a discussion similar to that in [Sh, (1.6.1)], we see that $R^i(\xi_I)_!\Ql$ is a constant sheaf for any $i \ge 0$. Since $\xi_I$ is a $\BP_1^{I'}$-bundle, we have \begin{equation*} \tag{2.10.3} (\xi_I)_!\Ql \simeq (\xi_I)_!(\xi_I)^*\Ql \simeq H^{\bullet}(\BP_1^{I'})\otimes \Ql, \end{equation*} where $H^{\bullet}(\BP_1^{I'})$ denotes $\bigoplus_{i \ge 0}H^{2i}(\BP_1^{I'},\Ql)$, which we regard as a complex of vector spaces $(K_i)$ with $K_{\odd} = 0$. It follows that \begin{equation*} \tag{2.10.4} (\psi_I)_!\Ql \simeq (\eta_I)_!(\xi_I)_!\Ql \simeq H^{\bullet}(\BP_1^{I'})\otimes (\eta_I)_!\Ql. \end{equation*} \par Note that $\BP_1$ is the flag variety of $SL_2$, and $\BZ/2\BZ$ is the Weyl group of $SL_2$.
We define an action of $\BZ/2\BZ$ on $H^{\bullet}(\BP_1)$ as the Springer representation of $\BZ/2\BZ$, i.e., $\BZ/2\BZ$ acts non-trivially on $H^2(\BP_1) = \Ql$, and acts trivially on $H^0(\BP_1) = \Ql$. We define an action of $(\BZ/2\BZ)^{[1,n]}$ on $H^{\bullet}(\BP_1^{I'}) \simeq H^{\bullet}(\BP_1)^{\otimes |I'|}$ by the Springer action of the factor $\BZ/2\BZ$ corresponding to $I'$, and the trivial action of the factor $\BZ/2\BZ$ corresponding to $I$. Note that $W_n = S_n \ltimes (\BZ/2\BZ)^n$ and $\SW_I \simeq S_I \times S_{I'}$. Let $W_I = S_I \ltimes (\BZ/2\BZ)^I$ be the Weyl group of type $C_{|I|}$, and define $W_{I'}$ similarly. For $\r \in \SW_I\wg = (S_I \times S_{I'})\wg$, we consider the action of $W_I \times W_{I'}$ on $H^{\bullet}(\BP_1^{I'})\otimes \r$, where $(\BZ/2\BZ)^{[1,n]}$ acts trivially on $\r$. In particular, for $I = [1,k]$, we obtain $(W_k \times W_{n-k})$-module $H^{\bullet}(\BP_1^{n-k})\otimes \r$. Note that $S_n/(S_k \times S_{n-k}) \simeq W_n/(W_k \times W_{n-k})$. By (2.8.1), (2.10.1) and (2.10.4), we have \begin{equation*} \tag{2.10.5} (\psi_k)_!\Ql \simeq \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \Ind_{W_k \times W_{n-k}}^{W_n} (H^{\bullet}(\BP_1^{n - k})\otimes \r)\otimes \SL_{\r}, \end{equation*} where $\SL_{\r} = \Hom (\r, (\eta_I)_!\Ql)$ is the simple local system on $Y_k^0$ with $I = [1,k]$. \para{2.11.} For each $k$, let $\ol\psi_k$ be the restriction of $\psi$ on $\psi\iv(Y_k)$. Then $\ol\psi_k : \psi\iv(Y_k) \to Y_k$ is a proper map. Assume that $k \ge 1$. We have $Y_k - Y_{k-1} = Y_k^0$. Since $\ol \psi_k$ is proper, $(\ol\psi_k)_!\Ql$ is a semisimple complex on $Y_k$. We note the following. \par\medskip\noindent (2.11.1) \ Assume that $(\ol\psi_{k-1})_!\Ql$ is equipped with an action of $W_n$. Then the $W_n$-action can be extended to a $W_n$-action on $(\ol\psi_k)_!\Ql$. \par\medskip In fact, let $j : Y_k^0 \hra Y_k$ be the open immersion. By (2.10.5), $(\psi_k)_!\Ql$ has an action of $W_n$.
This induces an action of $W_n$ on $(j\circ \psi_k)_!\Ql$, and on its perverse cohomology ${}^pH^i((j\circ\psi_k)_!\Ql)$. On the other hand, by the assumption, ${}^pH^i((\ol\psi_{k-1})_!\Ql)$ is equipped with $W_n$-action. We consider the long exact sequence of the perverse cohomology obtained from the distinguished triangle $(j_!(\psi_k)_!\Ql, (\ol\psi_k)_!\Ql, (\ol\psi_{k-1})_!\Ql)$. By (2.10.5), $(\psi_k)_!\Ql$ is a semisimple complex which is a sum of various $\SL_{\r}[2i]$. Hence ${}^pH^i((j\circ\psi_k)_!\Ql) = 0$ for odd $i$. By induction, we have ${}^pH^i((\psi_k)_!\Ql)= 0$ for odd $i$. Since $(\ol\psi_k)_!\Ql$ is a semisimple complex, the $W_n$-actions on ${}^pH^{i}((\ol\psi_{k-1})_!\Ql)$ and on ${}^pH^{i}((j\circ\psi_k)_!\Ql)$ determine the $W_n$-action on ${}^pH^{i}((\ol\psi_k)_!\Ql)$ uniquely. (2.11.1) is proved. \para{2.12.} We have a natural bijection \begin{equation*} \tag{2.12.1} \coprod_{0 \le k \le n}(S_k \times S_{n-k})\wg \simeq W_n\wg, \qquad \r \longleftrightarrow \wh\r \end{equation*} satisfying the following properties: take $\r = \r'\boxtimes \r'' \in (S_k \times S_{n - k})\wg$, where $\r' \in S_k\wg, \r'' \in S_{n-k}\wg$. We extend $\r'$ to $\wt\r' \in W_k\wg$ so that $(\BZ/2\BZ)^k$ acts trivially on it. On the other hand, we extend $\r''$ to $\wt\r'' \in W_{n-k}\wg$ so that each factor $\BZ/2\BZ$ of $(\BZ/2\BZ)^{n-k}$ acts non-trivially on $\wt\r''$. Put \begin{equation*} \tag{2.12.2} \wh\r = \Ind_{W_k \times W_{n-k}}^{W_n} (\wt\r'\boxtimes \wt\r''). \end{equation*} Then $\wh\r \in W_n\wg$, and the correspondence $\r \mapsto \wh\r$ gives the required bijection. We show the following proposition. Put $d_k = \dim Y_k$.
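As a numerical sanity check of the bijection (2.12.1), the two sides can be counted for small $n$: the left-hand side has $\sum_{k}p(k)\,p(n-k)$ elements, where $p$ is the partition function, while $|W_n\wg|$ equals the number of conjugacy classes of $W_n = S_n \ltimes (\BZ/2\BZ)^n$. A brute-force sketch over signed permutations (all names ad hoc):

```python
from itertools import permutations, product

# Check: #(conjugacy classes of W_n) = sum_k p(k) p(n-k), matching (2.12.1).
# W_n is realized by signed permutations w = (p, s): w(e_i) = s_i e_{p(i)}.

def compose(w, v):
    # (w ∘ v)(e_i) = s_{q(i)} t_i e_{p(q(i))}
    (p, s), (q, t) = w, v
    m = len(p)
    return (tuple(p[q[i]] for i in range(m)),
            tuple(s[q[i]] * t[i] for i in range(m)))

def inverse(w):
    p, s = w
    ip = [0] * len(p)
    for i, pi in enumerate(p):
        ip[pi] = i
    return (tuple(ip), tuple(s[j] for j in ip))

def p_count(m, mx=None):
    # number of partitions of m with largest part <= mx
    if mx is None:
        mx = m
    if m == 0:
        return 1
    return sum(p_count(m - part, part) for part in range(min(m, mx), 0, -1))

for n in (1, 2, 3, 4):
    G = [(p, s) for p in permutations(range(n))
                for s in product((1, -1), repeat=n)]
    seen, classes = set(), 0
    for w in G:
        if w in seen:
            continue
        classes += 1
        for g in G:
            seen.add(compose(compose(g, w), inverse(g)))
    bipartitions = sum(p_count(k) * p_count(n - k) for k in range(n + 1))
    assert classes == bipartitions
```

For $n \le 4$ both counts agree (2, 5, 10, 20), in accordance with the classical parametrization of $W_n\wg$ by pairs of partitions.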
\begin{prop} $\psi_!\Ql[d_n]$ is a semisimple perverse sheaf on $Y$, equipped with $W_n$-action, and is decomposed as \begin{equation*} \tag{2.13.1} \psi_!\Ql[d_n] \simeq \bigoplus_{0 \le k \le n} \bigoplus_{\r \in (S_k \times S_{n-k})\wg}\wh \r \otimes \IC(Y_k, \SL_{\r})[d_k], \end{equation*} where $\SL_{\r}$ is a simple local system on $Y_k^0$ defined in (2.10.2). \end{prop} \begin{proof} The formula (2.10.5) can be rewritten as \begin{equation*} \tag{2.13.2} (\psi_k)_!\Ql \simeq \biggl(\bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh\r \otimes \SL_{\r}\biggr)[-2(n - k)] + \SN_k, \end{equation*} where $\SN_k$ is a sum of various $\SL_{\r'}[-2i]$ for $\r' \in (S_k \times S_{n-k})\wg$ with $0 \le i < n - k$. \par For $0 \le m \le n$, let $\ol\psi_m$ be as in 2.11. We consider the following formula. \begin{align*} \tag{2.13.3} (\ol\psi_m)_!\Ql \simeq \bigoplus_{0 \le k \le m}\bigoplus_{\r \in (S_k \times S_{n - k})\wg} \wh\r \otimes \IC(Y_k, \SL_{\r})[-2(n - k)] + \ol\SN_m, \end{align*} where $\ol\SN_m$ is a $\BZ$-linear combination of various $\IC(Y_k, \SL_{\r'})[-2i]$ for $0 \le k \le m$ and $\r' \in (S_k \times S_{n-k})\wg$ with $i < n - k$. We note that (2.13.3) will imply the proposition. In fact, $\ol\psi_m = \psi$ for $m = n$, and $d_n - d_k = 2n - 2k$ by Lemma 2.9. But since $d_n - 2i > d_k$ if $i < n-k$, $\IC(Y_k, \SL_{\r'})[d_n - 2i]$ is not a perverse sheaf. Since $\psi$ is semismall, $\psi_!\Ql$ is a semisimple perverse sheaf. Thus $\ol\SN_n = 0$ and (2.13.1) follows. \par We show (2.13.3) by induction on $m$. If $m = 0$, then $(\ol\psi_m)_!\Ql$ coincides with $(\psi_m)_!\Ql$. Hence (2.13.3) holds by (2.13.2) applied for $k = 0$. We assume that (2.13.3) holds for any $k < m$. Recall that $Y_m^0 = Y_m - Y_{m -1}$. Since $(\ol\psi_m)_!\Ql$ is a semisimple complex, it is a direct sum of the form $A[s]$ for a simple perverse sheaf $A$. Suppose that the support $\supp A$ of $A$ is not contained in $Y_{m -1}$.
Then $(\supp A) \cap Y_m^0 \neq \emptyset$, and $A|_{Y_m^0}$ is a simple perverse sheaf on $Y_m^0$. The restriction of $(\ol\psi_m)_!\Ql$ on $Y_m^0$ is isomorphic to $(\psi_m)_!\Ql$, and it is described as in (2.13.2). It follows that $A|_{Y_m^0} = \SL_{\r}$ (up to shift) for some $\r \in (S_m \times S_{n-m})\wg$. This implies that $A = \IC(Y_m, \SL_{\r})[d_m]$, and the direct sum of $A[s]$ appearing in $(\ol\psi_m)_!\Ql$ such that $(\supp A) \cap Y_m^0 \ne \emptyset$ is given by \begin{equation*} K_1 = \bigoplus_{\r \in (S_m \times S_{n-m})\wg}\wh\r \otimes \IC(Y_m, \SL_{\r})[-2(n - m)] + \SN'_m, \end{equation*} where $\SN'_m$ is a $\BZ$-linear combination of $\IC(Y_m, \SL_{\r'})[-2i]$ for $\r' \in (S_m \times S_{n-m})\wg$ with $0 \le i < n - m$. \par If $\supp A$ is contained in $Y_{m-1}$, $A[s]$ appears as a summand of $(\ol\psi_{m -1})_!\Ql$. By induction, $(\ol\psi_{m -1})_!\Ql$ is described as in (2.13.3) by replacing $m$ by $m -1$. Thus, if we exclude the contribution from the restriction of $K_1$ on $Y_{m -1}$, such $A[s]$ is determined from $(\ol\psi_{m -1})_!\Ql$. Note that, by induction, we can construct an action of $W_n$ on $(\ol\psi_m)_!\Ql$ by (2.11.1). We consider the restriction of $K_1$ on $Y_{m -1}$. Since each simple component of $\SN'_m$ is contained in $\ol\SN_m$, we can ignore this part. Let $K_1'$ be the direct sum part of $K_1$. Then the restriction of $K_1'$ on $Y_{m - 1}$ affords the representation of $W_n$ corresponding to a sum of various $\wh\r$ for $\r \in (S_m \times S_{n-m})\wg$. But by (2.13.3) applied for $m - 1$, the direct sum part of $(\ol\psi_{m -1})_!\Ql$ affords the representation of $W_n$. The irreducible representations appearing there are of the form $\wh\r'$, which are different from $\wh\r$ for $K_1'|_{Y_{m -1}}$. Since each component of $\ol\SN_{m -1}$ is contained in $\ol\SN_m$, we see that the restriction of $K_1$ on $Y_{m -1}$ has no overlap with $(\ol\psi_{m -1})_!\Ql$ modulo $\ol\SN_m$. Thus (2.13.3) holds for $m$.
The proposition is proved. \end{proof} \par\bigskip \section{ Intersection cohomology on $\Fc\Fs\Fp(V)$} \para{3.1.} We follow the notation in Section 2. The conformal symplectic group $CSp_N$ is defined by \begin{equation*} CSp_N = \{ g \in GL_N \mid {}^tgJg = \la_gJ \text{ for some }\la_g \in \Bk^* \}, \end{equation*} which we denote by $\wt H$. By fixing the basis of $V$ as before, we also write it as $\wt H = CSp(V)$. $\wt H$ is a connected group with connected center $\wt Z$, where \begin{equation*} \wt Z = \{ \la 1_N \mid \la \in \Bk^*\}, \end{equation*} and contains $H$ as a closed subgroup. $\wt H/\wt Z$ is the adjoint symplectic group $\wt H_{\ad}$. Let $\wt\Fh = \Fc\Fs\Fp(V)$ be the Lie algebra of $\wt H$. $\wt\Fh$ contains the center $\wt\Fz \simeq \Bk$, and we put $\wt\Fh_{\ad} = \wt\Fh/\wt\Fz$, which is the Lie algebra of $\wt H_{\ad}$, and is called the adjoint Lie algebra. In [X1, Lemma 6.2], Xue proved that $\wt\Fh_{\ad}$ has regular semisimple elements, and established the Springer correspondence for $\Fh\nil$ by making use of the intersection cohomology on $\wt\Fh_{\ad}$. Considering $\wt\Fh_{\ad}$ is essentially the same as considering $\wt\Fh$. In this section, we shall connect Xue's result with ours discussed in Section 2. \para{3.2.} Put \begin{align*} \tag{3.2.1} \wt T &= \{\Diag(t_1, \dots, t_n, \la t_1\iv, \dots, \la t_n\iv) \mid t_i \in \Bk^*, \la \in \Bk^* \}, \\ \wt \Ft &= \{\Diag(t_1, \dots, t_n, t_1 + \la, \dots, t_n + \la) \mid t_i \in \Bk, \la \in \Bk \}. \end{align*} Then $\wt T$ is a maximal torus of $\wt H$ containing $T$, and $\Lie \wt T = \wt \Ft \supset \Ft$. We denote an element in $\wt\Ft$ as $\xi = (s,\la)$ for $s = (t_1, \dots, t_n) \in \Bk^n$ and $\la \in \Bk$. Let $\wt B = \wt TU$ be the Borel subgroup of $\wt H$ containing $B$. Put $\wt\Fb = \Lie \wt B$. We have $\wt\Fb = \wt\Ft \oplus \Fn$. Put \begin{align*} \tag{3.2.2} \wt\Ft\reg = \{ (s,\la) \in \wt\Ft \mid t_i \ne t_j \text{ for } i \ne j, \la \in \Bk^*\}.
\end{align*} Then for $\xi \in \wt\Ft\reg$, we have $Z_{\wt H}(\xi) = \wt T$. Thus $\xi$ is a regular semisimple element in $\wt\Ft$. $\wt\Ft\reg$ is open dense in $\wt\Ft$. Put $\wt\Fh\reg = \bigcup_{g \in \wt H}g(\wt\Ft\reg)$. By using the conjugacy of maximal tori in $\wt H$, we see that $\wt\Fh\reg$ coincides with the set of regular semisimple elements in $\wt\Fh$. $\wt\Fh\reg$ is open dense in $\wt\Fh$ since it is the intersection of $\wt\Fh$ with the set of regular semisimple elements in $\Fg\Fl_N$. Put $\wt\Fb\reg = \wt\Fh\reg \cap \wt\Fb$. We consider the varieties \begin{align*} \wt Y\flt &= \{ (x, g\wt B ) \in \wt\Fh \times \wt H/\wt B \mid g\iv x \in \wt\Fb\reg \}, \\ Y\flt &= \bigcup_{g \in \wt H}g(\wt\Fb\reg) = \wt\Fh\reg, \end{align*} and define a map $\psi\flt : \wt Y\flt \to Y\flt$ by $(x, g\wt B) \mapsto x$. Then \begin{equation*} \wt Y\flt \simeq \wt H\times^{\wt B}\wt\Fb\reg \simeq \wt H \times^{\wt T}\wt\Ft\reg, \end{equation*} and $\psi\flt$ is a finite Galois covering with Galois group $W_n$. \par Let $\Ql$ be the constant sheaf on $\wt Y\flt$. Then $(\psi\flt)_!\Ql$ is a semisimple local system on $Y\flt$, equipped with $W_n$-action, and is decomposed as \begin{equation*} \tag{3.2.3} (\psi\flt)_!\Ql \simeq \bigoplus_{\r \in W_n\wg}\r \otimes \SL\flt_{\r}, \end{equation*} where $\SL\flt_{\r} = \Hom (\r, (\psi\flt)_!\Ql)$ is a simple local system on $Y\flt$. \para{3.3.} We consider the varieties \begin{align*} \wt X\flt &= \{ (x, g\wt B) \in \wt\Fh \times \wt H/\wt B \mid g\iv x \in \wt\Fb \}, \\ X\flt &= \bigcup_{g \in \wt H}g(\wt\Fb), \end{align*} and define a map $\pi\flt : \wt X\flt \to X\flt$ by $(x, g\wt B) \mapsto x$. $\pi\flt$ is a proper map onto $X\flt$. Hence $X\flt$ is irreducible and closed in $\wt\Fh$. Since $\wt\Fh\reg \subset X\flt$, we have $X\flt = \wt\Fh$. \par We consider the complex $K = (\pi\flt)_!\Ql$ on $X\flt = \wt\Fh$.
We can define a similar map $\pi\flt_{\ad} : \wt X\flt_{\ad} \to X\flt_{\ad} = \wt\Fh_{\ad}$, by replacing $\wt X\flt, X\flt, \pi\flt$ by $\wt X\flt_{\ad}, X\flt_{\ad}, \pi\flt_{\ad}$, respectively. We consider the complex $K_{\ad} = (\pi\flt_{\ad})_!\Ql$ on $\wt\Fh_{\ad}$. Let $\f : \wt\Fh \to \wt\Fh_{\ad}$ be the natural projection. By the base change theorem, we have $\f^*K_{\ad} \simeq K$. It is known by [X1, Prop. 6.6] that $K_{\ad}$ coincides with the intersection cohomology $\IC(\wt\Fh_{\ad}, \SL_{\ad})$ for a certain semisimple local system $\SL_{\ad}$ on the set of regular semisimple elements in $\wt\Fh_{\ad}$. Since $\f$ is smooth with connected fibre, $K$ is also expressed by an intersection cohomology on $\wt\Fh$. Since $K|_{Y\flt} \simeq (\psi\flt)_!\Ql$, we have $K \simeq \IC(\wt\Fh, (\psi\flt)_!\Ql)$. Thus by (3.2.3), the following result holds. \begin{prop} $(\pi\flt)_!\Ql[\dim \wt\Fh]$ is a semisimple perverse sheaf on $\wt\Fh$, equipped with $W_n$-action, and is decomposed as \begin{equation*} \tag{3.4.1} (\pi\flt)_!\Ql[\dim \wt\Fh] \simeq \bigoplus_{\r \in W_n\wg} \r \otimes \IC(\wt\Fh, \SL\flt_{\r})[\dim \wt\Fh], \end{equation*} where $\SL\flt_{\r}$ is a simple local system on $\wt\Fh\reg$ given in (3.2.3). \end{prop} \para{3.5.} The set of nilpotent elements $\wt\Fh\nil$ in $\wt\Fh$ coincides with $\Fh\nil$. The subvariety $(\pi\flt)\iv(\Fh\nil)$ of $\wt X\flt$ can be identified with \begin{equation*} \wt X\nil = \{ (x, gB) \in \Fh \times G/B \mid g\iv x \in \Fn\}, \end{equation*} and the restriction of $\pi\flt$ on $\wt X\nil$ coincides with the map $\pi_1 : \wt X\nil \to \Fh\nil$. Note that $\pi_1$ is surjective since $\bigcup_{g \in H}g(\Fn) = \Fh\nil$ by Lemma 2.9 (i). Since $(\pi\flt)_!\Ql$ has a natural action of $W_n$ by Proposition 3.4, $(\pi_1)_!\Ql$ has also an action of $W_n$. The following result gives the Springer correspondence for $\Fs\Fp(V)$, which is essentially due to Xue [X1].
\begin{thm} \begin{enumerate} \item $(\pi_1)_!\Ql[\dim \Fh\nil]$ is a semisimple perverse sheaf on $\Fh\nil$, equipped with $W_n$-action, and is decomposed as \begin{equation*} (\pi_1)_!\Ql[\dim \Fh\nil] \simeq \bigoplus_{\r \in W_n\wg} \r \otimes \IC(\ol\SO_{\r}, \Ql)[\dim \SO_{\r}], \end{equation*} where $\SO_{\r}$ is an $H$-orbit in $\Fh\nil$, and the map $\r \mapsto \SO_{\r}$ gives a bijective correspondence between $W_n\wg$ and the set of $H$-orbits in $\Fh\nil$. \item For each $\r \in W_n\wg$, we have \begin{equation*} \IC(\wt\Fh, \SL\flt_{\r})|_{\Fh\nil} \simeq \IC(\ol\SO_{\r}, \Ql), \quad\text{ $($up to shift$)$.} \end{equation*} \end{enumerate} \end{thm} \par In fact, Xue proved in [X1, Prop. 6.4] the corresponding formula for $(\wt\Fh_{\ad})\nil$, the nilpotent variety of $\wt\Fh_{\ad}$. Since $\f\iv((\wt\Fh_{\ad})\nil) \simeq \wt\Fz \times \wt\Fh\nil$, his result can be translated to the formula in $\wt\Fh\nil$, which induces our formula for $\Fh\nil$. Note that $\f$ gives a bijective map $\Fh\nil \to (\wt\Fh_{\ad})\nil$, compatible with the action of $\wt H$ and $\wt H_{\ad}$. Also note that it is known by [Spa] that for any $x \in \Fh\nil$, $Z_H(x)$ is connected. Hence only the constant sheaf $\Ql$ appears as a local system on $\SO_{\r}$ in the Springer correspondence. The explicit correspondence was described in [X2] by making use of (a generalization of) Lusztig's symbols. \para{3.7.} Let $\SB = \wt H/\wt B$ be the flag variety of $\wt H$, and $W = N_{\wt H}(\wt T)/\wt T \simeq W_n$ be the Weyl group of $\wt H$. For each $x \in \wt\Fh$, put $\SB_x = \{ g\wt B \in \SB \mid g\iv x \in \wt\Fb \}$. We consider the structure of $\SB_x$. Let $x = s + z$ be the Jordan decomposition of $x$ in $\wt \Fh$, where $s$: semisimple and $z$: nilpotent such that $[s, z] = 0$. We assume that $s \in \wt\Ft$. Put $C = Z^0_{\wt H}(s)$ and $\Fc = \Lie C$. Then $z \in \Fc\nil$. 
$\wt B_C = \wt B \cap C$ is a Borel subgroup of $C$ containing $\wt T$, and we consider the flag variety $\SB^C = C/\wt B_C$, and its subvariety $\SB^C_z$. For each $w \in W $, $\wt B_{C,w} = w\wt Bw\iv \cap C$ is a Borel subgroup of $C$ containing $\wt T$, and one can consider the flag variety $\SB^{C,w} = C/\wt B_{C,w}$ of $C$. Let $W_s$ be the stabilizer of $s$ in $W$. Then $W_s$ is the Weyl group of $C$. Put \begin{align*} \wh\SM &= \{ g \in \wt H \mid g\iv s\in \wt\Fb \}, \\ \SM &= \{ g \in \wt H \mid g\iv s \in \wt\Ft \}. \end{align*} Then $C \times \wt B$ acts on $\wh\SM$, by $(h, b) : y \mapsto hyb\iv$, and similarly $C \times \wt T$ acts on $\SM$. Put $\vG = C\backslash \wt H/\wt T$, $\wh\vG = C \backslash \wt H/\wt B$. The natural map $\vG \to \wh\vG$ gives a bijection $\vG \simeq \wh\vG$, and we can identify $\vG$ with a set of representatives in $W$ of the cosets $W_s\backslash W$. This implies that $\coprod_{w \in \vG}\SB^{C,w} \simeq \SB_s$ by $h(\wt B_{C,w}) \mapsto hw\wt B$, and we have \begin{equation*} \tag{3.7.1} \SB_x \simeq \coprod_{w \in \vG}\SB^{C,w}_z \simeq \coprod_{w \in W_s\backslash W}\SB^C_z. \end{equation*} \par Recall that $K = (\pi\flt)_!\Ql$ is a complex with $W$-action by Proposition 3.4. Hence for any $x \in \wt\Fh$, $\SH^i_xK \simeq H^i(\SB_x, \Ql)$ has a structure of $W$-module (Springer representation of $W$). For $C$, we can also consider $\pi_C : C \times^{\wt B_C}\wt\Fb_C \mapsto \Fc$, similarly to $\pi\flt$, where $\wt\Fb_C = \Lie \wt B_C$. Since $\Fc$ has regular semisimple elements, the previous discussion can be applied, and $(\pi_C)_!\Ql$ turns out to be a complex with $W_s$-action. It follows that $H^i(\SB^C_z, \Ql)$ is a $W_s$-module (Springer representation of $W_s$). Then (3.7.1) can be interpreted by Springer representations as follows. \begin{thm} $H^i(\SB_x, \Ql) \simeq \Ind_{W_s}^WH^i(\SB^C_z, \Ql)$ as $W$-modules.
\end{thm} \begin{proof} This formula is a special case of Lusztig's character formula for generalized Green functions [L2, Thm. 8.5]. Concerning the proof, essentially the same argument can be applied to our setting (but ignoring the $\BF_q$-structure). We give an outline of the proof below for the sake of completeness. \par We fix $s \in \wt\Ft, z \in \Fn$ such that $[s,z] = 0$. For $x \in \wt\Fh$, let $x_s$ be its semisimple part. As in [L2, Lemma 8.6], one can find an open dense subset $\SU$ of $\Fc = \Lie Z^0_{\wt H}(s)$ satisfying the following properties; \par\medskip\noindent (3.8.1) \ $\SU$ contains 0, and \par\medskip (i) $g(\SU) = \SU$ for any $g \in C$, \par (ii) $x \in \SU$ if and only if $x_s \in \SU$, \par (iii) If $x \in \SU, g \in \wt H$, $g\iv(s + x) \in \wt\Fb$, then $g\iv x_s \in \wt\Fb$ and $g\iv s \in \wt\Fb$. \par (iv) If $x \in \SU, g \in \wt H$, $g\iv(s + x) \in \wt\Ft$, then $g\iv x_s \in \wt\Ft$ and $g\iv s \in \wt\Ft$. \par \medskip Note that $\SU$ contains $\Fc\nil$ by (ii). We define a subvariety $\wt X\flt_{\SU}$ of $\wt X\flt$ by \begin{equation*} \wt X\flt_{\SU} = \{ (s + x, g\wt B) \in \wt X\flt \mid x \in \SU \}. \end{equation*} Let $\wh\g \in \wh\vG$ be a double coset in $\wt H$, and consider the variety \begin{align*} \wt X\flt_{\SU,\wh\g} &= \{ (s + x, g\wt B) \in \wt X\flt_{\SU} \mid g \in \wh\g \} \\ &= \{ (s + x, g\wt B) \in (s + \SU) \times \wt H/\wt B \mid g \in \wh\g, g\iv(s + x) \in \wt\Fb\}. \end{align*} Then one can show that \begin{equation*} \tag{3.8.2} \wt X\flt_{\SU} = \coprod_{\wh\g \in \wh\vG}\wt X\flt_{\SU,\wh\g}, \end{equation*} where $\wt X\flt_{\SU,\wh\g}$ is an open and closed subset of $\wt X\flt_{\SU}$ for $\wh\g \in \wh\vG$. \par Let $\g \in \vG$ be the double coset in $\wt H$ corresponding to $\wh\g \in \wh\vG$. Take $g_{\g} \in \g$ and put $B_{\g} = g_{\g}\wt Bg_{\g}\iv \cap C$.
By definition, $s \in g_{\g}(\wt\Ft)$, and $B_{\g}$ is a Borel subgroup of $C$ containing a maximal torus $T_{\g} = \wt T$. By replacing $\wt H, \wt B, \wt T$ by $C = Z_{\wt H}^0(s), B_{\g}, T_{\g}$, we can define $\psi_{\g} : \wt Y_{\g} \to Y_{\g} = \Fc\reg$, $\pi_{\g} : \wt X_{\g} \to X_{\g} = \Fc$ corresponding to $\psi\flt : \wt Y\flt \to Y\flt = \wt\Fh\reg$, $\pi\flt : \wt X\flt \to X\flt = \wt\Fh$. Put \begin{equation*} \wt X_{\SU,\g} = \pi_{\g}\iv(\SU) \subset \wt X_{\g}. \end{equation*} By using the property (iv) in (3.8.1), one can show that \par\medskip\noindent (3.8.3) \ The map $(x, zB_{\g}) \mapsto (s + x, zg_{\g}\wt B)$ gives an isomorphism $\wt X_{\SU,\g} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \wt X\flt_{\SU, \wh\g}$. \par\medskip Replacing $\wt X\flt$ by $\wt Y\flt$, a similar discussion works. Put \begin{align*} \wt Y\flt_{\SU, \g} &= \{ (x, g\wt T) \in \wt Y\flt_{\SU} \mid g \in \g \}, \\ \wt Y_{\SU,\g} &= (\psi_{\g})\iv(\SU) \subset \wt Y_{\g}, \end{align*} where $\wt Y\flt_{\SU} = (\psi\flt)\iv(\wt\Fh\reg \cap (s + \SU))$. Let $\g_0$ be the coset in $\vG$ corresponding to $W_s \subset W$. Now $W$ acts on $\wt Y\flt_{\SU}$ by $w : (x, g\wt T) \mapsto (x, gw\iv \wt T)$. Then $W$ permutes the subsets $\wt Y\flt_{\SU,\g}$, which corresponds to the right action of $W$ on $\vG$. In particular, the stabilizer of $\wt Y\flt_{\SU,\g_0}$ in $W$ coincides with $W_s$. As an analogue of (3.8.2), we have \begin{equation*} \tag{3.8.4} \wt Y\flt_{\SU} = \coprod_{\g \in \vG}\wt Y\flt_{\SU, \g} \simeq \coprod_{w \in W/W_s}w(\wt Y\flt_{\SU,\g_0}), \end{equation*} where $\wt Y\flt_{\SU,\g}$ is a non-empty, open and closed subset for each $\g \in \vG$. \par Combining it with (3.8.3), the following holds. \par\medskip\noindent (3.8.5) \ The map $\pi\flt : \wt X\flt_{\SU,\wh\g} \to \wt\Fh$ is a proper map onto $s + \SU$. $\wt X\flt_{\SU,\wh\g}$ is irreducible.
$\psi\flt(\wt Y\flt_{\SU,\g}) = (s + \SU) \cap \wt\Fh\reg$, and $\wt Y\flt_{\SU,\g} = (\pi\flt)\iv\bigl((s + \SU) \cap \wt\Fh\reg\bigr) \cap \wt X\flt_{\SU,\g}$. \par\medskip Let $\SV = s + (\Fc\reg \cap \SU)$, and define \begin{align*} \wt Y\flt_{\SV} &= \{ (x, g\wt T) \in \wt Y\flt \mid x \in \SV \}, \\ (\wt Y_{\g})_{-s + \SV} &= \{ (x, gT_{\g}) \in \wt Y_{\g} \mid s + x \in \SV \}. \end{align*} Then we have the following commutative diagram \begin{equation*} \tag{3.8.6} \begin{CD} \wt Y\flt_{\SV} @<\a<< \coprod_{\g \in \vG}(\wt Y_{\g})_{-s + \SV} \\ @VVV @VVV \\ \SV @<\b<< -s + \SV, \end{CD} \end{equation*} where $\a: (x, zT_{\g}) \mapsto (s + x, zg_{\g}\wt T)$, $\b: x \mapsto s + x$, and the vertical maps are projections to the first factor. One can show that $\a$ is an isomorphism. $\b$ is also an isomorphism. \par $\wt Y\flt_{\SV}$ is invariant under the action of $W$ on $\wt Y\flt_{\SU}$. In view of (3.8.4), it is decomposed as \begin{equation*} \tag{3.8.7} \wt Y\flt_{\SV} = \coprod_{\g \in \vG}\wt Y\flt_{\SV, \g} \simeq \coprod_{w \in W/W_s}w(\wt Y\flt_{\SV,\g_0}), \end{equation*} where $\wt Y\flt_{\SV,\g} = \{ (x, g\wt T) \in \wt Y\flt \mid x \in \SV, g \in \g\}$. $\a$ gives an isomorphism $(\wt Y_{\g})_{-s + \SV} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \wt Y\flt_{\SV, \g}$ for each $\g \in \vG$. $W_s$ acts naturally on $(\wt Y_{\g_0})_{-s + \SV}$, and the map $(\wt Y_{\g_0})_{-s + \SV} \to \wt Y\flt_{\SV, \g_0}$ is $W_s$-equivariant. \par Now (3.8.6) and (3.8.7) imply that \begin{equation*} \tag{3.8.8} \b^*\bigl((\psi\flt)_!\Ql\bigr)|_{\SV} \simeq \bigoplus_{\g \in \vG}(\psi_{\g})_!\Ql|_{-s + \SV} \simeq \Ind_{W_s}^W \left((\psi_{\g_0})_!\Ql|_{-s + \SV}\right) \end{equation*} as local systems equipped with $W$-action. Here we consider $(\psi_{\g_0})_!\Ql$ as a local system with $W_s$-action. Put $K = (\pi\flt)_!\Ql$, and $K_{\g} = (\pi_{\g})_!\Ql$.
Since $K|_{Y\flt} = (\psi\flt)_!\Ql$, $K_{\g}|_{Y_{\g}} \simeq (\psi_{\g})_!\Ql$, (3.8.8) can be rewritten as \begin{equation*} \b^*(K|_{\SV}) \simeq \bigoplus_{\g \in \vG}K_{\g}|_{-s + \SV}. \end{equation*} Note that $\SV$ is an open subset of $s + \SU$. Then the above isomorphism can be extended to an isomorphism \begin{equation*} \tag{3.8.9} \b^*(K|_{s + \SU}) \simeq \bigoplus_{\g \in \vG}(K_{\g}|_{\SU}). \end{equation*} $K|_{s + \SU}$ has a natural $W$-action induced from the $W$-action on $(\psi\flt)_!\Ql$, which coincides with the $W$-action restricted from the $W$-action on $K$. Similarly, the $W_s$-action on $K_{\g_0}|_{\SU}$ induced from that on $(\psi_{\g_0})_!\Ql$ coincides with the $W_s$-action restricted from that on $K_{\g_0}$. Thus (3.8.9) can be written, as complexes with $W$-action, \begin{equation*} \b^*(K|_{s + \SU}) \simeq \Ind_{W_s}^W (K_{\g_0}|_{\SU}). \end{equation*} Now taking the stalk at $s + z \in s + \SU$ of the $i$-th cohomology sheaf on both sides, we have an isomorphism of $W$-modules. \begin{equation*} \SH^i_{s + z}K \simeq \Ind_{W_s}^W (\SH^i_z K_{\g_0}). \end{equation*} The theorem follows from this. \end{proof} \para{3.9.} Recall that $Y$ is an open dense subset of $\Fh$. We regard $Y$ as a locally closed subset of $\wt\Fh$. We consider the restriction $(\pi\flt)_!\Ql|_Y$ of $(\pi\flt)_!\Ql$ on $Y$, which inherits the $W$-module structure from $(\pi\flt)_!\Ql$. By applying Theorem 3.8, we shall prove the following result. \begin{prop} $(\pi\flt)_!\Ql|_Y$ is isomorphic to $\psi_!\Ql$ as complexes equipped with $W$-action. In particular, for $\r \in (S_k \times S_{n-k})\wg$, we have \begin{equation*} \tag{3.10.1} \IC(\wt\Fh, \SL\flt_{\wh\r})|_Y \simeq \IC(Y_k, \SL_{\r})[d_k - d_n]. \end{equation*} \end{prop} \begin{proof} The isomorphism $(\pi\flt)_!\Ql|_Y \simeq \psi_!\Ql$ follows from the base change theorem. We show that this isomorphism is compatible with $W$-action. 
For $0 \le k \le n$, we can consider $Y_k^0$ as a locally closed subvariety of $\wt\Fh$. By the base change theorem, we have $(\pi\flt)_!\Ql|_{Y_k^0} \simeq (\psi_k)_!\Ql$. $(\pi\flt)_!\Ql|_{Y_k^0}$ has a $W $-action inherited from that on $(\pi\flt)_!\Ql$. We compare this $W$-action with that of $(\psi_k)_!\Ql$. For this, we investigate the $W$-module structure of the stalk at $x \in Y^0_k$ of both cohomology sheaves. We apply Theorem 3.8 in the case where $x = s + z \in \Ft_{sr} + \FD^0_k$. In this case, $C = Z_{\wt H}^0(s) \simeq \wt Z (SL_2 \times \cdots \times SL_2)$ ($n$-factors) under the notation in 2.6, and $\Fc \simeq \wt\Fz + (\Fs\Fl_2 \oplus \cdots \oplus \Fs\Fl_2)$. $z \in \Fc\nil$ is written as $z = \sum_{i=1}^nz_i$, where $z_i \in (\Fs\Fl_2)\nil$ is such that $z_i \ne 0$ for $1 \le i \le k$ and $z_i = 0$ for $i > k$. Thus $\SB^C \simeq \BP_1^n$ and $\SB^C_z \simeq \BP_1^{n-k}$. We have \begin{equation*} \tag{3.10.2} H^{\bullet}(\BP_1^{n - k}) = (\Ql[-2] \oplus \Ql)^{\otimes (n-k)} \simeq \bigoplus_{J \subset [k+1,n]}\Ql[-2|J|]. \end{equation*} Since $W_s$ is the Weyl group of $C$, we have $W_s \simeq (\BZ/2\BZ)^n$, and the Springer representation of $W_s$ on $H^{\bullet}(\SB^C_z)$ is given by $\vf_J$ on each factor $\Ql[-2|J|]$, where $\vf_J$ is a one-dimensional representation of $(\BZ/2\BZ)^n$ such that the $i$-th factor $\BZ/2\BZ$ acts non-trivially for $i \in J$, and acts trivially for $i \in [1,n] - J$. It follows, for $|J| = k'$, that \begin{equation*} \tag{3.10.3} \Ind_{W_s}^W\vf_J = \bigoplus_{\r \in (S_{n - k'}\times S_{k'})\wg} \wh\r \otimes \Ql^{\dim \r}. \end{equation*} In particular, by applying $J = [k+1, n]$, we have \begin{equation*} \tag{3.10.4} ((\pi\flt)_!\Ql)_x \simeq H^{\bullet}(\SB_x, \Ql) \simeq \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh\r \otimes \Ql^{\dim \r}[-2(n-k)] \oplus \SN_x, \end{equation*} where $\SN_x$ is a sum of complexes $\Ql[-2i]$ with $i < n-k$. 
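\par\medskip\noindent
As a consistency check of (3.10.3) (using the standard fact, not needed in the sequel, that $\dim \wh\r = \binom{n}{k'}\dim \r$ for $\r \in (S_{n-k'} \times S_{k'})\wg$ under the bipartition parametrization of $W_n\wg$), the dimensions of both sides agree: since $W_s \simeq (\BZ/2\BZ)^n$ and $\vf_J$ is one-dimensional, we have
\begin{equation*}
\dim \Ind_{W_s}^W\vf_J = [W : W_s] = \frac{2^n n!}{2^n} = n!,
\end{equation*}
while
\begin{equation*}
\sum_{\r \in (S_{n-k'}\times S_{k'})\wg}\dim \wh\r \cdot \dim \r
= \binom{n}{k'}\sum_{\r}(\dim \r)^2 = \binom{n}{k'}\,(n-k')!\,k'! = n!.
\end{equation*}
\par\medskip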
\par On the other hand, by taking the stalk at $x \in \Fh$ on both sides of (2.13.2), and by taking into account the action of $W$ given in (2.10.5), we have \begin{equation*} \tag{3.10.5} ((\psi_k)_!\Ql)_x \simeq \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh\r \otimes \Ql^{\dim \r}[-2(n-k)] + \SN_x', \end{equation*} where $\SN_x'$ is a sum of complexes of the form $\Ql[-2i]$ with $i < n - k$. By comparing (3.10.4) and (3.10.5), we obtain the following. \par\medskip\noindent (3.10.6) \ The $W$-module structure of $(\psi_k)_!\Ql$ coincides with the $W$-module structure of $(\pi\flt)_!\Ql|_{Y^0_k}$, up to a sum of $\SL[-2i]$ with $i < n - k$ for various local systems $\SL$ on $Y^0_k$. \par\medskip Now the proof of Proposition 2.14 shows that the $W$-module structure of $\psi_!\Ql$ is completely determined from the $W$-module structure of $(\psi_k)_!\Ql$ ignoring the part $\SN_k$. A similar discussion holds also for $(\pi\flt)_!\Ql$: the $W$-module structure of $(\pi\flt)_!\Ql|_Y$ is completely determined from that of $(\pi\flt)_!\Ql|_{Y_k^0}$ ignoring the part $\SL[-2i]$ with $i < n-k$. This proves the first assertion of the proposition. (3.10.1) then follows by comparing (2.14.1) and (3.4.1). The proposition is proved. \end{proof} \par\bigskip \section{ The variety of semisimple orbits } \para{4.1.} Recall the map $\pi : \wt X \to X = \Fh$ as in 2.4. In this section, we consider $\SB$ as $H/B$ (not as $\wt H/\wt B$), and for each $x \in \Fh$ put $\SB_x = \{ gB \in H/B \mid g\iv x \in \Fb \}$. Then $\SB_x \simeq \pi\iv(x)$. Let $\SO_x$ be the $H$-orbit of $x$ in $\Fh$. Put $\nu_H = \dim U$. We have the following. \begin{equation*} \tag{4.1.1} \dim \SB_x \le \nu_H - \frac{1}{2}\dim \SO_x = \frac{1}{2}(\dim Z_H(x) - \dim T). \end{equation*} \par In fact, this formula was proved by Xue [X1, Lemma 6.1] in the case where $x \in \Fh$ is nilpotent. Although he assumes that the group is of adjoint type, the proof works for $H$ and $\Fh$.
In the general case, write $x = s + z$, where $s$: semisimple, $z$: nilpotent such that $[s,z] = 0$. Put $C = Z^0_H(s)$, and consider the variety $\SB^C_z$ defined similarly to $\SB_x$, where $\SB^C$ is the flag variety for $C$, and $z \in \Lie C$ is nilpotent. By a discussion similar to that in (3.7.1), we have $\dim \SB_x = \dim \SB^C_z$. Then (4.1.1) follows from the corresponding formula for the nilpotent case. \para{4.2.} In this section we put $W = N_H(T)/T \simeq W_n$. For $x \in \Fh$, let $x = x_s + x_n$ be the Jordan decomposition of $x$, where $x_s$ is semisimple, $x_n$ is nilpotent such that $[x_s, x_n] = 0$. By Lemma 2.3, the set of semisimple orbits in $\Fh$ is in bijection with $\Xi = \Ft/S_n$, under the natural action of $S_n \subset W_n = W$ on $\Ft$. Since $\Xi$ is identified with the set of closed orbits in $\Fh$, the Steinberg map $\w : \Fh \to \Xi$ is defined by associating to $x$ the $H$-orbit of $x_s$ (see [X1, 6.4]). \par For a semisimple element $s \in \Fh$, let $V = V_1\oplus V_2 \oplus \cdots \oplus V_a$ be the eigenspace decomposition of $s$, with $\dim V_i = n_i$ even. Then $Z_H(s) \simeq Sp_{n_1} \times \cdots \times Sp_{n_a}$. Put $\Bn = (n_1, \dots, n_a)$. Then $\Bn$ determines the structure of $Z_H(s)$ up to isomorphism, which we denote by $C(\Bn)$. Let $\SO$ be a nilpotent orbit in $\Lie C(\Bn)$. We define \begin{equation*} X_{\Bn, \SO} = \{ x \in \Fh \mid x = x_s + x_n, Z_H(x_s) \simeq C(\Bn), x_n \in \SO \}. \end{equation*} Then $X_{\Bn,\SO} = X_{\Bn',\SO'}$ if they are not disjoint, and $\Fh = \bigcup_{\Bn, \SO}X_{\Bn,\SO}$ gives a partition of $\Fh$. By considering the Steinberg map, for $x \in X_{\Bn,\SO}$, we have \begin{equation*} \dim X_{\Bn,\SO} = \dim \SO_x + \dim \{ s \in \Ft \mid Z_H(s) \simeq C(\Bn) \}. \end{equation*} In particular, for $x \in X_{\Bn,\SO}$, \begin{equation*} \tag{4.2.1} \dim X_{\Bn,\SO} \le \dim \SO_x + \dim \Ft. \end{equation*} We have a lemma. \begin{lem} The map $\pi : \wt X \to X$ is semismall.
\end{lem} \begin{proof} We know that $\wt X$ is smooth and $\pi$ is proper. For $x \in X_{\Bn,\SO}$, by (4.1.1) and (4.2.1), \begin{align*} \dim \SB_x &\le \frac{1}{2}\bigl(\dim X - (\dim \SO_x + \dim \Ft)\bigr) \\ &\le \frac{1}{2}(\dim X - \dim X_{\Bn,\SO}). \end{align*} The lemma follows. \end{proof} \para{4.4.} We consider the Steinberg variety \begin{equation*} Z = \{ (x, gB, g'B) \in \Fh \times \SB \times \SB \mid g\iv x\in \Fb, {g'}\iv x \in \Fb \}. \end{equation*} We denote by $\vf : Z \to \Fh$ the projection $(x, gB, g'B) \mapsto x$. We have a commutative diagram \begin{equation*} \begin{CD} Z @>\a>> \Ft \\ @V\vf VV @VV\w_1 V \\ \Fh @> \w>> \Xi, \end{CD} \end{equation*} where $\a : (x, gB, g'B) \mapsto p_1(g\iv x)$ ($p_1$ is the projection $\Fb \to \Ft$), $\w_1$ is the restriction of $\w$ on $\Ft$. Note that $\w_1$ is a finite morphism. Put $\s = \w_1\circ \a$, and $d' = \dim \Fh - \dim \Ft$. We define a constructible sheaf $\ST$ on $\Xi$ by \begin{equation*} \tag{4.4.1} \ST = \SH^{2d'}(\s_!\Ql) = R^{2d'}\s_!\Ql. \end{equation*} \par Recall the notion of perfect sheaves in [L1, (5.4.4)]. A constructible sheaf $\SE$ on an irreducible variety $V$ is called a perfect sheaf if it satisfies the following two conditions; (a) there exists an open dense smooth subset $V_0$ of $V$ such that $\SE|_{V_0}$ is locally constant, and that $\SE = \IC(V, \SE|_{V_0})$, (b) the support of any non-zero constructible subsheaf of $\SE$ is $V$. \par Perfect sheaves enjoy the following properties; if $\pi: V' \to V$ is a finite morphism with $V'$ smooth, and $\SE'$ is a locally constant sheaf on $V'$, then $\SE = \pi_*\SE'$ is a perfect sheaf. Moreover, if $0 \to \SE_1 \to \SE_2 \to \SE_3 \to 0$ is an exact sequence of constructible sheaves on $V$ such that $\SE_1, \SE_3$ are perfect, then $\SE_2$ is perfect. \par We have the following lemma. \begin{lem} The sheaf $\ST$ is a perfect sheaf on $\Xi$.
\end{lem} \begin{proof} The lemma can be proved by an argument similar to that in the proof of Theorem 5.5 in [L1]. For the sake of completeness, and in order to fix the notation, we give the proof below. Let $p : Z \to \SB \times \SB$ be the projection $(x, gB, g'B) \mapsto (gB, g'B)$. $\SB \times \SB$ is decomposed into $H$-orbits, $\SB \times \SB = \coprod_{w \in W }\SO_w$, where $\SO_w$ is the $H$-orbit containing $(B, wB)$. Put $Z_w = p\iv(\SO_w)$ for each $w \in W$. $Z_w \to \SO_w$ is a locally trivial fibration with fibre isomorphic to $\Fb \cap w\Fb$. Let $\a_w$ be the restriction of $\a$ on $Z_w$. Then $\a_w$ is a locally trivial fibration with fibre isomorphic to \begin{equation*} \tag{4.5.1} H \times^{(B \cap wBw\iv)}(\Fn \cap w\Fn), \end{equation*} where $\Fn = \Lie U$. In particular, $\dim \a_w\iv(s) = d'$ for any $s \in \Ft$. Moreover, each fibre is an irreducible variety. Let $\s_w$ be the restriction of $\s$ on $Z_w$, and put $\ST_w = \SH^{2d'}((\s_w)_!\Ql)$. It follows from the above remark that \begin{equation*} R^{2d'}(\a_w)_!\Ql \simeq \Ql. \end{equation*} Since $\w_1$ is a finite morphism, we have \begin{equation*} \tag{4.5.2} R^{2d'}(\s_w)_!\Ql \simeq R^0(\w_1)_!R^{2d'}(\a_w)_!\Ql \simeq (\w_1)_!\Ql. \end{equation*} It follows that $\ST_w$ is a perfect sheaf since $\w_1$ is finite. By (4.5.1), $\a_w\iv(s)$ is a vector bundle over $\SO_w$, and $\SO_w$ is a vector bundle over $\SB$. It follows that $H^i_c(\a_w\iv(s),\Ql) = 0$ for odd $i$. This implies that $R^i(\s_w)_!\Ql = 0$ for odd $i$. Now we have a filtration $Z = \coprod_{w \in W}Z_w$ by locally closed subvarieties $Z_w$. For an integer $m \ge 0$, let $Z_m$ be the union of $Z_w$ such that $\dim \SO_w = m$, and put $Z_{\le m} = \coprod_{m' \le m}Z_{m'}$. Then $Z_{\le m}$ is closed in $Z$, and $Z_m$ is open in $Z_{\le m}$. Let $\s_m$ be the restriction of $\s$ on $Z_m$, and define $\s_{\le m}$ similarly. Since $R^i(\s_w)_!\Ql = 0$ for odd $i$, we have $R^i(\s_m)_!\Ql = 0$ for odd $i$.
Thus we have an exact sequence \begin{equation*} \begin{CD} 0 @>>> R^{2d'}(\s_m)_!\Ql @>>> R^{2d'}(\s_{\le m})_!\Ql @>>> R^{2d'}(\s_{\le m-1})_!\Ql @>>> 0. \end{CD} \end{equation*} Since $R^{2d'}(\s_m)_!\Ql = \bigoplus_{\dim \SO_w = m}\ST_w$ is a perfect sheaf on $\Xi$, by induction on $m$, we see that $R^{2d'}\s_!\Ql$ is a perfect sheaf. The lemma is proved. \end{proof} \begin{prop} $\ST \simeq \bigoplus_{w \in W}\ST_w$ as sheaves on $\Xi$. \end{prop} \begin{proof} Recall $\Ft_{\sr}$ in (2.2.2), and put $\Xi_{\sr} = \w_1(\Ft_{\sr})$. Then $\Xi_{\sr}$ is an open dense subset of $\Xi$. Since $\ST$ and $\bigoplus_{w \in W}\ST_w$ are perfect sheaves on $\Xi$, it is enough to show that their restrictions on $\Xi_{\sr}$ are isomorphic. Put $Z_0 = \s\iv(\Xi_{\sr})$. Then $Z_0 \simeq \wt Y \times_Y \wt Y$, where $\psi : \wt Y \to Y$ is as in 2.4. Let $\s_0$ be the restriction of $\s$ on $Z_0$, which is the composite of the natural map $Z_0 \to Y$ with the map $Y \to \Xi_{\sr}$. (Note that for any $s \in \Ft_{\sr}$, $\w\iv(\w_1(s)) = \bigcup_{g \in H}g(s + \FD) = Y$). The restriction of $\ST$ on $\Xi_{\sr}$ is isomorphic to $R^{2d'}(\s_0)_!\Ql$. Recall that $\wt Y^+_k = \coprod_I\wt Y_I$ for $0 \le k \le n$ in (2.8.1). For any $I \subset [1,n]$ such that $|I| = k$, we consider the map $\psi_I : \wt Y_I \to Y_k^0$ as in 2.7. Let $\wh Y_I$ be as in (2.7.3). Then $\psi_I$ is decomposed as $\psi_I = \eta_I\circ \xi_I$, where $\eta_I$ is a finite Galois covering with Galois group $\SW_I$, and $\xi_I$ is a locally trivial fibration with fibre isomorphic to $\BP_1^{I'}$. For subsets $I, J$ of $[1,n]$ such that $|I| = |J| = k$, put $\wt Z_{IJ} = \wt Y_I \times_{Y_k^0}\wt Y_J$ under the inclusion $Y^0_k \hra Y$. We have a partition $Z_0 = \coprod_{I,J}\wt Z_{IJ}$ by locally closed subsets $\wt Z_{IJ}$. We define $\wh Z_{IJ} = \wh Y_I \times_{Y_k^0}\wh Y_J$.
The natural map $\vf_{IJ} : \wt Z_{IJ} \to Y_k^0$ is decomposed as $\vf_{IJ} = \eta_{IJ}\circ \xi_{IJ}$, where $\eta_{IJ} : \wh Z_{IJ} \to Y_k^0$ is a finite Galois covering with Galois group $\SW_I \times \SW_J$, and $\xi_{IJ} : \wt Z_{IJ} \to \wh Z_{IJ}$ is a locally trivial fibration with fibre isomorphic to $\BP_1^{I'} \times \BP_1^{J'}$. \par Let $\a_0 : Z_0 \to \Ft$ be the restriction of $\a$ on $Z_0$, and $\a_0^w$ the restriction of $\a_0$ on $Z_0 \cap Z_w$. For $s \in \Ft_{\sr}$, we have a partition of $\wt Z_{IJ} \cap \a_0\iv(s)$, \begin{equation*} \wt Z_{IJ} \cap \a_0\iv(s) = \coprod_{w \in W}\bigl((\a_0^w)\iv(s) \cap \wt Z_{IJ}\bigr). \end{equation*} By using the property of $\vf_{IJ} = \eta_{IJ}\circ \xi_{IJ}$ mentioned above, we see that $(\a^w_0)\iv(s) \cap \wt Z_{IJ}$ is an open and closed subset of $\wt Z_{IJ}$ for any $w \in W$. Moreover, the odd cohomology of $(\a^w_0)\iv(s) \cap \wt Z_{IJ}$ vanishes. It follows that \begin{equation*} \tag{4.6.1} (\a_{IJ})_!\Ql \simeq \bigoplus_{w \in W }(\a^w_{IJ})_!\Ql, \end{equation*} where $\a_{IJ}$ is the restriction of $\a_0$ on $\wt Z_{IJ}$, and $\a^w_{IJ}$ is the restriction of $\a_0^w$ on $Z_w \cap \wt Z_{IJ}$. Moreover, we have $R^{i}(\a^w_{IJ})_!\Ql = 0$ for odd $i$. By considering the long exact sequence arising from the filtration $Z_0 = \coprod_{I,J}\wt Z_{IJ}$, (4.6.1) implies that \begin{equation*} \tag{4.6.2} R^{2d'}(\a_0)_!\Ql \simeq \bigoplus_{w \in W}R^{2d'}(\a^w_0)_!\Ql. \end{equation*} By applying $R^0(\w_1)_!$ on both sides of (4.6.2), we have \begin{equation*} R^{2d'}(\s_0)_!\Ql \simeq \bigoplus_{w \in W } R^{2d'}(\s^w_0)_!\Ql, \end{equation*} where $\s^w_0$ is the restriction of $\s_0$ on $Z_0 \cap Z_w$. This shows that the restrictions of $\ST$ and of $\bigoplus_w\ST_w$ on $\Xi_{\sr}$ are isomorphic. The proposition is proved. \end{proof} \para{4.7.} By the K\"unneth formula, we have $\vf_!\Ql \simeq \pi_!\Ql \otimes \pi_!\Ql$.
Since $(\pi\flt)_!\Ql$ is a complex with $W$-action by Proposition 3.4, $\pi_!\Ql = (\pi\flt)_!\Ql|_{\Fh}$ has the action of $W$ inherited from $(\pi\flt)_!\Ql$. Hence $\vf_!\Ql$ has a natural action of $W \times W$. It follows that $\ST = \SH^{2d'}(\s_!\Ql) \simeq \SH^{2d'}(\w_!\vf_!\Ql)$ is a sheaf equipped with $W \times W$-action. We note that under the decomposition of $\ST$ in Proposition 4.6, the action of $W \times W$ has the following property; for each $w_1, w_2 \in W$, \begin{equation*} \tag{4.7.1} (w_1, w_2)\cdot \ST_w = \ST_{w_1ww_2\iv}. \end{equation*} In fact, since $\ST$ is a perfect sheaf by Lemma 4.5, it is enough to check the relation (4.7.1) for the restriction of $\ST$ on $\Xi_{\sr}$. The action of $\SW_I \times \SW_J$ on $(\vf_{IJ})_!\Ql$ is extended to the action of $\wt\SW _I\times \wt\SW_J$ (here $\wt \SW_I = \SW_I\ltimes (\BZ/2\BZ)^n$) so that the $i$-th factor $\BZ/2\BZ$ of $\wt\SW_I$ acts trivially if $i \in I$ and acts non-trivially if $i \in I'$, and similarly for $\wt\SW_J$. This action induces an action of $W \times W$ on $(\vf_0)_!\Ql$, which is nothing but the action of $W \times W$ inherited from the action of $W$ on $\pi_!\Ql$ by Proposition 3.10. Then a relation similar to (4.7.1) for $(\vf_{IJ})_!\Ql$ under the decomposition \begin{equation*} (\vf_{IJ})_!\Ql \simeq \bigoplus_{w \in W}(\vf^w_{IJ})_!\Ql, \end{equation*} where $\vf^w_{IJ}$ is the restriction of $\vf_{IJ}$ on $\wt Z_{IJ} \cap Z_w$, can be verified directly by using the decomposition $\vf_{IJ} = \eta_{IJ}\circ \xi_{IJ}$. Thus (4.7.1) is proved. \par We consider the cohomology group $H^{2n}_c(\Xi, \ST)$. The following fact holds. \begin{prop} $H^{2n}_c(\Xi,\ST)$ has a structure of $W \times W$-module, which is isomorphic to the two-sided regular representation of $W$. \end{prop} \begin{proof} Since $\ST$ is a sheaf with $W \times W$-action, $H^i_c(\Xi, \ST)$ has a structure of $W \times W$-module.
By Proposition 4.6, we have a decomposition \begin{equation*} H^{2n}_c(\Xi,\ST) \simeq \bigoplus_{w \in W}H^{2n}_c(\Xi, \ST_w). \end{equation*} By (4.5.2) \begin{equation*} H^{2n}_c(\Xi,\ST_w) \simeq H^{2n}_c(\Xi, (\w_1)_!\Ql) \simeq H^{2n}_c(\Ft, \Ql) = \Ql \end{equation*} since $\dim \Ft = n$. The proposition then follows from (4.7.1). \end{proof} The following lemma, originally due to Lusztig [L1, Lemma 6.7], was proved in [Sh, Lemma 7.6]. Note that in [Sh] it is stated for the unipotent variety, but it works for any variety. Actually, this result is proved in [L2, (7.4.2)], in full generality. \begin{lem} Let $A, A'$ be simple perverse sheaves on $X = \Fh$. Then we have \begin{equation*} \dim \BH^0_c(X, A \otimes A') = \begin{cases} 1 &\quad\text{ if } A' \simeq D(A), \\ 0 &\quad\text{ otherwise, } \end{cases} \end{equation*} where $D(A)$ is the Verdier dual of $A$. \end{lem} \para{4.10.} Recall that $\pi : \wt X \to X$ is semismall by Lemma 4.3, and so $K = \pi_!\Ql[d]$ is a semisimple perverse sheaf on $X = \Fh$, where $d = \dim X = \dim \Fh$. We can write it as \begin{equation*} \tag{4.10.1} K = \bigoplus_A V_A\otimes A, \end{equation*} where $A$ is a simple perverse sheaf and $V_A = \Hom (K, A)$ is the multiplicity space for $A$. We have the following. \begin{prop} Put $m_A = \dim V_A$ for each $A$. Then we have \begin{equation*} \sum_Am^2_A = |W |. \end{equation*} \end{prop} \begin{proof} We have \begin{align*} H^{2n}_c(\Xi, \ST) = H^{2n}_c(\Xi, R^{2d'}\w_!(\pi_!\Ql\otimes\pi_!\Ql)). \end{align*} Consider the spectral sequence \begin{equation*} H^i_c(\Xi, R^j\w_!(\pi_!\Ql\otimes\pi_!\Ql)) \Longrightarrow \BH^{i+j}_c(X, \pi_!\Ql\otimes\pi_!\Ql). \end{equation*} Since $\dim X = d' + n = d$, $\dim \Xi = n$, we have \begin{align*} H^{2n}_c(\Xi, \ST) &\simeq \BH^{2d}_c(X, \pi_!\Ql\otimes\pi_!\Ql) \\ &\simeq \BH^0_c(X, K\otimes K).
\end{align*} Hence by (4.10.1), we have \begin{equation*} \dim H^{2n}_c(\Xi,\ST) = \sum_{A,A'}m_Am_{A'}\dim \BH^0_c(X, A\otimes A'). \end{equation*} By Lemma 4.9, $\BH^0_c(X, A\otimes A') \ne 0$ only when $D(A) \simeq A'$, in which case we have $\dim \BH^0_c(X, A\otimes A') = 1$. But since $K$ is self-dual, $m_A = m_{D(A)}$ for each $A$. It follows that $\dim H^{2n}_c(\Xi,\ST) = \sum_Am_A^2$. On the other hand, by Proposition 4.8, $\dim H^{2n}_c(\Xi,\ST) = |W |$. The proposition is proved. \end{proof} \par\bigskip \section{ Intersection cohomology on $\Fs\Fp(V)$ } \para{5.1.} In Section 3, we have considered the intersection cohomologies on $\wt\Fh$. In this section, we consider their restriction on $\Fh$. We follow the notation in 2.2. In particular, $\Fn_s = \bigoplus_{\a \in \Phi^+_s}\Fg_{\a}$ and $\FD = \bigoplus_{\a \in \Phi^+_{l}}\Fg_{\a}$ are subspaces of $\Fh$ such that $\Fn = \Fn_s \oplus \FD$. Put $\FN_0 = \bigcup_{g \in B}g(\Ft)$. Let $\ol\FN_0$ be the closure of $\FN_0$ in $\Fh$. We show the following lemma. \begin{lem} \begin{enumerate} \item $\ol\FN_0 = \Ft \oplus \Fn_s$. In particular, $\Ft\oplus \Fn_s$ is $B$-stable. \item Let $\Fh_{ss}$ be the set of semisimple elements in $\Fh$. Then \begin{equation*} \tag{5.2.1} \ol\Fh_{ss} = \bigcup_{g \in H}g(\Ft\oplus \Fn_s). \end{equation*} Moreover, $\dim \ol\Fh_{ss} = \dim \Fh - 2n$. \end{enumerate} \end{lem} \begin{proof} First we show that \begin{equation*} \tag{5.2.2} \bigcup_{g \in B}g(\Ft) \subset \Ft \oplus \Fn_s. \end{equation*} Any $g \in B$ can be written by (1.10.1) as \begin{equation*} g = \begin{pmatrix} b & c \\ 0 & {}^tb\iv \end{pmatrix}, \end{equation*} where $b,c$ are square matrices of degree $n$, with $b$ non-singular upper triangular, and $c$ satisfies the condition that ${}^tc = b\iv c\,{}^tb$. Then $g\iv$ can be written as \begin{equation*} g\iv = \begin{pmatrix} b\iv & -b\iv c\, {}^tb \\ 0 & {}^t b \end{pmatrix} = \begin{pmatrix} b\iv & {}^tc \\ 0 & {}^tb \end{pmatrix}.
\end{equation*} Thus, for a diagonal matrix $s$ of degree $n$, we have \begin{equation*} x = g\begin{pmatrix} s & 0 \\ 0 & s \end{pmatrix}g\iv = \begin{pmatrix} bs b\iv & bs\,{}^tc + cs\,{}^tb \\ 0 & {}^tb\iv s\, {}^tb \end{pmatrix}. \end{equation*} Since $bs\,{}^tc + cs\,{}^tb$ is of the form $A + {}^tA$ for a square matrix $A$, its diagonal entries are all zero, hence $x \in \Ft \oplus \Fn_s$. (5.2.2) holds. \par Recall that, for $k = 0$, $\wt Y_0 \simeq H \times^B\FN_{0,\sr}$, where $\FN_{0,\sr} = \bigcup_{g \in B}g(\Ft_{\sr})$. Hence $\dim \wt Y_0 = \dim H - \dim B + \dim \FN_{0,\sr}$. Since $\dim \wt Y_0 = \dim H - n$ by Lemma 2.9 (iii), we see that $\dim \FN_{0,\sr} = \dim B - n$. We have \begin{equation*} \FN_{0,\sr} \subset \FN_0 \subset \Ft \oplus \Fn_s \end{equation*} by (5.2.2). Since $\dim (\Ft\oplus \Fn_s) = \dim B - n$, we have $\dim \FN_0 = \dim (\Ft \oplus \Fn_s)$. Since $\FN_0$ is irreducible, (i) holds. \par Put $\wt X_0 = H \times^B(\Ft\oplus\Fn_s)$ and $X_0 = \bigcup_{g \in H}g(\Ft\oplus\Fn_s)$. The map $\pi^{(0)} : \wt X_0 \to \Fh$ is proper and $\Im\pi^{(0)} = X_0$. Hence $X_0$ is a closed subset of $\Fh$. Recall that $Y_0 = \bigcup_{g \in H}g(\Ft_{\sr})$. Then $Y_0 \subset \bigcup_{g \in H}g(\Ft) \subset X_0$. Since $Y_0$ is open dense in $\Fh_{ss}$, $\ol Y_0 = \ol \Fh_{ss}$. Hence $\ol\Fh_{ss} \subset X_0$. On the other hand, $\bigcup_{g \in B}g(\Ft) = \FN_0$ is contained in $\Fh_{ss}$. Hence $\Ft\oplus \Fn_s \subset \ol\Fh_{ss}$ by (i), and so $X_0 \subset \ol\Fh_{ss}$. It follows that $X_0 = \ol\Fh_{ss}$. (5.2.1) is proved. The last assertion follows from Lemma 2.9 (iv) since $\dim \ol\Fh_{ss} = \dim Y_0$. \end{proof} \remark{5.3.} In the case of reductive Lie algebras $\Fg$ with $p \ne 2$, clearly the closure of $\bigcup_{g \in B}g(\Ft)$ coincides with $\Fb$, and $\ol\Fg_{ss} = \Fg$ holds. Lemma 5.2 gives a special phenomenon occurring in the case where $p = 2$ and regular semisimple elements do not exist.
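\par\medskip
As an illustration of Lemma 5.2 and the remark above (under the identifications used in this paper), consider the case $n = 1$, $H = Sp_2 = SL_2$ with $p = 2$. Here $\Ft$ consists of the matrices $\Diag(s, s)$, i.e., of the scalar matrices, $\Phi^+_s = \emptyset$ so that $\Fn_s = 0$, and $\FD = \Fg_{2\ve_1}$. Hence $\ol\FN_0 = \Ft$, and $\ol\Fh_{ss} = \bigcup_{g \in H}g(\Ft) = \Ft$ is the set of scalar matrices, with
\begin{equation*}
\dim \ol\Fh_{ss} = 1 = \dim \Fh - 2n.
\end{equation*}
In particular, $Z_H(s) = H$ for any semisimple $s \in \Fh$, so that regular semisimple elements indeed do not exist in this case.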
\para{5.4.} We shall generalize Lemma 5.2 in connection with $\FN_{k,\sr}$. For $k = 0,1, \dots, n$, put \begin{equation*} \tag{5.4.1} \FN_k = \bigcup_{g \in B}g(\Ft + \FD_k). \end{equation*} Note that for $k = 0$, $\FN_k$ coincides with the previous notation. Let $\ol\FN_k$ be the closure of $\FN_k$ in $\Fh$. We define varieties \begin{align*} \tag{5.4.2} \wt X_k &= \{(x, gB) \in \Fh \times H/B \mid g\iv x \in \ol\FN_k \}, \\ X_k &= \bigcup_{g \in H}g(\ol\FN_k), \end{align*} and define a map $\pi^{(k)}: \wt X_k \to \Fh$ by $(x, gB) \mapsto x$. Then $\pi^{(k)}$ is proper, and $\Im \pi^{(k)} = X_k$. Hence $X_k$ is closed in $\Fh$. We show a lemma. \begin{lem} \begin{enumerate} \item $\ol\FN_k = \Ft\oplus\Fn_s \oplus \FD_k$ for each $k$. In particular, $\Ft\oplus\Fn_s\oplus \FD_k$ is $B$-stable. \item $X_0 = \ol\Fh_{ss}$ and $X_n = \Fh$. In particular, the map $\pi^{(n)} : \wt X_n \to X_n$ coincides with the map $\pi : \wt X \to X$ given in 2.4. \end{enumerate} \end{lem} \begin{proof} First we show \begin{equation*} \tag{5.5.1} \bigcup_{g \in B}g(\FD_k) \subset \bigoplus_{\substack{\a \in \Phi^+ \\ \a = \ve_i + \ve_j }}\Fg_{\a} \oplus \FD_k. \end{equation*} \par In fact, $x \in \FD_k$ is written as $x = \begin{pmatrix} 0 & d \\ 0 & 0 \end{pmatrix}$, where $d = \Diag (d_1, \dots, d_n)$ is a diagonal matrix with $d_i = 0$ for $i > k$. Thus under the notation in the proof of Lemma 5.2, for $g \in B$, we have \begin{equation*} gxg\iv = \begin{pmatrix} 0 & bd\, {}^tb \\ 0 & 0 \end{pmatrix} \in \bigoplus_{\a = \ve_i + \ve_j}\Fg_{\a} \oplus \FD. \end{equation*} Put $y = (y_{ij}) = bd\, {}^tb$, and $b = (b_{ij})$. Then $y_{ii} = \sum_{j \ge i}b_{ij}^2d_j$. It follows that $y_{ii} = 0$ for $i > k$, and (5.5.1) holds. \par Since we know $\bigcup_{g \in B}g(\Ft) \subset \Ft\oplus\Fn_s$ by Lemma 5.2, we have \begin{equation*} \tag{5.5.2} \FN_k = \bigcup_{g \in B}g(\Ft + \FD_k) \subset \Ft \oplus \Fn_s \oplus \FD_k. 
\end{equation*} \par On the other hand, since $\FN_{k,\sr} \subset \FN_k$ and $\wt X_k \simeq H \times^B\ol\FN_k$, $\wt Y_k$ is regarded as a subvariety of $\wt X_k$. It follows, by (5.5.2), that \begin{align*} \tag{5.5.3} \dim \wt Y_k &\le \dim \wt X_k \\ &= \dim H - \dim B + \dim \FN_k \\ &\le \dim H - \dim B + \dim (\Ft\oplus \Fn_s \oplus \FD_k) \\ &= \dim H - n + k. \end{align*} We know $\dim \wt Y_k = \dim H - n + k$ by Lemma 2.9. Hence the inequalities in (5.5.3) are actually equalities, and we have $\dim \FN_k = \dim (\Ft\oplus\Fn_s\oplus\FD_k)$. Since $\FN_k$ is irreducible, we conclude that $\ol\FN_k = \Ft\oplus\Fn_s\oplus\FD_k$. This proves (i). \par For (ii), we know $X_0 = \ol\Fh_{ss}$ by Lemma 5.2. Since $\ol \FN_n = \Fb$ by (i), we have $X_n = X$. Thus (ii) holds. The lemma is proved. \end{proof} As a corollary, we have the following. \begin{prop} For $k = 0, 1, \dots, n$, we have \begin{enumerate} \item $\wt X_k$ is a smooth, irreducible variety. \item $X_k$ is a closed subset of $\Fh$, with $X_k = \bigcup_{g \in H}g(\Ft\oplus\Fn_s\oplus\FD_k)$. \item $\wt Y_k$ is open dense in $\wt X_k$, and $Y_k$ is open dense in $X_k$. \item $\dim \wt X_k = \dim H - n + k$, $\dim X_k = \dim H - 2n + 2k$. \end{enumerate} \end{prop} \begin{proof} By Lemma 5.5, $\wt X_k \simeq H \times^B(\Ft\oplus\Fn_s\oplus\FD_k)$. Hence $\wt X_k$ is smooth, irreducible. This proves (i). (ii) also follows from Lemma 5.5. We have $\wt Y_k \simeq H \times^B\FN_{k,\sr}$, and $\FN_{k,\sr}$ is open dense in $\Ft\oplus\Fn_s\oplus\FD_k$. Hence $\wt Y_k$ is open dense in $\wt X_k$. $Y_k$ coincides with the subset of $X_k$ consisting of $x \in X_k$ whose semisimple part $x_s$ is contained in $Y_0 = \bigcup_{g \in H}g(\Ft_{\sr})$. Since $Y_0$ is open dense in $\Fh_{ss}$, $Y_k$ is open dense in $X_k$. This proves (iii). Then (iv) follows from Lemma 2.9. \end{proof} We shall prove the following theorem.
\begin{thm} $\pi_!\Ql[d_n]$ is a semisimple perverse sheaf on $X = \Fh$, equipped with the action of $W_n $, and is decomposed as \begin{equation*} \pi_!\Ql[d_n] \simeq \bigoplus_{0 \le k \le n} \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh\r \otimes \IC(X_k, \SL_{\r})[d_k]. \end{equation*} \end{thm} \para{5.8.} As in the case of $Y_k^0$, put $X_k^0 = X_k - X_{k-1}$. By (2.6.3), $Y_k^0 \subset X_k^0$. Hence $Y_k^0$ is open dense in $X_k^0$. For each $k$, we define a locally closed subvariety $\wt X^+_k$ of $\wt X$ by $\wt X^+_k = \pi\iv(X_k^0)$. Let $\pi_k : \wt X^+_k \to X_k^0$ be the restriction of $\pi$ on $\wt X^+_k$. \par Take $s \in \Ft$, and consider the decomposition $V = V_1 \oplus \cdots \oplus V_a$ into eigenspaces of $s$, with $\dim V_i = 2n_i$. Then $Z_H(s) \simeq Sp(V_1) \times \cdots \times Sp(V_a)$. Put $H_i = Sp(V_i)$ and $\Fh_i = \Fs\Fp(V_i)$. We consider the corresponding decomposition $\FD = \FD^1 \oplus\cdots \oplus\FD^a$ with $\FD^i \subset \Fh_i$. The subvariety $X^{H_i,0}_{k_i}$ of $\Fh_i$ is defined similarly to $X^0_k$ by replacing $H, \Fh, \FD$, etc. by $H_i, \Fh_i, \FD^i$, etc. \par For each $(x, gB) \in \wt X^+_k$, we associate a subset $I \subset [1,n]$ such that $|I| = k$ as follows. By definition, $g\iv x \in X_k^0 \cap \Fb$. In the case where $g\iv x \in \Fn$, put $I = [1,k]$. In general, by replacing $g\iv x \in \Fb$ by its $B$-conjugate if necessary, we may assume that $g\iv x = s + z$, where $s \in \Ft, z \in \Fn$ with $[s,z] = 0$, hence $z \in \Lie Z_H(s)$. Then $z$ can be written as $z = \sum_iz_i$ with $z_i \in \Fh_i$. There exists $k_i$ such that $z_i \in X^{H_i,0}_{k_i}$ for $i = 1, \dots, a$ and that $\sum_{i=1}^ak_i = k$. We put $I_i' = [1, k_i] \subset [1, n_i]$, which corresponds to the nilpotent case for $H_i$. Now $V = V_1\oplus\cdots \oplus V_a$ gives a partition $[1,n] = \coprod_iJ_i$ such that $|J_i| = n_i$. 
Under the correspondence $J_i \lra [1, n_i]$, $I_i'$ gives a subset $I_i \subset J_i$ (the first $k_i$ letters in $J_i$). Put $I = \coprod_{i=1}^a I_i$. Thus $I$ is a subset of $[1,n]$ such that $|I| = k$. Note that $I$ depends only on the $B$-conjugates of $g\iv x$. Thus $I$ is determined by $(x, gB) \in \wt X^+_k$. We denote this assignment by $(x, gB) \mapsto I$. For each $I$ with $|I| = k$, we define a subset $\wt X_I$ of $\wt X^+_k$ by \begin{equation*} \tag{5.8.1} \wt X_I = \{ (x, gB) \in \wt X^+_k \mid (x, gB) \mapsto I \}. \end{equation*} We show the following lemma. \begin{lem} $\wt X^+_k$ is decomposed as \begin{equation*} \tag{5.9.1} \wt X^+_k = \coprod_{\substack{I \subset [1,n] \\ |I| = k}} \wt X_I, \end{equation*} where $\wt X_I$ is an irreducible component of $\wt X^+_k$ for each $I$. \end{lem} \begin{proof} It is clear from the definition that the $\wt X_I$ are mutually disjoint, and give the partition (5.9.1) of $\wt X^+_k$. We show that $\wt X_I$ is irreducible. In fact, $X_k \cap \Fh\nil = \bigcup_{g \in H}g(\Fn_s\oplus\FD_k)$ is irreducible. The set of $s \in \Ft$ such that the eigenspace decomposition of $V$ gives a fixed partition $[1,n] = \coprod_iJ_i$ is irreducible. Hence the set of $x = s + z \in X^0_k \cap \Fb$ with $s$ as above is irreducible. Since $\wt X_I$ is the set of $H$-conjugates of $(x, B)$ with $x$ as above, $\wt X_I$ is irreducible. Now $\wt Y_I$ is the subset of $\wt X_I$ consisting of those $(x, gB)$ such that $g\iv x = s + z$ with $s \in \Ft_{\sr}$. Thus $\wt Y_I$ is an open dense subset of $\wt X_I$. Since $\wt Y^+_k = \coprod_I \wt Y_I$, with $\wt Y_I$ irreducible, $\wt X^+_k = \bigcup_I\ol{\wt Y_I}$ gives a decomposition into irreducible components, where $\ol{\wt Y_I}$ is the closure of $\wt Y_I$ in $\wt X^+_k$. In order to prove the lemma, it is enough to see that $\wt X_I$ is closed. But the set $Z_I = \bigcup_{I'}\wt X_{I'}$ is closed in $\wt X$, where $I'$ runs over all subsets of $[1,n]$ such that $I' \subseteq I$.
Hence $\wt X_I = Z_I \cap \wt X^+_k$ is closed in $\wt X^+_k$. The lemma is proved. \end{proof} \para{5.10.} For each $x \in X^0_k$, we associate an $x$-stable isotropic subspace $W_x \subset V$ with $\dim W_x = k$ as follows. Assume that $x \in X^0_k \cap \Fb$. Up to $B$-conjugate, we can write $x = s + z$ with $s \in \Ft, z \in \Fn, [s, z] = 0$. In the case where $x \in \Fn$, namely $s = 0$, let $W_x$ be the subspace of $V$ spanned by $e_1, \dots, e_k$. Then $W_x$ is a $B$-stable isotropic subspace of $V$, and is $x$-stable. For $x = s + z \in \Fb$ in general, as in the discussion in 5.8, $z$ can be written as $z = \sum_{i=1}^az_i$ with $z_i \in X^{H_i,0}_{k_i}$ for some $k_i$ such that $0 \le k_i \le n_i$ and that $\sum k_i = k$. We define an isotropic subspace $W_i \subset V_i$ for each $i$, by applying the above discussion to the nilpotent element $z_i \in \Fh_i$, and put $W_x = W_1\oplus \cdots \oplus W_a$. Then $W_x$ is a $(B \cap Z_H(s))$-stable isotropic subspace of $V$ with $\dim W_x = k$. In general, for $x \in X^0_k$, one can find $g \in H$ such that $g\iv x \in \Fb$, and that $g\iv x$ can be written as $g\iv x = x' = s + z$ as above. If we fix such $s$, the choice of $g$ is unique up to $(B \cap Z_H(s))$-conjugate. We define $W_{x'}$ as above, and put $W_x = g(W_{x'})$. This $W_x$ satisfies the required property. \par $W_x^{\perp}/W_x$ has a natural symplectic structure, and we put $H_x = Sp(W_x^{\perp}/W_x)$. The action of $x$ on $W_x^{\perp}$ induces $x|_{W_x^{\perp}/W_x} \in \Lie H_x$. One can check that $x|_{W_x^{\perp}/W_x}$ is contained in $X_0^{H_x}$, where $X_0^{H_x}$ is defined similarly to $X_0$ by replacing $H$ by $H_x$. \para{5.11.} For $i = 1, \dots, n$, let $M_i$ be the isotropic subspace of $V$ spanned by $e_1, \dots, e_i$. Put $\ol M_i = M_i^{\perp}/M_i$. $\ol M_i$ has a natural symplectic structure. We fix $k$, and consider $G_1 = GL(M_k)$, $H_2 = Sp(\ol M_k)$. Also put $\Fg_1 = \Lie G_1, \Fh_2 = \Lie H_2$.
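As an illustration of this setup (a side remark, not part of the argument), consider the smallest nontrivial case $n = 2$, $k = 1$, so that $\dim V = 4$. The line $M_1$ is automatically isotropic, $\dim M_1^{\perp} = 3$ and $\dim \ol M_1 = 2$, so that

```latex
\begin{equation*}
G_1 = GL(M_1) \simeq GL_1, \qquad H_2 = Sp(\ol M_1) \simeq Sp_2,
\qquad \Fg_1 \simeq \Fg\Fl_1, \qquad \Fh_2 \simeq \Fs\Fp_2.
\end{equation*}
```

In general $G_1 \simeq GL_k$ and $H_2 \simeq Sp_{2(n-k)}$, since $\dim \ol M_k = 2n - 2k$.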
Let $X'^0_{k'}$ be the subvariety of $X' = \Fh_2$ defined similarly to $X^0_k$ by replacing $H$ by $H_2$. We consider $X'^0_0 = X'_0$ for $k' = 0$. \par We define a variety $\SG_k$ by \begin{align*} \tag{5.11.1} \SG_k = \{ (x, &\f_1, \f_2) \mid x \in X^0_k, \\ &\f_1 : W_x \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, M_k, \f_2 : W_x^{\perp}/W_x \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ol M_k (\text{ symplectic isom.}) \}. \end{align*} We consider the diagram \begin{equation*} \tag{5.11.2} \begin{CD} \Fg_1 \times X'_0 @<\s<< \SG_k @>q>> X^0_k, \end{CD} \end{equation*} where \begin{align*} q &: (x, \f_1, \f_2) \mapsto x, \\ \s &: (x, \f_1, \f_2) \mapsto (\f_1(x|_{W_x})\f_1\iv, \f_2(x|_{W_x^{\perp}/W_x})\f_2\iv). \end{align*} \par $H \times (G_1 \times H_2)$ acts on $\SG_k$ by \begin{equation*} (g, (h_1, h_2)) : (x, \f_1, \f_2) \mapsto (gx, h_1\f_1g\iv, h_2\f_2g\iv) \end{equation*} for $g \in H, h_1 \in G_1, h_2 \in H_2$. Moreover, $\s$ is $H \times (G_1 \times H_2)$-equivariant with respect to the adjoint action of $G_1 \times H_2$ and the trivial action of $H$ on $\Fg_1 \times X'_0$. By a standard argument, one can check \par\medskip\noindent (5.11.3) \ The map $q$ is a principal bundle with fibre isomorphic to $G_1 \times H_2$. The map $\s$ is a locally trivial fibration with smooth, connected fibre of dimension $\dim H$. \remark{5.12.} The variety $\SG_k$ introduced here is of a different type from the variety $\SG_k$ discussed in [SS, 4.5]. The discussion below has some similarity with the discussion in [Sh, Section 2]. \para{5.13.} Let $B_1$ be the Borel subgroup of $G_1$ which is the stabilizer of the flag $(M_i)_{0 \le i \le k}$ in $G_1$, and $B_2$ the Borel subgroup of $H_2$ which is the stabilizer of the flag $(M_{k + i}/M_k)_{k \le i \le n}$ in $H_2$. Put \begin{equation*} \wt\Fg_1 = \{ (x, gB_1) \in \Fg_1 \times G_1/B_1 \mid g\iv x \in \Lie B_1 \}, \end{equation*} and define $\pi^1: \wt\Fg_1 \to \Fg_1$ by $(x,gB_1) \mapsto x$.
We define $\pi^2: \wt X' \to X' = \Fh_2$ similarly to $\pi : \wt X \to X$, by replacing $H$ by $H_2$. We put $\wt X'_0 = \wt X'^+_{0} = (\pi^2)\iv(X'_{0})$, and let $\pi^2_{0}$ be the restriction of $\pi^2$ on $\wt X'_{0}$. We define a variety \begin{align*} \tag{5.13.1} \wt Z^+_k = \{ (x, gB, &\f_1, \f_2) \mid (x, gB) \in \wt X^+_k, \\ &\f_1 : W_x \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, M_k, \f_2 : W_x^{\perp}/W_x \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ol M_k \} \end{align*} and define a map $\wt q : \wt Z^+_k \to \wt X^+_k$ by the natural projection. We define a map $\wt\s : \wt Z^+_k \to \wt\Fg_1 \times \wt X'^+_{0}$ as follows: take $(x,gB, \f_1, \f_2) \in \wt Z^+_k$. Let $s$ be the semisimple part of $x$. Then $B_s = Z_H(s) \cap gBg\iv$ is a Borel subgroup of $Z_H(s)$ such that $x \in \Lie B_s$. By 5.10, $W_x$ is a $B_s$-stable subspace of $V$, and is decomposed as $W_x = W_1 \oplus \cdots \oplus W_a$ according to the decomposition $V = V_1 \oplus \cdots \oplus V_a$ into eigenspaces of $s$. Then $\prod_i(GL(W_i) \cap B_s)$ is a Borel subgroup of $\prod_iGL(W_i)$, and there exists a unique Borel subgroup $B^1_x$ of $GL(W_x)$ containing it. We see that $x|_{W_x} \in \Lie B^1_x$. We denote by $g_1B_1 \in G_1/B_1$ the element corresponding to the Borel subgroup $\f_1(B^1_x)\f_1\iv$ of $G_1$. On the other hand, $W_i$ is $B_s$-stable, and we have a homomorphism $B_s \to Sp(W_i^{\perp}/W_i)$. If we denote by $B_{s,i}$ the image of this map, then $\prod_i B_{s,i}$ is a Borel subgroup of $\prod_iSp(W_i^{\perp}/W_i)$, and there exists a unique Borel subgroup $B'_x$ of $Sp(W_x^{\perp}/W_x)$ containing it. We see that $x|_{W_x^{\perp}/W_x} \in \Lie B'_x$. We denote by $g_2B_2 \in H_2/B_2$ the element corresponding to the Borel subgroup $\f_2(B'_x)\f_2\iv$ of $H_2$. We now define $\wt\s : \wt Z^+_k \to \wt \Fg_1 \times \wt X'_0$ by \begin{equation*} \wt\s : (x, gB, \f_1, \f_2) \mapsto ((\f_1(x|_{W_x})\f_1\iv, g_1B_1), (\f_2(x|_{W_x^{\perp}/W_x})\f_2\iv, g_2B_2)).
\end{equation*} \par We define a map $\wt\pi_k : \wt Z^+_k \to \SG_k$ by $(x, gB, \f_1, \f_2) \mapsto (x, \f_1, \f_2)$. Then we have the following commutative diagram extending (5.11.2). \begin{equation*} \tag{5.13.2} \begin{CD} \wt \Fg_1 \times \wt X'_{0} @<\wt\s << \wt Z^+_k @>\wt q>> \wt X^+_k \\ @V\pi^1 \times \pi^2_{0}VV @VV\wt\pi_kV @VV\pi_kV \\ \Fg_1 \times X'_{0} @<\s<< \SG_k @>q>> X^0_k. \end{CD} \end{equation*} \para{5.14.} Let $\Fg_{1,\rg}$ be the set of regular semisimple elements in $\Fg_1$. Let $\psi^1$ be the restriction of $\pi^1: \wt\Fg_1 \to \Fg_1$ to $(\pi^1)\iv (\Fg_{1,\rg})$. Then $\psi^1$ is a finite Galois covering with Galois group $S_k$, and $(\psi^1)_!\Ql$ is decomposed as \begin{equation*} (\psi^1)_!\Ql \simeq \bigoplus_{\r_1 \in S_k\wg}\r_1 \otimes \SL^1_{\r_1}, \end{equation*} where $\SL^1_{\r_1}$ is a simple local system on $\Fg_{1,\rg}$ corresponding to $\r_1$. $\Fg_{1,\rg}$ is an open dense subset of $\Fg_1$, and it is well-known that $(\pi^1)_!\Ql[\dim \Fg_1]$ is a semisimple perverse sheaf, equipped with $S_k$-action, and is decomposed as \begin{equation*} \tag{5.14.1} (\pi^1)_!\Ql[\dim \Fg_1] \simeq \bigoplus_{\r_1 \in S_k\wg} \r_1\otimes \IC(\Fg_1, \SL^1_{\r_1})[\dim \Fg_1]. \end{equation*} We put $A_{\r_1} = \IC(\Fg_1, \SL^1_{\r_1})[\dim \Fg_1]$. \par On the other hand, the varieties $Y'^0_{i}, \wt Y'^+_{i}$ and the map $\psi^2_{i} : \wt Y'^+_{i} \to Y'^0_{i}$ are defined similarly to $Y^0_i, \wt Y^+_i$ and $\psi_i : \wt Y^+_i \to Y_i^0$, by replacing $H$ by $H_2$. In particular, in the case where $i = 0$, we have, by (2.10.5), \begin{equation*} \tag{5.14.2} (\psi^2_{0})_!\Ql \simeq \bigoplus_{\r_2 \in S_{n-k}\wg} H^{\bullet}(\BP_1^{n-k})\otimes \r_2 \otimes \SL^2_{\r_2}, \end{equation*} where $\SL^2_{\r_2}$ is a simple local system on $Y'_0 = Y'^0_{0}$ corresponding to $\r_2$.
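For orientation, we recall the smallest interesting instance of (5.14.1) (a well-known fact, stated here only as an illustration): for $k = 2$, $\psi^1$ is the double covering of $\Fg_{1,\rg}$ obtained by ordering the two eigenvalues, $S_2\wg$ consists of the trivial character and the sign character $\mathrm{sgn}$, and (5.14.1) reads

```latex
\begin{equation*}
(\pi^1)_!\Ql[\dim \Fg_1] \simeq
\IC(\Fg_1, \Ql)[\dim \Fg_1] \oplus
\mathrm{sgn} \otimes \IC(\Fg_1, \SL^1_{\mathrm{sgn}})[\dim \Fg_1],
\end{equation*}
```

where $\SL^1_{\mathrm{sgn}}$ is the rank-one local system on $\Fg_{1,\rg}$ attached to the sign character.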
Since $Y'_{0}$ is an open dense smooth subset of $X'_{0}$, we can consider the intersection cohomology $A_{\r_2} = \IC(X'_{0}, \SL^2_{\r_2})[\dim X'_{0}]$ on $X'_{0}$. \par Now $A_{\r_1}\boxtimes A_{\r_2}$ is a $(G_1 \times H_2)$-equivariant simple perverse sheaf on $\Fg_1 \times X'_{0}$. By (5.11.3), there exists a unique simple perverse sheaf $A_{\r}$ on $X^0_k$ such that \begin{equation*} \tag{5.14.3} q^*A_{\r}[\b_2] \simeq \s^*(A_{\r_1} \boxtimes A_{\r_2})[\b_1], \end{equation*} where $\b_1 = \dim H$ and $\b_2 = \dim (G_1 \times H_2)$. (Here we put $\r = \r_1\boxtimes \r_2 \in (S_k \times S_{n-k})\wg$.) \par We have the following lemma. \begin{lem} Let $\SL_{\r}$ be the simple local system on $Y_k^0$ given in (2.10.5). Then we have \begin{equation*} A_{\r} \simeq \IC(X^0_k, \SL_{\r})[d_k]. \end{equation*} \end{lem} \begin{proof} Since $A_{\r}$ is a simple perverse sheaf on $X^0_k$, in order to prove the lemma, it is enough to see that \begin{equation*} \tag{5.15.1} \SH^{-d_k}A_{\r}|_{Y^0_k} \simeq \SL_{\r}. \end{equation*} \par We consider, for $I = [1,k], I' = \emptyset$, the following commutative diagram. \begin{equation*} \tag{5.15.2} \begin{CD} \wt\Fg_{1,\rg} \times \wt Y'_{I'} @<\wt \s_0<< \wt Z^0_I @>\wt q_0>> \wt Y_I \\ @V\xi^1 \times \xi^2_{I'}VV @VV\wt\xi_IV @VV\xi_IV \\ (G_1/T_1 \times \Ft_{1,\rg}) \times \wh Y'_{I'} @<\wh\s_0<< \wh Z_I @>\wh q_0>> \wh Y_I \\ @V\eta^1 \times \eta^2_{I'}VV @VV\wt\eta_IV @VV\eta_IV \\ \Fg_{1,\rg} \times Y'_{0} @<\s_0<< \SG_{k,\sr} @>q_0>> Y^0_k, \end{CD} \end{equation*} where $\SG_{k,\sr} = q\iv(Y^0_k), \wt Z^0_I = \wt q\iv(\wt Y_I)$, and $\wh Z_I$ is the quotient of $\wt Z^0_I$ by the natural action of the group $Z_H(\Ft_{\sr})_I/(Z_H(\Ft_{\sr}) \cap B)$. The maps $\wt q_0, q_0, \wt\s_0, \s_0$ are defined as the restriction of the corresponding maps $\wt q, q, \wt\s, \s$. \par Now the map $\eta^1 \times \eta^2_{I'}$ is a finite Galois covering with Galois group $S_k \times S_{n-k}$.
Since the bottom squares in the diagram (5.15.2) are cartesian, this Galois covering is compatible with the Galois covering $\eta_I$. Hence for any $\r_1 \in S_k\wg, \r_2 \in S_{n-k}\wg$, we have \begin{equation*} \s^*(\SL^1_{\r_1} \boxtimes \SL^2_{\r_2}) \simeq q^*\SL_{\r} \end{equation*} for $\r = \r_1\boxtimes\r_2 \in (S_k \times S_{n-k})\wg$. (5.15.1) follows from this. The lemma is proved. \end{proof} \par We can now state the following result. \begin{prop} $(\pi_k)_!\Ql$ is decomposed as \begin{equation*} (\pi_k)_!\Ql \simeq H^{\bullet}(\BP_1^{n-k}) \otimes \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh \r \otimes \IC(X^0_k, \SL_{\r}), \end{equation*} where $\wh \r$ is regarded as a vector space, ignoring the $W_n$-action. \end{prop} \para{5.17.} We prove Proposition 5.16 and Theorem 5.7 simultaneously, by induction on $n$. We assume that the theorem and the proposition hold for $n' < n$. First we show \begin{lem} Proposition 5.16 holds for $k \ne 0$. \end{lem} \begin{proof} For any $I \subset [1,n]$ with $|I| = k$, put $\wt Z_I = \wt q\iv(\wt X_I)$. We have the following commutative diagram \begin{equation*} \tag{5.18.1} \begin{CD} \wt\Fg_1 \times \wt X'_0 @<<< \wt Z_I @>>> \wt X_I \\ @V\pi^1 \times \pi^2_0VV @VVV @VV\pi_IV \\ \Fg_1 \times X'_0 @<\s<< \SG_k @>q>> X^0_k, \end{CD} \end{equation*} where $\pi^1, \pi^2_0$ are as in (5.13.2). $\pi_I$ is the restriction of $\pi_k : \wt X^+_k \to X_k^0$. Since $\wt X_I$ is closed in $\wt X^+_k$, $\pi_I$ is proper. We note that both squares are cartesian squares. \par We show the following. \par\medskip\noindent (5.18.2) \ Any simple summand (up to shift) of the semisimple complex $(\pi_I)_!\Ql$ is contained in the set $\{ A_{\r} \mid \r \in (S_k \times S_{n-k})\wg \}$. \par\medskip Put $K_1 = (\pi^1)_!\Ql$ and $K_2 = (\pi^2_0)_!\Ql$.
We have \begin{align*} K_1 &\simeq \bigoplus_{\r_1 \in S_k\wg}\r_1\otimes \IC(\Fg_1, \SL^1_{\r_1}), \\ K_2 &\simeq H^{\bullet}(\BP_1^{n-k})\otimes \bigoplus_{\r_2 \in S_{n-k}\wg} \wh\r_2 \otimes \IC(X'_0, \SL^2_{\r_2}). \end{align*} In fact, the first formula follows from (5.14.1), the second formula follows from Proposition 5.16, by applying the induction hypothesis to the case $n' = n - k < n$ and $k' = 0$. Since both squares in (5.18.1) are cartesian, we have $\s^*(K_1 \boxtimes K_2) \simeq q^*(\pi_I)_!\Ql$, up to shift. Then (5.18.2) follows from (5.14.3). \par Now by Lemma 5.9, we have $(\pi_k)_!\Ql \simeq \bigoplus_{I}(\pi_I)_!\Ql$. Hence (5.18.2) implies, by Lemma 5.15, that \par\medskip\noindent (5.18.3) \ Any simple summand (up to shift) of the semisimple complex $(\pi_k)_!\Ql$ is contained in the set $\{ \IC(X^0_k, \SL_{\r}) \mid \r \in (S_k \times S_{n-k})\wg\}$. \par\medskip (5.18.3) implies, in particular, that any simple summand of $K = (\pi_k)_!\Ql$ has its support $X_k^0$. Since the restriction of $K$ on $Y^0_k$ coincides with $K_0 = (\psi_k)_!\Ql$, the decomposition of $K$ into simple summands is determined by the decomposition of $K_0$. Hence the lemma follows from (2.10.5). \end{proof} \para{5.19.} We consider the semisimple complex $(\pi_0)_!\Ql$ for $\pi_0 : \wt X_0 \to X_0 = \ol\Fh_{ss}$. In this case, the induction hypothesis can not be applied. But $(\pi_0)_!\Ql|_{Y_0} \simeq (\psi_0)_!\Ql$, and by (2.10.5) we have \begin{equation*} (\psi_0)_!\Ql \simeq H^{\bullet}(\BP_1^n)\otimes \bigoplus_{\r \in S_n\wg}\wh\r \otimes \SL_{\r}, \end{equation*} by ignoring the $W _n$-module structure. It follows that $(\pi_0)_!\Ql$ can be written as \begin{equation*} \tag{5.19.1} (\pi_0)_!\Ql \simeq H^{\bullet}(\BP_1^n)\otimes\bigoplus_{\r \in S_n\wg} \wh\r \otimes \IC(X_0, \SL_{\r}) + \SN_0. \end{equation*} Here $\SN_0$ is a sum of various complexes of the form $A[i]$, where $A$ is a simple perverse sheaf such that $\dim\supp A < \dim X_0$. 
\par For each $0 \le m \le n$, let $\ol\pi_m$ be the restriction of $\pi$ on $\pi\iv(X_m)$. The following formula can be proved by a similar argument as in the proof of (2.13.3), by using Lemma 5.18 and (5.19.1) instead of (2.10.5). \begin{align*} \tag{5.19.2} (&\ol\pi_m)_!\Ql[d_m] \\ &\simeq \bigoplus_{0 \le k \le n} \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh\r \otimes \IC(X_k, \SL_{\r})[d_m - 2(n-k)] + \SM_m + \SN_0, \end{align*} where $\SM_m$ is a sum of various $\IC(X_k, \SL_{\r})[d_m -2i]$ for $0 \le k \le m$ and $\r \in (S_k \times S_{n-k})\wg$ with $i < n - k$. \par\medskip Note that, if $k > 0$, then all the simple perverse sheaves $A$ appearing in the decomposition of $(\pi_k)_!\Ql$ (up to shift) have support $X_k$ by Lemma 5.18. This is also true for a simple perverse sheaf $A$ appearing in the first term of $(\pi_0)_!\Ql$ in (5.19.1). By Lemma 2.9, we have $\dim X_k \ge \dim X_0$ for any $k$. Hence the above perverse sheaves $A$ have the property that $\dim\supp A \ge \dim X_0$. Since any perverse sheaf $A'$ appearing in $\SN_0$ has the property that $\dim\supp A' < \dim X_0$, there is no interaction between $\SN_0$ and other parts in the computation of $(\ol\pi_m)_!\Ql$. Thus $\SN_0$ appears in (5.19.2) without change (up to shift). \par We consider the case where $m = n$. In this case, $(\ol\pi_n)_!\Ql[d_n] = \pi_!\Ql[d]$ is a semisimple perverse sheaf by Lemma 4.3. This implies that $\SM_n = 0$, and we have \begin{equation*} \tag{5.19.3} \pi_!\Ql[d] \simeq \bigoplus_{0 \le k \le n} \bigoplus_{\r \in (S_k \times S_{n-k})\wg} \wh\r \otimes \IC(X_k, \SL_{\r})[d_k] + \SN_0. \end{equation*} \par We now apply Proposition 4.11. Since $\sum_{\wh\r \in W_n\wg}(\dim \wh\r)^2 = |W _n |$, we must have $\SN_0 = 0$. This proves Proposition 5.16 in the case where $k = 0$. Hence Proposition 5.16 holds for any $k$ by Lemma 5.18. The theorem now follows from (5.19.3). It remains to consider the case where $n = 1$. 
But in this case, $X_0 = \Ft$ coincides with the center of $\Fh = \Fs\Fl_2$, and the proposition is easily verified. This completes the proof of Theorem 5.7 and Proposition 5.16. \par\medskip As a corollary to Theorem 5.7, we have the following. \begin{cor} Assume that $\r \in (S_k \times S_{n-k})\wg$, and let $\wh\r \in W_n\wg$ be as in (2.12.1). Then \begin{align*} \tag{5.20.1} \IC(\wt\Fh, \SL\flt_{\wh\r})|_{\Fh} &\simeq \IC(X_k, \SL_{\r}) \quad \text{\rm (up to shift)}, \\ \tag{5.20.2} \IC(X_k, \SL_{\r})|_{\Fh\nil} &\simeq \IC(\ol\SO_{\wh\r}, \Ql) \quad \text{\rm (up to shift)}. \end{align*} \end{cor} \begin{proof} (5.20.1) is obtained by comparing Theorem 5.7 with Proposition 3.4. By comparing Proposition 3.4 and Theorem 3.6, we have $\IC(\wt\Fh, \SL\flt_{\wh\r})|_{\Fh\nil} \simeq \IC(\ol\SO_{\wh\r}, \Ql)$, up to shift. Hence by (5.20.1), \begin{equation*} \IC(X_k, \SL_{\r})|_{\Fh\nil} \simeq \IC(\wt\Fh, \SL\flt_{\wh\r})|_{\Fh\nil} \simeq \IC(\ol\SO_{\wh\r}, \Ql), \end{equation*} up to shift. Thus (5.20.2) holds. \end{proof} \par\bigskip \section{Intersection cohomology on $\Fg^{\th}$} \para{6.1.} From this section until the end of this paper, we discuss $G^{\io\th}$ with $N$ odd. So assume that $N = 2n+1$, and $G = GL_N$. We follow the notation in 1.10. We have $G^{\th} \simeq SO(V') \simeq Sp(V)$, and $G^{\io\th} \simeq \Fg^{\th}$ by Propositions 1.11 and 1.13. Put $H = Sp(V)$ and $\Fh = \Fs\Fp(V)$. As in 1.14, $\Fg^{\th} = \Fh \oplus \Fg_{V'} = \Fh \oplus \Fg_V \oplus \Fz$. $H$ acts trivially on $\Fz \simeq \Bk$, and the action of $H$ on $\Fh\oplus\Fg_V$ can be identified with the diagonal action of $H$ on $\Fh \times V$. Moreover, $G^{\io\th}\uni \simeq \Fg^{\th}\nil = (\Fh\oplus \Fg_V)\nil$. Hence considering $G^{\io\th}$ with $G^{\th}$-action is essentially the same as considering the variety $\Fh \times V$ with diagonal action of $H$ (but see Remark 1.15). For $H$ and $\Fh$, we use the same notation as in the previous sections.
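As a quick dimension check of this setup (merely unpacking the decomposition just stated, using $\Fg_V \simeq V$ as an $H$-module), we have

```latex
\begin{equation*}
\dim \Fg^{\th} = \dim \Fh + \dim V + \dim \Fz
= n(2n+1) + 2n + 1 = (n+1)(2n+1).
\end{equation*}
```

For instance, for $n = 1$ this gives $\dim \Fg^{\th} = 6$, with $H = Sp(V) \simeq SL_2$ acting diagonally on $\Fh \times V \simeq \Fs\Fp_2 \times \Bk^2$.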
\para{6.2.} Put \begin{equation*} \SQ_{n,3} = \{ \Bm = (m_1, m_2, m_3) \in \BZ^3_{\ge 0}\mid \sum m_i = n\}. \end{equation*} Recall the definition of $\FN_k$ in (5.4.1) and $\FN_{k,\sr}$ in 2.4. For $i = 1, \dots, n$, let $M_i$ be the subspace of $V$ spanned by $e_1, \dots, e_i$. For $\Bm \in \SQ_{n,3}$, we define varieties \begin{align*} \wt\SX_{\Bm} &= \{ (x, v, gB) \in \Fh \times V \times H/B \mid g \iv x \in \ol\FN_{p_2}, g\iv v \in M_{p_1} \} \\ \SX_{\Bm} &= \bigcup_{g \in H}g(\ol\FN_{p_2} \times M_{p_1}), \end{align*} where we put $p_1 = m_1, p_2 = m_1 + m_2$. We define a map $\pi^{(\Bm)} : \wt\SX_{\Bm} \to \SX_{\Bm}$ by $(x,v, gB) \mapsto (x,v)$. $\pi^{(\Bm)}$ is a proper surjective map. Since \begin{equation*} \wt \SX_{\Bm} \simeq H \times^B(\ol\FN_{p_2} \times M_{p_1}), \end{equation*} $\wt\SX_{\Bm}$ is smooth, irreducible by Lemma 5.5. We also define varieties \begin{align*} \wt\SY_{\Bm} &= \{ (x, v, gB) \in \Fh \times V \times H/B \mid g\iv x \in \FN_{p_2, \sr}, g\iv v \in M_{p_1} \}, \\ \SY_{\Bm} &= \bigcup_{g \in H}g(\FN_{p_2,\sr} \times M_{p_1}), \end{align*} and a map $\psi^{(\Bm)} : \wt\SY_{\Bm} \to \SY_{\Bm}$ by $(x,v, gB) \mapsto (x,v)$. In the case where $\Bm = (n,0,0)$, we write $\wt\SX_{\Bm}, \SX_{\Bm}, \pi^{(\Bm)}$ and $\wt\SY_{\Bm}, \SY_{\Bm},\psi^{(\Bm)}$ simply as $\wt\SX, \SX, \pi$ and $\wt\SY, \SY, \psi$. Note that $\SX = \bigcup_{g \in H}g(\Fb \times M_n)$ is a closed subset of $\Fh \times V$, and $\SX_{\Bm}$ is a closed subset of $\SX$ for any $\Bm$. Also note that in the case where $m_1 = 0$, $\SX_{\Bm}, \SY_{\Bm}$, etc. coincide with $X_{m_2}, Y_{m_2}$, etc. in previous sections. \par As in (2.6.1), one can write $\wt\SY_{\Bm}$ as \begin{align*} \wt \SY_{\Bm} &\simeq H\times^B(\FN_{p_2,\sr} \times M_{p_1}) \\ &\simeq H \times^{B \cap Z_H(\Ft_{\sr})} \bigl((\Ft_{\sr} + \FD_{p_2}) \times M_{p_1}\bigr).
\end{align*} \par For each $I \subset [1,n]$, let $M_I$ be the subset of $M_n$ consisting of $v = \sum_{i \in I} c_ie_i$ with $c_i \ne 0$. For $\Bm \in \SQ_{n,3}$, we define $\SI(\Bm)$ as the set of $\BI = (I_1, I_2, I_3)$ such that $[1,n] = \coprod_{1 \le i \le 3} I_i$ with $|I_i| = m_i$. For $\BI \in \SI(\Bm)$, put $\FD_{\BI} = \FD_{I_2} + \ol \FD_{I_1}$, where $\ol\FD_{I_1}$ is the closure of $\FD_{I_1}$ in $\FD$. We define a variety \begin{align*} \tag{6.2.1} \wt\SY_{\BI} = H \times^{B \cap Z_H(\Ft_{\sr})} \bigl((\Ft_{\sr} + \FD_{\BI}) \times M_{I_1}\bigr). \end{align*} Since the actions of $B \cap Z_H(\Ft_{\sr})$ on $\FD$ and on $M_n$ are given by the actions of the torus part $T$, $(\Ft_{\sr} + \FD_{\BI}) \times M_{I_1}$ is $B \cap Z_H(\Ft_{\sr})$-stable. Hence $\wt \SY_{\BI}$ is well-defined. Let $\psi_{\BI} : \wt\SY_{\BI} \to \SY$ be the map defined by $g*(x, v) \mapsto (gx, gv)$, where $g \in H, (x,v) \in (\Ft_{\sr} + \FD_{\BI}) \times M_{I_1}$. Then $\Im \psi_{\BI}$ is independent of the choice of $\BI \in \SI(\Bm)$, which we denote by $\SY^0_{\Bm}$. We have, for any $\BI \in \SI(\Bm)$, \begin{equation*} \SY^0_{\Bm} = \bigcup_{g \in H}g\bigl((\Ft_{\sr}+ \FD_{\BI}) \times M_{I_1}\bigr). \end{equation*} \par For $\BI \in \SI(\Bm)$, we define a parabolic subgroup $Z_H(\Ft_{\sr})_{\BI}$ by the condition that the $i$-th factor is $SL_2$ if $i \in I_3$ and is $B_2$ otherwise (a generalization of $Z_H(\Ft_{\sr})_I$ in 2.7). Since $Z_H(\Ft_{\sr})_{\BI}$ stabilizes $\FD_{\BI}$ and $M_{I_1}$, one can define \begin{equation*} \tag{6.2.2} \wh\SY_{\BI} = H \times^{Z_H(\Ft_{\sr})_{\BI}} \bigl((\Ft_{\sr} + \FD_{\BI}) \times M_{I_1}\bigr). \end{equation*} The map $\psi_{\BI}$ factors through $\wh\SY_{\BI}$, \begin{equation*} \begin{CD} \psi_{\BI} : \wt\SY_{\BI} @>\xi_{\BI}>> \wh\SY_{\BI} @>\eta_{\BI}>> \SY^0_{\Bm}, \end{CD} \end{equation*} where $\xi_{\BI}$ is the natural projection, and $\eta_{\BI}$ is the map defined by $g*(x,v) \mapsto (gx, gv)$.
$\xi_{\BI}$ is a locally trivial fibration with fibre isomorphic to \begin{equation*} Z_H(\Ft_{\sr})_{\BI}/(B \cap Z_H(\Ft_{\sr})) \simeq (SL_2/B_2)^{I_3} \simeq \BP_1^{I_3}. \end{equation*} \par Let $S_{\BI}\simeq S_{I_1}\times S_{I_2} \times S_{I_3}$ be the stabilizer of $\BI = (I_1, I_2, I_3)$ in $S_n$. Now $N_H(\Ft_{\sr})/Z_H(\Ft_{\sr}) \simeq S_n$, and $S_n$ acts on $Z_H(\Ft_{\sr}) \simeq SL_2 \times \cdots \times SL_2$ as the permutation of factors. Since $S_{\BI}$ stabilizes $M_{I_1}$ and $\FD_{\BI}$, $S_{\BI}$ acts on $\wt\SY_{\BI}$ and on $\wh\SY_{\BI}$. Now the map $\eta_{\BI}$ is a finite Galois covering with Galois group $S_{\BI}$. \par For each $\Bm$, we define $\BI(\Bm) = ([1, p_1], [p_1 +1, p_2], [p_2 + 1, n])$, and put $\wt\SY_{\BI(\Bm)} = \wt\SY^0_{\Bm}$, $S_{\BI(\Bm)} = S_{\Bm}$. Note that $\wt\SY^0_{\Bm}$ is an open dense subset of $\wt\SY_{\Bm}$, hence irreducible. Put $\psi\iv(\SY^0_{\Bm}) = \wt\SY^+_{\Bm}$. $S_n$ acts naturally on $\wt\SY$, and the map $\psi$ is $S_n$-equivariant with respect to the trivial action of $S_n$ on $\SY$. Hence it preserves the subset $\wt\SY^+_{\Bm}$ of $\wt\SY$, and the stabilizer of $\wt\SY^0_{\Bm}$ in $S_n$ coincides with $S_{\Bm}$. One can check that \begin{equation*} \tag{6.2.3} \wt\SY^+_{\Bm} = \coprod_{\BI \in \SI(\Bm)}\wt\SY_{\BI} = \coprod_{w \in S_n/S_{\Bm}}w(\wt\SY^0_{\Bm}), \end{equation*} where $\wt\SY_{\BI}$ is an irreducible component, hence is a connected component. \par As in [Sh, 1.3], we define a partial order on $\SQ_{n,3}$ by $\Bm' \le \Bm$ if $p_i' \le p_i$ for $i = 1,2$, where $(p_1',p_2')$ are defined for $\Bm'$ similarly to $(p_1,p_2)$ for $\Bm$. Then $\SY_{\Bm'} \subset \SY_{\Bm}$ and $\SX_{\Bm'} \subset \SX_{\Bm}$ if $\Bm' \le \Bm$. One can check that \begin{equation*} \tag{6.2.4} \SY^0_{\Bm} = \SY_{\Bm} - \bigcup_{\Bm' < \Bm}\SY_{\Bm'}. \end{equation*} Thus $\SY^0_{\Bm}$ is an open dense subset of $\SY_{\Bm}$, and we have a partition $\SY_{\Bm} = \coprod_{\Bm'\le \Bm}\SY^0_{\Bm'}$.
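For example, for $n = 2$ this partial order is not total: writing $(p_1, p_2) = (m_1, m_1 + m_2)$ for each $\Bm$, we have

```latex
\begin{gather*}
(0,1,1) < (0,2,0), \quad \text{since } (p_1', p_2') = (0,1) \le (0,2) = (p_1, p_2), \\
(1,0,1) \not\le (0,2,0) \quad\text{and}\quad (0,2,0) \not\le (1,0,1),
\quad \text{since } (1,1) \text{ and } (0,2) \text{ are incomparable.}
\end{gather*}
```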
It follows that $\SY_{\Bm'} \subseteq \SY_{\Bm}$ if and only if $\Bm' \le \Bm$. Also we have $\SY = \coprod_{\Bm \in \SQ_{n,3}}\SY^0_{\Bm}$. \par The following lemma can be proved in a similar way as Lemma 1.4 in [Sh] (the special case where $r = 3$). \begin{lem} Let $\Bm \in \SQ_{n,3}$. \begin{enumerate} \item $\SY_{\Bm}$ is open dense in $\SX_{\Bm}$, and $\wt\SY_{\Bm}$ is open dense in $\wt\SX_{\Bm}$. \item $\dim \wt\SX_{\Bm} = \dim\wt\SY_{\Bm} = 2n^2 + 2m_1 + m_2$. \item $\dim \SX_{\Bm} = \dim \SY_{\Bm} = 2n^2 + 2m_1 + m_2 - m_3$. \item $\SY = \coprod_{\Bm \in \SQ_{n,3}}\SY^0_{\Bm}$ gives a stratification of $\SY$ by smooth strata $\SY^0_{\Bm}$, and the map $\psi : \wt\SY \to \SY$ is semismall with respect to this stratification. \end{enumerate} \end{lem} \para{6.4.} Let $\psi_{\Bm} : \wt\SY^+_{\Bm} \to \SY^0_{\Bm}$ be the restriction of $\psi$ on $\wt\SY^+_{\Bm}$. Then $\psi_{\Bm}$ is $S_n$-equivariant with respect to the natural action of $S_n$ on $\wt\SY^+_{\Bm}$ and the trivial action of $S_n$ on $\SY^0_{\Bm}$. By (6.2.3), we have \begin{equation*} \tag{6.4.1} (\psi_{\Bm})_!\Ql \simeq \bigoplus_{\BI \in \SI(\Bm)}(\psi_{\BI})_!\Ql. \end{equation*} Since $\eta_{\BI} : \wh\SY_{\BI} \to \SY^0_{\Bm}$ is a finite Galois covering with Galois group $S_{\BI}$, $(\eta_{\BI})_!\Ql$ is a semisimple local system on $\SY^0_{\Bm}$, and is decomposed as \begin{equation*} \tag{6.4.2} (\eta_{\BI})_!\Ql \simeq \bigoplus_{\r \in S_{\BI}\wg}\r \otimes \SL_{\r}, \end{equation*} where $\SL_{\r} = \Hom (\r, (\eta_{\BI})_!\Ql)$ is a simple local system on $\SY^0_{\Bm}$. \par Now by a similar argument as in 2.10, we have \begin{equation*} \tag{6.4.3} (\psi_{\BI})_!\Ql \simeq (\eta_{\BI})_!(\xi_{\BI})_!\Ql \simeq H^{\bullet}(\BP_1^{I_3})\otimes (\eta_{\BI})_!\Ql. \end{equation*} For a positive integer $r$, let $W_{n,r} = S_n \ltimes (\BZ/r\BZ)^n$ be the complex reflection group. Hereafter we assume that $r = 3$, and consider $W_{n,r} = W_{n,3}$. 
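For orientation, we recall two classical facts about the complex reflection group $W_{n,r} = G(r,1,n)$ (standard facts, not taken from this paper and not used directly below):

```latex
\begin{equation*}
|W_{n,r}| = r^n\, n!, \qquad
W_{n,r}\wg \longleftrightarrow
\Bigl\{ (\lambda^{(1)}, \dots, \lambda^{(r)}) \Bigm| \sum_{i=1}^r |\lambda^{(i)}| = n \Bigr\},
\end{equation*}
```

the right-hand side being the set of $r$-tuples of partitions of total size $n$. In particular, $W_{1,3} \simeq \BZ/3\BZ$ has exactly three irreducible characters, all linear.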
Since $S_{\Bm}$ is a subgroup of $S_n$, we can consider $W_{\Bm, r} = S_{\Bm}\ltimes (\BZ/r\BZ)^n$ as a subgroup of $W_{n,r}$. Let $\z$ be a primitive $r$-th root of unity in $\Ql$, and define a linear character $\tau_i : \BZ/r\BZ \to \Ql^*$ by $\tau_i(a) = \z^{i-1}$, where $a$ is a fixed generator of $\BZ/r\BZ$. Let $\r \in S_{\Bm}\wg$. Since $S_{\Bm} \simeq \prod_iS_{m_i}$, $\r$ is written as $\r = \r_1\boxtimes \r_2 \boxtimes \r_3$, with $\r_i \in S_{m_i}\wg$. Here $W_{\Bm,r} \simeq \prod_iW_{m_i,r}$. We extend the irreducible $S_{m_i}$-module $\r_i$ to an irreducible $W_{m_i,r}$-module $\wt\r_i$ by defining the action of $(\BZ/r\BZ)^{m_i}$ via $\tau_i^{\otimes m_i}$, and put $\wt\r = \wt\r_1\boxtimes\wt\r_2\boxtimes\wt\r_3 \in W_{\Bm,r}\wg$. We also define $\wt\r' = \wt\r_1\boxtimes\wt\r_2\boxtimes\wt\r_3' \in W_{\Bm,r}\wg$, where $\wt\r_3'$ is the trivial extension of $\r_3$ to $W_{m_3,r}$. Put $\wh\r = \Ind_{W_{\Bm,r}}^{W_{n,r}}\wt\r$. Then $\wh\r$ is an irreducible $W_{n,r}$-module, and any irreducible representation of $W_{n,r}$ is obtained in this way from $\r \in S_{\Bm}\wg$ for various $\Bm$. \par In view of (6.4.1), (6.4.2) and (6.4.3), similarly to the discussion in 2.10, $(\psi_{\Bm})_!\Ql$ can be written as \begin{equation*} \tag{6.4.4} (\psi_{\Bm})_!\Ql \simeq \bigoplus_{\r \in S_{\Bm}\wg} \Ind_{S_{\Bm}}^{S_n}(H^{\bullet}(\BP_1^{m_3}) \otimes \r) \otimes \SL_{\r}. \end{equation*} We define an action of $\BZ/r\BZ$ on $H^{\bullet}(\BP_1) = H^2(\BP_1) \oplus H^0(\BP_1)$ by $\tau_r \oplus \tau_1$, and define an action of $(\BZ/r\BZ)^{m_r}$ on $H^{\bullet}(\BP_1^{I_r}) \simeq H^{\bullet}(\BP_1)^{\otimes m_r}$ as its tensor product. Thus we can extend $H^{\bullet}(\BP_1^{m_r})\otimes \r$ to a complex of $W_{\Bm,r}$-modules $H^{\bullet}(\BP_1^{m_r})\otimes \wt\r'$. 
Thus by (6.4.4), we can define a $W_{n,r}$-action on $(\psi_{\Bm})_!\Ql$, \begin{equation*} \tag{6.4.5} (\psi_{\Bm})_!\Ql \simeq \bigoplus_{\r \in S_{\Bm}\wg} \Ind_{W_{\Bm,r}}^{W_{n,r}}(H^{\bullet}(\BP_1^{m_3})\otimes \wt\r') \otimes \SL_{\r}. \end{equation*} \par Note that (6.4.5) can be rewritten as \begin{equation*} \tag{6.4.6} (\psi_{\Bm})_!\Ql \simeq \biggl(\bigoplus_{\r \in S_{\Bm}\wg}\wh\r \otimes \SL_{\r}\biggr) [-2m_3] + \SN_{\Bm}, \end{equation*} where $\SN_{\Bm}$ is a sum of various $\SL_{\r}[-2i]$ for $\r \in S_{\Bm}\wg$ with $0 \le i < m_3$. \para{6.5.} For each $\Bm \in \SQ_{n,3}$, let $\ol\psi_{\Bm}$ be the restriction of $\psi$ on $\psi\iv(\SY_{\Bm})$. Put $d_{\Bm} = \dim \SY_{\Bm}$. Let $\SQ_{n,3}^0$ be the set of $\Bm = (m_1, m_2, m_3) \in \SQ_{n,3}$ such that $m_3 = 0$. Take $\Bm = (m_1, m_2, 0) \in \SQ_{n,3}^0$. For $0 \le k \le m_2$, define $\Bm(k) \in \SQ_{n,3}$ by $\Bm(k) = (m_1, k, m_2 - k)$. The following result can be proved in a similar way as Proposition 1.7 in [Sh]. It is the special case $r = 3$ of [loc. cit.]. (See also the proof of Proposition 2.13.) \begin{prop} Assume that $\Bm \in \SQ^0_{n,3}$. Then $(\ol\psi_{\Bm})_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf on $\SY_{\Bm}$, equipped with $W_{n,3}$-action, and is decomposed as \begin{equation*} (\ol\psi_{\Bm})_!\Ql[d_{\Bm}] \simeq \bigoplus_{0 \le k \le m_2} \bigoplus_{\r \in S_{\Bm(k)}\wg} \wh\r \otimes \IC(\SY_{\Bm(k)}, \SL_{\r})[d_{\Bm(k)}]. \end{equation*} \end{prop} \para{6.7.} For $\Bm = (m_1, m_2, 0) \in \SQ^0_{n,3}$, let $W\nat_{\Bm}$ be the subgroup of $W_n$ defined by $W\nat_{\Bm} \simeq S_{m_1} \times W_{m_2}$, where $S_{m_1}$ is the group of permutations for $[1, m_1]$ and $W_{m_2} = W_{m_2,2}$ is the group of signed permutations for $[m_1+1, n]$. Let $\Bm(k) = (m_1,k, k')$ with $k + k' = m_2$.
For $\r = \r_1\boxtimes \r_2 \boxtimes \r_3 \in S_{\Bm(k)}\wg$, we define $\r\nat \in (W\nat_{\Bm})\wg$ by $\r\nat = \r_1\boxtimes \r_2'$, where $\r_2' \in W_{m_2}\wg$ is given by $\r_2' = \Ind_{W_{k} \times W_{k'}}^{W_{m_2}}(\wt\r_2 \boxtimes \wt\r_3)$. (Here $\wt\r_2$ is the trivial extension of $\r_2$ to $W_{k}$, and $\wt\r_3$ is the extension of $\r_3$ to $W_{k'}$ with non-trivial action of $\BZ/2\BZ$ for each factor.) \par Recall the map $\psi^{(\Bm)} : \wt\SY_{\Bm} \to \SY_{\Bm}$ given in 6.2. The following result is a variant of Proposition 6.6, and is proved by a similar argument (see also Proposition 3.5 in [Sh]). \begin{prop} Assume that $\Bm = (m_1, m_2, 0) \in \SQ^0_{n,3}$. Then $\psi^{(\Bm)}_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf on $\SY_{\Bm}$, equipped with $W\nat_{\Bm}$-action, and is decomposed as \begin{equation*} \psi^{(\Bm)}_!\Ql[d_{\Bm}] \simeq \bigoplus_{0 \le k \le m_2} \bigoplus_{\r \in S\wg_{\Bm(k)}} \r\nat\otimes \IC(\SY_{\Bm(k)}, \SL_{\r})[d_{\Bm(k)}]. \end{equation*} \end{prop} \para{6.9.} For each $\Bm \in \SQ_{n,3}$, we define the map $\ol\pi_{\Bm} : \pi\iv(\SX_{\Bm}) \to \SX_{\Bm}$ as the restriction of $\pi$ to $\pi\iv(\SX_{\Bm})$. Thus $\ol\pi_{\Bm}$ is a proper surjective map onto $\SX_{\Bm}$. The following result is a generalization of Theorem 5.7 (note that if $\Bm = (0, m_2, 0)$, this coincides with Theorem 5.7). \begin{thm} Assume that $\Bm \in \SQ_{n,3}^0$. Then $(\ol\pi_{\Bm})_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm}$, equipped with $W_{n,3}$-action, and is decomposed as \begin{equation*} (\ol\pi_{\Bm})_!\Ql[d_{\Bm}] \simeq \bigoplus_{0 \le k \le m_2}\bigoplus_{\r \in S_{\Bm(k)}\wg } \wh\r \otimes \IC(\SX_{\Bm(k)}, \SL_{\r})[d_{\Bm(k)}]. \end{equation*} \end{thm} \para{6.11.} The theorem can be proved in an almost parallel way as the proof of Theorem 2.2 in [Sh], the special case where $r = 3$, once formulated appropriately. Also the proof is quite similar to the proof of Lemma 5.18.
So in the following, we give an outline of the proof. \par For $(x, v) \in \SX$, we define an $x$-stable isotropic subspace $W = W(x,v)$ as follows: if $x$ is nilpotent, put $W = \Bk[x]v$. Since $(x,v) \in g(\Fn \times M_n)$ for some $g \in H$, $W$ is isotropic. For general $x = s + z$, where $s$ is semisimple and $z$ is nilpotent with $[s,z] = 0$, we consider the decomposition of $V$ into eigenspaces of $s$, $V = V_1 \oplus \cdots \oplus V_a$ as in 5.8. Then $Z_H(s) \simeq Sp(V_1) \times \cdots \times Sp(V_a)$, and $z \in \Fh\nil$ can be written as $z = \sum_{i= 1}^az_i$, where $z_i \in \Fs\Fp(V_i)\nil$. We write $v = \sum_{i=1}^av_i$ with $v_i \in V_i$. Then $(z_i, v_i) \in \SX^{(i)}$, where $\SX^{(i)}$ is defined similarly to $\SX$ by replacing $H$ by $Sp(V_i)$. We define a subspace $W_i = W(z_i,v_i)$ of $V_i$ as above, and put $W = \bigoplus_iW_i$. Then $W = W(x,v)$ satisfies the required properties. We consider $\SX_{\Bm}$ for $\Bm \in \SQ_{n,3}$. Note that \begin{equation*} \tag{6.11.1} \dim W(x,v) \le m_1 \text{ if } (x,v) \in \SX_{\Bm}. \end{equation*} It follows from the construction of $W$ that we have \par\medskip\noindent (6.11.2) \ $x|_W \in \Fg\Fl(W)$ is a regular element. \par\medskip Put $W = W(x,v)$ for $(x,v) \in \SX_{\Bm}$. Then $V' = W^{\perp}/W$ has a natural symplectic structure, and we define $H' = Sp(V'), \Fh' = \Lie H'$. $x$ induces an endomorphism $x'$ on $V'$, and we have $x' \in \Fh'$. Let $X'_k, {X'}^0_k, \FN'_k$, etc. be the varieties defined for $H'$, similarly to $X_k, X^0_k, \FN_k$, etc. defined for $H$. It is easy to see that if $(x,v) \in \ol\FN_{m_1 + m_2} \times M_{m_1}$ with $W(x,v) = M_{m_1}$, then $x' \in \ol\FN'_{m_2}$. It follows that \begin{equation*} \tag{6.11.3} x' \in X'_{m_2} \text{ if } (x,v) \in \SX_{\Bm} \text{ and } \dim W(x,v) = m_1. \end{equation*} \para{6.12.} Assume that $\Bm \in \SQ_{n,3}$.
As in the case of $\SY^0_{\Bm}$, we define an open subset $\SX^0_{\Bm}$ of $\SX_{\Bm}$ as \begin{equation*} \SX^0_{\Bm} = \SX_{\Bm} - \bigcup_{\Bm' < \Bm}\SX_{\Bm'}. \end{equation*} It follows from (6.11.1) and (6.11.3) that \begin{equation*} \tag{6.12.1} \SX^0_{\Bm} = \{ (x,v) \in \SX \mid \dim W(x,v) = m_1 \text{ and } x' \in {X'}^0_{m_2} \}. \end{equation*} We define $\wt\SX^+_{\Bm} = \pi\iv(\SX^0_{\Bm})$, and let $\pi_{\Bm} : \wt\SX^+_{\Bm} \to \SX^0_{\Bm}$ be the restriction of $\pi$ on $\wt\SX^+_{\Bm}$. To $(x,v, gB) \in \wt\SX^+_{\Bm}$, we associate $\BI \in \SI(\Bm)$ as follows: assume that $(x,v) \in \Fb \times M_n$, and let $x = s + z$ be the Jordan decomposition of $x \in \Fb$. By replacing $(x,v)$ by a $B$-conjugate, we may assume that $s \in \Ft, z \in \Fn$. Then the decomposition $V = V_1 \oplus \cdots \oplus V_a$ gives a decomposition $M_n = M_{n,1}\oplus\cdots\oplus M_{n,a}$. Here $M_n$ has a basis $\{ e_1, \dots, e_n\}$, and $M_{n,i}$ has a basis $\{ e_j \mid j \in J_i\}$, which gives a partition $[1,n] = \coprod_{1 \le i \le a}J_i$. $(x,v)$ determines $(z_i, v_i)$, and we put $q_i = \dim W(z_i,v_i)$. Clearly $q_i \le \dim M_{n,i}$, and we let $J_i' \subset J_i$ be the set of the first $q_i$ letters in $J_i$. We define $I_1 = \coprod_{1 \le i \le a}J_i'$. Since $\dim W(x,v) = m_1$, we have $|I_1| = m_1$ by (6.12.1). Here $x' \in {X'}^0_{m_2}$ again by (6.12.1). Thus by 5.8, one can associate to $x'$ a subset $I_2$ of $[m_1 + 1,n]$ such that $|I_2| = m_2$. We have $I_1 \cap I_2 = \emptyset$, and $\BI = (I_1, I_2, I_3)$ gives an element in $\SI(\Bm)$ ($I_3$ is the complement of $I_1 \cup I_2$). The assignment $(x, v) \mapsto \BI$ does not depend on the choice of the $B$-conjugate of $(x,v)$. Thus we obtain a well-defined map $(x,v, gB) \mapsto \BI$. \par We define a subvariety $\wt\SX_{\BI}$ of $\wt\SX^+_{\Bm}$ by \begin{equation*} \tag{6.12.2} \wt\SX_{\BI} = \{ (x,v, gB) \in \wt\SX^+_{\Bm} \mid (x,v, gB) \mapsto \BI \}.
\end{equation*} The following lemma can be proved in a similar way as Lemma 5.9 (see also Lemma 2.4 in [Sh]). \begin{lem} $\wt\SX^+_{\Bm}$ is decomposed as \begin{equation*} \wt\SX^+_{\Bm} = \coprod_{\BI \in \SI(\Bm)}\wt\SX_{\BI}. \end{equation*} \end{lem} \para{6.14.} We fix $\Bm = (m_1, m_2, m_3) \in \SQ_{n,3}$. We consider the space $V_0 = M_{m_1}$ and $\ol V_0 = V_0^{\perp}/V_0$. As in 5.11, we put $G_1 = GL(V_0), H_2 = Sp(\ol V_0)$. We consider $X'_{m_2} \subset X'$ with respect to $H_2$. Put $\Fg_1 = \Lie G_1$, and let $\Fg_1^0$ be the set of regular elements in $\Fg_1$. For $\xi = (x,v) \in \SX$, put $W_{\xi} = W(x,v)$. As a variant of (5.11.1) (see also [Sh, 2.5]), we define a variety \begin{align*} \tag{6.14.1} \CK_{\Bm} = \{ (x,&v, \f_1, \f_2) \mid \xi = (x,v) \in \SX^0_{\Bm}, \\ &\f_1 : W_{\xi} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, V_0, \f_2 : W_{\xi}^{\perp}/W_{\xi} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ol V_0 (\text{symplectic isom.})\} \end{align*} with morphisms \begin{equation*} \begin{aligned} q : &\CK_{\Bm} \to \SX^0_{\Bm}, &\quad &(x,v,\f_1, \f_2) \mapsto (x,v), \\ \s : &\CK_{\Bm} \to \Fg_1^0 \times {X'}^0_{m_2}, &\quad &(x,v, \f_1, \f_2) \mapsto (\f_1(x|_{W_{\xi}})\f_1\iv, \f_2(x|_{W_{\xi}^{\perp}/W_{\xi}})\f_2\iv). \end{aligned} \end{equation*} \par $H \times (G_1 \times H_2)$ acts on $\CK_{\Bm}$ by \begin{equation*} (g, (h_1, h_2)) : (x,v,\f_1, \f_2) \mapsto (gx, gv, h_1\f_1g\iv, h_2\f_2g\iv). \end{equation*} Moreover, $\s$ is $H \times (G_1 \times H_2)$-equivariant with respect to the adjoint action of $G_1 \times H_2$ and the trivial action of $H$ on $\Fg_1^0 \times {X'}^0_{m_2}$. Similarly to (5.11.3), we have the following. (The proof is similar to [Sh, 2.5].) \par\medskip\noindent (6.14.2) \ The map $q$ is a principal bundle with fibre isomorphic to $G_1 \times H_2$. The map $\s$ is a locally trivial fibration with smooth connected fibre of dimension $\dim H + m_1$.
\par\medskip We define a Borel subgroup $B_1$ of $G_1$ and $B_2$ of $H_2$ as in 5.13, by replacing $k$ by $m_1$, and define $\pi^1: \wt\Fg_1 \to \Fg_1$ similarly. Put $\wt\Fg^0_1 = (\pi^1)\iv(\Fg^0_1)$, and let $\vf^1 : \wt\Fg^0_1 \to \Fg^0_1$ be the restriction of $\pi^1$. We define $X'$ by using $H_2/B_2$, and let $\pi^2_{m_2} : {\wt X}'^+_{m_2} \to {X'}^0_{m_2}$ be the corresponding map. We define a variety \begin{align*} \wt\CZ^+_{\Bm} = \{ (\xi, gB, &\f_1, \f_2) \mid (\xi, gB) \in \wt\SX^+_{\Bm}, \\ &\f_1 : W_{\xi} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, V_0, \f_2 : W_{\xi}^{\perp}/W_{\xi} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ol V_0 \}, \end{align*} and define a map $\wt q: \wt\CZ^+_{\Bm} \to \wt\SX^+_{\Bm}$ by the natural projection. We define a map $\wt\s : \wt\CZ^+_{\Bm} \to \wt\Fg^0_1 \times {\wt X}'^+_{m_2}$ as follows: take $(x,v,gB, \f_1, \f_2) \in \wt\CZ^+_{\Bm}$. Since $\xi = (x,v) \in \SX^0_{\Bm}$, $W_{\xi}$ coincides with $g(M_{m_1})$. Let $g_1B_1$ be the element corresponding to the flag $\f_1(g(M_i))_{0 \le i \le m_1}$, and $g_2B_2$ the element corresponding to the isotropic flag $\f_2(g(M_i)/g(M_{m_1}))_{m_1 \le i \le n}$. Then \begin{equation*} \wt\s : (x,v,gB, \f_1, \f_2) \mapsto ((\f_1(x|_{W_{\xi}})\f_1\iv, g_1B_1), (\f_2(x|_{W_{\xi}^{\perp}/W_{\xi}})\f_2\iv, g_2B_2)). \end{equation*} We also define a map $\wt\pi_{\Bm} : \wt\CZ^+_{\Bm} \to \CK_{\Bm}$ by $(x,v,gB, \f_1, \f_2) \mapsto (x,v,\f_1,\f_2)$. Then we have the following commutative diagram (compare this with (5.13.2)): \begin{equation*} \tag{6.14.3} \begin{CD} \wt\Fg^0_1 \times {\wt X}'^+_{m_2} @<\wt\s<< \wt\CZ^+_{\Bm} @>\wt q>> \wt\SX^+_{\Bm} \\ @V\vf^1 \times \pi^2_{m_2}VV @VV\wt\pi_{\Bm}V @VV\pi_{\Bm}V \\ \Fg_1^0 \times X'^0_{m_2} @<\s<< \CK_{\Bm} @>q>> \SX^0_{\Bm}.
\end{CD} \end{equation*} \para{6.15.} By (5.14.1), $(\pi^1)_!\Ql$ can be written as $(\pi^1)_!\Ql \simeq \IC(\Fg_1, \SL)$ for a semisimple local system $\SL$ on $(\Fg_1)\reg$, the set of regular semisimple elements in $\Fg_1$. Since $\Fg_1^0$ is an open dense subset of $\Fg_1$ containing $(\Fg_1)\reg$, $(\vf^1)_!\Ql$ can be written as \begin{equation*} \tag{6.15.1} (\vf^1)_!\Ql \simeq \bigoplus_{\r_1 \in S_{m_1}\wg}\r_1\otimes \IC(\Fg_1^0, \SL^1_{\r_1}). \end{equation*} By applying Proposition 5.16 to the map $\pi^2_{m_2} : \wt X'^+_{m_2} \to X'^0_{m_2}$, we have \begin{equation*} \tag{6.15.2} (\pi^2_{m_2})_!\Ql \simeq H^{\bullet}(\BP_1^{m_3})\otimes \bigoplus_{\r_2 \in (S_{m_2} \times S_{m_3})\wg} \wh \r_2 \otimes \IC(X'^0_{m_2}, \SL'_{\r_2}). \end{equation*} Put $A_{\r_1} = \IC(\Fg^0_1, \SL^1_{\r_1})[\dim \Fg_1]$ and $A_{\r_2} = \IC(X'^0_{m_2}, \SL'_{\r_2})[\dim X'_{m_2}]$. Then $A_{\r_1}\boxtimes A_{\r_2}$ is a $(G_1 \times H_2)$-equivariant simple perverse sheaf on $\Fg_1^0 \times X'^0_{m_2}$. By a similar argument as in 5.14, thanks to (6.14.2), one can construct a simple perverse sheaf $A_{\r}$ on $\SX^0_{\Bm}$, where $\r = \r_1 \boxtimes \r_2 \in S_{\Bm}\wg$, satisfying the following property: \begin{equation*} q^*A_{\r}[\b_2] \simeq \s^*(A_{\r_1}\boxtimes A_{\r_2})[\b_1], \end{equation*} where $\b_1 = \dim H + m_1$ and $\b_2 = \dim (G_1 \times H_2)$. The following lemma can be proved in a similar way as [Sh, Lemma 2.7] (see also the proof of Lemma 5.15). \begin{lem} Let $\SL_{\r}$ be a simple local system on $\SY^0_{\Bm}$ as given in (6.4.4). Then we have \begin{equation*} A_{\r} \simeq \IC(\SX^0_{\Bm}, \SL_{\r})[d_{\Bm}]. \end{equation*} \end{lem} By using Lemma 6.16, we can prove the following.
\begin{prop} Under the notation of Lemma 6.16, $(\pi_{\Bm})_!\Ql$ is decomposed as \begin{equation*} (\pi_{\Bm})_!\Ql \simeq H^{\bullet}(\BP_1^{m_3})\otimes \bigoplus_{\r \in S_{\Bm}\wg}\wh\r \otimes \IC(\SX^0_{\Bm}, \SL_{\r}), \end{equation*} where $\wh\r$ is regarded as a vector space, ignoring the $W_{n,3}$-action. \end{prop} \begin{proof} We fix $\BI = (I_1, I_2, I_3) \in \SI(\Bm)$. Then the following commutative diagram is obtained from (6.14.3). \begin{equation*} \begin{CD} \wt\Fg^0_1 \times {\wt X}'_{I_2} @<<< \wt\CZ_{\BI} @>>> \wt\SX_{\BI} \\ @V\vf^1 \times \pi^2_{I_2}VV @VVV @VV\pi_{\BI}V \\ \Fg_1^0 \times X'^0_{m_2} @<\s<< \CK_{\Bm} @>q>> \SX^0_{\Bm}, \end{CD} \end{equation*} where $\wt\CZ_{\BI} = \wt q\iv(\wt\SX_{\BI})$. Note that $\pi_{\BI}$ is proper, and both squares are cartesian. Then the proposition can be proved in a similar way as in the proof of Proposition 2.8 in [Sh]. See also the discussion in the proof of Lemma 5.18. We omit the details. \end{proof} \remark{6.18.} The proof of [Sh, Prop. 2.8] uses the induction on $r$, and depends on Henderson's result [Hen] for the case where $r = 1$, which was proved by making use of the Fourier-Deligne transform on perverse sheaves on Lie algebras. Similarly, in the proof of Lemma 5.18, we needed to assume that $k \ne 0$. But thanks to Proposition 5.16, we do not need any restriction in the proof of Proposition 6.17. Note that Proposition 5.16 was proved by making use of the $W_n$-actions on perverse sheaves on $\wt\Fh$. \para{6.19.} Now Theorem 6.10 is proved by a similar argument as in 2.10 in [Sh]. See also the discussion in 5.19. Note that in our situation, $\SN_0$ does not appear thanks to Proposition 5.16. In 5.19, if $\SN_0 = 0$, one can construct a representation of $W_n$ on $(\ol\pi_m)_!\Ql$ by a similar method as in the case of $(\ol\psi_m)_!\Ql$ (see the discussion in 2.11). A similar argument works for $\ol\pi_{\Bm}$ also.
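\par As a small illustration of the construction of $\r\nat$ in 6.7 (the following example is only meant as a consistency check), assume that $n = 2$ and $\Bm = (1,1,0)$, so that
\begin{equation*}
W\nat_{\Bm} = S_1 \times W_1 \simeq \BZ/2\BZ, \qquad \Bm(0) = (1,0,1), \quad \Bm(1) = (1,1,0).
\end{equation*}
In both cases $S_{\Bm(k)}$ is the trivial group, so $\r = \r_1\boxtimes\r_2\boxtimes\r_3$ is trivial. For $k = 1$ we obtain $\r\nat = 1 \boxtimes \wt\r_2$, the trivial character of $W_1$, while for $k = 0$ we obtain $\r\nat = 1 \boxtimes \wt\r_3$, the non-trivial character of $W_1$. Hence each irreducible character of $W\nat_{\Bm}$ appears exactly once in the decomposition of Proposition 6.8.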
\para{6.20.} Recall the map $\pi^{(\Bm)}: \wt\SX_{\Bm} \to \SX_{\Bm}$ given in 6.2, and $W\nat_{\Bm}$ given in 6.7. The following result is a variant of Theorem 6.10, and is proved in a similar way as Theorem 3.2 in [Sh]. Note that if $\Bm = (m_1, m_2, 0)$, then $W\nat_{\Bm} = S_{m_1} \times W_{m_2,2}$. In the special case where $m_1 = 0, m_2 = n$, Theorem 6.21 coincides with Theorem 5.7. \begin{thm} Assume that $\Bm = (m_1, m_2, 0)\in \SQ^0_{n,3}$. Then $\pi^{(\Bm)}_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm}$, equipped with $W\nat_{\Bm}$-action, and is decomposed as \begin{equation*} \pi^{(\Bm)}_!\Ql[d_{\Bm}] \simeq \bigoplus_{0 \le k \le m_2} \bigoplus_{\r \in S_{\Bm(k)}\wg}\r\nat\otimes \IC(\SX_{\Bm(k)}, \SL_{\r})[d_{\Bm(k)}]. \end{equation*} \end{thm} \par\bigskip \section{Nilpotent variety for $\Fg^{\th}$} \para{7.1.} For each $\Bm \in \SQ_{n,3}$ with $p_1 = m_1, p_2 = m_1 + m_2$, we define varieties \begin{align*} \wt\SX_{\Bm,\nilp} &= \{ (x,v, gB) \in \Fh \times V \times H/B \mid g\iv x \in \Fn_s \oplus \FD_{p_2}, g\iv v \in M_{p_1} \}, \\ \SX_{\Bm,\nilp} &= \bigcup_{g \in H}g\bigl((\Fn_s\oplus\FD_{p_2}) \times M_{p_1}\bigr). \end{align*} We write $\SX_{\Bm,\nilp}$ as $\SX\nil$ if $\Bm = (n,0,0)$. Note that $\Fn \times M_n \simeq \Fn \oplus (M_n)_{\Fg}$ is contained in the nilpotent radical of a Borel subalgebra of $\Fg\Fl_{2n+1}$ (here $(M_n)_{\Fg}$ is the corresponding subspace of $V_{\Fg}$). Hence \begin{equation*} \tag{7.1.1} \SX_{\Bm, \nilp} \subset \SX\nil \subset \Fg^{\th}\nil. \end{equation*} In this paper, we are only concerned with $\SX\nil$, not with $\Fg^{\th}\nil$. However it is likely that $\Fg^{\th}\nil = \SX\nil$. \par We define a map $\pi_1^{(\Bm)}: \wt\SX_{\Bm, \nilp} \to \SX_{\Bm,\nilp}$ by $(x,v,gB) \mapsto (x,v)$. It is clear that $\wt\SX_{\Bm,\nilp}$ is smooth and irreducible, and $\pi_1^{(\Bm)}$ is a proper map onto $\SX_{\Bm,\nilp}$.
Since \begin{equation*} \wt\SX_{\Bm,\nilp} \simeq H \times^B\bigl((\Fn_s\oplus \FD_{p_2}) \times M_{p_1}\bigr), \end{equation*} we have \begin{align*} \tag{7.1.2} \dim \wt\SX_{\Bm, \nilp} &= \dim H - (n + m_3) + m_1 \\ &= 2n^2 + m_1 - m_3. \end{align*} \para{7.2.} For an integer $r \ge 1$, let $\SP_{n,r}$ be the set of $r$-tuples of partitions $\Bla = (\la^{(1)}, \dots, \la^{(r)})$ such that $\sum_{1 \le i \le r}|\la^{(i)}| = n$. In the case where $r = 1$, we simply denote by $\SP_n$ the set of partitions of $n$. It is well-known that $S_n\wg$ is in bijection with $\SP_n$. We denote by $\r_{\la}$ the irreducible representation of $S_n$ corresponding to $\la \in \SP_n$. We normalize the parametrization so that the unit representation $1_{S_n}$ corresponds to $\la = (n)$. We consider the complex reflection group $W_{n,r}$. As explained in 6.4, the irreducible representations of $W_{n,r}$ are constructed from irreducible representations of symmetric groups. By this construction, we have a natural bijection between $W_{n,r}\wg$ and $\SP_{n,r}$. We denote by $\r_{\Bla}$ the irreducible representation of $W_{n,r}$ corresponding to $\Bla \in \SP_{n,r}$. \par We consider the nilpotent orbits in $\Fh = \Fs\Fp(V)$. By Theorem 3.6, nilpotent orbits $\SO$ are parametrized by $W\wg_{n,2}$, as $\SO = \SO_{\r}$ for $\r \in W\wg_{n,2}$. We denote by $\SO_{\Bla}$ the nilpotent orbit $\SO_{\r}$ in $\Fh$ corresponding to $\r = \r_{\Bla}$ with $\Bla \in \SP_{n,2}$. \para{7.3.} Let $V_1$ be an $n$-dimensional vector space over $\Bk$, and $G_1 = GL(V_1)$. Put $\Fg_1 = \Lie G_1$. We consider the diagonal action of $G_1$ on $\Fg_1 \times V_1$. In particular, $G_1$ acts on $(\Fg_1)\nil \times V_1$. It is known by [AH], [T] that the set of $G_1$-orbits in $(\Fg_1)\nil \times V_1$ is parametrized by $\SP_{n,2}$. We denote by $\mathbb O_{\Bla}$ the $G_1$-orbit corresponding to $\Bla \in \SP_{n,2}$. According to [AH], the explicit correspondence is given as follows: take $(x,v) \in (\Fg_1)\nil \times V_1$.
Put $E^x = \{ y \in \End(V_1) \mid yx = xy \}$. $E^x$ is a subalgebra of $\End(V_1)$, stable under multiplication by $x$. If we put $W = E^xv$, then $W$ is an $x$-stable subspace of $V_1$. We denote by $\la^{(1)}$ the Jordan type of $x|_W$ and by $\la^{(2)}$ the Jordan type of $x|_{V_1/W}$. Then the Jordan type of $x$ is $\la^{(1)} + \la^{(2)}$, and $\Bla = (\la^{(1)}, \la^{(2)}) \in \SP_{n,2}$. $\mathbb O_{\Bla}$ is defined as the $G_1$-orbit containing $(x,v)$. This gives the required labelling of $G_1$-orbits in $(\Fg_1)\nil \times V_1$. Note that if $(x,v) \in \mathbb O_{\Bla}$, the Jordan type of $x$ is $\la^{(1)} + \la^{(2)}$ for $\Bla = (\la^{(1)},\la^{(2)})$. \par For a partition $\la = (\la_1, \la_2, \cdots) \in \SP_n$, put $n(\la) = \sum_{i \ge 1}(i-1)\la_i$. Let $\SO_{\la}$ be the $G_1$-orbit in $(\Fg_1)\nil$ consisting of $x$ of Jordan type $\la$. It is well-known that \begin{equation*} \tag{7.3.1} \dim \SO_{\la} = n^2 - n - 2n(\la). \end{equation*} \par Let $\mathbb O_{\Bla}$ be as above. The following formula was proved in [AH, Prop. 2.8]. Put $\nu = \la^{(1)} + \la^{(2)}$, and define $n(\Bla) = n(\la^{(1)}) + n(\la^{(2)})$. Then \begin{align*} \tag{7.3.2} \dim \mathbb O_{\Bla} &= \dim \SO_{\nu} + |\la^{(1)}| \\ &= n^2 - n - 2n(\Bla) + |\la^{(1)}|. \end{align*} \para{7.4.} For $\Bla \in \SP_{n,3}$, we shall define a variety $X_{\Bla} \subset \Fh \times V$. Put $\Bla = (\la^{(1)}, \la^{(2)}, \la^{(3)})$, and $m_i = |\la^{(i)}|$ for $i = 1,2,3$. Let $P$ be the parabolic subgroup of $H$ which is the stabilizer of the subspace $V_0 = M_{m_1}$, and $L$ the Levi subgroup of $P$ containing $T$. Then $L \simeq GL(V_0) \times Sp(\ol V_0)$, where $\ol V_0 = V_0^{\perp}/V_0$. Put $G_1 = GL(V_0)$ and $\Fg_1 = \Lie G_1$. $G_1$ acts on $\Fg_1 \times V_0$, as the restriction of the action of $H$ on $\Fh \times V$. Let $\SO_1$ be the $G_1$-orbit in $(\Fg_1)\nil \times V_0$ corresponding to $\mathbb O_{\Bla}$ with $\Bla = (\la^{(1)}, \emptyset) \in \SP_{m_1,2}$.
Put $H_2 = Sp(\ol V_0)$ and $\Fh_2 = \Lie H_2$. We denote by $\SO_2$ the $H_2$-orbit $\SO_{\Bla'}$ in $(\Fh_2)\nil$ corresponding to $\Bla' = (\la^{(2)}, \la^{(3)}) \in \SP_{m_2 + m_3, 2}$. Put $\Fp = \Lie P$. We define a set $\SM_{\Bla} \subset \Fp\nil \times V_0$ by \begin{equation*} \tag{7.4.1} \SM_{\Bla} = \{ (x,v) \in \Fp\nil \times V_0 \mid (x|_{V_0} , v) \in \SO_1, x|_{\ol V_0} \in \SO_2 \}, \end{equation*} and define $X_{\Bla} = \bigcup_{g \in H}g(\SM_{\Bla})$. Let $\Fn_P$ be the nilpotent radical of $\Fp$. We define a variety \begin{equation*} \tag{7.4.2} \wt X_{\Bla} = H \times^P((\ol\SO_1 + \ol\SO_2) + \Fn_P) \end{equation*} and a map $\pi_{\Bla} : \wt X_{\Bla} \to \SX\nil$ by $g*x \mapsto gx$. Then $\pi_{\Bla}$ is proper, and $\Im \pi_{\Bla} = \bigcup_{g \in H}g(\ol\SO_1 + \ol\SO_2 + \Fn_P)$ is closed in $\SX\nil$. We show the following. \begin{lem} Under the notation above, \begin{enumerate} \item $X_{\Bla}$ is an $H$-stable, smooth, irreducible, locally closed subvariety of $\SX\nil$. Moreover, $\Im \pi_{\Bla} = \ol X_{\Bla}$. \item $\dim \wt X_{\Bla} = \dim X_{\Bla} = 2\dim U_P + \dim \SO_1 + \dim \SO_2$. \item $X_{\Bla}$ gives a partition of $\SX\nil$, namely, \begin{equation*} \SX\nil = \coprod_{\Bla \in \SP_{n,3}}X_{\Bla}. \end{equation*} \end{enumerate} \end{lem} \begin{proof} Take $(x,v) \in \Lie L \times V_0$, with $x = (x_1, x_2)$ such that $(x_1,v) \in \SO_1$ and $x_2 \in \SO_2$. Then we can write \begin{equation*} \tag{7.5.1} X_{\Bla} \simeq H \times^{Z_L(x,v)U_P}\bigl((x + \Fn_P) \times \{ v\}\bigr). \end{equation*} Hence $X_{\Bla}$ is smooth and irreducible. We have \begin{align*} \dim X_{\Bla} &= \dim H - \dim Z_L(x,v) \\ &= \dim H - \dim Z_{G_1}(x_1,v) - \dim Z_{H_2}(x_2) \\ &= 2\dim U_P + \dim \SO_1 + \dim \SO_2. \end{align*} Thus by (7.4.2), we see that $\dim \wt X_{\Bla} = \dim X_{\Bla}$. (ii) is proved. \par We have $X_{\Bla} \subset \Im \pi_{\Bla}$.
As a variant of $\pi_{\Bla}$, it is possible to define $\pi'_{\Bla}$ by replacing $\SO_i$ by any orbits $\SO_i' \subset \ol\SO_i$ for $i = 1,2$. In that case, $\Im \pi'_{\Bla}$ is also closed. This implies that $X_{\Bla}$ is an open dense subset of $\Im \pi_{\Bla}$, hence $X_{\Bla}$ is locally closed in $\SX\nil$, and $\Im \pi_{\Bla} = \ol X_{\Bla}$. This proves (i). \par We show (iii). Take $(x,v) \in \SX\nil$. Up to $H$-conjugate, we may assume that $(x,v) \in \Fh\nil \times M_n$. Let $x'$ be the restriction of $x$ on $M_n$. Let $W = E^{x'}v$ for $V_1 = M_n$ under the notation in 7.3. Let $\la^{(1)}$ be the Jordan type of $x'|_W$. Then $(x'|_W,v) \in \mathbb O_{(\la^{(1)}, \emptyset)}$ in $W$. Let $P$ be the stabilizer of $W$. Then $x'' = x|_{\ol W}$ gives an element in $\Fs\Fp(\ol W)\nil$, where $\ol W = W^{\perp}/W$. Assume that $x'' \in \SO_{\Bla'}$ with $\Bla' = (\la^{(2)}, \la^{(3)})$. Then $(x,v) \in X_{\Bla}$ with $\Bla = (\la^{(1)}, \la^{(2)}, \la^{(3)}) \in \SP_{n,3}$. This implies that $\SX\nil = \bigcup_{\Bla}X_{\Bla}$. The above discussion shows that any $H$-conjugate of $(x,v)$ determines $\Bla$ uniquely. Hence the $X_{\Bla}$ are disjoint, and (iii) holds. \end{proof} \remark{7.6.} It follows from the construction that $X_{\Bla}$ is a single $H$-orbit if $m_1 = 0$. Since $\Fg^{\th}$ contains infinitely many $H$-orbits by Proposition 1.8, some of the $X_{\Bla}$ contain infinitely many $H$-orbits. \para{7.7.} For $\Bm \in \SQ_{n,3}$, we define $\Bla = \Bla(\Bm) \in \SP_{n,3}$ by $\la^{(i)} = (m_i)$ for $i = 1,2,3$. Also we define a subset $\SP(\Bm)$ of $\SP_{n,3}$ as the set of $\Bla \in \SP_{n,3}$ such that $|\la^{(i)}| = m_i$ for $i = 1,2,3$. For $\Bm \in \SQ^0_{n,3}$, put \begin{equation*} \tag{7.7.1} \wt\SP(\Bm) = \coprod_{0 \le k \le m_2}\SP(\Bm(k)). \end{equation*} We have the following result. \begin{prop} Assume that $\Bm \in \SQ_{n,3}^0$. Then we have \begin{enumerate} \item $\SX_{\Bm, \nilp} = \ol X_{\Bla(\Bm)}$. \item $\dim \wt\SX_{\Bm,\nilp} = \dim \SX_{\Bm,\nilp} = 2n^2 + m_1$.
\item For $\Bmu \in \wt\SP(\Bm)$, we have $X_{\Bmu} \subset \SX_{\Bm, \nilp}$. \end{enumerate} \end{prop} \begin{proof} We show (iii). Since $m_3 = 0$, we have $\FD_{m_1 + m_2} = \FD$, hence we can write \begin{equation*} \SX_{\Bm, \nilp} = \bigcup_{g \in H}g(\Fn \times M_{m_1}). \end{equation*} Take $(x, v) \in X_{\Bmu}$. Up to $H$-conjugate, we may assume that $(x,v) \in \SM_{\Bmu}$, where $\SM_{\Bmu}$ is as in (7.4.1). Assume that $\Bmu \in \wt\SP(\Bm)$. Let $P$ be the stabilizer of $W = M_{m_1}$, and $L \simeq GL(W) \times Sp(\ol W)$. We may further assume that $(x,v)$ is of the form $(x_1 + x_2 + z, v) \in \Fh\nil \times V$, where $(x_1,v) \in \SO_1, x_2 \in \SO_2, z \in \Fn_P$ in the notation of 7.4. Hence, up to $H$-conjugate, we can take $v \in M_{m_1}$ and $x \in \Fn$. It follows that $X_{\Bmu} \subset \SX_{\Bm,\nilp}$. This proves (iii). Now by (iii), $X_{\Bla(\Bm)} \subset \SX_{\Bm, \nilp}$. By Lemma 7.5 (ii), $\dim X_{\Bla(\Bm)} = 2\dim U_P + \dim \SO_1 + \dim \SO_2$. Since $\SO_1$ corresponds to $((m_1), \emptyset)$, we have $\dim \SO_1 = m_1^2$ by (7.3.2). On the other hand, $\SO_2$ corresponds to $((m_2), \emptyset)$, which is the regular nilpotent orbit in $(\Fh_2)\nil$. Thus $\dim \SO_2 = \dim H_2 - m_2$. It follows that \begin{align*} \dim X_{\Bla(\Bm)} &= (\dim H - \dim G_1 - \dim H_2) + m_1^2 + (\dim H_2 - m_2) \\ &= 2n^2 + m_1. \end{align*} By the previous discussion, we have \begin{equation*} \dim X_{\Bla(\Bm)} \le \dim \SX_{\Bm,\nilp} \le \dim\wt\SX_{\Bm,\nilp}. \end{equation*} By (7.1.2) (note that $m_3 = 0$), the above inequalities are actually equalities. Since $X_{\Bla(\Bm)}$ is irreducible, we conclude that $\ol X_{\Bla(\Bm)} = \SX_{\Bm,\nilp}$. Hence (i) holds. (ii) also follows from this. \end{proof} \para{7.9.} Let $P = LU_P$ be a parabolic subgroup of $H$, where $L$ is a Levi subgroup, and $U_P$ is the unipotent radical of $P$. Let $\pi_P : P \to L$ be the natural projection. Let $\SO'$ be an $L$-orbit in $(\Lie L)\nil$.
Let $\SO$ be an $H$-orbit in $\Fh\nil$ such that $\SO \cap \pi_P\iv(\SO')$ is open dense in $\pi_P\iv(\SO')$. For any $x \in \SO$, consider the variety \begin{equation*} \tag{7.9.1} \SP_x = \{gP \in H/P \mid g\iv x \in \pi_P\iv(\SO') \}. \end{equation*} Then clearly $\SP_x \ne \emptyset$. The following result can be proved in a similar way as [Sh, Prop. 6.3]. \begin{prop} Under the setting in 7.9, assume that $x \in \SO$. \begin{enumerate} \item $\SP_x$ consists of one point. \item $\dim Z_H(x) = \dim Z_L(x')$ for $x' \in \SO'$. \item Let $x_1 \in \pi_P\iv(\SO')$ be such that $\dim Z_H(x_1) = \dim Z_H(x)$. Then $x_1 \in \SO$. \item Take $x_1 \in \SO \cap \pi_P\iv(\SO')$ and put $x' = \pi_P(x_1)$. Let $Q = Z_P(x')$. Then $\dim Z_Q(x_1) = \dim Z_H(x_1)$. In particular, \begin{equation*} Z_H(x_1) = Z_P(x_1) = Z_Q(x_1). \end{equation*} \item $P$ acts transitively on $\SO \cap \pi_P\iv(\SO')$, and $Q$ acts transitively on $\SO \cap \pi_P\iv(x')$. \end{enumerate} \end{prop} \begin{proof} For the sake of completeness, we give an outline of the proof below. First we show that \begin{equation*} \tag{7.10.1} \dim \SP_x = 0. \end{equation*} Replacing $x$ by an $H$-conjugate, we may assume that $x \in \SO \cap \pi_P\iv(\SO')$. Put $x' = \pi_P(x) \in \SO'$. We have $\dim \pi_P\iv(x') = \dim U_P$. Put $c = \dim \SO, c' = \dim \SO'$. By [L1, Prop. 1.2] (actually a similar argument works also for the Lie algebra case, see [X2, Prop. 3.1]), we have $\dim (\SO \cap \pi_P\iv(x')) \le (c - c')/2$. Since $\SO \cap \pi_P\iv(\SO')$ is open dense in $\pi_P\iv(\SO')$, $\SO \cap \pi_P\iv(x')$ is open dense in $\pi_P\iv(x')$. It follows that \begin{equation*} \dim U_P \le (c - c')/2. \end{equation*} On the other hand, by Proposition 3.1 (ii) in [X2] (or [L1, Prop. 1.2 (b)]), we have \begin{align*} \tag{7.10.2} \dim \SP_x &\le (\dim U - c/2) - (\dim U_L - c'/2) \\ &= \dim U_P - (c - c')/2, \end{align*} where $U_L = U \cap L$ is a maximal unipotent subgroup in $L$. Hence $\dim \SP_x \le 0$.
Since $\SP_x \ne \emptyset$, we obtain (7.10.1). We also have $c - c' = 2\dim U_P$. This implies that $\dim Z_H(x) = \dim Z_L(x')$. Hence (ii) holds. Now consider (iv). Based on the above computation, in a similar way as in [Sh, Prop. 6.3], we can show $\dim Z_H(x_1) = \dim Z_Q(x_1)$. Since it is known that $Z_H(x_1)$ is connected ([X2, 2.14]) in the case of characteristic 2, we have $Z_H(x_1) = Z_Q(x_1)$. Hence $Z_H(x_1) = Z_P(x_1)$, which proves (iv). Put \begin{equation*} \SU = \{ (x_1, gP) \in \Fh\nil \times H/P \mid g\iv x_1 \in \pi_P\iv(\SO')\}. \end{equation*} Then $\SU \simeq H \times^P\pi_P\iv(\SO')$ and so $\SU$ is irreducible. Let $f : \SU \to \Fh\nil$ be the first projection, and put $\SU_{\SO} = f\iv(\SO)$. Then $\SU_{\SO} \simeq H \times^P(\SO \cap \pi_P\iv(\SO'))$, and so $\SU_{\SO}$ is also irreducible. For any $x \in \SO$, $\dim f\iv(x) = 0$ by (7.10.1). Thus $\dim \SU_{\SO} = \dim \SO$. Since $f : \SU_{\SO} \to \SO$ is an $H$-equivariant surjective map, for any $\xi \in \SU_{\SO}$, the $H$-orbit $H\xi$ is open dense in $\SU_{\SO}$. It follows that \par\medskip\noindent (7.10.3) \ $H$ acts transitively on $\SU_{\SO}$. \par\medskip Take $x, x_1 \in \SO \cap \pi_P\iv(\SO')$. Since $(x, P), (x_1, P) \in \SU_{\SO}$, there exists $g\in P$ such that $gx = x_1$. This proves the first statement of (v). Now assume that $x_1, x_2 \in \SO \cap \pi_P\iv(x')$. Then there exists $g \in P$ such that $gx_1 = x_2$. But since $\pi_P$ is $P$-equivariant, $g \in Z_P(x') = Q$. This proves the second statement of (v). \par We show (i). We may assume that $x \in \SO \cap \pi_P\iv(\SO')$. Then $P \in \SP_x$. Assume that $gP \in \SP_x$. Then $(x, P), (x, gP) \in \SU_{\SO}$, and so there exists $h \in Z_H(x)$ such that $gP = hP$ by (7.10.3). But by (iv), $h \in P$, and so (i) holds. (iii) is proved in the same way as in [Sh]. The proposition is proved. \end{proof} \para{7.11.} We return to the original setting. 
For later application, we shall consider some open dense subvariety $X_{\Bla}^0$ of $X_{\Bla}$. As in 7.4, let $\SO_1$ be a $G_1$-orbit in $(\Fg_1)\nil \times V_0$ and $\SO_2$ an $H_2$-orbit in $(\Fh_2)\nil$. We denote by $\SO_1'$ the $G_1$-orbit in $(\Fg_1)\nil$ which is the projection of $\SO_1$ to $(\Fg_1)\nil$, hence $\SO_1' = \SO_{\la^{(1)}}$ (see 7.3). We define a subset $\SM_{\Bla}^0$ of $\SM_{\Bla}$ as the set of $(x,v)$ such that the orbit $\SO$ containing $x$ satisfies the property that $\SO \cap \pi_P\iv(\SO'_1 \times \SO_2)$ is open dense in $\pi_P\iv(\SO'_1\times \SO_2) = (\SO'_1 \times \SO_2) + \Fn_P$. Clearly, $\SM_{\Bla}^0$ is open dense in $\SM_{\Bla}$. We define \begin{equation*} \tag{7.11.1} X_{\Bla}^0 = \bigcup_{g \in H}g(\SM_{\Bla}^0). \end{equation*} Let $\pi_{\Bla} : \wt X_{\Bla} \to \ol X_{\Bla}$ be as in 7.4. We define \begin{equation*} \wt X^0_{\Bla} = H \times^P\SM^0_{\Bla}. \end{equation*} Since $\wt X_{\Bla} \simeq H \times^P\ol\SM_{\Bla}$, and $\SM_{\Bla}$ is open dense in $\ol\SM_{\Bla}$, $\wt X^0_{\Bla}$ is open dense in $\wt X_{\Bla}$, and one can check that $\pi_{\Bla}\iv(X^0_{\Bla}) = \wt X^0_{\Bla}$. Since $\pi_{\Bla}$ is proper, $X_{\Bla}^0$ is open dense in $X_{\Bla}$. Let $\pi_{\Bla}^0: \wt X^0_{\Bla} \to X^0_{\Bla}$ be the restriction of $\pi_{\Bla}$ on $\wt X^0_{\Bla}$. We have the following lemma. \begin{lem} $\pi_{\Bla}^0$ gives an isomorphism $\wt X^0_{\Bla} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, X^0_{\Bla}$. \end{lem} \begin{proof} $\pi^0_{\Bla}$ is $H$-equivariant and surjective. Take $(x,v) \in \SM^0_{\Bla}$, and put $x' = \pi_P(x) = (x_1, x_2)$ with $(x_1,v) \in \SO_1, x_2 \in \SO_2$. Furthermore, by our assumption, $\SO \cap \pi_P\iv(\SO')$ is open dense in $\pi_P\iv(\SO')$, where $\SO$ is the $H$-orbit containing $x$, and $\SO' = \SO_1' \times \SO_2$. Hence, under the notation of (7.9.1), we have \begin{equation*} \pi_{\Bla}\iv(x,v) \simeq \{ gP \in H/P \mid g\iv(x,v)\in \SM_{\Bla}^0\} \subset \SP_x.
\end{equation*} By Proposition 7.10, we have $\SP_x = \{ P \}$. It follows that $\pi_{\Bla}\iv(x,v) = \{ P\}$. This shows that $\pi^0_{\Bla}$ gives a bijective morphism from $\wt X^0_{\Bla}$ onto $X^0_{\Bla}$. The map $g(x,v) \mapsto gP$ gives a well-defined morphism $X^0_{\Bla} \to \wt X^0_{\Bla}$, which is the inverse of $\pi^0_{\Bla}$. \end{proof} \para{7.13.} From now on, we fix $\Bm \in \SQ_{n,3}^0$. Put $\SB = H/B$. For any $z = (x,v) \in \SX_{\Bm,\nilp}$, define \begin{align*} \SB_z &= \{ gB \in \SB \mid g\iv x \in \Fn, g\iv v \in M_n\}, \\ \SB_z^{(\Bm)} &= \{ gB \in \SB \mid g\iv x \in \Fn, g\iv v \in M_{m_1} \}. \end{align*} Then $\SB_z^{(\Bm)} \subset \SB_z$, and both are closed subvarieties of $\SB$. \par For each integer $d \ge 0$, we define a subset $X(d)$ of $\SX_{\Bm,\nilp}$ by \begin{equation*} \tag{7.13.1} X(d) = \{ z \in \SX_{\Bm,\nilp} \mid \dim \SB_z^{(\Bm)} = d \}. \end{equation*} Then $X(d)$ is a locally closed subvariety of $\SX_{\Bm,\nilp}$, and $\SX_{\Bm,\nilp} = \coprod_{d \ge 0}X(d)$. We consider the Steinberg varieties $\CZ^{(\Bm)}$ and $\CZ^{(\Bm)}_1$, which are generalizations of the Steinberg variety considered in 4.4. \begin{align*} \CZ^{(\Bm)} &= \{ (z, gB, g'B) \in \SX \times \SB \times \SB \mid (z, gB) \in \wt\SX_{\Bm}, (z, g'B) \in \wt\SX_{\Bm}\} \\ \CZ^{(\Bm)}_1 &= \{ (z, gB, g'B) \in \SX\nil \times \SB \times \SB \mid (z, gB) \in \wt\SX_{\Bm,\nilp}, (z,g'B) \in \wt\SX_{\Bm,\nilp}\}. \end{align*} Recall, for $\Bm = (m_1, m_2, 0) \in \SQ_{n,3}^0$, that $W\nat_{\Bm} = S_{m_1} \times W_{m_2}$. We show the following lemma. \begin{lem} Under the notation in 7.13, \begin{enumerate} \item $\dim \CZ^{(\Bm)}_1 = \dim \SX_{\Bm, \nilp}$. The set of irreducible components of $\CZ^{(\Bm)}_1$ with maximal dimension is parametrized by $w \in W\nat_{\Bm}$. \item $\dim \CZ^{(\Bm)} = \dim \CZ^{(\Bm)}_1 + n$. The set of irreducible components of $\CZ^{(\Bm)}$ with maximal dimension is parametrized by $w \in W\nat_{\Bm}$.
\item Assume that $X(d) \ne \emptyset$. For any $z \in X(d)$, we have \begin{equation*} \dim \SB^{(\Bm)}_z \le \frac{1}{2}(\dim \SX_{\Bm,\nilp} - \dim X(d)). \end{equation*} In particular, $\pi_1^{(\Bm)} : \wt\SX_{\Bm,\nilp} \to \SX_{\Bm,\nilp}$ is semismall with respect to the stratification $\SX_{\Bm,\nilp} = \coprod_{d}X(d)$. \end{enumerate} \end{lem} \begin{proof} Let $p_1 : \CZ^{(\Bm)}_1 \to \SB \times \SB$ be the projection to the second and third factors. For each $w \in W_n$, let $\SO_w$ be the $H$-orbit of $(B, wB)$ in $\SB \times \SB$. We have $\SB \times \SB = \coprod_{w \in W_n}\SO_w$. Put $Z_w = p_1\iv(\SO_w)$. Then $\CZ^{(\Bm)}_1 = \coprod_{w \in W_n}Z_w$. Here $Z_w$ is a vector bundle over $\SO_w \simeq H/(B \cap wBw\iv)$ with fibre isomorphic to \begin{equation*} (\Fn \cap w(\Fn)) \times (M_{m_1} \cap w(M_{m_1})). \end{equation*} (Note that $\ol\FN_{p_2} = \Fn_s \oplus \FD_{m_1 +m_2} = \Fn$ since $m_3 = 0$.) We have \begin{equation*} \dim Z_w = \dim H - \dim T + \dim (M_{m_1} \cap w(M_{m_1})). \end{equation*} Here $\dim (M_{m_1} \cap w(M_{m_1})) \le m_1$, and the equality holds if and only if $w(M_{m_1}) = M_{m_1}$, namely $w \in W\nat_{\Bm}$. It follows that $\dim Z_w \le 2n^2 + m_1$ and the equality holds if and only if $w \in W\nat_{\Bm}$. Hence by Proposition 7.8, $\dim Z_w = \dim \SX_{\Bm,\nilp}$ if $Z_w$ has the maximal dimension. This implies that $\{ \ol Z_w \mid w \in W\nat_{\Bm}\}$ gives the set of irreducible components of $\CZ^{(\Bm)}_1$ with maximal dimension. This proves (i). For (ii), we consider $\wt Z_w = p\iv(\SO_w)$, where $p: \CZ^{(\Bm)} \to \SB \times \SB$ is the projection to the second and third factors. Then $\wt Z_w$ is a vector bundle over $\SO_w$, with fibre isomorphic to \begin{equation*} \Ft \times (\Fn \cap w(\Fn))\times (M_{m_1} \cap w(M_{m_1})). \end{equation*} Hence (ii) is proved in a similar way as (i). \par We show (iii). Let $q_1 : \CZ^{(\Bm)}_1 \to \SX_{\Bm, \nilp}$ be the projection on the first factor.
For each $z \in \SX_{\Bm,\nilp}$, $q_1\iv(z) \simeq \SB_z^{(\Bm)} \times \SB_z^{(\Bm)}$. By (7.13.1), we have \begin{equation*} \dim q_1\iv(X(d)) = \dim X(d) + 2d. \end{equation*} Since $\dim q_1\iv(X(d)) \le \dim \CZ^{(\Bm)}_1 = \dim \SX_{\Bm, \nilp}$, we see that $2d \le \dim \SX_{\Bm,\nilp} - \dim X(d)$. This proves (iii). The lemma is proved. \end{proof} \par\bigskip \section{ Springer correspondence for $\Fg^{\th}$} \par\medskip \para{8.1.} In this section we shall prove the Springer correspondence for $\Fg^{\th}$. In [Sh], the Springer correspondence was established for the exotic symmetric space of level $r$ for arbitrary $r \ge 1$. Once Theorem 6.10 and Theorem 6.21 are proved, a similar discussion as in [Sh, 7] can be applied to our situation as the special case where $r = 3$. Assume that $\Bm \in \SQ_{n,3}^0$. We consider the variety $\CZ^{(\Bm)}$ as in 7.13. We denote by $\vf: \CZ^{(\Bm)} \to \SX_{\Bm}$ the map $(z,gB,g'B) \mapsto z$. Let $\a: \CZ^{(\Bm)} \to \Ft$ be the map defined by $(x,v, gB, g'B) \mapsto p_1(g\iv x)$, similarly to 4.4. As in 4.4, we have a commutative diagram \begin{equation*} \begin{CD} \CZ^{(\Bm)} @>\a>> \Ft \\ @V\vf VV @VV\w_1V \\ \SX_{\Bm} @>\wt\w>> \Xi, \end{CD} \end{equation*} where $\wt\w$ is the composite of the projection $\SX_{\Bm} \to \Fh$ and $\w$. Put $\s = \w_1\circ \a$, and $d'_{\Bm} = \dim \SX_{\Bm, \nilp} = 2n^2 + m_1$. We define a constructible sheaf $\ST$ on $\Xi$ by \begin{equation*} \tag{8.1.1} \ST = \SH^{2d'_{\Bm}}(\s_!\Ql) = R^{2d'_{\Bm}}\s_!\Ql. \end{equation*} Note that if $m_1 = 0$, $\CZ^{(\Bm)}$ coincides with $Z$ in 4.4, and $\ST$ is nothing but the sheaf $\ST$ defined in (4.4.1). Also $\ST$ corresponds to the special case where $r = 3$ of the sheaf $\SF$ defined in [Sh, 7.1]. The following properties of $\ST$ are obtained by similar arguments as in [Sh], so we omit the proof.
The discussion in the proof of Lemma 4.5 can be generalized to this case (see also [Sh, Lemma 7.2]), and we obtain \begin{lem} The sheaf $\ST$ is a perfect sheaf on $\Xi$. \end{lem} Let $p: \CZ^{(\Bm)} \to \SB \times \SB$ be the projection to the second and third factors. We put $\CZ^{(\Bm)}_w = p\iv(\SO_w)$ for $w \in W_n$, where $\SO_w$ is as in the proof of Lemma 4.5. Let $\s_w$ be the restriction of $\s$ on $\CZ^{(\Bm)}_w$, and put $\ST_w = \SH^{2d'_{\Bm}}((\s_w)_!\Ql)$. In a similar way as in the proof of Proposition 4.6 (see also [Sh, Prop. 7.3]; here the role of $W_n$ is replaced by $W\nat_{\Bm}$), we can prove \begin{prop} $\ST \simeq \bigoplus_{w \in W\nat_{\Bm}}\ST_w$. \end{prop} \para{8.4.} By the K\"unneth formula, we have $\vf_!\Ql \simeq \pi^{(\Bm)}_!\Ql \otimes \pi^{(\Bm)}_!\Ql$. By Theorem 6.21, $\pi^{(\Bm)}_!\Ql$ has a natural structure of $W\nat_{\Bm}$-module. Hence $\vf_!\Ql$ has a structure of $W\nat_{\Bm} \times W\nat_{\Bm}$-module. It follows that $\ST = \SH^{2d'_{\Bm}}(\s_!\Ql) \simeq \SH^{2d'_{\Bm}}(\wt\w_!(\vf_!\Ql))$ has a natural action of $W\nat_{\Bm} \times W\nat_{\Bm}$. Under the decomposition of $\ST$ in Proposition 8.3, the action of $W\nat_{\Bm} \times W\nat_{\Bm}$ has the following property (see the discussion in 4.7). For each $w_1, w_2 \in W\nat_{\Bm}$, \begin{equation*} \tag{8.4.1} (w_1,w_2)\cdot \ST_w = \ST_{w_1ww_2\iv}. \end{equation*} \par Let $a_0$ be the element in $\Xi$ corresponding to the $S_n$-orbit of $0 \in \Ft$, and $\ST_{a_0}$ be the stalk of $\ST$ at $a_0 \in \Xi$. By Proposition 8.3, we have a decomposition \begin{equation*} \tag{8.4.2} \ST_{a_0} = \bigoplus_{w \in W\nat_{\Bm}}(\ST_w)_{a_0}, \end{equation*} where $(\ST_w)_{a_0}$ is the stalk of $\ST_w$ at $a_0$. $W\nat_{\Bm} \times W\nat_{\Bm}$ acts on $\ST_{a_0}$ following (8.4.1). In a similar way as in (4.5.2), one can show that $\ST_w \simeq (\w_1)_!\Ql$. Since $\w_1\iv(a_0) = \{ 0\} \subset \Ft$, $(\ST_w)_{a_0} \simeq H_c^0(\w_1\iv(a_0),\Ql) \simeq \Ql$.
Thus we have proved the following result, which is the stalk version of Proposition 4.8. \begin{prop} $\ST_{a_0}$ has a structure of $W\nat_{\Bm} \times W\nat_{\Bm}$-module, which coincides with the two-sided regular representation of $W\nat_{\Bm}$. \end{prop} \para{8.6.} We consider the map $\pi_1^{(\Bm)} : \wt\SX_{\Bm,\nilp} \to \SX_{\Bm,\nilp}$, and put $K_{\Bm,1} = (\pi_1^{(\Bm)})_!\Ql[d'_{\Bm}]$. By Lemma 7.14 (iii), $\pi_1^{(\Bm)}$ is semismall. Hence $K_{\Bm,1}$ is a semisimple perverse sheaf on $\SX_{\Bm,\nilp}$, and is decomposed as \begin{equation*} \tag{8.6.1} K_{\Bm,1} \simeq \bigoplus_{A}V_A\otimes A, \end{equation*} where $A$ is a simple perverse sheaf which is isomorphic to a direct summand of $K_{\Bm,1}$, and $V_A = \Hom (K_{\Bm,1}, A)$ is the multiplicity space for $A$. The following result is a counterpart of Proposition 4.11 for the case of the nilpotent variety (see also Proposition 7.8 in [Sh]). \begin{prop} Under the notation above, put $m_A = \dim V_A$ for each direct summand $A$. Then we have \begin{equation*} \sum_{A}m_A^2 = |W\nat_{\Bm}|. \end{equation*} \end{prop} \begin{proof} By 8.4, we have \begin{align*} \ST_{a_0} &\simeq \BH^{2d'_{\Bm}}_c(\SX_{\Bm,\nilp}, \pi^{(\Bm)}_!\Ql\otimes \pi^{(\Bm)}_!\Ql) \\ &\simeq \BH^0_c(\SX_{\Bm,\nilp}, K_{\Bm,1}\otimes K_{\Bm,1}). \end{align*} By applying (8.6.1), we have \begin{equation*} \dim \ST_{a_0} = \sum_{A,A'}(m_Am_{A'}) \dim \BH^0_c(\SX_{\Bm,\nilp}, A\otimes A'). \end{equation*} Apply Lemma 4.9 for $X = \SX_{\Bm,\nilp}$. Then $\BH^0_c(\SX_{\Bm,\nilp}, A\otimes A') \ne 0$ only when $D(A) = A'$, in which case $\dim \BH^0_c(\SX_{\Bm,\nilp}, A \otimes A') = 1$. But since $K_{\Bm,1}$ is self-dual, $m_A = m_{D(A)}$ for each $A$. It follows that $\dim \ST_{a_0} = \sum_A m_A^2$. On the other hand, by Proposition 8.5, we have $\dim \ST_{a_0} = |W\nat_{\Bm}|$. This proves the proposition. \end{proof} \para{8.8.} We consider the map $\pi^{(\Bm)} : \wt\SX_{\Bm} \to \SX_{\Bm}$.
By Theorem 6.21, $\pi^{(\Bm)}_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm}$, equipped with $W\nat_{\Bm}$-action, and is decomposed as \begin{equation*} \tag{8.8.1} \pi^{(\Bm)}_!\Ql[d_{\Bm}] \simeq \bigoplus_{\r \in (W\nat_{\Bm})\wg} \r \otimes K_{\r}, \end{equation*} where $K_{\r}$ is a simple perverse sheaf on $\SX_{\Bm}$, and $d_{\Bm} = \dim \SX_{\Bm}$. More precisely, it is given as $K_{\r} = \IC(\SX_{\Bm(k)}, \SL_{\r_1})[d_{\Bm(k)}]$ if $\r = \r_1\nat$ for $\r_1 \in S_{\Bm(k)}\wg$. We consider the complex $(\pi_1^{(\Bm)})_!\Ql[d'_{\Bm}]$ for the map $\pi_1^{(\Bm)}:\wt\SX_{\Bm, \nilp} \to \SX_{\Bm,\nilp}$, where $d'_{\Bm} = \dim \SX_{\Bm, \nilp}$. The following result gives the Springer correspondence for $W\nat_{\Bm}$. (Compare Theorem 8.9 and Corollary 8.11 with Theorem 7.12 and Corollary 7.14 in [Sh]). \begin{thm}[Springer correspondence for $W\nat_{\Bm}$] Assume that $\Bm \in \SQ^0_{n,3}$. Then $(\pi^{(\Bm)}_1)_!\Ql[d'_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm,\nilp}$, equipped with $W\nat_{\Bm}$-action, and is decomposed as \begin{equation*} \tag{8.9.1} (\pi^{(\Bm)}_1)_!\Ql[d'_{\Bm}] \simeq \bigoplus_{\r \in (W\nat_{\Bm})\wg}\r \otimes L_{\r}, \end{equation*} where $L_{\r}$ is a simple perverse sheaf on $\SX_{\Bm,\nilp}$ such that \begin{equation*} \tag{8.9.2} K_{\r}|_{\SX_{\Bm,\nilp}} \simeq L_{\r}[d_{\Bm} - d'_{\Bm}]. \end{equation*} \end{thm} \begin{proof} As discussed in (8.6.1), $K_{\Bm,1} = (\pi_1^{(\Bm)})_!\Ql[d'_{\Bm}]$ is a semisimple perverse sheaf. Since $K_{\Bm,1}$ is the restriction of $\pi^{(\Bm)}_!\Ql$ on $\SX_{\Bm,\nilp}$, we have a natural homomorphism \begin{equation*} \a : \Ql[W\nat_{\Bm}] \simeq \End (\pi^{(\Bm)}_!\Ql) \to \End K_{\Bm,1}. \end{equation*} In order to prove (8.9.1), it is enough to show that $\a$ is an isomorphism. By Proposition 8.7, we have $\dim \End K_{\Bm,1} = |W\nat_{\Bm}|$. Thus we have only to show that $\a$ is injective. 
Note that $\ST_{a_0} = \BH^0_c(\SX_{\Bm,\nilp}, K_{\Bm,1}\otimes K_{\Bm,1})$, and $K_{\Bm,1}$ is decomposed as $K_{\Bm,1} = \bigoplus_{\r}\r\otimes (K_{\r}|_{\SX_{\Bm,\nilp}})$ by (8.8.1). This decomposition determines the $W\nat_{\Bm} \times W\nat_{\Bm}$-module structure of $\ST_{a_0}$. But by Proposition 8.5, $\ST_{a_0}$ is isomorphic to the two-sided regular representation of $W\nat_{\Bm}$. This implies, in particular, $K_{\r}|_{\SX_{\Bm,\nilp}} \ne 0$ for any $\r \in (W\nat_{\Bm})\wg$. Hence $\a$ is injective, and so (8.9.1) holds. Now (8.9.2) follows by comparing (8.8.1) and (8.9.1). The theorem is proved. \end{proof} \para{8.10.} For each $\Bm \in \SQ^0_{n,3}$, we denote by $(W\wg_{n,3})_{\Bm}$ the set of irreducible representations $\wh\r \in W\wg_{n,3}$ corresponding to $\r \in S\wg_{\Bm(k)}$ for various $0 \le k \le m_2$. Then we have \begin{equation*} \tag{8.10.1} W\wg_{n,3} = \coprod_{\Bm \in \SQ^0_{n,3}}(W\wg_{n,3})_{\Bm}. \end{equation*} For each $\r \in S_{\Bm(k)}\wg$, we can construct $\r\nat \in (W\nat_{\Bm})\wg$ as in 6.7, and the map $\r \mapsto \r\nat$ gives a bijective correspondence \begin{equation*} \tag{8.10.2} \coprod_{0 \le k \le m_2}S_{\Bm(k)}\wg \simeq (W\nat_{\Bm})\wg. \end{equation*} It follows that the correspondence $\wh\r \lra \r \lra \r\nat$ gives a bijective correspondence \begin{equation*} \tag{8.10.3} (W\wg_{n,3})_{\Bm} \simeq (W\nat_{\Bm})\wg, \qquad \wh\r \lra \r\nat. \end{equation*} \par We consider the map $\ol\pi_{\Bm} : \pi\iv(\SX_{\Bm}) \to \SX_{\Bm}$. Then by Theorem 6.10, $(\ol\pi_{\Bm})_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf, equipped with $W_{n,3}$-action, and is decomposed as \begin{equation*} (\ol\pi_{\Bm})_!\Ql[d_{\Bm}] \simeq \bigoplus_{\wh\r \in (W\wg_{n,3})_{\Bm}} \wh\r \otimes K_{\r\nat}, \end{equation*} where $K_{\r\nat}$ is a simple perverse sheaf on $\SX_{\Bm}$ as defined in (8.8.1). Let $\ol\pi_{\Bm,1}: \pi\iv(\SX_{\Bm,\nilp}) \to \SX_{\Bm,\nilp}$ be the restriction of $\ol\pi_{\Bm}$ on $\SX_{\Bm,\nilp}$.
By applying (8.9.2), we see that $(\ol\pi_{\Bm,1})_!\Ql[d'_{\Bm}]$ is a semisimple perverse sheaf. As a corollary to Theorem 8.9, we obtain the Springer correspondence for $W_{n,3}$. \begin{cor}[Springer correspondence for $W_{n,3}$] Assume that $\Bm \in \SQ^0_{n,3}$. Then $(\ol\pi_{\Bm,1})_!\Ql[d'_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm,\nilp}$, equipped with $W_{n,3}$-action, and is decomposed as \begin{equation*} \tag{8.11.1} (\ol\pi_{\Bm,1})_!\Ql[d'_{\Bm}] \simeq \bigoplus_{\wh\r \in (W\wg_{n,3})_{\Bm}} \wh\r \otimes L_{\r\nat}, \end{equation*} where $L_{\r\nat}$ is the simple perverse sheaf on $\SX_{\Bm,\nilp}$ as given in Theorem 8.9. \end{cor} \par\bigskip \section {Determination of the Springer correspondence} \para{9.1.} In this section, we shall determine $L_{\r}$ appearing in the Springer correspondence explicitly. Let $\Bm = (m_1, m_2, 0) \in \SQ^0_{n,3}$. We define a variety $\CG_{\Bm}$ by \begin{align*} \CG_{\Bm} = \{ (x,&v, W_1) \mid (x,v) \in \SX_{\Bm}, W_1 \text{ : isotropic, } \\ &\dim W_1 = m_1, x(W_1) \subset W_1, v \in W_1 \}. \end{align*} Let $\z: \CG_{\Bm} \to \SX_{\Bm}$ be the projection to the first two factors. Then the map $\pi^{(\Bm)} : \wt\SX_{\Bm} \to \SX_{\Bm}$ is factored as \begin{equation*} \tag{9.1.1} \begin{CD} \pi^{(\Bm)} : \wt\SX_{\Bm} @>\vf>> \CG_{\Bm} @>\z>> \SX_{\Bm}, \end{CD} \end{equation*} where $\vf$ is defined by $(x,v,gB) \mapsto (x,v, gM_{m_1})$. $\vf$ is surjective since there exists an $x$-stable maximal isotropic subspace containing $W_1$. $\z$ is also surjective since $\pi^{(\Bm)}$ is surjective. Since $\Bm \in \SQ^0_{n,3}$, we have $\dim \wt\SX_{\Bm} = \dim \SX_{\Bm}$. It follows that $\dim \CG_{\Bm} = \dim \SX_{\Bm}$. \par In the case where $m_1 = 0$, $\pi^{(\Bm)} : \wt\SX_{\Bm} \to \SX_{\Bm}$ coincides with the map $\pi : \wt X \to X$ (since $m_2 = n$), and (9.1.1) can be written as $\pi : \wt X \to \CG = X = \Fh$, where $\pi = \vf, \z = \id$. 
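\par In more detail, the equality $\dim \CG_{\Bm} = \dim \SX_{\Bm}$ follows from a routine dimension count. Since $\vf$ and $\z$ are both surjective, we have
\begin{equation*}
\dim \SX_{\Bm} \le \dim \CG_{\Bm} \le \dim \wt\SX_{\Bm} = \dim \SX_{\Bm},
\end{equation*}
so that all the dimensions above coincide.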
\par By modifying the definition of $\CK_{\Bm}$ in 6.14, we define a variety $\CH_{\Bm}$ by \begin{align*} \CH_{\Bm} = \{ (x,v, &W_1, \f_1, \f_2) \mid (x,v, W_1) \in \CG_{\Bm}, \\ &\f_1 : W_1 \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, V_0, \f_2 : W_1^{\perp}/W_1 \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ol V_0 \text{ (symplectic isom.)} \}, \end{align*} where $V_0 = M_{m_1}$ and $\ol V_0 = V_0^{\perp}/V_0$. We also define a variety $\wt\CZ_{\Bm}$ by \begin{align*} \wt\CZ_{\Bm} = \{(x,v, &gB, \f_1, \f_2) \mid (x,v,gB) \in \wt\SX_{\Bm}, \\ &\f_1 : gM_{m_1} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, V_0, \f_2 : (gM_{m_1})^{\perp}/gM_{m_1} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \ol V_0 \}. \end{align*} As in 6.14, we consider $G_1 = GL(V_0), H_2 = Sp(\ol V_0)$, and $\Fh_2 = \Lie H_2$. The maps $\pi^2: \wt X' \to X' = \Fh_2$, $\pi^1: \wt\Fg_1 \to \Fg_1$ are given as in 6.14. We have the following commutative diagram \begin{equation*} \tag{9.1.3} \begin{CD} \wt\Fg_1 \times \wt X' @<\wt\s<< \wt\CZ_{\Bm} @>\wt q>> \wt\SX_{\Bm} \\ @V\pi^1 \times \pi^2VV @VV\wt\vf V @VV\vf V \\ \Fg_1 \times X' @<\s<< \CH_{\Bm} @>q>> \CG_{\Bm} \\ @. @. @VV\z V \\ @. @. \SX_{\Bm}, \end{CD} \end{equation*} where morphisms are defined as \begin{align*} q: &(x,v,W_1, \f_1, \f_2) \mapsto (x,v, W_1), \\ \s : &(x,v, W_1, \f_1, \f_2) \mapsto (\f_1(x|_{W_1})\f_1\iv, \f_2(x|_{W_1^{\perp}/W_1})\f_2\iv), \\ \wt\vf : &(x,v, gB, \f_1, \f_2) \mapsto (x, v, (gM_{m_1}), \f_1, \f_2). \end{align*} $\wt\s, \wt q$ are defined naturally. \par One can check that both squares are cartesian squares. Moreover, it is easy to see that \par\medskip\noindent (9.1.4) \ $q$ is a principal bundle with fibre isomorphic to $G_1 \times H_2$, and $\s$ is a locally trivial fibration with smooth connected fibre of dimension $\dim H + m_1$. 
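\par As a consistency check for (9.1.4), one can compute $\dim \CH_{\Bm}$ in two ways from the diagram (9.1.3), using the principal bundle $q$ and the fibration $\s$, respectively:
\begin{align*}
\dim \CH_{\Bm} &= \dim \CG_{\Bm} + (\dim G_1 + \dim H_2) \\
               &= \dim (\Fg_1 \times X') + (\dim H + m_1).
\end{align*}
Since $\dim \Fg_1 = \dim G_1$ and $\dim X' = \dim \Fh_2 = \dim H_2$, this gives $\dim \CG_{\Bm} = \dim H + m_1$, which is consistent with the equality $\dim \CG_{\Bm} = \dim \SX_{\Bm}$ in 9.1.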
\para{9.2.} For a fixed $k$, we consider the variety $\wt\SY^+_{\Bm(k)} = (\psi^{(\Bm)})\iv(\SY^0_{\Bm(k)})$ as in 6.2, and let $\CG_{\Bm(k),\sr} = \z\iv(\SY^0_{\Bm(k)})$ be the locally closed subvariety of $\CG_{\Bm}$. Here the varieties $Y'^0_k, \wt Y'^+_k$ are defined similarly as in Section 2 by replacing $X$ by $X'$. As the restriction of (9.1.3), we have the following commutative diagram \begin{equation*} \tag{9.2.1} \begin{CD} \wt\Fg_{1,\rg} \times \wt Y'^+_k @<<< \wt\CZ^+_{\Bm(k)} @>>> \wt\SY^+_{\Bm(k)} \\ @VVV @VVV @VV\vf_0V \\ \Fg_{1,\rg} \times Y'^0_k @<<< \CH_{\Bm(k), \sr} @>>> \CG_{\Bm(k), \sr} \\ @. @. @VV\z_0V \\ @. @. \SY^0_{\Bm(k)}, \end{CD} \end{equation*} where $\CH_{\Bm(k), \sr} = q\iv(\CG_{\Bm(k), \sr})$ and $\wt\CZ^+_{\Bm(k)} = \wt q\iv(\wt \SY^+_{\Bm(k)})$, and $\vf_0, \z_0$ are restrictions of $\vf, \z$, respectively. The following result can be proved in a similar way as [Sh, (8.2.4)]. \par\medskip \noindent (9.2.2) \ The map $\z_0$ gives an isomorphism $\CG_{\Bm(k), \sr} \simeq \SY^0_{\Bm(k)}$. \par\medskip Take $\r \in (W\nat_{\Bm})\wg$. Then by (8.10.2), there exists an integer $k$ and $\r_0 \in S_{\Bm(k)}\wg$ such that $\r = \r_0\nat$. Thus $K_{\r}$ in (8.8.1) is given by $K_{\r} = \IC(\SX_{\Bm(k)}, \SL_{\r_0})[d_{\Bm(k)}]$, where $\SL_{\r_0}$ is a simple local system on $\SY^0_{\Bm(k)}$. By (9.2.2), we can regard $\SL_{\r_0}$ as a simple local system on $\CG_{\Bm(k), \sr}$. We put $A_{\r} = \IC(\ol\CG_{\Bm(k),\sr}, \SL_{\r_0})[d_{\Bm(k)}]$. Then $A_{\r}$ is an $H$-equivariant simple perverse sheaf on $\ol\CG_{\Bm(k),\sr}$, which we regard as a perverse sheaf on $\CG_{\Bm}$ by extension by zero. \par We show the following result. \begin{prop} Under the setting in 9.2, we have \begin{enumerate} \item $\vf_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf, equipped with $W\nat_{\Bm}$-action, and is decomposed as \begin{equation*} \tag{9.3.1} \vf_!\Ql[d_{\Bm}] \simeq \bigoplus_{\r \in (W\nat_{\Bm})\wg}\r \otimes A_{\r}.
\end{equation*} \item $\z_!A_{\r} \simeq K_{\r}$. \end{enumerate} \end{prop} \begin{proof} By (5.14.1), we can write as \begin{equation*} \tag{9.3.2} (\pi^1)_!\Ql[\dim \Fg_1] \simeq \bigoplus_{\r_1 \in S_{m_1}\wg} \r_1 \otimes A_{\r_1} \end{equation*} where $A_{\r_1} = \IC(\Fg_1, \SL^1_{\r_1})[\dim \Fg_1]$ is a simple perverse sheaf on $\Fg_1$. On the other hand, by Theorem 5.7, we can write as \begin{equation*} \tag{9.3.3} (\pi^2)_!\Ql[\dim \Fh_2]\simeq \bigoplus_{\r' \in W_{m_2, 2}\wg} \r'\otimes A_{\r'}, \end{equation*} where $A_{\r'} = \IC(X'_k, \SL^2_{\r_2})[d_k]$ for some $k$ and $\r_2 \in (S_k \times S_{m_2-k})\wg$ such that $\r' = \wh\r_2$, which is a simple perverse sheaf on $X'$. By applying a similar argument as in 6.14 to the diagram (9.1.3), together with (9.1.4), one can find an $H$-equivariant simple perverse sheaf $\wt A_{\r}$ on $\CG_{\Bm}$ such that \begin{equation*} \tag{9.3.4} q^*\wt A_{\r}[\b_2] \simeq \s^*(A_{\r_1} \boxtimes A_{\r'})[\b_1], \end{equation*} where $\b_1 = \dim H + m_1$, and $\b_2 = \dim G_1 + \dim H_2$. Moreover, $\r \in (W\nat_{\Bm})\wg$ is given by $\r = \r_1 \boxtimes \r' \in (S_{m_1}\times W_{m_2,2})\wg$. By using a similar argument as in [Sh, 8.2], based on the diagram (9.2.1), one can show that the restriction of $\wt A_{\r}$ on $\CG_{\Bm(k),\sr}$ coincides with $\SL_{\r_0}$. Hence we have $\wt A_{\r} = A_{\r}$. \par Put $K_1 = (\pi^1)_!\Ql[\dim \Fg_1], K_2 = (\pi^2)_!\Ql[\dim \Fh_2]$, and also put $K = \vf_!\Ql[d_{\Bm}]$. Since both squares in (9.1.3) are cartesian, we have \begin{equation*} q^*K[\b_2] \simeq \s^*(K_1\boxtimes K_2)[\b_1]. \end{equation*} In particular, $K$ is a semisimple perverse sheaf. It follows from the discussion based on the diagram (9.2.1) that $K$ has a natural action of $W\nat_{\Bm}$. Then by using (9.3.2), (9.3.3) and (9.3.4), we obtain (9.3.1). This proves (i). \par Next we show (ii). Since $\z$ is proper, $\z_!A_{\r}$ is a semisimple complex on $\SX_{\Bm}$.
Since $\z_!K = \pi^{(\Bm)}_!\Ql[d_{\Bm}]$ is a semisimple perverse sheaf, $\z_!A_{\r}$ is also a semisimple perverse sheaf by (i). By applying $\z_!$ on both sides of (9.3.1), we have \begin{equation*} \tag{9.3.5} \pi^{(\Bm)}_!\Ql[d_{\Bm}] \simeq \bigoplus_{\r \in (W\nat_{\Bm})\wg} \r \otimes \z_!A_{\r}. \end{equation*} By using the diagram (9.2.1), one can show that the $W\nat_{\Bm}$-module structure of $\pi^{(\Bm)}_!\Ql[d_{\Bm}]$ induced from $\z_!$ coincides with the $W\nat_{\Bm}$-structure given in the formula (8.8.1). Thus by comparing (9.3.5) with (8.8.1), we obtain (ii). The proposition is proved. \end{proof} \para{9.4.} For each $\Bm \in \SQ^0_{n,3}$, put $\CG_{\Bm, \nilp} = \z\iv(\SX_{\Bm, \nilp})$. Then the map $\pi_1^{(\Bm)}$ is factored as \begin{equation*} \begin{CD} \pi_1^{(\Bm)} : \wt\SX_{\Bm,\nilp} @>\vf_1>> \CG_{\Bm, \nilp} @>\z_1>> \SX_{\Bm,\nilp}, \end{CD} \end{equation*} where $\vf_1, \z_1$ are restrictions of $\vf, \z$. Since $\pi^{(\Bm)}$ is surjective, $\vf_1$ is surjective. Put $\CH_{\Bm, \nilp} = q\iv(\CG_{\Bm,\nilp})$. The inclusion map $\CG_{\Bm,\nilp} \hra \CG_{\Bm}$ is compatible with the diagram (9.1.3), and we have a commutative diagram \begin{equation*} \tag{9.4.1} \begin{CD} \Fg_1 \times X' @<\s<< \CH_{\Bm} @>q>> \CG_{\Bm} \\ @AAA @AAA @AAA \\ (\Fg_1)\nil \times X'\nil @<\s_1<< \CH_{\Bm,\nilp} @>q_1>> \CG_{\Bm,\nilp}, \end{CD} \end{equation*} \par\medskip\noindent where $\s_1, q_1$ are restrictions of $\s, q$, respectively, and vertical maps are natural inclusions. A similar property as (9.1.4) still holds for $\s_1, q_1$, and both squares are cartesian squares. \par For each $\Bla \in \SP(\Bm(k))$, we define a subset $\CG_{\Bla}$ of $\CG_{\Bm,\nilp}$ as follows. Write $\Bla = (\la^{(1)}, \la^{(2)}, \la^{(3)})$, where $|\la^{(1)}| = m_1, |\la^{(2)}| = k, |\la^{(3)}| = m_2 - k$. 
Let $\SO'_1 = \SO_{\la^{(1)}}$ be the $G_1$-orbit in $(\Fg_1)\nil$ and $\SO_2 = \SO_{(\la^{(2)},\la^{(3)})}$ be the $H_2$-orbit in $X'\nil = (\Fh_2)\nil$ (see the notation in 7.2). Put $\CG_{\Bla} = q_1(\s_1\iv(\SO'_1 \times \SO_2))$. Then $\CG_{\Bla}$ is an $H$-stable, irreducible smooth subvariety of $\CG_{\Bm, \nilp}$. \par Let $\ol\CG_{\Bla}$ be the closure of $\CG_{\Bla}$ in $\CG_{\Bm,\nilp}$. Recall the map $\pi_{\Bla} : \wt X_{\Bla} \to \ol X_{\Bla}$ defined in 7.4, and let $\wt X^0_{\Bla}$ be as in 7.11. It follows from the construction that $\wt X_{\Bla}$ is a closed subset of $\CG_{\Bm,\nilp}$. We show a lemma. \begin{lem} \begin{enumerate} \item $\ol\CG_{\Bla}$ coincides with $\wt X_{\Bla}$. In particular, $\wt X^0_{\Bla}$ is an open dense subset of $\ol\CG_{\Bla}$. \item $\z_1(\ol\CG_{\Bla}) = \ol X_{\Bla}$, and $\z_1\iv(X^0_{\Bla}) = \wt X^0_{\Bla}$. Hence the restriction of $\z_1$ on $\z_1\iv(X^0_{\Bla})$ gives an isomorphism $\z_1\iv(X^0_{\Bla}) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, X^0_{\Bla}$. \end{enumerate} \end{lem} \begin{proof} It follows from the construction that $\wt X^0_{\Bla} \subset \CG_{\Bla}$. Hence $\wt X_{\Bla} \subset \ol\CG_{\Bla}$. By Lemma 7.5, $\dim \wt X_{\Bla} = 2\dim U_P + \dim \SO_1 + \dim \SO_2$. On the other hand, \begin{align*} \dim \CG_{\Bla} &= \dim \SO'_1 + \dim \SO_2 + \b_1 - \b_2 \\ &= 2\dim U_P + \dim \SO_1 + \dim \SO_2. \end{align*} (Here $\b_1, \b_2$ are as in (9.3.4), and $\dim \SO_1 = \dim \SO_1' + m_1$ by (7.3.1).) Since $\wt X_{\Bla}$ is irreducible and closed, we have $\wt X_{\Bla} = \ol\CG_{\Bla}$. This proves (i). Then the restriction of $\z_1$ on $\ol\CG_{\Bla}$ coincides with the map $\pi_{\Bla} : \wt X_{\Bla} \to \ol X_{\Bla}$. Thus (ii) follows from Lemma 7.12. The lemma is proved. \end{proof} \para{9.6.} Recall the set $\wt\SP(\Bm)$ in (7.7.1) for each $\Bm \in \SQ^0_{n,3}$. By (8.10.2), the set $(W\nat_{\Bm})\wg$ is parametrized by $\wt\SP(\Bm)$. 
We denote by $\r_{\Bla}\nat$ the irreducible representation of $W\nat_{\Bm}$ corresponding to $\Bla \in \wt\SP(\Bm)$. On the other hand, we denote by $\wh\r_{\Bla}$ the irreducible representation of $W_{n,3}$ belonging to $(W\wg_{n,3})_{\Bm}$ under the correspondence (8.10.3). By (8.10.1), we have a parametrization \begin{equation*} W\wg_{n,3} \simeq \coprod_{\Bm \in \SQ^0_{n,3}}\wt\SP(\Bm). \end{equation*} The following result determines the Springer correspondence explicitly (compare with [Sh, Thm. 8.7]). \begin{thm} Assume that $\Bm \in \SQ^0_{n,3}$. \begin{enumerate} \item Let $L_{\r}$ be as in Theorem 8.9. Assume that $\r = \r\nat_{\Bla} \in (W\nat_{\Bm})\wg$ for $\Bla \in \wt\SP(\Bm)$. Then we have \begin{equation*} \tag{9.7.1} L_{\r} \simeq \IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla}]. \end{equation*} \item $($Springer correspondence for $W\nat_{\Bm}$$)$ \begin{equation*} (\pi^{(\Bm)}_1)_!\Ql[d'_{\Bm}] \simeq \bigoplus_{\Bla \in \wt\SP(\Bm)} \r\nat_{\Bla}\otimes \IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla}]. \end{equation*} \item $($Springer correspondence for $W_{n,3}$$)$ \begin{equation*} (\ol\pi_{\Bm,1})_!\Ql[d'_{\Bm}] \simeq \bigoplus_{\Bla \in \wt\SP(\Bm)} \wh\r_{\Bla} \otimes \IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla}]. \end{equation*} \end{enumerate} \end{thm} \begin{proof} By Proposition 9.3, we know that $\z_!A_{\r} = K_{\r}$ under the notation in 9.2. Hence by the base change theorem, $(\z_1)_!(A_{\r}|_{\CG_{\Bm,\nilp}}) \simeq K_{\r}|_{\SX_{\Bm,\nilp}}$. For $\Bla \in \SP(\Bm(k))$, put $\r = \r\nat_{\Bla}$. We define a simple perverse sheaf $B_{\Bla}$ on $\CG_{\Bm,\nilp}$ as follows. Let $\SO_1', \SO_2$ be as in 9.4. Put $B_{\r_1} = \IC(\ol\SO_1', \Ql)[\dim \SO_1']$ for $\r_1 = \r_{\la^{(1)}} \in S_{m_1}\wg$, and $B_{\r'} = \IC(\ol\SO_2, \Ql)[\dim \SO_2]$ for $\r' = \wh\r_2 \in W_{m_2,2}\wg$ with $\r_2 \in (S_k \times S_{m_2-k})\wg$.
By a similar construction as in the proof of Proposition 9.3, there exists a unique simple perverse sheaf $B_{\Bla}$ on $\CG_{\Bm,\nilp}$ satisfying the relation \begin{equation*} \tag{9.7.2} q_1^*B_{\Bla}[\b_2] \simeq \s_1^*(B_{\r_1}\boxtimes B_{\r'})[\b_1]. \end{equation*} We know that $A_{\r_1}|_{(\Fg_1)\nil} \simeq B_{\r_1}$, up to shift. On the other hand, by Corollary 5.20, we have $A_{\r'}|_{(\Fh_2)\nil} \simeq B_{\r'}$, up to shift. Thus by comparing (9.7.2) and (9.3.4), we see that the restriction of $A_{\r}$ on $\CG_{\Bm,\nilp}$ coincides with $B_{\Bla}$, up to shift. Also by (9.7.2), the restriction of $B_{\Bla}$ on $\CG_{\Bla}$ is a constant sheaf $\Ql$. In particular, $\supp B_{\Bla} = \ol\CG_{\Bla}$. By Lemma 9.5, the support of $(\z_1)_!B_{\Bla}$ coincides with $\ol X_{\Bla}$. By Theorem 8.9, we know that the restriction of $K_{\r}$ on $\SX_{\Bm,\nilp}$ is a simple perverse sheaf $L_{\r}$. Hence in order to show (9.7.1), it is enough to see that $L_{\r}|_{X^0_{\Bla}}$ is a constant sheaf $\Ql$. By Lemma 9.5 (ii), $\z_1\iv(X_{\Bla}^0) = \wt X^0_{\Bla} \subset \CG_{\Bla}$, and $\z_1\iv(X_{\Bla}^0) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, X_{\Bla}^0$. It follows that $(\z_1)_!B_{\Bla}|_{X_{\Bla}^0}$ coincides with $\Ql$, up to shift. This proves (9.7.1). (ii) and (iii) then follow from Theorem 8.9 and Corollary 8.11. The theorem is proved. \end{proof} \para{9.8.} For each $z \in \SX_{\Bm, \nilp}$, we consider the Springer fibres $\SB_z = \pi\iv(z)$ and $\SB^{(\Bm)}_z = (\pi^{(\Bm)})\iv(z)$ as in 7.13. We have $\SB_z^{(\Bm)} \subset \SB_z$. The cohomology group $H^i(\SB^{(\Bm)}_z,\Ql)$ has a structure of $W\nat_{\Bm}$-module, and $H^i(\SB_z, \Ql)$ has a structure of $W_{n,3}$-module. For $\Bla \in \wt\SP(\Bm)$, put \begin{equation*} \tag{9.8.1} d_{\Bla} = \frac{1}{2}(\dim \SX_{\Bm,\nilp} - \dim X_{\Bla}). \end{equation*} We have a lemma. \begin{lem} Assume that $\Bla \in \wt\SP(\Bm)$.
\begin{enumerate} \item For any $z \in X_{\Bla}$, $\dim \SB_z^{(\Bm)} \ge d_{\Bla}$. The set of $z \in X_{\Bla}$ such that $\dim \SB_z^{(\Bm)} = d_{\Bla}$ forms an open dense subset of $X_{\Bla}$. \item For any $z \in X_{\Bla}$, $H^{2d_{\Bla}}(\SB_z^{(\Bm)},\Ql)$ contains an irreducible $W\nat_{\Bm}$-module $\r\nat_{\Bla}$. \end{enumerate} \end{lem} \begin{proof} First we show (ii). For any $z \in \SX_{\Bm,\nilp}$, Theorem 9.7 (ii) implies that \begin{equation*} \tag{9.9.1} H^i(\SB_z^{(\Bm)},\Ql) \simeq \bigoplus_{\Bmu \in \wt\SP(\Bm)} \r\nat_{\Bmu}\otimes \SH_z^{i - d'_{\Bm} + \dim X_{\Bmu}}\IC(\ol X_{\Bmu}, \Ql) \end{equation*} as $W\nat_{\Bm}$-modules. Assume that $z \in X_{\Bla}$ and put $i = 2d_{\Bla}$. Since $\SH^0\IC(\ol X_{\Bla}, \Ql) = \Ql$, $H^{2d_{\Bla}}(\SB_z^{(\Bm)},\Ql)$ contains $\r\nat_{\Bla}$. This proves (ii). \par (ii) implies, in particular, that $\dim \SB_z^{(\Bm)} \ge d_{\Bla}$. Put $d = \dim (\pi^{(\Bm)})\iv(X_{\Bla}) - \dim X_{\Bla}$. Let $X(d)$ be as in (7.13.1). Then $X(d) \cap X_{\Bla}$ is open dense in $X_{\Bla}$. Hence $\dim X_{\Bla} \le \dim X(d)$. By Lemma 7.14 (iii), we have, for any $z \in X(d) \cap X_{\Bla}$, \begin{equation*} \dim \SB_z^{(\Bm)} \le \frac{1}{2}(\dim \SX_{\Bm,\nilp} - \dim X(d)) \le \frac{1}{2}(\dim \SX_{\Bm,\nilp} - \dim X_{\Bla}) = d_{\Bla}. \end{equation*} Hence $\dim \SB^{(\Bm)}_z = d_{\Bla}$ and $d = d_{\Bla}$. This proves (i). \end{proof} We show the following result (compare with [Sh, Prop. 8.16]). \begin{prop} Take $z \in X^0_{\Bla}$, and assume that $\Bla \in \wt\SP(\Bm)$. \begin{enumerate} \item $\dim \SB_z^{(\Bm)} = d_{\Bla}$, and $H^{2d_{\Bla}}(\SB_z^{(\Bm)},\Ql) \simeq \r\nat_{\Bla}$ as $W\nat_{\Bm}$-modules. \item $\dim \SB_z = d_{\Bla}$, and $H^{2d_{\Bla}}(\SB_z, \Ql) \simeq \wh\r_{\Bla}$ as $W_{n,3}$-modules.
Hence the map $z \mapsto H^{2d_{\Bla}}(\SB_z, \Ql)$ gives a canonical bijection \begin{equation*} \{ X^0_{\Bla} \mid \Bla \in \SP_{n,3}\} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, W_{n,3}\wg. \end{equation*} \end{enumerate} \end{prop} \begin{proof} We prove (i). We consider the diagram as in (9.1.3), restricted to the nilpotent variety as in 9.4. Write $\Bla = (\la^{(1)}, \Bla')$ with $\Bla' = (\la^{(2)}, \la^{(3)})$. Put \begin{align*} d_{\la^{(1)}} &= (\dim (\Fg_1)\nil - \dim \SO_1')/2, \\ d_{\Bla'} &= (\dim X'\nil - \dim \SO_2)/2. \end{align*} We note that \begin{equation*} \tag{9.10.1} d_{\Bla} = d_{\la^{(1)}} + d_{\Bla'}. \end{equation*} In fact, by Proposition 7.8 and Lemma 7.5, \begin{align*} d_{\Bla} &= ((2n^2 + m_1) - (2\dim U_P + \dim \SO_1 + \dim \SO_2))/2 \\ &= (\dim G_1 + \dim H_2 - n - \dim \SO_1' - \dim \SO_2)/2 \\ &= (\dim (\Fg_1)\nil + \dim (\Fh_2)\nil - \dim \SO_1' - \dim \SO_2)/2. \end{align*} Thus (9.10.1) holds. \par Take $z \in X^0_{\Bla}$. Assume that $\Bla \in \wt\SP(\Bm)$. By Lemma 9.5 (ii), $\z_1$ gives an isomorphism $\z_1\iv(X^0_{\Bla}) \to X^0_{\Bla}$. Hence there exists a unique $z_* \in \z_1\iv(X^0_{\Bla})$ such that $\z_1(z_*) = z$. Since $\z_1\iv(X_{\Bla}^0) \subset \CG_{\Bla}$, by using the diagram (9.1.3) and its restriction on the nilpotent variety (9.4.1), one can find $(x_1, x_2) \in \SO_1' \times \SO_2$ such that $\s_1\iv(x_1,x_2) = q_1\iv(z_*)$. Note that $\dim \SB^1_{x_1} = d_{\la^{(1)}}, \dim \SB^2_{x_2} = d_{\Bla'}$, where $\SB^1$ is the flag variety for $G_1$ and $\SB^2$ is the flag variety for $H_2$. Thus by (9.1.3) together with (9.10.1), we have \begin{equation*} \tag{9.10.2} \bigl(R^{2d_{\la^{(1)}}}\pi^1_!\Ql\bigr)_{x_1} \otimes \bigl(R^{2d_{\Bla'}}\pi^2_!\Ql\bigr)_{x_2} \simeq \bigl(R^{2d_{\Bla}}\wt\vf_!\Ql\bigr)_{\xi} \simeq \bigl(R^{2d_{\Bla}}(\vf_1)_!\Ql\bigr)_{z_*}, \end{equation*} where $\xi$ is an element in $\s_1\iv(x_1,x_2) = q_1\iv(z_*)$. 
Since $\z_1\iv(X^0_{\Bla}) \simeq X^0_{\Bla}$, we have \begin{equation*} \tag{9.10.3} H^{2d_{\Bla}}(\SB^{(\Bm)}_z,\Ql) \simeq \bigl(R^{2d_{\Bla}}(\pi_1^{(\Bm)})_!\Ql\bigr)_z \simeq \bigl(R^{2d_{\Bla}}(\vf_1)_!\Ql\bigr)_{z_*}. \end{equation*} We already know, from the Springer correspondence for $\Fg_1$ and $\Fh_2$, \begin{align*} \dim \bigl(R^{2d_{\la^{(1)}}}\pi^1_!\Ql\bigr)_{x_1} &= \dim H^{2d_{\la^{(1)}}}(\SB^1_{x_1},\Ql) = \dim \r_{\la^{(1)}}, \\ \dim \bigl(R^{2d_{\Bla'}}\pi^2_!\Ql\bigr)_{x_2} &= \dim H^{2d_{\Bla'}}(\SB^2_{x_2},\Ql) = \dim \wh\r_{\Bla'}. \end{align*} Then (9.10.2) and (9.10.3) show that $\dim H^{2d_{\Bla}}(\SB^{(\Bm)}_z, \Ql) = \dim \r_{\Bla}\nat$. By Lemma 9.9 (ii), $H^{2d_{\Bla}}(\SB^{(\Bm)}_z,\Ql)$ contains $\r\nat_{\Bla}$. It follows that $H^{2d_{\Bla}}(\SB^{(\Bm)}_z,\Ql) \simeq \r\nat_{\Bla}$ as $W\nat_{\Bm}$-modules. (9.10.2) also shows that $\dim \SB^{(\Bm)}_z = d_{\Bla}$. This proves (i). \par Next we show (ii). We consider the decomposition of $H^i(\SB^{(\Bm)}_z,\Ql)$ in (9.9.1). By Theorem 9.7 (iii), we have a similar decomposition \begin{equation*} \tag{9.10.4} H^i(\SB_z, \Ql) \simeq \bigoplus_{\Bmu \in \wt\SP(\Bm)} \wh\r_{\Bmu} \otimes \SH_z^{i - d'_{\Bm} + \dim X_{\Bmu}}\IC(\ol X_{\Bmu},\Ql). \end{equation*} Since $H^i(\SB^{(\Bm)}_z,\Ql) = 0$ for $i > 2d_{\Bla}$, (9.9.1) implies that $\SH_z^{i-d'_{\Bm} + \dim X_{\Bmu}}\IC(\ol X_{\Bmu},\Ql) = 0$ for any choice of $\Bmu \in \wt\SP(\Bm)$ and of $i > 2d_{\Bla}$. It follows, by (9.10.4), that $H^i(\SB_z,\Ql) = 0$ for $i > 2d_{\Bla}$. Since $\SB^{(\Bm)}_z \subset \SB_z$, $\dim \SB_z \ge \dim \SB_z^{(\Bm)} = d_{\Bla}$. Hence $\dim \SB_z = d_{\Bla}$. By (i) and (9.9.1), we see that $\SH_z^{2d_{\Bla} - d'_{\Bm} + \dim X_{\Bmu}}\IC(\ol X_{\Bmu},\Ql) = 0$ for any $\Bmu \ne \Bla$, and is equal to $\Ql$ for $\Bmu = \Bla$. Hence by (9.10.4), we have $H^{2d_{\Bla}}(\SB_z, \Ql) \simeq \wh\r_{\Bla}$ as $W_{n,3}$-modules. This proves (ii). The proposition is proved. \end{proof} \par\bigskip
\section{Introduction} Ranking systems are ubiquitous across both online marketplaces (e-commerce, gig-economy, multimedia) and other socio-technical systems (admissions or labor platforms), shaping which products are bought, who is hired, and what media is consumed. In many of these systems, ranking algorithms form a core aspect of how a large search space is made manageable for \textit{consumers} (employers, buyers, admissions officers, etc.). In turn, these algorithms are consequential to the \textit{providers} (sellers, workers, job seekers, content creators, media houses, etc.) who are being ranked. Much of the initial work on such ranking, recommendation, or retrieval systems (RS\footnote{While we often use ``RS'' or ranking systems as shorthand, in this work we often mean ranking, recommendation, retrieval, and constrained allocation algorithmic systems more broadly -- systems that select (and potentially order) a subset of providers from a larger available set.}) focused on learning to maximize \textit{relevance}---often measured through proxies like click-through rate---showing the most relevant items to the consumer, based solely on the consumer's objective \cite{liu2011learning,adomavicius2005toward}. However, like all machine learning techniques, such systems have been found to `unfairly' favor or discriminate against certain individuals or groups of individuals in various scenarios \cite{ekstrand2018all,BaezaYates2018,chen2020bias}. Thus, as part of the burgeoning algorithmic fairness literature \cite{mehrabi2019survey,Mitchell2021}, there have recently been many works on fairness in ranking, recommendation, and constrained allocation more broadly \cite{burke2017multisided,zehlike2017fa, zehlike2022fair, geyik2019fairness, celis2018ranking, asudeh2019designing,singh2018fairness, biega2018equity, surer2018multistakeholder,guo2021stereotyping,cai2020fair}.
For example, suppose that the platform is deciding how to rank 10 items on a product search result page, and each item has demographic characteristics (such as those of the seller). Then---in addition to considering each item's relevance---how should the platform rank the items, in a manner that is ``fair'' to the providers, either on an individual or group level? This question is often considered on an abstract level, independent of the specific ranking context; moreover, the literature primarily focuses on fairness of one instance of the ranking \cite{zehlike2017fa, zehlike2020reducing, zehlike2022fair, singh2018fairness}, or multiple independent instances of rankings with an additive objective across instances \cite{biega2018equity, suhr2019two}. The goals of this paper are to synthesize the current state of the fair ranking and recommendation field, and to lay the agenda for future work. In line with recent papers \cite{Jannach2020,Selbst2018} on both broader fairness and recommendation systems, our view is that the fair ranking literature risks being ineffective for problems faced in real-world ranking and recommendation settings if it focuses too narrowly on abstract, static ranking settings. To combat this trend, we identify several pitfalls that have been overlooked in the literature and that should be considered in context-specific ways: toward a broader, long-term view of the fairness implications of a particular ranking system.
Like much of the algorithmic fairness literature, fair ranking mechanisms are typically designed by abstracting away contextual specifics, under a ``reducibility'' assumption: many fair ranking problems of interest are assumed reducible to a standard problem of ranking a set of items or individuals subject to a chosen notion of fairness, or optimizing a suitable fairness measure (possibly over multiple instances of such rankings over time, with simple additive extensions). However, as \citet{Selbst2018} elucidate, the abstractions necessary for such a reduction often ``render technical interventions ineffective, inaccurate, and sometimes dangerously misguided.'' \begin{figure}[t!] \center{ \includegraphics[width=1\textwidth]{Arxiv_block_diagram.pdf}} \caption{This figure summarizes our position on the field of fairness in retrieval systems: current fair RS mechanisms often fail to recognize several real-world nuances like delayed impacts, uncertainties in outcomes, and ecosystem behaviour (discussed in \Cref{sec:pitfalls}); thus we must design fairness interventions with an impact-oriented approach and a holistic, long-term view of RS in mind. In \Cref{sec:long_term_fairness}, we discuss how algorithmic impact assessment can be helpful in this regard. More specifically, in \Cref{subsec:simulations} we overview various applied modeling techniques and simulation frameworks which in tandem can be used for impact-oriented studies of fairness in RS. Following this, in \Cref{subsec:data_bottlenecks,subsec:legal_bottlenecks} we briefly discuss various data bottlenecks and legal hurdles which might challenge the efforts towards a holistic view of RS fairness.} \label{fig:block_diagram} \end{figure} \textbf{Overview and Contributions}.
In this work, we outline the ways in which such a reduction abstracts away important aspects of the fair ranking context: the gap between position-based metrics and true provider utility, spillovers from one ranking to another across time and products, strategic incentives induced by the system, and the (differential) consequences of ranking noise. Studying fair ranking questions in such a reduced format and ignoring these issues might work in the ideal environment chosen during the problem reduction, but is likely insufficient to bring fairness to a real-world ranking system. For example, a ranking algorithm that does not consider how relevance or consumer discrimination affects outcomes, or how early popularity leads to compounding rewards on many platforms, is unlikely to achieve its fairness desiderata; furthermore, ignoring strategic manipulation (such as Sybil attacks where a provider creates multiple copies of their profile or items) may lead to fairness mechanisms amplifying rather than mitigating inequities on the platform. We believe that these aspects must be tackled by the fair ranking literature, in order for this literature to positively affect practice. We then overview methodological paths forward to incorporate these aspects into fair ranking research, as part of a broader long-term framework of algorithmic impact assessments---simulations, applied modeling, and data-driven approaches---along with their challenges. Finally, we conclude with a discussion on the broader regulatory, legal, and external audit landscape necessary to translate the fair ranking literature into systems in practice. \Cref{fig:block_diagram} summarizes our paper at a high level. \textbf{Outline.} \Cref{sec:ranking_n_fairness} contains an overview of the fair RS literature. \Cref{sec:pitfalls} presents the aspects of ranking systems that we believe should be most covered by future fair RS work.
\Cref{sec:long_term_fairness} contains the discussion of the paths forward within the broader data and regulatory landscape. \section{Overview of Fair Ranking Literature}\label{sec:ranking_n_fairness} Designing effective ranking, recommendation, or retrieval systems (RSs) requires tackling many of the same challenges as building general machine learning algorithms---with additional challenges stemming from the characteristic that such systems make \textit{comparative} judgments across items; a high position in the ranking is a constrained resource. RSs often employ machine-learned models to estimate the {\it relevance} (or {\it probability of relevance}) of the items to any search or recommendation query \cite{liu2011learning,adomavicius2005toward}. Historically, while user utility is the broader objective \cite{pu2011user}, the most popular guiding principle is the {\it Probability Ranking Principle} \cite{robertson1977probability}: items are ranked in descending order of their probability of being relevant to the user, often estimated through click-through rates. For a broad range of user utility metrics---such as mean average precision \cite{voorhees2000variations}, mean reciprocal rank \cite{voorhees1999trec}, and cumulative gain based metrics \cite{jarvelin2002cumulated,jarvelin2017ir}---this principle in turn maximizes the expected utility of users \cite{jarvelin2017ir}. However, not only are more (estimated to be) relevant items typically ranked higher, but also users tend to click more on higher positioned items, even conditioned on relevance. Such a {\it position bias} \cite{craswell2008experimental} means that expected attention (\textit{exposure}) from users decreases significantly while moving from the top rank to the bottom one; for example, users may evaluate items sequentially from the top rank, until they find a satisfactory one.
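The consequence of position bias can be made concrete with a small sketch. Below we use the common logarithmic discount as the position-bias model (an illustrative assumption; real systems estimate these weights empirically) to show how a tiny gap in estimated relevance turns into a large gap in expected exposure:

```python
import math

def exposure(rank: int) -> float:
    # DCG-style logarithmic position-bias model (illustrative assumption)
    return 1.0 / math.log2(rank + 1)

# Two items with nearly identical estimated relevance
relevances = {"item_a": 0.81, "item_b": 0.80}
ranking = sorted(relevances, key=relevances.get, reverse=True)

for pos, item in enumerate(ranking, start=1):
    print(item, round(exposure(pos), 3))
# exposure(1) = 1.0 while exposure(2) ≈ 0.631
```

Under this assumed discount, the second-ranked item receives only about 63% of the attention of the first, so a 1% relevance gap produces a roughly 37% exposure gap.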
It is thus important for providers to be ranked highly; a small difference in relevance estimation could result in a large difference in expected user attention (for example, see Appendix \Cref{tab:position_bias}). Depending on the ranking context, e.g., ranking products vs. ranking job candidates, high ranking positions directly translate to rewards, or at least increase their likelihood. (However, as we explain in the next section, the gap between exposure and true provider utility is an important one to understand.) \textbf{Fairness in Rankings.} Due to the importance of rankings for providers,\footnote{Note that despite the recent explorations into multi-sided fairness in online platforms \cite{burke2017multisided,patro2020fairrec,suhr2019two}, we restrict our discussion to provider fairness which has been studied quite extensively.} and as part of the increased focus on machine learning injustices, there has been much recent interest in fairness and equity for providers rather than just ranking utility for consumers. There are numerous definitions, criteria, and evaluation metrics to estimate a system's ability to be \textit{fair} \cite{corbett2018measure,mehrabi2019survey,Mitchell2021,ekstrand2019fairness,distributive,castillo2019fairness,yao2017beyond}. Given heterogeneous settings, the complex environment in which retrieval systems are developed, and the multitude of stakeholders involved that may have differing moral goals \cite{finocchiaro2021bridging} and worldviews \cite{Friedler2021}, there is obviously no universal fairness definition; at a high level, however, many definitions can be classified by whether the objective is to treat similar individuals similarly (\textit{individual fairness}) \cite{dwork2012fairness}, or whether different groups of individuals, defined by certain characteristics such as demographics, should be treated in a similar manner (\textit{group fairness}) \cite{speicher2018unified}.
In the following, we overview the concepts and works most relevant for our critiques and the agenda that we advocate. Fairness notions from the domain of classification can---to a certain extent---be adapted to serve in ranking settings. They typically only require additional consideration of the comparative nature of rankings and of how utility is modeled \cite{castillo2019fairness}. Compared to relevance-only ranking, adding fairness considerations often leads to a multi-objective (or constrained) optimization problem, where the usual utility (or relevance) objective comes along with a fairness constraint or objective focused on the providers \cite{Ribeiro2013,xiao2017fairness}. One branch of the literature \cite{zehlike2017fa, zehlike2022fair, geyik2019fairness, celis2018ranking, asudeh2019designing} reasons about probability-based fairness in the top-$k$ ranking positions, which puts the focus onto group fairness. These works commonly provide a minimum (and in some cases also a maximum) number or proportion of items/individuals from a protected group, to be distributed evenly across the ranking. The methods do not usually allow later compensation if the fairness constraints are not met at any of the top-$k$ positions (e.g., by putting more protected than non-protected items in lower positions). Another set of works \cite{singh2018fairness, biega2018equity, surer2018multistakeholder,diaz2020evaluating, zehlike2020reducing} assigns values (often referred to as {\it attention} or {\it exposure} scores) to each ranking position based on the expected user attention or click probability. These works argue that the total exposure is a limited resource on any platform (due to position bias), and advocate for fair distribution of exposure to ensure fairness for the providers.
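A minimal sketch of the amortized exposure-balancing idea behind this line of work (a simplification in the spirit of equity-of-attention mechanisms, not the exact algorithm of any cited paper): at each round, rank items by their current `exposure deficit', so that items under-exposed relative to their accumulated relevance are compensated in later rankings.

```python
import math

def position_weight(rank: int) -> float:
    # Illustrative logarithmic position-bias model (an assumption)
    return 1.0 / math.log2(rank + 1)

def amortized_fair_ranking(relevance, cum_exposure, cum_relevance):
    """Greedy sketch of amortized exposure balancing: rank items by their
    current exposure deficit (accumulated relevance minus accumulated
    exposure), then update the accumulators for the produced ranking."""
    deficit = {i: cum_relevance[i] - cum_exposure[i] for i in relevance}
    ranking = sorted(relevance, key=lambda i: deficit[i], reverse=True)
    for pos, item in enumerate(ranking, start=1):
        cum_exposure[item] += position_weight(pos)
        cum_relevance[item] += relevance[item]
    return ranking

relevance = {"a": 0.9, "b": 0.85}
cum_exp = {"a": 0.0, "b": 0.0}
cum_rel = {"a": 0.0, "b": 0.0}
rankings = [amortized_fair_ranking(relevance, cum_exp, cum_rel) for _ in range(4)]
# The top position alternates across rounds, so cumulative exposure is
# balanced instead of always favouring the slightly more relevant item.
```

Note how this contrasts with the probability-based methods above: a deficit in one ranking instance can be compensated in a later one.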
In contrast to the former line of work, using exposure as a metric to quantify provider utility has brought up not only group fairness notions~\cite{singh2018fairness,morik2020controlling}, but also definitions to enhance individual fairness~\cite{singh2018fairness,biega2018equity,bower2021individually}. Further, in contrast to probability-based methods, these methods balance the \emph{total} exposure across individuals or groups, and thus they do allow compensation in lower positions. Generally, the problem definitions in these works center around a single instance of ranking, i.e., at a particular point in time we are given a set of items or individuals, their sensitive or protected attribute(s) (e.g., race and gender), and their relevance scores; the task is to create a ranking which follows some notion of fairness (like demographic parity or equal opportunity) for the items or individuals, while maximizing the user utility. Some exceptions are \citet{biega2018equity}, \citet{suhr2019two} and \citet{surer2018multistakeholder}, which propose to deterministically ensure fairness through equity in amortized exposure, i.e., addition over time or over multiple instances of ranking. In the next section, we argue that both these broad approaches (probability-based and exposure-based) may be incomplete in many applications, due to their exclusive focus (either directly or indirectly) on ranking positions. \section{Pitfalls of existing fair ranking models}\label{sec:pitfalls} In this section, we enumerate several crucial aspects of ranking and recommendation systems that substantially influence their fairness properties, but are ignored when considering an abstract fair ranking setting. The left hand side of \Cref{fig:block_diagram} summarizes this section. We begin in \Cref{subsec:beyond_exposure} by noting that exposure (or more generally, equating higher positions with higher utility) often does not translate to provider utility.
\Cref{subsec:temporal_significance} discusses spillovers across rankings, either over time, across different rankings on the same user interface, or competition across platforms. \Cref{subsec:strategic_behavior} discusses strategic provider responses, and how they may counter-act (or worsen) the effects of a fair ranking mechanism. Finally, \Cref{subsec:uncertainty} illustrates how noise---either in demographic variables or in other aspects---may differentially affect providers within a fair ranking mechanism. Note that these issues are also present in other aspects of ranking, and in the algorithmic fairness literature more generally; in fact, we also discuss if and how such issues have been studied in related settings. However, we believe that the intersection of fairness and ranking challenges amplifies these concerns; for example, the naturally comparative aspect of rankings worsens the effects of competitive behavior and differential uncertainties. Finally, while these pitfalls may not be the only ones, we believe they are the major ones that may cause proposed fair ranking frameworks to fail to deliver fair outcomes in several real-world scenarios. In the next section (\cref{sec:long_term_fairness}), we elaborate on how to tackle these challenges. \subsection{Provider Utility beyond Position-based Exposure}\label{subsec:beyond_exposure} As discussed above, the fair ranking literature often uses \textit{exposure} as a proxy for provider utility\footnote{Note that here we are talking about the utility gained by a provider as a result of getting ranked. Thus provider utility is not the same as user utility.} \cite{ekstrand2019fairness, singh2018fairness, castillo2019fairness, zehlike2020reducing}. For example, well-known fair ranking mechanisms like {\it equity of attention} \cite{biega2018equity} and {\it fairness of exposure} \cite{singh2018fairness,zehlike2020reducing} emphasize fairly allocating exposure among providers.
Such works often implicitly assume that exposure is measured solely through a provider's position in the ranking; i.e., each position is assigned a value, independent of context. While such ranking-position-based exposure is often a useful measure of provider utility, such a focus misses context-specific factors: higher exposure does not necessarily lead to increased user attention, and increased user attention may not directly translate into provider utility, as measured through, e.g., sales or long-term satisfaction. This measurement-construct gap---between exposure as a measurement and provider utility as the construct of interest---is not a challenge unique to fairness-related questions in ranking. For example, not distinguishing between varying levels of attention from users could affect the performance of algorithms designed to maximize sales, as it would affect the predictions of algorithms using exposure to calculate sales probabilities \cite{moe2004dynamic} or information diffusion on a social network \cite{bakshy2012role}. However, this gap may be especially important to consider in a research direction that often seeks algorithmic solutions to inequities stemming from multiple causes, including the actions of other platform participants; for example, much work has analyzed (statistical or taste-based) discrimination on online platforms in which, even conditional on exposure, one type of stakeholder is treated inequitably by other stakeholders (see, e.g., racial discrimination by employers \cite{edelman2017racial,monachou2019discrimination}). In such settings, fair-exposure based algorithms may not uniformly or even substantially improve outcomes (we give an example in Appendix \Cref{tab:fair_exposure_gone_wrong}); this was recently underscored by \citet{suhr2020does}, who found through a user survey that such algorithms' effectiveness substantially depends on context such as job description and candidate profiles.
Another especially relevant contextual factor beyond position is \textit{time}: in fast moving domains like media, items may only be relevant for a short period of time \cite{campos2014time,yuan2013time}. In such scenarios, the stakeholders (both users and providers) benefit most from immediate exposure. For example, recency is an important aspect of relevance in breaking news \cite{chakraborty2017optimizing}, job candidates should be shown before vacancies are filled, and restaurants get more orders if recommended during peak hours to nearby customers \cite{yuan2013time, Banerjee2020AnalyzingM}. More broadly, one should consider \textit{which providers} are being exposed to \textit{which users} and \textit{when}, as the value of a ranking position depends substantially on such match relevance and participant characteristics. Fair ranking models focusing solely on position, and thus oblivious to such context, may not have the desired downstream effects and may fail to deliver on fairness. We illustrate this consequence in an example in Appendix \Cref{tab:temporal_significance}. \subsection{Spillover effects: compounding popularity, related items, and competition}\label{subsec:temporal_significance} While the immediate effect of an item's position in the ranking (e.g., an immediate sale) may be first-order, there are often substantial \textit{spillover} effects or \textit{externalities}, which should be incorporated in fair RS models. Here, we discuss three such effects: compounding popularity or first-exposed-advantage, spillovers across products and ranking types, and competition effects.
Perhaps the most important spillover is a \textit{compounding popularity} or \textit{first-exposed-advantage},\footnote{The phrase is used to indicate its similarity to the {\it first-mover-advantage} phenomenon \cite{kerin1992first}.} in which the exposure an item receives during its early stages can significantly affect its long-term popularity \cite{figueiredo2014dynamics}. For example, early feedback in terms of clicks, sales, etc. could improve an item's estimated relevance scores, raising its future rankings; there may further be a popularity bias or herding phenomenon in which users are more likely to select an item if they observe that others have selected it before them \cite{steck2011item,abdollahpouri2017controlling,salganik2008leading}. Similarly, as reflected in re-targeting in advertising, user preferences may change with exposure to an item. Thus, past exposure plays a huge role in determining the long-term effects of future exposure; denial of early exposure could risk the viability of small providers \cite{mladenov2020optimizing}. Though one may intuitively think that continuous re-balancing of exposure through fairness-enhancing methods may overcome (or at least reduce) this problem, this has yet to be demonstrated in the real world, and early evidence suggests otherwise (see \citet{suhr2020does}). Second, ranking systems---such as product recommendations---are rarely deployed as stand-alone services. They are often accompanied by associated services such as sponsored advertisements \cite{hillard2010improving}, similar or complementary item recommendations on individual item pages on e-commerce, media-streaming platforms and other marketplaces \cite{pazzani2007content,lai2021understanding}, non-personalized trending items \cite{cremonesi2010performance,benhardus2013streaming,platt2015international}, and other quality endorsements like editor's choice \cite{holly2012play}.
Due to the presence of these associated services, user attention reaching an item may spill over to other items \cite{liang2019spillover,raj2021friends}. For example, complementary or similar items may receive spillover exposure via `you may also be interested' or `items similar to' recommendations, potentially leading to undesirable inequalities even under a fair RS model; we give such an example in Appendix \Cref{tab:spillover_example}. Finally, there are competition and cross-platform spillover effects \cite{krijestorac2020cross,farahat2016app}: users may reach an item, not through the recommendation engine on the platform, but, e.g., via a search engine \cite{jansen2006effectiveness}, product or price comparison sites \cite{jung2014online}, or other platforms like social media \cite{hoffman2010can,saravanakumar2012social}. In these instances, the recommendation engine at the user entry-point, e.g., the search engine's recommendation system, will have a downstream effect on the exposure of items on the end site where the items are listed. These spillover effects could be important to analyze when designing potential `entry-point' recommendation systems. Perhaps more importantly---since a platform does not have control over all the off-platform systems that may influence item exposure on its own platform---one should consider how such external sources affect both the goals and the behavior of a fair RS system. In this regard, the major questions which remain understudied and unanswered at large are: should a fair RS consider the inequities induced via external systems and seek to counteract them through interventions, or should it ignore these effects for the sake of free market competition?
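The compounding-popularity (rich-get-richer) dynamic described at the start of this subsection can be made concrete with a toy simulation; all item names, qualities, and parameters below are illustrative assumptions:

```python
import random

def simulate(rounds=5000, herding=0.5, seed=0):
    """Toy rich-get-richer simulation: two items of identical intrinsic
    quality; the ranker orders items by observed clicks, users see rank 2
    half as often as rank 1 (position bias), and the click probability
    grows with an item's past share of clicks (herding)."""
    rng = random.Random(seed)
    clicks = {"a": 1, "b": 0}          # item "a" gets one early click
    quality = {"a": 0.3, "b": 0.3}     # identical intrinsic quality
    examine = [1.0, 0.5]               # position bias for ranks 1 and 2
    for _ in range(rounds):
        ranking = sorted(clicks, key=clicks.get, reverse=True)
        total = sum(clicks.values())
        for pos, item in enumerate(ranking):
            share = clicks[item] / total
            if rng.random() < examine[pos] * (quality[item] + herding * share):
                clicks[item] += 1
    return clicks

clicks = simulate()
# Despite identical quality, the early-exposed item typically locks in
# the large majority of clicks over time.
```

The point of the sketch is only qualitative: a single early click, amplified by ranking and herding, produces a persistent popularity gap that a one-shot fairness intervention does not undo.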
Together, these spillover effects suggest that fairness in RS (especially in recommendations) should not be modeled in isolation from associated and external services, and must take into account how the recommendations may have downstream consequences over time and space, for either the same provider or other providers. We note that these spillover effects are analogous to the \textit{Ripple Effect trap} as described by \citet{Selbst2018}, in which harmful effects often stem from the failure to understand how the introduction of new technologies could alter behaviours and values in existing social systems. \subsection{Strategic Behavior}\label{subsec:strategic_behavior} Current fair ranking mechanisms often fail to consider that the providers themselves could be strategic players who might try to \emph{actively} maximize their utilities \cite{tennenholtz2019rethinking,bahar2015economic}. Providers often have an incentive to suitably strategize their offerings: e.g., content creators on media platforms could leave their own area of expertise and try to copy other popular creators or follow popular trends \cite{ben2018game,ben2020content}, sellers could perform data poisoning attacks (through fake reviews, views, etc.) on the RS to improve their ranking \cite{zhang2020practical}, and influencers on social network sites could try to hijack popular trends \cite{goga2015doppelganger,chakraborty2019equality}. Providers can even strategically exploit the deployed fair ranking mechanisms to extract more benefits \cite{frobe2020effect,diincentives}. Not factoring in such strategic behavior could impact ranking and recommendation systems, and especially the performance of fair ranking mechanisms. In the following, we overview some examples of strategic behavior and their consequences. As in the measurement-construct gap between exposure and provider utility, strategic behavior as a reaction to ranking models is not just a question of fairness.
Numerous works suggest that relevance estimation models are highly vulnerable to various types of adversarial attacks: \begin{inparaenum} \item \emph{shilling attacks}, in which a provider gets associated with a group of users who then add supportive reviews, feedback, clicks, etc. to manipulate rankings in favor of the provider \cite{lam2004shilling}; \item \emph{data poisoning attacks}, where a provider strategically generates malicious data and feeds it into the system through a set of manipulated interactions \cite{li2016data,zhang2020practical}; or \item \emph{doppelganger bot attacks}, where a number of fake users or bots are created and then strategically placed in a social network to hijack news feed ranking systems in favor of the malicious party \cite{goga2015doppelganger,chakraborty2019equality,molavi2013iolaus}. \end{inparaenum} However, some strategic behavior may specifically exploit characteristics of fair ranking algorithms. For example, fair ranking mechanisms may incentivize \emph{content duplication attacks} \cite{frobe2020effect}. Strategic providers can create duplicates or near-duplicates---possibly hard to automatically identify---of their existing offerings in a ranking system. Since certain fair ranking mechanisms may try to ensure benefits for all listed items, providers with more copies of the same items stand to gain more benefits \cite{frobe2020effect,diincentives}. We give such an example in Appendix \Cref{tab:duplication_attack}. Other `undesirable' strategic behavior includes the purposeful provision or withholding of information, which may help some participants maximize their ranking; for example, in admissions settings, test-optional admissions policies that aim to be fair to students without test access may inadvertently be susceptible to strategic behavior by students with access but low test scores~\cite{liutestoptional21}.
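A stylized sketch of why duplication pays off under a per-item exposure guarantee (the equal-share rule below is a deliberate simplification for illustration, not the mechanism of any specific paper):

```python
def provider_exposure(items, total_exposure=1.0):
    """Item-level 'equal exposure' rule: every listed item receives the
    same share of the total exposure; provider-level exposure is the sum
    over that provider's items."""
    per_item = total_exposure / len(items)
    shares = {}
    for item, provider in items:
        shares[provider] = shares.get(provider, 0.0) + per_item
    return shares

honest = [("x1", "p1"), ("x2", "p2"), ("x3", "p3")]
# Provider p1 lists two near-duplicates of its item (a duplication attack)
attacked = honest + [("x1_copy1", "p1"), ("x1_copy2", "p1")]

print(provider_exposure(honest))    # p1 gets 1/3 of the exposure
print(provider_exposure(attacked))  # p1 now gets 3/5; p2 and p3 only 1/5 each
```

A per-provider (rather than per-item) accounting of exposure removes this particular incentive, but requires reliably linking items to providers, which duplicates and Sybil identities are designed to evade.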
Strategic behavior by providers need not always be malicious; rather, it could also represent a sincere effort for improvement (e.g., an effort to improve a restaurant's quality \cite{luca2016reviews}) or just a change in content offering strategy (e.g., strategic selection of topics for future content production \cite{halvorson2012content,raifer2017information}). However, such `legitimate' strategic behavior may nevertheless affect the efficacy of fair ranking mechanisms over time, as such behavior may affect the relative performance of marketplace participants. For example, \citet{vonderau2019spotify} shows that providers on various content sharing platforms may partly or completely change their content production strategy to cater to the taste of a ranking algorithm (instead of the taste of users). Studies by \citet{Chaney2018} and \citet{ben2020content} suggest that ranking mechanisms which are unaware of such behavior could cause homogenization of a platform's item-space and degrade user utility over time; such behavior could also risk the long-term viability and welfare of small-scale providers \cite{mladenov2020optimizing}. Theoretically, \citet{liu2021strategic} extend the strategic classification literature to the ranking setting, to show that such effort (and its differential cost) could have substantial equity implications for the ultimate ranking. Fair ranking mechanisms which seek to equalize exposure affect such incentives, both for desirable and undesirable strategic behavior, and it is necessary to take them into account when designing fair ranking mechanisms for real-world settings. Designing fairness mechanisms which can distinguish between such desirable and undesirable behavior may be further challenging (cf. \cite{liutestoptional21}).
Finally, we note that the above discussion---that of strategic behavior of individual providers---does not consider the setting in which the platform---a seemingly neutral player and deployer of a ranking algorithm---also plays the role of a competitive provider (through a subsidiary or partner). Since such providers have access to private platform data and control over their algorithms, they may be able to deploy undetectable strategic manipulations (e.g., Amazon's private label of products on its marketplace \cite{dash2021umpire}) which the other providers are not able to match, leading to an unfair strategy playing field for providers. The design and auditing of ranking algorithms robust to such behavior is an important direction for future work. \subsection{Consequences of Uncertainty}\label{subsec:uncertainty} Fairness-aware ranking mechanisms proposed for exposure- and probability-based fairness often assume knowledge of the true relevance of providers or items, of the demographic characteristics on which to remain fair, and of the value of each position in the ranking. However, such scores are rarely available in real-world settings. For example, machine-learned models or other statistical techniques used to estimate relevance scores are often uncertain about the relevance of items for various reasons, such as biased or noisy feedback, the initial unavailability of data \cite{morik2020controlling,yang2021maximizing}, and platform updates in dynamic settings \cite{patro2020incremental}. While such estimation noise (or bias) is important for all algorithmic ranking or recommendation challenges, it is especially important to consider for fair ranking algorithms, as we illustrate below. Current fair ranking mechanisms assume the availability of the demographic data of individuals to be ranked. Whilst such assumptions help algorithmic developments for fair ranking, the availability of demographic data cannot be taken for granted.
Demographic data such as race and gender is often hard to obtain, due to legal prohibitions or privacy concerns around its collection in various domains \cite{andrus2021we,bogen2020awareness}. To overcome the data gap, platform designers often resort to data-driven inference of demographic information \cite{lahoti2020fairness}, which usually involves substantial uncertainty and error \cite{andrus2021we}; the use of such uncertain estimates of demographic data in fair ranking mechanisms can cause significant harm to vulnerable groups, and ultimately fail to ensure fairness \cite{ghosh2021fair}. Moreover, in dynamic market settings where protected groups of providers or items are often defined based on popularity levels, protected group membership changes over time, adding temporal variation in demographics on top of the uncertainty issues \cite{ge2021towards}. To tackle such variations, \citet{ge2021towards} propose to use constrained reinforcement learning algorithms which can dynamically adjust the recommendation policy to nevertheless maintain long-term fairness. However, incorporating such demographic uncertainty into broader fair ranking algorithms remains an open question. Another crucial part of ranking systems is the estimation of position bias \cite{agarwal2019estimating,chandar2018estimating}, which acts as a proxy measure for click-through probability and helps quantify the possible utilities of providers based on their ranks \cite{bar2009presentation}. Fairness-aware ranking mechanisms need these position bias estimates to ensure fair randomized or amortized click-through utility (exposure) for the providers. While these estimates are often assumed to be readily available in most recent works on fair ranking systems \cite{singh2018fairness,biega2018equity,diaz2020evaluating}, they also carry substantial uncertainty, since they depend heavily on the specifics of the user interface.
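As a stylized illustration of how such estimation noise can act differentially, consider a ranker that shrinks noisy relevance observations toward a prior mean, a standard Bayesian treatment (all numbers below are illustrative assumptions): two providers with identical observed performance receive very different scores purely because one is estimated from less data.

```python
def shrunk_score(observed, noise_var, prior_mean=0.5, prior_var=0.04):
    """Bayesian shrinkage of a noisy relevance observation toward the
    prior mean; noisier observations are shrunk more heavily."""
    w = prior_var / (prior_var + noise_var)
    return w * observed + (1 - w) * prior_mean

# An established provider (long history, low estimation noise) and a
# newcomer with identical observed performance but very little data:
established = shrunk_score(observed=0.8, noise_var=0.01)
newcomer = shrunk_score(observed=0.8, noise_var=0.25)
print(established, newcomer)  # ≈ 0.74 vs ≈ 0.54
# Identical observed relevance, yet the ranker systematically places the
# newcomer lower: a disparate impact of differential uncertainty.
```

This is the mechanism behind the differential-informativeness concern discussed in this subsection: uncertainty that varies across providers translates into systematically unequal ranking outcomes even when underlying relevance is identical.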
Dynamic and interactive user interfaces \cite{mesbah2012crawling}, used on many platforms, often undergo automatic changes that affect attention bias (position and vertical bias) as the web-page layout changes \cite{oosterhuis2018ranking}. Furthermore, factors like the presence of attractive summaries and highlighted evidence of relevance---often generated in automated manners---alongside ranking results also differentially affect click-through probabilities over time and across items \cite{yue2010beyond,joachims2017accurately}. Finally, the presence of relevant images, their sizes, text fonts, and other design constraints also play a huge role \cite{liu2015influence, wang2016beyond,granka2004eye}. Together, as also discussed in \citet{wang2018position} and \citet{sapiezynski2019quantifying}, inaccuracies in position-bias estimation and their consequences remain important challenges in fair RS. Finally, we note that uncertainties, including the above, may be \textit{differential}, affecting some participants more than others, even within the same protected groups. Such differential informativeness might occur, for example, in ranking settings where the platform has more information on some participants (through longer histories, or other access differences) than others \cite{emelianov2020fair,garg2021standardized}. The result may be downstream disparate impact, such as privileging longer-serving providers over newer and smaller ones. Together, these sources and areas of uncertainty should be an important aspect of future work in fair ranking. \vspace{1em} \noindent{\textbf{Fair ranking desiderata. }} What should a comprehensive and long-term view of fairness in RS and its dynamics be composed of? First, the provider utility measure should look beyond mere exposure, and account for user beliefs, perceptions, preferences and effects over time (as discussed in \Cref{subsec:beyond_exposure}).
Second, fair RS works should consider not just immediate impacts but also their spillovers, whether over time for the same item or across other items (as discussed in \Cref{subsec:temporal_significance}). Third, strategic behavior and system incentives should also be modeled to anticipate manipulation concerns and their adverse effects (as discussed in \Cref{subsec:strategic_behavior}). Finally, fair RS mechanisms should incorporate the (potentially differential) effects of estimation noise (as discussed in \Cref{subsec:uncertainty}). Putting things together, this section illustrated various challenges and downstream effects of developing and deploying algorithms from the fair RS literature. As we discuss in the next section, overcoming these challenges requires both longer-term thinking---beyond the immediate effect of a ranking position---and moving beyond studying general RS settings to modeling and analyzing specific settings and their context-specific dynamics. \subsection{\bf Data Bottlenecks}\label{subsec:data_bottlenecks} A major challenge faced by researchers outside industry working on long-term comprehensive evaluations of fair RS is the unavailability of suitable data. The traditional RS datasets \cite{harper2015movielens,mcfee2012million,bennett2007netflix,TREC_data} often used in the literature were collected at a time when goals like accuracy or click-through rates dominated, and so may not be a good fit for today's impact-oriented research \cite{Jannach2020}. For example, a set of user-item ratings data such as the canonical MovieLens dataset \cite{harper2015movielens} may not capture how a user may value the item differently at different points in time, how a user's preferences evolve over time, or the user's or item's associated demographics. Similarly, such data gives little insight into fake reviews or ratings \cite{luca2016fake,he2021market,li2016data,zhang2020practical}, or other strategic manipulations as discussed above.
More broadly, such datasets do not include vital information such as interface design changes that may have a behavioural impact on user choice (as discussed in \cref{subsec:uncertainty}); associated services, like complementary recommender systems or embedded advertisement blocks (as elaborated in \cref{subsec:beyond_exposure}), that work alongside the one being audited; and the type and timing of provider interactions and changes in their behaviour. Such missing components of standard ranking and recommendation system datasets are a major bottleneck to studying the questions from \Cref{sec:pitfalls}. On the other hand, the flourishing of the algorithmic fairness literature has contributed to the spread of several experimental datasets covering a wide range of scenarios such as school admission, credit scoring, house listings, news articles, and much more (see \cite{Zehlike2021survey, mehrabi2019survey} for a list of datasets used in fair ranking and ML research). Datasets such as \textit{COMPAS} or \textit{German Credit}, originally classification tasks, have been adapted to ranking settings. A major issue with the use of these datasets in fair ranking research is that they are often far from the contexts in which fair ranking algorithms would be used. While potentially useful in advancing the conceptual state of the art in algorithmic fairness research, reliance on such datasets raises significant concerns about the ecological validity of that research. Therefore, a more detailed analysis of the use and characteristics of such datasets is much-needed future work, similar to what has been done in the context of Computer Vision research \cite{Miceli2021, Koch2021, Scheuerman2021}. Here, we detail the characteristics that an RS dataset would need in order to be suitable for impact-oriented fairness analysis, in addition to the traditional indicators of user preference or experience (precision or click-through rates).
One recurring theme is that ranking and recommendation systems operate within a broader socio-technical environment (that they themselves shape), and existing datasets do not allow researchers to understand this broader environment and the underlying dynamics.\footnote{We note that while \textit{more} data is not always better (e.g., see the case of NLP models discussed by \citet{Bender2021}), we believe that a certain level of {\it completeness and richness of data} is required to perform more comprehensive and long-term impact analysis.} \begin{enumerate}[(1)] \item Most easily, it would be useful to complement existing datasets with past data on the same platform, such as user-provider interactions and their behaviour; on RS's associated services and related rankings; on other contextual details such as user interface, page layout and design; and on past results from rankings, such as whether the user selected a custom sorting criterion like date or price instead of the platform's default ranking criteria, whether the user was redirected to a product from an external or affiliate link, and whether the user's behaviour follows the platform's guidelines. Such complementary data would allow researchers to understand how the broader environment affects and is affected by a fair ranking algorithm. \item More broadly, a move from static datasets to temporal datasets -- with timestamps on ratings and displayed recommendations/ratings -- would allow finding temporal variations in RS and its stakeholders. It would further allow studying fairness beyond demographic characteristics, such as that related to new providers. For example, as discussed in \Cref{subsec:temporal_significance}, higher-ranked results can often lead to increased user attention and conversion rates \cite{craswell2008experimental}, i.e., results initially ranked higher could then have a greater chance of being ranked highly in subsequent rankings.
Since such biased feedback could easily creep into temporal datasets, one must factor this into any RS impact analysis (e.g., via the unbiased learning method of \citet{joachims2017unbiased} in the presence of biased feedback). Studying such dynamics and their fairness implications in the real world requires observing such interactions. \item Finally, as discussed in \Cref{subsec:uncertainty}, a key aspect of fairness in rankings is uncertainty, especially differential uncertainty. While some datasets may allow researchers to infer certain components of recommendation system uncertainty (such as from the number of ratings for a provider), other uncertainties are hidden. External to such companies, it is unclear how best to reflect the correctness of provided user attributes (such as race and gender, so as to avoid uncertainties in a platform's compliance with fairness requirements), the genuineness of ratings and reviews when feedback is given (so as to account for manipulations in fair RS analysis \cite{trustpilotrankeligibility,youtuberankeligibility}), and other model uncertainties. While it may be difficult for companies to quantify their uncertainties when releasing datasets, one beneficial step would be to release more information on the origin of the data, i.e., dataset datasheets as described by \citet{Gebru2018}. \end{enumerate} Unfortunately, as might be expected, there are several challenges to such comprehensive datasets. The most important challenges are from the legal domain, which might even affect researchers and developers within a company. For example, the data minimization principle in GDPR \cite{data_min_gdpr} could restrict platforms from collecting sensitive information like gender or race, thereby indirectly closing the door on the implementation of fairness interventions; inferred attributes, in turn, would carry substantial uncertainty, which may render fairness interventions useless (as discussed in \cref{subsec:uncertainty}).
In fact, a study by \citet{biega2020operationalizing} finds that performance might not substantially decrease under data minimization, but that minimization might disparately impact different users. Additional legal principles which might present challenges are other privacy regulations, data retention policies, intellectual property rights of platforms, etc. We discuss these challenges in the next section. Furthermore, while a comprehensive and long-term view on fair RS may be of huge societal need and expectation, the creation of suitable datasets and their availability to external researchers heavily rely on the interests of platform owners. Such external access, even if restricted in various ways, is an important aspect of regulation and auditing. We now turn to discussing such legal and regulatory concerns. \subsection{\bf Legal Bottlenecks}\label{subsec:legal_bottlenecks} In the previous section we discussed issues of missing data and the challenges of obtaining necessary information due to platform interests and legal regulations on privacy. Regulations and other legal interventions by governments are helpful in some aspects of ensuring external audits, while hindering fair ranking and recommendation in other contexts. Legal provisions will vary across jurisdictions, causing different challenges in data access and algorithmic disclosure depending on the location of: the data requested, the users of platforms that implement RSs, the individuals impacted by the rankings, and the researchers seeking access to RS information. For example, data protection laws may potentially restrict access to data located in the EU for non-EU based researchers, or vice versa. In this section we give an overview of legal hurdles that prevent researchers of fair RS from assessing the impact of their methods, along with information on specific laws and guidelines that can serve as a starting point for discussions to shape a more robust set of legal provisions for long-term fair RS.
Existing laws and guidance could be applied to long-term fairness in RS, but their wording often leaves them open to interpretation, such that a platform could reasonably argue that it is fulfilling its obligations under the guidance without taking long-term fairness in RS into account. The European Commission Ethics Guidelines for Trustworthy AI~\cite{EUEthicsGuidelines} state that a system should be tested and validated to ensure it is working as intended throughout its entire life cycle, both during development and after deployment. The guidelines list fairness as well as societal well-being as requirements of trustworthy AI. However, if the word ``intended'' is interpreted narrowly, as a point in time and in isolation from the dynamic and interconnected nature of recommendations, platforms could demonstrate that their systems are working as ``intended,'' considering both fairness and societal impact---even if in practice the platform may not be evaluating for long-term fairness or modelling various spillover effects. In addition, the European Commission Guidelines on Ranking Transparency~\cite{EURankingTransparency} reflect the hesitancy that platforms have to be fully transparent about the details of their rankings; they recognise that providers are ``not required to disclose algorithms or any information that, with reasonable certainty, would result in the enabling of deception of consumers or consumer harm through the manipulation of search results.'' This privacy-transparency trade-off may cause the problem of missing data for algorithmic impact assessments to persist. On the other hand, there is a push from regulators to make data from algorithmic systems available---if not to the general public, at least to independent third-party auditors---to mitigate conflicts of interest when platforms audit their own systems.
In the US, the FTC's Algorithmic Accountability Act \cite{FTCAlgorithmicAccountability} provides that, if reasonably possible, impact assessments are to be performed in consultation with external third parties, including independent auditors and technology experts. However, the EU harmonised rules for AI \cite{EUHarmonisedAIRules} acknowledge that, given the early phase of the regulatory intervention and the fact that the AI sector is very innovative, expertise for auditing is only now being accumulated. In the absence of underlying data and full knowledge of the ranking algorithm, researchers could still adopt a forward-looking approach of implementing simulations, based on what they do know about the ranking, to help predict the longer-term effects of a ranking algorithm (as already explained in Section~\ref{subsec:simulations}). It remains to be seen, however, whether the advised disclosure of ``meaningful explanations'' of the main parameters of ranking algorithms---referred to in the European Commission Guidelines on Ranking Transparency \cite{EURankingTransparency}---provides enough information upon which to base an evaluation of the long-term fairness of the RS. There is also uncertainty over whether these meaningful explanations sufficiently reduce the impact of information asymmetry between users of the platform and the platform itself, particularly where the platform both controls the RS and includes its own items to be eligible in ranking results, alongside those of third-party providers. Further consideration also needs to be given to the timing of the release of the explanations when an RS method is updated, to give stakeholders sufficient opportunity to challenge reliance on these parameters, from a long-term fairness perspective, before implementation of the RS update. Applying laws to, or developing laws for, long-term fairness scenarios in RS is in its infancy.
Those involved in shaping this legal framework should consider, for long-term fairness evaluation purposes: data access for different stakeholders, the timing of this access, and the level of detail that needs to be given; as well as providing actionable guidance on a platform's responsibility for developing RS with long-term fairness goals in mind. \section{Towards Impact-oriented Fairness in Ranking and Recommender Systems}\label{sec:long_term_fairness} In order to avoid the pitfalls discussed in the last section and to design `truly' fair RS, one must understand and assess the full range and long-term effects of various RS mechanisms. In this regard, we apply recent lessons from and critiques of Algorithmic Impact Assessment (AIA), both within and beyond the FAccT community. Algorithmic Impact Assessment can be described as a set of practices and measurements with the purpose of establishing the (direct or indirect) impacts of algorithmic systems, identifying the accountability of those causing harm, and designing effective solutions \cite{Metcalf2021,Reisman2018}. More specifically to ranking and recommendation systems, \citet{Jannach2020} introduces a comprehensive collection of issues related to impact-oriented research in RS. There are two broad lessons from this literature that we explain and apply to the design of fair RS, in a manner that involves integrated effort from different actors and a comprehensive view of their effects. First, as discussed by \citet{Vecchione2021}, a key point when assessing or auditing algorithmic systems is to move \textit{beyond discrete moments of decision making}, i.e., to understand how those decision points affect the long-run system evolution; this point is particularly true for fairness interventions in ranking and recommender systems, as discussed in \Cref{sec:pitfalls}.
\citet{Jannach2020} also highlights the limitations and unsuitability of traditional research in RS, which focused solely on accurately predicting user ratings for items (``leaderboard chasing'') or optimizing click-through rates. Thus, in \Cref{subsec:simulations}, we begin with a discussion of methodologies that can be used to study such long-run effects of fair RS mechanisms and that have been used to study other questions in the RS field -- mainly simulation and applied modeling. We detail not only the useful frameworks but also potential limitations and challenges when studying fairness-specific questions. Second, a key aspect of effective assessments is the participation of every suitable stakeholder, including systems developers, affected communities, external experts, and public agencies; otherwise, a danger is that the research community focuses on the impacts most measurable by its preferred methods and ignores others \cite{Metcalf2021}. However, there are bottlenecks to such holistic work, especially for RS used in private or sensitive contexts. We discuss data availability challenges in \Cref{subsec:data_bottlenecks}. Then, in \Cref{subsec:legal_bottlenecks}, we overview various regulatory frameworks -- along with their limitations -- designed to govern RS or algorithmic systems in general, and to hold them accountable. Researchers should contribute to tackling these challenges as well. \subsection{Simulation and Applied Modeling to Study Long-term Effects and Context-specific Dynamics}\label{subsec:simulations} Many of the challenges discussed in \Cref{sec:pitfalls} concern impacts that do not appear in the short term, immediately after a given ranking; for example, it may take time for strategic agents to respond to a ranking system. These long-term impacts are difficult to capture without considering a specific context, or by relying solely on ``traditional'' metrics that assess instantaneous precision-fairness trade-offs.
Outside of fair ranking, the recommendations literature has investigated such long-term and indirect effects using \textit{simulation and applied modeling} methods, motivated for example by the observation that offline (and commonly, precision-driven) recommendation experiments are not always predictive of long-term simulation or online A/B testing outcomes \cite{gomez2015netflix,bodapati2008recommendation, krauth2020offline}. However, surprisingly, such an approach has been relatively rare in the fair rankings and recommendations literature; to spur such work, here we overview various simulation and modeling tools that are advantageous in our context. First, {\bf simulations} have already been used to demonstrate long-term effects of recommender systems and search engines---albeit in work unrelated to fairness---in ways that static precision-based analyses cannot. Examples are the demonstration of the {\it performance paradox} (users' higher reliance on recommendations may lead to lower RS performance accuracy and discovery) by \citet{Zhang2020}, the study of {\it homogenization} effects on RS users by \citet{Chaney2018}, a study on the emergence of {\it filter bubbles} \cite{Nguyen2014} in collaborative filtering recommendation systems and their impacts by \citet{Aridor2020}, the evaluation of reinforcement learning to rank for search engines by \citet{hu2018reinforcement}, and a study on {\it popularity bias} in search engines by \citet{fortunato2006topical}. All relied on context-specific simulations of RS. Many other works also leverage simulations \cite{Hazrati2020, Ferraro2020, patro2020incremental, Banerjee2020AnalyzingM, Bountouridis2019, DAmour2020, Yao2020, Mansoury2020, patro2020towards} to study various dynamics in recommender systems.
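As a minimal illustration of the kind of feedback dynamics these works simulate, the following toy model (with assumed click probabilities and a naive popularity-based ranker, not any cited framework) shows how equally relevant items diverge in outcomes purely through position bias:

```python
import random

def simulate(n_rounds=2000, seed=0):
    """Toy rich-get-richer dynamic: clicks ~ position bias x relevance,
    and the ranker re-sorts items by accumulated clicks each round."""
    rng = random.Random(seed)
    relevance = [0.9, 0.9, 0.9, 0.9]      # all items equally relevant
    bias = [0.5, 0.25, 0.125, 0.0625]     # expected attention by rank
    clicks = [0, 0, 0, 0]
    order = [0, 1, 2, 3]                  # arbitrary initial ranking
    for _ in range(n_rounds):
        for rank, item in enumerate(order):
            if rng.random() < bias[rank] * relevance[item]:
                clicks[item] += 1
        # naive popularity ranking: re-sort by clicks accumulated so far
        order = sorted(range(len(clicks)), key=lambda i: -clicks[i])
    return clicks

print(simulate())  # early winners lock in top ranks and accumulate most clicks
```

Despite identical relevance, whichever item gets ahead early is re-ranked to the top, receives the most attention, and compounds its lead over time---exactly the kind of long-run effect that a static precision-fairness analysis cannot surface.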
In summary, these works illustrate how simulation-based environments can help in {\it (i)} studying various hypothesized relationships between the usage of systems and individual and collective behavior and effects, {\it (ii)} detecting new forms of relationships, and {\it (iii)} replicating results obtained in empirical studies. Given the usefulness of simulations, many simulation frameworks have been developed to study various fairness approaches for information retrieval systems; to mention just a few: MARS-Gym \cite{MARSGYM}, ML-fairness-gym \cite{DAmour2020}, Accordion \cite{McInerney2021}, RecLab \cite{krauth2020offline}, RecSim NG \cite{Mladenov2021}, SIREN \cite{Bountouridis2019}, T-RECS \cite{lucherini2021t}, RecoGym \cite{Rohde2018}, AESim \cite{gao2021imitate}, Virtual-Taobao \cite{shi2019virtual}. Note, however, that these simulated environments are created under certain assumptions on the interactions between the stakeholders and the system, which may not always hold in the real world. As emphasized by \citet{Friedler2021}, it is important to question how different value assumptions may influence the simulated environments, and which worldviews have been modeled while developing such frameworks. On a positive note, simulation frameworks can be designed to be flexible enough to give freedom in (de)selecting or changing the fundamental value assumptions in fair RS; for example, RecoGym \cite{Rohde2018} and MARS-Gym \cite{MARSGYM} provide freedom in setting various types of user behaviours and interactions with the system. This flexibility allows impact and efficacy assessment under different ethical scenarios, and the study of fair RS mechanisms under various delayed effects and user biases (as discussed in \cref{subsec:beyond_exposure,subsec:temporal_significance}) -- we believe that leveraging such simulation frameworks is an important path forward to studying the various effects discussed above in a context-specific manner.
Second, various {\bf temporal, behavioural and causal models} have traditionally been used to formally define, understand and study complex dynamical systems in fields like social networks \cite{handcock2010modeling,hanneke2010discrete,farajtabar2017coevolve}, game theory and economics \cite{camerer2003behavioural,ariely2008predictably}, machine learning \cite{yao2021survey,guo2020survey}, and epidemiology \cite{grenfell2001travelling}. These models often rely on real-world observations of individual behaviour, extract broader insights, and then try to formally represent both individual and system dynamics through mathematical modeling. While simulation frameworks can function as technical tools to study RS dynamics, suitable temporal, behavioural and causal models can be integrated within the simulation to ensure that the ecosystem parametrization, stakeholder behaviour and system pay-offs are representative of the real world. For example, \citet{radinsky2013behavioral} improve search engine performance through the use of suitable behavioural and temporal models in their framework. Similarly, simulation frameworks with suitable applied modeling can be used to design and evaluate fair RS mechanisms that can withstand strategic user behaviour and other temporal environment variations. Causal models can be utilized to study the impact of fair RS \cite{sharma2015estimating, schnabel2016recommendations,wang2020causal} in the presence or absence of uncertainties and various associated services. Applied modeling tools are further an effective way to study strategic concerns in ranking, along with their fairness implications \citep{liu2021strategic}. Even though simulations along with applied modeling may not exactly mirror the real-world effects of fair RS, they can give enough of a basis to highlight likely risks, which can then be taken into account while designing and optimizing fair RS mechanisms.
They also bring an opportunity to model the effects of proposed fairness interventions, so that their long-term and indirect effects can be better understood and compared. However, these approaches would further benefit from the availability of certain data and the resolution of related legal bottlenecks. For example, studies on spillover effects cannot proceed without data on complementary and associated services. These data and legal bottlenecks might also have contributed to the fact that there are very few works exploring this direction, and of the limited works, some are restricted to either theoretical analysis \cite{mladenov2020optimizing,ben2020content} or simulations with assumed parametrizations \cite{Zhang2020,ge2021towards,xue2019enhancing} in the absence of complementary data.\footnote{Note that a few recent works look into long-term assessment of fair machine learning \cite{liu2018delayed,zhang2020long,DAmour2020}, which we set aside so as not to divert from the primary focus of our discussion.} We discuss these bottlenecks in \cref{subsec:data_bottlenecks} and \cref{subsec:legal_bottlenecks}. \section{Conclusion} In this paper we provided a critical overview of the current state of research on fairness in ranking, recommendation, and retrieval systems, and especially the aspects often abstracted away in existing research. Much of the existing research has focused on instant-based, static fairness definitions that are prone to oversimplifying real-world ranking systems and their environments. Such a focus may do more harm than good and result in `fair-washing,' if those methods are deployed without continuous critical investigation of their outcomes. Guidelines and methods to consider the effects of the entire ranking system through its life cycle, including effects from interactions with the outside world, are urgently needed.
We discussed various aspects beyond the actual ordering of items that affect rankings, including spillover effects, temporal variations, and varying user characteristics such as their levels of activity. We further examined the effects of strategic behaviors and uncertainties in an RS. These effects play an important role in the successful creation and assessment of fair rankings, and yet they are rarely considered in state-of-the-art fair ranking research. Finally, we proposed next steps to overcome these research gaps. As a promising first step we have identified simulation frameworks and applied-modeling methods, which can reflect the complexity of ranking systems and their environments. However, in order to create meaningful impact analyses, concerns around datasets for fair ranking research, certain data bottlenecks, and legal hurdles are yet to be resolved. Our analysis concerning existing research gaps is of course by no means exhaustive, and many other issues of high complexity remain to be discussed. In this paper, we focused on fair ranking methods that try to enhance fairness for a single side of stakeholders, mostly the individuals being ranked, or the providers of items that are ranked. Research concerned with multi-stakeholder problems has recently started to emerge---finding, for example, that fairness objectives for providers and consumers can be in conflict with each other. Similarly, we also did not explicitly discuss ranking platforms as two-sided markets, in which both sides may receive rankings of the other side.
While it is a promising direction with a vast corpus of economic research on the topic, it is important to understand that \begin{inparaenum}[(1)] \item not all ranking platforms and their environments are two-sided in a literal sense: e.g., Amazon is a platform and a provider at the same time; and \item depending on what is happening on the platform, different justice frameworks have to be applied: e.g., school choice, LinkedIn, and Amazon can all be seen as two-sided markets in a broader sense, but they need very different approaches when it comes to the question of what it means for them to be fair. \end{inparaenum} Depending on whether people or products are ranked, one might expect different manifestations of user bias, as well as different requirements on data privacy and minimization policies. These differences have to be taken into account when designing fair ranking methods. Finally, we note that, to the best of our knowledge, all known definitions of fairness in ranking are drawn from an understanding of fairness as distributive justice: (limited) \textit{primary goods}---goods essential for a person's life, such as housing, access to job opportunities, health care, etc.---are to be distributed fairly across a set of individuals. Fair ranking definitions of this kind may be a good fit for hiring or admissions, because we distribute a limited number of primary goods, namely jobs and education, among a set of individuals. However, fairness definitions based on the distributive justice framework may not make sense in other scenarios. For instance, e-commerce platforms may not fit the properties of distributive justice, because they do not distribute \emph{primary} goods: e-commerce outcomes, e.g., whether a single item is sold, may not qualify as immediately life-changing.
Overall, we conclude that there is still a long way ahead of us; many more aspects from the ranking systems' universe have to be considered before we achieve substantive and robust algorithmic justice in rankings, recommendations, and retrieval systems. \section*{Appendix: Comprehensive Examples} Here we give some toy examples relevant to our discussion in the paper. \Cref{tab:position_bias} gives an example on how position bias in ranking could further widen the already existing inequalities. In \Cref{tab:fair_exposure_gone_wrong}, we give an example where the traditional fair ranking would fail to ensure equity in presence of user biases. \Cref{tab:temporal_significance} gives an example where fair ranking mechanisms would fail in presence of temporal variations. \Cref{tab:duplication_attack} and \Cref{tab:spillover_example} give examples on how duplication attacks and spillovers could cause the failure of fair ranking mechanisms. \begin{table*}[h] \small \subfloat[An optimal ranking]{ \begin{tabular}{|c|c|c|c|c|} \hline {\bf Rank} & {\bf Expected} & {\bf Individual} & {\bf Relevance} & {\bf Group} \\ & {\bf attention} & & & {\bf membership}\\\cline{1-5} $1$ & $0.5$ & \textcolor{blue}{A} & $0.92$ & \multirow{2}*{\textcolor{blue}{blue}} \\\cline{1-4} $2$ & $0.25$ & \textcolor{blue}{B} & $0.91$ & \\\cline{1-5} $3$ & $0.125$ & \textcolor{red}{C} & $0.90$ & \multirow{2}*{\textcolor{red}{red}}\\\cline{1-4} $4$ & $0.0625$ & \textcolor{red}{D} & $0.89$ & \\\cline{1-5} \end{tabular}\label{tab:position_bias_ranking}} \hfil \subfloat[Group-level analysis]{ \begin{tabular}{|c|c|c|} \hline {\bf Group} & {\bf Mean} & {\bf Exposure}\\ & {\bf relevance} & \\\cline{1-3} \textcolor{blue}{blue} & $0.915$ & $0.75$ \\\cline{1-3} \textcolor{red}{red} & $0.895$ & $0.1875$ \\\cline{1-3} \end{tabular}\label{tab:position_bias_inequality}} \caption{\textmd{Here we give an example (inspired by \citet{singh2018fairness}) on how position bias could further widen the existing inequalities. 
On a gig-economy platform there are four workers: A, B from the blue group, and C, D from the red group. For a certain employer, the platform wants to create a ranking of the workers. Let us assume that, in reality, all the workers are equally relevant to the employer. However, due to a pre-existing bias in historical training data, the relevance scores estimated by the platform's model are: $0.92$, $0.91$, $0.90$, $0.89$ for A, B, C, D respectively. Using the probability ranking principle \cite{robertson1977probability} we can optimize the user utility by ranking them in descending order of their relevance: i.e., A$\succ$B$\succ$C$\succ$D as given in table (a). The second column of table (a) has the expected user attention for each rank (this follows from real world observations of position or rank bias indicating close to an exponential decrease of attention while moving from the top to bottom ranks \cite{craswell2008experimental}). Next we give a group-level analysis in table (b). The mean relevance scores of the blue group (A \& B) and the red group (C \& D) were $0.915$ and $0.895$ respectively which are not so different. On the other hand the exposure (sum of expected attention) of the blue and red groups---in the optimal ranking--- were $0.75$ and $0.1875$ respectively which are very different. 
We can clearly see how, in the presence of position bias, the optimal ranking can significantly widen the gap in exposure even for a small difference in relevance estimation.}}\label{tab:position_bias} \end{table*} \begin{table*}[h] \small \subfloat[Expected attention]{ \begin{tabular}{|c|c|} \hline {\bf Rank} & {\bf Attention} \\\cline{1-2} $1$ & $0.6$ \\\cline{1-2} $2$ & $0.3$ \\\cline{1-2} $3$ & $0.1$ \\\cline{1-2} \end{tabular}\label{tab:avg_attention}} \hfil \subfloat[Non-discriminatory employer]{ \begin{tabular}{|c|c|c|} \hline {\bf Rank} & {\bf Individual} & {\bf Group}\\\cline{1-3} $1$ & \textcolor{red}{A} & \textcolor{red}{red}\\\cline{1-3} $2$ & \textcolor{blue}{D} & \textcolor{blue}{blue} \\\cline{1-3} $3$ & \textcolor{blue}{E} & \textcolor{blue}{blue} \\\cline{1-3} \end{tabular}\label{tab:ranking1}} \hfil \subfloat[Discriminatory employer]{ \begin{tabular}{|c|c|c|} \hline {\bf Rank} & {\bf Individual} & {\bf Group} \\\cline{1-3} $1$ & \textcolor{blue}{F} & \textcolor{blue}{blue} \\\cline{1-3} $2$ & \textcolor{red}{B} & \textcolor{red}{red} \\\cline{1-3} $3$ & \textcolor{red}{C} & \textcolor{red}{red} \\\cline{1-3} \end{tabular}\label{tab:ranking2}} \caption{\textmd{Here, we give a simple example of ranking in a hiring or gig-economy platform setting where exposure, if used as a measure of producer utility, may fail to deliver the desired fairness even after satisfying fairness of exposure. We have six workers (A, B, and C from the red group, and D, E, and F from the blue group) on the platform. The platform's RS presents a ranked list (size $3$) of workers to the consumers, i.e., the employers. In table (a), we give a sample distribution of expected attention from an employer over the ranks (i.e., on average there are $0.6$, $0.3$, $0.1$ chances of an employer clicking on the individual ranked $1$, $2$, $3$ respectively). Tables (b) and (c) show the rankings given to two different employers. 
Now the overall exposure of the red group will be exposure$(A)+$ exposure$(B)+$ exposure$(C)= 0.6+0.3+0.1=1$. Similarly, the blue group's exposure will be exposure$(D)+$ exposure$(E)+$ exposure$(F)= 0.3+0.1+0.6=1$. It is clear that this set of rankings follows the notions of fairness of exposure \cite{singh2018fairness} and equity of attention \cite{biega2018equity}. However, if we look more closely, one employer (in table (b)) is a non-discriminatory employer while the other one (in table (c)) is a discriminatory employer biased against the blue group. The second employer ignores the top ranked individual $F$ from the blue group, and treats $B$, $C$ as if they are ranked at the first and second positions. Thus, under these circumstances, the expected impact on the red group increases while that on the blue group decreases even though the rankings are fair in terms of exposure distribution.}} \label{tab:fair_exposure_gone_wrong} \end{table*} \begin{table*}[h] \small \subfloat[Expected attention]{ \begin{tabular}{|c|c|} \hline {\bf Rank} & {\bf Attention} \\\cline{1-2} $1$ & $0.6$ \\\cline{1-2} $2$ & $0.4$ \\\cline{1-2} \end{tabular}\label{tab:exp_exposure}} \hfil \subfloat[At Time $t$]{ \begin{tabular}{|c|c|c|c|} \hline {\bf Rank} & {\bf Item} & {\bf Exposure} & {\bf Overall}\\ & & & {\bf interest}\\\cline{1-4} $1$ & $a$ & $0.6$ & \multirow{2}*{$1$} \\\cline{1-3} $2$ & $b$ & $0.4$ & \\\cline{1-4} \end{tabular}\label{tab:with_temporal_sig_1}} \hfil \subfloat[Time $t+1$ ($50\%$ reduction in overall interest)]{ \begin{tabular}{|c|c|c|c|} \hline {\bf Rank} & {\bf Item} & {\bf Exposure} & {\bf Overall} \\ & & & {\bf interest}\\\cline{1-4} $1$ & $b$ & $0.6$ & \multirow{2}*{$0.5$} \\\cline{1-3} $2$ & $a$ & $0.4$ & \\\cline{1-4} \end{tabular}\label{tab:with_temporal_sig_2}} \caption{\textmd{Here we give an example of how temporal variations in the significance of rankings can cause fair ranking mechanisms to fail. 
Consider a scenario where two news agencies, A and B, regularly publish their articles on a news aggregator platform which then ranks the news articles while recommending them to the readers (users). In table (a), we give a sample distribution of expected attention from readers over the ranks. At some time just before $t$, a big event happens, and both A and B quickly report on this through equally good articles $a$ and $b$, both published at time $t$. Tables (b) and (c) show the rankings of articles on the platform at times $t$ and $t+1$. If we sum up the total exposure of each agency, we get exposure$(A)=0.6+0.4=1$ and exposure$(B)=0.4+0.6=1$. However, if we look more closely, the overall interest of readers in the breaking news at time $t$ is $1$, which decreases to $0.5$ at time $t+1$. This is because the readers who have already read the news on the particular event at $t$ will be less likely to read the same news again from a different agency at $t+1$. Thus, even though the exposure metrics of the news agencies are the same in this case, they end up with disparate impact due to the temporal degradation of user interest. 
A way to avoid such outcomes would be to design and use suitable context-specific weighting mechanisms for rankings which can anticipate and account for such temporal variations.}} \label{tab:temporal_significance} \end{table*} \begin{table*}[h] \small \subfloat[List of relevant items]{ \begin{tabular}{|c|c|c|} \hline {\bf Relevant} & {\bf Provider} & {\bf \% times}\\ {\bf items} & & {\bf recommended}\\\cline{1-3} $a_1$ & \multirow{2}*{$A$} & \multirow{2}*{$50\%$}\\\cline{1-1} $a_2$ & &\\\cline{1-3} $b_1$ & \multirow{2}*{$B$} & \multirow{2}*{$50\%$}\\\cline{1-1} $b_2$ & &\\\cline{1-3} \end{tabular}\label{tab:normal_catalogue}} \hfil \subfloat[List with item duplication]{ \begin{tabular}{|c|c|c|} \hline {\bf Relevant} & {\bf Provider} & {\bf \% times}\\ {\bf items} & & {\bf recommended}\\\cline{1-3} $a_1$ & \multirow{3}*{$A$} & \multirow{2}*{$60\%$}\\\cline{1-1} $a_1$\_copy & & \\\cline{1-1} $a_2$ & &\\\cline{1-3} $b_1$ & \multirow{2}*{$B$} & \multirow{2}*{$40\%$}\\\cline{1-1} $b_2$ & &\\\cline{1-3} \end{tabular}\label{tab:manipulated_catalogue}} \caption{\textmd{An example of a duplication attack (inspired by \citet{diincentives} and the Sybil attacks in networks \cite{goga2015doppelganger}): Here, table (a) lists the relevant items for a certain information need. The list contains two items each from providers $A$ and $B$. Let us consider a recommender system which recommends exactly one item every time. In such a recommendation setting, the fairness notions which advocate for fair allocation of exposure, visibility, or impact \cite{singh2018fairness,biega2018equity,surer2018multistakeholder}, would try to allocate $25\%$ to each item, i.e., each item is recommended $25\%$ of the time; thus each provider gets $50\%$ of the exposure or visibility. 
Now, if provider $A$ tries to manipulate the system by introducing a copy of its own item $a_1$ as a new item $a_1$\_copy (as shown in table (b)), in a way that is potentially undetectable by the platform, then it is highly likely that the machine-learned relevance scoring model would assign the same or similar relevance to the copied item. In this scenario, due to the fairness notion, provider $A$ potentially increases its share of exposure to $60\%$ while reducing it to $40\%$ for provider $B$. Allocation-based fair ranking methods can create incentives for providers to perform such strategic manipulations. A possible way to disincentivise such duplication would be to actively include item features in the relevance scoring model which are particularly hard to duplicate (e.g., \#views on YouTube videos, \#reviews on Amazon).}} \label{tab:duplication_attack} \end{table*} \begin{table*}[h] \small \subfloat[Items and recommendations]{ \begin{tabular}{|c|c|c|} \hline {\bf Relevant} & {\bf \% times}\\ {\bf items} & {\bf recommended}\\\cline{1-2} $a$ & $20\%$\\\cline{1-2} $b$ & $20\%$\\\cline{1-2} $c$ & $20\%$\\\cline{1-2} $d$ & $20\%$\\\cline{1-2} $e$ & $20\%$\\\cline{1-2} \end{tabular}\label{tab:item_catalogue}} \hfil \subfloat[Similar items]{ \begin{tabular}{|c|c|c|} \hline {\bf Item} & {\bf Similar items}\\ {\bf page} & {\bf recommended}\\\cline{1-2} $a$ & $b,c$\\\cline{1-2} $b$ & $c,d$\\\cline{1-2} $c$ & $a,b$\\\cline{1-2} $d$ & $b,c$\\\cline{1-2} $e$ & $b,c$\\\cline{1-2} \end{tabular}\label{tab:similar_items_list}} \hfil \subfloat[Resultant exposure distribution]{ \begin{tabular}{|c|c|c|} \hline {\bf Item} & {\bf Resultant exposure}\\ & {\bf (with $20\%$ spillover)}\\\cline{1-2} $a$ & $20-4+1\times 2=18\%$\\\cline{1-2} $b$ & $20-4+4\times 2=24\%$\\\cline{1-2} $c$ & $20-4+4\times 2=24\%$\\\cline{1-2} $d$ & $20-4+1\times 2=18\%$\\\cline{1-2} $e$ & $20-4+0\times 2=16\%$\\\cline{1-2} \end{tabular}\label{tab:resultant_exposure}} \caption{\textmd{An example of exposure spillover: Here we consider an 
e-commerce setting where there are five items relevant to a certain type of user. Following fairness of exposure \cite{singh2018fairness} or equity of attention \cite{biega2018equity}, each of the five items gets recommended the same number of times, as shown in table (a). Apart from this regular recommendation, e-commerce platforms often have similar or complementary item recommendations \cite{amazon_reco,sharma2015estimating} towards the bottom of individual item pages. Table (b) shows the similar items shown on each individual item's page. Assuming that there is 20\% spillover, i.e., 20\% of the user crowd coming to any item page moves to the similar items shown on the page, the resultant expected exposure of the items after one step of user spillovers is given in table (c). It can be clearly seen that even though the regular recommender system ensures fairness (as in table (a)), the resultant effects may not be fair due to spillover effects.}} \label{tab:spillover_example} \end{table*}
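The spillover arithmetic of table (c) can be checked mechanically. The following Python sketch (illustrative, not part of any cited work) redistributes $20\%$ of each item's exposure equally between the two similar items shown on its page:

```python
from fractions import Fraction

# Baseline (table (a)): each of the five items is recommended 20% of the time.
base = {item: Fraction(20) for item in "abcde"}

# Similar items shown on each item's page (table (b)).
similar = {"a": "bc", "b": "cd", "c": "ab", "d": "bc", "e": "bc"}

SPILL = Fraction(1, 5)  # 20% of visitors follow a similar-item link

# One spillover step: each page loses 20% of its exposure,
# split equally between the two similar items it displays.
result = {item: exp * (1 - SPILL) for item, exp in base.items()}
for page, links in similar.items():
    share = base[page] * SPILL / len(links)
    for target in links:
        result[target] += share

print(sorted((k, float(v)) for k, v in result.items()))
# reproduces table (c): a=18, b=24, c=24, d=18, e=16 (percent)
```

The total exposure is conserved ($100\%$), but its distribution shifts away from items that receive few incoming similar-item links, exactly as in the example.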
\section{Introduction} \label{sec:introduction} In the past decade several measurements of $b$-quark decays with leptons in the final state have shown disagreement with the overly successful Standard Model (SM). Such disagreements are collectively referred to as “flavour anomalies”, and they typically feature tensions at the level of 2–3 standard deviations between experimental results and SM predictions. An interesting aspect of these anomalies lies in the fact that they all seem to point towards the presence of lepton flavour universality (LFU) violation in the interactions mediating the processes. Last year, the measurements of the rare decays $B^+ \to K^+ \ell^+ \ell^-$, with $\ell$ denoting an electron or a muon, provided further evidence for the breaking of LFU in beauty-quark decays in a single process, with a significance of 3.1 standard deviations, based on 9 fb$^{-1}$ of proton-proton collision data collected at LHCb \cite{LHCb:2021trn}. The accuracy of the predictions for the branching fractions of semileptonic $B$ decays is generally higher than that of hadronic decays, due to the reliability of perturbative techniques. Moreover, this precision can be further increased by taking ratios of processes with electrons or muons in the final state, since they are affected equally by the strong force, which does not couple directly to leptons. Thus, to minimize the hadronic uncertainties one usually introduces branching fraction ratios, which in the case of $B \to K^{(*)} \ell^+ \ell^-$ can be defined as \begin{equation}\begin{split} &R_{K^{(*)}[q^2_{\rm min},q^2_{\rm max}]}=\frac{\mathcal B(B\to K^{(*)} \mu^+\mu^-)_{q^2\in[q^2_{\rm min},q^2_{\rm max}]}}{\mathcal B(B\to K^{(*)} e^+e^-)_{q^2\in[q^2_{\rm min},q^2_{\rm max}]}} \end{split}\end{equation} where $\mathcal{B}$ denotes the branching fraction for the given decay mode measured over the bin $[q^2_{\rm min},q^2_{\rm max}]$. 
The resulting $R_{K^{(*)}}$ are measured over specific ranges of the squared di-lepton invariant mass $q^2$. The $B \to K^{(*)} \ell^+ \ell^-$ decays are driven at the quark level by the $b \to s\ell^+ \ell^-$ transition. The hadronic process involved is mediated by Flavour Changing Neutral Currents (FCNCs), which are forbidden at tree-level in the SM. The branching fractions in the ratio $R_{K^{(*)}}$ differ only by the leptons in the final state, hence this ratio is expected to be 1 by virtue of Lepton Flavour Universality (LFU), with small deviations induced by phase space differences and QED corrections. By comparing recent LHCb experimental values with theoretical determinations we have: \begin{widetext} \begin{equation} \begin{aligned}[l] &R_{K^+[1.1,6.0]}^\text{exp}=0.846^{+0.042\, +0.013}_{-0.039\,-0.012}\text{~\cite{LHCb:2021trn}}\\ &R_{K^{*0}[0.045,1.1]}^\text{exp}=0.66^{+0.11}_{-0.07}\pm 0.03\text{~\cite{LHCb:2017avl}}\\ &R_{K^{*0}[1.1,6.0]}^\text{exp}=0.69^{+0.11}_{-0.07}\pm 0.05\text{~\cite{LHCb:2017avl}}\\ \end{aligned} \quad \begin{aligned}[l] &R_{K^+}^\text{th}=1.00\pm 0.01\text{~\cite{Bordone:2016gaq,Capdevila:2017ert}}\\ &R_{K^{*0}[0.045,1.1]}^\text{th}=0.922\pm 0.022\text{~\cite{Capdevila:2017ert}}\\ &R_{K^{*0}[1.1,6.0]}^\text{th}=1.000\pm 0.006\text{~\cite{Capdevila:2017ert}}\\ \end{aligned} \quad \begin{aligned}[l] & 3.1~ \sigma\\ & 2.3~ \sigma\\ & 3.4~ \sigma\\ \end{aligned} \label{lhcbdata} \end{equation}\end{widetext} where $q^2$ is given in GeV$^2$. In the experimental data the first errors are statistical and the second ones systematic. The first result is the most precise measurement to date; it is compatible with the SM prediction only with a p-value of 0.10\%, which corresponds to evidence for the violation of lepton universality in these decays with a significance of 3.1$\sigma$. We have also listed the statistical significance of the anomalies for the other experimental results. 
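As a rough cross-check of the quoted tensions, one can naively combine the uncertainties in quadrature and compute the Gaussian pull of the measurement from the prediction. The Python sketch below does this for $R_{K^+}$ in the $[1.1,6.0]$ bin; the naive estimate ($\approx 3.4\sigma$) slightly overshoots the quoted $3.1\sigma$, which LHCb derives from the full profile likelihood with asymmetric errors:

```python
import math

# R_{K^+} for q^2 in [1.1, 6.0] GeV^2
r_exp = 0.846                     # LHCb central value
stat_up, syst_up = 0.042, 0.013   # upper (toward the SM) uncertainties
r_sm, sm_err = 1.00, 0.01         # SM prediction and its uncertainty

# Naive Gaussian pull, adding all uncertainties in quadrature
sigma = math.sqrt(stat_up**2 + syst_up**2 + sm_err**2)
pull = (r_sm - r_exp) / sigma
print(f"naive tension: {pull:.1f} sigma")   # ~3.4 sigma
```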
Recently LHCb has investigated $B^0 \to K^{0}_S\ell^+ \ell^-$ and $B^+ \to K^{*+} \ell^+ \ell^-$ decays, with $\ell$ being an electron or a muon. Notice that these decays involve mesons which are the isospin partners of the ones in the previously measured channels $B^+ \to K^{+} \ell^+ \ell^-$ and $B^0 \to K^{*0} \ell^+ \ell^-$. Although these decays have branching fractions similar to those of their isospin partners, they suffer from a reduced experimental efficiency at LHCb, due to the presence of a long-lived $K^{0}_S$ or $\pi^0$ meson in the final states. The measured ratios are \begin{widetext} \begin{equation} \begin{aligned}[l] &R_{{K^{0}_S}[1.1,6.0]}^\text{exp}=0.66^{+0.20\, +0.02}_{-0.14\,-0.04}\text{~\cite{LHCb:2021lvy}}\\ &R_{K^{*+}[0.045,6.0]}^\text{exp}=0.70^{+0.18\, +0.03}_{-0.13\,-0.04}\text{~\cite{LHCb:2021lvy}}\\ \end{aligned} \label{lhcbdataoct} \end{equation}\end{widetext} and provide $\sim$ 1.5$\sigma$ hints of departures from the SM~\cite{LHCb:2021lvy}. Recent experimental determinations of $R_{K^{*}}$ have also been given by the Belle collaboration, using the full $\Upsilon(4S)$ data sample containing 772 $\times 10^6$ $B \bar B$ events. For the same ranges of di-lepton invariant mass reported in \eqref{lhcbdata}, they find \begin{widetext} \begin{equation} \begin{aligned}[l] &R_{K^{*0}[0.045,1.1]}^\text{exp}=0.46^{+0.55}_{-0.27}\pm 0.13\text{~\cite{Belle:2019oag}}\\ &R_{K^{*0}[1.1,6.0]}^\text{exp}=1.06^{+0.63}_{-0.38}\pm 0.14\text{~\cite{Belle:2019oag}}\\ &R_{K^{*+}[0.045,6.0]}^\text{exp}=0.62^{+0.60}_{-0.36}\pm 0.09\text{~\cite{Belle:2019oag}}\\ \end{aligned} \quad \end{equation}\end{widetext} BaBar, Belle and LHCb have provided other prominent contributions to ratio determinations in these as well as in different channels \cite{BaBar:2012obs,BaBar:2013mob,LHCb:2015gmp,Belle:2015qfa, Belle:2016kgw, LHCb:2014vgu, LHCb:2017avl, Belle:2019oag, LHCb:2021lvy}. 
The primary requirement for any model to explain these $b \to s$ anomalies is to have a symmetry which distinguishes between semi-leptonic $B$ decays to $\mu^+ \mu^-$ and to $e^+ e^-$ such that $R_{K^{(*)}}$ deviates appreciably from one. Within the SM this cannot be achieved as the theory is sequential, so that the $e^-$ and $\mu^-$ carry the same gauge charges. One possible way out is to postulate a new $U(1)_X$ gauge symmetry under which $e^-$ and $\mu^-$ carry different charges~\cite{Altmannshofer:2016jzy,Crivellin:2015era,Bonilla:2017lsq,Allanach:2020kss,Bause:2021prv}. Here we propose that the deviation from $R_{K^{(*)}} = 1$ can be achieved from a bigger non-trivial gauge symmetry, from which the standard $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$~gauge group emerges as a subgroup. We do this in the framework of the so-called 331 models~\cite{Singer:1980sw,Valle:1983dk,Montero:1992jk}, which constitute one of the simplest well-motivated extensions of the SM. The name 331 follows from their extended $\mathrm{SU(3)_c \otimes SU(3)_L \otimes U(1)_X } $~gauge group. Several issues which remain unanswered within the SM, for instance the origin of light-neutrino masses, typically call for larger gauge structures and/or new particles. Grand Unified Theories certainly play an important role in this respect, but 331 models have the advantage that they can provide scenarios where larger gauge symmetries can be probed already at the TeV scale. Moreover, so far no hard evidence in favour of conventional unification schemes has been found. Here we note that 331 models lead to a consistent theoretical structure and also a phenomenologically viable weak neutral current~\footnote{Earlier 331 models were suggested to account for the high-$y$ anomaly, which turned out to be spurious~\cite{Lee:1977qs,Lee:1977tx,Buccella:1977gx,Buccella:1978nc}.}\textsuperscript{,}\footnote{Recently there have been some attempts to relate the $(g-2)_{\mu}$ anomaly with 331 models (see e.g. Refs.~\cite{deJesus:2020ngn,Hue:2020wnn,Hue:2021zyw}).}, while shedding light on mysteries such as the number of particle families. As a result, 331-based extensions have attracted a lot of interest, see for instance~\cite{Boucenna:2014dia, Dong:2014wsa, Alves:2016fqe, Dias:2020kbj, Hernandez:2021zje}. These models experience two stages of breaking: at a larger scale $\Lambda_{NP}$, the extended group is broken down to the SM gauge group, while the electroweak symmetry breaking occurs at the lower scale $\Lambda_{EW}$. Phenomenologically, these models feature additional heavy gauge bosons, as well as an extended Higgs sector to drive the two spontaneous symmetry breakdowns. Left-handed fermions transform according to one of the two fundamental representations, i.e. triplets (or antitriplets) under the action of $SU(3)_L$. In the simplest version of 331 theories~\cite{Singer:1980sw,Valle:1983dk} exactly three families emerge from the cancellation of chiral anomalies, which requires that the number of triplets matches the number of antitriplets. In contrast to the SM, where the anomaly is cancelled within each generation of fermions, in these 331 models all families must be considered to achieve anomaly cancellation. Since quarks come in three colours, there must be three families of quarks and leptons, with leptons appearing in the same fundamental representation of the group. As a result their couplings with gauge bosons are necessarily family-independent, preventing any LFU violation in their gauge couplings. Here we are concerned with other versions of the 331 model extending the lepton sector with additional species. This assumption allows us to choose at least one lepton family transforming differently from the others, ensuring the presence of LFU violation. The minimal choice preserving anomaly cancellation requires two additional lepton species. 
These versions of the 331 model have been considered in Refs.~\cite{Cabarcas:2012uf, Cabarcas:2013jba,Diaz:2004fs,Diaz:2003dk, Ponce:2001jn,Anderson:2005ab}. In the preliminary analysis \cite{Descotes-Genon:2017ptp}, it was studied whether they can reproduce the anomalies observed in $b\to s\ell\ell$ processes under simple assumptions: LFU violation is dominated by neutral gauge boson exchange, with no significant Lepton Flavour Violation (LFV) of the form $b\to s\ell_1\ell_2$, nor large contributions to $B_s\bar{B}_s$ mixing. It was found that under these simple assumptions an extended 331 model without exotic electric charges for fermions and gauge bosons can yield large contributions to $(C_9^\mu,C_{10}^\mu)$ in good agreement with 2018 global fit analyses \cite{Capdevila:2017bsm}. This result is rather non-trivial, given that the model is quite constrained. Apart from providing an updated numerical analysis including the recent $B$-anomaly data, here we fully develop the proposal in Ref. \cite{Descotes-Genon:2017ptp}, by adding the neutral fermions required for an adequate description of the neutrino mass matrix. In addition to gauge symmetries, we assume the presence of two auxiliary discrete $\mathbb{Z}_2$ and $\mathbb{Z}_3$ symmetries, which are needed in order to ensure an adequate pattern of fermion masses. As we describe in Sec. \ref{sec:model} in more detail, the primary purpose of the $\mathbb{Z}_3$ symmetry is to forbid direct gauge invariant couplings between SM and exotic leptons. The presence of such couplings would imply either unacceptably large masses for SM leptons or unacceptably small masses for exotic charged leptons, both scenarios being experimentally excluded. An additional $\mathbb{Z}_2$ is further needed to generate different masses for SM and exotic fermions which carry the same gauge quantum numbers, without the need for fine-tuning. The paper is organized as follows. 
In Sec.~\ref{sec:model} we sketch the model and its field representations. In Sec.~\ref{sec:yukawa-interactions} we discuss the Yukawa interactions, including those used in the implementation of the seesaw mechanism. In Sec.~\ref{sec:ferm-mass-matr} we comment on fermion mass generation, including neutrino masses. In Sec.~\ref{sec:b-flavour-global} we perform a comparison with $B$ flavour global analyses. We found that this 331 model can generate large new physics contributions to $(C_9^\mu,C_{10}^\mu)$ parameters, in agreement with new physics scenarios favoured by global fits. In Sec.~\ref{Conclusion} we present our conclusions. \vspace{-0.2cm} \section{The Model} \label{sec:model} Apart from gluons, any 331 model has nine vector bosons associated to each generator of the gauge group, eight $W^a_\mu$ for SU(3)$_\text{L}$ and one $X_\mu$ for U(1)$_\text{X}$. We indicate the generators of the SU(3)$_\text{L}$ gauge group with $\hat T^1 \cdots \hat T^8$, normalized as $\mathrm{Tr}[\hat T^i \, \hat T^j]=\delta^{ij}/2 $, and define the $U(1)_X$ generator as $\hat T^9 = {\mathds 1}/\sqrt{6}$, where ${\mathds 1} = \mathrm{diag} (1, 1, 1)$ is the identity matrix. The electric charge is defined in general as a linear combination of the diagonal generators of the group \begin{equation} \hat Q = a \hat T^3+ \beta \hat T^8+X{\mathds 1} \end{equation} where the values of the proportionality constants $a$ and $\beta$ distinguish different 331 models. We have $\hat T^3 = 1/2 \, \hat{\lambda}^3= 1/2 \, \mathrm{diag}(1, -1,0)$ and $\hat T^8 = 1/2 \, \hat{\lambda}^8= 1/(2\sqrt{3}) \, \mathrm{diag}(1, 1,-2)$, where $\hat{\lambda}^i$ are the Gell-Mann matrices. $X$ is the quantum number associated with $U(1)_X$. We set $a=1$ to obtain isospin doublets which embed $\mathrm{ SU(2) \otimes U(1)}$ into $\mathrm{SU(3) \otimes U(1)}$. In order to restrict $\beta$ we demand that no new particle introduced in the model has exotic charges (i.e. different from the SM ones). 
This can be done by choosing the particular value \begin{equation} \beta= -1/\sqrt3 \end{equation} which is the original assignment made in~\cite{Singer:1980sw}. We will thus have the following definition of the electric charge operator \begin{equation} \hat Q= \hat T^3-\frac{1}{\sqrt3} \hat T^8+X{\mathds 1} \label{charge1} \end{equation} Complex gauge fields are defined by the combinations $W^{\pm}_{\mu}=\frac{1}{\sqrt{2}}(W^1_{\mu}{\mp}iW^2_\mu)$, $V^{\pm}_{\mu}=\frac{1}{\sqrt{2}}(W^6_{\mu}{\mp}iW^7_\mu)$ and $Y^{0(0_*)}_{\mu}=\frac{1}{\sqrt{2}}(W^4_{\mu}{\mp}iW^5_\mu)$, where the superscripts $\pm,0$ denote electric charges of the fields, a notation we will follow throughout this work. In general, the values of the electric charges of the $V_\mu$ and $Y_\mu$ bosons depend on the value of $\beta$. With our choice of $\beta=-\frac{1}{\sqrt{3}}$ the electric charges of all gauge bosons are fixed to either $\pm1$ or $0$, i.e. non-exotic values. \subsection{Symmetry breaking} Starting from the $\mathrm{SU(3)_c \otimes SU(3)_L \otimes U(1)_X } $ gauge group (with gauge couplings $g_S,g,g_X$), the model will undergo two spontaneous symmetry breakings (SSB) triggered by colour singlet scalar fields acquiring non-vanishing vacuum expectation values, in a way analogous to the SM. The overall pattern of SSB is the following \vspace{3mm} \begin{widetext} \begin{center} \begin{adjustbox}{max width=\textwidth} \begin{tikzpicture} \node at (0,0) {$SU(3)_\text{c}\times SU(3)_\text{L} \times U(1)_X$}; \draw [->] (2.5,0) -- node[above] {}node[below] {$\Lambda_\text{NP}$} (3.5,0); \node at (6,0) {$SU(3)_\text{c}\times SU(2)_\text{L} \times U(1)_Y$}; \draw [->] (8.5,0) -- node[above] {}node[below] {$\Lambda_\text{EW}$} (9.5,0); \node at (11.5,0) {$SU(3)_\text{c}\times U(1)_\text{EM}$}; \end{tikzpicture} \end{adjustbox} \end{center}\end{widetext} The first SSB occurs at an energy scale $\Lambda_\text{NP}$ and allows one to recover the SM gauge group. 
The subsequent one, at energy scale $\Lambda_\text{EW}$, reproduces the electroweak symmetry breaking (EWSB) of the SM. We assume that $\Lambda_\text{NP}\gg\Lambda_\text{EW}$, and introduce a small parameter $\epsilon=\Lambda_\text{EW}/\Lambda_\text{NP}$ characterizing the order of magnitude of the new physics (NP). As in the SM, the Higgs fields, besides giving mass to the gauge bosons, are used to generate fermion mass terms through gauge invariant Yukawa terms. The need to build gauge invariant terms in such a way as to obtain appropriate mass terms after SSB constrains the possible scalar Higgs field representations. Since the fermions transform either as a $3$ or as a $\bar 3$ under SU(3)$_{\textrm{L}}$, we only have a limited number of possibilities~\cite{Diaz:2003dk} for a scalar field $\Phi$, which at both stages can only be a triplet, a sextet or a singlet~\footnote{A 331 gauge singlet scalar can in principle contribute to the neutral fermion mass term; however, since it does not change our conclusions, we ignore this possibility for simplicity. Though we have a different number of triplets and sextets than in Ref.~\cite{Diaz:2003dk}, their conclusions on the structure of the gauge boson mass sector do not change, since we assume the same vacuum expectation value (vev) alignments.}. We assume that the breaking of the SU(3)$_\text{L}$ symmetry is accomplished through two triplets $\chi$ and $\tilde \chi$ and a sextet $S_1$. There are five gauge fields that acquire a mass of the order of $\Lambda_\text{NP}$, whereas the remaining three gauge fields are the SM gauge bosons. At the first SSB stage, the gauge bosons acquiring mass are the charged ones $V^{\pm}$, the neutral gauge bosons $Y^{0(0_*)}$, and a massive neutral gauge boson $Z'$ given as a combination of the two neutral gauge bosons $X$ and $W^8$, which also yields the gauge boson $B$. 
Their mixing angle $\theta_{331}$ is given by: \begin{equation} \begin{pmatrix}Z'\\B\end{pmatrix}=\begin{pmatrix}\cos\theta_{331}&-\sin\theta_{331}\\\sin\theta_{331}&\cos\theta_{331}\end{pmatrix}\begin{pmatrix}X\\W^8\end{pmatrix}, \end{equation} The angle $\theta_{331}$ is found by singling out the $Z'$ field in the sector of the Lagrangian including the masses of the gauge bosons, which follow from the covariant derivative in the Higgs Lagrangian. It yields \begin{equation} \sin\theta_{331}=\frac{g}{\sqrt{g^2+\frac{g_X^2}{18}}}\,,\qquad \cos\theta_{331}=-\frac{\frac{g_X}{3\sqrt2}}{\sqrt{g^2+\frac{g_X^2}{18}}}. \label{mixing:angle} \end{equation} where $g$,$g_X$ denote the coupling constants for $SU(3)_L$ and $U(1)_X$ respectively. The second stage of symmetry breaking is the usual electroweak symmetry breaking to the electromagnetic subgroup i.e. $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$ $\to \text{SU(3)}_\text{c} \otimes \text{U(1)}_\text{EM}$. This breaking is driven by the triplets $\eta, \rho$, $\tilde \eta, \tilde \rho$ and the sextet $S_{c}$. After electroweak symmetry breaking, the neutral gauge bosons $W^3$ and $B$ mix with each other to give the {SM } $Z$ and $\gamma$ bosons as follows \begin{equation} \begin{pmatrix} Z\\ \gamma \end{pmatrix}=\begin{pmatrix}\cos\theta_{W}&-\sin\theta_{W}\\\sin\theta_{W}&\cos\theta_{W}\end{pmatrix}\begin{pmatrix} W^3 \\ B \end{pmatrix}, \end{equation} where the mixing angle $\theta_{W}$ is the usual electroweak mixing angle. \\ Summarizing, our scalar sector is similar to that in Ref. \cite{Descotes-Genon:2017ptp}, except for the addition of the triplets $\tilde \chi$, $\tilde \eta, \tilde \rho$ and the removal of the sextet $S_b$, for reasons that will be detailed later. 
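As a quick numerical sanity check of Eq.~\eqref{mixing:angle}, the expressions for $\sin\theta_{331}$ and $\cos\theta_{331}$ should define an orthogonal $(Z',B)$ rotation for any choice of couplings. A short Python sketch, with purely illustrative (not fitted) values of $g$ and $g_X$:

```python
import math

g, g_X = 0.65, 0.40   # illustrative SU(3)_L and U(1)_X couplings (not a fit)

norm = math.sqrt(g**2 + g_X**2 / 18)
sin331 = g / norm                            # sin(theta_331)
cos331 = -(g_X / (3 * math.sqrt(2))) / norm  # cos(theta_331)

# (g_X/(3*sqrt(2)))^2 = g_X^2/18, so the normalization holds exactly
assert abs(sin331**2 + cos331**2 - 1) < 1e-12

# The (X, W8) -> (Z', B) change of basis is an orthogonal rotation
R = [[cos331, -sin331], [sin331, cos331]]
for i in range(2):
    for j in range(2):
        dot = sum(R[i][k] * R[j][k] for k in range(2))
        assert abs(dot - (i == j)) < 1e-12
```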
The two-step spontaneous symmetry breaking ensures that all the new gauge bosons indeed get large masses through the large vevs of the scalars breaking $\mathrm{SU(3)_c \otimes SU(3)_L \otimes U(1)_X } $ $\to$ $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$. Only the SM gauge bosons get their masses in the second symmetry breaking step due to the electroweak-scale vev carried by the scalars breaking $\mathrm{SU(3)_c \otimes SU(2)_L \otimes U(1)_Y}$ $\to \text{SU(3)}_\text{c} \otimes \text{U(1)}_\text{EM}$. \subsection{Matter content} \label{sec:fieldsandrepr} In the previous section we have discussed the gauge structure and symmetry breaking pattern; here we focus on the matter content of our 331 model, looking in detail at the charge assignments. This 331 model contains three families of left-handed quarks and five families of left-handed leptons~\cite{Ponce:2001jn,Anderson:2005ab,Cabarcas:2012uf,Cabarcas:2013jba,Descotes-Genon:2017ptp}. They all belong to the fundamental representations of SU(3)$_\text{L}$. Two generations of quarks and one of leptons behave as anti-triplets, all the others as triplets of SU(3)$_\text{L}$. This fermion content ensures at the same time the cancellation of the anomalies and allows LFU violation, but otherwise departs from the SM as little as possible. Fixing $\beta=-1/\sqrt{3}$ has ensured that both SM and new fields in the spectrum all have non-exotic charges. Using the notation (SU(3)$_\text c$, SU(3)$_\text{L}$, U$_X$(1)) while referring to the representations of the fermions, we write for the left-handed ones \begin{itemize} \item three families of quarks~\footnote{Note that the order in which the triplet components are arranged is a matter of choice. An alternative convention is for the first component of quark triplets to be up-type, whereas the others are down-type. For leptons the upper one would be charged, while the others neutral. The third component is always exotic~\cite{Singer:1980sw}. 
} \begin {equation} \begin{split} q_m &=\begin{pmatrix}d^L_m\\-u^L_m\\B^L_m\end{pmatrix}\sim (3, \bar 3, 0), \quad m=1,2 \\ q_3 &=\begin{pmatrix}u^L_3\\d^L_3\\T^L_3\end{pmatrix}\sim (3, 3, \frac 1 3); \end{split} \label{qh} \end{equation} \item five species of leptons \begin {equation} \begin{split} \ell_1&=\begin{pmatrix}e^{-L}_1 \\ -\nu^L_1 \\ E^{-L}_1\end{pmatrix}\sim (1, \bar3, -\frac 2 3), \\ \ell_n &=\begin{pmatrix}\nu^L_n\\ e^{-L}_n \\N^{0L}_n\end{pmatrix}\sim (1, 3, -\frac 1 3), \qquad n=2,3 \\ L_4 &=\begin{pmatrix} \nu^{0L}_4\\ E^{-L}_4 \\ N^{0L}_4\end{pmatrix}\sim (1, 3, -\frac 1 3), \\ L_5 &=\begin{pmatrix}\bigl(E^{-R}_4\bigr)^c\\ N^{0L}_5 \\ \bigl(e^{-R}_3\bigr)^c\end{pmatrix}\sim (1, 3, \frac 2 3). \\ \end{split} \label{lh} \end{equation} \end{itemize} Notice that, as in the original 331 model of~\cite{Singer:1980sw}, no positively charged leptons have been introduced in the triplets. Indeed, they would only appear in $L_5$, but we identify them with the charge conjugates of the right-handed components of $E^{-}_4$ and $e^{-}_3$. This economical identification avoids the presence of charged exotic particles at the electroweak scale. We have labelled the SM fermions with lower-case letters ($e_i, \, \nu_i$ with $i=1,2,3$), and the exotic ones with $\nu_4$ and upper-case letters ($E_{1,4}, \, N_{2,3,4,5}$), choosing letters and/or superscripts recalling their electric charge assignments and chirality. In contrast, reference to chirality has been omitted for simplicity when naming left-handed triplets/antitriplets as a whole: left-handed SM quarks, SM leptons and exotic leptons are denoted by $q_{1,2,3}$, $\ell_{1,2,3}$ and $L_{4,5}$, respectively. Capital letters have been used for the last two triplets because they contain only exotic fermions.
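The statement that all fields carry non-exotic charges for $\beta=-1/\sqrt{3}$ can be verified component by component with the standard charge operator $Q = T_3 + \beta\, T_8 + X$. The following bookkeeping sketch (our own illustration, not part of the model definition) reproduces the electric charges of the quark (anti-)triplets above:

```python
import math

BETA = -1.0 / math.sqrt(3.0)  # beta parameter fixed in the text

def triplet_charges(X, anti=False):
    """Electric charges Q = T3 + beta*T8 + X of the three components
    of an SU(3)_L triplet (or anti-triplet) with U(1)_X charge X."""
    T3 = [0.5, -0.5, 0.0]
    T8 = [1.0 / (2.0 * math.sqrt(3.0)),
          1.0 / (2.0 * math.sqrt(3.0)),
          -1.0 / math.sqrt(3.0)]
    sign = -1.0 if anti else 1.0          # anti-triplet: flipped generators
    return [sign * (t3 + BETA * t8) + X for t3, t8 in zip(T3, T8)]

# q_3 ~ (3, 3, 1/3): components (u, d, T) -> charges (2/3, -1/3, 2/3)
print(triplet_charges(1.0 / 3.0))
# q_m ~ (3, 3bar, 0): components (d, -u, B) -> charges (-1/3, 2/3, -1/3)
print(triplet_charges(0.0, anti=True))
```

The exotic third components come out with the ordinary charges $2/3$ and $-1/3$, as claimed in the text.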
The right-handed components of charged fermions are defined as singlets of SU(3)$_\text{L}$; the SM ones are labelled as $u_{1,2,3}$, $d_{1,2,3}$ and $e_{1,2}$ with lower-case, and the exotic ones $B_{1,2}$, $T_{3}$ and $E_{1}$ with upper-case, without any chirality or charge superscript. Altogether, we have the following list of right-handed fermions \begin{itemize} \item the quark fields \begin {equation} \begin{split} d_{1,2,3} &\sim(3,1,-1/3)\\ B_m &\sim(3,1,-1/3),\qquad m=1,2\\ u_{1,2,3} &\sim(3,1,2/3)\\ T_3 &\sim(3,1,2/3) \end{split} \end{equation} \item the charged lepton fields \begin {equation} \begin{split} e_{1,2} &\sim(1,1,-1) \\ E_{1} &\sim(1,1,-1) \end{split} \label{rh} \end{equation} As already mentioned, the right-handed parts of $e_3^-$ and $E_4^-$ are included in the SU(3)$_\text L$ lepton triplet $L_5$. \item the neutral lepton fields \footnote{Compared with the fermion content of Ref. \cite{Descotes-Genon:2017ptp}, we have three extra neutral two-component fermions $\nu^R_{1,2,3}$ to implement neutrino mass generation \textit{\`a la seesaw}.} \begin {equation} \begin{split} \nu^R_{1,2,3} \sim (1,1,0) \end{split} \end{equation} We do not include right-handed partners for the neutral lepton fields $N^{0L}_{2,3,4,5}$ and $\nu^{0L}_4$, which get Majorana mass terms. \end{itemize} The representation assignments for the fermions and scalars are summarized in Table~\ref{tab:seesaw-z2}, where one also sees the presence of two auxiliary discrete symmetries $\mathbb{Z}_2$ and $\mathbb{Z}_3$. The latter is the discrete abelian cyclic group of order 3. It has three elements, and a convenient representation is obtained by using the cube roots of unity. These are given by $1, \omega, \omega^2$, where $\omega = \exp\left(\frac{2\pi i}{3}\right)$ with $\omega^3 =1$. Note that $\omega^{-1} = \omega^2$ and that $\omega^{3n} = 1$ if $n$ is an integer.
This cyclic nature further implies that $\omega^n = \omega^{n-3}$, so that $\omega^4 = \omega^3 \times \omega = \omega$, $\omega^5 = \omega^3 \times \omega^2 = \omega^2$ and so on. These extra symmetries are needed in order to ensure an adequate pattern of fermion masses. In the absence of the $\mathbb{Z}_3$ symmetry, e.g., the unwanted invariant mass term $\bar{\ell}_1 (L_5)^c$ would be present. % On the other hand, since the SU(3)$_\text c$, SU(3)$_\text L$, U$_X$(1) gauge charges as well as the $\mathbb{Z}_3$ charges of the SM fermion triplets $\ell_{2,3}$ and of the exotic triplet $L_{4}$ are the same, these symmetries cannot distinguish between the SM and the exotic fermions inside the $L_4$ triplet. To prevent having similar masses for the exotic and SM fermions, we make a distinction between them by means of an additional $\mathbb{Z}_2$ symmetry, as shown in Table~\ref{tab:seesaw-z2}. \begin{table}[!t] \centering \begin{tabular}{| c || c | c | c | c || c | c | c | c | } \hline & Fields & $\rm SU(3)_c \otimes SU(3)_L \otimes U(1)_X$ &\hspace{.05cm} $\mathbb{Z}_3$ \hspace{.05cm} &\hspace{.05cm} $\mathbb{Z}_2$ \hspace{.05cm} & Fields & $\rm SU(3)_c \otimes SU(3)_L \otimes U(1)_X$ &\hspace{.05cm} $\mathbb{Z}_3$ \hspace{.05cm} &\hspace{.05cm} $\mathbb{Z}_2$ \hspace{.05cm} \\ \hline \hline \multirow{4}{*}{ \begin{turn}{90} \hspace{0.9cm} \small{Quarks} \hspace{0.05cm} \end{turn} } & $q_{1,2}$ & ($\mathbf{3}, \mathbf{\bar{3}}, \mathbf{0}$) & $\mathbf{1}$ & $\mathbf{1} $ & $q_3$ & ($\mathbf{3}, \mathbf{3}, \mathbf{1/3}$) & $ \mathbf{1}$ & $\mathbf{1} $ \\ & $u_{1,2,3}$ & ($\mathbf{3}, \mathbf{1}, \mathbf{2/3}$) & $ \mathbf{\omega^2}$ & $\mathbf{1} $ & $d_{1,2,3}$ & ($\mathbf{3}, \mathbf{1}, \mathbf{-1/3}$) & $ \mathbf{\omega} $ & $\mathbf{1} $\\ & $T_3$ & ($\mathbf{3}, \mathbf{1}, \mathbf{2/3}$) & $\mathbf{\omega^2}$ & $\mathbf{1}$ & $B_{1,2}$ & ($\mathbf{3}, \mathbf{1}, \mathbf{-1/3}$) & $\mathbf{\omega}$ & $\mathbf{1}$ \\ \hline \hline \multirow{4}{*}{
\begin{turn}{90} \hspace{0.9cm} \small{Leptons} \hspace{0.25cm} \end{turn} } & $\ell_1$ & ($\mathbf{1}, \mathbf{\bar{3}}, \mathbf{-2/3}$) & $\mathbf{1}$ & $\mathbf{1}$ & $\ell_{2,3}$ & ($\mathbf{1}, \mathbf{3}, \mathbf{-1/3}$) & $ \mathbf{\omega}$ & $\mathbf{1} $ \\ & $e_{1,2}$ & ($\mathbf{1}, \mathbf{1}, \mathbf{-1}$) & $\mathbf{\omega}$ & $\mathbf{1} $ & $E_1$ & ($\mathbf{1}, \mathbf{1}, \mathbf{-1}$) & $\mathbf{\omega}$ & $\mathbf{-1}$ \\ & $L_4$ & ($\mathbf{1}, \mathbf{3}, \mathbf{-1/3}$) & $\mathbf{\omega}$ & $\mathbf{-1}$ & $L_5$ & ($\mathbf{1}, \mathbf{3}, \mathbf{2/3}$) & $\mathbf{\omega}$ & $\mathbf{-1}$ \\ & $\nu^R_{1,2,3}$ & ($\mathbf{1}, \mathbf{1}, \mathbf{0}$) & $\mathbf{1}$ & $\mathbf{1} $ & & & & \\ \hline \hline \multirow{5}{*}{ \begin{turn}{90}\hspace{1.5cm} \small{Scalars} \hspace{0.25cm} \end{turn} } & $\chi$ & ($\mathbf{1}, \mathbf{3}, \mathbf{-1/3}$) & $\mathbf{\omega}$ & $\mathbf{1} $ & $S_1$ & ($\mathbf{1}, \mathbf{6}, \mathbf{-2/3}$) & $ \mathbf{\omega^2}$ & $\mathbf{1} $ \\ & $\tilde{\chi}$ & ($\mathbf{1}, \mathbf{3}, \mathbf{-1/3}$) & $\mathbf{\omega}$ & $\mathbf{-1} $ & $\tilde{\eta}$ & ($\mathbf{1}, \mathbf{3}, \mathbf{-1/3}$) & $\mathbf{\omega}$ & $\mathbf{-1}$ \\ &$\eta$ & ($\mathbf{1}, \mathbf{3}, \mathbf{-1/3}$) & $\mathbf{\omega}$ & $\mathbf{1}$ &$\rho$ & ($\mathbf{1}, \mathbf{3}, \mathbf{2/3}$) & $\mathbf{\omega^2}$ & $\mathbf{1}$ \\ & $S_c$ & ($\mathbf{1}, \mathbf{6}, \mathbf{4/3}$) & $\mathbf{\omega^2}$ & $\mathbf{1}$ & $\tilde{\rho}$ & ($\mathbf{1}, \mathbf{3}, \mathbf{2/3}$) & $ \mathbf{1} $ & $\mathbf{1} $ \\ \hline \end{tabular} \caption{\begin{footnotesize} Particle content of the 331 model, where in addition to the SU(3)$_\text c$, SU(3)$_\text L$, U$_X$(1) gauge symmetries, we have listed two abelian discrete symmetries, see text. 
\end{footnotesize}} \label{tab:seesaw-z2} \end{table} \vspace{-.55cm} \section{Yukawa interactions} \label{sec:yukawa-interactions} Before discussing the details of the fermion masses, we summarize the Higgs scalar representations that will drive the breaking of $\rm SU(3)_c \otimes SU(3)_L \otimes U(1)_X$ in the Yukawa sector~\cite{Diaz:2003dk, Descotes-Genon:2017ptp}. There are two stages of symmetry breaking, at the high 331 scale and at the EW scale. Vevs of a generic field $\psi$ are denoted by $\langle\psi \rangle$. \subsection{331 Breaking} \label{sec:331-breaking} This is the first SSB stage, which is accomplished by the $\rm SU(3)_L$ scalar sextet $S_1$ and triplets $\chi, \tilde{\chi}$, with (U(1)$_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2)$ charges and non-zero vevs as follows: \begin{equation}\begin{split} \langle S_1\rangle&=\begin{pmatrix} 0&0&0\\ 0&0&0\\ 0&0& \langle (S_1)_{33} \rangle \end{pmatrix},\,(\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (-\frac 2 3, \omega^2, 1) \\ \langle \chi\rangle&=\frac 1 {\sqrt 2} \begin{pmatrix} 0\\0\\ \langle \chi_3 \rangle \end{pmatrix},\,(\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (-\frac 1 3, \omega, 1) \\ \langle \tilde{\chi}\rangle&=\frac 1 {\sqrt 2} \begin{pmatrix} 0\\0\\ \langle\tilde{\chi}_3 \rangle\end{pmatrix},\,(\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (-\frac 1 3, \omega, -1) \end{split} \end{equation} The $\mathbb{Z}_3 \otimes \mathbb{Z}_2$ and gauge symmetry invariant Yukawa terms that can be built with the sextet are: \begin {eqnarray} &&\bar\ell_a S_1(\ell_b)^c\quad \qquad a,b = 2,3 \nonumber \\ && \bar{L}_4 S_1(L_4)^c \end{eqnarray} These terms lead to Majorana masses for the exotic neutral leptons $ N^0_{2,3,4}$.
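The invariance of such Yukawa terms can be bookkept mechanically: writing the $\mathbb{Z}_3$ charges additively ($\omega^n \leftrightarrow n$ mod 3) and flipping all charges for barred or conjugated fields, a term is allowed when the U(1)$_X$ charges sum to zero, the $\mathbb{Z}_3$ charges sum to zero mod 3, and the $\mathbb{Z}_2$ parities multiply to $+1$. A minimal sketch of this bookkeeping, with the assignments read off Table~\ref{tab:seesaw-z2} (our own helper names):

```python
# (U(1)_X charge, additive Z3 charge, Z2 parity) from the table
Q = {
    'l1':  (-2/3, 0, +1), 'l23': (-1/3, 1, +1), 'L5':  (2/3, 1, -1),
    'S1':  (-2/3, 2, +1), 'qm':  (0.0, 0, +1),  'chi': (-1/3, 1, +1),
    'dR':  (-1/3, 1, +1),
}

def bar(f):
    """Barred / charge-conjugated fields enter with flipped charges."""
    return (-f[0], (-f[1]) % 3, f[2])

def invariant(*fields):
    """True if the product of fields is a singlet of U(1)_X x Z3 x Z2."""
    x = sum(f[0] for f in fields)
    z3 = sum(f[1] for f in fields) % 3
    z2 = 1
    for f in fields:
        z2 *= f[2]
    return abs(x) < 1e-9 and z3 == 0 and z2 == +1

# allowed: bar(l_a) S_1 (l_b)^c  -> Majorana masses for N^0_{2,3}
assert invariant(bar(Q['l23']), Q['S1'], bar(Q['l23']))
# allowed: bar(q_m) chi^* D with D a right-handed down-type quark
assert invariant(bar(Q['qm']), bar(Q['chi']), Q['dR'])
# forbidden: the unwanted mass term bar(l_1) (L_5)^c mentioned earlier
assert not invariant(bar(Q['l1']), bar(Q['L5']))
```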
The $\mathbb{Z}_3 \otimes \mathbb{Z}_2$ and gauge symmetry invariant Yukawa terms that can be built with the triplets are: \begin{itemize} \item The up- and down-quark mass terms \begin{equation} \begin{split} & \bar{q}_m \chi^* D \quad \qquad m = 1,2 \\ &\bar{q}_3 \chi\, U \end{split} \end{equation} where $D$ represents any right-handed $d_{1,2,3}$ or $B_{1,2}$, while $U$ represents any right-handed $u_{1,2,3}$ or $T_3$. After SSB, they contribute to the mixing between SM and exotic quarks, and give Dirac masses to $B_{1,2}$ and $T_3$. \item The equivalent terms in the lepton sector \begin{equation} \begin{split} & \bar{\ell}_1 \chi^* e_{1} \\ & \bar{\ell}_1 \chi^* e_{2} \\ & \bar{\ell}_1 \tilde{\chi}^* E_1 \end{split} \end{equation} Here one sees how the scalar triplet $\tilde{\chi}$, odd under the $\mathbb{Z}_2$ symmetry, allows a coupling of $E_1$ with $\ell_1$, providing a Dirac mass term for $E_1$. \item We also have the anti-symmetric combination of SU(3)$_\text{L}$ triplets or antitriplets, i.e. % \begin{equation} \begin{split} & \epsilon_{ijk}\chi^{*i}\bar{L}^{j}_4(L_5)^{c\,k} \\ & \epsilon_{ijk}\tilde{\chi}^{*i}\bar\ell^{j}_{m}(L_5)^{c\,k} \quad \qquad m = 2,3 \end{split} \end{equation} where the $i,j,k=1,2,3$ indices refer to SU(3)$_\text{L}$. The first term includes mixing between $N^0_5$ and $\nu^{0L}_4$ and allows a mass term for $E_4$. \end{itemize} Summarizing, all the exotic charged and neutral fermions, except for $N^0_5$ and $\nu^{0L}_4$, have Yukawa couplings with scalars which get large vevs at the first stage of spontaneous symmetry breaking. The $N^0_5$ and $\nu^{0L}_4$ fields also need to get large masses, at least in the GeV range, which can arise as discussed in the following sections.
\subsection{Electroweak Breaking} \label{sec:electroweak-breaking} Turning now to electroweak symmetry breaking, the corresponding vevs of the scalar fields are given as\\[-.5cm] \begin{equation}\begin{split} \langle S_c\rangle&=\begin{pmatrix} 0&0&0\\0& \langle (S_c)_{22} \rangle &0\\0&0&0\\ \end{pmatrix},\, (\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (\frac 4 3, \omega^2, 1) \\ \langle \eta\rangle&=\frac{1}{\sqrt 2}\begin{pmatrix} \langle \eta_1 \rangle \\0\\ \langle \eta_3 \rangle\end{pmatrix},\, (\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (-\frac 1 3, \omega, 1) \\ \langle \tilde{\eta} \rangle &=\frac{1}{\sqrt 2}\begin{pmatrix} \langle \tilde{\eta}_1 \rangle \\0\\ \langle \tilde{\eta}_3 \rangle \end{pmatrix},\, (\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (-\frac 1 3, \omega, -1) \\ \langle \rho\rangle&=\frac{1}{\sqrt 2}\begin{pmatrix}0\\ \langle \rho_2 \rangle \\0\end{pmatrix},\, (\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (\frac 2 3, \omega^2, 1) \\ \langle \tilde{\rho} \rangle&=\frac{1}{\sqrt 2}\begin{pmatrix}0\\ \langle \tilde{\rho}_2 \rangle \\0\end{pmatrix},\, (\mathrm{U(1)}_\mathrm{X}, \mathbb{Z}_3, \mathbb{Z}_2) = (\frac 2 3, 1, 1) \end{split} \end{equation} The neutral component of $L_5$ gets mass through invariant terms built with the sextet, i.e. \begin{equation}\begin{split} &\bar L_5 S_c(L_5)^c\\ \end{split} \end{equation} This Yukawa term gives a diagonal mass term for the neutral $N^0_5$. Note that since $S_c$ gets a vev in its 22-component, a large value of $\vev{ (S_c)_{22}}$ would change the $\rho$-parameter from its canonical SM value. Therefore, the vev of the $S_c$ field needs to be small, less than 2 GeV or so. Thus, the dominant contribution to the mass of the $N^0_5$ field does not come from the above term but rather from its couplings with other fields (see Table~V), a fact that we have also checked numerically.
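The size of the $\rho$-parameter shift can be illustrated with the standard tree-level formula $\rho = \sum_i [T_i(T_i+1)-T_{3i}^2]\,v_i^2 \,/\, (2\sum_i T_{3i}^2 v_i^2)$, treating the vev-carrying component of $S_c$ as an $SU(2)_L$ triplet for this purpose (a simplified SU(2) decomposition used here purely for illustration, not a full computation in the 331 scalar sector):

```python
def rho_parameter(multiplets):
    """Tree-level rho from a list of (T, T3, vev) scalar multiplets:
    rho = sum_i [T_i(T_i+1) - T3_i^2] v_i^2 / (2 sum_i T3_i^2 v_i^2)."""
    num = sum((T * (T + 1) - T3**2) * v**2 for T, T3, v in multiplets)
    den = 2.0 * sum(T3**2 * v**2 for T, T3, v in multiplets)
    return num / den

# SM-like doublet alone: rho = 1 exactly
rho_doublet = rho_parameter([(0.5, 0.5, 246.0)])
# adding a small triplet vev of 2 GeV (the rough bound quoted in the
# text) shifts rho only at the 1e-4 level
rho_with_triplet = rho_parameter([(0.5, 0.5, 246.0), (1.0, 1.0, 2.0)])
print(rho_doublet, rho_with_triplet)
```

This makes quantitative why a vev of a few GeV at most is harmless for electroweak precision data.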
For the triplets, the relevant Yukawa terms for quarks and leptons are the following:\\[-1cm] \begin{itemize} \item for quarks: \begin{equation}\begin{split} &\bar{q}_m \eta^* D\\ &\bar{q}_3 \eta U\\ &\bar{q}_3 \rho D\\ &\bar{q}_m \rho^* U\end{split} \end{equation} where $D$ represents any right-handed $d_{1,2,3}$ or $B_{1,2}$, $U$ represents any right-handed $u_{1,2,3}$ or $T_3$, and $m=1,2$ \item for leptons: \begin{equation}\begin{split} &\bar{\ell}_1 \eta^* e_{1,2} \\ & \bar{\ell}_1 \tilde{\eta}^* E_{1} \\ &\bar{\ell}_m \tilde{\rho} e_{1,2} \, \qquad \qquad \qquad m = 2,3\\ & \bar{L}_4 \tilde{\rho} E_1 \\ &\epsilon_{ijk} \tilde{\eta}^{*i}\bar\ell^{j}_m(L_5)^{c\,k} \, \qquad \; \; m = 2,3\\ & \epsilon_{ijk} \eta^{*i}\bar L^{j}_4 (L_5)^{c\,k} \end{split} \end{equation} where the $i,j,k=1,2,3$ indices refer to SU(3)$_\text{L}$. All these terms provide masses to the charged leptons. The last two terms also provide mixing among neutral exotic states as well as mixing among SM and exotic ones. However, since the $\eta$ and $\tilde{\eta}$ vevs are of electroweak size, none of these terms leads to unacceptably large masses for any SM particles, a fact that can be seen from the explicit forms of the charged and neutral lepton mass matrices given in Tables~(IV) and~(V), respectively. We have also numerically cross-checked this fact.
\end{itemize} Actually, another Higgs sextet $S_b$ would be allowed by the symmetries of the model, with vev \begin{equation} \langle S_b\rangle = \begin{pmatrix} \langle(S_b)_{11}\rangle & 0 & \langle(S_b)_{13}\rangle \\ 0 & 0 & 0 \\ \langle(S_b)_{13}\rangle & 0 & \langle(S_b)_{33}\rangle \end{pmatrix},\, (U(1)_X, \mathbb{Z}_3, \mathbb{Z}_2) = (-\frac{2}{3}, \omega^2, 1) \\ \nonumber \end{equation} leading to the $ U(1)_X \otimes \mathbb{Z}_3 \otimes \mathbb{Z}_2$ invariant Majorana mass terms \begin{equation}\begin{split} &\bar\ell_nS_b(\ell_m)^c,\quad n,m=2,3\\ &\bar L_4 S_b (L_4)^c \nonumber \end{split} \end{equation} The first of these terms gives rise to diagonal mass terms for left-handed neutrinos of the order of the EW scale. Therefore, in order to get the observed tiny neutrino masses through a seesaw mechanism, we exclude the $S_b$ sextet from the particle content. \subsection{Type-I Seesaw mechanism in 331-setup} \label{sec:type-i-seesaw} For implementing the Type-I seesaw mechanism we need the following terms % \begin{equation}\begin{split} & \bar{\ell}_m \, \eta \, \nu^R_a \\ &\bar{L}_4 \, \tilde{\eta} \, \nu^R_a \\ & \bar{\nu}^R_a (\nu_b^{R})^c \label{seesaw-terms1-z2} \end{split} \end{equation} where $m = 2,3$ and $a,b = 1,2,3$. They provide Dirac and Majorana masses for the SM-like neutrinos as well as their mixing with heavy neutral fermions. The second term in \eqref{seesaw-terms1-z2} differs from the first, since $\ell_m$ is replaced by $L_4$. They are distinct thanks to the $\mathbb{Z}_2$ symmetry. This ensures that the neutrino-like fermion in $L_4$ receives an adequately large mass thanks to a suitably tuned Yukawa coupling.
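A one-generation toy of the resulting mass matrix illustrates the seesaw suppression at work in these terms (the numbers below are illustrative and deliberately mild, chosen for double-precision stability rather than realism; realistic seesaw scales are much higher):

```python
import numpy as np

# Toy type-I seesaw: in the (nu_L, nu_R) basis, a Dirac mass mD
# (from the eta vev) links nu_L to nu_R, and nu_R carries a large
# Majorana mass M. Values in GeV, purely illustrative.
mD, M = 1.0e2, 1.0e8

mass = np.array([[0.0, mD],
                 [mD,  M]])
light, heavy = np.sort(np.abs(np.linalg.eigvalsh(mass)))

# the light eigenvalue reproduces the seesaw estimate mD^2 / M,
# while the heavy one stays at the Majorana scale M
print(light, heavy)
```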
In addition, the following terms are also allowed by all the symmetries of the model % \begin{equation}\begin{split} &\bar{\ell}_m \, \chi \, \nu^R_a \\ & \bar{L}_4 \, \tilde{\chi} \, \nu^R_a \\ & \bar{\ell}_1 \, \tilde{\rho}^* \, \nu^R_a \label{seesaw-terms2-z2} \end{split} \end{equation} % As in the previous case, the first two terms in \eqref{seesaw-terms2-z2} are distinct due to the $\mathbb{Z}_2$ symmetry (though in this case, a single term would not be dangerous, as it would only give mass to the third component of $\ell_m$ due to the vev alignment of $\chi$). \section{ Fermion Mass Matrices} \label{sec:ferm-mass-matr} The full Yukawa Lagrangian characterizing our model reads as follows: % \begin{itemize} \item for quarks we have \begin{equation} \begin{split} \mathcal{L}^{q}_Y&=\bigl(\bar q_m\chi^*Y^d_{mi}+\bar q_3\rho y^d_{3i}+\bar q_m \eta^* j^d_{mi}\bigr)D_{i}+\\&+\bigl(\bar q_3\chi Y^u_{3j}+\bar q_m\rho^*y^u_{mj}+\bar q_3\eta j^u_{3j}\bigr)U_{j}, \end{split} \label{eq:yukq} \end{equation} where $Y^{d,u}, y^{d,u}, j^{d,u}$ represent the Yukawa couplings introduced respectively for $\chi, \rho$ and $\eta$. We remind that $D$ represents any right-handed $d_{1,2,3}$ or $B_{1,2}$, $U$ represents any right-handed $u_{1,2,3}$ or $T_3$, and $m=1,2$.
% \item for leptons we have % \begin{equation} \label{eq:yuklep} \begin{split} \mathcal{L}^{\ell}_Y = & \bigl(Y_{1a} \bar\ell_1\chi^* + f_{ma} \bar\ell_m\rho + y_{1a} \bar\ell_1\eta^* \bigr)e_{a} \, + \,\bigl(Y_{1E} \bar\ell_1 \tilde{\chi}^* + y_{1E} \bar\ell_1 \tilde{\eta}^* \bigr) E_1 \, + \, f_{4E} \bar L_4 \tilde{\rho} E_1 \\ & \, + \, J_m \epsilon_{ijk}(\tilde{\chi}^*)^i(L_5)^{c\,k} \bar\ell_m^{j} \, + \, J_4 \epsilon_{ijk}(\chi^*)^i(L_5)^{c\,k} \bar{L}_4^{j} \, + \, j_{m} \epsilon_{ijk}(\tilde{\eta}^*)^i(L_5)^{c\,k} \bar\ell_m^{j} \\ & \, + \, j_{4} \epsilon_{ijk}(\eta^*)^i(L_5)^{c\,k} \bar{L}_4^{j} + \frac{K_{mn}}{\sqrt{2}} \bar\ell_m S_1(\ell_n)^c \, + \, \frac{K_{44}}{\sqrt{2}} \bar{L}_4 S_1 (L_4)^c + \frac{c_5}{\sqrt{2}} \bar L_5S_c(L_5)^c+ \\ & \, + \, (y_\eta)_{m s} \bar{\ell}_m \eta \nu_s^R \, + \, (y_{\tilde{\eta}})_{4s} \bar{L}_4 \tilde{\eta} \nu_s^R \,+ \, (Y_\chi)_{ms} \bar{\ell}_m \chi \nu_s^R \,+ \, (Y_{\tilde{\chi}})_{4 s} \bar{L}_4 \tilde{\chi} \nu_s^R \, + \, (y_{\tilde{\rho}})_{1s} \bar{\ell}_1 \tilde{\rho}^* \nu_s^R \\ & \,+ \, \frac{M^{s t}}{\sqrt{2}} \bar{\nu}_s^R (\nu_t^R)^c \, + \, \text{h.c.} \end{split} \end{equation} % where $Y, y, K, f, c, J, j, M$ represent the Yukawa couplings, with $m,n \in \{2,3\}$, $a,b \in \{1,2\}$, $s,t \in \{1,2,3\}$, and the $i,j,k \in \{1,2,3\}$ indices refer to SU(3)$_\text{L}$.
\end{itemize} The mass matrices for the up-type quarks ($\sqrt{2} M^u_{ij}$), shown in Table~\ref{tab:up}, and down-type quarks ($\sqrt{2} M^d_{ij}$), shown in Table~\ref{tab:down}, remain exactly the same as in Ref.~\cite{Descotes-Genon:2017ptp}, namely % \begin{table}[h!t] \centering \begin{tabular}{| c | c | c | c | c | } \hline Fields & \hspace{.05cm} $u^R_1$ \hspace{.05cm} & \hspace{.05cm} $u^R_2$ \hspace{.05cm} & \hspace{.05cm} $u^R_3$ \hspace{.05cm} & \hspace{.05cm} $T^R_3$ \hspace{.05cm} \hspace{.05cm} \\ \hline $\bar{u}^L_1$ & $ - y^u_{11} \langle \rho^*_2\rangle $ & $ - y^u_{12} \langle \rho^*_2\rangle $ & $ - y^u_{13} \langle \rho^*_2\rangle $ & $ - y^u_{14} \langle \rho^*_2\rangle $ \\ \hline $\bar{u}^L_2$ & $ - y^u_{21} \langle \rho^*_2\rangle $ & $ - y^u_{22} \langle \rho^*_2\rangle $ & $ - y^u_{23} \langle \rho^*_2\rangle $ & $ - y^u_{24} \langle \rho^*_2\rangle $ \\ \hline $\bar{u}^L_3$ & $ j^u_{31} \langle \eta_1 \rangle $ & $ j^u_{32} \langle \eta_1 \rangle $ & $ j^u_{33} \langle \eta_1 \rangle $ & $ j^u_{34} \langle \eta_1 \rangle $ \\ \hline $\bar{T}^L_3$ & $ Y^u_{31} \langle \chi_3 \rangle + j^u_{31} \langle \eta_3 \rangle$ & $ Y^u_{32} \langle \chi_3 \rangle + j^u_{32} \langle \eta_3 \rangle$ & $ Y^u_{33} \langle \chi_3 \rangle + j^u_{33} \langle \eta_3 \rangle$ & $ Y^u_{34} \langle \chi_3 \rangle + j^u_{34} \langle \eta_3 \rangle$ \\ \hline \end{tabular} \caption{ Up-type quark mass matrix $\sqrt{2} M^u_{ij}$.
Here the $L$ and $R$ superscripts indicate the left and right-handed fields.} \label{tab:up} \end{table} \begin{table}[h!t] \centering \begin{tabular}{| c | c | c | c | c | c | } \hline Fields & \hspace{.05cm} $d^R_1$ \hspace{.05cm} & \hspace{.05cm} $d^R_2$ \hspace{.05cm} & \hspace{.05cm} $d^R_3$ \hspace{.05cm} & \hspace{.05cm} $B^R_1$ \hspace{.05cm} & \hspace{.05cm} $B^R_2$ \hspace{.05cm} \\ \hline $\bar{d}^L_1$ & $ j^d_{11} \langle \eta^*_1\rangle $ & $ j^d_{12}\langle \eta^*_1\rangle $ & $ j^d_{13}\langle \eta^*_1\rangle $ & $ j^d_{14} \langle \eta^*_1 \rangle $ & $ j^d_{15} \langle \eta^*_1 \rangle $ \\ \hline $\bar{d}^L_2$ & $ j^d_{21} \langle \eta^*_1\rangle $ & $ j^d_{22}\langle \eta^*_1\rangle $ & $ j^d_{23}\langle \eta^*_1\rangle $ & $ j^d_{24} \langle \eta^*_1 \rangle $ & $ j^d_{25} \langle \eta^*_1 \rangle $ \\ \hline $\bar{d}^L_3$ & $ y^d_{31} \langle \rho_2 \rangle $ & $ y^d_{32}\langle \rho_2\rangle $ & $ y^d_{33}\langle \rho_2\rangle $ & $ y^d_{34} \langle \rho_2 \rangle $ & $ y^d_{35} \langle \rho_2 \rangle $ \\ \hline $\bar{B}^L_1$ & $ Y^d_{11} \langle \chi^*_3 \rangle + j^d_{11} \langle \eta^*_3 \rangle$ & $ Y^d_{12} \langle \chi^*_3 \rangle + j^d_{12} \langle \eta^*_3 \rangle$ & $ Y^d_{13} \langle \chi^*_3 \rangle + j^d_{13} \langle \eta^*_3 \rangle$ & $ Y^d_{14} \langle \chi^*_3 \rangle + j^d_{14} \langle \eta^*_3 \rangle$ & $ Y^d_{15} \langle \chi^*_3 \rangle + j^d_{15} \langle \eta^*_3 \rangle$ \\ \hline $\bar{B}^L_2$ & $ Y^d_{21} \langle \chi^*_3 \rangle + j^d_{21} \langle \eta^*_3 \rangle$ & $ Y^d_{22} \langle \chi^*_3 \rangle + j^d_{22} \langle \eta^*_3 \rangle$ & $ Y^d_{23} \langle \chi^*_3 \rangle + j^d_{23} \langle \eta^*_3 \rangle$ & $ Y^d_{24} \langle \chi^*_3 \rangle + j^d_{24} \langle \eta^*_3 \rangle$ & $ Y^d_{25} \langle \chi^*_3 \rangle + j^d_{25} \langle \eta^*_3 \rangle$ \\ \hline \end{tabular} \caption{ Down-type mass matrix $\sqrt{2} M^d_{ij}$. 
Here the $L$ and $R$ superscripts indicate the left and right-handed fields.} \label{tab:down} \end{table} Turning to the lepton mass matrices, we begin with the charged lepton mass matrix ($\sqrt{2} M^e_{ij}$), whose explicit form is given in Table~\ref{tab:chlep}. \begin{table}[h!t] \centering \begin{tabular}{| c | c | c | c | c | c | } \hline Fields & \hspace{.05cm} $e^R_1$ \hspace{.05cm} & \hspace{.05cm} $e^R_2$ \hspace{.05cm} & \hspace{.05cm} $e^R_3$ \hspace{.05cm} & \hspace{.05cm} $E^R_1$ \hspace{.05cm} & \hspace{.05cm} $E^R_4$ \hspace{.05cm} \\ \hline $\bar{e}^L_1$ & $ y_{11} \langle \eta^*_1\rangle $ & $ y_{12}\langle \eta^*_1\rangle $ & $0$ & $ y_{1E} \langle \tilde{\eta}^*_1 \rangle $ & $0 $ \\ \hline $\bar{e}^L_2$ & $ f_{21} \langle \rho_2 \rangle $ & $ f_{22} \langle \rho_2 \rangle $ & $ j_2 \langle \tilde{\eta}^*_1 \rangle $ & $0$ & $ -(J_2 \langle \tilde{\chi}^*_3 \rangle + j_2 \langle \tilde{\eta}^*_3 \rangle)$ \\ \hline $\bar{e}^L_3$ & $ f_{31} \langle \rho_2 \rangle $ & $ f_{32} \langle \rho_2 \rangle $ & $ j_3 \langle \tilde{\eta}^*_1 \rangle $ & $0$ & $ -(J_3 \langle \tilde{\chi}^*_3 \rangle + j_3 \langle \tilde{\eta}^*_3 \rangle)$ \\ \hline $\bar{E}^L_1$ & $ Y_{11} \langle \chi^*_3 \rangle + y_{11} \langle \eta^*_3 \rangle$ & $ Y_{12} \langle \chi^*_3 \rangle + y_{12} \langle \eta^*_3 \rangle$ & $ 0 $ & $ Y_{1E} \langle \tilde{\chi}^*_3 \rangle + y_{1E} \langle \tilde{\eta}^*_3 \rangle$ & $ 0 $ \\ \hline $\bar{E}^L_4$ & $0 $ & $ 0 $ & $ j_4 \langle \eta^*_1 \rangle $ & $ f_{4E} \langle \tilde{\rho}_2 \rangle $ & $ -(J_4 \langle \chi^*_3 \rangle + j_4 \langle \eta^*_3 \rangle) $ \\ \hline \end{tabular} \caption{ The charged lepton mass matrix $\sqrt{2} M^e_{ij}$. Here the subscripts of the vev-carrying scalars indicate the scalar components whose non-zero vevs enter a given entry. } \label{tab:chlep} \end{table} Concerning the mass matrix of the neutral fermions ($\sqrt{2} M^n_{ij}$), it incorporates type-I seesaw mass terms.
Its complete form is given in the Appendix, Table V. We have numerically verified that it leads to an adequate spectrum of light neutrino masses. \section{B flavour global analyses} \label{sec:b-flavour-global} These analyses are performed in the framework of the effective Hamiltonian at the $b$-mass scale, separating short- and long-distance physics in the Wilson coefficients and local operators~\cite{Grinstein:1987vj, Buchalla:1995vs}: \begin{equation} {\mathcal H}_{\rm eff}=-\frac{4G_F}{\sqrt{2}} V_{tb} V_{ts}^* \sum_i C_i O_i \end{equation} The main operators of interest for this discussion are the following: \begin{equation} \begin{split} O_7=&\frac{e}{16\pi^2} m_b (\bar s \sigma_{\mu\nu} P_R b)F^{\mu\nu}\\ O_{7'}=&\frac{e}{16\pi^2} m_b (\bar s \sigma_{\mu\nu} P_L b)F^{\mu\nu}\\ O_9^\ell=&\frac{e^2}{16\pi^2}(\bar s \gamma_{\mu} P_L b) (\bar\ell\gamma^\mu \ell)\\ O_{10}^\ell=&\frac{e^2}{16\pi^2}(\bar s \gamma_{\mu} P_L b) (\bar\ell\gamma^\mu\gamma^5 \ell)\\ O_{9'}^\ell=&\frac{e^2}{16\pi^2}(\bar s \gamma_{\mu} P_R b) (\bar\ell\gamma^\mu \ell)\\ O_{10'}^\ell=&\frac{e^2}{16\pi^2}(\bar s \gamma_{\mu} P_R b) (\bar\ell\gamma^\mu\gamma^5 \ell).\\ \end{split} \label{eq:OP} \end{equation} where $P_{L,R}=(1\mp \gamma_5)/2$ and the fields are understood as mass eigenstates. In the SM, only $O_7$, $O_9^\ell$ and $O_{10}^\ell$ are significant, with the values of the Wilson coefficients given as $C_9^\ell\simeq 4.1$ and $C_{10}^\ell\simeq -4.3$ at the scale $\mu=m_b$. In contrast, the primed operators are $m_s/m_b$ suppressed due to the chirality of the quarks involved. The analyses of several $b\to s\gamma$ and $b\to s\ell\ell$ observables (including angular ones) point towards a pattern of deviations consistent with a large NP short-distance contribution to $C_9^\mu$, around 1/4 of the SM contribution, see e.g. Refs.~\cite{Descotes-Genon:2015uva, Descotes-Genon:2016hem,Capdevila:2017bsm, Hiller:2003js}. 
Scenarios with NP contributions in $C_9^\mu$ only, in $(C_9^\mu,C_{10}^\mu)$ or in $(C_9^\mu,C_{9'}^\mu)$ seem particularly favoured. Moreover, the LFU violating observables agree well with the absence of significant NP contributions to any electron-type Wilson coefficient $C_{i}^{e}$. Results of the global fit analyses seem to rule out the possibility of large contributions from other operators suppressed in the SM, in particular scalar and pseudoscalar operators. They are constrained especially by the good agreement between the observed value of the $B_s \to \mu \mu$ branching ratio and its SM prediction, as well as by the limits on the $B \to X_s \gamma$ branching ratio. We proceed along the lines of the phenomenological analysis of Ref.~\cite{Descotes-Genon:2017ptp}, to which we refer for details. We focus on the vector/axial contributions, which are expected to be the largest ones. The neutral lepton mass matrix and the neutral lepton mixing do not affect the effective Hamiltonian contributing to the process, since the relevant operators only include charged leptons. Hence, after the expansion in $\epsilon=\Lambda_\text{EW}/\Lambda_\text{NP}$ (where NP denotes the 331 scale), one finds that nonzero contributions at the lowest order, namely $O(\epsilon^2)$, can only come from the neutral gauge bosons $Z'$ and $Z$.
The transitions mediated by the heavy gauge boson $Z'$ are expressed in the effective Hamiltonian by the term % \begin{widetext}\begin{eqnarray}\label{eff:Z'} \mathcal H_{\text{eff}}&\supset &\frac{g_X^2}{54\cos^2\theta_{331}}\frac{1}{M^2_{Z'}}V^{(d)*}_{3k}V^{(d)}_{3l} \frac{4\pi}{\alpha} \\\nonumber && \Biggl\{\left[-\frac 1 2 V^{(e)*}_{1i}V^{(e)}_{1j}+\frac{1-6\cos^2\theta_{331}}2W^{(e)*}_{3i}W^{(e)}_{3j}+\frac{1+3\cos^2\theta_{331}}{4}\delta_{ij}\right]O^{klij}_9+\\ &&\qquad+\left[\frac 1 2 V^{(e)*}_{1i}V^{(e)}_{1j}+\frac{1-6\cos^2\theta_{331}}2W^{(e)*}_{3i}W^{(e)}_{3j}+\frac{-1+9\cos^2\theta_{331}}{4}\delta_{ij}\right]O^{klij}_{10}\Biggr\}.\nonumber \end{eqnarray}\end{widetext} where the indices $k,l$ refer to the SM generations of the quark mass eigenstates (assuming $k\neq l$), while $i, j$ refer to the SM lepton mass eigenstates (either from the same or different generations). The effective operators $O_{{9,10}}^{klij}$ are defined exactly as in Eq.~\eqref{eq:OP}, taking into account the $(\bar q_k\, q_l) (\bar \ell_i\, \ell_j)$ flavour structure. Here $\alpha=e^2/(4 \pi)$ is the fine-structure constant. The $V$ and $W$ matrices are the mixing matrices arising from the diagonalisation of the EWSB mass terms in the subspace of left-handed and right-handed SM fields, with the superscripts $(d)$ and $(e)$ referring to down-type quarks and charged leptons, respectively. At the same lowest order, the contribution to the effective Hamiltonian given by the SM gauge boson $Z$ can be written as \begin{equation}\begin{split} \mathcal H_{\text{eff}}\supset \frac{\cos^2\theta_W(1+3\cos^2\theta_{331})}{8}\frac{g^2}{M^2_{Z}}\frac{4\pi}{\alpha}\sum_\lambda \hat{V}^{(d)*}_{\lambda k}\hat{V}^{(d)}_{\lambda l}\delta_{ij}\times\\\times\Bigl\{(-1+9\cos^2\theta_{331})O^{klij}_9+(1+3\cos^2\theta_{331})O^{klij}_{10}\Bigr\}.
\label{eff:Z}\end{split} \end{equation} where $\hat{V}^{(d)}$ represents the $O(\epsilon^1)$ correction to the rotation matrix $V^{(d)}$ between interaction and mass eigenstates for the left-handed down sector. % Notice that at this order the coupling is the same for all the light leptons, i.e. non-universality does not arise in the interaction with the $Z$. LFU violating contributions arise only from the $Z'$ contribution. In addition to LFU violation, the model allows for lepton-flavour violation, which we assume to be suppressed, in agreement with experimental restrictions, and set to zero for simplicity. These further assumptions constrain the parameter space $(C_9^\mu,C_{10}^\mu)$ to two scenarios detailed in Ref.~\cite{Descotes-Genon:2017ptp}. For both of them, we can compare the allowed regions with the latest data, as done in Fig.~\ref{fig:caseA}. In this figure, and from now on, we focus only on the non-SM contribution to the Wilson coefficients, that is, we set $C_i=C_i^{NP}$. The thick black intervals correspond to the 1$\sigma$ interval for the one-dimensional scenarios from the latest data \cite{Alguero:2021anc}. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{caseA} \includegraphics[width=0.3\textwidth]{caseB} \caption{Regions allowed for the Wilson coefficients $C_9^{\mu}$ and $C_{10}^{\mu}$ (abscissa and ordinate, respectively) in scenarios A (left) and B (right) described in Ref. \cite{Descotes-Genon:2017ptp}. The thick black intervals correspond to the 1$\sigma$ interval for one-dimensional scenarios \cite{Alguero:2021anc}.
} \label{fig:caseA} \end{figure} A comparison between the 2018 and 2021 intervals for $C_{9\mu}$ given by global analyses \cite{Capdevila:2017bsm, Alguero:2021anc} is reported below: \begin{itemize} \item $C_{9\mu}$, $C_{10\mu}=0$ \begin{equation} [-1.28,-0.94], \quad (2018) \end{equation} \begin{equation} [-1.20,-0.91], \quad (2021) \end{equation} \item $C_{9\mu}=-C_{10\mu}$ \begin{equation} [-0.75,-0.49], \quad (2018) \end{equation} \begin{equation} [-0.52,-0.37], \quad (2021) \end{equation} \end{itemize} As can be seen in Fig.~\ref{fig:caseA}, even with the new data both scenarios A and B are able to account for the observed anomalies, as long as we consider the $C_9^\mu=-C_{10}^\mu$ case. In our model the $b \to s \ell \ell$ transitions originate from the tree-level exchange of the $Z$ and $Z'$ gauge bosons. % The former breaks the GIM mechanism through the mixing between normal and exotic quarks, and depends on the Yukawa couplings. The latter involves just the unsuppressed exchange of the heavy $Z'$ gauge boson. Both give suppressed contributions to the $bsZ$ vertex, as can be seen in Fig.~\ref{fig:Bsmix}. To make a quantitative analysis we must take into account phenomenological constraints on the $Z$ and $Z'$ couplings. Restricting our discussion to the leading contributions of order ${\mathcal O}(\epsilon^2)$, the $Z$-exchange contribution to $B_s-\bar{B}_s$ mixing will have two such vertices, and hence the amplitude will be suppressed by a factor ${\mathcal O}(\epsilon^4)$. On the other hand, the $bs$ vertex is mediated by $Z'$ at ${\mathcal O}(\epsilon^0)$, implying that in this case only the suppression coming from the heavy propagator must be taken into account.
\begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{BBfeynm} \caption{Tree-level contributions to $B_s-\bar{B}_s$ mixing.} \label{fig:Bsmix} \end{figure} The corresponding part of the effective Hamiltonian is \begin{equation} \begin{split} &\mathcal H_{\rm eff}\supset \frac{g_X^2}{54M^2_{Z'}\cos^2 \theta_{331}}(V^{*(d)}_{3k}V^{(d)}_{3l})^2(\overline{D_k}\gamma^\mu D_l)(\overline{D_k}\gamma_\mu D_l)=\\ &=\frac{8G_F}{\sqrt 2 (3-\tan^2\theta_W)}\frac{M_W^2}{M_{Z'}^2}(V^{*(d)}_{3k}V^{(d)}_{3l})^2(\overline{D_k}\gamma^\mu D_l)(\overline{D_k}\gamma_\mu D_l)\\ \end{split} \end{equation} Our case of interest is $k=2$, $l=3$. The SM contribution to the mixing reads~\cite{Lenz:2010gu} \begin{equation} \mathcal H^\text{SM}_{\rm eff} = (V_{ts}^*V_{tb})^2\frac{G_F^2}{4\pi^2} M_W^2 \hat{\eta}_B S\Bigl(\frac{\overline{m_t}^2}{M_W^2}\Bigr)(\overline{s_L}\gamma^\mu b_L)(\overline{s_L}\gamma_\mu b_L) \end{equation} where $S$ is the Inami-Lim function and $\overline{m_t}$ is the top quark mass defined in the $\overline{\mathrm{MS}}$ scheme. As in Ref.~\cite{Lenz:2010gu}, we take $S\Bigl(\frac{\overline{m_t}^2}{M_W^2}\Bigr)\simeq 2.35$, for a top mass of about 165 GeV, and $\hat{\eta}_B=0.8393\pm 0.0034$, which includes QCD corrections. Considering the modulus of the ratio of the NP contribution over the SM one, we get \begin{equation}\begin{split} r_{B_s}&=\left|\frac{C_\text{NP}}{C_\text{SM}}\right| =\\&= \frac{32\pi^2|V^{*(d)}_{32}V^{(d)}_{33}|^2}{\sqrt 2 (3-\tan^2\theta_W)|V_{ts}^*V_{tb}|^2G_FM_W^2\hat{\eta}_B S} \frac{M_W^2}{M_{Z'}^2}\end{split} \end{equation} Here the only variables are $d=V^{*(d)}_{32}V^{(d)}_{33}$ and $M_{Z'}^2$ or, equivalently, $M_W^2/M_{Z'}^2$. In order to get a quantitative idea of the values allowed, we perform a scan varying $d$ in $[-1,1]$ (since $d$ consists of products of elements of unitary matrices).
We fix the range of the other variable $M_W/M_{Z'}$ to $[0,0.1]$, corresponding roughly to a NP scale of at least the order of 10 times the electroweak scale, and assume that the NP contributions to the $B_s$ mixing are at most $10\%$ by setting $r_{B_s}\leq 0.1$. For those values, we evaluate the NP contribution to the Wilson coefficient in the one-dimensional scenario with $C_9^\mu=-C_{10}^\mu$. The allowed values found in the scan are plotted in Fig.~\ref{fig:scanBB}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{ScanMixVfin1} \caption{Allowed points in the ($C^\mu_{9},r_{B_s}$) plane.} \label{fig:scanBB} \end{figure} We see that values of $C_9^\mu=-C_{10}^\mu$ can reach $-0.6$, in agreement with the results of global analyses of $b\to s\ell\ell$, corresponding to $r_{B_s}=0.1$, $M_W/M_{Z'}=0.1$ and $d\simeq -0.005$. The allowed region is limited by the fact that we have numerically taken \begin{equation}\begin{split} r_{B_s}&\simeq 347\cdot 10^3 \times \left(\frac{M_W}{M_{Z'}}\right)^2 \times d^2\leq 0.1\\ C_9^\mu &\simeq 11.3 \cdot 10^3 \times \left(\frac{M_W}{M_{Z'}}\right)^2 \times d\qquad |d|\leq 1\end{split} \end{equation} Therefore, in the simple one-dimensional scenario $C_9^\mu=-C_{10}^\mu$, the present 331 model can accommodate both $B_s-\bar{B}_s$ mixing and $b \to s \ell \ell$ data, with a NP scale (and in particular a $Z'$) around the TeV scale. Searches for high-mass dilepton resonances at ATLAS~\cite{ATLAS:2019erb} have set more stringent lower limits on the $Z'$ mass by comparison with different 331 models~\cite{Queiroz:2016gif}. As the limits on the $Z'$ mass from direct searches get higher, our points are pushed towards the plot edges, requiring a larger value of $r_{B_s}$. However, care must be taken when extrapolating results from other 331 models, especially minimal ones, since different couplings and interference patterns may affect the results of the searches.
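The boundary of the allowed region can be checked numerically. The following Python sketch is illustrative only (it is not the analysis code of this work); the prefactors $347\cdot 10^3$ and $11.3\cdot 10^3$ are the ones quoted in the approximate expressions above. Scanning $M_W/M_{Z'} \in [0,0.1]$ and $d \in [-1,1]$ recovers the most negative allowed value $C_9^\mu \simeq -0.6$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate numerical expressions quoted in the text:
#   r_Bs  ~ 347e3  * (MW/MZ')^2 * d^2
#   C9_mu ~ 11.3e3 * (MW/MZ')^2 * d     (scenario C9 = -C10)
R_PREF, C9_PREF = 347e3, 11.3e3

n = 200_000
ratio = rng.uniform(0.0, 0.1, n)   # MW / MZ'
d = rng.uniform(-1.0, 1.0, n)      # d = V*_{32} V_{33}

r_bs = R_PREF * ratio**2 * d**2
c9 = C9_PREF * ratio**2 * d

allowed = r_bs <= 0.1              # NP at most 10% of the SM mixing amplitude
print(f"most negative allowed C9: {c9[allowed].min():.2f}")

# Analytic boundary: |C9|_max = 11.3e3 * (MW/MZ') * sqrt(0.1 / 347e3)
c9_max = C9_PREF * 0.1 * np.sqrt(0.1 / R_PREF)
print(f"analytic boundary value: -{c9_max:.2f}")  # ~ -0.61
```

The analytic bound follows from saturating $r_{B_s}=0.1$ at $M_W/M_{Z'}=0.1$, which also reproduces the value $d\simeq -0.005$ quoted above.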
The lower bounds on the $Z'$ mass can also be significantly weaker than those obtained at the LHC if all decay channels of the $Z'$ into new particles are included. \section{Summary and outlook} \label{Conclusion} In this paper we have explored the possibility of explaining data on flavour anomalies for $B \to K^{(*)}$ decays within a 331 extension of the Standard Model. In particular, we have considered a new massive 331 $Z'$ boson coupled in a different way to muons and electrons. We are aware of the intrinsic limitations of fiddling with gauge couplings in the absence of a dedicated family symmetry. Nevertheless our analysis is encouraged by previous results in Ref. \cite{Descotes-Genon:2017ptp}, and motivated by recent data that tend to confirm flavour anomalies; in particular, 2021 LHCb data achieve a $3.1\sigma$ deviation from SM predictions in the $R_{K^{(*)}}$ observable in $B^+ \to K^+ \ell^+ \ell^-$ decays with 9 fb$^{-1}$ of proton-proton collision data \cite{LHCb:2021trn}. Prompted by these new data, we examined the viability of generalizing the scheme in Ref. \cite{Descotes-Genon:2017ptp} so as to provide a complete 331 model explaining LFU violation and generating viable neutrino masses through a type-I seesaw mechanism. We have shown the viability of a 331 gauge symmetry model setup putting together both flavour anomalies and a consistent neutrino mass spectrum. The model introduces new massive particles at mass scales allowed by current laboratory data and requires a sophisticated structure beyond the ``traditional'' 331 schemes. Indeed, in order to eliminate dangerous mass terms and mixings our model employs a $\mathrm{SU(3)_{c}\times SU(3)_{L}\times U(1)} \times \mathbb{Z}_2 \times \mathbb{Z}_3$ symmetry. The new global discrete symmetries ensure a realistic mass hierarchy pattern for the fermions.
Within the model-independent effective approach, deviations from lepton flavour universality in the $b \to s \ell \ell$ transitions are parameterized by new physics contributions to the Wilson coefficients. Our extended 331 model can generate the large new physics contributions to the $(C_9^\mu, C_{10}^\mu)$ parameters required by current global fits \cite{Capdevila:2017bsm,Alguero:2021anc}. Trying to stick to minimality requirements, we assumed that neutral gauge bosons give the dominant contributions to the flavour violating observables, without contributions to $b \to s e e$ or large lepton flavour violation of the form $b\to s\ell_1\ell_2$, as suggested by experimental observations. Within a simple one-dimensional scenario with opposite contributions to $C_9^\mu$ and $ C_{10}^\mu$, we accommodate both $B_s-\bar{B}_s$ mixing and $b\to s\ell\ell$ data, with a new physics $Z'$ mass scale around the TeV scale. Going to different values for $(C_9^\mu, C_{10}^\mu)$ would possibly extend the allowed parameter space for new physics. In order to comply with experimental limits for processes involving charged leptons, we assume that contributions to $b \to s \ell_1 \ell_2$ and other lepton-flavour violating processes are suppressed. This allows us to set constraints on the fermionic mixing matrices, as discussed in Ref.~\cite{Descotes-Genon:2017ptp}. In summary, we have reconciled the LFU violation data with a viable neutrino oscillation pattern in a 331 setup, a goal not achieved in earlier work. Our explanation for $B$-anomaly decays may be reformulated within alternative neutrino mass generation mechanisms such as the inverse seesaw mechanism. Likewise, the inclusion of dark matter may be implemented through a scotogenic approach.
\begin{table}[p] \vspace{-1.5cm} \centering\begin{tiny} \rotatebox{-90}{ \begin{minipage}{\textheight} \begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c | c | } \hline Fields & \hspace{.05cm} $(\nu^L_1)^c$ \hspace{.05cm} & \hspace{.05cm} $(\nu^L_2)^c$ \hspace{.05cm} & \hspace{.05cm} $(\nu^L_3)^c$ \hspace{.05cm} & \hspace{.05cm} $(\nu^L_4)^c$ \hspace{.05cm} & \hspace{.05cm} $(N^L_2)^c$ \hspace{.05cm} & \hspace{.05cm} $(N^L_3)^c$ \hspace{.05cm} & \hspace{.05cm} $(N^L_4)^c$ \hspace{.05cm} & \hspace{.05cm} $(N^L_5)^c$ \hspace{.05cm} & \hspace{.05cm} $\nu^R_1$ \hspace{.05cm} & \hspace{.05cm} $\nu^R_2$ \hspace{.05cm} & \hspace{.05cm} $\nu^R_3$ \hspace{.05cm} \\ \hline $\bar{\nu}^L_1$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-(y_{\tilde{\rho}})_{11}\langle \tilde{\rho}_2^*\rangle$ & $-(y_{\tilde{\rho}})_{12}\langle \tilde{\rho}_2^*\rangle$ & $ -(y_{\tilde{\rho}})_{13}\langle \tilde{\rho}_2^*\rangle$ \\ \hline $\bar{\nu}^L_2$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $ J_2 \langle \tilde{\chi}^*_3\rangle + j_2 \langle \tilde{\eta}^*_3\rangle$ & $ (y_\eta)_{21} \langle \eta_1\rangle$ & $ (y_\eta)_{22} \langle \eta_1\rangle$ & $ (y_\eta)_{23} \langle \eta_1\rangle$ \\ \hline $\bar{\nu}^L_3$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $ J_3 \langle \tilde{\chi}^*_3\rangle + j_3 \langle \tilde{\eta}^*_3\rangle$ & $ (y_\eta)_{31} \langle \eta_1\rangle$ & $ (y_\eta)_{32} \langle \eta_1\rangle$ & $ (y_\eta)_{33} \langle \eta_1\rangle$ \\ \hline $\bar{\nu}^L_4$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $ J_4 \langle \chi^*_3\rangle + j_4 \langle \eta^*_3\rangle$ & $ (y_{\tilde{\eta}})_{41} \langle \tilde{\eta}_1\rangle$ & $ (y_{\tilde{\eta}})_{42} \langle \tilde{\eta}_1\rangle$ & $ (y_{\tilde{\eta}})_{43} \langle \tilde{\eta}_1\rangle$ \\ \hline $\bar{N}^L_2$ & $0$ & $0$ & $0$ & $0$ & $K_{22} \langle S_1 \rangle$ & $K_{23} \langle S_1 \rangle$ & $0$ & $-j_2 \langle \tilde{\eta}^*_1 \rangle$ & $(y_\eta)_{21} \langle \eta_3\rangle + (Y_\chi)_{21}\langle 
\chi_3\rangle$ & $(y_\eta)_{22} \langle \eta_3\rangle + (Y_\chi)_{22}\langle \chi_3\rangle$ & $(y_\eta)_{23} \langle \eta_3\rangle + (Y_\chi)_{23}\langle \chi_3\rangle$ \\ \hline $\bar{N}^L_3$ & $0$ & $0$ & $0$ & $0$ & $K_{32} \langle S_1 \rangle$ & $K_{33} \langle S_1 \rangle$ & $0$ & $-j_3 \langle \tilde{\eta}^*_1 \rangle$ & $(y_\eta)_{31} \langle \eta_3\rangle + (Y_\chi)_{31}\langle \chi_3\rangle$ & $(y_\eta)_{32} \langle \eta_3\rangle + (Y_\chi)_{32}\langle \chi_3\rangle$ & $(y_\eta)_{33} \langle \eta_3\rangle + (Y_\chi)_{33}\langle \chi_3\rangle$ \\ \hline $\bar{N}^L_4$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $K_{44} \langle S_1 \rangle$ & $-j_4 \langle \eta^*_1 \rangle$ & $(y_{\tilde{\eta}})_{41} \langle \tilde{\eta}_3\rangle + (Y_{\tilde{\chi}})_{41}\langle \tilde{\chi}_3\rangle$ & $(y_{\tilde{\eta}})_{42} \langle \tilde{\eta}_3\rangle + (Y_{\tilde{\chi}})_{42}\langle \tilde{\chi}_3\rangle$ & $(y_{\tilde{\eta}})_{43} \langle \tilde{\eta}_3\rangle + (Y_{\tilde{\chi}})_{43}\langle \tilde{\chi}_3\rangle$ \\ \hline $\bar{N}^L_5$ & $0$ & $ J_2 \langle \tilde{\chi}_3\rangle + j_2 \langle \tilde{\eta}_3\rangle$ & $ J_3 \langle \tilde{\chi}_3\rangle + j_3 \langle \tilde{\eta}_3\rangle$ & $ J_4 \langle \chi_3\rangle + j_4 \langle \eta_3\rangle$ & $-j_2 \langle \tilde{\eta}_1 \rangle$ & $-j_3 \langle \tilde{\eta}_1 \rangle$ & $-j_4 \langle \eta_1 \rangle$ & $c_5 \langle S_c \rangle$ & $0$ & $0$ & $0$ \\ \hline $(\bar{\nu}^R_1)^c$ & $-(y_{\tilde{\rho}})_{11}\langle \tilde{\rho}_2\rangle$ & $ (y_\eta)_{21} \langle \eta_1^*\rangle$ & $ (y_\eta)_{31} \langle \eta_1^*\rangle$ & $ (y_{\tilde{\eta}})_{41} \langle \tilde{\eta}_1^*\rangle$ & $(y_\eta)_{21} \langle \eta_3^*\rangle + (Y_\chi)_{21}\langle \chi_3^*\rangle$ & $(y_\eta)_{31} \langle \eta_3^*\rangle + (Y_\chi)_{31}\langle \chi_3^*\rangle$ & $(y_{\tilde{\eta}})_{41} \langle \tilde{\eta}_3^*\rangle + (Y_{\tilde{\chi}})_{41}\langle \tilde{\chi}_3^*\rangle$ & $0$ & $M_{11}$ & $M_{12}$ & $M_{13}$ \\ \hline 
$(\bar{\nu}^R_2)^c$ & $-(y_{\tilde{\rho}})_{12}\langle \tilde{\rho}_2\rangle$ & $ (y_\eta)_{22} \langle \eta_1^*\rangle$ & $ (y_\eta)_{32} \langle \eta_1^*\rangle$ & $ (y_{\tilde{\eta}})_{42} \langle \tilde{\eta}_1^*\rangle$ & $(y_\eta)_{22} \langle \eta_3^*\rangle + (Y_\chi)_{22}\langle \chi_3^*\rangle$ & $(y_\eta)_{32} \langle \eta_3^*\rangle + (Y_\chi)_{32}\langle \chi_3^*\rangle$ & $(y_{\tilde{\eta}})_{42} \langle \tilde{\eta}_3^*\rangle + (Y_{\tilde{\chi}})_{42}\langle \tilde{\chi}_3^*\rangle$ & $0$ & $M_{21}$ & $M_{22}$ & $M_{23}$ \\ \hline $(\bar{\nu}^R_3)^c$ & $ -(y_{\tilde{\rho}})_{13}\langle \tilde{\rho}_2\rangle$ & $ (y_\eta)_{23} \langle \eta_1^*\rangle$ & $ (y_\eta)_{33} \langle \eta_1^*\rangle$ & $ (y_{\tilde{\eta}})_{43} \langle \tilde{\eta}_1^*\rangle$ & $(y_\eta)_{23} \langle \eta_3^*\rangle + (Y_\chi)_{23}\langle \chi_3^*\rangle$ & $(y_\eta)_{33} \langle \eta_3^*\rangle + (Y_\chi)_{33}\langle \chi_3^*\rangle$ & $(y_{\tilde{\eta}})_{43} \langle \tilde{\eta}_3^*\rangle + (Y_{\tilde{\chi}})_{43}\langle \tilde{\chi}_3^*\rangle$ & $0$ & $M_{31}$ & $M_{32}$ & $M_{33}$ \\ \hline \end{tabular} \label{nlep} \caption{The neutral lepton mass matrix $\sqrt{2} M^n_{ij}$ written so as to highlight the seesaw structure.} \end{minipage}} \end{tiny} \end{table} \newpage \begin{acknowledgments} G.R. and S. S. thank Natascia Vignaroli for interesting and useful discussions. A.A. is supported by the Talent Scientific Research Program of College of Physics, Sichuan University, Grant No.1082204112427 \& the Fostering Program in Disciplines Possessing Novel Features for Natural Science of Sichuan University, Grant No. 2020SCUNL209 \& 1000 Talent program of Sichuan province 2021. Work partially supported by Spanish grant PID2020-113775GB-I00 (AEI/ 10.13039/501100011033), Prometeo CIPROM/2021/054 (Generalitat Valenciana), by the Government of India, SERB Startup Grant SRG/2020/002303, by MIUR under Project No. 2015P5SBHT and by the INFN research initiative ENP. 
\end{acknowledgments} \bibliographystyle{utphys}
\section{Introduction} \label{sec:intro} Successful collaboration between agents requires coordination \citep{tomasello2005understanding,misyak2014unwritten, kleiman2016coordinate}, which is challenging because coordinated strategies can be arbitrary \citep{lewis1969convention, young1993evolution, lerer2018learning}. A priori, one can neither deduce which side of the road to drive on, nor which utterance to use to refer to $\heartsuit$ \citep{pal2020emergent}. In these cases coordination can arise from actors best responding to what others are already doing---i.e., following a convention. For example, Americans drive on the right side of the road and say ``heart'' to refer to $\heartsuit$ while Japanese drive on the left and say ``shinzo''. Yet in many situations prior conventions may not be available and agents may be faced with entirely novel situations or partners. In this work we study ways that agents may learn to leverage semantic relations between observations and actions to coordinate with agents they have had no experience interacting with before. To illustrate, consider the following situations where people can figure out how to coordinate without prior shared conventions. Imagine a store that sells strawberries and blueberries. You want to buy strawberries but you don't share any common language with the clerk. You are, however, wearing a red hat and you wave the hat to hint that the strawberries are what you want. The clerk has two baskets of strawberries remaining, and so you raise a single finger to indicate that you only want one of the baskets. The clerk produces a paper and plastic bag and you point to the paper bag to indicate that you want the paper one. These examples are so simple that they seem obvious: the red hat matches the color of strawberries, the number of fingers matches the number of baskets you want, and you extend a finger in the direction of the desired packaging \citep{grice1975logic}.
While obvious to people, who rely on a theory-of-mind in understanding others, we show that these inferences remain a challenge for multi-agent reinforcement learning agents. \begin{figure}[ht] \centering \includegraphics[width=0.3\textwidth]{figures/bk.png} \caption{The ``Bouba'' (right) and ``Kiki'' (left) effect.} \label{fig:bk} \end{figure} Less obvious examples are common in the cognitive science literature. Consider the shapes in Fig.~\ref{fig:bk}. When asked to assign the names ``Bouba'' and ``Kiki'' to the two shapes, people name the jagged object ``Kiki'' and the curvy object ``Bouba'' \citep{kohler1929gestalt}. This finding is robust across different linguistic communities and cultures and is even found in young children \citep{maurer2006shape}. The causal explanation is that people match a ``jaggedness''-feature and ``curviness''-feature in both the visual and auditory data. Across the above cases, there seems to be a generalized mechanism for mapping the features of the person's action onto the features of the action that the person desires the other agent to take. In the absence of norms or conventions, people may minimize the distance between these features when making a choice. This basic form of \textit{zero-shot coordination} (ZSC, defined more formally below) in humans predates verbal behavior \citep{tomasello2007new} and this capability has been hypothesized as a key predecessor to more sophisticated language development and acquisition \citep{tomasello2005understanding}. Modeling these capacities is key for building machines that can robustly coordinate with other agents and with people \citep{kleiman2016coordinate, dafoe2020open}. Might this general mechanism emerge through multi-agent reinforcement learning across a range of tasks? As we will show, reinforcement learning agents naively trained with self-play fail to learn to coordinate even in these obvious ways.
Instead, they develop arbitrary private languages that are uninterpretable both to the \emph{same} models trained with a different random seed and to human partners \citep{Hu.2020}. For instance, in the examples above, they would be equally likely to wave a red hat to hint that they want strawberries as to indicate that they want blueberries. These problems also emerge at scale in the decentralized partially observable Markov decision process (Dec-POMDP) benchmark Hanabi \citep{Bard.2019}. When agents are trained with self-play using standard architectures, they do not develop strategies that take into account the correspondence between the features of the actions (colored and numbered cards) and the observations of the game state (other colored and numbered cards). Unfortunately, developing an inductive bias that might take into account these correspondences is not straightforward because describing the kind of abstract knowledge that these agents lack in closed form is challenging. Rather than attempting to do so, we take a \textit{learning-based} approach. Our aim is to build an agent with the capacity to develop these kinds of abstract correspondences during self-play such that they can robustly succeed during \emph{cross-play}, a process where different models are paired together to play, or during play with humans. To summarize, our key contributions are: \begin{itemize}[nolistsep,leftmargin=*] \item We extend the Dec-POMDP formalism to allow actions and observations to be represented using shared features and design a human-interpretable environment for studying coordination with these enrichments. \item We evaluate the role of neural network architectures including feedforward, recurrent, and attention mechanisms on both cross-play generalization and the ability to create human-interpretable policies.
\item We demonstrate that an attention architecture which takes \emph{both} the action and observations as input allows the agent to exploit the semantic relationships between action and observation features for coordination, resulting in strong cross-play performance that outperforms baseline ZSC methods. \item We show that the above agents achieve human-level performance when paired with people in a behavioral experiment. The model demonstrates sophisticated human-like coordination patterns that exploit mutual exclusivity and implicature, two well-known phenomena studied in cognitive science \citep{markman1988,grice1975logic}. \end{itemize} \section{Background} \label{sec:bg} \textbf{Dec-POMDPs.} We start with decentralized partially observable Markov decision processes (Dec-POMDPs) to formalize our setting \citep{nair2003}. In a Dec-POMDP, each player $i$ receives an observation $\Omega^i(s) \in \mathcal{O}^i$ generated by the underlying state $s$, and takes an action $a^i \in \mathcal{A}^i$. Players receive a common reward $R(s,a)$ and the state transitions according to the function $\mathcal{T}(s,a)$. The historical trajectory is $\tau = (s_1, a_1, \dots, a_{t-1}, s_t)$. Player $i$'s action-observation history (AOH) is denoted as $\tau_t^i = (\Omega^i(s_1), a_1^i, \dots, a_{t-1}^i, \Omega^i(s_t))$. The policy for player $i$ takes as input an AOH and outputs a distribution over actions, denoted by $\pi^i(a^i\mid \tau_t^i)$. The joint policy is denoted by~$\pi$. \textbf{MARL and Zero-Shot Coordination.} The standard paradigm for training multi-agent reinforcement learning (MARL) agents in Dec-POMDPs is self-play (SP). However, the failure of such policies to achieve high reward when evaluated in cross-play (XP) is well-documented. \citet{carroll2019utility} used grid-world MDPs to show that agents trained with both SP and population-based training fail when paired with human collaborators.
\citet{Bard.2019,Hu.2020} showed that agents perform significantly worse when paired with independently trained agents than they do at training time in Hanabi, even though the agents are trained under identical circumstances. This drop in XP performance directly results in poor human-AI coordination, as shown in~\citep{Hu.2020}. \citet{psro} also found similar qualitative XP results in a partially-cooperative laser tag game. To address this issue, \citet{Hu.2020} introduced the \textit{zero-shot coordination (ZSC) setting, where the goal is to maximize the XP returns of independently trained agents using the same algorithm}.\footnote{In this work we use the language zero-shot coordination (and the acronym ZSC) technically, as defined above and in previous literature \citep{Hu.2020,hu2021off}, but also colloquially, to mean coordination between agents that did not train together.} Clearly, good performance in the ZSC setting is a necessary but insufficient condition for successful coordination with humans. If agents trained from independent runs or random seeds using the same algorithm cannot coordinate well with each other, it is unlikely they will be able to coordinate with agents with different model architectures, not to mention humans. Thus formulated, ZSC is an alternative to ad-hoc teamplay, a framework for measuring coordinated team success when faced with players with unknown behavior \citep{stone2010ad, barrett2011empirical}, which is assessed by measuring the average performance of the agent against a distribution of known others. A few methods have been developed for the ZSC setting. Other-play \citep[OP]{Hu.2020} exploits the symmetries in a given Dec-POMDP to prevent agents from learning permutation-equivalent but mutually incompatible policies. Another recent method, off-belief learning \citep[OBL]{hu2021off}, regularizes agents' ability to make inferences based on the behavior of others.
Compared to prior work on Hanabi in which SP scores were high but XP scores were low, both OP and OBL improve XP scores and show promising preliminary results in play with humans. However, neither of these algorithms exploits the correspondence between the features of actions and observations, as we show in this work. \textbf{Dot-Product Attention.} As we will see in our experiments, one way to leverage the correspondences between action features and observation features is by using attention mechanisms \citep{Vaswani.2017, bahdanau2016neural, xu2016show}. Given a set of input vectors $(x_1,...,x_m)$, dot-product attention uses three weight matrices $(Q,K,V)$ to obtain triples $(Q x_i, K x_i, V x_i)$ for each $i \in \{1, \dots, m\}$, called query vectors, key vectors, and value vectors. We abbreviate these as $(q_i, k_i, v_i)$. Next, for each $i, j$, dot-product attention computes logits using dot products $q_i \cdot k_j$. These logits are scaled and normalized with a softmax, giving weights $\alpha_{ij} = \mathrm{softmax}_j\bigl(q_i \cdot k_j / \sqrt{d_k}\bigr)$, where $d_k$ is the dimension of the key vectors; the $i$-th row of the output matrix is then the weighted sum $\sum_j \alpha_{ij} v_j$. We denote this output matrix as $\mathrm{Attention}(x_1, \dots, x_m)$. \section{Dec-POMDPs with Shared Action and Observation Features} \label{sec:saof} It is common to describe the states and observations in Dec-POMDPs using features, e.g. in card games each card has a rank and a suit. These featurized observations can be exploited by function approximators. In contrast, in typical RL implementations the actions are merely outputs of the neural network and the models do not take advantage of features of the actions. In the standard representation of Dec-POMDPs, actions are defined solely through their effect on the environment through the reward and the state transition functions. In real-world environments, however, actions are often grounded and can be described with semantic features that refer to the objects they act on, e.g. ``I pull the \emph{red lever}''.
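For concreteness, the dot-product attention defined in the Background can be written in a few lines of NumPy. The sketch below is illustrative (single head, random placeholder weights rather than learned ones); the final check shows the permutation-equivariant, set-based behavior that makes attention natural for processing hands of cards:

```python
import numpy as np

def attention(X, Q, K, V):
    """Single-head dot-product attention over m input vectors.

    X: (m, d) inputs; Q, K, V: (d, d_k) weight matrices.
    Row i of the output is sum_j alpha_ij * v_j, with
    alpha_i = softmax_j(q_i . k_j / sqrt(d_k)).
    """
    q, k, v = X @ Q, X @ K, X @ V
    logits = q @ k.T / np.sqrt(K.shape[1])        # scaled dot products q_i . k_j
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    alpha = np.exp(logits)
    alpha /= alpha.sum(axis=1, keepdims=True)     # row-wise softmax
    return alpha @ v

rng = np.random.default_rng(0)
m, d, d_k = 5, 8, 4                # e.g. 5 cards with 8 features each
X = rng.normal(size=(m, d))
Q, K, V = rng.normal(size=(3, d, d_k))

out = attention(X, Q, K, V)
print(out.shape)                   # (5, 4)

# Permuting the input set permutes the output rows the same way.
perm = np.array([2, 0, 1, 4, 3])
print(np.allclose(attention(X[perm], Q, K, V), out[perm]))  # True
```

Because self-attention treats its inputs as a set, shuffling the order of the cards merely permutes the per-card outputs, which is relevant when card positions are randomized as in our experiments.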
To allow action features to be used by RL agents, we first formalize the concept of observation and action features in Dec-POMDPs. We say a Dec-POMDP has \textit{observation features} if for at least one player $i$, we can represent the observation $\Omega^i(s)$ as a set of $\ell$ objects $\Omega^i(s) = \{O_1, \dots, O_{\ell}\}$, where each object $O_j = (f_1, \dots, f_{n_j})$ is described by a vector of $n_j$ features. Each of these features $f_k$ exists in a feature space $F_k$. Similarly, a Dec-POMDP has \textit{action features} if one can factor the representation of the actions into features $a^i = (\hat{f}_1, \dots, \hat{f}_m)$, where each action feature $\hat{f}_r \in \hat{F_r}$, $r = 1, \dots, m$, and $\hat{F_r}$ is the action feature space. In some Dec-POMDPs actions can be described using some of the \textit{same} features that describe the observations. For example, an agent might observe the ``red'' light and take the action of pulling the ``red'' lever, where ``red'' is a shared feature between observations and actions. In such cases there is a \textit{non-empty intersection} between $F_k$ and $\hat{F_r}$ (``shared action-observation features'') which may be exploited for coordination. Even in the absence of an exact match, the distance between similar features (e.g., ``pink'' and ``red'' vs. ``green'' and ``red'') might also be useful for coordination. We study this possibility in a novel generative environment with action and observation features described next. \section{The Hint-Guess Game} \label{sec:hp} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figures/scene.jpeg} \caption{Example scenarios in \textit{hint-guess}. Shown above are four hand-crafted scenarios that test distinct dimensions important for ZSC. The highlighted yellow card corresponds to a human-compatible choice. The two right scenarios require agents to reason about implicatures, i.e., the intuitive choice has zero feature overlap with the target card.
Model performance in these scenario types is shown in Table~\ref{tab:human}. } \label{fig:scene} \end{figure} To study Dec-POMDPs with shared action-observation features, we introduce a novel setting that we call \textit{hint-guess}. Hint-guess is a two-player game where players must coordinate to successfully guess a target card. The game consists of a \textit{hinter} and a \textit{guesser}. Both players are given a hand of $N$ cards, $H_1 = \{C^1_1, \dots, C^1_N\}$ for the \textit{hinter} and $H_2 = \{C^2_1, \dots, C^2_N\}$ for the \textit{guesser}. Each card has two features $(f_1, f_2)$ where $f_1 \in F_1$ and $f_2 \in F_2$. Cards in each hand are drawn independently and randomly with replacement, with equal probability for any combination of features. Both hands, $H_1$ and $H_2$, are public information exposed to both players. Before each game, one of the \textit{guesser's} cards, $C^2_i$, is randomly chosen to be the target card and its features are revealed to the \textit{hinter}, but not the \textit{guesser}. In the first round, the \textit{hinter} (who observes $H_1, H_2, C^2_i$) chooses a card of their own, which we refer to as $C^1_j$, to show to the \textit{guesser}. In the second round, the \textit{guesser} (who observes $H_1, H_2$, $C^1_j$) guesses which of its cards is the target. Both players receive a common reward $r=1$ if the features of the card played match those of the target, and $r=0$ otherwise. Fig.~\ref{fig:scene} shows some simple scenarios that probe key dimensions of coordination with $N=2$, $F_1 = \{1,2,3\}$ and $F_2 = \{A, B, C\}$. Each of these scenarios has a human-compatible and intuitive solution. The first scenario (exact match) is the simplest---the \textit{hinter} has a copy of the target card (2B) so it can simply hint 2B. The next scenario (feature similarity) requires reasoning about the features under some ambiguity since neither of the cards in the two hands is a direct match.
In this case, both cards in the \textit{hinter}'s hand share one feature with a card in the \textit{guesser}'s hand. Thus, the human-compatible strategy would be to match the cards that share features to each other. The third and fourth examples (labeled implicatures in Fig.~\ref{fig:scene}) require understanding the action embedded within its context, e.g. what the \textit{hinter} would have done had the goal been different. The third scenario invokes a simple kind of implicature: mutual exclusivity. In this scenario, human-compatible intuitive reasoning follows the logic of: ``if the target card \textit{was} 1B, the \textit{hinter} would choose 1B. So that means 1B is taken and 3C should correspond to 2A even though they share no feature overlap''. The final scenario combines feature similarity and mutual exclusivity. These scenarios are particularly interesting as deep learning models often struggle to effectively grapple with mutual exclusivity \citep{gandhi2020mutual}. \section{The Effect of Architecture Choice on Zero-Shot Coordination} \label{sec:arc} We consider the following architectures to investigate the effect of policy parameterization on the agents' ability to exploit shared action and observation features for ZSC. For details about the model architectures, see Appendix~\ref{app:modeldetails}. \textbf{Feedforward Networks (MLPs).} The most basic architecture we test is a standard fully connected feedforward network with ReLU activations. All featurized representations of objects in the observation are concatenated and fed into the network, which outputs the estimated Q-value for each action. There is no explicit representation of action-observation relationships in this model, since observations are inputs and actions are outputs. \textbf{Recurrent Networks (LSTMs).} We also examine a recurrent model, wherein we feed in objects in the observation (namely, vectors representing cards) sequentially to a long short-term memory (LSTM) network \citep{lstm}.
To improve trainability we concatenate all hidden states from each step and use them as input to a feedforward neural network, noting that this is unconventional. Like the MLP, the LSTM does not explicitly model the relationship between action and observation features. \begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth]{figures/model-scheme.pdf} \caption{Model architecture for the attention-based models. Top: Attention \ ($\mathrm{Att}$). Middle: Attention with Linear Constraints \ ($\mathrm{Att + Lin \ Cons}$). Bottom: Attention with Action as Input \ ($\mathrm{A2I}$). The red blocks denote featurized objects in the observation, e.g. cards in the deck. The cyan blocks denote featurized actions, e.g. cards in the hand that can be hinted/guessed. \emph{Self-Attn} and \emph{MLP} denote the attention and fully-connected layers, respectively.} \label{fig:model} \end{figure} \textbf{Attention \ (Att).} We also investigate three attention-based models as shown in Fig.~\ref{fig:model}. The first model processes the observations using attention, takes the object-wise mean, and feeds the output into a feedforward network, which produces a vector with a Q-value for each action \begin{align*} \mathrm{Q} &= \text{MLP}(\text{Mean}(\text{Attention}(O_1, \dots, O_n))). \end{align*} \textbf{Attention with Linear Constraints \ (Att + Lin Cons).} The second is set up such that the action-values are constrained to be linear in their features for each decision point. In this model, a linear function of the object-wise mean is multiplied with a linear function of the action feature vectors to produce the values for each action \begin{align*} \mathrm{S} &= \text{Mean}(\text{Attention}(O_1, \dots, O_n))\\ \mathrm{Q} &= \text{Linear}(\mathrm{S}) \cdot \text{Linear}({A}).
\end{align*} \textbf{Attention with Action as Input \ (A2I).} Lastly, we look at an attention-based architecture similar to $\mathrm{Att}$, where the featurized action is passed as input to the attention module(s) along with the observations. This outputs a single scalar value at a time, the estimated Q-value for the specific action being fed into the network \begin{align*} \mathrm{Q}_k &= \text{MLP}(\text{Mean}(\text{Attention}(O_1, \dots, O_n, A_k))) \end{align*} for $k=1, \dots, m$. To be clear, this architecture requires a forward pass for each action to calculate the Q-value vector. \section{Experiment Setup} We experimentally evaluate the architectures in the hint-guess game introduced in Section~\ref{sec:hp}. In Sections~\ref{subsec:xpp}-\ref{subsec:huamn}, we fix the hand size to be $N=5$ and the features to be $F_1 = \{1,2,3\}$ and $F_2 = \{A, B, C\}$.\footnote{There is nothing particular about the hand size, and as shown in Appendix~\ref{app:handsize}, similar results can be obtained with either a larger or smaller hand size.} We use a one-hot encoding for features; more specifically, we use a two-hot vector to represent the two features of each card. In Sections~\ref{subsec:sin}-\ref{subsec:multihead}, we examine a qualitatively different version of the game where $N=3$ and there is only one feature, $F_1 = \{0,1,...,19\}$. In this version, we investigate whether it is possible to capture ordinal relationships between actions using sinusoidal positional encodings. For these experiments, we encode each number as a $200$-dimensional vector consisting of sine and cosine functions of different frequencies, following the procedure of \citet{Vaswani.2017}.
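The per-action evaluation pattern of the $\mathrm{A2I}$ architecture described above can be illustrated in miniature. The sketch below is not the authors' implementation: the single-head dot-product attention, mean pooling, and linear value head are toy stand-ins, and all function names are ours.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(vectors):
    # single-head dot-product self-attention over a set of feature vectors;
    # each output is a convex combination of the inputs
    out = []
    for q in vectors:
        weights = softmax([sum(qi * ki for qi, ki in zip(q, k)) for k in vectors])
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(len(q))])
    return out

def mean_pool(vectors):
    # object-wise mean over the attended set
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def a2i_q_values(observations, actions, value_head):
    # A2I: one forward pass *per action*; the action's feature vector joins
    # the attended set, and the network emits a single scalar Q-value
    qs = []
    for a in actions:
        pooled = mean_pool(self_attention(observations + [a]))
        qs.append(value_head(pooled))
    return qs
```

Because the action enters the same attention module as the observations, the Q-value can depend directly on feature overlap between the action and each observed card, which is exactly the relational signal the other architectures must learn implicitly.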
For both variants of the game, the observation input is a sequence of card representations for both hands $H_1$ and $H_2$, as well as the representation of the target card $C^2_{i}$ (for the \textit{hinter}) or the hinted card $C^1_{j}$ (for the \textit{guesser}). We train agents in the standard self-play setting using independent Q-learning \citep[IQL]{tan1993multi}, where the \textit{hinter} and \textit{guesser} are jointly trained to maximize their score in randomly initialized games. To avoid giving the set-based attention architectures an unfair advantage, we also permute the cards in the hands observed by all agents so that agents are not able to coordinate using the position of the cards. To evaluate success, we consider the agents' performance and behavior in both SP and the ZSC setting. We also provide a fine-grained examination of their policies and investigate their ability to match the human-compatible response in different scenarios. See Appendix~\ref{app:modeldetails} for training details. \section{Result Analysis} \subsection{Cross-play Performance} \label{subsec:xpp} First, we evaluate model cross-play (XP) performance for each architecture in the ZSC setting. In this setting, agents from independent training runs with different random seeds are paired together. Fig.~\ref{fig:xp} records the scores obtained by each pair of agents, where the diagonal entries are the within-pair SP scores and the off-diagonal entries are XP scores. Table~\ref{tabel:xp} summarizes average SP and XP scores across agents. \textbf{Comparison Across Architectures.} Fig.~\ref{fig:xp} shows that the XP matrices of all architectures except $\mathrm{A2I}$ \ (Attention with Action as Input) lack an interpretable pattern. The XP score is near chance for these architectures, as shown in Table~\ref{tabel:xp}. In contrast, the XP matrix for the $\mathrm{A2I}$ \ model shows two clear clusters.
Within the clusters, agents show XP performance nearly identical to that of their SP, implying that they coordinate nearly perfectly with other agents trained with a different seed, whereas outside the clusters they achieve a return close to zero. As we will show in the next section, the upper cluster, which has a higher average XP score, corresponds to a highly interpretable and human-like strategy where agents \emph{maximize} the ``similarity'' between the target and the hint card (as well as between the hint card and the guess card). In the lower, second cluster, agents do the opposite: they try to hint/guess cards that share no common feature with the target/hint cards. In the rest of the paper, we will refer to the cluster where agents maximize the similarity between cards as \textbf{A2I Sim}, and the cluster where agents maximize the dissimilarity as \textbf{A2I Dissim}. However, as we will see in Section~\ref{subsec:pe}, the $\mathrm{A2I}$ \ agents do not just maximize/minimize feature similarity; they also demonstrate more sophisticated coordination patterns that exploit implicature.
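The SP and XP summary statistics used throughout this section are simple functions of a hinter-by-guesser score grid: the diagonal mean and the off-diagonal mean, respectively. A minimal sketch (function name ours):

```python
def sp_xp_scores(grid):
    # grid[i][j]: mean score when hinter i is paired with guesser j.
    # Self-play (SP) is the diagonal mean; cross-play (XP) is the
    # off-diagonal mean.
    n = len(grid)
    sp = sum(grid[i][i] for i in range(n)) / n
    off = [grid[i][j] for i in range(n) for j in range(n) if i != j]
    xp = sum(off) / len(off)
    return sp, xp
```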
\begin{table}[bth] \medskip \begin{center} \resizebox{0.75\textwidth}{!}{% \begin{tabular}{lll} \multicolumn{3}{c}{Model Architectures} \\ \hline Model & \multicolumn{1}{c}{Cross-Play} & \multicolumn{1}{c}{Self-Play} \\ \hline $\mathrm{MLP}$ & $0.27\pm 0.04$ & $0.85\pm 0.02$ \\ $\mathrm{LSTM}$ & $0.30\pm 0.05$ & $0.86\pm 0.01$ \\ $\mathrm{Att}$ & $0.27\pm 0.04$ & ${0.87}\pm {0.01}$ \\ $\mathrm{Att + Lin \ Cons}$ & $0.26\pm 0.03$ & $0.76\pm 0.02$ \\ \hline $\mathrm{A2I}$ & $0.37\pm 0.12$ & $0.76\pm 0.02$ \\ $\mathrm{A2I \ Sim}$ & ${0.77}\pm {0.01}$ & $0.82\pm 0.01$ \\ $\mathrm{A2I \ Dissim}$ & $0.71\pm 0.01$ & $0.72\pm 0.01$ \\ \hline & & \\ \multicolumn{3}{c}{Baseline Training Algorithms} \\ \hline Algorithm & \multicolumn{1}{c}{Cross-Play} & \multicolumn{1}{c}{Self-Play} \\ \hline OP & $0.35\pm 0.02$ & $0.35\pm 0.02$ \\ OBL (level 1) & ${0.27}\pm {0.05}$ & $0.29\pm 0.06$ \\ OBL (level 2) & $0.28\pm 0.04$ & $0.28\pm 0.05$ \\ \hline \end{tabular} } \end{center} \caption{Cross-play performance. Each entry is the average performance of 20 pairs of agents that are trained with different random seeds. The XP score is the off-diagonal mean of each grid. The SP score is the diagonal mean, i.e. the score attained when agents play with the peer they are trained with. A ``chance agent'' that acts randomly is expected to obtain a score of 0.28. All models in the ``Model Architectures'' part are trained with IQL \citep{tan1993multi}, and all training algorithms in the ``Baseline Training Algorithms'' part use an MLP architecture. } \label{tabel:xp} \end{table} \textbf{Comparison with ZSC Baselines.} The bottom part of Table \ref{tabel:xp} contains the SP and XP results for two recent ZSC algorithms, other-play \citep[OP]{Hu.2020} and off-belief learning \citep[OBL]{hu2021off}. For details and implementation of the baseline algorithms, see Appendix~\ref{app:baseline}. As shown, the XP scores for OP agents only show marginal improvement over $\mathrm{MLP}$ \ agents.
By preventing arbitrary symmetry breaking, OP improves XP performance, but only to a limited extent. In contrast, the OBL agents fail to obtain scores beyond chance both in XP and SP. This is expected as OBL is designed to explicitly prevent \textit{cheap talk}, i.e., sending costless messages between players, which is exactly the key for coordination in hint-guess. \subsection{Policy Examination} \label{subsec:pe} \begin{figure*}[ht] \centering \includegraphics[width=0.85\textwidth]{figures/xp_diff.pdf} \caption{Cross-play matrices. Visualization of paired evaluation of different agents trained under the same method. The y-axis represents the agent index of the \textit{hinter} and the x-axis represents the agent index of the \textit{guesser}. Each block in the grid is obtained by evaluating the pair of agents on 10K games with different random seeds. Numerical performance is shown in Table~\ref{tabel:xp}.} \smallskip \label{fig:xp} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.96\textwidth]{figures/mp.pdf} \caption{Conditional probability matrices. We show $\Pr(\mathrm{Guess} \mid \mathrm{Hint})$, the probability for the \textit{guesser} to guess a particular card (x-axis) when the hinted card is the card on the y-axis. Each subplot is the sample average of 20 agent-pairs with different seeds for 1K games within pair.} \smallskip \label{fig:mp} \end{figure*} \textbf{Conditional Probability Analysis.} In Fig.~\ref{fig:mp}, we provide the conditional probability for the \textit{guesser} to guess a card given the hinted card (bottom row).\footnote{The conditional probability matrices for the \textit{hinter} to hint a card for a given target card are essentially identical, so we omit them.} One crucial thing to analyze is whether agents assign different probabilities to actions based on the features they share with the observation.
One can see that for $\mathrm{MLP}$, $\mathrm{LSTM}$ \ and $\mathrm{Att}$, the probability matrices for both target-hint and hint-guess are nearly uniform. This implies that the SP policies across seeds each form their own private language for arbitrary and undecipherable coordination. While $\mathrm{Att + Lin \ Cons}$ \ shows some preference for actions that share one or two features with the observations, the probability matrix remains noisy. In contrast, for the two clusters of $\mathrm{A2I}$ \ agents the correlation (or anti-correlation) between the action features and target/hint card features is much stronger. For $\mathrm{A2I \ Sim}$, both the \textit{hinter} and the \textit{guesser} prioritize exact matches when they are present. If the exact match is not present, they turn to cards that share one feature in common. The $\mathrm{A2I \ Dissim}$ \ agents do the exact opposite---matching cards together that share as few features as possible. \textbf{Human Compatibility Analysis.} However, we find that the nuance with which these clusters play goes beyond simply maximizing or minimizing feature similarity. To demonstrate this, we run simulations on the four scenarios (exact match, feature similarity, mutual exclusivity, exclusivity+similarity) shown in Fig.~\ref{fig:scene} and described in Section~\ref{sec:hp}. In Table~\ref{tab:human}, we record the percentage of times where $\mathrm{A2I}$ \ agents in each cluster chose the human-compatible actions in Fig.~\ref{fig:scene}. We find that $\mathrm{A2I \ Sim}$ \ agents demonstrate coordination patterns that are nearly identical to a human-compatible policy. These results are surprising given that our models have never been trained with any human data. Furthermore, mutual exclusivity was thought to be hard for deep learning models to learn \citep{gandhi2020mutual}. 
In contrast, $\mathrm{A2I \ Dissim}$ \ agents always perform actions that are the \textit{opposite} to the human-compatible policy, but this policy \emph{per se} is still interpretable and non-arbitrary. \begin{table*}[t] \centering \smallskip \resizebox{0.86\textwidth}{!}{% \begin{tabular}{lccccccccccc} & \multicolumn{2}{c}{Self-Play ($\mathrm{A2I \ Sim}$)} & \multicolumn{1}{l}{} & \multicolumn{2}{l}{Cross-Play ($\mathrm{A2I \ Sim}$)} & \multicolumn{1}{l}{} & \multicolumn{2}{l}{Self-Play ($\mathrm{A2I \ Dissim}$)} & \multicolumn{1}{l}{} & \multicolumn{2}{l}{Cross-Play ($\mathrm{A2I \ Dissim}$)} \\ \cline{2-12} Scenario & \multicolumn{1}{l}{Human (\%)} & \multicolumn{2}{l}{Win (\%)} & \multicolumn{1}{l}{Human (\%)} & \multicolumn{2}{l}{Win (\%)} & \multicolumn{1}{l}{Human (\%)} & \multicolumn{2}{l}{Win (\%)} & \multicolumn{1}{l}{Human (\%)} & \multicolumn{1}{l}{Win (\%)} \\ \hline \multicolumn{1}{l|}{Exact match} & 100.0 & 100.0 & & 100.0 & 100.0 & & 0.0 & 100.0 & & 0.0 & 100.0 \\ \multicolumn{1}{l|}{Feature similarity} & 100.0 & 100.0 & & 100.0 & 100.0 & & 0.0 & 100.0 & & 0.0 & 100.0 \\ \multicolumn{1}{l|}{Mutual exclusivity} & 100.0 & 100.0 & & 100.0 & 100.0 & & 9.3 & 91.2 & & 9.3 & 92.2 \\ \multicolumn{1}{l|}{Similarity + Exclusivity} & 92.0 & 91.7 & & 97.9 & 99.5 & & 3.2 & 98.4 & & 0.0 & 99.9 \\ \hline \end{tabular} } \caption{Behavioral analysis for the $\mathrm{A2I}$ \ model in the Fig.~\ref{fig:scene} scenarios. We randomly chose 20 agent-pairs from each cluster and simulated the same scenario 1K times. Human (\%) denotes the fraction of games where the \textit{hinter} hints the card that corresponds to human-compatible choice (highlighted in yellow in Fig.~\ref{fig:scene}), and Win (\%) denotes the fraction where the \textit{guesser} correctly guesses.} \label{tab:human} \end{table*} \subsection{Human-AI Experiments} \label{subsec:huamn} We recruited 10 university students to play hint-guess. 
Each subject played as \textit{hinter} for 15 randomly generated games, totaling 150 different games. These subjects were then cross-matched to play as \textit{guessers} with the hints their peers generated. The human hints were also fed into randomly sampled $\mathrm{MLP}$ \ and $\mathrm{A2I \ Sim}$ \ \textit{guesser}-agents to test AI performance against human partners. The experiment was carefully designed so that the hinter is never informed of the guesser's guess and the guesser is never informed of the true target card. This experimental design ensures that the human participants generate zero-shot data, and do not optimize their play using previous experience. Further details of the experiment are in Appendix \ref{app:human}. \textbf{ZSC Performance.} In the right table of Fig.~\ref{fig:human-exp} we report the average zero-shot coordination (ZSC) scores obtained by \emph{hinter-guesser} pairs for human-human, human-$\mathrm{MLP}$, and human-$\mathrm{A2I \ Sim}$. Humans obtained an average ZSC score of $0.75$ with their peers. As a baseline, the $\mathrm{MLP}$ \ \emph{guessers} show poor performance in understanding human-generated hints, barely outperforming random guessing. In contrast, the $\mathrm{A2I \ Sim}$ \ \textit{guessers} achieve human-level performance with an average ZSC score of $0.77$ with humans. Note that this score is very close to the average ZSC score in Table~\ref{tabel:xp}, where $\mathrm{A2I \ Sim}$ \ agents cross-played among themselves. \textbf{Human-AI Behavior Correlation.} We also investigate two kinds of correlations between human play and AI play. The right table of Fig.~\ref{fig:human-exp} shows the percentage of games where model \emph{guessers} chose the same action as the human \emph{guessers}. In $80.7 \%$ of the games, the $\mathrm{A2I \ Sim}$ \ agents and human \emph{guessers} agree on the same action across many different scenarios.
In contrast, the $\mathrm{MLP}$ \ \emph{guessers} deviate from human \emph{guessers}, with only $40.7 \%$ agreement. \textbf{Human-AI Performance Correlation.} The left plot of Fig.~\ref{fig:human-exp} shows the correlations between human-human play and human-AI play. As expected, across humans we observed a range of skill levels at the game, with some hinters not even achieving 50\% guess accuracy when paired with other humans, while others exceeded 90\% (as measured along the x-axis). We observe that the performance of human-$\mathrm{A2I \ Sim}$ \ pairs increased substantially with the skill level of the human, whereas the performance of human-$\mathrm{MLP}$ \ pairs was less sensitive along this axis. Taken together, these results suggest that the $\mathrm{A2I \ Sim}$ \ agent is both better at coordinating with people than a baseline model and is also better at coordinating with the people who are better at coordinating with people. \begin{figure}[htb] \centering \begin{subfigure}[b]{\textwidth} \includegraphics[width=\textwidth]{figures/human-experiment.pdf} \end{subfigure} \hfill \vspace{-7mm} \caption{Human-AI ZSC results comparing human-human pairs, human-$\mathrm{A2I \ Sim}$ \ pairs and human-$\mathrm{MLP}$ \ pairs. In the left plot, each point corresponds to a particular human hinter. In the table, ``agree'' measures the percentage of games in which the guesser selected the same card as the human guesser.} \label{fig:human-exp} \end{figure} \subsection{Sinusoidal Encoding} \label{subsec:sin} The previous subsections investigated a variant of hint-guess in which the most important mode of comparison between features was whether they were equal or non-equal. In these settings the $\mathrm{A2I}$ \ models were able to learn sophisticated cognitive patterns like mutual exclusivity from one-hot encodings of inputs. However, one-hot encoding does not capture richer semantic relationships between features.
For instance, in hint-guess, if cards are encoded one-hot, the agent can only ``know'' that the card 1A has the same first label as the card 1B, but it cannot ``know'' whether the number 1 is closer to 2 than to 5. Thus, in this section, we investigate whether a more expressive encoding enables the $\mathrm{A2I}$ \ model to learn to leverage the ordinal relationship between features. Specifically, we examine the performance of sinusoidal positional encodings in a variant of hint-guess in which the only card feature is a number between 0 and 19, as described in the experiment setup. We show the results of this experiment in Table~\ref{table:sine}, which shows SP and XP performance of $\mathrm{A2I}$ \ agents with one-hot encoding and sinusoidal encoding in this single-feature setting. Agents with one-hot encoding are near chance in XP. They do not form clusters as observed before. We hypothesize that the failure of one-hot agents is because of the large feature space (20 numbers) relative to the small number of features (1). Because the feature space is large and the agents are only sensitive to exact overlap, the performance gain in SP is marginal. Thus, one-hot agents degenerate into using arbitrary conventions, resulting in a large performance gap between SP and XP. Agents with sinusoidal encodings, in contrast, split into two clusters (named $\mathrm{A2I \ Sim}$ \ and $\mathrm{A2I \ Dissim}$ \ as before), wherein each cluster has near-perfect SP and XP scores with no significant performance gap. We find that these agents learn to exploit the ordering and distance information between the numbers for coordination. $\mathrm{A2I \ Sim}$ \ agents rank the \emph{hinter}'s and \emph{guesser}'s hands in the same order and match the corresponding numbers as hint-guess pairs. $\mathrm{A2I \ Dissim}$ \ agents, on the other hand, rank one hand in ascending order and the other hand in descending order for matching. See Fig.~\ref{fig:order-matching} for a concrete example.
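The two order-matching schemes described above can be stated compactly. The sketch below assumes hands without duplicate numbers, and the function names are ours:

```python
def same_order_matching(hinter_hand, guesser_hand):
    # A2I Sim in the single-number variant: sort both hands the same way
    # and pair numbers by rank
    return list(zip(sorted(hinter_hand), sorted(guesser_hand)))

def reversed_order_matching(hinter_hand, guesser_hand):
    # A2I Dissim: sort one hand ascending and the other descending,
    # then pair by rank
    return list(zip(sorted(hinter_hand), sorted(guesser_hand, reverse=True)))
```

For the hands $(1,2,3)$ and $(2,3,4)$, same-order matching yields the pairs 1-2, 2-3, 3-4, while reversed-order matching yields 1-4, 2-3, 3-2.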
For both strategies, if the hinter does not have duplicate numbers in its hand, agents obtain a near-perfect play score. Indeed, in an XP simulation across 15 agent-pairs with 1K games per pair, where each agent's hand is drawn \emph{without} replacement (so no duplicates), the $\mathrm{A2I \ Sim}$ \ agents hint/guess exactly according to the same-order matching scheme $99.9\%$ of the time. Likewise, the $\mathrm{A2I \ Dissim}$ \ agents hint/guess according to the reversed-order matching scheme $99.9\%$ of the time. \begin{table}[bth] \medskip \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{lrrrr} \hline Encoding & \multicolumn{1}{l}{SP} & \multicolumn{1}{l}{XP} & \multicolumn{1}{l}{XP ($\mathrm{A2I \ Sim}$)} & \multicolumn{1}{l}{XP ($\mathrm{A2I \ Dissim}$)} \\ \hline One-hot & $0.81 \pm 0.02$ & $0.36 \pm 0.10$ & - & - \\ Sinusoidal & $0.92 \pm 0.01$ & $0.52 \pm 0.16$ & $0.92 \pm 0.01$ & $0.93 \pm 0.01$ \\ \hline \end{tabular} } \end{center} \vspace{-5mm} \caption{SP and XP scores for $\mathrm{A2I}$ \ agents with one-hot and sinusoidal encodings. Agents with sinusoidal encoding form two clusters so we also show the within-cluster results.} \label{table:sine} \end{table} \vspace{-2mm} \begin{figure}[ht] \centering \includegraphics[width=0.95\textwidth]{figures/order-matching.pdf} \caption{A scenario illustrating the behavior of $\mathrm{A2I}$ \ agents. In this scenario, the \emph{hinter}'s hand is $(1,2,3)$ and the \emph{guesser}'s hand is $(2,3,4)$ (the actual hands seen by agents will be permuted). We find that with probability close to 1, $\mathrm{A2I \ Sim}$ \ agents use a strategy that exploits \emph{same order matching} (left). They sort both hands in the same order and match 1-2, 2-3, 3-4, etc. Also with probability close to 1, $\mathrm{A2I \ Dissim}$ \ agents use \emph{reversed order matching} (middle). They sort one hand in ascending order and the other in descending order and match.
For comparison, we also show \emph{naive feature similarity} (right), which solely maximizes feature similarity; this strategy will match 1-2, 2-2, 3-3 and leave out 4.} \label{fig:order-matching} \end{figure} \vspace{-4mm} \subsection{Multi-head and Multi-layer Attention} \label{subsec:multihead} We also find that the results for $\mathrm{A2I}$ \ architectures are qualitatively similar when using multi-head or multi-layer attention or both, as shown in Appendix~\ref{app:robust}. This suggests that $\mathrm{A2I}$ \ may also be able to produce human-compatible policies in settings where larger architectures are required for effective learning. \section{Related Work} \label{sec:rel} \textbf{Attention for Input-Output Relationships.} Exploiting semantic relationships between inputs and outputs via an attention-based model has been studied in the deep learning literature. In natural language processing, such an idea is commonly used in question answering models \citep{santos2016,tan2016,yang2016}. For instance, \citet{yang2016} form a matrix that represents the semantic matching information of term pairs from a question and answer pair, and then use dot-product attention to model question term importance. For regression tasks, \citet{kim2019} proposed attentive neural processes (ANP) that use dot-product attention to allow each input location to attend to the relevant context points for the prediction, and applied ANP to vision problems. \textbf{Human Coordination.} Our work is also inspired by how humans coordinate in cooperative settings. Theory-of-mind, the mechanism people use to infer intentions from the actions of others, plays a key role in structuring coordination \citep{wu2021too, shum2019theory}. In particular, rational speech acts (RSA) is an influential model of pragmatic implicature \citep{frank2012predicting, goodman2013knowledge}.
At the heart of these approaches are probabilistic representations of beliefs that allow for modeling uncertainty and recursive reasoning about the beliefs of others, enabling higher-order mental state inferences. This recursive reasoning step also underlies the cognitive hierarchy and level-K reasoning models, and is useful for explaining certain focal points \citep{camerer2011behavioral, stahl1995players, camerer2004cognitive}. However, constructing recursive models of players' beliefs and behavior is computationally expensive, as each agent must construct an exponentially growing number of models of each agent modeling each other agent. As a result, recursive models are often limited to one or two levels of recursion. Furthermore, none of these approaches can by itself take advantage of the shared features across actions and observations. \section{Conclusion} We investigated the effect of network architecture on the ability of learning algorithms to exploit the semantic relationship between shared features across actions and observations for coordination, comparing the behavior of agents with feedforward, recurrent, and attention-based architectures. We found that attention-based architectures that jointly process a featurized representation of observations and actions have a better inductive bias for exploiting this relationship. Our results suggest that this is a promising architecture to investigate for more complex games in the zero-shot coordination setting, like Hanabi or Overcooked \citep{wu2021too, carroll2019utility}. \clearpage
\section{Introduction} The question of whether the spatial geometry of our universe is open, flat, or closed, characterized by the spatial curvature parameter $\Omega_k$ with $\Omega_k>0$, $\Omega_k=0$, and $\Omega_k<0$, respectively, is a fundamental issue related to the origin and evolution of the universe. Inflationary cosmology predicts a flat universe, and this has been confirmed by the precise measurements of the cosmic microwave background (CMB) \citep{Guth:1980zm,Linde:1981mu,Bennett:1996ce}. The latest Planck 2018 results reported a very stringent constraint on the curvature parameter, $\Omega_k=0.001\pm0.002$, which is from the combination of CMB power spectra data and baryon acoustic oscillation (BAO) measurements in the framework of the $\Lambda$ cold dark matter ($\Lambda$CDM) model \citep{Planck:2018vyg}. Although there is a precise constraint on $\Omega_k$ indicating a flat universe, two points should be noted. First, this tight constraint depends on a specific cosmological model and is based on early-universe measurements. The Hubble tension problem \citep{Riess:2019cxk,DiValentino:2021izs,Vagnozzi:2019ezj,Zhang:2019ylr,Qi:2019zdk,Vattis:2019efj,Zhang:2014ifa,Guo:2018ans,Zhao:2017urm,Guo:2017qjt,Guo:2019dui,Feng:2019jqa}, the most serious crisis in modern cosmology, implies a disagreement between the early universe and the late universe within the framework of modern cosmological theory \citep{Verde:2019ivm,Riess:2019cxk,DiValentino:2021izs}. Therefore, it is necessary to remeasure the curvature parameter using late-universe observations, preferably with cosmological model-independent methods. Second, recent studies \citep{DiValentino:2019qzk,Handley:2019tkm} concerning the curvature parameter found that the Planck power spectra prefer a closed universe at more than 99\% confidence level. However, combining the Planck data with BAO data prefers a flat universe, with a small error of 0.002.
Conclusions regarding $\Omega_k$ from the combination of these data sets should be treated with suspicion. Thus, this further urges us to re-examine the constraints on $\Omega_k$ through a cosmological model-independent method and using low-redshift observations. Based on the distance sum rule, \citet{Rasanen:2014mca} presented a cosmological model-independent method to constrain the cosmic curvature parameter with the combination of strong gravitational lensing (SGL) observations and Type Ia supernovae (SN Ia) data and obtained an $\Omega_k$ value close to zero but with poor precision. Subsequently, this method has been fully implemented with larger SGL and SN Ia samples \citep{Liu:2020bzc,Xia:2016dgk,Li:2018hyr,Wang:2019yob,Zhou:2019vou} as well as other distance indicators such as intermediate-luminosity quasars \citep{Qi:2018aio}. However, the results of these previous works on the constraints of $\Omega_k$ are not consistent. For instance, with a prior from CMB observations, $\Omega_k \leq -0.1$, \citet{Rasanen:2014mca} and \citet{Xia:2016dgk} obtained that $\Omega_k$ is close to zero. However, without the prior from CMB, \citet{Li:2018hyr} constrained $\Omega_k$ with a larger SN Ia sample and found that a closed universe is preferred. The reason for this inconsistency is probably the addition of the CMB prior. Alternatively, the bias in the estimation of $\Omega_k$ could also be caused by the limited number of available SGL samples, which brings unknown systematic errors. Specifically, constraining $\Omega_k$ using the distance sum rule requires calibrating the distances of lenses and sources in SGL systems by using other distance indicators. The maximum redshift of the distance indicators determines the number of SGL systems that can be calibrated.
At present, the maximum redshift of sources in the observed SGL sample is about 3.6, while the maximum redshift of the SN Ia sample commonly used as a distance indicator is only about 2.3, which means that some SGL systems cannot be calibrated. Therefore, we need other distance probes capable of reaching higher redshifts. On the other hand, a disadvantage of SN Ia is that they cannot provide absolute distances unless calibrated by the distance ladder. Therefore, it is necessary to develop other reliable cosmological probes to constrain $\Omega_k$. The successful detections of gravitational waves (GWs) \citep{LIGOScientific:2016aoc,LIGOScientific:2017vwq} bring us into the era of GW astronomy and multi-messenger astronomy. The absolute luminosity distance can be determined by analysing a GW's waveform, which is referred to as a standard siren \citep{Schutz:1986gp}. By comparison, only relative distances can be obtained for SN Ia. If the redshift of a GW event is obtained through the electromagnetic (EM) counterpart or its host galaxy, the distance-redshift relation can be established, which is of importance for cosmological studies \citep{Qi:2019spg,Qi:2019wwb,Zhao:2010sz,Wang:2018lun,Zhang:2019ylr,Wang:2019tto,Zhang:2019loq,Zhang:2019ple,Zhao:2019gyk,Jin:2022tdf,Jin:2022qnj,Jin:2020hmc,Wang:2021srv,Jin:2021pcv,Bian:2021ini}. According to conservative estimates, the third-generation ground-based GW observatory, such as the Einstein Telescope (ET), which is one order of magnitude more sensitive than the current GW detectors, can detect 1000 GW events with redshift information from binary neutron star (BNS) mergers in a ten-year observation \citep{Nissanke:2009kt,Zhao:2010sz,Cai:2016sby,Zhao:2017cbb,Chen:2020zoq}. Moreover, the detectable redshifts of GWs could reach much higher values. There is no doubt that the observations of GWs will become an important tool for cosmological studies in the near future.
Considering the above model-independent constraints on $\Omega_k$ based on the method of the distance sum rule, GW observations could provide a perfect complement to traditional cosmological probes. Therefore, in this paper, we will investigate how GWs as a distance indicator will affect the constraints on $\Omega_k$ in the near future of GW astronomy. Our investigation includes two parts. First, based on ET in its 10-year observation, we simulate 1000 GW standard sirens and constrain $\Omega_k$ in combination with the latest observed SGL sample. {Since this method depends strongly on the lens models characterizing the mass distribution of lens galaxies \citep{Qi:2018aio}, we will perform the constraint on $\Omega_k$ in three lens models extensively used in strong lensing studies.} Next, we consider the possible developments of the next decades. During the construction and subsequent observation of ET, ongoing and future massive surveys such as the Large Synoptic Survey Telescope (LSST) and the Dark Energy Survey will provide a large sample of well-measured SGL systems. For example, according to the prediction of~\citet{Collett:2015roa}, the LSST survey could potentially observe $1.2\times10^5$ SGL systems. In this paper, we also make a forecast for what constraints on $\Omega_k$ can be obtained with such a significant increase in the number of SGL systems. \section{METHODS AND DATA} \subsection{Distance sum rule} According to the cosmological principle that the universe is homogeneous and isotropic at large scales, the spacetime geometry can be described by the Friedmann-Lema\^\i tre-Robertson-Walker (FLRW) metric, so we have \begin{equation}\label{RW} ds^{2}=-dt^{2}+a^{2}(t)\left\{\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^2)\right\}, \end{equation} where $a(t)$ denotes the cosmic scale factor, and $k$ is a constant associated with the spatial curvature.
Considering an SGL system in the FLRW metric, the angular diameter distance between the lens galaxy at redshift $z_{l}$ and the source at redshift $z_{s}$ can be represented as $D_{A}(z_{l}, z_{s})$. The dimensionless comoving distance $d(z_{l}, z_{s})$ between the lens and the source can be described as \begin{equation}\label{dz_DA} \begin{aligned} d(z_{l}, z_{s})&=(1+z_{s})H_{0}D_{A}(z_{l}, z_{s})\\ &=\frac{1}{\sqrt{|\Omega_{k}|}} {\rm{sinn}} \left[\sqrt{|\Omega_{k}|} \int_{z_{l}}^{z_{s}}\frac{H_{0 }dz^{\prime}}{H(z^{\prime})}\right],\\ & \end{aligned} \end{equation} where \begin{equation} {\rm sinn}(x)= \begin{cases} \sin(x), & \text{$\Omega_{k}<0$},\\ x, & \text{$\Omega_{k}=0$},\\ \sinh(x), & \text{$\Omega_{k}>0$}. \end{cases} \end{equation} Here, $H(z)$ is the Hubble parameter, and $H_{0}$ is the Hubble constant. $\Omega_{k}={-k}/({H_{0}^{2}}a_{0}^{2})$ ($a_{0}=a(0)$) is the spatial curvature parameter. For convenience, we define $d_{l}=d(0, z_{l})$, $d_{s}=d(0, z_{s})$, and $d_{ls}=d(z_{l}, z_{s})$. These three dimensionless distances in the FLRW universe and the cosmic curvature $\Omega_k$ satisfy the distance sum rule \citep{Bernstein:2005en,Rasanen:2014mca}: \begin{equation}\label{dls_dl} \frac{ d_{ls}}{ d_{s}}=\sqrt{1+\Omega_{k}d_{l}^{2}}-\frac{d_{l}}{d_{s}}\sqrt{1+\Omega_{k}d_{s}^{2}}. \end{equation} Obviously, we obtain $d_{s} = d_{l} + d_{ls}$ if the universe is spatially flat ($\Omega_{k}=0$), while $d_{s} < d_{l} + d_{ls}$ and $d_{s} > d_{l} + d_{ls}$ correspond to a spatially closed ($\Omega_{k}<0$) and open ($\Omega_{k}>0$) universe, respectively. On the basis of Equation~(\ref{dls_dl}), if we obtain the distances $d_{l}$, $d_{s}$, and $d_{ls}$ from observations, the spatial curvature $\Omega_{k}$ can be directly derived without any assumption regarding the specific cosmological model.
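The distance sum rule of Equation~(\ref{dls_dl}) can be checked numerically. The sketch below solves it for $d_{ls}$ given the dimensionless distances $d_l$, $d_s$ and the curvature $\Omega_k$ (function name ours), and reproduces the flat/closed/open orderings stated above:

```python
import math

def d_ls_from_sum_rule(d_l, d_s, omega_k):
    # distance sum rule: d_ls/d_s = sqrt(1 + Ok*d_l^2) - (d_l/d_s)*sqrt(1 + Ok*d_s^2),
    # multiplied through by d_s; valid while the arguments of the square
    # roots stay positive (i.e. d <= 1/sqrt(|Ok|) in a closed universe)
    return (d_s * math.sqrt(1.0 + omega_k * d_l**2)
            - d_l * math.sqrt(1.0 + omega_k * d_s**2))
```

For $\Omega_k=0$ the rule reduces to $d_s = d_l + d_{ls}$ exactly, while negative (positive) $\Omega_k$ makes $d_l + d_{ls}$ over- (under-)shoot $d_s$.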
In this work, the distances $d_{l}$ and $d_{s}$ are inferred from the GW data, while the distance ratio $d_{ls}/d_{s}$ can be obtained from the observations of SGL systems. \subsection{Data simulation for gravitational wave standard sirens} All the GW events considered in this work are assumed to be produced by the mergers of binary neutron stars (BNSs). The neutron star (NS) masses are randomly sampled in the interval [1, 2] $M_{\odot}$, where $M_{\odot}$ is the solar mass, the same as in the literature \citep{Cai:2017aea,Zhang:2018byx,Wang:2018lun}. The redshift distribution of GW sources takes the form \citep{Zhao:2010sz, Cai:2016sby} \begin{equation}\label{p} P(z) \propto \frac{4 \pi d_{C}^2(z) R(z)}{H(z)(1+z)}, \end{equation} where $d_{C}(z)$ represents the comoving distance at the redshift $z$. $R(z)$ indicates the time evolution of the burst rate, which is given by \citep{Schneider:2000sg, Cutler:2009qv} \begin{equation}\label{R} R(z)= \begin{cases} 1+2z, & \text{$z \le 1$},\\ \frac{3}{4}(5-z), & \text{$1 < z < 5$},\\ 0, & \text{$z \ge 5$}. \end{cases} \end{equation} Given the redshift and mass distributions described above, we can generate the mock catalog of the GW standard sirens. The luminosity distance $D_{L}$ can be extracted from the GW amplitude, and its value in this simulation can be obtained by \begin{equation}\label{ET_DL} D_{L}(z)=(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}. \end{equation} In this simulation, the fiducial cosmological model we choose is the flat $\Lambda$CDM universe, and the values of the parameters are taken from the Planck 2018 results \citep{Planck:2018vyg}. The estimation of the luminosity distance error $\Delta{D_{L}}$ depends on the sensitivity of the GW detector and the signal-to-noise ratio (SNR) of the GW event.
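The burst-rate evolution and the redshift distribution above are simple to evaluate numerically. The following sketch assumes an illustrative flat $\Lambda$CDM background (the parameter values are placeholders, not the Planck 2018 fit adopted in the paper, and $c$ is taken in km/s so distances come out in Mpc) and computes the unnormalised $P(z)$; source redshifts could then be drawn from it by, e.g., rejection sampling.

```python
import math

H0, OMEGA_M = 67.4, 0.315      # illustrative flat Lambda-CDM parameters
C_KMS = 299792.458             # speed of light in km/s

def hubble(z):
    """H(z) in km/s/Mpc for a flat Lambda-CDM model."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))

def comoving_distance(z, n=2000):
    """d_C(z) = c int_0^z dz'/H(z') in Mpc, via the trapezoidal rule."""
    if z == 0:
        return 0.0
    dz = z / n
    s = 0.5 * (1 / hubble(0) + 1 / hubble(z))
    s += sum(1 / hubble(i * dz) for i in range(1, n))
    return C_KMS * s * dz

def merger_rate(z):
    """Piecewise time evolution R(z) of the BNS burst rate."""
    if z <= 1.0:
        return 1.0 + 2.0 * z
    if z < 5.0:
        return 0.75 * (5.0 - z)
    return 0.0

def p_unnormalised(z):
    """Unnormalised redshift distribution P(z) of GW sources."""
    return (4.0 * math.pi * comoving_distance(z) ** 2 * merger_rate(z)
            / (hubble(z) * (1.0 + z)))

# R(z) is continuous at the break point z = 1 and vanishes beyond z = 5
assert merger_rate(1.0) == 3.0 and abs(0.75 * (5 - 1.0) - 3.0) < 1e-12
assert merger_rate(5.0) == 0.0 and p_unnormalised(6.0) == 0.0
```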
The strain $h(t)$ in GW interferometers quantifies the difference between the two optical paths due to a passing GW; following \cite{Sathyaprakash:2009xs} and \cite{Zhao:2010sz}, it can be written as \begin{equation}\label{interferometers} h(t)=F_{+}(\theta, \phi, \psi)h_{+}(t)+F_{\times}(\theta, \phi, \psi)h_{\times}(t), \end{equation} where $\psi$ is the polarization angle, and $(\theta,\phi)$ describe the source-location angles relative to the detector. Here, the antenna pattern functions $F_{+}$ and $F_{\times}$ of the ET are written as \citep{Cai:2016sby} \begin{equation}\label{F} \begin{aligned} F_{+}^{(1)}(\theta, \phi, \psi)=&\frac{\sqrt{3}}{2} \bigg[\frac{1}{2}(1+\cos^{2}(\theta))\cos(2\phi)\cos(2\psi)\\&- \cos(\theta)\sin(2\phi)\sin(2\psi)\bigg],\\ F_{\times}^{(1)}(\theta, \phi, \psi)=&\frac{\sqrt{3}}{2} \bigg[\frac{1}{2}(1+\cos^2(\theta))\cos(2\phi)\sin(2\psi)\\&+ \cos(\theta)\sin(2\phi)\cos(2\psi)\bigg]. \end{aligned} \end{equation} ET consists of three interferometers inclined at $60^{\circ}$ with respect to each other, with $F_{+, \times}^{(2)}(\theta, \phi, \psi)=F_{+, \times}^{(1)}(\theta, \phi+\frac{2\pi}{3}, \psi)$ and $F_{+, \times}^{(3)}(\theta, \phi, \psi)=F_{+, \times}^{(1)}(\theta, \phi+\frac{4\pi}{3}, \psi)$. Then, the Fourier transform $\mathcal H(f)$ of the time-domain waveform $h(t)$ can be derived as \citep{Zhao:2010sz} \begin{equation}\label{H} \mathcal H(f)= \mathcal Af^{-7/6} \exp[i(2\pi ft_{0}-\pi/4 +2\Psi(f/2)-\varphi_{(2.0)})]. \end{equation} Here, the definitions of the functions $\Psi$ and $\varphi_{(2.0)}$ can be found in \cite{Zhao:2010sz}. The Fourier amplitude $\mathcal A$ is defined as \begin{equation}\label{A} \begin{aligned} \mathcal A= &\frac{1}{D_{L}}\sqrt{F_{+}^2(1+\cos^{2}(\iota))^2 +4F_{\times}^2\cos^2(\iota)} \\&\times \sqrt{5\pi/96}\pi^{-7/6}\mathcal M_{c}^{5/6}, \end{aligned} \end{equation} where $\mathcal M_{c}=(1+z)M\eta^{3/5}$ is the chirp mass.
Here, $M$ is the total mass of the coalescing binary with component masses $m_{1}$ and $m_{2}$, namely $M=m_{1}+m_{2}$, and $\eta=m_1m_2/(m_1+m_2)^2$. The parameter $\iota$ is the inclination angle between the binary's orbital angular momentum and the line of sight, which can be obtained from the accompanying EM counterpart of the GW event, such as a short gamma-ray burst (SGRB). SGRBs are believed to be strongly beamed phenomena \citep{Nakar:2005bs, Fermi-LAT:2009owx, Rezzolla:2011da}. Once an SGRB is observed, the binary should be aligned nearly face on (i.e., $\iota \simeq0$). We take the maximal inclination to be $\iota=20^\circ$. In general, one would need to compute all the Fisher matrices with random inclination angles and then select the sources above the detection threshold that happen to have an EM counterpart. However, according to the analysis of \cite{li2015extracting}, averaging the Fisher matrix over the inclination $\iota$ and the polarization $\psi$ with the constraint $\iota \leq 20^\circ$ is approximately equivalent to taking $\iota=0$. Moreover, in previous GW simulations \citep{Zhao:2010sz,Cai:2017aea,Zhang:2018byx,Wang:2018lun}, the inclination $\iota$ was treated in the same way. Therefore, following them, we set $\iota=0$ in the simulation of the GW data. Given a GW waveform, one can calculate its SNR. For the ET detector, a GW event is confirmed only when the SNR reaches at least 8.
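The single-interferometer antenna patterns introduced above, together with their $2\pi/3$- and $4\pi/3$-rotated copies, can be sketched as follows. The closing check uses a property that follows directly from the pattern functions: for a source at the zenith ($\theta=0$), each interferometer satisfies $F_+^2+F_\times^2=3/4$ independently of $\phi$ and $\psi$.

```python
import math

SQRT3_2 = math.sqrt(3) / 2  # opening-angle factor of the triangular ET design

def antenna_patterns(theta, phi, psi):
    """F_+ and F_x of a single ET interferometer."""
    a = 0.5 * (1.0 + math.cos(theta) ** 2)
    f_plus = SQRT3_2 * (a * math.cos(2 * phi) * math.cos(2 * psi)
                        - math.cos(theta) * math.sin(2 * phi) * math.sin(2 * psi))
    f_cross = SQRT3_2 * (a * math.cos(2 * phi) * math.sin(2 * psi)
                         + math.cos(theta) * math.sin(2 * phi) * math.cos(2 * psi))
    return f_plus, f_cross

def et_network(theta, phi, psi):
    """Responses of the three interferometers, rotated by 2*pi/3 in phi."""
    return [antenna_patterns(theta, phi + k * 2 * math.pi / 3, psi)
            for k in range(3)]

# Overhead source (theta = 0): each interferometer has F_+^2 + F_x^2 = 3/4
for f_plus, f_cross in et_network(0.0, 0.7, 0.3):
    assert abs(f_plus ** 2 + f_cross ** 2 - 0.75) < 1e-12
```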
The combined SNR of the network including three equivalent independent interferometers can be written as \begin{equation}\label{rho} \rho=\sqrt{\sum_{i=1}^{3}(\rho^{(i)})^2}, \end{equation} where $\rho^{(i)}=\sqrt{\left\langle \mathcal{H}^{(i)}, \mathcal{H}^{(i)} \right\rangle}$, and the inner product is defined as \begin{equation}\label{H2} \langle a, b \rangle=4 \int_{f_{\rm lower}}^{f_{\rm upper}} \frac{\tilde{a}(f)\tilde{b}^{*}(f)+\tilde{a}^{*}(f)\tilde{b}(f)}{2} \frac{df}{S_{h}(f)}, \end{equation} where a tilde represents the Fourier transform of the function. Here, $S_{h}(f)$ is the one-sided noise power spectral density, and its form for ET is taken to be the same as in \cite{Freise:2009nz,Zhao:2010sz,Cai:2017aea}. As for the detection rate of GWs from BNS mergers with redshift measurements enabled by EM counterparts, recent studies \citep{Yu:2021nvx,Chen:2020zoq} investigating various models of short $\gamma$-ray bursts and afterglows suggest that a rough estimation of about 1000 GW standard sirens for the 10-year observation of ET is achievable. Although the approximation of $\iota=0$ we take above could increase the SNR, it does not increase our estimated detection rate, which is based on more robust studies of short $\gamma$-ray bursts. Therefore, we simulate 1000 GW standard sirens based on a 10-year observation of ET. Applying the Fisher information matrix, the instrumental error of $D_{L}$ can be estimated as \begin{equation}\label{Fisher} \Delta D_{L}^{\rm inst}\simeq \sqrt{\left \langle \frac{\partial \mathcal H}{\partial D_{L}},\frac{\partial \mathcal H}{\partial D_{L}} \right \rangle^{-1}}. \end{equation} Since $\mathcal H \varpropto D_{L}^{-1}$, as shown in Equations (\ref{H}) and~(\ref{A}), we have \begin{equation} \label{Dh} \frac{\partial \mathcal H}{\partial D_{L}}=-\frac{\mathcal H}{ D_{L}}.
\end{equation} By substituting Equation (\ref{Dh}) into Equation (\ref{Fisher}), we can obtain \begin{equation} \Delta D_{L}^{\rm inst}\simeq \sqrt{\frac{D_L^2}{\left \langle \mathcal H, \mathcal H \right \rangle}} \simeq \frac{D_{L}}{\rho}. \end{equation} Note that the uncertainty of the inclination $\iota$ would affect the SNR, and the maximal effect of the inclination on the SNR is a factor of 2 ($0^\circ <\iota < 90^\circ$). Then, the instrumental error on the luminosity distance can be written as \begin{equation}\label{sigma2} \Delta D_{L}^{\rm inst}\simeq \frac{2D_{L}}{\rho}. \end{equation} In addition, the error from weak lensing should be taken into account, for which $\Delta D_{L}^{\rm lens}= 0.05zD_{L}$ \citep{Sathyaprakash:2009xt}. Finally, the total error of $D_{L}$ can be expressed as \begin{equation}\label{total sigma} \begin{aligned} \Delta D_{L}&=\sqrt{(\Delta D_{L}^{\rm inst})^2+(\Delta D_{L}^{\rm lens})^2}\\ & =\sqrt{\left(\frac{2D_{L}}{\rho}\right)^2+(0.05zD_{L})^2}. \end{aligned} \end{equation} In this way, we generate a catalogue of GW standard sirens with the redshift $z$, the luminosity distance $D_{L}$, and the error of the luminosity distance $\Delta D_{L}$. \subsection{Gaussian process} Using the distance sum rule to constrain $\Omega_k$ requires the knowledge of the distances in SGL systems, which can usually be implemented by calibrating the distances with other distance indicators, such as GWs, as done in this paper. However, one key difficulty is that there is no one-to-one correspondence between the redshifts of the SGL data and the GW data. In previous works, two effective approaches have been used to address this: polynomial fitting and the Gaussian process (GP). In this paper, we adopt the GP method based on the GaPP Python code \citep{Seikel:2012uu, Seikel:2012cs} to reconstruct a smooth distance-redshift curve of $D_L$ from GWs so that we can calibrate the distances in the SGL data.
This reconstruction method has been widely used in cosmology \citep{Seikel:2012uu, Seikel:2012cs,Zhang:2018gjb,Zhang:2016tto,Cai:2019bdh,Wang:2020dbt,Seikel:2013fda}, by which the reconstructed function $f(z)$ is a Gaussian distribution at each point $z$, and its values at different points $z$ and $\tilde{z}$ are connected by a covariance function $k(z,\tilde{z})$. There are various forms for the covariance function. According to the analysis in \cite{Seikel:2013fda}, compared with the squared exponential form and other choices, the Mat\'{e}rn $(\nu=9/2)$ covariance function can lead to more reliable results. So we adopt it here, and its expression is \begin{eqnarray} k(z,\tilde{z})&=&\sigma_f^2\exp(-\frac{3|z-\tilde{z}|}{\ell})\nonumber\\ &\times &(1+\frac{3|z-\tilde{z}|}{\ell}+\frac{27(z-\tilde{z})^2}{7\ell^2}\nonumber\\ &+&\frac{18|z-\tilde{z}|^3}{7\ell^3}+\frac{27(z-\tilde{z})^4}{35\ell^4}),\label{7} \end{eqnarray} where $\sigma_f$ and $\ell$ are hyperparameters which can be optimized by the GP itself via the observational data. To determine the dimensionless distances $d_l$ and $d_s$ in Equation (\ref{dls_dl}), we first convert the luminosity distances of GWs into the dimensionless distances via the following relation \begin{equation} d(z)=\frac{H_0D_L(z)}{(1+z)}. \end{equation} With the simulated GW data with redshift measurements enabled by EM counterparts, we can use the smoothing technique of GP to reconstruct the distance-redshift curve with the 1$\sigma$ confidence region, as shown in Figure \ref{fig:gapp_ET}. In this way, the dimensionless comoving distances corresponding to the source and lens of an SGL system can be determined by the reconstructed distance-redshift curve, along with their errors. \begin{figure} \includegraphics[width=0.5\textwidth]{ET_normal_dl} \caption{Reconstruction of the dimensionless comoving distances from the 1000 simulated GW standard siren data. The red points with error bars represent the simulated data.
The blue shaded area and the blue solid line denote the 1$\sigma$ confidence level errors and best-fit values of the reconstruction by using the GP method. } \label{fig:gapp_ET} \end{figure} \subsection{Strong gravitational lensing systems} In this subsection, we briefly introduce the SGL system and the observational SGL sample we used. For SGL systems, the measurements of the lens velocity dispersion $\sigma$ are commonly used as a statistical quantity to constrain cosmological parameters and the density profiles of lens galaxies. In general, early-type galaxies are more massive and dominant in most SGL samples. Moreover, they can also be characterized by a general mass model because most of them satisfy a spherically symmetric distribution \citep{Chen:2018jcf}. With strict criteria to ensure the validity of the assumption of spherical symmetry on the lens galaxies, \citet{Chen:2018jcf} compiled a sample of SGL including 161 galaxy-scale strong lensing systems from the following surveys: the Sloan Lens ACS (SLACS) survey \citep{Bolton:2005nf,Bolton:2008xf,Auger:2009hj,Auger:2010va,Shu:2014aba,Shu:2017yon}, the Baryon Oscillation Spectroscopic Survey (BOSS) Emission-Line Lens Survey (BELLS) \citep{Brownstein:2011leg}, and the BELLS for GALaxy-Ly$\alpha$ EmitteR sYstems (GALLERY) \citep{Shu_2016a, Shu_2016b}. In this SGL sample, 130 SGL systems have measurements of the luminosity density slope $\delta$ of the lens galaxies, which is obtained by fitting the two-dimensional power-law luminosity profile to the high-resolution imaging data from the Hubble Space Telescope. \citet{Chen:2018jcf} found that treating $\delta$ as an observable for each individual lens galaxy, rather than as a universal parameter for all lens galaxies, is necessary to get an unbiased cosmological estimate.
Therefore, in this paper, we also use this truncated SGL sample including 130 SGL systems with measurements of $\delta$, for which the redshift range of the lenses is $0.0624\leq z_l\leq0.7224$ and the redshift range of the sources is $0.1970\leq z_s\leq 2.8324$. As mentioned above, the velocity dispersion of the intervening galaxies is the statistical quantity for cosmological fitting, and its measurement can be obtained from spectroscopic data. To eliminate the effect of the aperture size on the measurements of velocity dispersions, $\sigma_{\rm ap}$ measured within a circular aperture with the angular radius $\theta_{\rm ap}$ should be normalized to a typical physical aperture within a circular aperture of radius $R_{\rm eff}/2$ (the half-light radius of the lens galaxy), according to the aperture correction formula \citep{Jorgensen:1995zz}, \begin{equation}\label{sigma0} \sigma_{0}=\sigma_{\rm ap}\left(\frac{\theta_{\rm eff}}{2\theta_{\rm ap}}\right)^{\xi}, \end{equation} where $\theta_{\rm eff}=R_{\rm eff}/D_A(z_l)$, and $\xi$ is adopted as $\xi = -0.066 \pm 0.035$ \citep{Cappellari:2005ux}. It should be noted that the uncertainty of $\xi$ will feed into the total error of $\sigma_0$. In addition, considering the extra mass contribution from matter along the line of sight and the fractional uncertainty of the Einstein radius, a 5\% uncertainty in the velocity dispersion will be taken as the systematic error \citep{Wang:2019yob}. For an SGL system, the gravitational mass $M_{\rm grl}^{\rm E}$ should be equal to the dynamical mass $M_{\rm dyn}^{\rm E}$ within the Einstein radius $\theta_{\rm E}$. If the lens model and the cosmological distances are determined, $M_{\rm dyn}^{\rm E}$ can be inferred from the velocity dispersion, and $M_{\rm grl}^{\rm E}$ can also be inferred from the measurement of the Einstein radius. {As mentioned above, although this constraint of $\Omega_k$ is independent of cosmological models, it strongly depends on the lens models.
Therefore, we will consider three lens models widely used in strong lensing studies for a full analysis.} \begin{itemize} \item Singular isothermal sphere (SIS) model For the simplest SIS model, the velocity dispersion can be expressed as \citep{Cao:2015qja} \begin{equation}\label{SGL1} \sigma_{0}^{\rm SIS} =\sqrt{\frac{\theta_{\rm E}}{4\pi f_{\rm E}^2}\frac{d_{s}}{d_{ls}}}, \end{equation} where $f_{\rm E}$ is a phenomenological coefficient, which reflects the uncertainty due to the difference between the observed stellar velocity dispersion and that of the underlying dark matter, as well as other systematic effects. For the standard SIS model, the coefficient $f_{\rm E}$ is strictly equal to 1. In this paper, $f_{\rm E}$ is treated as a free parameter taking the range $0.8<f_{\rm E}^{2}<1.2$, according to some observations~\citep{Kochanek:1999rj,Ofek:2003sp}. \item Extended power-law (EPL) lens model Considering a more complex mass model, we assume that the luminosity density profile $\upsilon(r)$ differs from the total-mass density profile $\rho(r)$, and they take the forms \citep{Cao:2015qja} \begin{equation}\label{rho_upsilon} \rho(r)=\rho_{0}\left(\frac{r}{r_{0}}\right)^{-\gamma},~~~ \upsilon(r)=\upsilon_{0}\left(\frac{r}{r_{0}}\right)^{-\delta}, \end{equation} where $r$ is the spherical radius from the center of the lens galaxy, $\gamma$ is the power-law index of the total mass density profile, treated as a free parameter, and $\delta$ is the power-law index of the luminosity density profile, which has been measured for each lens in the SGL sample we use in this paper. In addition, we also consider the anisotropy of the stellar velocity dispersion, $\beta(r)$, which is given by \begin{equation}\label{beta} \beta(r)=1-\frac{\sigma_{\theta}^{2}}{\sigma_{r}^{2}}, \end{equation} where $\sigma_{\theta}$ and $\sigma_{r}$ are the tangential and radial components of the velocity dispersion, respectively.
According to the constraint on $\beta$ from a well-studied sample of nearby elliptical galaxies, we will treat it as a nuisance parameter and marginalize over it with a Gaussian distribution, $\beta=0.18\pm0.13$ \citep{Schwab:2009nz}. In this lens model, the velocity dispersion can be expressed as \citep{Chen:2018jcf} \begin{equation}\label{SGL2} \sigma_{0}^{\rm EPL}=\sqrt{\frac{\theta_{\rm E}}{2\sqrt{\pi}}\frac{d_{s}}{d_{ls}}\frac{3-\delta}{(\xi-2\beta)(3-\xi)}\left(\frac{\theta_{\rm eff}}{2\theta_{\rm E}}\right)^{2-\gamma}\left[\frac{\lambda(\xi)-\beta \lambda(\xi+2)}{\lambda(\gamma)\lambda(\delta)}\right]}, \end{equation} where $\xi= \gamma + \delta-2$, and $\lambda(x)=\Gamma\left(\frac{x-1}{2}\right)/\Gamma\left(\frac{x}{2}\right)$. It is worth noting that if $\gamma=\delta=2$ and $\beta=0$, the EPL model reduces to the standard SIS model. According to previous studies \citep{Ruff:2010rv,Bolton:2012uh,Cao:2016wor,Cui:2017idf,Holanda:2017jrj}, the total mass density slope $\gamma$ may depend on the redshift. Therefore, we consider two scenarios of $\gamma$ to further explore the issues we are interested in, i.e., \begin{description} \item[(i)] EPL1: $\gamma=\gamma_{0}$, \item[(ii)] EPL2: $\gamma=\gamma_{0}+\gamma_{1} z_{l}$, \end{description} where $\gamma_0$ and $\gamma_1$ are free parameters. Once the distances $d_l$ and $d_s$ are calibrated by GWs, the distance ratio $d_{ls}/d_s$ can be inferred from the distance sum rule, in which the spatial curvature $\Omega_{k}$ is involved. Thus, the values of $\sigma_0$ in the three lens models can be obtained. $\Omega_k$ can be constrained by maximizing the likelihood function $\mathcal{L} \propto e^{-\chi^{2} / 2}$.
The $\chi^2$ function is defined as \begin{equation}\label{chi} \chi^{2}(\boldsymbol{p},\Omega_{k})=\sum^{N}_{i=1}\frac{[\sigma_0^{\rm lens}(z_{i},\boldsymbol{p},\Omega_{k})-\sigma_0^{\rm obs}(z_{i})]^2}{(\Delta\sigma^{\rm tot}_0)^2}, \end{equation} where $N$ denotes the number of SGL data points, and $\boldsymbol{p}$ denotes the parameters of the lens model. It should be noted that the total uncertainty $\Delta\sigma^{\rm tot}_0$ contains not only the contribution from the measurements of SGL systems but also the uncertainties from the distance calibrations of $d_l$ and $d_s$. \end{itemize} \section{Results and Discussion} By using the \texttt{emcee} Python module \citep{ForemanMackey:2012ig} based on the Markov Chain Monte Carlo (MCMC) method, we obtain the cosmological model-independent constraints on $\Omega_k$ in the framework of the three lens models. Different from previous works~\citep{Rasanen:2014mca,Xia:2016dgk} considering a prior of $\Omega_{k}> -0.1$ from the CMB observation~\citep{Vonlanthen:2010cd, Audren:2012wb, Audren:2013nwa}, we do not take this prior because our motivation is to measure $\Omega_k$ using only the late-universe observations. First, we present the constraint results from the current data set of 130 SGL systems combined with the simulated GW data. Second, considering the upcoming LSST survey with a large sample of SGL as expected, we also forecast what constraint on $\Omega_k$ could be achieved. \subsection{Results from current SGL data} For the simplest SIS model, the constraints on $\Omega_k$ and $f_{\mathrm{E}}$ are shown in Figure \ref{fig:observation} and Table \ref{tab:summary}. By using the combination of the 1000 simulated GW data and the 130 observational SGL data, the spatial curvature parameter is constrained to be $\Omega_{k}=0.550^{+0.313}_{-0.256}$, wherein a zero value of $\Omega_{k}$ is ruled out at the 2$\sigma$ confidence level.
It should be noted that while the 1000 GW data are simulated in a flat universe, the 130 SGL data are actually observed, so the constraint result of $\Omega_k$ is still instructive. For the parameter $f_{\mathrm{E}}$ reflecting the mass distribution of the lens galaxies, we obtain a result of $f_{\mathrm{E}}=1.016 \pm 0.009$ at 1$\sigma$ confidence level, which is in good agreement with the standard SIS model $(f_{\mathrm{E}}=1)$ at 2$\sigma$ confidence level. Now we focus on the constraint errors of parameters. Compared with the previous results using SN Ia as distance indicators to calibrate the distances of SGL, using GW standard sirens does not obtain competitive precision for the constraints on $\Omega_k$ in this lens model. For instance, by using the combination of 137 SGL data and Pantheon SN Ia sample, \citet{Zhou:2019vou} inferred the cosmic curvature parameter as $\Omega_{k}=0.483^{+0.239}_{-0.385}$ at 1$\sigma$ confidence level based on the SIS lens model. With 161 galactic-scale SGL systems and 1048 SN Ia data, \citet{Wang:2019yob} obtained a value of $\Omega_{k}=0.57^{+0.20}_{-0.28}$ at 1$\sigma$ confidence level in the framework of SIS lens model. Although the constraint error of $\Omega_k$ has not been significantly improved by using simulated GW data, with the increase of SGL data observed in the future, the GW standard siren observation covering a wider redshift range could calibrate more SGL systems than SN Ia, which will help reduce the statistical error for the constraint on $\Omega_k$. \begin{figure*} \includegraphics[width=0.32\textwidth]{SIS-full_tri} \includegraphics[width=0.32\textwidth]{P1_omegak-full_tri} \includegraphics[width=0.32\textwidth]{P2_omegak-full_tri} \caption{One-dimensional and two-dimensional posterior distributions for all parameters from 130 SGL systems. Left: The constraints on spatial curvature $\Omega_{k}$ and the lens profile parameter $f_{\rm E}$ in the SIS lens model. 
Middle: The constraints on spatial curvature $\Omega_{k}$ and the lens profile parameter $\gamma_{0}$ in the EPL1 lens model. Right: The constraints on spatial curvature $\Omega_{k}$ and the lens profile parameters $\gamma_{0}$ and $\gamma_{1}$ by using GW and the $\Lambda$CDM model to provide the distances in the EPL2 lens model.} \label{fig:observation} \end{figure*} \begin{table*} \centering \renewcommand\arraystretch{1.2} \caption{The fit values of all parameters from 130 SGL systems at the 1$\sigma$ confidence level in the SIS, EPL1, and EPL2 models.} \label{tab:summary} \begin{tabular}{ccccc} \hline Lens model&$\Omega_{k}$ &$f_{\rm E}$ & $\gamma_{0}$ & $\gamma_{1}$ \\ \hline SIS & $0.550^{+0.313}_{-0.256}$ & $1.016 \pm0.009$ & $-$ & $-$ \\ EPL1 & $-0.052^{+0.194}_{-0.154}$ & $-$ & $2.106 \pm0.013$ & $-$\\ EPL2 & $-0.139^{+0.278}_{-0.172}$ & $-$ & $2.098 \pm0.019$ & $0.053^{+0.098}_{-0.108}$\\ \hline \end{tabular} \end{table*} For the EPL1 model, we present the constraint results in Figure \ref{fig:observation} and Table \ref{tab:summary}. The fit value of $\Omega_k$ at the 1$\sigma$ confidence level is $\Omega_{k}=-0.052^{+0.194}_{-0.154}$, in excellent agreement with a flat universe. By comparing with the results from the SIS model, we find that the model selection has a strong influence on the constraint on $\Omega_k$, which further confirms the conclusion of previous works \citep{Qi:2018aio,Wang:2019yob}. Moreover, for the constraint on $\Omega_k$ in the EPL1 model, we obtain a more stringent result by using GWs as the distance indicators than by using SN Ia. \citet{Zhou:2019vou} presented a result of $\Omega_{k}=0.100^{+0.538}_{-0.114}$, and \citet{Wang:2019yob} obtained $\Omega_{k}=0.25^{+0.23}_{-0.16}$ from the combination of 161 SGL data and 1048 SN Ia data. On the other hand, we stress that the EPL1 model reduces to the standard SIS model if $\gamma_0=2$.
The constraint result of $\gamma_0$ we obtain is $\gamma_{0}=2.106 \pm0.013$, which shows that the SIS model is excluded at more than the 2$\sigma$ confidence level. For the EPL2 model, the one-dimensional marginalized posterior distributions and the contours of the parameters are shown in Figure \ref{fig:observation}, and the constraint results are summarized in Table \ref{tab:summary}. It can be clearly seen that the result $\Omega_{k}=-0.139^{+0.278}_{-0.172}$ is well consistent with a flat universe. Compared to the results of the EPL1 model, this constraint on $\Omega_k$ becomes weaker, possibly due to the addition of the parameter $\gamma_1$. However, this constraint on $\Omega_k$ is tighter than that of the SIS model, even though the number of parameters here is one more than in the SIS model. All these results indicate that reasonably modeling the mass distribution of lens galaxies is an important factor for constraining $\Omega_k$ with this method. For the lens model parameters, we have $\gamma_{0}=2.098 \pm0.019$ and $\gamma_{1}=0.053^{+0.098}_{-0.108}$, wherein a zero value of $\gamma_{1}$ is included at the 1$\sigma$ confidence level. This suggests that the dependence of the total mass density profile slope $\gamma$ on the redshift is not significant in this work, which supports the EPL2 lens model reducing to the EPL1 model at the $1\sigma$ confidence level. {In our analyses, the GW data are used as the distance indicator to calibrate the distances of the source and lens in the SGL data. For the constraint on $\Omega_k$, which of the two data sets (SGL or GW) is dominant needs to be clarified. First, for the best-fit values, by comparing with the previous results using SN Ia as distance indicators, we find that the best-fit values of $\Omega_k$ in the same lens model are very close, as discussed above. In addition, the GW simulation providing the distances is based on the flat ($\Omega_k=0$) $\Lambda$CDM model.
Therefore, a flat universe under any lens model should be obtained if the GW data dominate the constraint on $\Omega_k$. However, we find that the best-fit values of $\Omega_k$ in the three lens models are different. These two points indicate that the SGL data are dominant for the constrained best-fit values. Second, we explore which of the two data sets dominates the constrained uncertainties of $\Omega_k$. Taking the EPL2 model as an example, we perform the same constraint by using the fiducial $\Lambda$CDM model adopted in the GW simulation to provide the distances, instead of the GW data. In the right panel of Figure \ref{fig:observation}, we find that the result from the $\Lambda$CDM model is almost the same as that from the GW data, even though the distances provided by the $\Lambda$CDM model have no errors. All of these imply that the SGL data dominate the constraints on $\Omega_k$ in this approach. } \subsection{Results from LSST simulation sample} \begin{figure} \includegraphics[width=0.35\textwidth]{SIS-diffLSST-full} \caption{One-dimensional and two-dimensional posterior distributions for the parameters $\Omega_{k}$ and $f_{\rm E}$ from LSST simulation samples of $2\times10^3$ (gray solid line), $5\times10^3$ (red solid line), and $1\times10^4$ (blue solid line) lenses in the SIS lens model.
} \label{fig:SIS-diffLSST} \end{figure} \begin{table} \caption{\label{tab:SIS-diffLSST}The best-fit values of the parameters $\Omega_{k}$ and $f_{\rm E}$ at the 1$\sigma$ confidence level from $2\times10^3$, $5\times10^3$, and $1\times10^4$ LSST simulation SGL systems in the SIS model.} \footnotesize\centering \begin{tabular}{ccc} \hline Sample number & $\Omega_{k}$ & $f_{\rm E}$\\ \hline $2 \times 10^3$ & $-0.004\pm 0.027$ & $1.001 \pm 0.004$\\ $5 \times 10^3$ & $-0.005 \pm 0.017$ & $1.000 \pm 0.002$\\ $1\times 10^4$ & $-0.004 \pm 0.012$ & $1.000 \pm 0.002$\\ \hline \end{tabular} \end{table} During the construction and subsequent observations of ET, the upcoming LSST with a wide field of view is expected to observe $1.2\times10^5$ galaxy-galaxy strong lensing systems. Such a large sample of SGL data is bound to enable extensive cosmological applications. Here we also make a forecast for what constraint on $\Omega_k$ can be achieved with such a tremendous increase of SGL data. Based on the performance of LSST, \citet{Collett:2015roa} performed a simulation of a realistic population of galaxy-galaxy strong lensing systems. For our estimation of $\Omega_k$, only a fraction of this SGL sample is available, considering the requirements of redshift determination, accurate measurements of the velocity dispersion, and so on. Therefore, in this paper, by using the public package LensPop\footnote{github.com/tcollett/LensPop}, we simulate $2\times10^3$, $5\times10^3$, and $1\times10^4$ well-measured SGL systems, respectively, to investigate the effect of the increase of data points in the SGL sample on improving the constraints on $\Omega_k$. High-quality imaging and spectroscopic data from LSST enable highly precise inferences of the Einstein radius and the lens velocity dispersion. According to the analysis from \citet{Collett:2016muz}, we adopt the fractional uncertainties of the observed velocity dispersion and the Einstein radius as 5\% and 3\%, respectively.
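As a small illustration of how the velocity-dispersion error budget described earlier enters the analysis, the sketch below applies the aperture correction $\sigma_{0}=\sigma_{\rm ap}(\theta_{\rm eff}/2\theta_{\rm ap})^{\xi}$ and propagates the measurement error, the uncertainty of the exponent $\xi$, and a fractional systematic. The quadrature combination and the numerical inputs are assumptions of this sketch, not the exact pipeline used in the paper.

```python
import math

def sigma_0(sigma_ap, theta_eff, theta_ap, xi=-0.066):
    """Aperture-corrected velocity dispersion."""
    return sigma_ap * (theta_eff / (2.0 * theta_ap)) ** xi

def sigma_0_error(sigma_ap, d_sigma_ap, theta_eff, theta_ap,
                  xi=-0.066, d_xi=0.035, sys_frac=0.05):
    """Rough error propagation: measurement error, the xi uncertainty,
    and the fractional systematic, combined in quadrature (an assumption
    of this sketch)."""
    s0 = sigma_0(sigma_ap, theta_eff, theta_ap, xi)
    log_term = math.log(theta_eff / (2.0 * theta_ap))
    frac = math.sqrt((d_sigma_ap / sigma_ap) ** 2
                     + (log_term * d_xi) ** 2
                     + sys_frac ** 2)
    return s0 * frac

# With theta_eff = 2 * theta_ap the correction is the identity,
# and the xi term in the error budget vanishes
assert sigma_0(250.0, 2.0, 1.0) == 250.0
```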
In the framework of the SIS model, the constraint results from combining GWs with $2\times10^3$, $5\times10^3$, and $1\times10^4$ mock data from the LSST simulation, respectively, are shown in Figure \ref{fig:SIS-diffLSST} and Table \ref{tab:SIS-diffLSST}. We find that as the number of SGL data increases by an order of magnitude compared to the existing SGL sample, the constraint on $\Omega_k$ is improved by an order of magnitude, i.e., $\Omega_k=-0.004\pm 0.027$ from $2\times10^3$ simulated SGL systems. {This significant improvement comes not only from the increase in the number of SGL systems but also from the improvement in observational precision.} However, when the number of SGL data increases by an order of magnitude again, i.e., to $\sim 1\times10^4$, the constraint on $\Omega_k$ is only improved by a factor of $\sim 2$, indicating that systematic errors will dominate over statistical errors. Although the constraint on $\Omega_k$ here is not as good as the result obtained by the combination of the Planck and BAO data (with the error 0.002), it must be emphasized that our constraints are independent of any cosmological model, which will be helpful in solving the cosmological tension problem concerning the cosmic curvature in the future. \section{Conclusion} With the increasing precision of cosmological observations, tensions in the measurements of some key cosmological parameters have gradually emerged, which are usually viewed as measurement inconsistencies between the early and late universe. The confusion caused by recent studies concerning the cosmic curvature parameter $\Omega_k$ suggests that it is necessary to remeasure $\Omega_k$ using only the late-universe observations in a cosmological model-independent way. The distance sum rule in SGL provides such a way, provided that the distances in the sum rule can be calibrated by other observations. Usually, SN Ia can be used as a distance indicator to perform the distance calibration in this method.
However, the SN Ia observation has some drawbacks, such as the dependence on the distance ladder, the narrow redshift range, and so forth. In this work, we propose that GWs can be used to provide the distance calibration in the SGL method, which can avoid the dependence on the distance ladder and cover a wider redshift range. We use the simulated GW standard siren observation from the Einstein Telescope as an example to show that this scheme is feasible and advantageous. Specifically, in the framework of three lens models, namely the SIS, EPL1, and EPL2 models, we use 130 current SGL data and 1000 simulated GW standard siren data to estimate $\Omega_k$. We find that the result of the SIS model prefers an open universe at more than the 2$\sigma$ confidence level, while the inferences for $\Omega_k$ in the EPL1 and EPL2 models are in excellent agreement with a flat universe, which means that the lens-model selection has a strong influence on inferring $\Omega_k$. Moreover, for the constraints on $\Omega_k$ in the three lens models, we obtain the most stringent result in the EPL1 model, i.e., $\Omega_k=-0.052^{+0.194}_{-0.154}$, which is slightly tighter than that obtained by using SN Ia as distance indicators. On the whole, we find that these model-independent estimations of $\Omega_k$ using only the late-universe observations still somewhat favor a flat universe. However, it is important to emphasize that although this constraint on $\Omega_k$ is independent of cosmological models, it does depend strongly on the lens models. In this paper, the mass distribution of the lens galaxies is assumed to be spherically symmetric, which can characterize well the morphologies of early-type galaxies that are more likely to serve as intervening lenses. Although the sample of SGL we used is obtained with well-defined selection criteria to ensure the validity of the assumption of spherical symmetry, the properties of early-type galaxies, such as their formation and evolution, are still not fully understood.
There is still a long way to go before the mass distribution of lens galaxies can be characterized accurately, which is crucial for an unbiased and precise estimation of $\Omega_k$ in this way. Fortunately, as future massive surveys observe more and more SGL samples, a more accurate phenomenological model for lens galaxies could be obtained, which will greatly improve the constraint on cosmic curvature. We further forecast what constraint on the spatial curvature can be achieved in the near future by GW standard sirens from ET and abundant SGL data from the forthcoming LSST survey. We find that about $1 \times 10^4$ SGL data combined with 1000 GW standard sirens could achieve a precise constraint of $\Delta\Omega_{k} \simeq 10^{-2}$. Our results show that the observations of SGL and GWs by the next-generation facilities would improve the late-universe measurement of cosmic curvature by one order of magnitude. \section*{Acknowledgements} We would like to thank Ling-Feng Wang, Yun Chen, Shang-Jie Jin, and Dong-Ze He for helpful discussions. This work was supported by the National Natural Science Foundation of China (Grants Nos. 11975072, 11835009, and 11875102), the Liaoning Revitalization Talents Program (Grant No. XLYC1905011), the Fundamental Research Funds for the Central Universities (Grant Nos. N2005030 and N2105014), the National 111 Project of China (Grant No. B16009), and the science research grants from the China Manned Space Project (Grant No. CMS-CSST-2021-B01). \section*{DATA AVAILABILITY} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} \label{sec:introduction} Moving agents perceive streams of information, typically a mix of RGB images, depth and inertial measurements. Probabilistic generative models \citep{koller2009probabilistic} are a principled way to formalise the \emph{synthesis} of this data, and from these models inference can be derived through Bayes' rule. We focus on exactly such inference and target the agent states and the scene map, a problem known as simultaneous localisation and mapping (SLAM). We treat it as a posterior approximation for a given state-space model, such that the combination is useful for model-based control: the posterior inference serves as a state estimator and the predictive state-space model as a simulator with which to plan ahead \citep{bertsekas}. To pave the way towards decision making, we believe an inference method should have: \begin{itemize}[topsep=0pt] \itemsep0em \item a compatible predictive model for both RGB-D images and 6-DoF dynamics; \item principled state and map uncertainty; \item real-time performance on commodity hardware; \item state-of-the-art localisation accuracy. \end{itemize} We motivate these requirements further in \cref{app:motivation}. Prominent methods like LSD-SLAM \citep{engel2014lsd}, ORB-SLAM \citep{orbslam2}, DSO \citep{dso} have propelled visual SLAM forward, with heavy focus on large-scale localisation. The core of modern large-scale SLAM is maximum a-posteriori (MAP) smoothing in a probabilistic factor graph \cite{cadena2016past,kschischang2001factor}. At present this demands sparsity assumptions for computational feasibility, which obstructs the tight integration of dense maps and rendering. Nonetheless, for smaller scenes the recent popularity of neural models (e.g.\, NERF~\citep{nerf}) has sparked interest in inference through a renderer (e.g.\ \citep{Zhu2022CVPR,koestler2021tandem,sucar2021imap}), but dynamics modelling and uncertainty have remained out of scope. 
Conversely, classical filtering comes with dynamics and uncertainty in real-time (e.g.\ \citep{kalman1960new,fastslam,grisetti2007improved}), but over time has given way to large-scale smoothing \citep{cadena2016past} and to our knowledge has not been well explored for the integration of dense differentiable rendering and dynamics on a moderate scale. Overall, we find there is a need for a cohesive inference solution that satisfies our requirements. We thus contribute by meeting all the above goals, emphasising the link to a predictive model (\cref{fig:predictive}). \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figures/predictive.png} \caption{ Inference is tailored to the depicted predictive model. Predicting future rollouts, as shown, is required for optimal control. Ground-truth trajectory in \emph{black}, inferred trajectory from past data in \color{myblue}\emph{blue}\color{black}. In \color{orange}\emph{orange}\color{black}, we see uncertainty envelopes for the predicted future states. On the right, we see predicted and ground-truth future images. Visualised in 2D for clarity, our method operates in 3D. } \label{fig:predictive} \end{figure} We start from the generative model of \citet{mirchev2021variational}, who combine differentiable rendering and agent dynamics in a probabilistic framework. The authors considered stochastic variational inference for this model, applying it off-line with runtime orders of magnitude too long for on-line use. We pursue an alternative route for real-time inference: from the generative assumptions we derive approximations to the true marginal filters over the last state and map \citep{sarkka2013bayesian}. By focusing on recursive filtering updates, we identify where established probabilistic inference and computer vision techniques can be used, putting emphasis on fast closed-form updates. 
We find this divide-and-conquer strategy is a good compromise for achieving the aforementioned objectives under computational constraints. We evaluate the proposed solution on two unmanned aerial vehicle (UAV) data sets \citep{euroc,blackbird} and on TUM-RGBD \citep{tumrgbd}. Our method PRISM runs at 10\,Hz real-time with similar localisation accuracy to state-of-the-art SLAM in moderately-sized indoor environments. It provides uncertainty estimates and features a predictive distribution that can both render images and forecast the agent's movement. \section{Related Work} \paragraph{Generative models} Generative state-space models simulate the formation of observed data over time in a Markov chain \citep{koller2009probabilistic,kalman1960new,hmms,dvbf,dkf,vrnn}, serving as \emph{world} models \citep{envsims,worldmodels}. With their agent dynamics and state-to-observation emission models we can imagine future rollouts for planning \citep{bertsekas,pilco,planet,dreamer,slac,learntofly,empowerment,vast,deepmbrl}. We abide by this framework and design a posterior inference for a \emph{spatial} state-space model, to enable on-line control. Among such models (e.g. \citep{gtmsm,gqn,gregor2019,abise,gupta2019cognitive,parisotto2018neural,chaplot2020learning}), we tailor our inference to the model of \citet{mirchev2021variational}. It scales to 3D with rendering and 6-DoF dynamics. We contribute a real-time inference that fits its probabilistic formulation. \paragraph{SLAM through image synthesis} The assumed generative model renders RGB-D images, which is related to SLAM through full-image synthesis. Traditional methods feature varied maps, from volumetric to surfels (e.g. 
\citep{curless1996volumetric,carr2001reconstruction,dtam,newcombe2011kinectfusion,niessner2013real,keller2013realtime,Whelan2015ElasticFusionDS,Cao2018RealtimeHT,hrbf}), and commonly estimate new camera poses by aligning new observations to a rendered image with variants of point-to-plane ICP with photometric consistency \citep{chen1992object,steinbrucker2011real,audras2011real,kerl2013dense,newcombe2011kinectfusion}. We extend this optimisation with dynamics in our approximate state filter \citep{kayalibay2022tracking}. A recent trend is to use implicit scene representations like NERF (e.g. \citep{nerf,siren,isdf}) with high rendering fidelity. Gradient-based pose inference through NERF-like rendering has received attention \citep{inerf,nerfmm}, with iMAP \citep{sucar2021imap} and NICE-SLAM \citep{Zhu2022CVPR} being two real-time solutions. The mapping runtime of such methods is weighed down by optimisation through the renderer. Rendering can be sped up by decomposing parameters over space, e.g. by using voxels or primitives \citep{kilonerf,neuralvolumes,liu2020neural,mixtureprimitives,mueller2022instant}, but how to update neural maps in closed form remains unclear. Therefore, we rely on vanilla voxel grid maps \citep{mirchev2021variational,occupancymaps}, as their probabilistic treatment and closed-form updates are straightforward, leaving implicit representations for future work. We note that none of the aforementioned methods incorporate dynamics and uncertainty, which distinguishes our approach. \paragraph{Probabilistic SLAM inference} SLAM filters are thoroughly explored for flat 2D modelling \citep{probrobotics,kalman1960new,murphymap,fastslam,hahnel2003efficient,fastslam2,rbpfslam}, but have been superseded by MAP smoothing in modern visual SLAM (e.g. \citep{engel2014lsd,orbslam2,dso,vimo,vinsmono,Rosinol19icra-incremental,Rosinol20icra-Kimera}), primarily due to scalability concerns \citep{cadena2016past,strasdat2012visual}. 
However, as of now smoothing is not computationally feasible without sparsity assumptions. We therefore reexamine filtering for differentiable rendering, as we aim to obtain a dense map posterior with uncertainty in real-time (see \cref{app:motivation} for further motivation). Filters may benefit from the dense modelling of observations \citep{strasdat2012visual}, which aligns with our objective, and we will demonstrate they can be a feasible solution for moderately-sized indoor environments. For the states, we use a Laplace approximation \citep{laplace} and velocity updates similar to those in extended Kalman filters \citep{kalman1960new}. For the map, occupancy grids are a common probabilistic choice \citep{occupancymaps,murphymap,bhm} and closed-form mapping has been used in that context \citep{rbpfslam}. To enable rendering we provide a similar derivation, but for a signed distance function (SDF), a closely related representation. Probabilistic SDF mapping dates back to~\citet{curless1996volumetric}, and SDF updates have a well-known probabilistic interpretation \citep{hernandez2007probabilistic,dong2018psdf}. We use these approximations to arrive at a holistic probabilistic solution that scales to dense 3D modelling in real-time. \newcommand{\hist}{H} \section{Overview} \label{sec:overview} We approach on-line SLAM inference with two aims in mind. First, we want to harmonise our map and state estimation with a predictive model. Second, we want to quantify uncertainty: estimates and predictions should account for modelling inaccuracies as well as measurement and process noise. Both are important for autonomous decision making. To achieve this, we derive a Bayesian posterior in the probabilistic model of \citet{mirchev2021variational}, to ensure that inference matches the forward model. Before we delve into our proposed solution, we present a practical summary.
At every time step: \begin{enumerate}[topsep=0pt] \itemsep0em \item we point-estimate the agent's pose using gradient descent, involving geometry and dynamics. \item we extend the pose with a Gaussian covariance matrix through a Laplace approximation. \item with the pose, we estimate the agent's current velocity in closed form. \item with the pose and the current observation, we update the map in closed form. \end{enumerate} We use well-established methods for the above. In 1. we combine assumed density filtering \citep{opper1999bayesian}, point-to-plane ICP \citep{chen1992object} and photometric alignment \citep{steinbrucker2011real,audras2011real}. In 2. we use a Laplace approximation \citep{laplace,bishop}. In 3. we use linear-Gaussian updates, akin to Kalman filters \citep{kalman1960new}. In 4. we first derive generic closed-form map updates, which boil down to SDF updates \citep{curless1996volumetric} for our generative assumptions. We contribute by deriving a holistic Bayesian inference from the generative model we started with. In doing so, we identify where traditional techniques are applicable to make a practical algorithm. \section{Methods} In the following we will denote generative distributions, true posterior distributions and conditionals with $p(\cdot)$. Respectively, approximate distributions will be denoted with $q(\cdot)$. Approximation steps will be indicated by~$\approx$~in equations. We use $q^{\varpars}\left(\cdot\right)$ to subsume estimated distribution parameters into $\varpars$. A subscript $\cdot_t$ indicates that a variable or a distribution is different at every time step. \subsection{Background} We start with an overview of the generative model of \citet{mirchev2021variational} from which we will derive the inference. We assume a sequence of RGB-D observations $\obs_{1:T}$ and a sequence of agent states $\state_{1:T}$ driven by controls $\control_{1:T-1}$ form a Markovian state-space model. 
Each observation is constructed from a respective state with a rendering emission model $\pp{\obs_t}{\Map, \state_t}$, where $\Map$ is a global latent random variable for a dense map. A transition model $\pp{\state_t}{\state_{t-1}, \control_{t-1}}$ accounts for the agent dynamics, where $\control_{t}$ are known acceleration controls. Assuming $\state_1$ is given, the joint distribution is: \vspace{-0.5em} \eq{ &\pp{\Map, \state_{2:T}, \obs_{1:T}}{\control_{1:T-1}, \state_1} = \p{\Map}\pp{\obs_1}{\Map, \state_1}\prod_{t=2}^{T} \pp{\state_t}{\state_{t-1}, \control_{t-1}} \pp{\obs_t}{\state_t, \Map}. } The map is a 3D voxel grid of occupancy and color--each cell contains four values. The emission is fully-differentiable and performs volumetric raymarching, searching for a unique hit position at a surface along each ray \citep{parker1998interactive}. The transition performs Euler integration, using the acceleration controls and maintained velocity from the latent state. \Cref{app:generative} and the original paper have the details. \subsection{Posterior Choice} \label{sec:posteriorchoice} First we need to choose which posterior to approximate. For example, \citet{mirchev2021variational} approximate the full posterior over the map and \emph{all} states $\pp{\Map, \state_{2:T}}{\obs_{1:T}, \control_{1:T-1}, \state_1}$ with variational inference~\citep{vae}. While generic, this approach is slowed down by rendering at every optimisation step \citep{kayalibay2022tracking}, and the inevitable stochastic optimisation demands multiple steps until convergence. In addition, estimating the posterior over all states scatters the optimisation budget across the whole trajectory. To enable real-time inference we target an alternative posterior, the filter $\pp{\Map, \state_t}{\obs_{1:t}, \control_{1:t-1}, \state_1}$, as the last state belief is enough for planning ahead \citep{bertsekas}. 
Since filters can be updated recursively \citep{sarkka2013bayesian,bishop}, we can use closed-form updates for fast inference. Still, maintaining the joint distribution is too costly because of the large dense 3D map $\Map$.\footnote{E.g. the size of full-covariance Gaussian representations \citep{kalman1960new} or carrying multiple maps in parallel for a Rao-Blackwellised particle filter \citep{fastslam,grisetti2007improved} becomes prohibitive.} Instead, we approximate the two marginal filters: \eq{ \qfilter{t}{\Map} &\approx \pfilter{t}{\Map} = \pp{\Map}{\obs_{1:t}, \control_{1:t-1}, \state_1} \\ \qfilter{t}{\state_t} &\approx \pfilter{t}{\state_t} = \pp{\state_t}{\obs_{1:t}, \control_{1:t-1}, \state_1}, } where $H_t = \historyplus$. More details about this modelling choice can be found in \cref{app:marginals}. We draw attention to the shorthand notation $\pfilter{t}{~\cdot}$, which will appear again in the following. \subsection{Approximate Filtering} For both marginal filters, we will arrive at adequate approximations by reusing the following equation: \eq{ \pp{\map, \state_t}{H_t} \propto&~ \pp{\obs_t}{\state_t, \map} \int \pp{\state_t}{\state_{t-1}, \control_{t-1}} \pp{\map, \state_{t-1}}{H_{t-1}} d\state_{t-1}. \numberthis \label{eq:chapmankolmogorov} } This is a classic recursive expression of the Bayes filter \citep{sarkka2013bayesian}. Starting from each true marginal posterior, we will first expand the joint, then use \cref{eq:chapmankolmogorov} and apply a set of approximations. Next we discuss our final result; the detailed derivations of both filters are deferred to \cref{app:mapfilter,app:statefilter}.
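As a toy illustration of this recursion: under linear-Gaussian assumptions both the transition integral and the observation update are available in closed form, giving the classic Kalman filter. The sketch below uses a one-dimensional state and purely illustrative noise values; it is not our actual model.

```python
def predict(mu, var, u, q):
    # Transition p(s_t | s_{t-1}, u_{t-1}): here s_t = s_{t-1} + u + noise.
    # This realises the integral over s_{t-1} in the Bayes recursion.
    return mu + u, var + q

def update(mu, var, obs, r):
    # Multiply the predicted prior with the Gaussian likelihood p(o_t | s_t).
    k = var / (var + r)                       # Kalman gain
    return mu + k * (obs - mu), (1.0 - k) * var

mu, var = 0.0, 1.0                            # initial belief
for u, obs in [(0.5, 0.6), (0.5, 1.1)]:       # controls and observations
    mu, var = predict(mu, var, u, q=0.01)
    mu, var = update(mu, var, obs, r=0.1)
# The belief mean tracks the data and the variance shrinks over time.
```

Our filters follow the same predict-update pattern, but with a rendering-based likelihood and the approximations described next.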
\subsubsection{Marginal Map Filter} \label{sec:mapfilter} We begin with the map approximation, starting from the true marginal Bayes filter: \eq{ \pp{\map}{H_t} =&~ \int \pp{\map, \state_t}{H_t} \dint\state_t \\ \propto&~ \int \pp{\obs_t}{\state_t, \map} \int \pp{\state_t}{\state_{t-1}, \control_{t-1}} \pp{\map, \state_{t-1}}{H_{t-1}} \dint\state_{t-1} \dint\state_t \\ \approx&~ \pp{\obs_t}{\hat \state_t, \map} \times \qfilter{t-1}{\map} \numberthis \label{eq:mapadf} \\ \approx&~ \qq{\map}{\obs_t, \hat \state_t} \times \qfilter{t-1}{\map} =: \qfilter{t}{\map}. \numberthis \label{eq:mapupdate} } \Cref{eq:mapadf,eq:mapupdate} hide a few approximations detailed in \cref{app:mapfilter}. The resulting solution takes a nominal state sample $\hat \state_t$, with which a map update $\qq{\map}{\obs_t, \hat \state_t}$ is applied to the previous map belief $\qfilter{t-1}{\Map}$. We set $\hat \state_t$ to the mean of the current state belief $\qfilter{t}{\state_t}$. Accepting some bias, we do this for speed as it is our best guess for $\state_t$ without extra computation.\footnote{\Cref{app:approximations} discusses this approximation further.} Intuitively, the map update $\qq{\map}{\obs_t, \hat \state_t}$ populates the map such that the observation $\obs_t$ can be reconstructed. Our derivation of the updates is similar to the one by \citet{rbpfslam} for 2D occupancy maps, but now applied to 3D. The above approximation is generic, agnostic to the specific map and rendering assumptions. In practice, we need a closed-form map update $\qq{\map}{\obs_t, \hat \state_t}$ that is faithful to the emission $\pp{\obs_t}{\hat \state_t, \Map}$. In this work, we follow \citet{mirchev2021variational} and use a Gaussian map that factorises over voxels: \eq{ \qfilter{t}{\map} = \prod_{ijk} \gauss{\map_{ijk}}{\bmu^\map_{ijk,t}, \diag((\bsigma^\map_{ijk,t})^2)}. } Here the indices $ijk$ run over voxels in a 3D grid.
For this specific representation and the assumed surface-based rendering, we identify that the map update $\qq{\map}{\obs_t, \hat \state_t}$ can be implemented as a probabilistic signed distance function (SDF) update \citep{curless1996volumetric}. We provide the technical details in \cref{app:sdfs}. SDF updates for voxel maps are a traditional concept in computer vision, and prior work has considered their probabilistic interpretation before \citep{hernandez2007probabilistic,dong2018psdf}. We contribute by identifying the place of such updates in a probabilistic filter that follows the generative model of \citep{mirchev2021variational}. A detailed discussion of how the above relates to classical SDF update equations can be found in \cref{app:sdfs}. The above approximations are motivated by the real-time constraint. For example, one could optimise \cref{eq:mapadf} directly with gradient descent through the renderer, but evaluating the emission is expensive and hinders accurate convergence on a budget. This is particularly true when uncertainty estimates are desirable, as optimisation would then be stochastic and gradients noisy \citep{blei2017variational}. In contrast, the derived one-shot map updates are meant to have a cost similar to emitting just once, while capturing uncertainty as well. We show some of the differences between the two approaches in \cref{sec:runtime}. 
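The closed-form per-voxel update can be sketched as a product of Gaussians, which reduces to precision-weighted averaging, the probabilistic view of classical weighted SDF fusion. The values below are illustrative, not our actual parameters.

```python
import numpy as np

def fuse_voxels(mu_prior, var_prior, mu_obs, var_obs):
    # Product of two Gaussians per voxel: precisions add, and the new mean
    # is the precision-weighted average of the prior mean and the observation.
    prec = 1.0 / var_prior + 1.0 / var_obs
    var_post = 1.0 / prec
    mu_post = var_post * (mu_prior / var_prior + mu_obs / var_obs)
    return mu_post, var_post

# Toy 3-voxel map belief, fused with per-voxel SDF "observations"
# projected from one depth image (illustrative values).
mu = np.array([0.0, 0.2, -0.1])
var = np.ones(3)
mu_new, var_new = fuse_voxels(mu, var, np.array([0.1, 0.1, 0.1]), np.full(3, 0.5))
# Every observed voxel becomes more certain after the update.
```

Repeating this fusion at every time step is what makes frequently observed regions precise and rarely observed regions uncertain.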
\subsubsection{Marginal State Filter} \label{sec:statefilter} Similarly, for the state filter we start from the true marginal and arrive at approximations via \cref{eq:chapmankolmogorov}: \eq{ \pp{\state_t}{H_t} =&~ \int \pp{\map, \state_t}{H_t} \dint\map \\ \propto&~ \int \pp{\obs_t}{\state_t, \map} \int \pp{\state_t}{\state_{t-1}, \control_{t-1}} \pp{\map, \state_{t-1}}{H_{t-1}} \dint\state_{t-1} \dint\map \\ \approx&~ \pp{\obs_t}{\pose_t, \hat \map} \qqu{\pose_t}{\control_{t-1}, \hist_{t-1}}{t} \qqu{\vel_t}{\pose_t, \control_{t-1}, \hist_{t-1}}{t} \numberthis \label{eq:apx_objective} \\ \approx&~ \qfilter{t}{\pose_t} \times \qqu{\vel_t}{\pose_t, \control_{t-1}, \hist_{t-1}}{t} =:~ \qfilter{t}{\state_t} \numberthis \label{eq:stateupdate}. } We detail all the approximations that lead to \cref{eq:apx_objective} in \cref{app:statefilter}. In \cref{eq:apx_objective} we have three terms: an image reconstruction likelihood, a Gaussian pose prior and a linear Gaussian velocity conditional given a pose. The latter two we obtain analytically with a linear approximation of the transition model and the previous Gaussian belief $\qfilter{t-1}{\state_{t-1}}$ (cf.\ \cref{app:statefilter}). First, using the first two terms of \cref{eq:apx_objective} we define a maximum a-posteriori (MAP) objective for pose optimisation: \eq{ \arg\max_{\pose_t}\,\, \log \pp{\obs_t}{\hat \Map, \pose_t} + \log \qqu{\pose_t}{\control_{t-1}, \hist_{t-1}}{t}.
} Here, $\hat \map$ is a nominal map sample set to the mean of the previous map belief $\qfilter{t-1}{\Map}$.\footnote{\Cref{app:approximations} discusses this approximation further.} The term $\log \qqu{\pose_t}{\control_{t-1}, \hist_{t-1}}{t}$ is an approximate dynamics prior over the current pose; it makes the pose respect the transition model. The term $\log p(\obs_t\mid\hat \map, \pose_t)$ represents reconstructing the current observation; optimising it over the current pose aligns the observation to the map. However, evaluating this rendering term in every gradient step is inefficient. Because of this, we replace it with the prediction-to-observation objective used by \citet{kayalibay2022tracking,niessner2013real,kinectfusion}. We refer to \citep{kayalibay2022tracking} for further motivation and we list the technical details in \cref{app:statefilter}. The above optimisation gives us a MAP pose estimate, which we denote with $\bmu^{\posetext}_t$. Next, we apply a Laplace approximation \citep{laplace} around it to obtain a full covariance matrix $\boldsymbol{\Sigma}^{\posetext}_t$ which captures the curvature of the objective. This leaves us with a full Gaussian belief over the current pose: \eq{ \qfilter{t}{\pose_t} = \gauss{\pose_t}{\bmu^{\posetext}_t, \boldsymbol{\Sigma}^{\posetext}_t}. } Finally, we can combine this Gaussian with the Gaussian velocity conditional $\qqu{\vel_t}{\pose_t, \control_{t-1}, \hist_{t-1}}{t}$ (the third term in \cref{eq:apx_objective}) into a full-state belief in closed form: \eq{ \qfilter{t}{\state_t} = \gauss{\state_t}{\bmu_t, \boldsymbol{\Sigma}_t} = \gauss{\pose_t}{\bmu^{\posetext}_t, \boldsymbol{\Sigma}^{\posetext}_t} \gauss{\vel_t}{\mathbf{D}_t\pose_t + \mathbf{e}_t, \boldsymbol{\Sigma^{\mathrm{vel}}_t}}. } This is approximate; we do it for speed and find it does not harm localisation in practice. \Cref{app:statefilter} describes how the linear Gaussian terms come to be in more detail.
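The last step, combining the Gaussian pose belief with a linear-Gaussian velocity conditional, follows standard Gaussian identities and can be sketched as below. The dimensions and values are illustrative (a 2-D "pose"), not our actual 6-DoF state.

```python
import numpy as np

def join_pose_velocity(mu_p, Sig_p, D, e, Sig_v):
    # For pose ~ N(mu_p, Sig_p) and vel | pose ~ N(D @ pose + e, Sig_v),
    # the joint state Gaussian has the standard block structure below.
    mu_v = D @ mu_p + e
    cross = Sig_p @ D.T                      # Cov(pose, vel)
    mu = np.concatenate([mu_p, mu_v])
    Sig = np.block([[Sig_p, cross],
                    [cross.T, Sig_v + D @ cross]])
    return mu, Sig

mu_p = np.array([1.0, 2.0])                  # toy 2-D "pose" mean
Sig_p = 0.1 * np.eye(2)
D, e = np.eye(2), np.zeros(2)                # illustrative linearised dynamics
Sig_v = 0.05 * np.eye(2)
mu, Sig = join_pose_velocity(mu_p, Sig_p, D, e, Sig_v)
# The result is a valid joint Gaussian over pose and velocity.
```

The closed-form join is what lets the filter carry a full-state belief without any extra optimisation.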
\section{Experiments} \label{sec:experiments} \begin{figure}[t] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{figures/env_panel.png} \caption{Example mapping and localisation} \label{fig:envs} \end{subfigure}% \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{figures/map_uncertainty_star3p0forward.png} \caption{Blackbird map uncertainty.} \label{fig:mapuncertaintyblackbird} \includegraphics[width=\linewidth]{figures/map_uncertainty_V1_02_medium.png} \caption{EuRoC map uncertainty.} \label{fig:mapuncertaintyeuroc} \end{subfigure}% \caption{ (\subref{fig:envs}) 3D reconstruction, example emission and inferred trajectory for EuRoC/V102 and TUM-RGBD fr3/office. (\subref{fig:mapuncertaintyblackbird}) Blackbird experiment. Top-down map uncertainty on the left, \emph{black} is uncertain, \color{orange}\emph{orange}\color{black}\, is precise. Precision is highest in a triangle around the center, which is the camera frustum where the agent remains sitting on a platform for a long time, see the orange triangle amidst the map point cloud on the right. (\subref{fig:mapuncertaintyeuroc}) Analogous EuRoC experiment. Map uncertainty is high outside of the room, at the center and behind the two structures on the left due to occlusion. The uncertainty in the center is high because the agent primarily looks outwards (view directions in the right image). } \label{fig:bigpanel} \end{figure} Originally we set out with a few goals: the inference method should be faithful to the generative assumptions, it should quantify uncertainty and it should run in real-time. What follows is an empirical analysis of these aspects. We evaluate on the EuRoC \citep{euroc}, Blackbird \citep{blackbird} and TUM-RGBD \citep{tumrgbd} data sets. The agent in the former two is an unmanned aerial vehicle (UAV), with speed of up to 4 m/s. 
For Blackbird, we use Semi-Global Block Matching (SGBM) for stereo depth estimation \citep{sgbm}. For EuRoC, we use the ground-truth Leica MS50 depth readings provided by \citep{koestler2021tandem}. We pretend the IMU readings from these data sets are our control inputs. For TUM-RGBD we do not feed in any controls and assume a constant-velocity transition. All experimental details are in \cref{app:experiments}. \subsection{Inference Through a Probabilistic Generative Model} First we look into the synergy between the inference and the generative assumptions. In \cref{fig:envs} we see mapping and localisation examples. The inferred scenes are consistent, with no dramatic offsets in geometry. More importantly, rendering from the inferred map using the emission $\pp{\obs_t}{\state_t, \map}$ works as expected (see middle row), indicating that map updates are consistent with the generative assumptions. This is evident from the accuracy of the inferred state trajectories as well (last row), as the pose optimisation objective from \cref{sec:statefilter} uses rendered images at every filtering step. A potential discrepancy between the inference and the generative assumptions would lead to errors that would accumulate over time, which is not the case. \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/state_uncertainty_z.png} \caption{$z$ location uncertainty} \label{fig:stateuncertaintyx} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\linewidth]{figures/state_uncertainty_qz.png} \caption{$\mathbf{q}_z$ (yaw) orientation uncertainty} \label{fig:stateuncertaintyqz} \end{subfigure} \caption{ Inferred state uncertainty. Inferred trajectories are colored by precision (inverse uncertainty) of a certain state dimension, followed by observations, followed by columns of the tracking Jacobian for that same state dimension. 
(\subref{fig:stateuncertaintyx}) Here the precision in $z$ (vertical movement) is high (\color{Goldenrod}\emph{yellow}\color{black}), because the $z$-orthogonal floor produces a consistent Jacobian (bottom right). (\subref{fig:stateuncertaintyqz}) Here the precision in $\mathbf{q}_z$ orientation (yaw, azimuth) is low (\color{Violet}\emph{violet}\color{black}), as there are no orthogonal surfaces (i.e.\, facing sideways). Note the low Jacobian magnitude of the horizontal floor this time (bottom right). } \label{fig:stateuncertainty} \end{figure} \paragraph{Map uncertainty} The inferred map uncertainty is determined by the map updates. We show its interpretable effects in \cref{fig:mapuncertaintyblackbird,fig:mapuncertaintyeuroc} for two examples, one from Blackbird and another from EuRoC. Our map updates are akin to traditional SDF updates, and the main factor that decides whether a map region is certain is how often it was observed. Regions that were occluded by objects, are behind walls or were rarely in view remain uncertain, e.g.\, as seen in \cref{fig:mapuncertaintyeuroc}. In contrast, if the agent spends a lot of time looking at a certain map region, the uncertainty there decreases, as seen in \cref{fig:mapuncertaintyblackbird}. \paragraph{State uncertainty} In \cref{fig:stateuncertainty} we analyse state uncertainty by looking at the variance for individual dimensions. We notice that state uncertainty changes along the trajectory. Uncertainty is determined by what the agent currently sees, based on the geometric relationship between the agent movement and the observed scene (e.g.\, \cref{fig:stateuncertaintyx} and \cref{fig:stateuncertaintyqz}). This effect can be explained if we examine the Laplace approximation used to estimate pose covariances. At any given time step, we set the covariance to \eq{ \boldsymbol{\Sigma}^{\posetext}_t \approx -\mathbf{H}^{-1} \approx \left(2\mathbf{J}^{T}\mathbf{J}\right)^{-1}.
} Here $\mathbf{H}$ is the Hessian of the tracking objective at the mean pose estimate and $\mathbf{J}$ is the Jacobian. The Jacobian connects the pose to all image pixel errors. The more consistent Jacobian entries are for a given pose dimension, the smaller the variance for that dimension will be. We refer to \cref{app:uncertainty} for more details about the map and state uncertainty quality. \subsection{Localisation Accuracy} \label{sec:localisation} We compare PRISM's localisation to state-of-the-art methods in moderately-sized indoor environments. We consider both baselines with dense maps (TANDEM \citep{koestler2021tandem}, VSSM-LM \citep{mirchev2021variational}, iMAP \citep{sucar2021imap}, NICE-SLAM \citep{Zhu2022CVPR}, CodeVIO \citep{codevio}) and sparse methods without rendering (ORB-SLAM2 \citep{orbslam2}, VINS \citep{vinsmono}, VIMO \citep{vimo}). The results are in \cref{table:localisation}. For the considered trajectories accuracy is comparable to the baselines, with differences of a few centimeters. At the same time, our inference boasts a predictive state-space model with both rendering and dynamics as well as uncertainty estimates, which is not common in the dense visual SLAM literature. Finally, in \cref{fig:velocities} we see example inferred agent velocities, noting the uncertainty bands. This is possible because we model the agent dynamics. Our localisation accuracy on Blackbird is better than the off-line variational inference results of VSSM-LM presented by \citet{mirchev2021variational}, and at the same time our solution runs in real-time and also captures uncertainty. This shows the advantages of the proposed divide-and-conquer filtering. 
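This intuition can be reproduced numerically. In the sketch below (a hypothetical two-dimensional pose and a synthetic Jacobian, not data from our system), a pose dimension with consistent per-pixel gradients gets a small Laplace variance, while one with near-cancelling gradients stays uncertain:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 500

# Toy Jacobian of per-pixel errors w.r.t. two pose dimensions:
# column 0 is consistent (e.g. a floor constraining vertical motion),
# column 1 nearly cancels out (e.g. a poorly constrained yaw).
J = np.stack([np.full(n_pix, 0.8),
              0.01 * rng.standard_normal(n_pix)], axis=1)

# Gauss-Newton Laplace covariance with unit noise, (2 J^T J)^{-1}.
Sigma = np.linalg.inv(2.0 * J.T @ J)
# Sigma[0, 0] is tiny, Sigma[1, 1] is large: consistency implies precision.
```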
\begin{table}[t] \centering \begin{minipage}{0.60\textwidth} \footnotesize \setlength{\tabcolsep}{2pt} \centering \captionof{table}{Localisation absolute error RMSE in meters on EuRoC \citep{euroc}, Blackbird \citep{blackbird} and TUM-RGBD \citep{tumrgbd}.} \label{table:localisation} \begin{tabular}{lcccc} \multirow{2}{*}{Trajectory} & \multirow{2}{*}{Ours} & Code & \multirow{2}{*}{TANDEM} & ORB \\ & & VIO & & SLAM2 \\ \toprule EuRoC/V101 & 0.041 ($\pm$ 0.002) & 0.05 & 0.09 & \textbf{0.031} \\ EuRoC/V102 & 0.035 ($\pm$ 0.002) & 0.07 & 0.17 & \textbf{0.02} \\ EuRoC/V103 & \textbf{0.042 ($\pm$ 0.002)} & 0.07 & - & 0.048 \\ EuRoC/V201 & \textbf{0.037 ($\pm$ 0.001)} & 0.10 & 0.09 & \textbf{0.037} \\ EuRoC/V202 & \textbf{0.035 ($\pm$ 0.003)} & 0.06 & 0.12 & \textbf{0.035} \\ EuRoC/V203 & x & \textbf{0.275} & - & x \\ \bottomrule \addlinespace[1ex] \multirow{2}{*}{Trajectory} & \multirow{2}{*}{Ours} & VSSM & \multirow{2}{*}{VIMO} & \multirow{2}{*}{VINS} \\ & & LM & & \\ \toprule picasso, 1 m/s & 0.064 ($\pm$ 0.003) & 0.139 & \textbf{0.055} & 0.097 \\ picasso, 2 m/s & 0.053 ($\pm$ 0.003) & 0.136 & \textbf{0.040} & 0.043 \\ picasso, 3 m/s & 0.061 ($\pm$ 0.003) & 0.120 & \textbf{0.043} & 0.045 \\ picasso, 4 m/s & 0.079 ($\pm$ 0.005)\tablefootnote{\label{blackbirdnote}Last 10 s are skipped, as the drone hits the ground during landing.} & 0.174 & \textbf{0.049} & 0.056 \\ star, 1 m/s & 0.089 ($\pm$ 0.007)\footref{blackbirdnote} & 0.137 & \textbf{0.088} & 0.102 \\ star, 2 m/s & 0.111 ($\pm$ 0.009) & 0.163 & \textbf{0.082} & 0.133 \\ star, 3 m/s & \textbf{0.115 ($\pm$ 0.012)} & 0.281 & 0.183 & 0.235 \\ star, 4 m/s & \textbf{0.153 ($\pm$ 0.015)}\footref{blackbirdnote} & 0.156 & x & x \\ \bottomrule \addlinespace[1ex] \multirow{2}{*}{Trajectory} & \multirow{2}{*}{Ours} & \multirow{2}{*}{iMAP} & NICE & ORB \\ & & & SLAM & SLAM2$^*$ \\ \toprule fr1/desk & 0.053 ($\pm$ 0.003) & 0.049 & 0.027 & \textbf{0.016} \\ fr2/xyz & 0.029 ($\pm$ 0.001) & 0.02 & \textbf{0.018} & 0.04 \\ 
fr3/office & 0.083 ($\pm$ 0.001) & 0.058 & 0.03 & \textbf{0.01} \\ \bottomrule \end{tabular} \end{minipage} \hfill \begin{minipage}{0.37\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/velocities.pdf} \captionof{figure}{Inferred $xyz$-velocity.} \vspace{-0.5em} \label{fig:velocities} \includegraphics[width=1.0\linewidth]{figures/runtimes.pdf} \captionof{figure}{Runtime breakdown.} \vspace{-0.5em} \label{fig:runtimes} \includegraphics[width=1.0\linewidth]{figures/sdf_vs_vi.pdf} \captionof{figure}{Mapping comparison.} \vspace{-0.5em} \label{fig:sdfvsvi} \end{minipage} \end{table} \subsection{Approximations for Runtime Improvement} \label{sec:runtime} All of our approximations are motivated by the real-time constraint, dictating the need for closed-form map updates, a Laplace approximation, linearisation assumptions and a surrogate pose optimisation objective. \Cref{fig:runtimes} shows a runtime breakdown for different image resolutions, measured on an NVIDIA 1080 Ti GPU and an Intel(R) Xeon(R) W-2123 CPU at 3.6 GHz. The heaviest operations are rendering and the gradient-based pose optimisation. Depending on movement speed, rendering can happen periodically, whenever a new anchor image prediction for pose optimisation is needed. This leaves us with a total update rate of 10 to 15 Hz, updating the map and state at every data step. In \cref{fig:sdfvsvi} we also compare closed-form map updates to map inference via gradient descent (e.g. as in \citep{mirchev2021variational,nerf,sucar2021imap,Zhu2022CVPR}). While gradient descent is more accurate on a bigger budget, it is much more expensive. For example, to match the accuracy of the closed-form updates, which take less than 10 ms, one would need ca.\ 250 ms of optimisation, which is impractical. These runtimes are for a voxel grid, which is significantly faster than neural representations \citep{kayalibay2022tracking}; the latter would only exacerbate the problem.
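The flavour of a closed-form per-voxel map update can be illustrated with a conjugate Gaussian update, where precisions add and the mean is a precision-weighted average. This is a generic sketch under an independent-voxel assumption, not PRISM's exact update rule; the function name and shapes are our own.

```python
import numpy as np

def fuse_sdf(mu, prec, sdf_obs, obs_prec):
    """Closed-form per-voxel Bayesian update of a Gaussian SDF map.

    With a Gaussian prior N(mu, 1/prec) per voxel and a Gaussian
    observation model, the posterior is available in closed form.
    mu, prec: (N,) current map mean and precision per voxel
    sdf_obs:  (N,) observed SDF values (NaN where unobserved)
    obs_prec: scalar observation precision
    """
    seen = ~np.isnan(sdf_obs)
    new_prec, new_mu = prec.copy(), mu.copy()
    new_prec[seen] = prec[seen] + obs_prec                 # precisions add
    new_mu[seen] = (prec[seen] * mu[seen]
                    + obs_prec * sdf_obs[seen]) / new_prec[seen]
    return new_mu, new_prec

mu, prec = np.zeros(4), np.full(4, 1e-2)                   # weak prior
obs = np.array([0.5, -0.2, np.nan, 0.0])
mu, prec = fuse_sdf(mu, prec, obs, obs_prec=1.0)
# observed voxels move toward the measurement; unobserved ones stay put
```

An update like this is a handful of vectorised array operations per frame, which is why it stays well under the 10 ms budget mentioned above, in contrast to iterative gradient descent.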
\vspace{-0.5em} \section{Limitations and Conclusion} SDF voxel grids allow for closed-form updates, but their memory footprint limits the maximum resolution and scene size. Voxel hashing~\citep{niessner2013real} or octrees \citep{steinbrucker2013large} can directly replace them for memory efficiency. Neural maps and dynamically changing maps have remained out of our scope. Their probabilistic formulation and closed-form updates require further investigation. Our map factorises over voxels with no inter-region correlation, which could also be improved. PRISM provides interpretable uncertainty in real-time, but estimation is approximate. Obtaining perfectly calibrated uncertainty on a budget remains an open question (see \cref{app:uncertainty}). While filtering works for our generative assumptions indoors, filters cannot revisit past errors and can drift in large scenes with high levels of exploration \citep{cadena2016past}. We leave large-scale inference considerations for future work. We have introduced PRISM, a method for probabilistic filtering in a predefined spatial state-space model. Our solution runs in real-time, provides state and map uncertainty, and infers a dense map and a 6-DoF state trajectory with velocities. It is comparably accurate to state-of-the-art SLAM in indoor environments. To the best of our knowledge this is the first real-time fully-probabilistic solution for SLAM that combines differentiable rendering and agent dynamics. We validated our method on three challenging data sets, featuring unmanned aerial vehicles and a handheld camera. The results are promising, establishing PRISM as a viable state estimator for downstream model-based control. 
\clearpage \acknowledgments{We thank our reviewers for the thoughtful discussion, it helped us to better position our contribution.} \subsubsection*{Notice} This arXiv version of the paper is slightly adapted from the \href[]{https://proceedings.mlr.press/}{paper published at CoRL 2022, PMLR} by Atanas Mirchev, Baris Kayalibay, Ahmed Agha, Patrick van der Smagt, Daniel Cremers, Justin Bayer. The PMLR publication is licensed under \href[]{https://creativecommons.org/licenses/by/4.0/legalcode}{CC BY 4.0}.
\section{Introduction} The Einstein equation can be formulated in the language of exterior algebra for any spacetime dimension higher than two, \begin{align} \widetilde{G}_a := \frac{1}{2} \widetilde{R}^b{}_c \wedge *e_{ab}{}^c = \kappa \widetilde{\tau}_a[matter], \end{align} where $e^a$ is the orthonormal coframe (or orthonormal basis 1-form), $\widetilde{R}^a{}_b$ is the Riemann curvature 2-form, $*$ denotes the Hodge dual map, $\kappa$ is a coupling constant, $\widetilde{\tau}_a[matter]$ denotes the energy-momentum 3-form of matter and $\widetilde{G}_a$ is the Einstein tensor 3-form. In four dimensions the Einstein tensor 3-form has 16 components. On the other hand, the Riemann curvature 2-form has 20 independent components (36 from $\widetilde{R}^a{}_{b}$ minus 16 from the Bianchi identity, $\widetilde{R}^a{}_{b} \wedge e^b=0$). Thus in vacuum, $\widetilde{\tau}_a[matter]=0$, even though all components of the Einstein tensor vanish, some components of $\widetilde{R}^a{}_b$ may survive, so gravitational waves are allowed in empty spacetime. If one carries out the same analysis in a three-dimensional spacetime, one finds 9 components in $\widetilde{G}_a$ and 6 independent components in $\widetilde{R}^a{}_b$. Consequently, once the Einstein tensor vanishes, all components of $\widetilde{R}^a{}_{b}$ must also vanish. This means that there can be no gravitational waves in vacuum; correspondingly, in three dimensions bare Einstein general relativity is not a viable theory. Therefore there is a wide literature on modified general relativity in three dimensions \cite{deser_jackiw_temp_1982}-\cite{hakan_tekin_2021}. One modification is to go beyond Riemannian geometry. First, we can enlarge the geometry by allowing torsion. Since the torsion tensor is thought to be sourced by fermionic matter, it is natural to couple a Dirac spinor to three-dimensional Einstein theory.
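The component counting in the paragraph above can be reproduced with the standard formula $n^2(n^2-1)/12$ for the independent components of the Riemann tensor. The following sanity check is ours (the function names are illustrative, not from the paper):

```python
from math import comb

def riemann_components(n):
    # independent components of the Riemann tensor in n dimensions
    return n**2 * (n**2 - 1) // 12

def einstein_form_components(n):
    # G_a is an (n-1)-form carrying one frame index a:
    # n * C(n, n-1) = n^2 components (no symmetry imposed)
    return n * comb(n, n - 1)

def curvature_2form_components(n):
    # R^a_b: C(n,2) antisymmetric index pairs times C(n,2) 2-form
    # components, minus the first Bianchi identity R^a_b ^ e^b = 0,
    # which gives n equations among 3-forms (C(n,3) components each)
    return comb(n, 2) ** 2 - n * comb(n, 3)

assert (riemann_components(4), einstein_form_components(4)) == (20, 16)
assert (riemann_components(3), einstein_form_components(3)) == (6, 9)
assert curvature_2form_components(4) == 20 and curvature_2form_components(3) == 6
```

Both counting routes agree: 36 − 16 = 20 in four dimensions and 9 − 3 = 6 in three, matching the argument that vanishing $\widetilde{G}_a$ forces $\widetilde{R}^a{}_b = 0$ only in three dimensions.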
For that we need to know the Lorentz-covariant exterior derivative of a spinor, $\psi$, and of its adjoint, $\overline{\psi}$. They are given by the formulas, \begin{align} D\psi = d\psi + \frac{1}{2} \omega^{ab}\sigma_{ab} \psi \qquad \text{and} \qquad D\overline{\psi} = d\overline{\psi} - \frac{1}{2} \overline{\psi} \sigma_{ab} \omega^{ab} \label{eq:cov-deriv-spinor} \end{align} where $\sigma_{ab}=-\sigma_{ba}$ is the generator of the restricted special Lorentz group, $SO_+(1,2)$, and $\omega_{ab}= -\omega_{ba}$ is the connection 1-form for the orthonormal frame bundle. On the other hand, it is known that $SO_+(1,2)$ is doubly covered by the $Spin_+(1,2)$ group, which lives inside the four-dimensional even subalgebra, $Cl^+(1,2)$, of the eight-dimensional Clifford algebra, $Cl(1,2)$. Meanwhile, a basis set of $Cl^+(1,2)$ is given by $\{1, \sigma_{ab}\}$. While the element $\sigma_{ab}$ generates Lorentz transformations via the exponentiation $S = e^{\frac{1}{2}\sigma_{ab}\vartheta^{ab}(x)}$, the unit element generates a scale transformation via $ W = e^{1 f(x)}=e^{f(x)} \in \mathbb{R}^+$, where $\vartheta^{ab}(x)$ and $f(x)$ are the corresponding transformation parameters. A Lorentz transformation relating any two orthonormal coframes can be written as $\gamma' = S \gamma S^{-1}$ in terms of the $Cl(1,2)$-valued 1-form $\gamma = \gamma_a e^a$, where $2\sigma_{ab}=\frac{1}{2}(\gamma_a\gamma_b - \gamma_b \gamma_a)$ and $\eta_{ab}=\frac{1}{2}(\gamma_a\gamma_b + \gamma_b \gamma_a)$. Since $Cl^+(1,2)$ is four-dimensional, $\gamma_a$ can be represented by real $2\times 2$ matrices, in which case a spinor $\psi$ is represented by a two-component complex column matrix and transforms as $\psi'= S \psi$ under a Lorentz transformation represented by the $2\times 2$ matrix $S$. In this work we aim to extend the covariant derivative of a spinor given by equation (\ref{eq:cov-deriv-spinor}) so as to include the rescaling generated by the unit element $I$ of $Cl^+(1,2)$.
As a scale transformation acts on the orthonormal coframe as $e^a \to W e^a$, there are two possibilities for the affine connection: $\omega^a{}_b \to \omega^a{}_b$ or $\omega^a{}_b \to \omega^a{}_b - \delta^a_b W^{-1}dW$. Both of them leave the curvature 2-form invariant. Here we adopt the first option, because we want to study a modification of the Einstein-Cartan theory that is formulated in Riemann-Cartan spacetime with a metric compatible connection before and after a rescaling. Thus we leave the affine connection scale-invariant, as in Ref.\cite{tekin-robin-1982-PLB}. Now, by combining the two transformations, we define the Weyl group $W(2,2) := SO_+(1,2)\otimes \mathbb{R}^+$ with four parameters, $ \{ a_{01},a_{02},a_{12}, f \}$. Consequently, we postulate the transformation rules for some basic quantities under a $W(2,2)$-transformation, \begin{subequations} \begin{align} e^{a'} &= W L^{a'}{}_a e^a \quad \text{with} \quad \eta_{a'b'} = L^{a}{}_{a'} L^{b}{}_{b'} \eta_{ab} \quad \text{so} \quad \iota_{a'}= W^{-1} L^a{}_{a'} \iota_a , \\ \omega^{a'}{}_{b'} &= L^{a'}{}_a \omega^a{}_b L^b{}_{b'} + L^{a'}{}_a d L^a{}_{b'} , \\ \psi' &= W^{-1} S \psi \quad \text{and} \quad \overline{\psi'} = W^{-1} \overline{\psi} S^{-1}, \end{align} \end{subequations} where $L \in SO_+(1,2)$ is generated by $\sigma_{ab}$ and $W \in \mathbb{R}^+$ is generated by $I$, both of which are basis elements of the $Cl^+(1,2)$ algebra. Accordingly, the transformations of non-metricity, torsion and curvature are readily calculated, \begin{subequations} \begin{align} Q_{a'b'} &= L^{a}{}_{a'} L^{b}{}_{b'} Q_{ab} , \\ T^{a'} &= W L^{a'}{}_{a} \left( T^a + W^{-1} dW \wedge e^a \right),\\ R^{a'}{}_{b'} &= L^{a'}{}_{a} R^{a}{}_{b} L^b{}_{b'} . \end{align} \end{subequations} It is worth noticing that non-metricity and curvature are scale-invariant, but torsion is not. Nonetheless, the additive contribution in the torsion transformation will be useful when extending the covariant derivative of a spinor.
More specifically, we will need the $W(2,2)$-transformed trace 1-form of torsion, $T=\iota_a T^a$, \begin{align} T' = T - 2 W^{-1} dW . \end{align} Then we write the $W(2,2)$-covariant exterior derivative of a spinor, $D\psi$, and of its adjoint, $D\overline{\psi} := (D\psi)^\dagger \gamma_{0}$, \begin{align} \label{eq:cov-derits-spinors} D\psi = d\psi + \Omega \psi - \frac{1}{2}IT \psi \qquad \text{and} \qquad D\overline{\psi} = d{\overline{\psi}}- \overline{\psi} \Omega - \frac{1}{2}IT \overline{\psi}. \end{align} The term $IT\psi/2$ is the novelty of this paper. Here the quantity $ \Omega := \frac{1}{2} \omega^{ab} \sigma_{ab}$ must transform according to \begin{align} \Omega' = S \Omega S^{-1} + S dS^{-1} \end{align} for $D\psi$ and $ D\overline{\psi}$ to transform covariantly, i.e., $ D\psi' = W^{-1} S (D\psi)$ and $D\overline{\psi'} = W^{-1} (D\overline{\psi}) S^{-1}$. It is worth recalling that the transformation elements are generated by all basis elements, $\{I, \sigma_{ab}\}$, of the $Cl^+(1,2)$ Clifford algebra as \begin{align} S = e^{\frac{1}{2}\sigma_{ab}\vartheta^{ab}(x)} \in Spin_+(1,2) \quad \text{and} \quad W = e^{I f(x)} \in \mathbb{R}^+ . \end{align} On the other hand, there is an inconsistency in the formulation of the standard Einstein-Cartan theory that is often overlooked. To see this problem explicitly, we recall the two formulations of the Dirac theory, namely the equation approach and the Lagrangian approach. First, a spinor, $\psi$, and its exterior derivative, $d\psi$, are defined in both formulations. Then the covariant exterior derivative of the spinor, $D\psi$, is postulated via the minimal coupling principle, which simply means replacing $d$ with $D$. Finally, following the equation approach, the Dirac equation is written as \begin{align} *\gamma \wedge D\psi + m \psi *1 =0 . \end{align} However, the Dirac equation obtained from the Dirac Lagrangian by an independent variation is \begin{align} *\gamma \wedge \left(D - \frac{1}{2} T \right)\psi + m \psi *1 =0 .
\end{align} The term, $T/2 := \iota_aT^a/2$, causes an inconsistency between the two formulations. Our formulation in this paper remedies this issue as well. In the next section we summarize our notations, conventions and definitions. Then we formulate the extended Einstein-Cartan theory by giving a Lagrangian 3-form which is invariant under a $W(2,2)$-transformation. After obtaining the variational equations, we solve the affine connection analytically in terms of the spinor field and a scalar field, which has to be introduced for the scale invariance of the Einstein-Hilbert Lagrangian. By substituting our findings into the other field equations, we rewrite them as Riemannian terms plus new terms coming from the torsion of the geometry. We especially trace those terms caused by our novel contribution to the covariant derivative of the spinor. As a last step, we insert the calculated affine connection back into the total Lagrangian 3-form, add a constraint term $\lambda_a \wedge T^a$ enforcing zero torsion, and compute the field equations by varying this total Riemannian Lagrangian. In the end we observe that the two formulations are equivalent, but notice that the non-Riemannian one is tidier. \section{Notations, conventions, definitions} The triple $\{M,g,\omega\}$ defines a metric affine geometry, where $M$ is a three-dimensional orientable and differentiable manifold, $g$ is a non-degenerate metric and $\omega$ represents the metric compatible full (or affine) connection \cite{thirring1997}-\cite{lounesto1997}. We denote the orthonormal coframe by $e^a$, and then write the metric as $g=\eta_{ab} e^a \otimes e^b$ where $\eta_{ab}$ is the Minkowski metric with signature $(-,+,+)$.
In the language of exterior algebra, $e^a$ is called an orthonormal 1-form, and the Cartan structure equations define the nonmetricity 1-form, the torsion 2-form and the curvature 2-form, respectively, \begin{subequations}\label{eq:cartan-ort} \begin{align} Q_{ab} &:= -\frac{1}{2} D\eta_{ab} = \frac{1}{2} (\omega_{ab} + \omega_{ba})=0, \label{eq:nonmetric}\\ T^a &:= De^a = de^a + \omega^a{}_b \wedge e^b \neq 0, \label{eq:tors}\\ R^a{}_b &:= D\omega^a{}_b := d \omega^a{}_b + \omega^a{}_c \wedge \omega^c{}_b \neq 0, \label{eq:curv} \end{align} \end{subequations} where $d$ is the exterior derivative, $D$ is the exterior Lorentz-covariant derivative and $\wedge$ is the exterior product. The metric compatibility condition (\ref{eq:nonmetric}) implies that the full connection 1-form is anti-symmetric, $\omega_{ab}=-\omega_{ba}$. Accordingly, it can be decomposed uniquely into a Riemannian piece, $\widetilde{\omega}_{ab}$, and a non-Riemannian piece, $K_{ab}$, \begin{align} \omega_{ab}=\widetilde{\omega}_{ab} + K_{ab}, \label{eq:connec-decom} \end{align} where $\widetilde{\omega}_{ab} = - \widetilde{\omega}_{ba}$ is the Levi-Civita connection 1-form and $K_{ab} = - K_{ba}$ is the contortion tensor 1-form \begin{subequations} \begin{align} \widetilde{\omega}_{ab} &= \frac{1}{2} \left[ -\iota_a de_b + \iota_b de_a + (\iota_a \iota_b de_c) e^c \right] & &\text{or} & \widetilde{\omega}^a{}_b \wedge e^b &= -de^a , \label{eq:Levi-Civita}\\ K_{ab} &= \frac{1}{2} \left[ \iota_a T_b - \iota_b T_a - (\iota_a \iota_b T_c) e^c \right] & &\text{or} & K^a{}_b \wedge e^b &= T^a . \label{eq:contortion} \end{align} \end{subequations} Here $\iota_a := \iota_{X_a}$ denotes the interior product with respect to the orthonormal basis vector $X_a$.
By substituting (\ref{eq:connec-decom}) into (\ref{eq:curv}) we decompose the full curvature as well \begin{align} R^a{}_b = \widetilde{R}^a{}_b + \widetilde{D}K^a{}_b + K^a{}_c \wedge K^c{}_b \label{eq:decomp-curva} \end{align} where $\widetilde{R}^a{}_b$ is the Riemannian curvature 2-form and $ \widetilde{D} $ denotes the covariant exterior derivative, \begin{subequations} \begin{align} \widetilde{R}^a{}_b &:= d \widetilde{\omega}^a{}_b + \widetilde{\omega}^a{}_c \wedge \widetilde{\omega}^c{}_b ,\\ \widetilde{D}K^a{}_b &:= d K^a{}_b + \widetilde{\omega}^a{}_c \wedge K^c{}_b - \widetilde{\omega}^c{}_b \wedge K^a{}_c . \end{align} \end{subequations} All the Riemannian quantities will be labelled by a tilde over them in this paper. Some useful notations and identities are listed as \begin{subequations} \label{eq:identities1} \begin{align} e^{ab\cdots} &:= e^a \wedge e^b \wedge \cdots , \qquad \iota_{ab \cdots } := \iota_a \iota_b \cdots , \qquad \iota_b e^a = \delta^a_b, \\ e^a \wedge *e^b &= \eta^{ab} *1 , \qquad e^a \wedge *e^{bc} = -\eta^{ab} *e^c + \eta^{ac} *e^b, \\ e^a \iota_a \Theta &= p \Theta , \qquad *(\Theta \wedge e_a) = \iota_a * \Theta, \qquad \Theta \wedge * \Phi = \Phi \wedge *\Theta \\ *1 &= \frac{1}{3!} \epsilon_{abc} e^{abc} , \qquad *e_a = \frac{1}{2!} \epsilon_{abc} e^{bc} , \qquad *e_{ab} = \epsilon_{abc} e^c , \qquad *e_{abc} = \epsilon_{abc}, \\ \epsilon^{abl}\epsilon_{abc} &=-2! 
\delta^l_c, \qquad \epsilon^{akl}\epsilon_{abc}=-\left( \delta^k_b \delta^l_c - \delta^k_c \delta^l_b \right), \qquad \epsilon^{abc}\epsilon_{klm} =- \begin{vmatrix} \delta^a_k & \delta^a_l & \delta^a_m\\ \delta^b_k & \delta^b_l & \delta^b_m\\ \delta^c_k & \delta^c_l & \delta^c_m \end{vmatrix} , \\ D*e_a &= *e_{ab} \wedge T^b , \qquad D*e_{ab} = *e_{abc} \wedge T^c , \qquad D*e_{abc} = D\epsilon_{abc} =0 , \end{align} \end{subequations} where $\Theta$ and $\Phi$ are some $p$-forms, the $*$ symbol denotes the Hodge map, $\epsilon_{abc}$ is the totally anti-symmetric epsilon symbol with $\epsilon_{012}=+1$ and $\delta^a_b$ is the Kronecker delta\footnote{When $Q_{ab}\neq 0$, it is $D*e_{abc}=D\epsilon_{abc}=-Q \epsilon_{abc}$ where $Q=\eta_{ab}Q^{ab}$.}. The four-dimensional even subalgebra, $Cl^+(1,2)$, of the eight-dimensional Clifford algebra, $Cl(1,2)$, is generated by the unit matrix, $I$, and the gamma matrices $\gamma_a$ satisfying the condition \begin{align} \gamma_a \gamma_b + \gamma_b \gamma_a = 2 \eta_{ab} I . \end{align} One can consult the Appendix for details about the Clifford algebra. We choose the real representation of the gamma matrices given in equation (\ref{eq:dirac-matricies}), in which case the basis set of $Cl^+(1,2)$ is given by $\{ I, \sigma_{ab} \}$ or $ \{ \gamma_5, \gamma_{a} \}$ because of the results, \begin{align} \gamma_5 := \gamma_0 \gamma_1 \gamma_2 = I , \qquad \sigma_{ab} := \frac{1}{4} [\gamma_a, \gamma_b] = \frac{1}{2} \epsilon_{abc} \gamma^c . \end{align} Consequently we will encounter two independent covariant bilinears \begin{align} \rho := \overline{\psi} \psi \qquad \text{and} \qquad j_a := \overline{\psi} \gamma_a \psi , \label{eq:bilinears} \end{align} where $\overline{\psi}$ is the Dirac adjoint of $\psi$. As a feature special to three dimensions, one can write $q_{ab} := \overline{\psi} \sigma_{ab} \psi = \frac{1}{2} \epsilon_{abc}j^c$.
Here $q_{ab}$ is a quantity related to the particle's electromagnetic moment, which can be observed in particle physics laboratories. In fact, $q^{ab} \sigma_{ab}$ is the probability density of the particle's electromagnetic moment. They satisfy $ \rho^\dagger = -\rho$ and $j_a^\dagger = j_a$ and $q_{ab}^\dagger = q_{ab}$. The identities below will be helpful in the calculations, \begin{subequations} \label{eq:identities2} \begin{align} \gamma_a \gamma_b &= \eta_{ab} I + \epsilon_{abc} \gamma^c , \\ \sigma_{ab} \gamma_c - \gamma_c \sigma_{ab} &= \eta_{bc} \gamma_a - \eta_{ac} \gamma_b , \\ \sigma_{ab} \gamma_c + \gamma_c \sigma_{ab} &= \epsilon_{abc} I \\ [\sigma_{ab} , \sigma_{cd}] &= -\eta_{ac}\sigma_{bd} + \eta_{ad}\sigma_{bc} + \eta_{bc}\sigma_{ad} - \eta_{bd}\sigma_{ac} \\ \gamma_0 I^\dagger \gamma_0 &= -I , \qquad \gamma_0 \gamma_a^\dagger \gamma_0 = \gamma_a , \qquad \gamma_0 \sigma_{ab}^\dagger \gamma_0 = \sigma_{ab} , \\ \gamma_0^\dagger &= -\gamma_0 , \qquad \gamma_1^\dagger = \gamma_1 , \qquad \gamma_2^\dagger = \gamma_2 , \end{align} \end{subequations} where the symbol ${}^\dagger$ denotes Hermitian conjugation. In this representation, the spinor field $\psi$ can be represented by a two-component complex column matrix, and its Dirac adjoint is defined by $\overline{\psi}:= \psi^\dagger \mathcal{C}$ where $\mathcal{C}$ is the charge conjugation matrix satisfying the relation $ \mathcal{C} \gamma_a \mathcal{C}^{-1} = - \gamma_a^T$. Here ${}^T$ denotes the matrix transpose. As a complementary remark, we recall that the charge conjugated spinor is defined by $\psi_{\mathcal{C}}:= \mathcal{C} \overline{\psi}^T$. In our representation we will use $\mathcal{C} = \gamma_0$, meaning explicitly $\overline{\psi}:= \psi^\dagger \gamma_0$. Correspondingly, after the discussion in the Introduction, we write the $W(2,2)$-covariant exterior derivative of $\psi$ and $\overline{\psi}$ as (\ref{eq:cov-derits-spinors}).
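The algebraic relations above can be verified numerically in one concrete real $2\times 2$ representation. The particular matrices below are our illustrative choice, not necessarily those of eq. (\ref{eq:dirac-matricies}) (which sits in an appendix not reproduced here), but any valid representation satisfies the same identities:

```python
import numpy as np

# One real 2x2 representation of Cl(1,2) with signature (-,+,+)
g0 = np.array([[0., 1.], [-1., 0.]])   # g0^2 = -I
g1 = np.array([[0., 1.], [1., 0.]])    # g1^2 = +I
g2 = np.array([[1., 0.], [0., -1.]])   # g2^2 = +I
gamma = [g0, g1, g2]
eta = np.diag([-1., 1., 1.])
I2 = np.eye(2)

# Defining relation: {gamma_a, gamma_b} = 2 eta_ab I
for a in range(3):
    for b in range(3):
        anticomm = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anticomm, 2 * eta[a, b] * I2)

# gamma_5 := g0 g1 g2 = I, a feature special to three dimensions
assert np.allclose(g0 @ g1 @ g2, I2)

# sigma_ab := (1/4)[gamma_a, gamma_b] = (1/2) eps_abc gamma^c,
# e.g. sigma_01 = (1/2) gamma^2 = (1/2) g2 since eta^{22} = +1
sigma01 = 0.25 * (g0 @ g1 - g1 @ g0)
assert np.allclose(sigma01, 0.5 * g2)
```

All assertions pass, confirming in particular that $\gamma_5 = I$ and $\sigma_{ab} = \frac{1}{2}\epsilon_{abc}\gamma^c$ hold in any such real representation.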
In the standard Einstein-Cartan theory the term $T$ does not appear in $D\psi$. In this work, however, we take special care to include all basis elements $\{I, \sigma_{ab}\}$ of $Cl^+(1,2)$, which are the generators of the $Spin_+(1,2)$ group doubly covering the restricted special Lorentz group, $SO_+(1,2)$. In fact, since the Clifford algebra $Cl^+(1,2)$ has more structure than the matrix algebra $\text{Mat}(2,\mathbb{R})$, it is easier to capture the term $T$ there than in matrix notation. Moreover, since the connection carries the effect of the gravitational field, all possible interactions between gravity and the spinor field are taken into account in the formula (\ref{eq:cov-derits-spinors}). This extra contribution in the definition of the covariant exterior derivative of a spinor is a novel modification. For more discussion on extended covariant derivatives of a spinor, one can consult \cite{koivisto-jimenez2020}. As a final remark we calculate the curvature of the spinor bundle \begin{align} D^2\psi= \frac{1}{2} \left( R^{ab} \sigma_{ab} - dT \right)\psi . \end{align} \section{Scale invariant Einstein-Cartan theory} In this section we first introduce a scalar field $\phi(x)$ transforming as $\phi' = W^{-1} \phi$ under a $W(2,2)$-transformation. Its $W(2,2)$-covariant exterior derivative then becomes, \begin{align} D\phi = d\phi - \frac{1}{2} T \phi, \label{eq:cov-deriv-scalar} \end{align} such that $D\phi' = W^{-1} (D\phi)$.
Now we formulate the scale invariant Einstein-Cartan theory via the Lagrangian 3-form obtained by minimally combining the Einstein-Hilbert Lagrangian, the Dirac Lagrangian and the scalar field Lagrangian, \begin{align} L = L_{EH} + L_D + L_\phi \label{eq:lagrange-nonrieman} \end{align} where \begin{subequations} \begin{align} L_{EH} &= -\frac{1}{2\kappa} \phi R^a{}_b \wedge *e_a{}^b, \label{eq:eins-hilbert1} \\ L_D &= \frac{i}{2} \left[ \overline{\psi} *\gamma \wedge \left(D\psi \right) - \left(D\overline{\psi} \right) \wedge *\gamma \psi \right] + im \phi \overline{\psi} \psi *1, \label{eq:dirac-lag1} \\ L_\phi &= \phi^{-1} D\phi \wedge *D\phi + \mu \phi^3 *1 . \end{align} \end{subequations} Here $\kappa$ is the gravitational coupling constant, $m$ is the mass of the spinor field, $\mu$ is a constant that can be interpreted as the mass of the scalar field and $\gamma := \gamma_a e^a$ is the $Cl(1,2)$-valued 1-form. We introduce the imaginary unit in $L_D$ in order to make it Hermitian, $L_D^\dagger = L_D$. We perform independent variations of $L$ with respect to $e^a$, $\omega^{ab}$, $\overline{\psi}$ and $\phi$.
Then, using $\delta L=0$ and discarding exact forms, we obtain the field equations, \begin{subequations} \begin{align} -\frac{1}{2\kappa} \epsilon_{abc} \phi R^{bc} + \tau_a[\psi] + \tau_a[\phi] &=0, & &\text{COFRAME} \label{eq:first-eqn} \\ -\frac{1}{2\kappa} \left( \epsilon_{abc} \phi T^{c} + d\phi \wedge *e_{ab} \right) + \Sigma_{ab}[\psi] + \Sigma_{ab}[\phi] &=0, & &\text{CONNECTION} \label{eq:second-eqn} \\ i*\gamma \wedge D \psi + i m \phi\psi *1 &=0, & &\text{DIRAC} \label{eq:dirac-eqn} \\ -\frac{1}{2\kappa} R^{ab} \wedge *e_{ab} + im \overline{\psi} \psi *1 + \Delta[\phi] &= 0, & &\text{SCALAR} \label{eq:scalar-eqn} \end{align} \end{subequations} where the energy-momentum 2-forms, $\tau_a[\psi]:=\partial L_D/\partial e^a$, $\tau_a[\phi]:=\partial L_\phi/\partial e^a$, and the angular momentum 2-forms, $\Sigma_{ab}[\psi]:= \partial L_D / \partial \omega^{ab}$, $\Sigma_{ab}[\phi]:= \partial L_\phi / \partial \omega^{ab}$, of the spinor and scalar fields, and the scalar field 3-form, $\Delta[\phi] := \partial L_\phi / \partial \phi$, are obtained as, respectively, \begin{subequations} \begin{align} \tau_a[\psi] :=& \frac{i}{2} \left[ \overline{\psi} \gamma^b \left(D\psi\right) - \left(D\overline{\psi}\right) \gamma^b \psi \right] \wedge *e_{ab} + im \phi \rho *e_a \\ \tau_a[\phi] :=& - \phi^{-1} \left[ (\iota_aD\phi) \wedge *D\phi + D\phi \wedge (\iota_a*D\phi) \right] + \mu \phi^3 *e_a \nonumber \\ &+ D*(D\phi \wedge e_a) - (\iota_a T) \wedge *D\phi - (\iota_aT^b) \wedge \iota_b *D\phi \label{eq:ner-momon-scalar} \\ \Sigma_{ab}[\psi] :=& -\frac{i}{4} \rho e_{ab} ,\\ \Sigma_{ab}[\phi] :=& \frac{1}{2} \left[ e_b \wedge *(D\phi \wedge e_a) - e_a \wedge *(D\phi \wedge e_b) \right] , \\ \Delta[\phi] :=& -\phi^{-2} D\phi \wedge *D\phi -2 d (\phi^{-1} *D\phi) - T \phi^{-1} *D\phi + 3\mu \phi^2 *1. \end{align} \end{subequations} Here it is worth remarking that the term $T/2$ does not appear in the Dirac equation (\ref{eq:dirac-eqn}).
This resolves the inconsistency problem of the Einstein-Cartan theory. Furthermore, the torsion can be solved analytically, after some algebra, in terms of the spinor and scalar fields from the CONNECTION equation (\ref{eq:second-eqn}), \begin{align} T^a = \frac{i\kappa}{2} \phi^{-1} \rho *e^a - \phi^{-1} d\phi \wedge e^{a} . \label{eq:torsion2} \end{align} In general the torsion 2-form can be split into three pieces \begin{align} T^a = \overset{(1)}{T^a} + \overset{(2)}{T^a} + \overset{(3)}{T^a} . \end{align} In three dimensions these are written as \begin{align} \overset{(2)}{T^a} = - \frac{1}{2} T \wedge e^a , \qquad \overset{(3)}{T^a} = \frac{1}{3} \iota^a \mathcal{T} , \qquad \overset{(1)}{T^a} = T^a - \overset{(2)}{T^a} - \overset{(3)}{T^a} , \end{align} where $T :=\iota_a T^a$ is the trace 1-form and $\mathcal{T} := e_a \wedge T^a$ is the trace 3-form. Since $T$ has three components and $\mathcal{T}$ has only one component, $\overset{(2)}{T^a}$ is the vector piece and $\overset{(3)}{T^a}$ is the scalar piece (the so-called axial vector component in four dimensions), respectively. $\overset{(1)}{T^a}$, with five components, is the tensor piece. Correspondingly, our torsion (\ref{eq:torsion2}) contains the second and third pieces but not the first, because of \begin{align} T = 2 \phi^{-1} d\phi \qquad \text{and} \qquad \mathcal{T} = \frac{3i}{2}\kappa \phi^{-1}\rho *1 . \label{eq:trace-torsion} \end{align} When one compares these results with (27) of Ref.\cite{dereli-ozdemir-2013}, it is observed that the non-vanishing trace 1-form along with the trace 3-form of torsion is a new outcome.
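As a short check (added here for the reader, not part of the original derivation), the traces in (\ref{eq:trace-torsion}) follow from (\ref{eq:torsion2}) by direct contraction and wedging, using $\iota_a {*}e^a = 0$, $\iota_a e^a = 3$, $e_a \wedge *e^a = 3 *1$ and $e_a \wedge d\phi \wedge e^a = 0$:

```latex
\begin{align}
T = \iota_a T^a &= \frac{i\kappa}{2}\phi^{-1}\rho \,\iota_a{*}e^a
 - \phi^{-1}\left[ (\iota_a d\phi)\, e^a - d\phi\,(\iota_a e^a) \right]
 = -\phi^{-1}\left( d\phi - 3\, d\phi \right) = 2\phi^{-1} d\phi , \\
\mathcal{T} = e_a \wedge T^a &= \frac{i\kappa}{2}\phi^{-1}\rho \, e_a \wedge *e^a
 - \phi^{-1}\, e_a \wedge d\phi \wedge e^a
 = \frac{3i}{2}\kappa\, \phi^{-1}\rho *1 .
\end{align}
```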
Substituting the result (\ref{eq:torsion2}) into (\ref{eq:cov-deriv-scalar}) yields $D\phi=0$, under which the other field equations take the simpler forms, \begin{subequations} \begin{align} -\frac{1}{2\kappa} \epsilon_{abc} \phi R^{bc} + \tau_a[\psi] + \mu \phi^3 *e_a &=0, & &\text{COFRAME} \label{eq:first-eqn2i} \\ i*\gamma \wedge D \psi + i m \phi\psi *1 &=0, & &\text{DIRAC} \label{eq:dirac-eqn2i} \\ -\frac{1}{2\kappa} R^{ab} \wedge *e_{ab} + im \rho *1 + 3\mu \phi^2 *1 &= 0. & &\text{SCALAR} \label{eq:scalar-eqn2i} \end{align} \end{subequations} \section{Riemannian formulation of the theory} First we calculate the contortion 1-form by substituting (\ref{eq:torsion2}) into (\ref{eq:contortion}) \begin{align} K_{ab} = \phi^{-1} \left[ -\frac{i}{4} \kappa \rho *e_{ab} - (\partial_a \phi) e_b + (\partial_b\phi) e_a \right] \label{eq:contort2} \end{align} where $\partial_a\phi := \iota_a d\phi$. Then, noticing $\widetilde{D}e^a=0$, $\widetilde{D}*e^a=0$, $\widetilde{D}*e^{ab}=0$ and $d\phi \wedge *e_{ab}= (\partial_b \phi) *e_a - (\partial_a\phi) *e_b$, the related quantities are computed \begin{subequations} \begin{align} R_{ab} =& \widetilde{R}_{ab} + \frac{i\kappa}{4} (2\rho \phi^{-2} d\phi -\phi^{-1} d\rho ) \wedge *e_{ab} + \phi^{-1} \left[ \widetilde{D}\left(\partial_b \phi\right) \wedge e_a - \widetilde{D} \left(\partial_a \phi\right) \wedge e_b \right] \nonumber \\ &- \frac{\kappa^2}{16} \rho^2 \phi^{-2} e_{ab} + 2 \phi^{-2} d\phi \wedge \left[ (\partial_a\phi) e_b - (\partial_b \phi) e_a \right] - \phi^{-2} (\partial \phi)^2 e_{ab} .
\label{eq:decomp-curva2} \\ D\psi =& \widetilde{D}\psi + \frac{i}{8} \kappa \rho \phi^{-1} \gamma_a \psi e^a + \frac{1}{2} \phi^{-1} (\partial^a \phi) \gamma^b \psi *e_{ab} - \phi^{-1} d\phi \psi, \label{eq:Dpsi-decomp}\\ D\overline{\psi} =& \widetilde{D}\overline{\psi} - \frac{i}{8} \kappa \rho \phi^{-1} \overline{\psi} \gamma_a e^a - \frac{1}{2} \phi^{-1} (\partial^a \phi) \overline{\psi} \gamma^b *e_{ab} - \overline{\psi} \phi^{-1} d\phi , \label{eq:Dpsi-bar-decomp} \\ \tau_a[\psi] =& \widetilde{\tau}_a[\psi] - \frac{1}{4}\kappa \rho^2 \phi^{-1} *e_a - \frac{i}{2} \rho \phi^{-1} d\phi \wedge e_a , \end{align} \end{subequations} where $\widetilde{D} \left(\partial_a \phi\right) := d \left(\partial_a \phi\right) - \widetilde{\omega}^c{}_a \left(\partial_c \phi\right)$ and $(\partial \phi)^2 := (\partial_c\phi)(\partial^c\phi)$ and \begin{subequations} \begin{align} \widetilde{D}\psi &:= d\psi + \frac{1}{2} \widetilde{\omega}^{ab} \sigma_{ab} \psi \qquad \text{and} \qquad \widetilde{D}\overline{\psi} := d\overline{\psi} - \frac{1}{2} \overline{\psi}\sigma_{ab} \widetilde{\omega}^{ab}, \\ \widetilde{\tau}_a[\psi] &:= \frac{i}{2} \left[ \overline{\psi} \gamma^b \left(\widetilde{D}\psi\right) - \left(\widetilde{D}\overline{\psi}\right) \gamma^b \psi \right] \wedge *e_{ab} + im \rho \phi *e_a . \label{eq:energy-moment-spinor-riemann} \end{align} \end{subequations} We insert these results into (\ref{eq:first-eqn2i}), and by rearranging terms we find the decomposed COFRAME equation, \begin{align} \widetilde{R}_{ab} =& -\kappa \phi^{-1} \epsilon_{abc} \widetilde{\tau}^c[\psi] - \frac{3}{16} \kappa^2 \rho^2 \phi^{-2} e_{ab} + \frac{i\kappa}{4} \phi^{-1} d\rho \wedge *e_{ab} + \phi^{-1} \left[ \widetilde{D} \left(\partial_a \phi\right) \wedge e_b -\widetilde{D}\left(\partial_b \phi\right) \wedge e_a \right] \nonumber \\ &+\phi^{-2}\{ (\partial \phi)^2 e_{ab} - 2 d\phi \wedge \left[ (\partial_a\phi) e_b - (\partial_b \phi) e_a \right] \} + \mu \kappa \phi^2 e_{ab} . 
\label{eq:decomp-coframe2i} \end{align} Now we decompose the DIRAC equation (\ref{eq:dirac-eqn2i}) by using (\ref{eq:Dpsi-decomp}), \begin{align} i*\gamma \wedge \widetilde{D}\psi + im\phi \psi *1 - \frac{3}{8}\kappa \rho \phi^{-1} \psi *1 =0. \label{eq:dirac-decompsi} \end{align} Finally we decompose the SCALAR equation (\ref{eq:scalar-eqn2i}), \begin{align} -\frac{1}{2\kappa} \widetilde{R}^{ab} \wedge *e_{ab} + \frac{3}{16}\kappa \rho^2 \phi^{-2} *1 + im\rho *1 - \frac{1}{\kappa} \phi^{-2} d\phi \wedge *d\phi +\frac{2}{\kappa} \phi^{-1} d*d\phi + 3\mu \phi^2 *1 =0 \label{eq:scalar-decomps} \end{align} At this stage the decomposition of the Lagrangian (\ref{eq:lagrange-nonrieman}) is calculated up to a closed form, \begin{align} \widetilde{L} = \widetilde{L}_{EH} + \widetilde{L}_D - \frac{3}{16}\kappa \rho^2 \phi^{-1} *1 - \frac{1}{\kappa} \phi^{-1} d\phi \wedge *d\phi + \mu \phi^3 *1 + \lambda_a \wedge T^a , \label{eq:lagrang-riemann} \end{align} where $\lambda_a$ is a Lagrange multiplier 1-form constraining the torsion to zero, and the Riemannian Einstein-Hilbert Lagrangian and the Dirac Lagrangian are, respectively, \begin{align} \widetilde{L}_{EH} &= -\frac{1}{2\kappa} \phi \widetilde{R}^{ab} \wedge *e_{ab} , \\ \widetilde{L}_D &= \frac{i}{2} \left[ \overline{\psi} *\gamma \wedge \left(\widetilde{D}\psi \right) - \left(\widetilde{D}\overline{\psi} \right) \wedge *\gamma \psi \right] + im \phi \overline{\psi} \psi *1. \label{eq:dirac-lag-rieman} \end{align} The third and fourth terms in (\ref{eq:lagrang-riemann}) represent the existence of torsion. One can follow the torsional effects by tracing these terms in the Riemannian spacetime geometry. The $\lambda_a$-variation of $\widetilde{L}$ guarantees that the connection is the Levi-Civita one, $\widetilde{\omega}^a{}_b$. Then the $\overline{\psi}$-variation and $\phi$-variation yield the DIRAC equation (\ref{eq:dirac-decompsi}) and the SCALAR equation (\ref{eq:scalar-decomps}), respectively.
Thus the $e^a$- and $\omega^{ab}$-variations lead to the following equations, respectively, \begin{subequations} \begin{align} -\frac{\phi}{2\kappa} \epsilon_{abc}\widetilde{R}^{bc} + \widetilde{\tau}_a[\psi] - \frac{3}{16} \kappa \rho^2 \phi^{-1} *e_a + \frac{\phi^{-1}}{\kappa} \widetilde{\tau}_a[\phi] + \mu \phi^3 *e_a + \widetilde{D}\lambda_a &=0, \label{eq:coframe3-eqn} \\ - \frac{1}{\kappa} d\phi \wedge *e_{ab} -\frac{i}{2} \rho e_{ab} + e_b \wedge \lambda_a - e_a \wedge \lambda_b &=0, \label{eq:connection3-eqn} \end{align} \end{subequations} where \begin{align} \widetilde{\tau}_a[\phi] := \iota_ad\phi \wedge *d\phi + d\phi \wedge \iota_a *d\phi = 2(\partial_a\phi) (\partial_b \phi) *e^b -(\partial\phi)^2 *e_a . \end{align} The Lagrange multiplier can be computed from the second equation (\ref{eq:connection3-eqn}) by applying $\iota_{ab}$, \begin{align} \lambda_a = -\frac{i}{4} \rho e_a + \frac{1}{\kappa} (\partial^b \phi) *e_{ab} . \end{align} Then it turns out that \begin{align} \widetilde{D}\lambda_a = - \frac{i}{4} d\rho \wedge e_a + \frac{1}{\kappa} \widetilde{D}(\partial^b \phi) \wedge *e_{ab} . \end{align} Finally, substituting this result into equation (\ref{eq:coframe3-eqn}) and rearranging the terms yields equation (\ref{eq:decomp-coframe2i}), as expected. Consequently, we studied the same theory in two different geometries and saw that they are equivalent at both the Lagrangian level and the level of field equations. Meanwhile it is observed that the non-Riemannian formalism (\ref{eq:lagrange-nonrieman}) looks tidier than the Riemannian one (\ref{eq:lagrang-riemann}). \section{Discussion} Since general relativity does not predict gravitational waves in empty space in three dimensions, three-dimensional extended gravity models attract considerable attention. Therefore we treated the Einstein-Cartan theory in three dimensions, starting with a discussion of the symmetry group. 
Then we concluded that the complete gauge group should include the scale group along with the Lorentz group. It is the Weyl group, $W(2,2)=SO_+(1,2) \otimes \mathbb{R}^+$ with four parameters. At this point we postulated the scale transformation of the affine connection 1-form so as to leave the metricity condition invariant. We also extended the covariant exterior derivative of a spinor by adding the term $-T\psi/2$, where $T=\iota_a T^a$. Thus, we saw that our new definition of $D\psi$ resolves the inconsistency problem in the Einstein-Cartan theory, namely that the Dirac equations obtained at the equation level and at the Lagrangian level are not the same. Afterwards, we wrote a $W(2,2)$-invariant Lagrangian by introducing a compensating scalar field, $\phi$. We computed the field equations by independent variations and could solve for torsion algebraically. Substitution of this result into the other equations simplified them significantly. In the subsequent section, we decomposed all the relevant non-Riemannian quantities as a Riemannian quantity plus a torsional contribution. Accordingly, we rewrote the COFRAME, DIRAC and SCALAR equations, and also the Lagrangian 3-form, in the Riemannian geometry with novel contributions. Finally we verified that the decomposed field equations are the variational field equations of the decomposed Lagrangian. Consequently, we showed the equivalence of the two formulations of the same theory. Of course, the non-Riemannian formulation seems much tidier, but one can gain physical insight about the torsion tensor by tracing the novel terms in the Riemannian formulation. In our treatment the trace 1-form $T$ of torsion behaves like a gauge potential of the scale transformation. Meanwhile, it is known from the standard gauge-theory procedure that one should add the kinetic counterpart, $dT$, of the gauge potential to the Lagrangian. 
That is, a scale-invariant term, $ \frac{\nu}{2} \phi^{-1} dT \wedge *dT$, is expected to appear in the Lagrangian (\ref{eq:lagrange-nonrieman}), where $\nu$ is a coupling constant. But when we add that term, the connection variation yields the following extra term in the CONNECTION equation (\ref{eq:second-eqn}) \begin{align} \frac{\nu}{2} \left[ e_a \wedge \iota_b d(\phi^{-1} *dT) - e_b \wedge \iota_a d(\phi^{-1} *dT) \right] . \end{align} Since torsion now gains propagating degrees of freedom through the contributions $dT$ and $d*dT$, it can no longer be solved for algebraically. The generalisation of our model including this term, together with some explicit solutions, is left as a future project. \section*{Appendix}
\section{Preliminaries} \subsection{Colored multiple zeta values} Consider the series $$\Li_{s_1,\cdots,s_k}(a_1,\cdots,a_k) = \sum_{n_1>\cdots>n_k\geq 1}\frac{a_1^{n_1}\cdots a_k^{n_k}}{n_1^{s_1} \cdots n_k^{s_k}}$$ known as the \textit{multiple polylogarithm}. Here $k$ is called the \textit{depth} and $s_1+\cdots+s_k$ is called the \textit{weight}. When the $a_i$ are $N$-th roots of unity, the $s_i$ are positive integers and $(a_1, s_1) \neq (1,1)$, $\Li_{s_1,\cdots,s_k}(a_1,\cdots,a_k)$ is called a colored multiple zeta value (CMZV) of weight $s_1+\cdots+s_k$ and \textit{level} $N$. We say a complex number is a CMZV of weight $w$ and level $N$ if it is in the $\mathbb{Q}(e^{2\pi i /N})$-span of such numbers, which we denote by $\CMZV{N}{w}$. The special case when all $a_i = 1$ gives the well-known \textit{multiple zeta values}. ~\\[0.01in] The motivic dimension of $\CMZV{N}{w}$ for small $N$ is well-known \cite{delignegroupes}. For small $N$ and $w$, we have an explicit database expressing each such CMZV as a linear combination of elements whose number equals the motivic dimension. For $N=1, 2$, this is classical work of \cite{ihara2006derivation}, \cite{hoffman1997algebra}; Zhao (\cite{zhao2016multiple}, \cite{ZhaoStandard},\cite{zhao2008multiple}) made important progress for general $N$. Reaching the motivic dimension for general $N$ is still open; currently it is only known for $N\leq 8$ (\cite{au2022iterated}). Due to limited space, this article only has the occasion to use $N=1,2,4,6$. For levels $N=1,2$, an extensive database is the MZV datamine \cite{MZVdatamine}. For higher level, the only known source for such data seems to be the author's Mathematica package, which can be downloaded \href{https://www.researchgate.net/publication/357601353}{here}. 
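As a quick numerical sanity check of the definition (the cutoffs below are arbitrary choices, not part of the theory), one can verify Euler's classical level 1, weight 3 evaluation $\zeta(2,1)=\zeta(3)$ by truncating the double sum:

```python
def mzv_2_1(N):
    """Truncated double sum for zeta(2,1) = sum_{n1 > n2 >= 1} 1/(n1^2 * n2)."""
    total, H = 0.0, 0.0  # H accumulates the inner harmonic number H_{n1 - 1}
    for n1 in range(2, N + 1):
        H += 1.0 / (n1 - 1)
        total += H / n1**2
    return total

approx = mzv_2_1(200000)
zeta3 = sum(1.0 / n**3 for n in range(1, 200000))
print(approx, zeta3)  # both approach zeta(3) = 1.2020569...
```

The tail of the double sum decays only like $\log N/N$, so the agreement here is to a few digits; the fast-converging central-binomial series discussed below do much better.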
We will use such an explicit reduction into (a set of arbitrarily chosen) basis elements\footnote{We remark that the word ``basis'' is a misnomer: it is not known whether these elements are truly linearly independent, but for explicit calculations this does not matter.} for our calculations. ~\\[0.01in] We denote by $G$ the Catalan constant. When $D$ is a fundamental discriminant, let $L_D(s) = \sum_{n\geq 1} \left(\frac{D}{n}\right) n^{-s}$, where $\left(\frac{D}{n}\right)$ is the Jacobi symbol. When $D = -4$, we have $L_{-4}(s) = \beta(s)$, the Dirichlet beta function. It can be shown that $L_D(s)$ is a CMZV of level $|D|$ and weight $s$. \subsection{Harmonic numbers} Fix a positive integer $N$ and $0<\gamma_i \leq 1$; we denote $$H(\vec{s},\vec{\gamma},N) = \sum_{\substack{n_1+\gamma_1 > n_2+\gamma_2 > \cdots > n_k+\gamma_k > 0 \\ n_1<N}} \frac{1}{(n_ 1+\gamma_1)^{s_ 1}\cdots (n_k+\gamma_k)^{s_k}}$$ where the $n_i$ in the sum are integers, with $\vec{s} = (s_1,\cdots,s_k) ,\vec{\gamma} = (\gamma_1,\cdots,\gamma_k)$. We write $\{a\}_n$ for the $n$-component constant vector with entry $a$: $(a,a,\cdots,a)$. The special case when $\vec{s}=(s)$ and $\vec{\gamma} = (\gamma)$ is denoted by $\hbar_N^{(s)}(\gamma)$. This notation, when $\gamma=1$, gives the familiar harmonic number $H_N^{(s)} = 1+1/2^s+\cdots+1/N^s$. $\hbar_N^{(s)}(\gamma)$ is the most important type of harmonic number that features in our examples, but many of the CMZV-theoretic statements in the next subsection also hold for $H(\vec{s},\vec{\gamma},N)$. \par It follows from Newton's identities for symmetric functions that $H(\{s\}_k, \{\gamma\}_k, N)$ can be written as a (weighted) polynomial in terms of $\hbar_N^{(is)}(\gamma)$ for various $i$. 
For example: $$\begin{aligned}\label{bellpoly} H(\{s\}_1, \{\gamma\}_1, N) &= \hbar_N^{(s)}(\gamma) \\ H(\{s\}_2, \{\gamma\}_2, N) &= \frac{1}{2}(\hbar_N^{(s)}(\gamma)^2 - \hbar_N^{(2s)}(\gamma)) \\ H(\{s\}_3, \{\gamma\}_3, N) &= \frac{1}{6}(\hbar_N^{(s)}(\gamma)^3 - 3\hbar_N^{(s)}(\gamma)\hbar_N^{(2s)}(\gamma) + 2\hbar_N^{(3s)}(\gamma)) \end{aligned}$$ The general formula is given by complete Bell polynomials \cite[Chapter~3]{comtet2012advanced}. Let $(a)_n = \Gamma(a+n)/\Gamma(a)$ be the Pochhammer symbol; for $0<\gamma\leq 1$, we have the series expansion (around $a=0$): \begin{equation}\label{pochhammerexpansion}(\gamma+a)_n = (\gamma)_n \left(1+ \sum_{k\geq 1} H(\{1\}_k, \{\gamma\}_k, n) a^k \right)\end{equation} To see this, write $$\frac{(\gamma+a)_n}{(\gamma)_n} = \prod_{i=0}^{n-1} \frac{\gamma+a+i}{\gamma+i} = \prod_{i=0}^{n-1} (1+\frac{a}{\gamma+i})$$ and the claim \ref{pochhammerexpansion} follows from the definition of $H(\{1\}_k, \{\gamma\}_k,n)$. \subsection{Series that give CMZVs} The following follows directly from the definition of CMZVs: \begin{proposition}\label{CMZVsum1} Let $0<\gamma_0\leq 1$ and let $\vec{\gamma_i}$ be rational vectors such that each component lies in $(0,1]$. Let $N$ be an integer such that all $N\vec{\gamma_i}$, $N\gamma_0$ and $Na$ are integral. The following series, when convergent, is a level $N$ CMZV: $$\sum_{n_0>0} \frac{e^{2\pi i a n_0}}{(n_0+\gamma_0)^s} H(\vec{s_1},\vec{\gamma_1},n_0) \cdots H(\vec{s_i},\vec{\gamma_i},n_0) \in \sum_{1\leq i\leq w} \CMZV{N}{i}$$ where $w = |\vec{s_1}| + \cdots + |\vec{s_i}| + s$. \end{proposition} For example, $\sum_{n\geq 0} \frac{(H_n)^3 H_n^{(2)}}{(n+1)^3}$ is a level 1 CMZV, while $\sum_{n\geq 0} \frac{(-1)^n (H_n)^3 H_n^{(2)}}{(n+1)^3}$ and $\sum_{n\geq 0} \frac{(-1)^n (H_n)^3 \hbar_{n}^{(2)}(1/2)}{(n+1)^3}$ are level 2. 
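The expansion \ref{pochhammerexpansion} and the Bell-polynomial identities \ref{bellpoly} are easy to test numerically. In the sketch below, the values $\gamma=1/2$, $n=6$ and the working precision are arbitrary test choices; the Taylor coefficients of $(\gamma+a)_n/(\gamma)_n$ are compared against the elementary sums $\hbar_n^{(s)}(\gamma)$:

```python
from mpmath import mp, rf, taylor

mp.dps = 30
gamma, n = mp.mpf(1) / 2, 6  # arbitrary test values

# Taylor coefficients of (gamma + a)_n / (gamma)_n around a = 0
f = lambda a: rf(gamma + a, n) / rf(gamma, n)
c0, c1, c2 = taylor(f, 0, 2)

# hbar_n^{(s)}(gamma) = sum_{i=0}^{n-1} 1/(gamma + i)^s
h = lambda s: sum(1 / (gamma + i)**s for i in range(n))

print(c1, h(1))                  # coefficient of a   : H({1}_1, {gamma}_1, n)
print(c2, (h(1)**2 - h(2)) / 2)  # coefficient of a^2 : H({1}_2, {gamma}_2, n)
```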
There is an important technique called regularization (\cite{racinet2002doubles}, \cite[Chapter~13]{zhao2008multiple}): if the sum in the proposition is divergent, then there exist complex numbers $c_i$ such that $$\sum_{N\geq n_0>0} \frac{e^{2\pi i a n_0}}{(n_0+\gamma_0)^s} H(\vec{s_1},\vec{\gamma_1},n_0) \cdots H(\vec{s_i},\vec{\gamma_i},n_0) = \sum_{i>0} c_i (\log N + \gamma)^i + c_0 + o(1) \qquad N\to \infty$$ where $\gamma$ is Euler's constant and only finitely many $c_i$ are non-zero. Then $c_0$ is a CMZV of the same level. As an example of regularization, the convergent sum $\sum_{n\geq 0} \frac{H_n^3}{(4n+1)(4n+3)}$ is a CMZV of level 4: we can split it (by partial fractions) into two divergent sums, and each of their regularized values is such a CMZV. \footnote{The author's \href{https://www.researchgate.net/publication/357601353}{Mathematica package} contains a command \textsf{MZSum} to automatically convert such sums to CMZVs} ~\\ Now we give examples that are immediately relevant to our application: consider the function \begin{equation}\label{ex0}\sum_{k\geq 0}\frac{(a+1)_k (b+1)_k}{(c+k+1) (d+k+1) (c+1)_k (d+1)_k}\end{equation} It is analytic near $(a,b,c,d)=(0,0,0,0)$ and we wish to find its power series expansion. There seems to be no simple general formula for the coefficients, but we can make a qualitative statement: the coefficient of $a^ib^jc^kd^l$ is a CMZV of level 1 with weight $i+j+k+l+2$. This follows from \ref{pochhammerexpansion}: using \ref{bellpoly}, the summand of each series can be expressed in terms of $H_n^{(r)}$ only. For example, the coefficient of $ac^2d$ above is $$\sum_{k\geq 0}(-\frac{\left(H_k\right){}^2 H_k^{(2)}}{2 (k+1)^2}-\frac{H_k H_k^{(2)}}{2 (k+1)^3}-\frac{\left(H_k\right){}^4}{2 (k+1)^2}-\frac{3 \left(H_k\right){}^3}{2 (k+1)^3}-\frac{2 \left(H_k\right){}^2}{(k+1)^4}-\frac{H_k}{(k+1)^5})$$ and converting to our chosen CMZV basis, it equals $-\frac{5 \zeta (3)^2}{2}-\frac{3 \pi ^6}{140}$. 
More generally, expanding \ref{ex0} at the point $a=b=c=d = -1+r/s$ with $0<r<s$, the coefficients are level $s$ CMZVs. Actually, there are many more series which produce CMZVs, for example \begin{proposition}\label{binomCMZVs} $$\sum_{n\geq 0} \frac{1}{(n+1/2)^s} \frac{(1)_n}{(1/2)_n}\hbar_n^{(r_1)}(1)\cdots \hbar_n^{(r_i)}(1) \hbar_n^{(s_1)}(1/2)\cdots \hbar_n^{(s_k)}(1/2)$$ is a CMZV of level 4 and weight $s+\sum r_i + \sum s_i$. A similar assertion holds for $(1/2)_n/(1)_n$. \end{proposition} \begin{proof} The proof is non-trivial and involves a good deal of general CMZV theory. See \cite{au2020evaluation}, \cite{au2022iterated}, \cite{davydychev2004binomial}, \cite{kalmykovBinomial}, \cite{xu2022ap} to get an idea of how to handle such harmonic sums twisted by binomial coefficients. This particular assertion, as well as more general ones, will be proved in an upcoming article of the author. \end{proof} We refrain from using the above proposition; it only appears in \ref{slowlyconvering1} in this article. \subsection{WZ-pairs} For an introduction and terminology, see \cite{AequalsB}, \cite{mohammed2005infinite}. Let $F(n,k), G(n,k)$ be a WZ-pair, that is, $$F(n+1,k)-F(n,k) = G(n,k+1)-G(n,k)$$ \begin{proposition}[\cite{mohammed2005infinite}]\label{WZsumformula} Let $F,G$ be as above; if $\lim_{k\to\infty} G(n,k)=0$ for each $n\geq 0$, then $$\sum_{k\geq 0} F(0,k) = \sum_{n\geq 0} G(n,0) + \lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$$ \end{proposition} In this article, $\lim_{k\to\infty} G(n,k)=0$ is always satisfied. In the first two sections, $\lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$ can be shown to vanish, so we have the elegant equality $\sum_{k\geq 0} F(0,k) = \sum_{n\geq 0} G(n,0)$. We give two examples with non-zero $\lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$ in the last section. \par There are many computer algebra packages for finding WZ-pairs, such as the Maple packages in \cite{pilehrood2011bivariate}, \cite{pilehrood2008simultaneous}. 
Mathematica is used for all calculations, both CMZVs and WZ-pairs, in this manuscript. The WZ-pairs package we use comes from \href{https://www3.risc.jku.at/research/combinat/software/ergosum/RISC/fastZeil.html}{RISC} (i.e. Gosper's algorithm). ~\\[0.01in] \par Only special (proper) hypergeometric terms $F(n,k)$ have a hypergeometric WZ-mate $G(n,k)$. A way of finding such non-trivial candidates $F(n,k)$ was already noted in \cite{gessel1995finding} to prove terminating hypergeometric identities. We recapitulate it here: let $a,b,\cdots$ be free parameters (independent of $n,k$); if $f(a,b,\cdots,k)$ is a hypergeometric term in each of $a,b,\cdots,k$ such that $\sum_{k\in \mathbb{Z}} f(a,b,\cdots,k)$ is independent of $a,b,\cdots$, then $F(n,k) = f(a+An,b+Bn,\cdots,k+Kn)$ with $A,B,\cdots,K$ integers should have\footnote{the author is not aware of a rigorous proof of this, but it is true in every case that has been computed.} a hypergeometric WZ-mate $G(n,k)$. See \cite{gessel1995finding} for examples of this. For example, the WZ-function $F(n,k)$ used in the proof of \ref{ex1} comes from the closed-form Gauss $_2F_1$ summation formula. All WZ-functions $F(n,k)$ in the next two sections arise from either Gauss's $_2F_1$, Dixon's $_3F_2$ or Dougall's $_5F_4$ summation formulas, as is common for practitioners in this field when deriving new formulas (\cite{pilehrood2011bivariate}, \cite{mohammed2005infinite}, \cite{pilehrood2008generating}, \cite{pilehrood2010series}, \cite{pilehrood2008simultaneous}, \cite{guillera2008hypergeometric}). In the last section, we use Watson's $_3F_2$ and a hypergeometric summation from Goursat's theory of $_2F_1$ transformations \cite{goursat1881equation}; these, on the other hand, have seldom been exploited. 
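To make the defining relation and Proposition \ref{WZsumformula} concrete, here is a textbook toy WZ-pair (certifying $\sum_k \binom{n}{k} = 2^n$; it is not one of the pairs used in this paper) checked numerically. Note that for this pair the limit term $\lim_{n\to\infty}\sum_k F(n,k) = 1$ is nonzero, so the elegant shortened equality does not apply:

```python
from math import comb

# A textbook WZ pair certifying sum_k C(n,k) = 2^n (not a pair from this paper)
def F(n, k):
    return comb(n, k) / 2**n if k >= 0 else 0.0

def G(n, k):
    return -comb(n, k - 1) / 2**(n + 1) if k >= 1 else 0.0

# defining relation: F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k)
wz_ok = all(abs((F(n + 1, k) - F(n, k)) - (G(n, k + 1) - G(n, k))) < 1e-12
            for n in range(10) for k in range(12))

# Proposition: sum_k F(0,k) = sum_n G(n,0) + lim_n sum_k F(n,k), here 1 = 0 + 1
lhs = sum(F(0, k) for k in range(5))
rhs_sum = sum(G(n, 0) for n in range(20))
print(wz_ok, lhs, rhs_sum)  # True 1.0 0.0
```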
We will denote \begin{multline*} \textsf{Gauss2F1}(a,b,c,d,k) = (-1)^{a+b} \Gamma (-a+c+1) \Gamma (-a+d+1) \Gamma (a+k+1) \Gamma (-b+c+1) \Gamma (-b+d+1) \Gamma (b+k+1) \\ \div (\Gamma (c+k+2) \Gamma (d+k+2) \Gamma (-a-b+c+d+1)) \\ \textsf{Dixon3F2}(a,b,c,k) = \Gamma (a-b+1) \Gamma (a-c+1) \Gamma (2 a+k+1) \Gamma (b+k+1) \Gamma (c+k+1) \Gamma (2 a-b-c+1) \\ \div (\Gamma (a) \Gamma (b) \Gamma (c) \Gamma (k+2) \Gamma (a-b-c+1) \Gamma (2 a-b+k+2) \Gamma (2 a-c+k+2)) \\ \textsf{Dougall5F4}(a,b,c,d,k) = (a+2 (k+1)) \Gamma (a+k+1) \Gamma (b+k+1) \Gamma (c+k+1) \Gamma (d+k+1) \Gamma (a-b-c+1) \Gamma (a-b-d+1) \Gamma (a-c-d+1) \\ \div (\Gamma (b) \Gamma (c) \Gamma (d) \Gamma (k+2) \Gamma (a-b+k+2) \Gamma (a-c+k+2) \Gamma (a-d+k+2) \Gamma (a-b-c-d+1)) \end{multline*} where the names come from the corresponding hypergeometric summation formulas. \section{Introductory examples} \begin{theorem}\label{ex1}For $a,b,c,d$ near $0$, we have \begin{multline}\sum_{k\geq 0}\frac{(a+1)_k (b+1)_k}{(c+k+1) (d+k+1) (c+1)_k (d+1)_k} \\ = \sum_{n\geq 1} \frac{(-a+c+1)_n (-a+d+1)_n (-b+c+1)_n (-b+d+1)_n P_1(n)}{(a-c-n) (a-d-n) (-b+c+n) (-b+d+n) (c+1)_n (d+1)_n (-a-b+c+d+1)_{2 n}} \end{multline} with $P_1(n) = a b-a c-a d-2 a n-b c-b d-2 b n+c^2+c d+3 c n+d^2+3 d n+3 n^2$.\end{theorem} \begin{proof} Let $F(n,k) = \textsf{Gauss2F1}(a,b,c+n,d+n,k)$ and find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0}F(0,k) = \sum_{n\geq 0}G(n,0)$. Recall that $(a)_n = \Gamma(a+n)/\Gamma(a)$. For a proof not based on a WZ-pair, see \cite{chu2014accelerating}. \end{proof} Expanding \ref{ex1} near $(a,b,c,d)=(0,0,0,0)$, the coefficients of $a^i b^j c^k d^l$ on the LHS are CMZVs of level 1 and weight $i+j+k+l+2$. 
Comparing the constant terms: $$\frac{\pi ^2}{6} = \sum_{n\geq 1} \frac{(1)_n^2}{(1)_{2 n}} \frac{3}{n^2} $$ Comparing the coefficients of $a$: $$\zeta(3) = \sum_{n\geq 1} \frac{(1)_n^2}{(1)_{2 n}} (-\frac{6 H_n}{n^2}+\frac{3 H_{2 n}}{n^2}+\frac{4}{n^3}) $$ Comparing the coefficients of $c$: $$-2\zeta(3) = \sum_{n\geq 1} \frac{(1)_n^2}{(1)_{2 n}} (\frac{3 H_n}{n^2}-\frac{3 H_{2 n}}{n^2}-\frac{3}{n^3}) $$ Comparing the coefficients of $a^2$: $$\frac{\pi^4}{90} = \sum_{n\geq 1} \frac{(1)_n^2}{(1)_{2 n}} (-\frac{8 H_n}{n^3}+\frac{4 H_{2 n}}{n^3}+\frac{6 \left(H_n\right){}^2}{n^2}-\frac{6 H_{2 n} H_n}{n^2}+\frac{3 \left(H_{2 n}\right){}^2}{2 n^2}+\frac{3 H_{2 n}^{(2)}}{2 n^2}-\frac{3 H_n^{(2)}}{n^2}+\frac{5}{n^4}) $$ In general, the coefficients of $a^i b^j c^k d^l$ on the RHS are of the form $$\text{some level 1 CMZVs } = \sum_{n\geq 1} \frac{(1)_n^2}{n^2 (1)_{2 n}} \mathbb{Q}[\frac{1}{n},H_n^{(r)}, H_{2n}^{(r)}]$$ where $\mathbb{Q}[x,y,\cdots]$ denotes the polynomial ring with the given generators; the harmonic numbers with index $2n$ above come from the term $(-a-b+c+d+1)_{2 n}$ in \ref{ex1}. In \cite{chuAperyseries}, this method of comparing coefficients has already been used on \ref{ex1} to generate many identities, except that the hypergeometric identity \ref{ex1} is proved there solely by classical means. Some beautiful cases obtained by taking linear combinations of these equalities are: $$\sum_{n\geq 1} \frac{1}{n^5\binom{2n}{n}} (18H_{2n}+32) = \frac{5\pi^2 \zeta(3)}{9} + \frac{68}{3}\zeta(5) \qquad \sum_{n\geq 1} \frac{1}{n^5\binom{2n}{n}} (7-3n^2 H_n^{(2)}) = \frac{65\pi^6}{34992}$$ See \cite{chuAperyseries} for more such consequences of differentiating \ref{ex1}. For a method of CMZV nature applied to the above example, see \cite{ablinger2017discovering} and \cite{ablinger2019proving}, which use level $6$ CMZVs. \cite{bailey2006experimental} is an empirical approach to these identities; it seems many identities therein are consequences of differentiating \ref{ex1}. 
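Since $(1)_n^2/(1)_{2n} = 1/\binom{2n}{n}$ decays like $4^{-n}$, these evaluations are easy to confirm numerically; a small sketch for the first two (the cutoffs are arbitrary):

```python
from math import comb, pi

N = 60
H = [0.0]  # H[m] = harmonic number H_m
for m in range(1, 2 * N + 1):
    H.append(H[-1] + 1.0 / m)

# constant term: pi^2/6 = sum 3 / (n^2 C(2n,n))
s_const = sum(3.0 / (n**2 * comb(2 * n, n)) for n in range(1, N + 1))

# coefficient of a: zeta(3) = sum (-6 H_n/n^2 + 3 H_{2n}/n^2 + 4/n^3) / C(2n,n)
s_a = sum((-6 * H[n] / n**2 + 3 * H[2 * n] / n**2 + 4.0 / n**3) / comb(2 * n, n)
          for n in range(1, N + 1))

zeta3 = sum(1.0 / m**3 for m in range(1, 200000))
print(s_const, pi**2 / 6)
print(s_a, zeta3)
```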
\begin{theorem}\label{ex2}For $a,b,c,d$ near $0$, we have \begin{multline}\sum_{k\geq 0}\frac{(a+1)_k (b+1)_k}{(c+k+1) (d+k+1) (c+1)_k (d+1)_k} \\ = \sum_{n\geq 1} \frac{(a+1)_n (b+1)_n (-a+c+1)_n (-a+d+1)_n (-b+c+1)_n (-b+d+1)_n P_1(n)}{(a+n) (b+n) (a-c-n) (a-d-n) (-b+c+n) (-b+d+n) (c+1)_{2 n} (d+1)_{2 n} (-a-b+c+d+1)_{2 n}} \end{multline} where $P_1(n)$ can be found in the appendix.\end{theorem} \begin{proof} For $F(n,k) = \textsf{Gauss2F1}(a,b,c+n,d+n,k+n)$, find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0}F(0,k) = \sum_{n\geq 0}G(n,0)$. \end{proof} We can perform the same operation on \ref{ex2}. Expanding both sides at $(a,b,c,d)=(0,0,0,0)$ and, for instance, comparing the constant terms: $$\zeta(2) = \sum_{n\geq 1} \frac{(1)_n^6}{(1)_{2 n}^3} (\frac{21}{n^2}-\frac{8}{n^3}) $$ This is of course a well-known WZ-type result due to Zeilberger \cite{zeilberger1993closed}, so the above formula can be seen as a four-parameter deformation thereof. Comparing the coefficients of $a$: $$\zeta(3) = \sum_{n\geq 1} \frac{(1)_n^6}{(1)_{2 n}^3} (\left(\frac{8}{n^3}-\frac{21}{n^2}\right) H_n+\left(\frac{21}{n^2}-\frac{8}{n^3}\right) H_{2 n}-\frac{4}{n^4}+\frac{7}{n^3}) $$ Comparing the coefficients of $ab$: \begin{multline*}\frac{11 \pi ^4}{360} = \sum_{n\geq 1} \frac{(1)_n^6}{(1)_{2 n}^3} (\left(\frac{14}{n^3}-\frac{8}{n^4}\right) H_{2 n}+\left(\frac{21}{n^2}-\frac{8}{n^3}\right) \left(H_n\right){}^2+\left(\frac{21}{n^2}-\frac{8}{n^3}\right) \left(H_{2 n}\right)^2 \\ +\left(\left(\frac{16}{n^3}-\frac{42}{n^2}\right) H_{2 n}+\frac{8}{n^4}-\frac{14}{n^3}\right) H_n+\left(\frac{21}{n^2}-\frac{8}{n^3}\right) H_{2 n}^{(2)}+\frac{1}{n^4}) \end{multline*} In general, most of them are ugly, but certain linear combinations of them give quite elegant results, such as: \begin{corollary}[Conjecture in \cite{sun2022conjectures}] $$\sum_{n=1}^\infty \frac{21n-8}{n^3 \binom{2n}{n}^3} (H_{2 n-1}^{(2)}-\frac{25 H_{n-1}^{(2)}}{8}) = \frac{47\pi^4}{2880}$$ \end{corollary} \begin{proof} Let 
$[a^ib^jc^kd^l]$ be the equality obtained by comparing the corresponding coefficient of \ref{ex2} at $(a,b,c,d)=(0,0,0,0)$. Then the desired identity is $11/4 [a^2] + [ac] + 5/8 [ab]$. \end{proof} Now we expand both sides of \ref{ex1} at $(a,b,c,d) = (-1/2,-1/2,-1/2,-1/2)$; the coefficient of $(a+1/2)^i (b+1/2)^j (c+1/2)^k (d+1/2)^l$ on the LHS will be a CMZV of level 2 and weight $2+i+j+k+l$. Comparing the constant terms: $$\frac{\pi^2}{2} = \sum_{n\geq 1} \frac{4^{2n} (1)_n^6}{(1)_{2 n}^3} (\frac{3 n-1}{n^3}) $$ originally proved in \cite{guillera2008hypergeometric}. Comparing the coefficients of $c+1/2$: $$-\pi ^2 \log (2)-\frac{7 \zeta (3)}{2} = \sum_{n\geq 1} \frac{4^{2n} (1)_n^6}{(1)_{2 n}^3} (\frac{3 (3 n-1) H_n}{n^3}-\frac{3 (3 n-1) H_{2 n}}{n^3}-\frac{3 (2 n-1)}{2 n^4}) $$ Comparing the coefficients of $(a+1/2)^2$: \begin{multline*} 8 \text{Li}_4\left(\frac{1}{2}\right)-\frac{19 \pi ^4}{360}+\frac{\log ^4(2)}{3}+\frac{2}{3} \pi ^2 \log ^2(2) = \sum_{n\geq 1} \frac{4^{2n} (1)_n^6}{(1)_{2 n}^3} (\frac{(8 n-3) H_{2 n}}{2 n^4}+\frac{2 (3 n-1) \left(H_n\right){}^2}{n^3}\\ +\frac{(3 n-1) \left(H_{2 n}\right)^2}{2 n^3}+\left(-\frac{2 (3 n-1) H_{2 n}}{n^3}-\frac{8 n-3}{n^4}\right) H_n+\frac{(3 n-1) H_{2 n}^{(2)}}{2 n^3}-\frac{(3 n-1) H_n^{(2)}}{n^3}+\frac{5 n-2}{n^5}) \end{multline*} Again, most of them are ugly, but certain linear combinations of them give quite elegant results, such as: \begin{corollary}[Conjecture in \cite{sun2022conjectures}] $$\sum_{n=1}^\infty \frac{(3n-1)16^n}{n^3 \binom{2n}{n}^3} (H_{2 n-1}^{(2)}-\frac{5 H_{n-1}^{(2)}}{4}) = \frac{\pi^4}{24}$$ \end{corollary} \begin{proof} Let $[a^ib^jc^kd^l]$ be the equality obtained by comparing the corresponding coefficient of \ref{ex1} at $(a,b,c,d)=(-1/2,-1/2,-1/2,-1/2)$. The desired equality is $[a^2]/2+[c^2]/2-[ab]/4-[cd]/4$. \end{proof} This was recently proved in \cite{wei2022conjectural}, using essentially the same method and formula \ref{ex1}. 
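Both the constant-term series of \cite{guillera2008hypergeometric} and the corollary just proved converge geometrically (the summand is $O(4^{-n})$ up to powers of $n$), so they can be confirmed numerically; a sketch with an arbitrary cutoff:

```python
from math import comb, pi

N = 60
H2 = [0.0]  # H2[m] = H_m^{(2)} = sum_{j=1}^m 1/j^2
for m in range(1, 2 * N + 1):
    H2.append(H2[-1] + 1.0 / m**2)

s_guillera = s_sun = 0.0
for n in range(1, N + 1):
    w = (3 * n - 1) * 16.0**n / (n**3 * comb(2 * n, n)**3)
    s_guillera += w
    s_sun += w * (H2[2 * n - 1] - 1.25 * H2[n - 1])

print(s_guillera, pi**2 / 2)  # Guillera's series
print(s_sun, pi**4 / 24)      # Sun's conjecture, proved above
```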
\begin{theorem}\label{ex4}For $a,b,c,d$ near $0$, we have \begin{multline}\sum_{k\geq 0} \frac{(c+1)_k \left(2 a+d+\frac{1}{2}\right)_k \left(b+d+\frac{1}{2}\right)_k}{\left(d+k+\frac{1}{2}\right) \left(d+\frac{1}{2}\right)_k \left(2 a-b+d+k+\frac{1}{2}\right) \left(2 a-b+d+\frac{1}{2}\right)_k (2 a-c+2 d+1)_k} \\ = \sum_{n\geq 0}(-1)^n \left(2 a+d+\frac{1}{2}\right)_n \left(b+d+\frac{1}{2}\right)_n \left(-c+d+\frac{1}{2}\right)_n P_1(n) \left(a-c+d+\frac{1}{2}\right)_n \left(2 a-b-c+d+\frac{1}{2}\right)_n \div \\ ((2 d+2 n+1) \left(d+\frac{1}{2}\right)_n (4 a-2 b+2 d+2 n+1) (2 a-c+2 d+2 n+1) (2 a-2 b-2 c+2 d+2 n+1)\\ \left(2 a-b+d+\frac{1}{2}\right)_n (2 a-c+2 d+1)_{2 n} \left(a-b-c+d+\frac{1}{2}\right)_n)\end{multline} where $P_1(n) = 16 a^2-8 a b-24 a c+40 a d+40 a n+20 a+8 b c-12 b d-12 b n-6 b+8 c^2-24 c d-24 c n-12 c+20 d^2+40 d n+20 d+20 n^2+20 n+5$. \end{theorem} \begin{proof} Let $F(n,k) = \textsf{Dixon3F2}(a,b,c-n,k+n)$ and find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0} F(-1/2+d,k) = \sum_{n\geq 0} G(n-1/2+d,0)$. \end{proof} Expanding the LHS at $(a,b,c,d) = (0,0,0,0)$, the coefficients are CMZVs of level 2; they are of the form $$\sum_{n\geq 0} \frac{(-1)^n (\frac{1}{2})_n^2}{(2 n+1)^2 (1)_{2 n}} \mathbb{Q}[\frac{1}{2n+1}, \hbar_n^{(r)}(1), \hbar_n^{(r)}(1/2)]$$ The constant term gives $$\sum_{n\geq 0} \frac{5 (-1)^n (\frac{1}{2})_n^2}{(2 n+1)^2 (1)_{2 n}} = \frac{\pi^2}{2}$$ Let $[a^ib^jc^kd^l]$ be the equality obtained by comparing the corresponding coefficient of \ref{ex4}. Most expressions obtained by comparing higher coefficients are complicated. However, certain linear combinations give quite elegant results, such as: $$\sum_{n\geq 0} \frac{\binom{2 n}{n}}{(2 n+1)^2 (-16)^n} \left(5H_{2n+1} + \frac{12}{2n+1}\right) = 14\zeta(3)$$ which is $-1/2[a]+1/2[b]$; this was proved recently in \cite{charlton2022two}. 
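Since $(\frac{1}{2})_n^2/(1)_{2n} = \binom{2n}{n}/16^n \sim 4^{-n}/\sqrt{\pi n}$, both of these evaluations also converge geometrically; a numerical sketch (cutoffs arbitrary):

```python
from math import comb, pi

N = 40
H = [0.0]  # H[m] = harmonic number H_m
for m in range(1, 2 * N + 2):
    H.append(H[-1] + 1.0 / m)

s_const = s_charlton = 0.0
for n in range(0, N + 1):
    w = comb(2 * n, n) / ((2 * n + 1)**2 * (-16.0)**n)
    s_const += 5 * w
    s_charlton += w * (5 * H[2 * n + 1] + 12.0 / (2 * n + 1))

zeta3 = sum(1.0 / m**3 for m in range(1, 200000))
print(s_const, pi**2 / 2)      # constant term
print(s_charlton, 14 * zeta3)  # the 14*zeta(3) evaluation
```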
\begin{corollary}[Conjectures in \cite{sun2021book}, \cite{sun2010conjectures}] $$\sum_{n\geq 0} \frac{\binom{2 n}{n}}{(2 n+1)^2 (-16)^n} \left(5 \sum _{j=0}^n \frac{1}{(2 j+1)^3}+\frac{1}{(2 n+1)^3}\right) = \frac{\pi^2}{2}\zeta(3)$$ $$\sum_{n\geq 0} \frac{\binom{2 n}{n}}{(2 n+1)^2 (-16)^n} \left(5 \sum _{j=0}^{n-1} \frac{1}{(2 j+1)^4}+\frac{1}{(2 n+1)^4}\right) = \frac{7\pi^6}{7200}$$ \end{corollary} \begin{proof} The first is $1/16[a^3]+1/16[b^3]+1/2[c^3]+1/8[abc]-3/16[acd]$; the second is $$\frac{9 [a^4]}{80}+\frac{[a^3 b]}{32}+\frac{[a^3 c]}{40}+\frac{[a^3 d]}{160}-\frac{1}{80} [a^2 b c]-\frac{7 [a b^3]}{160}+\frac{11 [a c^3]}{80}-\frac{13 [b^4]}{80}-\frac{[b^3 c]}{16}-\frac{[b c^3]}{16}+\frac{[c^4]}{10}-\frac{[d^4]}{40}$$ \end{proof} \begin{theorem}\label{ex3} For $a,b,c,d,e$ near $0$, we have $$\sum_{k\geq 0} \frac{-(e+2 k+2) (a+1)_k (b+1)_k (c+1)_k (d+1)_k}{(a-e-k-1) (-b+e+k+1) (-c+e+k+1) (-d+e+k+1) (-a+e+1)_k (-b+e+1)_k (-c+e+1)_k (-d+e+1)_k}$$ equals \begin{multline*}\sum_{n\geq 1} ((-1)^{n} (-a-b+e+1)_n (-a-c+e+1)_n (-a-d+e+1)_n (-b-c+e+1)_n (-b-d+e+1)_n (-c-d+e+1)_n P_1(n))\div \\ ( (-a-b+e+n) (-a-c+e+n) (-a-d+e+n) (-a+e+1)_n (-b-c+e+n) (-b-d+e+n) \\ (-b+e+1)_n (-c-d+e+n) (-c+e+1)_n (-d+e+1)_n (-a-b-c-d+2 e+1)_{2 n} ) \end{multline*} where $P_1(n)$ can be found in the appendix. \end{theorem} \begin{proof} Let $F(n,k) = \textsf{Dougall5F4}(a-n,b-n,c-n,d-n,k+n)$ and find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0} F(e-a,k) = \sum_{n\geq 0} G(n+e-a,0)$. \end{proof} Comparing coefficients at $(a,b,c,d,e)=(0,0,0,0,0)$, the LHS is a level 1 CMZV. We obtain expressions of the form \begin{equation}\label{dougall1}\sum_{n\geq 1} \frac{(-1)^n (1)_n^2}{n^2 (1)_{2 n}} \mathbb{Q}[\frac{1}{n}, H_n^{(r)}, H_{2n}^{(r)}]\end{equation} In \cite{chu2020alternating}, this method of comparing coefficients has already been used on \ref{ex3} to generate many identities, although WZ-pairs were not used there to prove \ref{ex3}. 
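Sums of the shape \ref{dougall1} also decay like $4^{-n}$, so the evaluations of \cite{chu2020alternating} are quickly checkable; a sketch for the evaluation $\sum_{n\geq 1}(-1)^{n-1}\binom{2n}{n}^{-1}(10H_n/n^3 - 3/n^4) = \pi^4/30$ (the cutoff is an arbitrary choice):

```python
from math import comb, pi

N = 50
H = [0.0]  # H[m] = harmonic number H_m
for m in range(1, N + 1):
    H.append(H[-1] + 1.0 / m)

s = sum((-1)**(n - 1) / comb(2 * n, n) * (10 * H[n] / n**3 - 3.0 / n**4)
        for n in range(1, N + 1))
print(s, pi**4 / 30)
```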
Some beautiful cases are $$\sum_{n\geq 1} \frac{(-1)^{n-1}}{\binom{2n}{n}} (\frac{10 H_n}{n^3} - \frac{3}{n^4}) = \frac{\pi^4}{30} \qquad \sum_{n\geq 1} \frac{(-1)^{n-1}}{\binom{2n}{n}} (\frac{4 H_n}{n^3} + \frac{H_{2n}}{n^3}) = \frac{2\pi^4}{75}$$ $$\sum_{n\geq 1} \frac{(-1)^{n-1}}{\binom{2n}{n}} (\frac{1}{n^6} + \frac{5 H_n^{(3)}}{n^3}) = 2\zeta(3)^2$$ See \cite{chu2020alternating} for more such formulas. \begin{theorem} Whenever the LHS converges, we have $$\sum_{k\geq 0} \frac{(c+d+k) (2 a+d+1)_k (b+d+1)_k (c+d)_k}{(d+k+1) (d+1)_k (2 a-b+d+k+1) (2 a-c+d+k+1) (2 a-b+d+1)_k (2 a-c+d+1)_k}$$ equals $$-\sum_{n\geq 1} \frac{(-1)^n (1-b)_n (a-b+1)_n (2 a+d+1)_n (c+d)_n (2 a-b-c+1)_n P_1(n)}{2 (n-b) (a-b+n) (2 a+d+n) (d+1)_n (2 a-b-c+n) (a-b-c+1)_n (2 a-b+d+1)_{2 n} (2 a-c+d+1)_n}$$ where $P_1(n)$ is $4 a^2-6 a b-2 a c+4 a d+10 a n+2 b^2+2 b c-2 b d-6 b n-c d-3 c n+d^2+4 d n+5 n^2$. \end{theorem} \begin{proof} For $F(n,k) = \textsf{Dixon3F2}(a,b-n,c,k+n)$, find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0} F(0,k+d) = \sum_{n\geq 0} G(n,d)$. 
\end{proof} Comparing coefficients around $(a,b,c,d) = (0,0,1/2,0)$, the LHS is a level 2 CMZV and the RHS is of the form $$\sum_{n\geq 1} \frac{(-1)^n}{\binom{2n}{n}} \mathbb{Q}[\frac{1}{n}, \frac{1}{2n-1}, \hbar_n^{(r)}(1), \hbar_n^{(r)}(1/2)]$$ \begin{corollary} $$\sum _{n=1}^{\infty } \frac{(-1)^{n-1}}{n^3 \binom{2 n}{n}}\left(H_{2 n-1}^{(2)}-\frac{123 H_{n-1}^{(2)}}{16}\right)=\frac{451 \zeta (5)}{40}-\frac{14 \pi ^2 \zeta (3)}{15}$$ \end{corollary} \begin{proof} Let $a^i b^j c^k d^l$ be the equality obtained from the above theorem by comparing the coefficient of $a^i b^j (c-1/2)^k d^l$; then the above equality is $\frac{1523 a^3}{80}-\frac{11399 a^2 b}{1920}+\frac{2041 a^2 c}{160}+\frac{15023 a^2 d}{1920}+\frac{15821 a b^2}{1920}+\frac{14 a c^2}{15}-\frac{9341 a d^2}{1920}+\frac{185843 b^3}{1280}+\frac{787 b^2 c}{256}-\frac{77211 b^2 d}{1280} -\frac{3629 b c^2}{480}+\frac{110029 b d^2}{3840}-\frac{1991 c^3}{320}-\frac{6239 c d^2}{1280}-\frac{25647 d^3}{1280}$. \end{proof} Despite matching the form in \ref{dougall1}, the above corollary \textit{does not} follow by comparing coefficients of \ref{ex3} (and so was not included in \cite{chu2020alternating} either). \begin{theorem} Whenever the LHS converges, $$\sum_{k\geq 0} \frac{(a+1)_k (b+1)_k}{(c+k+1) (d+k+1) (c+1)_k (d+1)_k} = \sum_{n\geq 1} \frac{(-1)^{n+1} (a+1)_n (-b+c+d+2 n) (-b+c+1)_n (-b+d+1)_n}{(a+n) (-b+c+n) (-b+d+n) (c+1)_n (d+1)_n (-a-b+c+d+1)_n}$$ \end{theorem} \begin{proof} Due to its simple appearance, this can already be proved by classical methods; we only outline a short WZ-proof. Let $F(n,k) = \textsf{Gauss2F1}(a,b-n,c,d,k+n)$ and find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0} F(0,k) = \sum_{n\geq 0} G(n,0)$. 
\end{proof} \begin{corollary}\label{slowlyconvering1} $$\sum _{n=1}^{\infty } \frac{(4 n-1) (-64)^n }{n^3 \binom{2 n}{n}^3} \left(H_{2 n-1}^{(2)}-\frac{H_{n-1}^{(2)}}{2}\right) = -16\beta(4)$$ $$\sum _{n=1}^{\infty } \frac{(4 n-1) (-64)^n }{n^3 \binom{2 n}{n}^3} H_{2 n-1}^{(3)} = 24 \Li_{4,1}(i,1)+8 \Li_{4,1}(i,-1)+16 \beta(4) \log (2)-\frac{5 \pi ^5}{64}$$ \end{corollary} \begin{proof} Let $[a^ib^jc^kd^l]$ be the equality obtained by comparing both sides at $(a,b,c,d) = (0,-1/2,-1/2,-1/2)$. By \ref{binomCMZVs}, each coefficient is a level 4 CMZV. The first is $-\frac{a^2}{2}-\frac{a b}{2}-\frac{b^2}{2}+\frac{c^2}{2}$; the second is $-\frac{a^3}{4}-\frac{a^2 b}{4}-\frac{a b^2}{4}-\frac{b^3}{4}-c^3+\frac{c^2 d}{4}$. \end{proof} The first equality was conjectured by Sun in \cite{sun2022conjectures}. In the second formula, the first two terms cannot be expressed in terms of $\log 2, \pi, \zeta(n), \beta(n)$. \section{Examples with alternating harmonic numbers} \begin{theorem}\label{alternatinglevel2} Whenever the LHS converges, \begin{multline*}\sum_{k\geq 0} (a+3 e+2 k+2) (b+e+k) (a+2 e+1)_k (b+e)_k (c+e+1)_k (d+e+1)_k \\ \div ((e+k+1) (e+1)_k (a-b+2 e+k+1) (a-c+2 e+k+1) (a-d+2 e+k+1) (a-b+2 e+1)_k (a-c+2 e+1)_k (a-d+2 e+1)_k)\\ = \sum_{n\geq 1} (b+e)_n (a-b-c+e+1)_n (c+e+1)_n (a-b-d+e+1)_n (a-c-d+e+1)_n (d+e+1)_n (a+2 e+1)_{2 n} P_1(n) \\ \div ((a-b-c+e+n) (c+e+n) (a-b-d+e+n) (a-c-d+e+n) (d+e+n) (a+2 e+2 n-1) (a+2 e+2 n) (e+1)_n \\ (a-b-c-d+e+1)_n (a-b+2 e+1)_{2 n} (a-c+2 e+1)_{2 n} (a-d+2 e+1)_{2 n})\end{multline*} where $P_1(n)$ is a polynomial which can be found in the appendix. \end{theorem} \begin{proof} Let $F(n,k) = \textsf{Dougall5F4}(a+n,b,c,d,k+n)$ and find its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0} F(e,k) = \sum_{n\geq 0} G(n+e,0)$. 
\end{proof} When comparing coefficients at $(a,b,c,d,e) = (0,1/2,0,0,0)$, the LHS is a level 2 CMZV and the RHS has the form \begin{equation}\label{alternatinglevel2form}\sum_{n\geq 1} \frac{1}{\binom{4n}{2n}} \mathbb{Q}[\frac{1}{n},\frac{1}{2n-1}, \hbar_n^{(r)}(1), \hbar_n^{(r)}(1/2)]\end{equation} Now consider the conjecture by Sun, $$\sum _{n=1}^{\infty } \frac{1}{n^2 \binom{2 n}{n}}(6 H_{\left\lfloor \frac{n}{2}\right\rfloor }^{(2)}-\frac{(-1)^n}{n^2})=\frac{13 \pi ^4}{1620}$$ Split the summand into even and odd parts, then combine: $$\sum _{n\geq 1} \frac{1}{(2n-1)^2 \binom{2(2n-1)}{2n-1}}(6 H_{n-1}^{(2)}+\frac{1}{(2n-1)^2}) + \sum_{n\geq 1} \frac{1}{(2n)^2 \binom{4 n}{2n}}(6 H_n^{(2)}-\frac{1}{(2n)^2})$$ Noting that $\binom{2(2n-1)}{2n-1} = \frac{n}{4n-1} \binom{4n}{2n}$, we see that the above sum matches the form in \ref{alternatinglevel2form}. Therefore we hope it is derivable by comparing coefficients of \ref{alternatinglevel2}. This, along with many others, is indeed the case. \begin{corollary}[Conjectures in \cite{sun2010conjectures}] $$\sum _{n=1}^{\infty } \frac{1}{n^2 \binom{2 n}{n}}(6 H_{\left\lfloor \frac{n}{2}\right\rfloor }^{(2)}-\frac{(-1)^n}{n^2})=\frac{13 \pi ^4}{1620}$$ $$\sum _{n=1}^{\infty } \frac{1}{n^2 \binom{2 n}{n}} (24 \sum _{j=1}^{n-1} \frac{(-1)^j}{j^3}+\frac{7 (-1)^n}{n^3}) =7 \zeta (5)-\pi ^2 \zeta (3)$$ $$\sum _{n=1}^{\infty } \frac{(-1)^n}{n^3 \binom{2 n}{n}} (10 \sum _{j=1}^n \frac{(-1)^j}{j^2}-\frac{(-1)^n}{n^2})=\frac{29 \zeta (5)}{6}-\frac{\pi ^2 \zeta (3)}{18}$$ $$\sum _{n=1}^{\infty } \frac{1}{n^4 \binom{2 n}{n}} (72 \sum _{j=1}^n \frac{(-1)^j}{j^2}-\frac{(-1)^n}{n^2})=-\frac{34 \zeta (3)^2}{5}-\frac{31 \pi ^6}{1134}$$ $$\sum _{n=1}^{\infty } \frac{1}{n^2 \binom{2 n}{n}} (8 \sum _{j=1}^n \frac{(-1)^j}{j^4}+\frac{(-1)^n}{n^4})=-\frac{22 \zeta (3)^2}{15}-\frac{97 \pi ^6}{34020}$$ $$\sum _{n=1}^{\infty } \frac{(-1)^n}{n^3 \binom{2 n}{n}} (40 \sum _{j=1}^{n-1} \frac{(-1)^j}{j^3}-\frac{7 (-1)^n}{n^3})=-\frac{367 \pi ^6}{27216}+6 \zeta (3)^2$$ $$\sum _{n=1}^{\infty 
} \frac{(-1)^n}{n^3 \binom{2 n}{n}} (110 \sum _{j=1}^n \frac{(-1)^j}{j^4}+\frac{29 (-1)^n}{n^4})=\frac{221 \pi ^4 \zeta (3)}{180}+\frac{223 \zeta (7)}{24}-\frac{301 \pi ^2 \zeta (5)}{36}$$ \end{corollary} \begin{proof} Start by splitting each sum into even and odd parts and then recombining; we have seen how to do this for the first one. For the remaining ones, simply use the facts that $$\sum_{j=1}^{2n} \frac{(-1)^j}{j^s} = 2^{1-s} H_{n}^{(s)}-H_{2n}^{(s)} \qquad \sum_{j=1}^{2n-1} \frac{(-1)^j}{j^s} = 2^{1-s} H_{n}^{(s)}-H_{2n}^{(s)} - \frac{1}{(2n)^s} $$ The recombined sums can then be obtained by comparing coefficients of \ref{alternatinglevel2} at the point $(a,b,c,d,e) = (0,1/2,0,0,0)$. Let $a^i b^j c^k d^l e^r$ denote the equality obtained from the corresponding exponent. The first is $\frac{a^2}{3}+\frac{5 a b}{18}-\frac{a c}{2}+\frac{a d}{2}-\frac{7 b^2}{9}-\frac{5 b d}{12}-\frac{7 c^2}{9}+\frac{7 c d}{9}$, the second is $-\frac{27 a^3}{14}-\frac{9 a^2 b}{14}+\frac{3 a^2 c}{14}+\frac{13 a^2 e}{28}+\frac{a b^2}{14}-\frac{a b c}{2}+\frac{13 a c^2}{14}-\frac{9 a e^2}{28}-\frac{b^3}{7}-\frac{b^2 c}{7}-\frac{2 b^2 e}{7}-\frac{9 b e^2}{28}+c^3-\frac{c^2 d}{7}+\frac{c^2 e}{14}-\frac{5 e^3}{28}$. We omit the rest, giving only the second-to-last one in order to highlight how heavy the computation becomes at high weight; it is $-\frac{119 a^4}{12}-\frac{158 a^3 b}{3}+\frac{128 a^3 c}{3}-\frac{113 a^3 e}{8}-\frac{107 a^2 b^2}{6}-\frac{63}{2} a^2 b c+\frac{41}{24} a^2 b e+\frac{295 a^2 c^2}{6}+\frac{85}{6} a^2 c d-\frac{467}{24} a^2 c e-\frac{2 a^2 e^2}{3}-\frac{17 a b^3}{2}-\frac{17}{2} a b^2 c-\frac{1}{2} a b^2 e-\frac{133}{6} a b c^2-\frac{2}{3} a b e^2+\frac{119 a c^3}{2}+\frac{41}{3} a c^2 d-\frac{65}{3} a c^2 e+\frac{31 a e^3}{48}-\frac{43 b^4}{12}-\frac{59 b^3 c}{12}-\frac{43 b^2 c^2}{12}-\frac{2}{3} b^2 c e+\frac{23 b^2 e^2}{24}-\frac{223 b c^3}{12}+\frac{57 b e^3}{16}+\frac{745 c^4}{12}+16 c^3 d-\frac{131 c^3 e}{6}-\frac{7 c^2 d^2}{6}+\frac{9 c^2 e^2}{8}+\frac{577 e^4}{96}$.
\end{proof} \begin{theorem} \label{alternatinglevel4} Whenever the LHS converges, \begin{multline*}(a-c-d+e) \sum_{k\geq 0} (a+2 e+2 k) (a)_k (d)_k (b+e)_k (c+e)_k \div \\ ((2 e+1)_k (a-b+e+k) (a-c+e+k) (a-d+2 e+k) (a-b+e)_k (a-c+e)_k (a-d+2 e)_k)\\ = \sum_{n\geq 0} (a-c-d+e+n) P_1(n) (-b+e+1)_n (b+e)_n (-c+e+1)_n (c+e)_n (a-b-d+e+1)_n (a-c-d+e)_n (-d+2 e+1)_{2 n} \div \\ ((a-b+e+n) (a-c+e+n) (2 e+2 n+1) (a-d+2 e+2 n) (a-d+2 e+2 n+1) (a-b-c-d+2 e+2 n+1) \\ (a-b-c-d+2 e+2 n+2) (a-b+e)_n (a-c+e)_n (2 e+1)_{2 n} (a-d+2 e)_{2 n} (a-b-c-d+2 e+1)_{2 n})\end{multline*} $P_1(n)$ is a polynomial which can be found in the appendix. \end{theorem} \begin{proof} Using $F(n,k) = \textsf{Dougall5F4}(a-2n,b-n,c-n,d-2n,k+2n)$, one finds its WZ-mate $G(n,k)$; the equality is $\sum_{k\geq 0} F(e,k-1) = \sum_{n\geq 0} G(n+e,-1)$. \end{proof} When expanded around $(a,b,c,d,e) = (1,3/4,1/4,1/2,0)$, the LHS is a CMZV of level 4, and the RHS is of the form $$\sum_{n\geq 0} \frac{\binom{4n}{2n}}{16^{2n}} \mathbb{Q}[\frac{1}{n},\frac{1}{2n+1},\frac{1}{4n+1},\frac{1}{4n+3},\hbar_n^{(r)}(1),\hbar_n^{(r)}(1/2),\hbar_n^{(r)}(1/4),\hbar_n^{(r)}(3/4)]$$ \begin{corollary} $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n}}{(2 n+1) 16^n} (12 \sum _{j=0}^n \frac{(-1)^j}{(2 j+1)^2}-\frac{(-1)^n}{(2 n+1)^2})=4 \pi G$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1) 16^n} (24 \sum _{j=0}^{n-1} \frac{(-1)^j}{(2 j+1)^3}+\frac{7 (-1)^n}{(2 n+1)^3})=\frac{\pi ^4}{12}$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n}}{(2 n+1)^3 16^n}(9 H_{2 n+1}+\frac{32}{2 n+1})=40 \beta(4)+\frac{5 \pi \zeta (3)}{12}$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1)^2 (-16)^n}(10 \sum _{j=0}^n \frac{(-1)^j}{(2 j+1)^2}-\frac{(-1)^n}{(2 n+1)^2})=G \pi ^2-\frac{\pi \zeta (3)}{24}$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1)^2 (-16)^n} (40 \sum _{j=0}^{n-1} \frac{(-1)^j}{(2 j+1)^3}-\frac{7 (-1)^n}{(2 n+1)^3})=-\frac{85 \pi ^5}{3456}$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1) 16^n} (8 \sum _{j=0}^n \frac{(-1)^j}{(2
j+1)^4}+\frac{(-1)^n}{(2 n+1)^4})=\frac{11 \pi ^2 \zeta (3)}{120}+\frac{8 \pi \beta(4)}{3}$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n}}{(2 n+1)^3 16^n} (72 \sum _{j=0}^n \frac{(-1)^j}{(2 j+1)^2}-\frac{(-1)^n}{(2 n+1)^2})=\frac{7 \pi ^3 G}{3}+\frac{17 \pi ^2 \zeta (3)}{40}$$ $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1)^2 (-16)^n}(110 \sum _{j=0}^n \frac{(-1)^j}{(2 j+1)^4}+\frac{29 (-1)^n}{(2 n+1)^4})=\frac{91 \pi ^3 \zeta (3)}{96}+11 \pi ^2 \beta (4)-\frac{301 \pi \zeta (5)}{192}$$ \end{corollary} \begin{proof} For each of them, split the summand into even and odd parts, then recombine them; note that $$\sum_{j=0}^{2n+1} \frac{(-1)^j}{(2j+1)^s} = \sum_{j=0}^{n} \left(\frac{1}{(4j+1)^s} - \frac{1}{(4j+3)^s}\right) = 4^{-s}(\hbar_{n+1}^{(s)}(1/4) - \hbar_{n+1}^{(s)}(3/4))$$ The combined sums can be obtained by comparing coefficients of \ref{alternatinglevel4} at $(a,b,c,d,e) = (1,3/4,1/4,1/2,0)$. Let $a^i b^j c^k d^r e^l$ denote the equality obtained from the corresponding exponent; for example, the first is $\frac{a^2}{7}+\frac{b^2}{7}+\frac{b d}{4}-\frac{3 c^2}{28}-\frac{3 d^2}{28}$, the second is $\frac{37 a^3}{48}+\frac{a^2 b}{8}-\frac{7 a^2 d}{24}-\frac{7 a d^2}{24}-\frac{b^3}{4}-\frac{7 b^2 c}{32}+\frac{c^3}{32}-\frac{7 d^3}{24}$. \end{proof} Charlton \cite{charlton2022two} recently proved the third one using an ingenious manipulation of polylogarithms.
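As a quick numerical sanity check (not part of the proof), the third evaluation above, the one proved by Charlton, can be verified in plain Python; the truncation depths below are ad hoc choices, not from the text.

```python
import math

# Check: sum_{n>=0} C(2n,n)/((2n+1)^3 16^n) * (9*H_{2n+1} + 32/(2n+1))
#        = 40*beta(4) + 5*pi*zeta(3)/12.
# Terms decay like n^(-7/2) log n, so a few thousand terms suffice.

def lhs(N=4000):
    total, binom, H = 0.0, 1.0, 1.0      # binom = C(2n,n)/16^n, H = H_{2n+1}
    for n in range(N):
        m = 2 * n + 1
        total += binom / m**3 * (9 * H + 32 / m)
        binom *= (2 * n + 1) * (2 * n + 2) / (16.0 * (n + 1) ** 2)
        H += 1 / (m + 1) + 1 / (m + 2)   # extend H_{2n+1} to H_{2n+3}
    return total

def rhs():
    beta4 = sum((-1) ** j / (2 * j + 1) ** 4 for j in range(10 ** 5))
    zeta3 = sum(1 / k**3 for k in range(1, 10 ** 5))
    return 40 * beta4 + 5 * math.pi * zeta3 / 12

print(lhs(), rhs())   # both approach 41.13127...
```

At these depths the two sides agree to roughly eight digits.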
All of the above are conjectures in \cite{sun2010conjectures}, except the fourth, for which we have a stronger form: \begin{corollary}[Conjecture in \cite{sun2010conjectures}] $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1)^2 (-16)^n}\sum _{j=0}^n \frac{(-1)^j}{(2 j+1)^2}= \frac{\pi^2 G}{10} + \frac{\pi \zeta(3)}{240} + \frac{27\sqrt{3}}{640}L_{-3}(4)$$ \end{corollary} \begin{proof} In view of the fourth equality of the above corollary, it suffices to prove $$\sum_{n\geq 0}\frac{\binom{2 n}{n}}{(2 n+1)^4 16^n} = \frac{27}{64} \sqrt{3} L_{-3}(4)+\frac{\pi \zeta (3)}{12}$$ This was already proved by Zucker \cite{zucker1985series} in 1985. \end{proof} We note two interesting patterns in the above examples. First, when we juxtapose the expressions of the first and second corollaries, one immediately notices that the coefficients are the same: $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n} }{(2 n+1)^2 (-16)^n} (40 \sum _{j=0}^{n-1} \frac{(-1)^j}{(2 j+1)^3}-\frac{7 (-1)^n}{(2 n+1)^3})=-\frac{85 \pi ^5}{3456}$$ $$\sum _{n=1}^{\infty } \frac{(-1)^n}{n^3 \binom{2 n}{n}} (40 \sum _{j=1}^{n-1} \frac{(-1)^j}{j^3}-\frac{7 (-1)^n}{n^3})=-\frac{367 \pi ^6}{27216}+6 \zeta (3)^2$$ here both have the coefficients $40$ and $-7$. The same happens for the other pairs. Second, for every entry in the second corollary\footnote{except the third one}, the result is a multiple of $\pi$. In the context of CMZVs, results that are multiples of $\pi$ are quite special. \par An explanation of these two patterns might assist us in discovering more similar formulas. \section{Miscellaneous examples} In this section, we give two examples of a rather different flavour from the above: (1) the WZ-pairs used come from less well-known hypergeometric summations (see the end of Section 1); (2) the limit term in \ref{WZsumformula} is non-vanishing.
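Zucker's evaluation, invoked in the corollary proof above, also converges fast enough for a direct numerical check; this sketch computes $L_{-3}(4)$ from its defining character sum (the truncation depths are ad hoc).

```python
import math

# Check Zucker's evaluation:
# sum_{n>=0} C(2n,n)/((2n+1)^4 16^n) = (27/64)*sqrt(3)*L_{-3}(4) + pi*zeta(3)/12,
# where L_{-3}(s) = sum_{k>=0} [1/(3k+1)^s - 1/(3k+2)^s].

def lhs(N=100):
    total, binom = 0.0, 1.0              # binom = C(2n,n)/16^n
    for n in range(N):
        total += binom / (2 * n + 1) ** 4
        binom *= (2 * n + 1) * (2 * n + 2) / (16.0 * (n + 1) ** 2)
    return total

def rhs():
    L34 = sum(1 / (3 * k + 1) ** 4 - 1 / (3 * k + 2) ** 4 for k in range(10 ** 4))
    zeta3 = sum(1 / k**3 for k in range(1, 10 ** 5))
    return 27 / 64 * math.sqrt(3) * L34 + math.pi * zeta3 / 12

print(lhs(), rhs())   # both approach 1.0015829...
```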
\begin{theorem}\label{nonvanishingboundary1} For $a,b,c$ near $0$, we have \begin{multline*}\sum_{k\geq 0} \frac{2 (2 a+1)_k (2 b+1)_k (c+1)_k}{(k+1) (1)_k (2 a+2 b+2 k+1) (2 c+1)_k \left(a+b+\frac{1}{2}\right)_k} = \\ \frac{\sin (\pi a) \sin (\pi b) \Gamma \left(c+\frac{1}{2}\right) \Gamma \left(a+b+\frac{1}{2}\right) \Gamma \left(a-c+\frac{1}{2}\right) \Gamma \left(b-c+\frac{1}{2}\right) \Gamma \left(-a-b+c+\frac{1}{2}\right)}{2 \pi ^{3/2} a b \Gamma \left(a+\frac{1}{2}\right) \Gamma \left(b+\frac{1}{2}\right)} \\ + c\sum_{n\geq 1}\frac{ (-1)^n (1-a)_n (2 a+1)_n (1-b)_n (2 b+1)_n (1-2 c)_n (c+1)_n P_1(n)}{2 (3 n)! (n-a) (2 a+n) (n-b) (2 b+n) (n-2 c) (c+n) \left(a+b+\frac{1}{2}\right)_n \left(a-c+\frac{1}{2}\right)_n \left(b-c+\frac{1}{2}\right)_n} \end{multline*} where $P_1(n)$ is $-8 a b c+4 a b n-4 a c n+20 a n^2-6 a n-4 b c n+20 b n^2-6 b n-20 c n^2+6 c n+28 n^3-18 n^2+3 n$. \end{theorem} \begin{proof} Let $$F(n,k) = \frac{\Gamma \left(-a+c-n+\frac{1}{2}\right) \Gamma (2 a+k+n+1) \Gamma \left(-b+c-n+\frac{1}{2}\right) \Gamma (2 b+k+n+1) \Gamma (c+k+n+1)}{\Gamma (a-n) \Gamma (b-n) \Gamma (k+3 n+2) \Gamma (2 c+k-n+1) \Gamma \left(a+b+k+n+\frac{3}{2}\right)}$$ One finds its WZ-mate $G(n,k)$ and then checks that $\lim_{k\to\infty} G(n,k) = 0$ for each $n$.
Therefore we have $$\sum_{k\geq 0}F(0,k) = \sum_{n\geq 0} G(n,0) + \lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$$ Some simplification reduces the theorem to showing that $$\lim_{n\to \infty} \sum_{k\geq 0} F'(n,k)$$ equals the first term on the RHS of the theorem, where \begin{multline*}F'(n,k) = \Gamma(a+1) \Gamma (b+1) \Gamma (2c+1) \Gamma \left(a+b+\frac{1}{2}\right) \Gamma \left(-a+c-n+\frac{1}{2}\right) \Gamma (2 a+k+n+1) \Gamma \left(-b+c-n+\frac{1}{2}\right)\\ \Gamma (2 b+k+n+1) \Gamma (c+k+n+1) \div (a b \Gamma (2 a+1) \Gamma (2 b+1) \Gamma (c+1) \Gamma \left(-a+c+\frac{1}{2}\right) \Gamma (a-n) \\ \Gamma \left(-b+c+\frac{1}{2}\right) \Gamma (b-n) \Gamma (k+3 n+2) \Gamma (2 c+k-n+1) \Gamma \left(a+b+k+n+\frac{3}{2}\right))\end{multline*} It is easy to see that $\lim_{n\to \infty} \sum_{0\leq k<n} F'(n,k) = 0$ (each $F'(n,k)$ is $O(16^{-n}n^A)$ with the $O$ uniform in $k$). Therefore it remains to evaluate $\lim_{n\to \infty} \sum_{k\geq 0} F'(n,n+k)$; since $\sum_{k\geq 0} F'(n,n+k)$ is a $_4F_3$ sum, it reduces to showing that \begin{equation}\label{L1}\pFq{4}{3}{1,1+2a+2n,1+2b+2n,1+c+2n}{1+2c,3/2+a+b+2n,2+4n}{1} \sim \frac{2(16^n) \Gamma (2 c+1) (2n)^{a+b-3 c}\Gamma \left(-a-b+c+\frac{1}{2}\right)}{\sqrt{\pi }}\end{equation} as $n\to \infty$. This is shown in the lemma below. \end{proof} \begin{lemma} As $n \to \infty$, \ref{L1} holds. \end{lemma} \begin{proof} It is easily seen that if $A,B,C,D$ are independent of $n,k$ and $A+B=C+D$, then $$\frac{(A+n)_k (B+n)_k}{(C+n)_k (D+n)_k} = 1 + O(\frac{1}{n})$$ with the $O$-term uniform in $k\geq 0$.
Applying this, we have $$\pFq{4}{3}{1,1+2a+2n,1+2b+2n,1+c+2n}{1+2c,3/2+a+b+2n,2+4n}{1} \sim \pFq{3}{2}{1,1/2+a+b+2n,1+c+2n}{1+2c,2+4n}{1}$$ Replacing $n$ by $n/2$ and $a+b$ by $d$, and using Euler's integral representation of $_3F_2$, we reduce \ref{L1} to (for $c>0$): \begin{equation}\label{L2}I:= \int_0^1\int_0^1 (1-t)^{2c-1}(1-ts)^{-1/2-d}\Big(\frac{s}{1-s}\Big)^c\Big(\frac{s(1-s)}{1-ts}\Big)^n dsdt\sim \frac{n^{d-3c}}{\sqrt n}\Gamma(2c)\Gamma\Big(\frac{1}{2}-d+c\Big)\end{equation} Substituting $z=s/(1-s)$ and $t\mapsto 1-t$, the double integral equals $$\int_0^\infty z^c\Big(\frac{1}{1+z}\Big)^{\frac{3}{2}-d}\Big(\frac{z}{1+z}\Big)^ndz\int_0^1 t^{2c-1}(1+tz)^{-\frac{1}{2}-d}\frac{dt}{(1+tz)^n}$$ Only large $z$ contributes to the outer integral, so we may assume $z$ is large. In that case, we have $$\int_0^1 t^{2c-1}(1+tz)^{-\frac{1}{2}-d}\frac{dt}{(1+tz)^n}=\frac{1}{(nz)^{2c}}\int_0^{nz}x^{2c-1}\Big(1+\frac{x}{n}\Big)^{-\frac{1}{2}-d}e^{-n\log(1+\frac{x}{n})}\,dx = \frac{\Gamma(2c)}{(nz)^{2c}}\Big(1+O\Big(\frac{1}{n}\Big)\Big)$$ by the dominated convergence theorem. Therefore we have $$I= \frac{\Gamma(2c)}{n^{2c}} (1+O(\frac{1}{n})) \int_0^\infty z^{-c} (\frac{1}{1+z})^{3/2-d} (\frac{z}{1+z})^n dz$$ The last integral is a beta function, so a straightforward application of the limiting behaviour of $\Gamma(x)$ gives \ref{L2}.
This completes the proof of \ref{nonvanishingboundary1}. \end{proof} Comparing the coefficient of $a^0b^0c^1$ on both sides of \ref{nonvanishingboundary1} yields\footnote{the coefficient of the gamma product at exponent $a^0b^0c^1$ is $0$} $$\sum _{k=1}^{\infty } \frac{2 (1)_k H_k }{(-2 k-1) (k+1) \left(\frac{1}{2}\right)_k} = \sum_{n\geq 1} \frac{(-1)^n \left(28 n^3-18 n^2+3 n\right) (1)_n^6}{2 n^6 (\frac{1}{2})_n^3 (3n)!}$$ The LHS easily evaluates to $-7\zeta(3)$ (more generally, such sums are CMZVs of level 2 or 4, \ref{binomCMZVs}), so we have proved \begin{corollary}[Conjecture in \cite{sun2021book}, \cite{sun2010conjectures}] $$\sum_{n\geq 1} \frac{(28n^2-18n+3)(-64)^n}{n^5\binom{2n}{n}^4 \binom{3n}{n}} = -14\zeta(3)$$ \end{corollary} We remark that \ref{nonvanishingboundary1} can accommodate a fourth parameter $d$ if we use the formula $\sum_{k\geq 0}F(0,k+d) = \sum_{n\geq 0} G(n,d)$. On the other hand, an additional parameter might not be possible for the following: \begin{theorem}For $a,b$ near $0$, we have \begin{multline*}\sum_{k\geq 0} \frac{2 \left(-\frac{1}{3}\right)^k \left(-a+b+\frac{1}{2}\right)_k (2 a+b+1)_k}{(4 a+2 b+2 k+1) (b+1)_k \left(2 a+b+\frac{1}{2}\right)_k} = \frac{\pi 4^a 3^{-a+b-\frac{1}{2}} \Gamma (b+1) \sec (\pi (a-b)) \Gamma \left(2 a+b+\frac{1}{2}\right)}{\Gamma (a+1) \Gamma \left(-a+b+\frac{1}{2}\right) \Gamma (2 a+b+1)} \\ - b\sum_{n\geq 1} \frac{3^n \left(a+\frac{1}{2}\right)_n \left((a+1)_n\right){}^2 (2 a+b+1)_{2 n}}{(a+n) (2 a+b+2 n-1) (2 a+1)_{2 n} \left(a-b+\frac{1}{2}\right)_n \left(2 a+b+\frac{1}{2}\right)_{2 n}} \end{multline*} \end{theorem} \begin{proof} As in the last theorem, the proof again uses a WZ-pair, and the gamma product on the right again originates from $\lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$.
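Before continuing with the proof, note that the corollary just obtained is easy to test numerically: the terms decay like $27^{-n}$, so exact binomials via math.comb and about thirty terms already give full double precision (the hard-coded value of $\zeta(3)$ is Apéry's constant).

```python
import math

# Check: sum_{n>=1} (28n^2 - 18n + 3)(-64)^n / (n^5 C(2n,n)^4 C(3n,n)) = -14*zeta(3)

def lhs(N=30):
    total = 0.0
    for n in range(1, N + 1):
        num = (28 * n * n - 18 * n + 3) * (-64) ** n
        den = n ** 5 * math.comb(2 * n, n) ** 4 * math.comb(3 * n, n)
        total += num / den
    return total

ZETA3 = 1.2020569031595943   # Apery's constant zeta(3)
print(lhs(), -14 * ZETA3)    # both approach -16.8287966...
```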
Let $F(n,k)$ be $$\frac{(-1)^k 3^{n-k} \Gamma (a+n+1)^2 \Gamma \left(-a+b+k-n+\frac{1}{2}\right) \Gamma (2 a+b+k+2 n+1)}{\Gamma \left(-a-n+\frac{1}{2}\right) \Gamma (2 a+2 n+1) \Gamma (b+k+1) \Gamma \left(2 a+b+k+2 n+\frac{3}{2}\right)}$$ One finds its WZ-mate $G(n,k)$; the above equality is then $$\sum_{k\geq 0}F(0,k) = \sum_{n\geq 0} G(n,0) + \lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$$ The asymptotic term $\lim_{n\to\infty} \sum_{k\geq 0} F(n,k)$ reduces to proving the following: $$\pFq{3}{2}{1,1/2-a+b-n,1+2a+b+2n}{1+b,3/2+2a+b+2n}{-1/3} \sim \Gamma(b+1)2^{1/2}3^{-1/2-a+b-n}4^{n+a}n^{-b} \quad n\to\infty$$ This can be shown in complete analogy with the lemma above, so we omit it here. \end{proof} Comparing the coefficient of $a^0b^1$ on both sides, we have $$\sum _{k=0}^{\infty } -\frac{4 \left(-\frac{1}{3}\right)^k}{(2 k+1)^2} = \frac{\pi \log (3)}{\sqrt{3}} - \sum _{n=1}^{\infty } \frac{3^n \left((1)_n\right){}^2}{n (2 n-1) \left(\frac{1}{2}\right)_{2 n}}$$ The LHS is $$2\sqrt{3}\,i\sum_{k = 1}^\infty \frac{(i/\sqrt{3})^k}{k^2}\left[1 - (-1)^k\right] = 2\sqrt{3}\,i\left(\Li_2\Big(\frac{i}{\sqrt 3}\Big) - \Li_2\Big(\frac{-i}{\sqrt 3}\Big)\right)$$ which can be easily shown\footnote{There are many ways.
Method 1: \textit{ad hoc} manipulation of functional equation of $\Li_2$; method 2: invoking the fact that $\Li_n(i/\sqrt{3})$ is CMZV of level 6 weight $n$, so they can be expressed using our chosen CMZV basis} $=\frac{\pi \log (3)}{\sqrt{3}}-\frac{15 L_{-3}(2)}{2}$, hence \begin{corollary} [Conjecture in \cite{sun2021book}, \cite{sun2010conjectures}] $$L_{-3}(2)=\frac2{15}\sum _{k=1}^\infty\frac{48^k}{k(2k-1)\binom{4k}{2k}\binom{2k}k}$$ \end{corollary} \newpage \section{Appendix: explicit polynomials} $P_1(n)$ in \ref{ex2} is \\ \small{$a^2 b^2-a^2 b c-a^2 b d-2 a^2 b n+a^2 c d+a^2 c n+a^2 d n+a^2 n^2-a b^2 c-a b^2 d-2 a b^2 n+a b c^2+3 a b c d+6 a b c n+a b d^2+6 a b d n+8 a b n^2-2 a c^2 d-3 a c^2 n-2 a c d^2-11 a c d n+a c d-13 a c n^2+2 a c n-3 a d^2 n-13 a d n^2+2 a d n-14 a n^3+4 a n^2+b^2 c d+b^2 c n+b^2 d n+b^2 n^2-2 b c^2 d-3 b c^2 n-2 b c d^2-11 b c d n+b c d-13 b c n^2+2 b c n-3 b d^2 n-13 b d n^2+2 b d n-14 b n^3+4 b n^2+c^3 d+2 c^3 n+2 c^2 d^2+10 c^2 d n-c^2 d+13 c^2 n^2-2 c^2 n+c d^3+10 c d^2 n-c d^2+29 c d n^2-6 c d n+28 c n^3-8 c n^2+2 d^3 n+13 d^2 n^2-2 d^2 n+28 d n^3-8 d n^2+21 n^4-8 n^3$} \\ $P_1(n)$ in \ref{ex3} is \\ \small{$a^2 b+a^2 c+a^2 d-2 a^2 e-2 a^2 n+a b^2+2 a b c+2 a b d-5 a b e-5 a b n+a c^2+2 a c d-5 a c e-5 a c n+a d^2-5 a d e-5 a d n+6 a e^2+12 a e n+6 a n^2+b^2 c+b^2 d-2 b^2 e-2 b^2 n+b c^2+2 b c d-5 b c e-5 b c n+b d^2-5 b d e-5 b d n+6 b e^2+12 b e n+6 b n^2+c^2 d-2 c^2 e-2 c^2 n+c d^2-5 c d e-5 c d n+6 c e^2+12 c e n+6 c n^2-2 d^2 e-2 d^2 n+6 d e^2+12 d e n+6 d n^2-5 e^3-15 e^2 n-15 e n^2-5 n^3$} \\ $P_1(n)$ in \ref{alternatinglevel2} is \\ \small{ $a^5-2 b a^4-2 c a^4-2 d a^4+10 e a^4+10 n a^4-a^4+b^2 a^3+c^2 a^3+d^2 a^3+39 e^2 a^3+39 n^2 a^3+2 b a^3+3 b c a^3+2 c a^3+3 b d a^3+3 c d a^3+2 d a^3-17 b e a^3-17 c e a^3-17 d e a^3-7 e a^3-17 b n a^3-17 c n a^3-17 d n a^3+78 e n a^3-7 n a^3+76 e^3 a^2+76 n^3 a^2-b^2 a^2-b c^2 a^2-c^2 a^2-b d^2 a^2-c d^2 a^2-d^2 a^2-51 b e^2 a^2-51 c e^2 a^2-51 d e^2 a^2-18 e^2 a^2-51 b n^2 
a^2-51 c n^2 a^2-51 d n^2 a^2+228 e n^2 a^2-18 n^2 a^2-b^2 c a^2-3 b c a^2-b^2 d a^2-c^2 d a^2-3 b d a^2-2 b c d a^2-3 c d a^2+7 b^2 e a^2+7 c^2 e a^2+7 d^2 e a^2+11 b e a^2+22 b c e a^2+11 c e a^2+22 b d e a^2+22 c d e a^2+11 d e a^2+7 b^2 n a^2+7 c^2 n a^2+7 d^2 n a^2+228 e^2 n a^2+11 b n a^2+22 b c n a^2+11 c n a^2+22 b d n a^2+22 c d n a^2+11 d n a^2-102 b e n a^2-102 c e n a^2-102 d e n a^2-36 e n a^2+75 e^4 a+75 n^4 a-66 b e^3 a-66 c e^3 a-66 d e^3 a-22 e^3 a-66 b n^3 a-66 c n^3 a-66 d n^3 a+300 e n^3 a-22 n^3 a+b c^2 a+b d^2 a+c d^2 a+15 b^2 e^2 a+15 c^2 e^2 a+15 d^2 e^2 a+18 b e^2 a+48 b c e^2 a+18 c e^2 a+48 b d e^2 a+48 c d e^2 a+18 d e^2 a+15 b^2 n^2 a+15 c^2 n^2 a+15 d^2 n^2 a+450 e^2 n^2 a+18 b n^2 a+48 b c n^2 a+18 c n^2 a+48 b d n^2 a+48 c d n^2 a+18 d n^2 a-198 b e n^2 a-198 c e n^2 a-198 d e n^2 a-66 e n^2 a+b^2 c a+b^2 d a+c^2 d a+2 b c d a-4 b^2 e a-6 b c^2 e a-4 c^2 e a-6 b d^2 e a-6 c d^2 e a-4 d^2 e a-6 b^2 c e a-13 b c e a-6 b^2 d e a-6 c^2 d e a-13 b d e a-15 b c d e a-13 c d e a+300 e^3 n a-4 b^2 n a-6 b c^2 n a-4 c^2 n a-6 b d^2 n a-6 c d^2 n a-4 d^2 n a-198 b e^2 n a-198 c e^2 n a-198 d e^2 n a-66 e^2 n a-6 b^2 c n a-13 b c n a-6 b^2 d n a-6 c^2 d n a-13 b d n a-15 b c d n a-13 c d n a+30 b^2 e n a+30 c^2 e n a+30 d^2 e n a+36 b e n a+96 b c e n a+36 c e n a+96 b d e n a+96 c d e n a+36 d e n a+30 e^5+30 n^5-32 b e^4-32 c e^4-32 d e^4-11 e^4-32 b n^4-32 c n^4-32 d n^4+150 e n^4-11 n^4+10 b^2 e^3+10 c^2 e^3+10 d^2 e^3+10 b e^3+32 b c e^3+10 c e^3+32 b d e^3+32 c d e^3+10 d e^3+10 b^2 n^3+10 c^2 n^3+10 d^2 n^3+300 e^2 n^3+10 b n^3+32 b c n^3+10 c n^3+32 b d n^3+32 c d n^3+10 d n^3-128 b e n^3-128 c e n^3-128 d e n^3-44 e n^3-3 b^2 e^2-8 b c^2 e^2-3 c^2 e^2-8 b d^2 e^2-8 c d^2 e^2-3 d^2 e^2-8 b^2 c e^2-11 b c e^2-8 b^2 d e^2-8 c^2 d e^2-11 b d e^2-21 b c d e^2-11 c d e^2+300 e^3 n^2-3 b^2 n^2-8 b c^2 n^2-3 c^2 n^2-8 b d^2 n^2-8 c d^2 n^2-3 d^2 n^2-192 b e^2 n^2-192 c e^2 n^2-192 d e^2 n^2-66 e^2 n^2-8 b^2 c n^2-11 b c n^2-8 b^2 d n^2-8 c^2 d 
n^2-11 b d n^2-21 b c d n^2-11 c d n^2+30 b^2 e n^2+30 c^2 e n^2+30 d^2 e n^2+30 b e n^2+96 b c e n^2+30 c e n^2+96 b d e n^2+96 c d e n^2+30 d e n^2+3 b c^2 e+3 b d^2 e+b c d^2 e+3 c d^2 e+3 b^2 c e+3 b^2 d e+b c^2 d e+3 c^2 d e+b^2 c d e+7 b c d e+150 e^4 n-128 b e^3 n-128 c e^3 n-128 d e^3 n-44 e^3 n+3 b c^2 n+3 b d^2 n+b c d^2 n+3 c d^2 n+30 b^2 e^2 n+30 c^2 e^2 n+30 d^2 e^2 n+30 b e^2 n+96 b c e^2 n+30 c e^2 n+96 b d e^2 n+96 c d e^2 n+30 d e^2 n+3 b^2 c n+3 b^2 d n+b c^2 d n+3 c^2 d n+b^2 c d n+7 b c d n-6 b^2 e n-16 b c^2 e n-6 c^2 e n-16 b d^2 e n-16 c d^2 e n-6 d^2 e n-16 b^2 c e n-22 b c e n-16 b^2 d e n-16 c^2 d e n-22 b d e n-42 b c d e n-22 c d e n$} \\ $P_1(n)$ in \ref{alternatinglevel4} is \small{ $30 e^5+64 a e^4-32 b e^4-32 c e^4-43 d e^4+150 n e^4+75 e^4+48 a^2 e^3+10 b^2 e^3+10 c^2 e^3+20 d^2 e^3+300 n^2 e^3+128 a e^3-48 a b e^3-58 b e^3-48 a c e^3+16 b c e^3-58 c e^3-68 a d e^3+34 b d e^3+34 c d e^3-80 d e^3+256 a n e^3-128 b n e^3-128 c n e^3-172 d n e^3+300 n e^3+68 e^3+16 a^3 e^2-3 d^3 e^2+300 n^3 e^2+72 a^2 e^2+8 a b^2 e^2+11 b^2 e^2+8 a c^2 e^2+11 c^2 e^2+22 a d^2 e^2-11 b d^2 e^2-11 c d^2 e^2+25 d^2 e^2+384 a n^2 e^2-192 b n^2 e^2-192 c n^2 e^2-258 d n^2 e^2+450 n^2 e^2+88 a e^2-24 a^2 b e^2-64 a b e^2-35 b e^2-24 a^2 c e^2-64 a c e^2+16 a b c e^2+16 b c e^2-35 c e^2-35 a^2 d e^2-7 b^2 d e^2-7 c^2 d e^2-97 a d e^2+35 a b d e^2+42 b d e^2+35 a c d e^2-8 b c d e^2+42 c d e^2-49 d e^2+144 a^2 n e^2+30 b^2 n e^2+30 c^2 n e^2+60 d^2 n e^2+384 a n e^2-144 a b n e^2-174 b n e^2-144 a c n e^2+48 b c n e^2-174 c n e^2-204 a d n e^2+102 b d n e^2+102 c d n e^2-240 d n e^2+204 n e^2+27 e^2+2 a^4 e+150 n^4 e+16 a^3 e-2 a d^3 e+b d^3 e+c d^3 e-2 d^3 e+256 a n^3 e-128 b n^3 e-128 c n^3 e-172 d n^3 e+300 n^3 e+34 a^2 e+2 a^2 b^2 e+6 a b^2 e+3 b^2 e+2 a^2 c^2 e-2 b^2 c^2 e+6 a c^2 e+2 b c^2 e+3 c^2 e+6 a^2 d^2 e+b^2 d^2 e+c^2 d^2 e+20 a d^2 e-6 a b d^2 e-7 b d^2 e-6 a c d^2 e-2 b c d^2 e-7 c d^2 e+8 d^2 e+144 a^2 n^2 e+30 b^2 n^2 e+30 c^2 n^2 e+60 d^2 n^2 
e+384 a n^2 e-144 a b n^2 e-174 b n^2 e-144 a c n^2 e+48 b c n^2 e-174 c n^2 e-204 a d n^2 e+102 b d n^2 e+102 c d n^2 e-240 d n^2 e+204 n^2 e+24 a e-4 a^3 b e-22 a^2 b e-26 a b e-7 b e-4 a^3 c e-22 a^2 c e+2 b^2 c e-26 a c e+4 a^2 b c e+12 a b c e+2 b c e-7 c e-6 a^3 d e-34 a^2 d e-3 a b^2 d e-4 b^2 d e-3 a c^2 d e-2 b c^2 d e-4 c^2 d e-42 a d e+9 a^2 b d e+29 a b d e+13 b d e+9 a^2 c d e-2 b^2 c d e+29 a c d e-2 a b c d e+13 c d e-10 d e+32 a^3 n e-6 d^3 n e+144 a^2 n e+16 a b^2 n e+22 b^2 n e+16 a c^2 n e+22 c^2 n e+44 a d^2 n e-22 b d^2 n e-22 c d^2 n e+50 d^2 n e+176 a n e-48 a^2 b n e-128 a b n e-70 b n e-48 a^2 c n e-128 a c n e+32 a b c n e+32 b c n e-70 c n e-70 a^2 d n e-14 b^2 d n e-14 c^2 d n e-194 a d n e+70 a b d n e+84 b d n e+70 a c d n e-16 b c d n e+84 c d n e-98 d n e+54 n e+4 e+30 n^5+a^4+64 a n^4-32 b n^4-32 c n^4-43 d n^4+75 n^4+4 a^3-a d^3+b c d^3+48 a^2 n^3+10 b^2 n^3+10 c^2 n^3+20 d^2 n^3+128 a n^3-48 a b n^3-58 b n^3-48 a c n^3+16 b c n^3-58 c n^3-68 a d n^3+34 b d n^3+34 c d n^3-80 d n^3+68 n^3+5 a^2+a^2 b^2+a b^2+a^2 c^2-b^2 c^2+a c^2+b c^2+3 a^2 d^2+b c^2 d^2+4 a d^2-2 a b d^2+b^2 c d^2-2 a c d^2-2 a b c d^2-3 b c d^2+16 a^3 n^2-3 d^3 n^2+72 a^2 n^2+8 a b^2 n^2+11 b^2 n^2+8 a c^2 n^2+11 c^2 n^2+22 a d^2 n^2-11 b d^2 n^2-11 c d^2 n^2+25 d^2 n^2+88 a n^2-24 a^2 b n^2-64 a b n^2-35 b n^2-24 a^2 c n^2-64 a c n^2+16 a b c n^2+16 b c n^2-35 c n^2-35 a^2 d n^2-7 b^2 d n^2-7 c^2 d n^2-97 a d n^2+35 a b d n^2+42 b d n^2+35 a c d n^2-8 b c d n^2+42 c d n^2-49 d n^2+27 n^2+2 a-2 a^3 b-5 a^2 b-3 a b-2 a^3 c-5 a^2 c+b^2 c-3 a c+2 a^2 b c+2 a b c-b c-3 a^3 d-8 a^2 d-a b^2 d+b^2 c^2 d-a c^2 d-a b c^2 d-2 b c^2 d-5 a d+4 a^2 b d+5 a b d+4 a^2 c d-a b^2 c d-2 b^2 c d+5 a c d+a^2 b c d+a b c d+3 b c d+2 a^4 n+16 a^3 n-2 a d^3 n+b d^3 n+c d^3 n-2 d^3 n+34 a^2 n+2 a^2 b^2 n+6 a b^2 n+3 b^2 n+2 a^2 c^2 n-2 b^2 c^2 n+6 a c^2 n+2 b c^2 n+3 c^2 n+6 a^2 d^2 n+b^2 d^2 n+c^2 d^2 n+20 a d^2 n-6 a b d^2 n-7 b d^2 n-6 a c d^2 n-2 b c d^2 n-7 c d^2 n+8 d^2 n+24 a n-4 
a^3 b n-22 a^2 b n-26 a b n-7 b n-4 a^3 c n-22 a^2 c n+2 b^2 c n-26 a c n+4 a^2 b c n+12 a b c n+2 b c n-7 c n-6 a^3 d n-34 a^2 d n-3 a b^2 d n-4 b^2 d n-3 a c^2 d n-2 b c^2 d n-4 c^2 d n-42 a d n+9 a^2 b d n+29 a b d n+13 b d n+9 a^2 c d n-2 b^2 c d n+29 a c d n-2 a b c d n+13 c d n-10 d n+4 n $} \newpage \bibliographystyle{plain}
\section{Introduction} Learning from important experiences prevails in nature. In the rodent hippocampus, memories with higher importance, such as those associated with rewarding locations or large reward-prediction errors, are replayed more frequently \citep{Michon2019, roscow2019behavioural, salvetti2014role}. Psychophysical experiments have shown that participants with more frequent replay of memories associated with high reward perform better in memory tasks \citep{Gruber2016, schapiro2018human}. As accumulating new experiences is costly, utilizing valuable past experiences is key to efficient learning \citep{olafsdottir2018role}. Differentiating important experiences from unimportant ones also benefits reinforcement learning (RL) algorithms \citep{katharopoulos2018not}. Prioritized experience replay (PER) \citep{Schaul2016} is an experience replay technique built on the deep Q-network (DQN) \citep{Mnih2015}, which weighs the importance of samples by the magnitude of their temporal-difference error ($|\text{TD}|$). As a result, experiences with larger $|\text{TD}|$ are sampled more frequently. PER significantly improves the learning efficiency of DQN, and has been adopted \citep{Hessel2018,Horgan2018, Kapturowski2019} and extended \citep{daley2019reconciling, pan2018organizing, schlegel2019importance} by various deep RL algorithms. $|\text{TD}|$ quantifies the unexpectedness of an experience to a learning agent, and biologically corresponds to the reward prediction error signal in the dopamine system \citep{schultz1997neural, glimcher2011understanding}. However, how $|\text{TD}|$ relates to the importance of experience in the context of RL is not well understood. We address this problem from an economic perspective, by linking $|\text{TD}|$ to the \textit{value of experience} in RL.
Recently, in the neuroscience field, a normative theory for memory access based on the Dyna framework \citep{sutton1990integrated} suggested that a rational agent should replay the experiences that lead to the most rewarding future decisions \citep{Mattar2018}. Follow-up research shows that optimizing the replay strategy according to the normative theory has an advantage over prioritized experience replay with $|\text{TD}|$ \citep{zha2019experience}. Inspired by \cite{Mattar2018}, we define the value of experience as the increase in the expected cumulative reward resulting from updating on the experience. The value of experience quantifies the importance of experience from first principles: assuming that the agent is economically rational and has full information about the value of experience, it will choose the most valuable experience to update on, which leads to the most rewarding future decisions. As supplements, we derive two further value metrics, which correspond to the evaluation improvement value and the policy improvement value due to an update on an experience. In this work, we mathematically show that these value metrics are upper-bounded by $|\text{TD}|$ for Q-learning. Therefore, $|\text{TD}|$ implicitly tracks the value of experience and accounts for the importance of experience. We further extend our framework to maximum-entropy RL, which augments the reward with an entropy term to encourage exploration \citep{Haarnoja2017}. We derive lower and upper bounds on these value metrics for soft Q-learning, which are related to $|\text{TD}|$ and the ``on-policyness" of the experience. Experiments in a grid-world maze and on CartPole support our theoretical results for both tabular and function-approximation RL methods, showing that the derived bounds hold in practice. Moreover, we show that experience replay using the upper bound as the priority improves maximum-entropy RL (\textit{i.e.}, soft DQN) in Atari games.
\section{Motivation} \subsection{Q-learning and Experience Replay} \label{qlearning} We consider a Markov Decision Process (MDP) defined by a tuple $\{\mathcal{S},\mathcal{A},\mathcal{P}, \mathcal{R}, \gamma\}$, where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, $\mathcal{P}$ is the transition function, $\mathcal{R}$ is the reward function, and $\gamma \in [0, 1]$ is the discount factor. A policy $\pi$ of an agent assigns probability $\pi(a|s)$ to each action $a \in \mathcal{A}$ given state $s \in \mathcal{S}$. The goal is to learn an optimal policy that maximizes the expected discounted return starting from time step $t$, $G_t = \sum_{i = 0}^{\infty}\gamma^i r_{t+i}$, where $r_t$ is the reward the agent receives at time step $t$. The value function $v_\pi(s)$ is defined as the expected return starting from state $s$ and following policy $\pi$, and the Q-function $q_\pi(s, a)$ is the expected return after performing action $a$ in state $s$ and subsequently following policy $\pi$. According to Q-learning \citep{Watkins1992}, the optimal policy can be learned through policy iteration: performing policy evaluation and policy improvement alternately and iteratively. For each policy evaluation, we update $Q(s,a)$, an estimate of $q_\pi(s, a)$, by \begin{equation}\nonumber Q_\text{new}(s, a) = Q_\text{old}(s, a) + \alpha \text{TD}(s, a, r, s'), \end{equation} where the TD error is $\text{TD}(s, a, r, s') = r + \gamma \max_{a'} Q_\text{old}(s', a') - Q_\text{old}(s, a)$ and $\alpha$ is the step-size parameter. $Q_\text{old}$ and $Q_\text{new}$ denote the estimated Q-function before and after the update, respectively. For each policy improvement, we update the policy from $\pi_{\text{old}}$ to $\pi_{\text{new}}$ according to the newly estimated Q-function, \begin{equation}\nonumber \pi_{\text{new}} = \mathop{\argmax}_{a} Q_\text{new}(s,a).
\end{equation} Standard Q-learning uses each experience only once before it is discarded, which is sample-inefficient and can be improved by the \textit{experience replay} technique \citep{lin1992self}. We denote the experience that the agent collected at time $k$ by a tuple $e_k = \{s_k, a_k, r_k, s_{k}' \}$. With experience replay, the experience $e_k$ is stored in the replay buffer and can be accessed multiple times during learning. \subsection{Value Metrics of Experience} To quantify the importance of experience, we derive three value metrics of experience. The utility of an update on experience $e_k$ is defined as the value added to the cumulative discounted reward starting from state $s_k$ after updating on $e_k$. Intuitively, choosing the most valuable experience for the update will yield the highest utility to the agent. We denote this utility as the expected value of backup $\text{EVB}(e_k)$ \citep{Mattar2018}, \begin{align} \text{EVB}(e_k) &= v_{\pi_{\text{new}}}(s_k) - v_{\pi_{\text{old}}}(s_k) \nonumber \\ &= \sum_{a}{\pi_{\text{new}}(a|s_k)q_{\pi_{\text{new}}}(s_k,a)} \nonumber \\ & \qquad\qquad-\sum_{a} {\pi_{\text{old}}(a|s_k)q_{\pi_{\text{old}}}(s_k,a)}, \label{eq:evb} \end{align} where $\pi_{\text{old}}$, $v_{\pi_{\text{old}}}$ and $q_{\pi_{\text{old}}}$ are respectively the policy, value function and Q-function before the update, and $\pi_{\text{new}}$, $v_{\pi_{\text{new}}}$, and $q_{\pi_{\text{new}}}$ are those after.
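To make these quantities concrete, the following minimal tabular sketch (dimensions, seed, and step size are illustrative choices, not from the paper) performs one Q-learning backup on a stored experience and computes its TD error and the induced EVB, with the greedy value $\max_a Q(s,a)$ standing in for $v_\pi(s)$.

```python
import numpy as np

# One Q-learning backup on an experience e_k = (s, a, r, s') and the
# resulting EVB = v_new(s_k) - v_old(s_k), with v(s) = max_a Q(s, a).

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 5, 4, 0.9, 0.1

Q = rng.normal(size=(n_states, n_actions))   # current Q-function estimate
s, a, r, s_next = 0, 2, 1.0, 3               # a stored experience e_k

td = r + gamma * Q[s_next].max() - Q[s, a]   # TD error of e_k
Q_new = Q.copy()
Q_new[s, a] += alpha * td                    # policy evaluation step

evb = Q_new[s].max() - Q[s].max()            # change in greedy value at s_k
print(f"TD = {td:.3f}, EVB = {evb:.3f}, alpha*|TD| = {alpha * abs(td):.3f}")
```

Already in this single example $|\text{EVB}| \le \alpha|\text{TD}|$ holds, anticipating the bound proved in Section 3.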
As the update on experience $e_k$ consists of policy evaluation and policy improvement, the value of experience can further be separated into the evaluation improvement value $\text{EIV}(e_k)$ and the policy improvement value $\text{PIV}(e_k)$ by rewriting (\ref{eq:evb}): \begin{multline}\label{eq:evb2} \text{EVB}(e_k) = \underbrace{\sum_{a}{[\pi_{\text{new}}(a|s_k) - \pi_{\text{old}}(a|s_k)]q_{\pi_{\text{new}}}(s_k,a)}}_{\text{PIV}(e_k)}+ \\ \underbrace{\sum_{a}{\pi_{\text{old}}(a|s_k)[q_{\pi_{\text{new}}}(s_k,a)-q_{\pi_{\text{old}}}(s_k,a)]}}_{\text{EIV}(e_k)}, \end{multline} where $\text{PIV}(e_k)$ measures the value improvement due to the change of the policy, and $\text{EIV}(e_k)$ captures that due to the change of the evaluation. Thus, we have three metrics for the value of experience: $\text{EVB}$, $\text{PIV}$ and $\text{EIV}$. \begin{figure*}[t!] \centering \includegraphics[width=.95\textwidth, clip=true, trim = 0mm 8mm 0mm 8mm]{images/motivatingexample_icml.pdf} \caption{\textbf{a.} Illustration of the ``Linear Grid-World" example: there are $N$ grids and 4 actions (north, south, east, west). The reward for entering the goal state (cheese) is 1; the reward is 0 elsewhere. \textbf{b-c.} Examples of prioritized experience replay by $|\text{TD}|$ and by the value of experience (EVB). The main difference is that EVB prioritizes only the experiences that are associated with the optimal policy, while $|\text{TD}|$ is sensitive to changes in the value function and will prioritize non-optimal experiences, such as those associated with north or south. Here squares represent states, triangles represent actions, and the experiences with the highest priority are highlighted.
\textbf{d.} Expected number of replays needed to learn the optimal policy, as the number of grids changes: uniform replay (blue), prioritized by $|\text{TD}|$ (orange), and by EVB (green).} \label{fig:fig1} \end{figure*} \subsection{Value Metrics of Experience in Q-Learning} For Q-learning, we use the Q-function to estimate the true action-value function. A backup over an experience $e_k$ consists of policy evaluation with the Bellman operator and greedy policy improvement. As the policy improvement is greedy, we can rewrite the value metrics of experience in simpler forms. From (\ref{eq:evb}), EVB can be written as \begin{equation} \label{eq:evb3} \text{EVB}(e_k) = \max_a{Q_{\text{new}}(s_k,a)} - \max_a{Q_{\text{old}}(s_k,a)}. \end{equation} Note that EVB here is different from that in \cite{Mattar2018}: in our case, EVB is derived from Q-learning, while in their case, EVB is derived from Dyna, a model-based RL algorithm \citep{sutton1990integrated}. Similarly, from (\ref{eq:evb2}), PIV can be written as \begin{equation} \label{eq:piv} \text{PIV}(e_k) = \max_a{Q_{\text{new}}(s_k,a)} - Q_{\text{new}}(s_k, a_{\text{old}}), \end{equation} where $a_{\text{old}} = \arg\max_a{Q_{\text{old}}(s_k,a)}$, and EIV can be written as \begin{equation} \label{eq:eiv} \text{EIV}(e_k) = Q_{\text{new}}(s_k,a_{\text{old}}) - Q_{\text{old}}(s_k, a_{\text{old}}). \end{equation} \subsection{A Motivating Example} We illustrate the potential gain of the value of experience in a ``Linear Grid-World" environment (Figure~\ref{fig:fig1}a). This environment contains $N$ linearly-aligned grids and 4 actions (north, south, east, west). The rewards are sparse: 1 for entering the goal state and 0 elsewhere. The optimal policy in this environment is to always choose east. We use this example to highlight the differences between prioritization strategies. Three agents perform Q-learning updates on experiences drawn from the same replay buffer, which contains all $4N$ experiences and the associated rewards.
The first agent replays the experiences uniformly at random, while the other two agents invoke the oracle to prioritize the experiences, greedily selecting the experience with the highest $|\text{TD}|$ or EVB, respectively. In order to learn the optimal policy, agents need to replay the experiences associated with action east in reverse order. For the agent with random replay, the expected number of replays required is $4N^2$ (Figure~\ref{fig:fig1}d). For the other two agents, prioritization significantly reduces the number of replays required: prioritization with $|\text{TD}|$ requires $4N$ replays, and prioritization with EVB only uses $N$ replays, which is optimal (Figure~\ref{fig:fig1}d). The main difference is that EVB only prioritizes the experiences that are associated with the optimal policy (Figure~\ref{fig:fig1}c), while $|\text{TD}|$ is sensitive to changes in the value function and will prioritize non-optimal experiences: for example, the agent may choose the experiences associated with south or north in the second update, which are not optimal but have the same $|\text{TD}|$ as the experience associated with east (Figure~\ref{fig:fig1}b). Thus, EVB, which directly quantifies the value of experience, can serve as an optimal priority. \section{Upper Bounds of Value Metrics of Experience in Q-Learning} \label{sec:3} PER \citep{Schaul2016} greatly improves the learning efficiency of DQN. However, the underlying rationale is not well understood. Here, we prove that $|\text{TD}|$ is the upper bound of the value metrics in Q-learning. \begin{figure*}[t!] \centering \includegraphics[width=.85\textwidth, clip=true, trim = 50mm 0mm 50mm 0mm]{images/Figure_2_mazeq_icml.pdf} \caption{The value metrics are upper-bounded by TD errors in Q-learning. \textbf{a-c.} $|\text{TD}|$ \textit{vs.} $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) of a tabular Q-learning agent in a grid-world maze. The red line indicates the identity line.
} \label{fig:fig2_icml} \end{figure*} \begin{theorem} \label{the:1} The three value metrics of experience $e_k$ in Q-learning ($|\text{EVB}|$, $|\text{PIV}|$ and $|\text{EIV}|$) are upper-bounded by $\alpha|\text{TD}(s_k,a_k,r_k,s_k')|$, where $\alpha$ is a step-size parameter. \end{theorem} \begin{proof} From (\ref{eq:evb3}), $|\text{EVB}|$ can be written as \begin{align}\nonumber |\text{EVB}(e_k)| &= |\max_a{Q_{\text{new}}(s_k,a)} - \max_a{Q_{\text{old}}(s_k,a)} | \\ \nonumber &\leq \max_a{|{Q_{\text{new}}(s_k,a)} - {Q_{\text{old}}(s_k,a)}|} \\ \label{eq:evb-q} &\leq \alpha|\text{TD}(s_k,a_k, r_k, s_k')|, \end{align} where the second line follows from the non-expansiveness of the max operator. Proofs for the upper bounds of $|\text{PIV}|$ and $|\text{EIV}|$ are similar and given in Appendix \ref{sec:app1}. \end{proof} In Theorem \ref{the:1}, we prove that $|\text{EVB}|$, $|\text{PIV}|$, and $|\text{EIV}|$ are upper-bounded by $|\text{TD}|$ (scaled by the learning step-size) in Q-learning. To verify the bounds experimentally, we simulated a tabular Q-learning agent in a 5 $\times$ 5 grid-world maze\footnote{All code for the experiments is available at: \url{https://github.com/AmazingAng/VER}.}. The agent needs to reach the goal zone by moving one square in any of the four directions (north, south, east, west) each time (further details are described in Appendix \ref{sec:app4}). For each transition, we record the associated TD error and value metrics. As we can see from Figure~\ref{fig:fig2_icml}, all three value metrics of experience are bounded by $|\text{TD}|$. As our theory predicts (see Appendix \ref{sec:app1} for detail), $|\text{EIV}|$ is either equal to $|\text{TD}|$ (if the action of the experience is the optimal action before the update) or 0. A large proportion of EVBs lie on the identity line, indicating that the bound is tight. Moreover, we note that a significant proportion of value metrics lies on the x-axis.
This is because the value metrics are affected by the ``on-policyness'' of the experienced actions: Q-learning learns a deterministic policy, which makes most experienced actions off-policy. As $|\text{TD}|$ intrinsically tracks the evaluation and policy improvements, it can serve as an appropriate importance metric for past experiences. \section{Extension to Maximum-Entropy RL} In this section, we extend our framework to study the relationship between $|\text{TD}|$ and value of experience in maximum-entropy RL, particularly, soft Q-learning. \subsection{Soft Q-Learning} \label{softqlearning} \begin{figure*}[t!] \centering \includegraphics[width=.83\textwidth, clip=true, trim = 50mm 20mm 50mm 20mm]{images/Figure_3_mazesoftq_icml.pdf} \caption{The value metrics and their bounds in soft Q-learning. $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) as well as their theoretical upper-bounds (\textbf{a-c.}) and lower-bounds (\textbf{d-f.}) of a tabular soft Q-learning agent in a grid-world maze. The red line indicates the identity line. } \label{fig:fig3_icml} \end{figure*} Unlike regular RL algorithms, maximum-entropy RL augments the reward with an entropy term: $r + \beta \mathcal{H}(\pi(\cdot|s))$, where $\mathcal{H}(\cdot)$ is the entropy, and $\beta$ is an optional temperature parameter that determines the relative importance of entropy and reward. The goal is to maximize the expected cumulative entropy-augmented rewards. Maximum-entropy RL algorithms have advantages in capturing multiple modes of near-optimal policies, better exploration, and better transfer between tasks. Soft Q-learning is an off-policy value-based algorithm built on maximum-entropy RL principles \citep{Haarnoja2017, schulman2017equivalence}. Different from Q-learning, the target policy of soft Q-learning is stochastic.
During policy iteration, the Q-function is updated through the soft Bellman operator $\Gamma^\text{soft}$, and the policy is updated to a maximum-entropy policy: \begin{align*} Q^\text{soft}_\text{new} (s, a) &= [\Gamma^\text{soft} Q^\text{soft}_\text{old}] (s, a) = r + \gamma V^\text{soft}_\text{old} (s') \\ \pi_\text{new}(a| s) &= \text{softmax}_a{(\frac{1}{\beta}Q^\text{soft}_\text{new} (s, a))}, \end{align*} where $\text{softmax}_i(x) = \exp(x_i)/ \sum_i{\exp(x_i)}$ is the softmax function, and the soft value function $V^\text{soft}_\pi(s)$ is defined as \begin{align*}\nonumber V^\text{soft}_\pi(s) &= \mathbb{E}_{a \sim \pi}{\{Q^\text{soft}_{\pi}(s,a) - \beta\log(\pi(a|s))\}}\\ &=\beta\log\sum_a\exp(\frac{1}{\beta}Q^\text{soft}_\pi (s, a)). \end{align*} As in Q-learning, the TD error in soft Q-learning (the soft TD error) is given by: \begin{equation*} \text{TD}^{\text{soft}}(s,a,r,s') = r + \gamma V^\text{soft}_\text{old} (s') - Q^\text{soft}_\text{old} (s, a). \end{equation*} \subsection{Value Metrics of Experience in Maximum-Entropy RL} Here, we extend the value metrics of experience to soft Q-learning.
Analogously to (\ref{eq:evb}), EVB for maximum-entropy RL is defined as \begin{equation} \label{eq:evbsoft} \resizebox{0.9\linewidth}{!}{$ \begin{split} & \text{EVB}^{\text{soft}}(e_k) \\ & = v^\text{soft}_{\text{new}}(s_k) - v^\text{soft}_{\text{old}}(s_k) \\ & = \sum_{a}{\pi_{\text{new}}(a|s_k)\{q^\text{soft}_{\text{new}}(s_k,a) -\beta\log(\pi_{\text{new}}(a|s_k))\}} \\ & \quad -\sum_{a}{\pi_{\text{old}}(a|s_k)\{q^\text{soft}_{\text{old}}(s_k,a)- \beta\log(\pi_{\text{old}}(a|s_k))\}}. \end{split} $} \end{equation} $\text{EVB}^{\text{soft}}$ can be separated into $\text{PIV}^{\text{soft}}$ and $\text{EIV}^{\text{soft}}$, which respectively quantify the value of policy and evaluation improvement in soft Q-learning, \begin{equation} \label{eq:pivsoft} \resizebox{0.9\linewidth}{!}{$ \begin{split} \text{PIV}^{\text{soft}}(e_k) &= \sum_{a}{\{\pi_{\text{new}}(a|s_k) - \pi_{\text{old}}(a|s_k)\}q^\text{soft}_{\text{new}}(s_k,a)} \\ &\qquad+\beta(\mathcal{H}(\pi_{\text{new}}(\cdot|s_k))-\mathcal{H}(\pi_{\text{old}}(\cdot|s_k))), \end{split} $} \end{equation} \begin{equation} \label{eq:eivsoft} \resizebox{0.9\linewidth}{!}{$ \text{EIV}^{\text{soft}}(e_k) = \sum_{a}{\pi_{\text{old}}(a|s_k)[q^\text{soft}_{\text{new}}(s_k,a)-q^\text{soft}_{\text{old}}(s_k,a)]}. $} \end{equation} The value metrics of experience in maximum-entropy RL have forms similar to those in regular RL, except for the entropy term: changes in the policy alter the policy entropy and hence the entropy-augmented rewards. \subsection{Lower and Upper Bounds of Value Metrics of Experience in Soft Q-learning} We theoretically derive the lower and upper bounds of the value metrics of experience in soft Q-learning.
\begin{theorem} \label{the:2} The three value metrics of experience $e_k$ in soft Q-learning ($|\text{EVB}^\text{soft}|$, $|\text{PIV}^\text{soft}|$ and $|\text{EIV}^\text{soft}|$) are upper-bounded by $\rho^\text{max}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, where $\rho^\text{max}_\pi = \max\{\pi_{\text{old}}(a_k|s_k), \pi_{\text{new}}(a_k|s_k)\} $ is a policy-related term. \end{theorem} \begin{proof} See Appendix \ref{sec:app2}. \end{proof} \begin{theorem} \label{the:3} For soft Q-learning, $|\text{EVB}^\text{soft}|$ and $|\text{EIV}^\text{soft}|$ (but not $|\text{PIV}^\text{soft}|$) are lower-bounded by $\rho^\text{min}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, where $\rho^\text{min}_\pi = \min\{\pi_{\text{old}}(a_k|s_k), \pi_{\text{new}}(a_k|s_k)\} $ is a policy-related term. \end{theorem} \begin{proof} See Appendix \ref{sec:app3}. \end{proof} \begin{figure*}[t!] \centering \setlength{\abovecaptionskip}{7pt} \includegraphics[width=.75\textwidth, clip=true, trim = 50mm 50mm 50mm 50mm]{images/Figure_4_cartpole_icml.png} \caption{Results of DQN and soft DQN in CartPole. \textbf{a-c.} $|\text{TD}|$ \textit{vs.} $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) in DQN. \textbf{d-f.} Theoretical upper bound and (\textbf{g-i.}) lower bound \textit{vs.} $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) in soft DQN. The red line indicates the identity line.} \label{fig:fig4_icml} \end{figure*} The lower and upper bounds in soft Q-learning combine a policy term with $|\text{TD}|$. The policy-related term $\rho_\pi$ quantifies the ``on-policyness'' of the experienced action. The bounds become tighter as the difference between $\pi_{\text{old}}(a_k|s_k)$ and $\pi_{\text{new}}(a_k|s_k)$ becomes smaller. Surprisingly, the coefficient of the entropy term $\beta$ impacts the bound only through the policy term, which makes the bound an appropriate priority even when $\beta$ changes during learning \citep{haarnoja2018soft}.
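These bounds can be checked numerically for a single full soft Bellman backup. A minimal tabular sketch (the temperature, discount and random Q-table are illustrative assumptions on our part):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # numerically stable softmax
    return e / e.sum()

def soft_value(q_row, beta):
    # V^soft(s) = beta * log sum_a exp(Q(s,a)/beta), computed stably
    m = q_row.max()
    return m + beta * np.log(np.exp((q_row - m) / beta).sum())

def soft_evb_with_bounds(Q, s, a, r, s_next, beta=1.0, gamma=0.9):
    """Full soft Bellman backup on e = (s, a, r, s'); returns the
    lower bound, |EVB^soft|, and upper bound from the theorems above."""
    td = r + gamma * soft_value(Q[s_next], beta) - Q[s, a]   # soft TD error
    Q_new = Q.copy()
    Q_new[s, a] += td                                        # soft Bellman backup
    pi_old, pi_new = softmax(Q[s] / beta), softmax(Q_new[s] / beta)
    evb = soft_value(Q_new[s], beta) - soft_value(Q[s], beta)
    return (min(pi_old[a], pi_new[a]) * abs(td),
            abs(evb),
            max(pi_old[a], pi_new[a]) * abs(td))
```

Because only the single entry $Q(s_k,a_k)$ changes, $|\text{EVB}^\text{soft}|$ always falls between the two policy-weighted multiples of $|\text{TD}^\text{soft}|$.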
As $0 \leq \rho^\text{max}_\pi \leq 1$, the value metrics are also upper-bounded by $|\text{TD}|$ alone, similar to Q-learning. However, as $\pi(a_k|s_k)$ is usually less than $1$, $|\text{TD}|$ is a looser upper bound in soft Q-learning. To verify the bounds in soft Q-learning experimentally, we simulated a tabular soft Q-learning agent in the grid-world maze described previously. From the upper panel of Figure~\ref{fig:fig3_icml}, all three value metrics of experience are upper-bounded by $\rho^\text{max}_\pi \cdot |\text{TD}|$. Moreover, from the lower panel of Figure~\ref{fig:fig3_icml}, $|\text{EVB}^\text{soft}|$ and $|\text{EIV}^\text{soft}|$ (but not $|\text{PIV}^\text{soft}|$) are lower-bounded by $\rho^\text{min}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, supporting our theoretical analysis (Theorems \ref{the:2} and \ref{the:3}). The proportion of non-zero values of experiences is higher in soft Q-learning than in Q-learning because, unlike the greedy policy of Q-learning, soft Q-learning learns a stochastic policy, which makes experiences more ``on-policy'' and gives them non-sparse values. In summary, the experimental results support the theoretical bounds of the value metrics in tabular soft Q-learning. \section{Extension to Function Approximation Methods} \label{sec:FA} Function approximation methods, which are more powerful and expressive than tabular methods, are effective in solving more challenging tasks, such as the game of Go \citep{silver2016mastering}, video games \citep{Mnih2015} and robotic control \citep{Haarnoja2017}.
In these methods, we learn a parameterized Q-function $Q(s,a; \theta_t)$, where the parameters are updated on experience $e_k$ through a gradient-based method, \begin{equation}\nonumber \theta_{t+1} = \theta_{t} + \alpha \text{TD} \nabla_{\theta_{t}} Q(s_k,a_k; \theta_t), \end{equation} where $\alpha$ is the learning rate, the TD error is defined as \begin{equation}\nonumber \text{TD}= Q_{\text{target}}(s_k,a_k) - Q(s_k,a_k;\theta_t), \end{equation} and the target Q-value $Q_{\text{target}}$ is defined as \begin{equation}\nonumber Q_{\text{target}}(s_k,a_k) = r_k + \gamma\max_{a'}{Q(s'_k,a'; \theta_t)}. \end{equation} As $\alpha$ in function approximation Q-learning is usually very small, for each update, the parameterized function moves toward its target by a small amount. Our framework can be extended to function approximation methods by slightly modifying the definition of the value metrics of experience. Note that if we apply the original definition of EVB in (\ref{eq:evb3}) directly to function approximation methods, the Q-function after the update $Q_{\text{new}}(s,a)= Q(s,a;\theta_{t+1})$ involves a gradient-based update, which complicates the analysis and breaks the inequalities derived in the tabular case. As a remedy, we replace $Q(s,a; \theta_{t+1})$ by the target Q-value $Q_{\text{target}}(s,a)$ in the value metrics of experience (\ref{eq:evb}-\ref{eq:eiv}) and (\ref{eq:evbsoft}-\ref{eq:eivsoft}). The intuition is simple: the value is defined by the cause of the update (the target Q-value), not by the result of the gradient-based update. Moreover, this modification allows our theory to apply to all function approximation methods, regardless of the specific form of the function approximator (a linear function or a neural network). After these modifications, the value metrics of experience have a form similar to the tabular case, and all theorems derived in the tabular case can be applied to function approximation methods.
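With the target-value substitution, a sketch of the metrics for a parameterized Q-function might look as follows (the Q-value vectors stand in for network outputs; the shapes and names are our own illustration):

```python
import numpy as np

def fa_value_metrics(q_s, q_s_next, a, r, gamma=0.99):
    """Value metrics for function-approximation Q-learning, with the
    post-update Q-function replaced by the target Q-value at (s, a).
    q_s, q_s_next: Q(s, .) and Q(s', .) under the current parameters."""
    q_target = r + gamma * q_s_next.max()   # target Q-value for the taken action
    td = q_target - q_s[a]
    q_new = q_s.copy()
    q_new[a] = q_target                     # substitute the target value at (s, a)
    a_old = q_s.argmax()
    evb = q_new.max() - q_s.max()
    piv = q_new.max() - q_new[a_old]
    eiv = q_new[a_old] - q_s[a_old]
    return td, evb, piv, eiv
```

With this substitution, the tabular bounds hold with an effective step size of one, i.e., each metric is bounded in magnitude by $|\text{TD}|$.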
To test whether our theoretical predictions hold in function approximation methods, we simulated one DQN (deep Q-network) agent and one soft DQN (DQN with soft update) agent in the CartPole environment, where the goal is to keep the pole balanced by moving the cart forward and backward (further details are described in Appendix~\ref{sec:app4}). From Figure~\ref{fig:fig4_icml}, all value metrics of experience in DQN (Figure~\ref{fig:fig4_icml}a-c) and soft DQN (Figure~\ref{fig:fig4_icml}d-f) are bounded by the theoretical upper bounds. For DQN, $|\text{EVB}|$ and $|\text{PIV}|$ are uniformly distributed in the bounded area, while $|\text{EIV}|$ is equal to $|\text{TD}|$ or $0$. Results are different in soft DQN, where $|\text{EVB}|$ and $|\text{EIV}|$ are distributed more closely towards the theoretical upper bounds, suggesting that the upper bound in soft Q-learning is tighter. Moreover, Figure~\ref{fig:fig4_icml}g-i shows that $|\text{EVB}|$ and $|\text{EIV}|$ are lower-bounded by $\rho^\text{min}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, while $|\text{PIV}|$ is not. The experimental results confirm that the bounds of the value metrics hold for function approximation methods. \section{Experiments on Atari Games} \begin{figure}[t!] \centering \includegraphics[width=.85\columnwidth, clip=true, trim = 10mm 20mm 10mm 25mm]{images/supp_fig_barh2.pdf} \caption{Illustration of the difference between $|\text{TD}|$ and the theoretical upper bound for the value metrics in soft Q-learning. Depicted are the theoretical upper bound (left), $|\text{TD}|$ (middle), and the policy term (right) of 50 experiences from the replay buffer in the grid-world maze (upper panel) and CartPole (lower panel), ordered by the theoretical upper bound.} \label{fig:supp2} \vspace*{-0.2cm} \end{figure} \begin{figure*}[t!]
\centering \includegraphics[width=0.75\textwidth]{images/Figure_4_atariv2_ef.pdf} \caption{Learning curves of soft DQN (blue lines), and soft DQN with prioritized experience replay in terms of the soft TD error (PER, orange lines) and the theoretical upper bound of the value metrics of experience (VER, green lines) on Atari games. Solid lines show the average return over 8 evaluation runs, and shaded areas show the standard error of the mean.} \label{fig:fig4} \end{figure*} The theoretical upper bound ($\rho^\text{max}_\pi \cdot |\text{TD}|$) of the value metrics balances the prediction error and the ``on-policyness'' of the experience. To better illustrate the difference between the theoretical upper bound and $|\text{TD}|$, we randomly drew 50 experiences from the grid-world maze and CartPole experiments. From Figure~\ref{fig:supp2}, we can see that the experiences with the highest theoretical upper bounds are associated with higher $|\text{TD}|$ and ``on-policyness''. To investigate whether the theoretical upper bound can serve as an appropriate priority for experience replay in soft Q-learning, we compare the performance of soft DQN with different prioritization strategies: uniform replay, and prioritization with $|\text{TD}|$ or the theoretical upper bound ($\rho^\text{max}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$), denoted by soft DQN, PER and VER (valuable experience replay), respectively. This set of experiments consists of 9 Atari 2600 games selected according to \cite{schulman2017equivalence}, balancing generality of the games against limited compute power. We closely follow the experimental setting and network architecture outlined by \cite{Mnih2015}. For each game, the network is trained on a single GPU for 40M frames, or approximately 5 days. More details on the settings and hyperparameters are available in Appendix \ref{sec:app4}.
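Concretely, the VER priority is $\rho^\text{max}_\pi \cdot |\text{TD}^{\text{soft}}|$. One way it could be computed and combined with PER-style proportional sampling is sketched below (the function names, priority exponent and target-value substitution are our own illustration, not the released implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ver_priority(q_s, q_s_next, a, r, beta=1.0, gamma=0.99):
    """rho_max * |soft TD|: the theoretical upper bound used as a priority."""
    m = q_s_next.max()
    v_next = m + beta * np.log(np.exp((q_s_next - m) / beta).sum())  # soft value
    td = r + gamma * v_next - q_s[a]
    q_new = q_s.copy()
    q_new[a] = r + gamma * v_next           # target value substituted at (s, a)
    rho_max = max(softmax(q_s / beta)[a], softmax(q_new / beta)[a])
    return rho_max * abs(td)

def sample_batch(priorities, batch_size, alpha=0.6, rng=np.random.default_rng(0)):
    """PER-style proportional sampling over stored experiences."""
    p = np.asarray(priorities) ** alpha
    p = p / p.sum()
    return rng.choice(len(p), size=batch_size, p=p)
```

Experiences with large priorities are then replayed more often, exactly as in PER but with the policy-weighted bound in place of $|\text{TD}|$.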
Figure~\ref{fig:fig4} shows that soft DQN prioritized by $|\text{TD}|$ or the theoretical upper bound substantially outperforms uniform replay in most of the games. On average, soft DQN with PER or VER outperforms vanilla soft DQN by 11.8\% or 18.0\%, respectively. Moreover, VER converges faster and outperforms PER in most of the games (by 8.47\% on average), which suggests that a tighter upper bound on the value metrics improves the performance of experience replay. These results suggest that the theoretical upper bound can serve as an appropriate priority for experience replay in soft Q-learning. \section{Discussion} In this work, we formulate a framework to study the relationship between the importance of experience and $|\text{TD}|$. To quantify the importance of experience, we derive three value metrics of experience: the expected value of backup, the evaluation improvement value, and the policy improvement value. For Q-learning, we theoretically show that these value metrics are upper-bounded by $|\text{TD}|$. Thus, $|\text{TD}|$ implicitly tracks the value of the experience, which accounts for the high sample efficiency of PER. Furthermore, we extend our framework to maximum-entropy RL, by showing that these value metrics are lower- and upper-bounded by the product of a policy term and $|\text{TD}|$. Experiments in a grid-world maze and CartPole support our theoretical results for both tabular and function approximation RL methods, showing that the derived bounds hold in practice. Moreover, we show that experience replay using the upper bound as a priority improves maximum-entropy RL (\textit{i.e.}, soft DQN) in Atari games. By linking $|\text{TD}|$ and the value of experience, two important quantities in learning, our study has the following implications.
First, from a machine learning perspective, our study provides a framework to derive appropriate priorities of experience for different algorithms, with possible extension to batch RL \citep{fu2020d4rl} and N-step learning \citep{hessel2017rainbow}. Second, for neuroscience, our work provides insight into how the brain might encode the importance of experience. Since $|\text{TD}|$ biologically corresponds to the reward prediction-error signal in the dopaminergic system \citep{schultz1997neural, glimcher2011understanding} and implicitly tracks the value of the experience, the brain may rely on it to differentiate important experiences. \section{Introduction} Learning from important experiences prevails in nature. In the rodent hippocampus, memories with higher importance, such as those associated with rewarding locations or large reward-prediction errors, are replayed more frequently \citep{Michon2019, roscow2019behavioural, salvetti2014role}. Psychophysical experiments showed that participants with more frequent replay of high-reward-associated memories perform better in memory tasks \citep{Gruber2016, schapiro2018human}. As accumulating new experiences is costly, utilizing valuable past experiences is key to efficient learning \citep{olafsdottir2018role}. Differentiating important experiences from unimportant ones also benefits reinforcement learning (RL) algorithms \citep{katharopoulos2018not}. Prioritized experience replay (PER) \citep{Schaul2016} is an experience replay technique built on deep Q-network (DQN) \citep{Mnih2015}, which weighs the importance of samples by the magnitude of their temporal-difference error ($|\text{TD}|$). As a result, experiences with larger $|\text{TD}|$ are sampled more frequently. PER significantly improves the learning efficiency of DQN, and has been adopted \citep{Hessel2018,Horgan2018, Kapturowski2019} and extended \citep{daley2019reconciling, pan2018organizing, schlegel2019importance} by various deep RL algorithms.
$|\text{TD}|$ quantifies the unexpectedness of an experience to a learning agent, and biologically corresponds to the signal of reward prediction error in the dopamine system \citep{schultz1997neural, glimcher2011understanding}. However, how $|\text{TD}|$ is related to the importance of experience in the context of RL is not well understood. We address this problem from an economic perspective, by linking $|\text{TD}|$ to the \textit{value of experience} in RL. Recently, in the neuroscience field, a normative theory of memory access based on the Dyna framework \citep{sutton1990integrated} suggests that a rational agent should replay the experiences that lead to the most rewarding future decisions \citep{Mattar2018}. Follow-up research shows that optimizing the replay strategy according to the normative theory has an advantage over prioritized experience replay with $|\text{TD}|$ \citep{zha2019experience}. Inspired by \cite{Mattar2018}, we define the value of experience as the increase in the expected cumulative reward resulting from updating on the experience. The value of experience quantifies the importance of experience from first principles: assuming that the agent is economically rational and has full information about the value of experience, it will choose the most valuable experience to update on, which leads to the most rewarding future decisions. As complements, we derive two further value metrics, corresponding to the evaluation improvement value and policy improvement value due to an update on an experience. In this work, we mathematically show that these value metrics are upper-bounded by $|\text{TD}|$ for Q-learning. Therefore, $|\text{TD}|$ implicitly tracks the value of experience, and accounts for the importance of experience. We further extend our framework to maximum-entropy RL, which augments the reward with an entropy term to encourage exploration \citep{Haarnoja2017}.
We derive the lower and upper bounds of these value metrics for soft Q-learning, which are related to $|\text{TD}|$ and the ``on-policyness'' of the experience. Experiments in a grid-world maze and CartPole support our theoretical results for both tabular and function approximation RL methods, showing that the derived bounds hold in practice. Moreover, we show that experience replay using the upper bound as a priority improves maximum-entropy RL (\textit{i.e.}, soft DQN) in Atari games. \section{Motivation} \subsection{Q-learning and Experience Replay} \label{qlearning} We consider a Markov Decision Process (MDP) defined by a tuple $\{\mathcal{S},\mathcal{A},\mathcal{P}, \mathcal{R}, \gamma\}$, where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, $\mathcal{P}$ is the transition function, $\mathcal{R}$ is the reward function, and $\gamma \in [0, 1]$ is the discount factor. A policy $\pi$ of an agent assigns probability $\pi(a|s)$ to each action $a \in \mathcal{A}$ given state $s \in \mathcal{S}$. The goal is to learn an optimal policy that maximizes the expected discounted return starting from time step $t$, $G_t = \sum_{i = 0}^{\infty}\gamma^i r_{t+i}$, where $r_t$ is the reward the agent receives at time step $t$. The value function $v_\pi(s)$ is defined as the expected return starting from state $s$ following policy $\pi$, and the Q-function $q_\pi(s, a)$ is the expected return on performing action $a$ in state $s$ and subsequently following policy $\pi$. According to Q-learning \citep{Watkins1992}, the optimal policy can be learned through policy iteration: performing policy evaluation and policy improvement alternately and iteratively.
For each policy evaluation, we update $Q(s,a)$, an estimate of $q_\pi(s, a)$, by \begin{equation}\nonumber Q_\text{new}(s, a) = Q_\text{old}(s, a) + \alpha \text{TD}(s, a, r, s'), \end{equation} where the TD error $\text{TD}(s, a, r, s') = r + \gamma \max_{a'} Q_\text{old}(s', a') - Q_\text{old}(s, a)$ and $\alpha$ is the step-size parameter. $Q_\text{old}$ and $Q_\text{new}$ denote the estimated Q-function before and after the update, respectively. For each policy improvement, we update the policy from $\pi_{\text{old}}$ to $\pi_{\text{new}}$ according to the newly estimated Q-function, \begin{equation}\nonumber \pi_{\text{new}}(s) = \mathop{\argmax}_{a} Q_\text{new}(s,a). \end{equation} Standard Q-learning only uses each experience once before it is discarded, which is sample inefficient and can be improved by the \textit{experience replay} technique \citep{lin1992self}. We denote the experience that the agent collected at time $k$ by a tuple $e_k = \{s_k, a_k, r_k, s_{k}' \}$. According to experience replay, the experience $e_k$ is stored into the replay buffer and can be accessed multiple times during learning. \subsection{Value Metrics of Experience} To quantify the importance of experience, we derive three value metrics of experience. The utility of an update on experience $e_k$ is defined as the value added to the cumulative discounted rewards starting from state $s_k$, after updating on $e_k$. Intuitively, choosing the most valuable experience for update will yield the highest utility to the agent.
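The Q-learning update rule and replay scheme described above can be sketched as follows (the toy transitions and hyperparameters are purely illustrative):

```python
import numpy as np

def q_update(Q, e, alpha=0.1, gamma=0.9):
    """One Q-learning backup on a stored experience e = (s, a, r, s')."""
    s, a, r, s_next = e
    td = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td
    return td

# Experience replay: transitions are stored once and revisited many times.
buffer = [(0, 1, 0.0, 1), (1, 1, 1.0, 2)]   # two toy transitions
Q = np.zeros((3, 2))
for _ in range(10):                          # multiple passes over the buffer
    for e in buffer:
        q_update(Q, e)
```

Revisiting stored transitions lets the reward information propagate backward through the Q-table, which is exactly what makes the choice of replay order (and hence prioritization) matter.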
We denote such utility as the expected value of backup $\text{EVB}(e_k)$ \citep{Mattar2018}, \begin{align} \text{EVB}(e_k) &= v_{\pi_{\text{new}}}(s_k) - v_{\pi_{\text{old}}}(s_k) \nonumber \\ &= \sum_{a}{\pi_{\text{new}}(a|s_k)q_{\pi_{\text{new}}}(s_k,a)} \nonumber \\ & \qquad\qquad-\sum_{a} {\pi_{\text{old}}(a|s_k)q_{\pi_{\text{old}}}(s_k,a)}, \label{eq:evb} \end{align} where $\pi_{\text{old}}$, $v_{\pi_{\text{old}}}$ and $q_{\pi_{\text{old}}}$ are respectively the policy, value function and Q-function before the update, and $\pi_{\text{new}}$, $v_{\pi_{\text{new}}}$, and $q_{\pi_{\text{new}}}$ are those after. As the update on experience $e_k$ consists of policy evaluation and policy improvement, the value of experience can further be separated to evaluation improvement value $\text{EIV}(e_k)$ and policy improvement value $\text{PIV}(e_k)$ by rewriting (\ref{eq:evb}): \begin{multline}\label{eq:evb2} \text{EVB}(e_k) = \underbrace{\sum_{a}{[\pi_{\text{new}}(a|s_k) - \pi_{\text{old}}(a|s_k)]q_{\pi_{\text{new}}}(s_k,a)}}_{\text{PIV}(e_k)}+ \\ \underbrace{\sum_{a}{\pi_{\text{old}}(a|s_k)[q_{\pi_{\text{new}}}(s_k,a)-q_{\pi_{\text{old}}}(s_k,a)]}}_{\text{EIV}(e_k)}, \end{multline} where $\text{PIV}(e_k)$ measures the value improvements due to the change of the policy, and $\text{EIV}(e_k)$ captures those due to the change of evaluation. Thus, we have three metrics for the value of experience: $\text{EVB}$, $\text{PIV}$ and $\text{EIV}$. \begin{figure*}[t!] \centering \includegraphics[width=.95\textwidth, clip=true, trim = 0mm 8mm 0mm 8mm]{images/motivatingexample_icml.pdf} \caption{\textbf{a.} Illustration of the ``Linear Grid-World" example: there are $N$ grids and 4 actions (north, south, east, west). Reward for entering the goal state (cheese) is 1; reward is 0 elsewhere. \textbf{b-c.} Examples of prioritized experience replay by $|\text{TD}|$ and value of experience (EVB). 
The main difference is that EVB only prioritizes the experiences that are associated with the optimal policy; while $|\text{TD}|$ is sensitive to changes in value function and will prioritize non-optimal experiences, such as those associated with north or south. Here squares represent states, triangles represent actions, and experiences associated with the highest priority are highlighted. \textbf{d.} Expected number of replays needed to learn the optimal policy, as the number of grids changes: uniform replay (blue), prioritized by $|\text{TD}|$ (orange), and EVB (green).} \label{fig:fig1} \end{figure*} \subsection{Value Metrics of Experience in Q-Learning} For Q-learning, we use Q-function to estimate the true action-value function. A backup over an experience $e_k$ consists of policy evaluation with Bellman operator and greedy policy improvement. As the policy improvement is greedy, we can rewrite value metrics of experience to simpler forms. EVB can be written as follows from (\ref{eq:evb}), \begin{equation} \label{eq:evb3} \text{EVB}(e_k) = \max_a{Q_{\text{new}}(s_k,a)} - \max_a{Q_{\text{old}}(s_k,a)}. \end{equation} Note that EVB here is different from that in \cite{Mattar2018}: in our case, EVB is derived from Q-learning; while in their case, EVB is derived from Dyna, a model-based RL algorithm \citep{sutton1990integrated}. Similarly, from (\ref{eq:evb2}), PIV can be written as \begin{equation} \label{eq:piv} \text{PIV}(e_k) = \max_a{Q_{\text{new}}(s_k,a)} - Q_{\text{new}}(s_k, a_{\text{old}}), \end{equation} where $a_{\text{old}} = \arg\max_a{Q_{\text{old}}(s_k,a)}$, and EIV can be written as \begin{equation} \label{eq:eiv} \text{EIV}(e_k) = Q_{\text{new}}(s_k,a_{\text{old}}) - Q_{\text{old}}(s_k, a_{\text{old}}). \end{equation} \subsection{A Motivating Example} We illustrate the potential gain of value of experience in a ``Linear Grid-World" environment (Figure~\ref{fig:fig1}a). 
This environment contains $N$ linearly-aligned grids and 4 actions (north, south, east, west). The rewards are rare: 1 for entering the goal state and 0 elsewhere. The solution for this environment is always choosing east. We use this example to highlight the difference between prioritization strategies. Three agents perform Q-learning updates on the experiences drawn from the same replay buffer, which contains all the ($4N$) experiences and associated rewards. The first agent replays the experiences uniformly at random, while the other two agents invoke the oracle to prioritize the experiences, which greedily select the experience with the highest $|\text{TD}|$ or EVB respectively. In order to learn the optimal policy, agents need to replay the experiences associated with action east in a reverse order. For the agent with random replay, the expected number of replays required is $4N^2$ (Figure~\ref{fig:fig1}d). For the other two agents, prioritization significantly reduces the number of replays required: prioritization with $|\text{TD}|$ requires $4N$ replays, and prioritization with EVB only uses $N$ replays, which is optimal (Figure~\ref{fig:fig1}d). The main difference is that EVB only prioritizes the experiences that are associated with the optimal policy (Figure~\ref{fig:fig1}c), while $|\text{TD}|$ is sensitive to changes in the value function and will prioritize non-optimal experiences: for example, the agent may choose the experiences associated with south or north in the second update, which are not optimal but have the same $|\text{TD}|$ as the experience associated with east (Figure~\ref{fig:fig1}b). Thus, EVB that directly quantifies the value of experience can serve as an optimal priority. \section{Upper Bounds of Value Metrics of Experience in Q-Learning} \label{sec:3} PER \citep{Schaul2016} greatly improves the learning efficiency of DQN. However, the underlying rationale is not well understood. 
Here, we prove that $|\text{TD}|$ is the upper bound of the value metrics in Q-learning. \begin{figure*}[t!] \centering \includegraphics[width=.85\textwidth, clip=true, trim = 50mm 0mm 50mm 0mm]{images/Figure_2_mazeq_icml.pdf} \caption{The value metrics are upper-bounded by TD errors in Q-learning. \textbf{a-c.} $|\text{TD}|$ \textit{vs.} $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) of a tabular Q-learning agent in a grid-world maze. The red line indicates the identity line. } \label{fig:fig2_icml} \end{figure*} \begin{theorem} \label{the:1} The three value metrics of experience $e_k$ in Q-learning ($|\text{EVB}|$, $|\text{PIV}|$ and $|\text{EIV}|$) are upper-bounded by $\alpha|\text{TD}(s_k,a_k,r_k,s_k')|$, where $\alpha$ is a step-size parameter. \end{theorem} \begin{proof} From (\ref{eq:evb3}), $|\text{EVB}|$ can be written as \begin{align}\nonumber |\text{EVB}(e_k)| &= |\max_a{Q_{\text{new}}(s_k,a)} - \max_a{Q_{\text{old}}(s_k,a)} | \\ \nonumber &\leq \max_a{|{Q_{\text{new}}(s_k,a)} - {Q_{\text{old}}(s_k,a)}|} \\ \label{eq:evb-q} &\leq \alpha|\text{TD}(s_k,a_k, r_k, s_k')|, \end{align} where the second line follows from the contraction property of the max operator. Proofs for the upper bounds of $|\text{PIV}|$ and $|\text{EIV}|$ are similar and given in Appendix \ref{sec:app1}. \end{proof} In Theorem \ref{the:1}, we prove that $|\text{EVB}|$, $|\text{PIV}|$, and $|\text{EIV}|$ are upper-bounded by $|\text{TD}|$ (scaled by the learning step-size) in Q-learning. To verify the bounds experimentally, we simulated a tabular Q-learning agent in a 5 $\times$ 5 grid-world maze\footnote{All the code for the experiments is available at: \url{https://github.com/AmazingAng/VER}.}. The agent needs to reach the goal zone by moving one square in any of the four directions (north, south, east, west) at each step (further details are described in Appendix \ref{sec:app4}). For each transition, we record the associated TD error and value metrics.
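Separately from the maze experiment, Theorem~\ref{the:1} can be sanity-checked numerically on random Q-tables; the table dimensions, step size, and random seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 0.1, 0.9

for _ in range(1000):
    q = rng.normal(size=(5, 4))                 # random Q-table: 5 states, 4 actions
    s, a = rng.integers(5), rng.integers(4)
    s_next, r = rng.integers(5), rng.normal()
    td = r + gamma * q[s_next].max() - q[s, a]  # TD error of the sampled experience
    q_new = q.copy()
    q_new[s, a] += alpha * td                   # tabular Q-learning backup
    a_old = q[s].argmax()
    evb = q_new[s].max() - q[s].max()
    piv = q_new[s].max() - q_new[s, a_old]
    eiv = q_new[s, a_old] - q[s, a_old]
    # Theorem 1: all three metrics are bounded by alpha * |TD|.
    assert max(abs(evb), abs(piv), abs(eiv)) <= alpha * abs(td) + 1e-12
```

Every random backup satisfies the bound, consistent with the contraction argument in the proof.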
As we can see from Figure~\ref{fig:fig2_icml}, all three value metrics of experience are bounded by $|\text{TD}|$. As our theory predicts (see Appendix \ref{sec:app1} for details), $|\text{EIV}|$ is either equal to $|\text{TD}|$ (if the action of the experience is the optimal action before the update) or 0. A large proportion of EVBs lies on the identity line, indicating that the bound is tight. Moreover, we note that a significant proportion of the value metrics lies on the x-axis, because the value metrics are affected by the ``on-policyness'' of the experienced actions, and Q-learning learns a deterministic policy that makes most experienced actions off-policy. As $|\text{TD}|$ intrinsically tracks the evaluation and policy improvements, it can serve as an appropriate importance metric for past experiences. \section{Extension to Maximum-Entropy RL} In this section, we extend our framework to study the relationship between $|\text{TD}|$ and the value of experience in maximum-entropy RL, particularly soft Q-learning. \subsection{Soft Q-Learning} \label{softqlearning} \begin{figure*}[t!] \centering \includegraphics[width=.83\textwidth, clip=true, trim = 50mm 20mm 50mm 20mm]{images/Figure_3_mazesoftq_icml.pdf} \caption{The value metrics and their bounds in soft Q-learning. $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right), as well as their theoretical lower bounds \textbf{a-c.} and upper bounds \textbf{d-f.}, of a tabular soft Q-learning agent in a grid-world maze. The red line indicates the identity line. } \label{fig:fig3_icml} \end{figure*} Unlike regular RL algorithms, maximum-entropy RL augments the reward with an entropy term: $r + \beta \mathcal{H}(\pi(\cdot|s))$, where $\mathcal{H}(\cdot)$ is the entropy, and $\beta$ is an optional temperature parameter that determines the relative importance of entropy and reward. The goal is to maximize the expected cumulative entropy-augmented rewards.
Maximum-entropy RL algorithms have advantages in capturing multiple modes of near-optimal policies, better exploration, and better transfer between tasks. Soft Q-learning is an off-policy value-based algorithm built on maximum-entropy RL principles \citep{Haarnoja2017, schulman2017equivalence}. Different from Q-learning, the target policy of soft Q-learning is stochastic. During policy iteration, the Q-function is updated through the soft Bellman operator $\Gamma^\text{soft}$, and the policy is updated to a maximum-entropy policy: \begin{align*} Q^\text{soft}_\text{new} (s, a) &= [\Gamma^\text{soft} Q^\text{soft}_\text{old}] (s, a) = r + \gamma V^\text{soft}_\text{old} (s') \\ \pi_\text{new}(a| s) &= \text{softmax}_a{(\frac{1}{\beta}Q^\text{soft}_\text{new} (s, a))}, \end{align*} where $\text{softmax}_i(x) = \exp(x_i)/ \sum_i{\exp(x_i)}$ is the softmax function, and the soft value function $V^\text{soft}_\pi(s)$ is defined as \begin{align*}\nonumber V^\text{soft}_\pi(s) &= \mathbb{E}_{a}{\{Q^\text{soft}_{\pi}(s,a) - \beta\log(\pi(a|s))\}}\\ &=\beta\log\sum_a\exp(\frac{1}{\beta}Q^\text{soft}_\pi (s, a)). \end{align*} As in Q-learning, the TD error in soft Q-learning (the soft TD error) is given by: \begin{equation*} \text{TD}^{\text{soft}}(s,a,r,s') = r + \gamma V^\text{soft}_\text{old} (s') - Q^\text{soft}_\text{old} (s, a). \end{equation*} \subsection{Value Metrics of Experience in Maximum-Entropy RL} Here, we extend the value metrics of experience to soft Q-learning.
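The soft value function, softmax policy, and soft TD error defined in the previous subsection can be sketched as follows; the tabular shapes are illustrative assumptions, and the log-sum-exp is computed in the numerically stable way:

```python
import numpy as np

def soft_value(q_s, beta=1.0):
    """V^soft(s) = beta * log sum_a exp(Q(s,a) / beta), computed stably."""
    z = q_s / beta
    m = z.max()
    return beta * (m + np.log(np.exp(z - m).sum()))

def softmax_policy(q_s, beta=1.0):
    """pi(a|s) = softmax_a(Q(s,a) / beta)."""
    e = np.exp(q_s / beta - (q_s / beta).max())
    return e / e.sum()

def soft_td(q, s, a, r, s_next, beta=1.0, gamma=0.9):
    """TD^soft = r + gamma * V^soft_old(s') - Q^soft_old(s, a)."""
    return r + gamma * soft_value(q[s_next], beta) - q[s, a]

q = np.zeros((2, 3))       # 2 states, 3 actions, all-zero Q-table (illustrative)
pi = softmax_policy(q[0])  # uniform over the 3 actions
v = soft_value(q[0])       # equals log(3) for beta = 1
# Consistency with the expectation form: V^soft = E_pi[Q - beta * log(pi)].
assert abs(v - (pi * (q[0] - np.log(pi))).sum()) < 1e-9
```

For the all-zero Q-table, the softmax policy is uniform and the two equivalent definitions of the soft value function agree.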
Analogously to (\ref{eq:evb}), EVB for maximum-entropy RL is defined as \begin{equation} \label{eq:evbsoft} \resizebox{0.9\linewidth}{!}{$ \begin{split} & \text{EVB}^{\text{soft}}(e_k) \\ & = v^\text{soft}_{\text{new}}(s_k) - v^\text{soft}_{\text{old}}(s_k) \\ & = \sum_{a}{\pi_{\text{new}}(a|s_k)\{q^\text{soft}_{\text{new}}(s_k,a) -\beta\log(\pi_{\text{new}}(a|s_k))\}} \\ & \quad -\sum_{a}{\pi_{\text{old}}(a|s_k)\{q^\text{soft}_{\text{old}}(s_k,a)- \beta\log(\pi_{\text{old}}(a|s_k))\}}. \end{split} $} \end{equation} $\text{EVB}^{\text{soft}}$ can be separated into $\text{PIV}^{\text{soft}}$ and $\text{EIV}^{\text{soft}}$, which respectively quantify the value of policy and evaluation improvement in soft Q-learning, \begin{equation} \label{eq:pivsoft} \resizebox{0.9\linewidth}{!}{$ \begin{split} \text{PIV}^{\text{soft}}(e_k) &= \sum_{a}{\{\pi_{\text{new}}(a|s_k) - \pi_{\text{old}}(a|s_k)\}q^\text{soft}_{\text{new}}(s_k,a)} \\ &\qquad+\beta(\mathcal{H}(\pi_{\text{new}}(\cdot|s_k))-\mathcal{H}(\pi_{\text{old}}(\cdot|s_k))), \end{split} $} \end{equation} \begin{equation} \label{eq:eivsoft} \resizebox{0.9\linewidth}{!}{$ \text{EIV}^{\text{soft}}(e_k) = \sum_{a}{\pi_{\text{old}}(a|s_k)[q^\text{soft}_{\text{new}}(s_k,a)-q^\text{soft}_{\text{old}}(s_k,a)]}. $} \end{equation} The value metrics of experience in maximum-entropy RL have forms similar to those in regular RL except for the entropy term, because changes in the policy alter the policy entropy and thus affect the entropy-augmented rewards. \subsection{Lower and Upper Bounds of Value Metrics of Experience in Soft Q-learning} We theoretically derive the lower and upper bounds of the value metrics of experience in soft Q-learning.
\begin{theorem} \label{the:2} The three value metrics of experience $e_k$ in soft Q-learning ($|\text{EVB}^\text{soft}|$, $|\text{PIV}^\text{soft}|$ and $|\text{EIV}^\text{soft}|$) are upper-bounded by $\rho^\text{max}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, where $\rho^\text{max}_\pi = \max\{\pi_{\text{old}}(a_k|s_k), \pi_{\text{new}}(a_k|s_k)\} $ is a policy-related term. \end{theorem} \begin{proof} See Appendix \ref{sec:app2}. \end{proof} \begin{theorem} \label{the:3} For soft Q-learning, $|\text{EVB}^\text{soft}|$ and $|\text{EIV}^\text{soft}|$ (but not $|\text{PIV}^\text{soft}|$) are lower-bounded by $\rho^\text{min}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, where $\rho^\text{min}_\pi = \min\{\pi_{\text{old}}(a_k|s_k), \pi_{\text{new}}(a_k|s_k)\} $ is a policy-related term. \end{theorem} \begin{proof} See Appendix \ref{sec:app3}. \end{proof} \begin{figure*}[t!] \centering \setlength{\abovecaptionskip}{7pt} \includegraphics[width=.75\textwidth, clip=true, trim = 50mm 50mm 50mm 50mm]{images/Figure_4_cartpole_icml.png} \caption{Results of DQN and soft DQN in CartPole. \textbf{a-c.} $|\text{TD}|$ \textit{vs.} $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) in DQN. \textbf{d-f.} Theoretical upper bound and (\textbf{g-i.}) lower bound \textit{vs.} $|\text{EVB}|$ (left), $|\text{EIV}|$ (middle) and $|\text{PIV}|$ (right) in soft DQN. The red line indicates the identity line.} \label{fig:fig4_icml} \end{figure*} The lower and upper bounds in soft Q-learning combine a policy term with $|\text{TD}|$. The policy-related term $\rho_\pi$ quantifies the ``on-policyness'' of the experienced action, and the bounds become tighter as the difference between $\pi_{\text{old}}(a_k|s_k)$ and $\pi_{\text{new}}(a_k|s_k)$ becomes smaller. Surprisingly, the coefficient of the entropy term $\beta$ impacts the bounds only through the policy term, which makes the upper bound an excellent priority even when $\beta$ changes during learning \citep{haarnoja2018soft}.
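Theorems~\ref{the:2} and \ref{the:3} can likewise be checked numerically on random soft Q-tables. The setup below is illustrative: the soft Bellman backup is applied exactly at the experienced entry (no step size), and dimensions, temperature, and seed are arbitrary:

```python
import numpy as np

def lse(q_s, beta):       # beta * log-sum-exp, i.e., the soft value V^soft
    z = q_s / beta
    m = z.max()
    return beta * (m + np.log(np.exp(z - m).sum()))

def softmax(q_s, beta):
    e = np.exp(q_s / beta - (q_s / beta).max())
    return e / e.sum()

rng = np.random.default_rng(1)
beta, gamma = 0.5, 0.9
for _ in range(1000):
    q = rng.normal(size=(5, 4))
    s, a = rng.integers(5), rng.integers(4)
    s_next, r = rng.integers(5), rng.normal()
    td = r + gamma * lse(q[s_next], beta) - q[s, a]   # soft TD error
    q_new = q.copy()
    q_new[s, a] += td                                 # soft Bellman backup at (s, a)
    evb = lse(q_new[s], beta) - lse(q[s], beta)
    eiv = softmax(q[s], beta) @ (q_new[s] - q[s])
    piv = evb - eiv
    rho = (softmax(q[s], beta)[a], softmax(q_new[s], beta)[a])
    hi, lo = max(rho) * abs(td), min(rho) * abs(td)
    assert max(abs(evb), abs(piv), abs(eiv)) <= hi + 1e-9   # Theorem 2
    assert min(abs(evb), abs(eiv)) >= lo - 1e-9             # Theorem 3
```

Here $|\text{EIV}^\text{soft}|$ equals $\pi_{\text{old}}(a_k|s_k)\,|\text{TD}^{\text{soft}}|$ exactly, since only the experienced entry changes, so it always sits between the two bounds.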
As $0 \leq \rho^\text{max}_\pi \leq 1$, the value metrics are also upper-bounded by $|\text{TD}|$ alone, similar to Q-learning. However, as $\pi(a_k|s_k)$ is usually less than $1$, $|\text{TD}|$ is a looser upper bound in soft Q-learning. To verify the bounds in soft Q-learning experimentally, we simulated a tabular soft Q-learning agent in the grid-world maze described previously. From the upper panel of Figure~\ref{fig:fig3_icml}, all three value metrics of experience are upper-bounded by $\rho^\text{max}_\pi \cdot |\text{TD}|$. Moreover, from the lower panel of Figure~\ref{fig:fig3_icml}, $|\text{EVB}^\text{soft}|$ and $|\text{EIV}^\text{soft}|$ (but not $|\text{PIV}^\text{soft}|$) are lower-bounded by $\rho^\text{min}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, supporting our theoretical analysis (Theorems \ref{the:2} and \ref{the:3}). The proportion of non-zero values of experiences is higher in soft Q-learning than in Q-learning because, unlike the greedy policy of Q-learning, soft Q-learning learns a stochastic policy, which makes experiences more ``on-policy'' and gives them non-sparse values. In summary, the experimental results support the theoretical bounds of the value metrics in tabular soft Q-learning. \section{Extension to Function Approximation Methods} \label{sec:FA} Function approximation methods, which are more powerful and expressive than tabular methods, are effective in solving more challenging tasks, such as the game of Go \citep{silver2016mastering}, video games \citep{Mnih2015} and robotic control \citep{Haarnoja2017}.
In these methods, we learn a parameterized Q-function $Q(s,a; \theta_t)$, where the parameters are updated on experience $e_k$ through a gradient-based method, \begin{equation}\nonumber \theta_{t+1} = \theta_{t} + \alpha \text{TD} \nabla_{\theta_{t}} Q(s_k,a_k; \theta_t), \end{equation} where $\alpha$ is the learning rate and the TD error is defined as \begin{equation}\nonumber \text{TD}= Q_{\text{target}}(s_k,a_k) - Q(s_k,a_k;\theta_t), \end{equation} with the target Q-value $Q_{\text{target}}$ defined as \begin{equation}\nonumber Q_{\text{target}}(s_k,a_k) = r_k + \gamma\max_{a'}{Q(s'_k,a'; \theta_t)}. \end{equation} As $\alpha$ in function approximation Q-learning is usually very small, for each update the parameterized function moves toward its target by a small amount. Our framework can be extended to function approximation methods by slightly modifying the definition of the value metrics of experience. Note that if we apply the original definition of EVB in (\ref{eq:evb3}) directly to function approximation methods, the Q-function after the update, $Q_{\text{new}}(s,a)= Q(s,a;\theta_{t+1})$, involves a gradient-based update, which complicates the analysis and breaks the inequalities derived in the tabular case. As a remedy, we replace $Q(s,a; \theta_{t+1})$ with the target Q-value $Q_{\text{target}}(s,a)$ in the value metrics of experience (\ref{eq:evb}-\ref{eq:eiv}) and (\ref{eq:evbsoft}-\ref{eq:eivsoft}). The intuition is simple: the value is defined by the cause of the update (the target Q-value), not by the result of the gradient-based update. Moreover, this modification allows our theory to apply to all function approximation methods, regardless of the specific form of the function approximator (linear functions or neural networks). After these modifications, the value metrics of experience have a form similar to the tabular case, and all theorems derived in the tabular case can be applied to function approximation methods.
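A minimal sketch of this substitution, using a linear Q-function as an assumed approximator: the post-update Q-values are replaced by the target value at the experienced state-action pair, after which the tabular bound applies with an effective step size of 1:

```python
import numpy as np

def q_values(theta, phi_s):
    """Linear Q-function: Q(s, a) = theta[a] @ phi(s), one value per action."""
    return theta @ phi_s

rng = np.random.default_rng(2)
theta = rng.normal(size=(4, 8)) * 0.1         # 4 actions, 8 state features
phi_s, phi_next = rng.normal(size=8), rng.normal(size=8)
r, gamma, alpha, a = 1.0, 0.9, 0.01, 0

q_old = q_values(theta, phi_s)
target = r + gamma * q_values(theta, phi_next).max()
td = target - q_old[a]
theta[a] += alpha * td * phi_s                # gradient of linear Q w.r.t. theta[a] is phi(s)

# Modified metrics: substitute the target Q-value for Q_new at (s, a).
q_tgt = q_old.copy()
q_tgt[a] = target
evb = q_tgt.max() - q_old.max()               # |EVB| <= |TD| by the max contraction
assert abs(evb) <= abs(td) + 1e-12
```

Because `q_tgt` differs from `q_old` only at the experienced action, and by exactly the TD error, the contraction argument from the tabular proof carries over unchanged.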
To test whether our theoretical predictions hold in function approximation methods, we simulated one DQN (deep Q-network) agent and one soft DQN (DQN with soft updates) agent in the CartPole environment, where the goal is to keep the pole balanced by moving the cart forward and backward (further details are described in Appendix~\ref{sec:app4}). From Figure~\ref{fig:fig4_icml}, all value metrics of experience in DQN (Figure~\ref{fig:fig4_icml}a-c) and soft DQN (Figure~\ref{fig:fig4_icml}d-f) are bounded by the theoretical upper bounds. For DQN, $|\text{EVB}|$ and $|\text{PIV}|$ are uniformly distributed in the bounded area, while $|\text{EIV}|$ is equal to $|\text{TD}|$ or $0$. Results are different in soft DQN, where $|\text{EVB}|$ and $|\text{EIV}|$ are distributed more closely towards the theoretical upper bounds, suggesting that the upper bound in soft Q-learning is tighter. Moreover, Figure~\ref{fig:fig4_icml}g-i shows that $|\text{EVB}|$ and $|\text{EIV}|$ are lower-bounded by $\rho^\text{min}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$, while $|\text{PIV}|$ is not. The experimental results confirm that the bounds of the value metrics hold for function approximation methods. \section{Experiments on Atari Games} \begin{figure}[t!] \centering \includegraphics[width=.85\columnwidth, clip=true, trim = 10mm 20mm 10mm 25mm]{images/supp_fig_barh2.pdf} \caption{Illustration of the difference between $|\text{TD}|$ and the theoretical upper bound of the value metrics in soft Q-learning. Depicted are the theoretical upper bound (left), $|\text{TD}|$ (middle), and the policy term (right) of 50 experiences from the replay buffer in the grid-world maze (upper panel) and CartPole (lower panel), ordered by the theoretical upper bound.} \label{fig:supp2} \vspace*{-0.2cm} \end{figure} \begin{figure*}[t!]
\centering \includegraphics[width=0.75\textwidth]{images/Figure_4_atariv2_ef.pdf} \caption{Learning curves of soft DQN (blue lines) and soft DQN with prioritized experience replay in terms of the soft TD error (PER, orange lines) and the theoretical upper bound of the value metrics of experience (VER, green lines) on Atari games. Solid lines are the average return over 8 evaluation runs and the shaded area is the standard error of the mean.} \label{fig:fig4} \end{figure*} The theoretical upper bound ($\rho^\text{max}_\pi \cdot |\text{TD}|$) of the value metrics balances the prediction error and ``on-policyness'' of the experience. To better illustrate the difference between the theoretical upper bound and $|\text{TD}|$, we randomly drew 50 experiences from the grid-world maze and CartPole experiments. From Figure~\ref{fig:supp2}, we can see that the experiences with the highest theoretical upper bounds are associated with both higher $|\text{TD}|$ and higher ``on-policyness''. To investigate whether the theoretical upper bound can serve as an appropriate priority for experience replay in soft Q-learning, we compare the performance of soft DQN with different prioritization strategies: uniform replay, and prioritization with $|\text{TD}|$ or the theoretical upper bound ($\rho^\text{max}_\pi \cdot \left| \text{TD}^{\text{soft}}\right|$), which are denoted by soft DQN, PER and VER (valuable experience replay), respectively. This set of experiments consists of 9 Atari 2600 games selected according to \cite{schulman2017equivalence}, balancing the generality of the games against limited compute power. We closely follow the experimental setting and network architecture outlined by \cite{Mnih2015}. For each game, the network is trained on a single GPU for 40M frames, or approximately 5 days. More details on the settings and hyperparameters are available in Appendix \ref{sec:app4}.
Figure~\ref{fig:fig4} shows that soft DQN prioritized by $|\text{TD}|$ or the theoretical upper bound substantially outperforms uniform replay in most of the games. On average, soft DQN with PER or VER outperforms vanilla soft DQN by 11.8\% or 18.0\%, respectively. Moreover, VER converges faster and outperforms PER in most of the games (by 8.47\% on average), which suggests that a tighter upper bound on the value metrics improves the performance of experience replay. These results suggest that the theoretical upper bound can serve as an appropriate priority for experience replay in soft Q-learning. \section{Discussion} In this work, we formulate a framework to study the relationship between the importance of experience and $|\text{TD}|$. To quantify the importance of experience, we derive three value metrics of experience: expected value of backup, evaluation improvement value, and policy improvement value. For Q-learning, we theoretically show that these value metrics are upper-bounded by $|\text{TD}|$. Thus, $|\text{TD}|$ implicitly tracks the value of the experience, which leads to the high sample efficiency of PER. Furthermore, we extend our framework to maximum-entropy RL, showing that these value metrics are lower- and upper-bounded by the product of a policy term and $|\text{TD}|$. Experiments in a grid-world maze and CartPole support our theoretical results for both tabular and function approximation RL methods, showing that the derived bounds hold in practice. Moreover, we show that experience replay using the upper bound as a priority improves maximum-entropy RL (\textit{i.e.}, soft DQN) in Atari games. By linking $|\text{TD}|$ and the value of experience, two important quantities in learning, our study has the following implications.
First, from a machine learning perspective, our study provides a framework to derive appropriate priorities of experience for different algorithms, with possible extensions to batch RL \citep{fu2020d4rl} and N-step learning \citep{hessel2017rainbow}. Second, for neuroscience, our work provides insight into how the brain might encode the importance of experience. Since $|\text{TD}|$ biologically corresponds to the reward prediction-error signal in the dopaminergic system \citep{schultz1997neural, glimcher2011understanding} and implicitly tracks the value of the experience, the brain may rely on it to differentiate important experiences.
\section{Introduction} The demand for deploying DNN models on edge devices (e.g., mobile phones, robots, and self-driving cars) is expanding rapidly. However, the increasing memory and computing power requirements of DNNs make their deployment on edge devices a grand challenge. Thus, various custom-made DNN models have been introduced by experts to accommodate DNN models with reasonably high accuracy on mobile devices~\cite{howard2019searching,tan2019efficientnet,zhang2018shufflenet,ma2018shufflenetv2,mehta2020dicenet,huang2018condensenet}. In addition to mobile-friendly deep networks, model optimization methods such as network pruning~\cite{han2015deep,he2018amc}, factorization~\cite{Sainath2013factorization}, knowledge distillation~\cite{hinton2015distilling}, and parameter quantization~\cite{han2015deep} help to shrink the DNN model size down to the target hardware capabilities. Among such methods, network pruning has been shown to be considerably useful in model compression by introducing sparsity or eliminating channels or filters, yet it requires extensive knowledge and effort to find the perfect balance between accuracy and model size. The main challenge of network pruning is to find the best pruning schedule or strategy for the layers of a network. Furthermore, a pruning strategy for a given DNN cannot be used for other networks due to their different structures. Thus, each network demands a customized pruning strategy. Recently, He et al.~\cite{he2018amc} leveraged reinforcement learning (RL) to automatically find the best pruning strategy. However, they used manually defined rules, such as the number of input/output channels, parameter size, and FLOPs, for the RL environment state vectors and ignored the rich structural information within the DNN. Yu et al.~\cite{yu2020agmc} were the first to model a given DNN as a hierarchical graph and proposed a GNN-based encoder-decoder to embed DNN layers.
However, their method learns the topology indirectly and does not consider topology changes during model compression. Moreover, existing RL-based model compression methods require a manually defined pruning ratio to reach the desired model size reduction. Although the model accuracy is used within the RL agent's reward function, there is a negative correlation between the compression ratio and the reward. Thus, without any constraint, the RL agent tends to search for a tiny compression ratio to get a better reward. Deep neural networks are already represented as computational graphs in deep-learning frameworks, such as TensorFlow~\cite{Abadi2016TensorFlow} and PyTorch~\cite{Paszke2019pytorch}. Such a representation contains various patterns (a.k.a.\ motifs) repeated throughout the network topology. For instance, MobileNetV2~\cite{Sandler2018mobileNetv2} involves 17 blocks, each following a similar graph and operation structure. The topology of the blocks can represent their states, allowing us to exploit their redundancy and importance and search for a suitable compression policy. Such structural characteristics within DNNs inspired us to model them as hierarchical computational graphs and learn the compression policy. In a nutshell, we model a given DNN as a hierarchical computational graph and propose a multi-stage graph neural network (m-GNN) to embed DNNs. Additionally, we equip m-GNN with a reinforcement learning agent (GNN-RL) to automatically search for the compression policy (e.g., pruning ratios). To avoid tiny compression ratios due to the negative correlation between the compression ratio and the RL agent's reward, we create a DNN-Graph environment for the GNN-RL agent. Such an environment allows the agent to continuously compress the DNN until it satisfies the model size constraint. For each compression step, the DNN-Graph environment converts the compressed DNN to a graph. The graph is the environment state input to the GNN-RL agent.
Once the compressed DNN satisfies the desired model size, the DNN-Graph environment ends the search episode and uses the pruned DNN's accuracy as a reward for the GNN-RL agent. In essence, this paper makes the following contributions: \begin{itemize} \item A novel method for modeling DNNs as hierarchical graphs to exploit their topological information for network pruning. \item An efficient multi-stage GNN and a learning-based pooling method to learn hierarchical graph embeddings. \item A topology-aware solution based on GNN and RL for automatic network pruning. \item State-of-the-art model compression results on various DNN models. \end{itemize} \section{Related Work} Within the context of this paper, researchers have already proposed various methods to compress DNN models, such as architecture design, network pruning, and quantization. Graph neural networks are also gaining momentum within these research fields. In the following, we review these methods. \textbf{Model Compression.} Extensive work focuses on model compression and the efficient deployment of DNNs, such as network pruning~\cite{han2015deep,he2018amc}, knowledge distillation~\cite{hinton2015distilling}, and network quantization~\cite{han2015deep,courbariaux2016binarized,rastegari2016xnor}. Within the scope of this paper, we mainly consider network pruning. Structured~\cite{Anwar2017Structured} and unstructured pruning~\cite{zhang2018unstructured,Guo2016unstructured} evaluate the importance of model parameters and remove those with lower ranks. Unstructured pruning promises a higher compression ratio through tensor sparsification. However, the potential speedup is only attainable on specialized AI accelerators. On the other hand, structured pruning attempts to eliminate filters or channels and benefits all hardware platforms.
For instance, the uniform, shallow, and deep empirical structured pruning policies~\cite{he2017handcraft_channel,Li2016handcraft}, as well as hand-crafted structured pruning methods such as SPP~\cite{wang2017SPP}, FP~\cite{Li2016handcraft}, and RNP~\cite{Lin2017RNP}, fall into the structured pruning category. SPP analyzes each layer and measures a reconstruction error to determine the pruning ratio. FP evaluates the performance of single-layer pruning, ranks the importance of layers, and prunes low-ranked layers aggressively. RNP groups all convolutional channels into sets and trains an RL agent to decide on the sets. However, hand-crafted pruning policies often fail to generalize to new models and might lead to sub-optimal performance. Recently, researchers have tended to leverage reinforcement learning to search for pruning policies automatically. Liu et al.~\cite{liu2020AutoCompress} proposed an ADMM-based~\cite{Boyd2011ADMM} structured weight pruning method and an innovative additional purification step for further weight reduction. He et al.~\cite{he2018amc} proposed AMC for network pruning and leveraged reinforcement learning to predict each hidden layer's compression policy. However, they manually defined the DNN's embeddings and ignored the neural network's essential structural information. Yu et al.~\cite{yu2020agmc} were the first to model DNNs as graphs and introduced a GNN-based graph encoder-decoder to embed DNNs' hidden layers. Nevertheless, their RL agent learns the topology information indirectly and is insensitive to the structural changes of DNNs during pruning. \textbf{Graph Neural Networks (GNN).} GNN and its variants~\cite{kipf2017gcn,Schlichtkrull2018rgcn} can learn graph embeddings and have been successfully used for link prediction~\cite{Nowell2007linkprediction} and node classification. However, these methods mainly focus on node embedding and are inherently flat, which makes them inefficient for hierarchical data.
In this paper, we aim to learn the global topology information of DNNs. Thus, we propose the multi-stage GNN (m-GNN), which takes advantage of the repetitive motifs available in DNNs. m-GNN considers the edge features and has a novel learning-based pooling strategy to learn the global graph embedding. \textbf{Graph-based Neural Architecture Search (NAS).} Although this paper is not directly related to NAS, it is an active area of research wherein computationally expensive operations are replaced with more efficient alternatives. In particular, graph-based NAS methods apply GNNs and use graph-based neural architecture encoding schemes to exploit the neural network's topology. They model neural architecture search spaces as graphs and aim to search for the best-performing neural network structure~\cite{Guo2019NAS_NAT,Han2020NAS_oneshot,Dudziak2021BPR_NAS}. Such methods inspired us to derive the compression policy from the topology information of DNNs. \section{Approach} To prune a given DNN, the user provides a model size constraint~(e.g., a FLOPs constraint). The DNN-Graph environment receives the constraint, takes the DNN's hierarchical computational graph as the environment state, and leverages the GNN-RL agent to search for a compression policy. Figure~\ref{fig:2} depicts a high-level overview of our method. A DNN-Graph environment episode is essentially a model compression iteration. As the red arrows show, the process starts from the original DNN. The model size evaluator first evaluates the size of the DNN. If the constraint is not satisfied, the graph generator converts the DNN to a hierarchical computational graph. Then the GNN-RL agent leverages m-GNN to learn pruning ratios (the compression policy) from the graph. The pruner prunes the DNN with the pruning ratios and begins the next iteration from the compressed DNN. Each compression step changes the DNN's topology.
Thus, the DNN-Graph environment reconstructs a new hierarchical computational graph for the GNN-RL agent corresponding to the current compression state. Once the compressed DNN satisfies the size constraint, the evaluator ends the episode, and the accuracy evaluator assesses the pruned DNN's accuracy as an episode reward for the GNN-RL agent. As opposed to existing RL-based methods~\cite{he2018amc,yu2020agmc,liu2020AutoCompress}, with the DNN-Graph environment, GNN-RL can automatically learn to reach the desired model size. Hence, it avoids manual adjustments and prevents tiny compression ratios. In the following, we explain the details of the m-GNN and the RL agent within our approach. \input{contents/f02_figure2} \subsection{Hierarchical graph representation} \label{sec:hierarchicalgraph} The representation of neural networks as computational graphs in deep-learning frameworks, such as TensorFlow and PyTorch, contains rich topology information. However, it may involve billions of operations~\cite{he2016ResNet}, which makes the computational graph bloated. Nevertheless, computational graphs often contain repetitive sub-graphs (a.k.a.\ motifs), such as 3$\times$3 convolutions or custom blocks in state-of-the-art networks. We can simplify the computational graphs by extracting the motifs and modeling them as hierarchical computational graphs. Additionally, we can make the graph coarser by replacing primitive operations such as \textit{add}, \textit{multiply}, and \textit{subtract} with high-level machine-learning operations (e.g., convolution, pooling, etc.). Formally, we model the DNN as an $l$-layer hierarchical computational graph, such that at the $l^{th}$ layer (the top layer) we have the hierarchical computational graph set $\mathcal{G}^{l} = \{G^l\}$, where each item is a computational graph $G^l = (V^l,\mathcal{E}^l,\mathcal{G}^{l-1})$. $V^l$ is the set of graph nodes corresponding to hidden states.
$\mathcal{E}^l$ is the set of directed edges, each with an edge type associated with an operation. Lastly, $\mathcal{G}^{l-1} = \{G^{l-1}_0,G^{l-1}_1,...\}$ is the computational graph set at the $(l-1)^{th}$ layer and the operation set at layer $l$. Within the first layer, we manually choose commonly used machine-learning operations as the primitive operations for $\mathcal{G}^{0}$. As an example, Figure \ref{fig:1} illustrates the idea behind generating hierarchical computational graphs using a sample graph $G$, where the edges are operations and the nodes are hidden states. In the input graph, we choose three primitive operations $\mathcal{G}^{0} = $ \{1$\times$1 conv, 3$\times$3 conv, 3$\times$3 max-pooling\} corresponding to the three edge types. Then, we extract the repetitive subgraphs (i.e., $G^1_1$, $G^1_2$ and $G^1_3$), each denoting a compound operation, and decompose the graph $G$ into two hierarchical levels, as shown in Figure \ref{fig:1}~(b) and (c). The level-1 computational graphs are motifs that correspond to the edges within the level-2 computational graph. \input{contents/f01_figure1} The hierarchical computational graph's size depends on the primitive operations we choose in $\mathcal{G}^{0}$. In our experiments, we choose commonly used machine-learning operations as primitive operations (e.g., convolution, pooling, etc.). \subsection{Network pruning using GNN and RL} \subsubsection{Multi-stage GNN} Standard GNNs and their variants~\cite{kipf2017gcn} are inherently flat~\cite{Ying2018DiffPool}. Since we model a given DNN as an $l$-layer hierarchical computational graph (see Section~\ref{sec:hierarchicalgraph}), we propose a multi-stage GNN~(m-GNN), which embeds the hierarchical graph in $l$ stages according to its hierarchical levels and analyzes the motifs. As depicted in Figure~\ref{fig:1}, m-GNN initially learns the lower-level embeddings and uses them as the corresponding edge features in higher-level computational graphs.
Instead of learning node embeddings, m-GNN aims to learn the global graph representation. We further introduce a novel learning-based pooling strategy for every stage of embedding. With m-GNN, we only need to embed each motif of the computational graph once, which is much more efficient and uses less memory than embedding a flat computational graph with a standard GNN. \textbf{Multi-stage Embedding.} For the computational graphs $\mathcal{G}^{t} = \{G^{t}_0,G^{t}_1,...,G^{t}_{N_t}\}$ in the $t^{th}$ hierarchical layer, we embed the computational graph $G^t_i = (V^t_i,\mathcal{E}^t_i,\mathcal{G}^{t-1}), i \in \{1,2,...,N_t\}$ as: \begin{equation} e^t_i = EncoderGNN_t(G^t_i, E_{t-1}) \end{equation} where $e^t_i$ is the embedding vector of $G^{t}_i$, and $E_{t-1} = \{e^{t-1}_j\}, j \in \{1,2,...,N_{t-1}\}$ is the set of embeddings of the computational graphs at level ${t-1}$, which also serve as the edge types at level ${t}$. For layer 1, $E_{0}$ contains the initial features (e.g., one-hot or random standard-normal vectors) of the primitive operations $\mathcal{G}^{0}$ that we manually select. In the hierarchical computational graphs, each edge corresponds to a computational graph of the previous level and uses its graph embedding as the edge feature. Furthermore, the graphs at the same hierarchical level share the GNN's parameters. At the top layer ($l^{th}$ layer) of the hierarchical graph $\mathcal{G}^{l} = \{G^l\}$, we only have one computational graph, and its embedding is the DNN's final embedding $g$: \begin{equation} g = EncoderGNN_l(G^l, E_{l-1}) \end{equation} \textbf{Message passing.} In the multi-stage hierarchical embedding, we consider the edge features.
However, the standard graph convolutional network (GCN)~\cite{kipf2017gcn} only passes node features, and its message-passing function can be formulated as follows: \begin{equation} h^{l+1}_i = \sum_{j\in N_i}\frac{1}{c_i}W^l h^l_j \end{equation} where $h$ denotes the nodes' hidden states, $c_i$ is a normalization constant, $N_i$ is the set of node $i$'s neighbors, and $W^l$ is the GNN's learnable weight matrix. Instead of standard message passing, in the multi-stage GNN, we add the edge features: \begin{equation} h^{l+1}_i = \sum_{j\in N_i}\frac{1}{c_i}W^l (h^l_j\circ e^{l-1}_k) \end{equation} where $e^{l-1}_k$ is the feature of edge $(i,j)$, which is also the embedding of the $k^{th}$ graph at layer $l-1$, such that edge $(i,j)$ corresponds to the operation $G^{l-1}_k$. The operation $\circ$ denotes the element-wise product, which we select for convenience in multi-stage message passing; other binary operations could be substituted. \textbf{Learning-based pooling.} A standard GNN aims to learn the node embeddings of a graph~(e.g., learn node representations and perform node classification). However, our goal is to learn the graph representation of a given DNN. Thus, we introduce a learning-based pooling method for the multi-stage GNN to pool node embeddings and learn the graph embedding. We define the graph embedding $e$ as: \begin{equation} e = \sum_{i\in N}\alpha_i h_i \end{equation} where $N$ is the set of nodes, $h_i$ is the $i^{th}$ node embedding, and $\alpha_i$ is the learnable weight coefficient for $h_i$. In the multi-stage GNN, the computational graphs at the same hierarchical level share the GNN's parameters, but in the pooling, each computational graph has its own learnable pooling parameters $\alpha$. \subsubsection{Reinforcement learning} We use the generated hierarchical computational graph $\mathcal{G}^{l}$ to represent both the DNN's state and the RL agent's environment state.
Since pruning the model causes its underlying graph topology to change, we constantly update the graph $\mathcal{G}^{l}$ after each pruning step to help the RL agent find the pruning policy for the current state. We employ deep deterministic policy gradient (DDPG) RL~\cite{lillicrap2016ddpg} together with m-GNN~(GNN-RL) to learn the compression policy directly from topology states. The actor and critic networks within the GNN-RL agent each contain an m-GNN graph encoder and a multi-layer perceptron (MLP). The graph encoder learns the graph embedding, and the MLP projects the embedding into the action space~(i.e., the compression policy). The actor's output layer applies the sigmoid function to bound the actions within $(0,1)$. Specifically, we perform FLOPs-constrained model compression using structured channel pruning (filter pruning) on the DNN's convolutional layers, which are the most computationally intensive. Thus, the GNN-RL agent's action space $A\in \mathbb{R}^{N \times 1}$, where $N$ is the number of pruning layers, is the set of pruning ratios for the hidden layers: $A=\{a_i\}$, where $i \in \{1,2,...,N\}$, and $a_i \in [0,1)$ is the pruning ratio for the $i^{th}$ layer. The GNN-RL agent makes the actions directly from the topology states: \begin{equation} g = GraphEncoder(\mathcal{G}^{l}) \label{eq:graphencoder} \end{equation} \begin{equation} A = MLP(g) \label{eq:mlp} \end{equation} where $\mathcal{G}^{l}$ is the environment state, $g$ is the graph representation, and MLP is a multi-layer perceptron. The graph encoder learns the topology embedding, and the MLP projects the embedding into the hidden layers' pruning ratios. The reward function is defined in Equation~\ref{eq:reward}. \begin{equation} R_{err} = -Error \label{eq:reward} \end{equation} where \textit{Error} is the compressed DNN's top-1 error on the validation set.
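To make the pieces above concrete, the following is a minimal, self-contained NumPy sketch of a single embedding stage (our hypothetical illustration, not the actual GNN-RL implementation): \code{message\_passing} follows the edge-feature propagation rule of Equation (4), \code{graph\_pool} implements the learnable weighted sum of Equation (5), and \code{actor\_head} mimics the actor's MLP whose sigmoid output bounds pruning ratios within $(0,1)$. All function names, array shapes, and the toy graph are invented for illustration.

```python
import numpy as np

def message_passing(H, edges, edge_feat, W):
    """Edge-feature propagation: node i aggregates W @ (h_j * e_k) over
    in-neighbors j, where e_k embeds the operation on edge (i, j);
    the 1/c_i factor is realized here as mean aggregation."""
    n = H.shape[0]
    H_new = np.zeros((n, W.shape[0]))
    deg = np.zeros(n)
    for (i, j, k) in edges:              # edge (i, j) carries operation type k
        H_new[i] += W @ (H[j] * edge_feat[k])
        deg[i] += 1
    deg[deg == 0] = 1                    # source nodes keep zero states
    return H_new / deg[:, None]

def graph_pool(H, alpha):
    """Learning-based pooling: weighted sum of node embeddings."""
    return alpha @ H

def actor_head(g, W1, b1, W2, b2):
    """Toy actor MLP; sigmoid bounds each pruning ratio to (0, 1)."""
    h = np.maximum(0.0, W1 @ g + b1)     # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                # 4 nodes, 8-dim hidden states
edge_feat = 0.1 * rng.normal(size=(3, 8))  # embeddings of 3 operation types
edges = [(1, 0, 0), (2, 1, 1), (3, 2, 2), (3, 1, 0)]
W = 0.1 * rng.normal(size=(8, 8))
alpha = 0.1 * rng.normal(size=4)           # learnable pooling weights

g = graph_pool(message_passing(H, edges, edge_feat, W), alpha)
ratios = actor_head(g, 0.1 * rng.normal(size=(8, 8)), np.zeros(8),
                    0.1 * rng.normal(size=(5, 8)), np.zeros(5))
assert ratios.shape == (5,) and np.all((ratios > 0) & (ratios < 1))
```

In the full m-GNN, this stage is repeated once per hierarchical level: the pooled embedding of each motif becomes the edge feature \code{edge\_feat[k]} of the next level up, and the encoder weights are shared among graphs at the same level.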
\section{Experiments} To show the effectiveness of GNN-RL, we evaluate our approach on over-parameterized DNNs (e.g., ResNet-20/32/44/56/110~\cite{he2016ResNet} and VGG-16~\cite{Simonyan2015VGG}) and mobile-friendly DNNs (e.g., MobileNet~\cite{Andrew2017MobileNetv1,Sandler2018mobileNetv2} and ShuffleNet~\cite{ma2018shufflenetv2,zhang2018shufflenet}). Additionally, to demonstrate the superiority of our proposed method, we compare GNN-RL with three sets of methods: \begin{itemize} \item Uniform, shallow, and deep empirical policies~\cite{he2017handcraft_channel,Li2016handcraft}. \item Handcrafted channel reduction methods, such as SPP~\cite{wang2017SPP}, FP~\cite{Li2016handcraft}, and RNP~\cite{Lin2017RNP}. \item State-of-the-art RL-based AutoML methods, such as AMC~\cite{he2018amc}, AGMC~\cite{yu2020agmc}, and random search (RS) with RL. \end{itemize} We use a soft target update rate of $\tau = 0.01$ for the GNN-RL updates. In the first $30$ episodes, we warm up the agent with random actions. Then, the agent explores for 150 episodes with exponentially decayed noise, and we train the networks with a batch size of 64 and a replay buffer of size 2000. The experiments involve multiple datasets, including CIFAR-10/100~\cite{Krizhevsky2009Cifar} and ImageNet~\cite{Olga2015ImageNet}. For CIFAR-10/100, we sample $5K$ images from the test set as the validation set. For ILSVRC-2012, we split $10K$ images from the test set as the validation set. When searching, the DNN-Graph environment uses the compressed model's $R_{err}$ on the validation set as the GNN-RL agent's reward. \subsection{Over-parameterized DNNs} \input{contents/t01_table1} \input{contents/f04_figure4} We evaluate the effectiveness of GNN-RL on ResNet-20/32/44/56/110~\cite{he2016ResNet} and VGG-16~\cite{Simonyan2015VGG}, which fall into the over-parameterized networks category. With its residual connections, ResNet avoids gradient vanishing and enables efficient training of its deep layers.
However, its deep neural structure and large number of parameters make ResNet challenging to deploy on edge devices. Similarly, the VGG-16 network contains large and dense convolutional layers, where some layers have hundreds of filters, leading to a giant model size (528 MB of GPU memory for VGG-16). To compress these over-parameterized DNNs, we perform FLOPs-constrained channel pruning (filter pruning) on their convolutional layers. We trained the ResNet-20/32/44/56/110 and VGG-16 models on the CIFAR-10~\cite{Krizhevsky2009Cifar} and ImageNet~\cite{Olga2015ImageNet} datasets, respectively. Since the validation accuracy on the ImageNet dataset is sensitive to the compression ratio, with high compression ratios, the accuracy drops considerably without fine-tuning (in some cases, the pruned model without fine-tuning has less than $1\%$ validation accuracy). We therefore applied a one-epoch fine-tuning step in each RL search episode to ensure that GNN-RL receives a meaningful reward when pruning VGG-16. When pruning ResNet-20/32/44/56/110, we share the pruning index between residual connection layers to avoid channel mismatch. Table~\ref{table_1} shows the top-1 test accuracy of the pruned models. We set a $50\%$ FLOPs constraint, and all the RL-based methods use $R_{err}$ as the reward. After pruning, we fine-tuned the DNNs for 100 epochs and only updated the pruned layers' parameters. Results show that GNN-RL outperforms all the baselines, achieving higher test accuracy and compression ratios. For the ResNet-110/56/44 models, the model pruned by GNN-RL even achieves higher test accuracy than the original model. After further investigation, we believe this is due to over-fitting of ResNet-110/56/44, as their accuracy on the training set was 100\%. To verify this assumption, we performed a further experiment to explore the relationship between the FLOPs constraints and the accuracy of the DNNs.
Figure~\ref{fig:5} shows that a FLOPs ratio between 0.4 and 0.6 (relative to the original model's FLOPs) yields the highest test accuracy on ResNet-110. When the FLOPs reduction ratio exceeds 0.6, the test accuracy drops sharply. \input{contents/f05_figure5} In addition to the experiments above, we further analyzed the redundancy and the importance of each layer. Figure~\ref{fig:4} shows the hidden layers' pruning ratios on ResNet-110 and ResNet-56. ResNet contains residual connection layers, which transfer hidden states directly from previous residual layers. Thus, the residual connection layers are more redundant yet informative, since they contain the hidden states of both the current layer and the previous layers. The GNN-RL agent automatically learns that the residual connection layers are more redundant and applies more pruning to them. Another insight from Figure~\ref{fig:4} is that the GNN-RL agent applies more pruning to layers 45 to 65 of ResNet-110. Similarly, layers 23 to 35 of ResNet-56 are pruned more. This suggests that the middle layers have less impact on model accuracy. \subsection{Mobile-friendly DNNs} \input{contents/f06_figure6} We evaluated GNN-RL on MobileNet-v1/v2~\cite{Andrew2017MobileNetv1,Sandler2018mobileNetv2} and ShuffleNet-v1/v2~\cite{zhang2018shufflenet,ma2018shufflenetv2}, which are more suitable for devices with limited resources. Instead of using traditional convolutional operations, MobileNet-v1/v2 and ShuffleNet-v1/v2 employ custom-designed, more efficient convolutional blocks. To maintain the characteristics and high efficiency of these custom-designed blocks, we developed specific pruning strategies for them. \subsubsection{Pruning strategy} \textbf{MobileNet-v1.} The MobileNet-v1 block separates the convolution into depth-wise and point-wise convolutions~\cite{Andrew2017MobileNetv1}. Each depth-wise filter only operates on one channel of the feature maps.
On the other hand, the point-wise operations are $1\times1$ convolutions, which operate on the feature maps processed by the depth-wise convolutions. In our experiments, applying regular filter pruning to such layers causes information loss. As depicted in Figure~\ref{fig:6}, pruning the filter painted in grey causes its corresponding channel (the green one) to be deleted as well. To handle this, instead of pruning depth-wise and point-wise filters separately, we only prune the point-wise filters within MobileNet-v1 blocks. \textbf{MobileNet-v2.} MobileNet-v2 is principally designed based on MobileNet-v1 blocks with an additional linear expansion layer. The linear expansion layers are 1$\times$1 convolutions without non-linear activation. Residual shortcuts connect every two linear expansion layers, linking the MobileNet-v1 blocks. Similar to MobileNet-v1, here we prune the linear expansion layers and point-wise convolutional layers. Since the residual connections run between linear expansion layers, we share the pruning ratio across the linear expansion layers. \textbf{ShuffleNet-v1/v2.} The ShuffleNet model uses blocks containing depth-wise and point-wise convolutions, channel shuffle, linear expansion, and residual connections. To avoid dimension mismatch when downsampling, we consider each ShuffleNet block as a whole and perform channel pruning inside the blocks. In a ShuffleNet block, we do not prune the expansion layer (the output layer of the block), which preserves the number of output channels and keeps the feature map dimensions when downsampling. \subsubsection{Results} \input{contents/t02_table2} Table~\ref{table_2} shows the FLOPs-constrained channel pruning results with 60\% and 80\% FLOPs ratios for ShuffleNet and MobileNet, respectively. We compare GNN-RL with AGMC~\cite{yu2020agmc} and random search (RS) with RL. We did not include AMC and the handcrafted methods, since we designed specific pruning strategies for the mobile-friendly DNNs.
We believe that these strategies are incompatible with AMC's layer embeddings and handcrafted rules, which would lead to an unfair comparison. The MobileNet-v1/v2 and ShuffleNet-v1/v2 models are pre-trained on CIFAR-100~\cite{Krizhevsky2009Cifar}. After pruning, we fine-tuned the compressed DNNs for 150 epochs. Our approach outperformed all the baselines. Although these networks are already very compact, with a $20\%$ FLOPs reduction on MobileNet-v2, GNN-RL increases the top-1 accuracy by $0.19\%$. \subsection{Inference acceleration and memory saving} \input{contents/t03_table3} The inference latency and memory usage of compressed DNNs are essential metrics for determining whether a DNN can be deployed on a given platform. Thus, we evaluated the pruned models' inference latency using PyTorch 1.7.1 on an Nvidia GTX 1080Ti GPU and recorded the GPU memory usage. The ResNet-110/56/44/32/20 models are measured on the CIFAR-10 test set with batch size 32. VGG-16 is evaluated on the ImageNet test set with batch size 32. Lastly, MobileNet-v1/v2 and ShuffleNet-v1/v2 are measured on CIFAR-100 with batch size 32. Table~\ref{table_3} shows the inference accelerations and memory savings on our GPU. All the models pruned by GNN-RL achieve noteworthy inference acceleration and GPU memory reductions. In particular, for VGG-16, the original model's GPU memory usage is 528 MB, since its very large dense layers contribute little to FLOPs but lead to an extensive memory requirement. GNN-RL prunes the convolutional layers and significantly reduces the feature map sizes, thus consuming 141 MB less memory than the original version. The inference acceleration on VGG-16 is also noticeable, with a $1.38 \times$ speedup on ImageNet. The inference acceleration for the mobile-friendly DNNs may seem relatively insignificant. However, such models are designed for deployment on mobile devices.
Thus, we believe that our test GPU, with its extensive resources, does not take advantage of the mobile-friendly properties. \section{Conclusion} This paper proposed a network compression approach called GNN-RL, which utilizes a graph neural network and a reinforcement learning agent to learn a topology-aware compression policy. We introduced a DNN-Graph environment that converts the compression states into a topology-changing process and allows GNN-RL to learn the desired compression ratio without human intervention. To efficiently embed DNNs and take advantage of motifs, we introduced m-GNN, a new multi-stage graph embedding method. In our experiments, GNN-RL was validated on over-parameterized and mobile-friendly networks. For the over-parameterized ResNet-110/56/44 models pruned by GNN-RL, the test accuracy even exceeded that of the original models, i.e., $+0.63\%$ on ResNet-110, $+0.1\%$ on ResNet-56, and $+0.13\%$ on ResNet-44. For the mobile-friendly DNNs, the $79\%$ FLOPs MobileNet-v2 pruned by GNN-RL increased the test accuracy by $0.19\%$ compared to the original model. Additionally, all the pruned models accelerated inference and saved a considerable amount of memory.
\section{Introduction} \label{sec:intro} Large Transformer models~\cite{gpt3, gshard} have powered accuracy breakthroughs in both natural language processing and computer vision. GPT-3 hit a new record high accuracy for nearly all NLP tasks. Vision Transformer (ViT) \cite{dosovitskiy2020image} also achieved 89\% top-1 accuracy in ImageNet, outperforming state-of-the-art convolutional networks ResNet-152 \cite{he2016deep} and EfficientNet \cite{tan2019efficientnet}. To tackle the growth in model sizes, researchers have proposed various distributed training techniques, including parameter servers~\cite{ps, byteps, parallax}, pipeline parallel~\cite{gpipe, hetpipe, pipedream}, intra-layer parallel~\cite{gshard, meshtf, megatron}, and zero redundancy data parallel~\cite{zero}. \vspace{-0.05em} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{Figures/Interpretion1.pdf} \vspace{-0.5cm} \caption{Interpretable Freeze Training: DNNs converge bottom up (Results on CIFAR10 using ResNet). Each pane shows layer-by-layer similarity using SVCCA \cite{Raghu2017SVCCASV}.} \label{fig:intepretable_freeze} \vspace{-0.4cm} \end{figure} \vspace{-0.2em} \begin{figure*}[h!] \centering \makebox[\textwidth][c]{\includegraphics[width=\textwidth]{Figures/PipeTransformer2.pdf}} \vspace{-0.8cm} \caption{The process of \texttt{PipeTransformer}'s automated and elastic pipelining to accelerate distributed training of Transformer models} \vspace{-0.5cm} \label{fig:PipeTransformer} \end{figure*} Existing distributed training solutions, however, only study scenarios where all model weights are required to be optimized throughout the training (i.e., computation and communication overhead remains relatively static over different iterations). Recent works on \textit{freeze training} \cite{Raghu2017SVCCASV,NIPS2018_7815,reservoir} suggest that parameters in neural networks usually converge from the bottom-up (i.e., not all layers need to be trained all the way through training). 
Figure~\ref{fig:intepretable_freeze} shows an example of how weights gradually stabilize during training in this approach. This observation motivates us to utilize freeze training to accelerate distributed training of Transformer models by dynamically allocating resources to focus on a shrinking set of active layers. Such a layer freezing strategy is especially pertinent to pipeline parallelism, as excluding consecutive bottom layers from the pipeline can reduce computation, memory, and communication overhead. In this paper, we propose \code{PipeTransformer}, an elastic pipelining training acceleration framework that automatically reacts to frozen layers by dynamically transforming the scope of the pipelined model and the number of pipeline replicas. To the best of our knowledge, this is the first paper that studies layer freezing in the context of both pipeline and data-parallel training. Figure \ref{fig:PipeTransformer} demonstrates the benefits of such a combination. First, by excluding frozen layers from the pipeline, the same model can be packed into fewer GPUs, leading to both fewer cross-GPU communications and smaller pipeline bubbles. Second, after packing the model into fewer GPUs, the same cluster can accommodate more pipeline replicas, increasing the width of data parallelism. More importantly, the speedups acquired from these two benefits are multiplicative rather than additive, further accelerating the training. The design of \code{PipeTransformer} faces four major challenges. First, the freeze algorithm must make adaptive freezing decisions \textit{on the fly}; however, existing work~\cite{Raghu2017SVCCASV} only provides a posterior analysis tool. Second, the efficiency of pipeline re-partitioning is influenced by multiple factors, including partition granularity, cross-partition activation size, and the chunking (the number of micro-batches) of mini-batches, which requires reasoning and searching in a large solution space.
Third, to dynamically introduce additional pipeline replicas, \code{PipeTransformer} must overcome the static nature of collective communications and avoid potentially complex cross-process messaging protocols when onboarding new processes (one pipeline is handled by one process). Finally, caching can save time for repeated forward propagation of frozen layers, but it must be shared between existing pipelines and newly added ones, as the system cannot afford to create and warm up a dedicated cache for each replica. \code{PipeTransformer} is designed with four core building blocks to address the aforementioned challenges. First, we design a tunable and adaptive algorithm to generate signals that guide the selection of layers to freeze over different iterations (Section \ref{sec:freeze}). Once triggered by these signals, our elastic pipelining module \code{AutoPipe}, then packs the remaining active layers into fewer GPUs by taking both activation sizes and variances of workloads across heterogeneous partitions (frozen layers and active layers) into account. It then splits a mini-batch into an optimal number of micro-batches based on prior profiling results for different pipeline lengths (Section \ref{sec:auto_pipe}). Our next module, \code{AutoDP}, spawns additional pipeline replicas to occupy freed-up GPUs and maintains hierarchical communication process groups to attain dynamic membership for collective communications (Section \ref{sec:auto_dp}). Our final module, \code{AutoCache}, efficiently shares activations across existing and new data-parallel processes and automatically replaces stale caches during transitions (Section \ref{sec:auto_cache}). Overall, \texttt{PipeTransformer}\ combines the \code{Freeze Algorithm}, \code{AutoPipe}, \code{AutoDP} and \code{AutoCache} modules to provide a significant training speedup. We evaluate \texttt{PipeTransformer}\ using Vision Transformer (ViT) on ImageNet and BERT on GLUE and SQuAD datasets. 
Our results show that \texttt{PipeTransformer}\ attains up to $2.83$-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we have also developed open-source flexible APIs for \texttt{PipeTransformer}\, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing for transferability to other algorithms that require similar freezing strategies. The source code is made publicly available. \section{Overview} \label{sec:method} \subsection{Background and Problem Setting} \label{sec:bg_problem} Suppose we aim to train a massive model in a distributed training system where the \textit{hybrid of pipelined model parallelism and data parallelism} is used to target scenarios where either the memory of a single GPU device cannot hold the model, or if loaded, the batch size is small enough to avoid running out of memory. More specifically, we define our settings as follows: \textbf{Training task and model definition.} We train Transformer models (e.g., Vision Transformer \cite{dosovitskiy2020image}, BERT \cite{devlin2018bert}) on large-scale image or text datasets. The Transformer model $\mathcal{F}$ has $L$ layers, in which the $i$th layer is composed of a forward computation function $f_i$ and a corresponding set of parameters, $\mathbf{w}_i$. With this definition, the overall model is $\mathcal{F}=f_{0}(\mathbf{w}_0) \circ \ldots \circ f_{L-1}(\mathbf{w}_{L-1})$. The model size is $S$, and the batch size is set to $N_{bs}$. \textbf{Training infrastructure.} Assume the training infrastructure contains a GPU cluster that has $N$ GPU servers (i.e. nodes). Each node has $I$ GPUs. Our cluster is homogeneous, meaning that each GPU and server have the same hardware configuration. Each GPU's memory capacity is $M_\text{GPU}$. 
Servers are connected by a high bandwidth network interface such as \texttt{InfiniBand} interconnect. \textbf{Pipeline parallelism.} In each machine, we load a model $\mathcal{F}$ into a pipeline $\mathcal{P}$ which has $K$ partitions ($K$ also represents the pipeline length). The $k$th partition $p_k$ consists of consecutive layers $p_k=f_{i}(\mathbf{w}_i) \circ \ldots \circ f_{j}(\mathbf{w}_{j})$, and $\mathcal{P}=p_{0} \circ \ldots \circ p_{K-1}$. We assume each partition is handled by a single GPU device. $1 \leq K \leq I$, meaning that we can build multiple pipelines for multiple model replicas in a single machine. We assume all GPU devices in a pipeline belong to the same machine. Our pipeline is a synchronous pipeline, which does not involve stale gradients, and the number of micro-batches is $M$. In the Linux OS, each pipeline is handled by a single process. We refer the reader to \code{GPipe} \cite{gpipe} for more details. \textbf{Data parallelism.} $\code{DDP}$ \cite{ddp} is a cross-machine distributed data parallel process group within $R$ parallel workers. Each worker is a pipeline replica (a single process). The $r$th worker's index (ID) is rank $r$. For any two pipelines $\mathcal{P}^{(r_i)}$ and $\mathcal{P}^{(r_j)}$ in $\code{DDP}$, $r_i$ and $r_j$ can belong to either the same GPU server or different GPU servers, and they can exchange gradients with the \code{AllReduce} algorithm. Under these settings, our goal is to accelerate training by leveraging \emph{freeze training}, which does not require all layers to be trained throughout the duration of the training. Additionally, it may help save computation, communication, memory cost, and potentially prevent overfitting by consecutively freezing layers. 
However, these benefits can only be achieved by overcoming the four challenges of designing an adaptive freezing algorithm, dynamic pipeline re-partitioning, efficient resource reallocation, and cross-process caching, as discussed in the introduction. We next describe our overall design, named \texttt{PipeTransformer}, which can address these challenges. \subsection{Overall Design} \label{sec:overall_design} \begin{figure}[h!] \vspace{-0.2cm} \centering \includegraphics[width=1.0\linewidth]{Figures/systemdesign.pdf} \vspace{-0.5cm} \caption{Overview of \texttt{PipeTransformer}\ Training System} \label{fig:overview_design} \vspace{-0.3cm} \end{figure} \texttt{PipeTransformer}\ co-designs an on-the-fly freeze algorithm and an automated elastic pipelining training system that can dynamically transform the scope of the pipelined model and the number of pipeline replicas. The overall system architecture is illustrated in Figure \ref{fig:overview_design}. To support \texttt{PipeTransformer}'s elastic pipelining, we maintain a customized version of \code{PyTorch Pipe}~\cite{kim2020torchgpipe}. For data parallelism, we use \code{PyTorch DDP} \cite{ddp} as a baseline. Other libraries are standard mechanisms of an operating system (e.g., \code{multi-processing}) and thus avoid specialized software or hardware customization requirements. To ensure the \textit{generality} of our framework, we have decoupled the training system into four core components: \texttt{freeze algorithm}, \texttt{AutoPipe}, \texttt{AutoDP}, and \texttt{AutoCache}. The freeze algorithm (grey) samples indicators from the training loop and makes layer-wise freezing decisions, which are then shared with \texttt{AutoPipe} (green). \texttt{AutoPipe} is an elastic pipeline module that speeds up training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs (pink), leading to both fewer cross-GPU communications and smaller pipeline bubbles.
Subsequently, \texttt{AutoPipe} passes \textit{pipeline length} information to \texttt{AutoDP} (purple), which then spawns more pipeline replicas to increase data-parallel width, if possible. The illustration also includes an example in which AutoDP introduces a new replica (purple). \texttt{AutoCache} (orange edges) is a cross-pipeline caching module, as illustrated by connections between pipelines. The source code architecture is aligned with Figure \ref{fig:overview_design} for readability and generality. \section{Algorithm and System Design} This section elaborates on the four main algorithmic and system-wise design components of \texttt{PipeTransformer}. \subsection{Freeze Algorithm} \label{sec:freeze} The freeze algorithm must be lightweight and able to make decisions on the fly. This excludes existing layer-wise training approaches such as SVCCA~\cite{Raghu2017SVCCASV} which require full training states and heavy posterior measurements. We propose an adaptive on the fly freeze algorithm to define $L_{\text{frozen}}^{(T)}$ at timestep $T$ as follows: \vspace{-0.2em} \begin{equation} \begin{split} \footnotesize \label{eq:freeze} \text{$\min \Bigg( L_{\text{frozen}}^{(T-1)} + \alpha(L - L_{\text{frozen}}^{(T-1)}), \operatornamewithlimits{argmin}\limits_{\ell \in \{L_{\text{frozen}}^{(T-1)}, ..., L\}} \left\|\boldsymbol{g}_{\ell}^{(T)}\right\| \Bigg)$} \\ \text{where $T \geq 1$, $L_{\text{frozen}}^{(0)}=0$, and $\alpha \in (0,1)$} \end{split} \end{equation} \vspace{-1.0em} where $g_{\ell}^{(T)}$ is the gradient for layer $\ell$ at iteration $T$, and $\left\|\boldsymbol{g}_{\ell}^{(T)}\right\|$ is its norm. The intuition behind the second term in the $\min$ function is that the layer with the smallest gradient norm converges first. To stabilize training, we enforce an upper bound $L_{\text{frozen}}^{(T-1)} + \alpha(L - L_{\text{frozen}}^{(T-1)})$ for the number of frozen layers, which is a geometric sequence containing a hyper-parameter $\alpha$. 
This essentially freezes an $\alpha$ fraction of the remaining active layers. To illustrate the impact of $\alpha$, we rewrite the equation as: $L_{\text{frozen}}^{(T)} = (1 - \alpha)^{T}[\frac{{\alpha}L}{1-\alpha} + \sum_{t=2}^{T}{\frac{{\alpha}L}{(1-\alpha)^t}}]$ (see Appendix for the derivation), and draw the curve of this function in Figure \ref{fig:freeze}. As we can see, a larger $\alpha$ leads to more aggressive layer freezing. Therefore, Equation \ref{eq:freeze} calculates the number of frozen layers at timestep $T$ using both the gradient norm and a tunable argument $\alpha$. \begin{figure}[h!] \vspace{-0.2cm} \centering \includegraphics[width=0.75\linewidth]{Figures/freeze.pdf} \vspace{-15pt} \caption{Freeze Algorithm Using Different $\alpha$ } \label{fig:freeze} \vspace{-0.3cm} \end{figure} The $\alpha$ parameter controls the trade-off between accuracy and training speed. This algorithm is also analogous to learning rate (LR) decay: both use a scheduler function during training and take the progress of training as an indicator. The difference is that the freeze algorithm above also takes the gradient norm into account, making it simple and effective. Other freezing strategies can easily be plugged into our training system. Indeed, we plan to investigate other strategies in our future work. \subsection{AutoPipe: Elastic Pipelining} \label{sec:auto_pipe} \begin{algorithm*}[h!]
\caption{\texttt{AutoPipe} Algorithm} \label{alg:transformation algorithm} \begin{multicols}{2} \footnotesize \begin{algorithmic}[1] \STATE {\bfseries Input:} model $\mathcal{F}$, layer number $L$ and $L_{\text{frozen}}$, pipeline length $K$, frozen layer cost factor $\lambda_{\text{frozen}}$ \STATE {\bfseries Return:} model $\mathcal{F}_{\text{frozen}}$, model $\mathcal{F}_{\text{pipe}}$, updated $K$; \STATE \code{\colorbox[gray]{0.85}{def m\_partition($\mathcal{F}$,$L$, $L_{\text{frozen}}$):}} \textit{//see \ref{sec:model_partition}} \STATE $\mathcal{F}_{\text{frozen}}=\code{Sequential()}$; model size $S_\text{frozen} = 0$ \STATE $\mathcal{F}_{\text{pipe}}=\code{Sequential()}$; per-layer size $S_\text{pipe} = \code{[]}$ \FOR{layer index = $L_{\text{frozen}}$ {\bfseries to} $L$} \STATE \colorbox{green!20}{${f_{\text{ATT}}}_i, {f_{\text{MLP}}}_i \leftarrow f_i $} \STATE $\mathcal{F}_{\text{pipe}}.\code{append}({f_{\text{ATT}}}_i); S_\text{pipe}.\code{append}(\code{m\_size}({f_{\text{ATT}}}_i))$ \STATE $\mathcal{F}_{\text{pipe}}.\code{append}({f_{\text{MLP}}}_i); S_\text{pipe}.\code{append}(\code{m\_size}({f_{\text{MLP}}}_i))$ \ENDFOR \STATE {\bfseries return} $\mathcal{F}_{\text{frozen}}$,$S_\text{frozen}$,$\mathcal{F}_{\text{pipe}}$,$S_\text{pipe}$ \STATE \colorbox[gray]{0.85}{\code{def load\_balance}($\mathcal{F}_{\text{pipe}}$, $S_\text{pipe}$, $K$):} \textit{//Section \ref{sec:model_partition}} \STATE $B_{L}$=\code{dict}(), $B_{S}$=\code{dict}() \textit{// balanced L and S} \STATE $L_{\text{assigned}} = 0$; $S_{\text{total}}$ = \code{sum}($S_\text{pipe}$) \FOR{partition index = $k$ {\bfseries to} $K$} \STATE \code{mean}=$S_{\text{total}}$/($K$ - $k$); \STATE \code{var=np.var}($S_\text{pipe}$[$L_{\text{assigned}}$:])/($K$ - $k$) \FOR{sublayer index i = $L_{\text{assigned}}$ {\bfseries to} \code{len}($S_\text{pipe}$)} \STATE $S_k$ = $S_\text{pipe}$[i] \STATE \code{criterion}=$B_{S}$[i]-$S_\text{frozen}$(1.0-\colorbox{red!18}{$\lambda_{\text{frozen}}$})+$S_k$ 
\IF{\code{criterion < mean + var}} \STATE $B_{S}$+=$S_k$; $B_{L}$+=1; $L_{\text{assigned}}$+=1; $S_{\text{total}}$-=$S_k$ \ELSE \STATE \code{break} \ENDIF \ENDFOR \ENDFOR \STATE {\bfseries return} $B_{L}$, $B_{S}$ \STATE $\mathcal{F}_{\text{frozen}}$,$S_\text{frozen}$,$\mathcal{F}_{\text{pipe}}$,$S_\text{pipe}$ = \code{m\_partition}($\mathcal{F}$,$L$, $L_{\text{frozen}}$) \WHILE{$K \geq 2$} \STATE $B_{L}$, $B_{S}$ = \code{load\_balance}($\mathcal{F}_{\text{pipe}}$, $S_\text{pipe}$, $K/2$) \STATE $B_{S}$[0] -= $S_\text{frozen}$(1.0 - $\lambda_{\text{frozen}}$); \STATE $M_{GPU}^{(T)}$ = \code{max}($B_{S}$) \textit{ //Equation \ref{eq:compression}} \IF{\colorbox{blue!18}{$M_{GPU}^{(T)} < M_{GPU}^{(0)}$}} \STATE \code{$K$=$K$/2} \ELSE \STATE break \ENDIF \ENDWHILE \STATE load $\mathcal{F}_{\text{frozen}}$ and $\mathcal{F}_{\text{pipe}}$ to $K$ GPUs using $B_{S}$ and $B_{L}$ \STATE \code{Pipe($\mathcal{F}_{\text{pipe}}$, chunks=\colorbox{yellow!20}{\code{get\_optimal\_chunks}}($K$))} \end{algorithmic} \end{multicols} \label{alg:autopipe} \vspace{-0.2cm} \end{algorithm*} Triggered by the freeze algorithm, \code{AutoPipe} can accelerate training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs. This section elaborates on the key components of \code{AutoPipe} that dynamically partition pipelines, minimize the number of pipeline devices and optimize mini-batch chunk size accordingly. Algorithm \ref{alg:transformation algorithm} presents the pseudo-code. 
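Before detailing these components, the progressive-freezing recurrence from the previous subsection can be sanity-checked with a few lines of Python (a simplified sketch with hypothetical names that omits the gradient-norm term and keeps layer counts real-valued rather than rounded):

```python
def frozen_layers(num_layers, alpha, num_freezes):
    """Progressive freezing: each freeze event freezes an alpha
    fraction of the remaining active layers."""
    frozen = 0.0
    for _ in range(num_freezes):
        frozen += alpha * (num_layers - frozen)
    return frozen
```

Unrolling the recurrence gives $L_{\text{frozen}}^{(T)} = L\,(1-(1-\alpha)^{T})$, which agrees with the closed form plotted in Figure \ref{fig:freeze} and makes explicit why a larger $\alpha$ freezes layers more aggressively.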
\subsubsection{Balanced Pipeline Partitioning} \label{sec:model_partition} \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{Figures/TransformerLayer.pdf} \vspace{-0.5cm} \caption{Partition boundary is in the middle of a skip connection} \label{fig:skip_connection} \vspace{-0.3cm} \end{figure} Balancing computation time across partitions is critical to pipeline training speed, as skewed workload distributions across stages can lead to stragglers, forcing devices with lighter workloads to wait (demonstrated in Section \ref{sec:speedup_breakdown}). However, maintaining optimally balanced partitions does not guarantee the fastest training speed because other factors also play a crucial role: 1. \textit{Cross-partition communication overhead.} Placing a partition boundary in the middle of a skip connection leads to additional communication, since tensors in the skip connection must now be copied to a different GPU. For example, with the BERT partitions in Figure \ref{fig:skip_connection}, partition $k$ must take intermediate outputs from both partition $k-2$ and partition $k-1$. In contrast, if the boundary is placed after the \code{addition} layer, the communication overhead between partitions $k-1$ and $k$ is visibly smaller. Our measurements show that cross-device communication is more expensive than slightly imbalanced partitions (see the Appendix). Therefore, we do not consider breaking skip connections (each attention layer ${f_{\text{ATT}}}_i$ and MLP layer ${f_{\text{MLP}}}_i$ is kept whole, highlighted in green at line 7 of Algorithm \ref{alg:autopipe}). 2. \textit{Frozen layer memory footprint.} During training, \code{AutoPipe} must recompute partition boundaries several times to balance two distinct types of layers: frozen layers and active layers. A frozen layer's memory cost is a fraction of an active layer's, given that frozen layers do not need backward activation maps, optimizer states, or gradients.
Instead of launching intrusive profilers to obtain thorough metrics on memory and computational cost, we define a tunable cost factor $\lambda_{\text{frozen}}$ to estimate the memory footprint ratio of a frozen layer over the same active layer. Based on empirical measurements on our experimental hardware, we set $\lambda_{\text{frozen}}$ to $\frac{1}{6}$. Given these two considerations, \code{AutoPipe} balances pipeline partitions by parameter size. More specifically, \code{AutoPipe} uses a greedy algorithm to allocate frozen and active layers such that partitioned sublayers are evenly distributed across $K$ GPU devices. Pseudocode is given as the \code{load\_balance()} function in Algorithm \ref{alg:autopipe}. The frozen layers are extracted from the original model and kept in a separate model instance $\mathcal{F}_{\text{frozen}}$ on the first device of a pipeline. Note that the partition algorithm employed in this paper is not the only option; \code{PipeTransformer} is modularized to work with any alternatives. \subsubsection{Pipeline Compression} \label{sec:pipe_compression} Pipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduces the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of the pipeline at timestep $T=0$. To avoid extensive memory profiling, the compression algorithm uses the parameter size as a proxy for the training memory footprint.
Based on this simplification, the criterion of pipeline compression is as follows: \vspace{-0.5em} \begin{equation} \label{eq:compression} \begin{split} \text{compress the pipeline if } M_{GPU}^{(T)} \leq M_{GPU}^{(0)} \\ \text{where } M_{GPU}^{(T)} \Leftrightarrow \max _{k \in \{0, \cdots, K-1\} } S_{p_k} \end{split} \end{equation} Once the freeze notification is received, \code{AutoPipe} will always attempt to divide the pipeline length $K$ by 2 (e.g., from 8 to 4, then 2). By using $\frac{K}{2}$ as the input, the compression algorithm can verify whether the result satisfies the criterion in Equation \ref{eq:compression}. Pseudocode is shown in lines 25-33 of Algorithm \ref{alg:autopipe}. Note that this compression makes the acceleration ratio increase \textit{exponentially} during training, meaning that if a GPU server has a larger number of GPUs (e.g., more than 8), the acceleration ratio will be further amplified. \vspace{-0.5em} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{Figures/bubble.pdf} \vspace{-2em} \caption{Pipeline Bubble: $F_{d,b}$, $B_{d, b}$, and $U_d$ denote forward, backward, and the optimizer update of micro-batch $b$ on device $d$, respectively. The total bubble size in each iteration is $(K-1)$ times the per-micro-batch forward and backward cost.} \label{fig:bubble} \end{figure} \vspace{-1em} Additionally, such a technique can also speed up training by shrinking the size of pipeline bubbles. To explain bubble sizes in a pipeline, Figure~\ref{fig:bubble} depicts how 4 micro-batches run through a 4-device pipeline ($K = 4$). In general, the total bubble size is $(K-1)$ times the per-micro-batch forward and backward cost (for further explanation, please refer to the Appendix). Therefore, it is clear that shorter pipelines have smaller bubble sizes. \subsubsection{Dynamic Number of Micro-batches} \label{sec:micro_batches} Prior pipeline parallel systems use a fixed number of micro-batches per mini-batch ($M$).
\code{GPipe} suggests $M \geq 4 \times K$, where $K$ is the number of partitions (pipeline length). However, given that \code{PipeTransformer} dynamically configures $K$, we find it sub-optimal to maintain a static $M$ during training. Moreover, when integrated with \code{DDP}, the value of $M$ also has an impact on the efficiency of \code{DDP} gradient synchronization. Since \code{DDP} must wait for the last micro-batch to finish its backward computation on a parameter before launching its gradient synchronization, finer micro-batches lead to a smaller overlap between computation and communication (see the Appendix for an illustration). Hence, instead of using a static value, \code{PipeTransformer} searches for the optimal $M$ on the fly in the hybrid \code{DDP} environment by enumerating $M$ values ranging from $K$ to $6K$. For a specific training environment, the profiling only needs to be done once (see Algorithm~\ref{alg:autopipe} line 35). Section~\ref{sec:experiments} will provide performance analyses of $M$ selections. \subsection{AutoDP: Spawning More Pipeline Replicas} \label{sec:auto_dp} As \code{AutoPipe} compresses the same pipeline into fewer GPUs, \code{AutoDP} can automatically spawn new pipeline replicas to increase data-parallel width. Despite the conceptual simplicity, subtle dependencies on communications and states require careful design. The challenges are threefold: 1. \code{DDP} \textit{Communication}: Collective communications in PyTorch \code{DDP} require static membership, which prevents new pipelines from connecting with existing ones; 2. \textit{State Synchronization}: newly activated processes must be consistent with existing pipelines in the training progress (e.g., epoch number and learning rate), weights and optimizer states, the boundary of frozen layers, and pipeline GPU range; 3. \textit{Dataset Redistribution}: the dataset should be re-balanced to match a dynamic number of pipelines.
This not only avoids stragglers but also ensures that gradients from all DDP processes are equally weighted. \begin{figure}[h!] \vspace{-0.4cm} \centering \includegraphics[width = 1 \linewidth]{Figures/audo_dp.pdf} \vspace{-0.8cm} \caption{AutoDP: handling dynamic data parallelism with messaging between double process groups (processes 0-7 belong to machine 0, while processes 8-15 belong to machine 1)} \label{fig:autodp} \vspace{-0.33cm} \end{figure} To tackle these challenges, we create double communication process groups for \code{DDP}. As in the example shown in Figure \ref{fig:autodp}, the message process group (purple) is responsible for light-weight control messages and covers all processes, while the active training process group (yellow) only contains active processes and serves as a vehicle for heavy-weight tensor communications during training. The message group remains static, whereas the training group is dismantled and reconstructed to match active processes. In T0, only processes \code{0} and \code{8} are active. During the transition to T1, process \code{0} activates processes \code{1} and \code{9} (newly added pipeline replicas) and synchronizes the necessary information mentioned above using the message group. The four active processes then form a new training group, allowing static collective communications to adapt to dynamic memberships. To redistribute the dataset, we implement a variant of \code{DistributedSampler} that can seamlessly adjust data samples to match the number of active pipeline replicas. The above design also naturally helps to reduce \code{DDP} communication overhead. More specifically, when transitioning from T0 to T1, processes \code{0} and \code{1} destroy the existing \code{DDP} instances, and active processes construct a new \code{DDP} training group using $\mathcal{F}_{\text{pipe}}$ (\code{AutoPipe} stores $\mathcal{F}_{\text{frozen}}$ and $\mathcal{F}_{\text{pipe}}$ separately, introduced in Section \ref{sec:model_partition}).
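A minimal sketch of this dataset re-balancing (pure Python for illustration; the actual implementation subclasses PyTorch's \code{DistributedSampler} and additionally handles shuffling):

```python
def repartition(num_samples, num_replicas, rank):
    """Assign sample indices to one pipeline replica so that every
    active replica owns the same number of samples (padding by
    wrap-around), keeping DDP gradient averaging equally weighted."""
    indices = list(range(num_samples))
    pad = (-num_samples) % num_replicas  # extra samples needed for an even split
    indices += indices[:pad]             # reuse leading samples as padding
    return indices[rank::num_replicas]
```

When \code{AutoDP} doubles the number of replicas, each process simply recomputes its shard with the new replica count and its new rank.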
A discussion of communication cost can be found in the Appendix. \subsection{AutoCache: Cross-pipeline Caching} \label{sec:auto_cache} Caching activation maps from frozen layers can help further speed up training. This idea appears straightforward, but several caveats must be carefully addressed. \begin{figure}[h!] \centering \includegraphics[width = 0.75 \linewidth]{Figures/caching.pdf} \vspace{-0.2cm} \caption{AutoCache} \label{fig:autocache} \vspace{-0.6cm} \end{figure} \textbf{Cross-process caching.} The cache must be shared across processes in real time, as creating and warming up a dedicated cache for each model replica slows down training. This is achieved by spawning a dedicated daemon process that holds the cache in shared memory, which all training processes can access in real time. Figure~\ref{fig:autocache} shows an example of the transition from T1 to T2, assuming T1 freezes 3 layers and T2 freezes 4 additional layers, leaving 5 layers active in T2. Immediately after the transition by \code{AutoDP}, the cache still holds cached activations from layer 3, which must be replaced by activations from layer 7. Therefore, all processes read their corresponding activations from the cache, feed them to the next 4 layers to compute activations for layer 7, then replace the existing cache with the new activations for their samples accordingly. In this way, \code{AutoCache} can gradually update cached activations without running any sample through any frozen layer twice. When the activations are too large to reside in CPU memory, \code{AutoCache} will also swap them to disk and perform pre-fetching automatically. More details on the cross-process cache design can be found in the Appendix. \textbf{Timing of caching} is also important, as the cache can be slower than running the real forward propagation, especially if frozen layers are few and activations are large.
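This timing decision can be reduced to a simple cost comparison, sketched below with hypothetical names; in \code{AutoCache}, the two costs are measured by a profiler on the actual hardware rather than passed in:

```python
def should_enable_caching(num_frozen, t_forward_per_layer, t_cache_read):
    """Use cached activations only when reading them is cheaper than
    recomputing the frozen layers' forward pass (simplified cost model)."""
    return t_cache_read < num_frozen * t_forward_per_layer
```

With few frozen layers and large activations, the cache-read cost dominates and caching stays disabled.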
To ensure that our training system can adapt to different hardware, model architecture, and batch size settings, \texttt{AutoCache} also contains a profiler that helps evaluate the appropriate timing to enable caching, and it only employs cached activations when the profiler suggests caching can speed up the forward pass. Performance analysis is provided in Section \ref{sec:timing_of_caching}. \section{Experiments} \label{sec:experiments} This section first summarizes the experimental setup and then evaluates \code{PipeTransformer} on computer vision and natural language processing tasks. More comprehensive results can be found in the Appendix. \vspace{-0.2cm} \subsection{Setup} \paragraph{Hardware.} Experiments were conducted on 2 identical machines connected by InfiniBand CX353A ($5$GB/s), where each machine is equipped with 8 NVIDIA Quadro RTX 5000 GPUs (16GB GPU memory). GPU-to-GPU bandwidth within a machine (PCIe 3.0, 16 lanes) is $15.754$GB/s. \vspace{-0.3cm} \paragraph{Implementation.} We used \code{PyTorch Pipe} as a building block, which had not yet been officially released at the time of writing this paper. Hence, we used the developer version \code{1.8.0.dev20201219}. The BERT model definition, configuration, and related tokenizer are from \code{HuggingFace 3.5.0}. We implemented Vision Transformer in PyTorch by following its TensorFlow implementation. More details can be found in our source code. \vspace{-0.3cm} \paragraph{Models and Datasets.} Experiments employ two representative Transformers in CV and NLP: Vision Transformer (ViT) and BERT. ViT was run on an image classification task, initialized with pre-trained weights on ImageNet21K and fine-tuned on ImageNet and CIFAR-100.
BERT was run on two tasks: text classification on the SST-2 dataset from the General Language Understanding Evaluation (GLUE) benchmark, and question answering on the SQuAD v1.1 dataset (Stanford Question Answering), which is a collection of 100k crowdsourced question/answer pairs. \vspace{-0.3cm} \paragraph{Training Schemes.} Given that large models would normally require thousands of GPU-days (\emph{e.g.}, GPT-3) if trained from scratch, fine-tuning on downstream tasks using pre-trained models has become a trend in the CV and NLP communities. Moreover, \texttt{PipeTransformer}\ is a complex training system that involves multiple core components. Thus, for the first version of \texttt{PipeTransformer}\ system development and algorithmic research, it is not cost-efficient to develop and evaluate from scratch using large-scale pretraining. Therefore, the experiments presented in this section focus on pre-trained models. Note that since the model architectures in pre-training and fine-tuning are the same, \code{PipeTransformer} can serve both. We discuss pre-training results in the Appendix. \vspace{-0.3cm} \paragraph{Baseline.} Experiments in this section compare \code{PipeTransformer} to the state-of-the-art framework, a hybrid scheme of \code{PyTorch Pipe} (PyTorch’s implementation of GPipe~\cite{gpipe}) and \code{PyTorch DDP}. Since this is the first paper that studies accelerating distributed training by freezing layers, there are no perfectly aligned counterpart solutions yet. \vspace{-0.3cm} \paragraph{Hyper-parameters.} Experiments use ViT-B/16 (12 transformer layers, $16 \times 16$ input patch size) for ImageNet and CIFAR-100, BERT-large-uncased (24 layers) for SQuAD 1.1, and BERT-base-uncased (12 layers) for SST-2. With \code{PipeTransformer}, ViT and BERT training can set the per-pipeline batch size to around 400 and 64, respectively. Other hyperparameters (e.g., epochs, learning rate) for all experiments are presented in the Appendix.
\subsection{Overall Training Acceleration} We summarize the overall experimental results in Table \ref{table:speedup_cv}. Note that the speedup we report is based on a conservative $\alpha$ value ($\frac{1}{3}$) that obtains comparable or even higher accuracy. A more aggressive $\alpha$ ($\frac{2}{5}$, $\frac{1}{2}$) can obtain a higher speedup but may lead to a slight loss in accuracy (see Section \ref{sec:freeze_alpha_setting}). Note that the model size of BERT (24 layers) is larger than ViT-B/16 (12 layers), thus it takes more time for communication (see Section \ref{sec:communication_cost} for details). \begin{table}[h!] \caption{Speedup for ViT and BERT Training} \label{table:speedup_cv} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{lccccc} \toprule & \multicolumn{2}{c}{\textbf{Baseline}} & \multicolumn{2}{c}{\textbf{PipeTransformer}} & \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \multirow{2}{*}{Dataset}& \multirow{2}{*}{Accuracy} & Training & \multirow{2}{*}{Accuracy} & Training & Training \\ & & time & & time & Speedup \\ \midrule ImageNet & 80.83 $\pm$ 0.05 & 26h 30m & 82.18 $\pm$ 0.32 & 9h 21m & \textbf{\large{2.83}} $\times$\\ CIFAR-100 & 91.21 $\pm$ 0.07 & 35m 6s & 91.33 $\pm$ 0.05 & 12m 23s & 2.44 $\times$\\ SQuAD 1.1 & 90.71 $\pm$ 0.18 & 5h 7m & 90.69 $\pm$ 0.23 & 2h 26m & 2.10 $\times$\\ \bottomrule \end{tabular} } \end{center} \begin{tablenotes} \footnotesize \item *Note: 1. the accuracy is the mean and variance of three independent runs with the same random seed; 2. the training time among different runs is relatively stable (the gap is less than 1 minute); 3. GLUE (SST-2)'s training time is only a few minutes, thus we mainly used it for debugging and do not report its result; 4. accuracy metrics: ImageNet/CIFAR-100: top-1 accuracy; SQuAD: F1 score.
\end{tablenotes} \vspace{-0.5cm} \end{table} \subsection{Performance Analysis} This section presents evaluation results and analyzes the performance of different components in \texttt{PipeTransformer}. More experimental results can be found in the Appendix. \subsubsection{Speedup Breakdown} \label{sec:speedup_breakdown} \begin{figure}[h!] \subfigure{{\includegraphics[width=0.95\linewidth]{Figures/legend.pdf}}} \setcounter{subfigure}{0} \setlength{\abovecaptionskip}{0pt} \subfigure[\label{fig:throughput} Sample Throughput] {{\includegraphics[width=0.48\linewidth]{Figures/throughput.pdf}}} \subfigure[Speedup Ratio Comparison] {{\includegraphics[width=0.47\linewidth]{Figures/speedup.pdf}}} \setlength{\belowcaptionskip}{-0.5cm} \vspace{-0.3cm} \caption{Speedup Breakdown (ViT on ImageNet)} \label{fig:breakdown} \vspace{-0.3cm} \end{figure} To understand the efficacy of all four components and their impact on training speed, we experimented with different combinations and used their training sample throughput (samples/second) and speedup ratio as metrics. Results are illustrated in Figure \ref{fig:breakdown}. Key takeaways from these experimental results are: 1. the main speedup is the result of elastic pipelining, which is achieved through the joint use of \code{AutoPipe} and \code{AutoDP}; 2. \code{AutoCache}'s contribution is amplified by \code{AutoDP}; 3. freeze training alone, without system-wide adjustment, even degrades training speed (discussed in Section \ref{sec:auto_pipe}). We provide additional explanations of these results in the Appendix.
\begin{figure*}[htb] \setcounter{subfigure}{0} \subfigure[\label{fig:fig1a} Tuning $\alpha$ in Freeze Algorithm] {{\includegraphics[width=0.35\textwidth]{Figures/alpha_acc_speedup.pdf}}} \subfigure[Profiling Optimal Chunk Number] {{\includegraphics[width=0.36\textwidth]{Figures/throughput_M.pdf}}} \subfigure[\label{fig:fig1c} Timing of Caching] {{\includegraphics[width=0.28\textwidth]{Figures/timing_caching.pdf}}} \setlength{\belowcaptionskip}{-1cm} \vspace{-0.5cm} \caption{\textcolor{black}{Some Results of Performance Analysis}} \label{fig:performance_analysis} \vspace{-0.4cm} \end{figure*} \subsubsection{Communication Cost} \label{sec:communication_cost} We also analyzed how communication and computation contribute to the overall training time. Since \code{PyTorch DDP} overlaps communication with computation, the time difference between a local training iteration and a distributed training iteration does not faithfully represent the communication delay. Moreover, as DDP also organizes parameters into buckets and launches an \code{AllReduce} for each bucket, recording the start and finish time of overall communications also falls short, as there can be time gaps between buckets. To correctly measure DDP communication delay, we combined the DDP communication hook with the \code{CUDAFuture} callback. More details of this measurement are documented in the Appendix. Key takeaways: 1. larger models cost more time on communication (BERT on SQuAD); 2. a higher cross-machine bandwidth can further speed up training, especially for larger models. \begin{table}[h!] \vspace{-0.5cm} \caption{Communication Cost vs.
Computational Cost} \vspace{-0.2cm} \label{table:communication_ratio} \begin{center} \resizebox{\linewidth}{!}{ \begin{threeparttable} \begin{tabular}{lcccc} \toprule \multirow{2}{*}{\textbf{Dataset}} & \textbf{Overall} & \textbf{Communication} & \textbf{Computation} & \textbf{Communication}\\ & \textbf{Cost} & \textbf{Cost} & \textbf{Cost} & \textbf{Cost Ratio}\\ \midrule ImageNet & 9h 21m & 34m& 8h 47m & 5.9 \% \\ SQuAD & 2h 26m & 16m 33s & 2h 9m & 8.8\% \\ \bottomrule \end{tabular} \end{threeparttable}} \end{center} \vspace{-0.5cm} \end{table} \subsubsection{Tuning $\alpha$ in Freezing Algorithm} \label{sec:freeze_alpha_setting} We ran experiments to show how $\alpha$ in the freeze algorithm influences training speed. The results clearly demonstrate that a larger $\alpha$ (excessive freezing) leads to a greater speedup but suffers from a slight performance degradation. In the case shown in Figure~\ref{fig:performance_analysis}(a), where $\alpha=1/5$, freeze training outperforms normal training and obtains a $2.04$-fold speedup. We provide more results in the Appendix. \subsubsection{Optimal Chunks in Elastic Pipelining} We profiled the optimal number of micro-batches $M$ for different pipeline lengths $K$. Results are summarized in Figure~\ref{fig:performance_analysis}(b). As we can see, different $K$ values lead to different optimal $M$ values, and the throughput gaps across different $M$ values are large (as shown when $K=8$), which confirms the necessity of an anterior profiler in elastic pipelining. \subsubsection{Understanding the Timing of Caching} \label{sec:timing_of_caching} To evaluate \code{AutoCache}, we compared the sample throughput of a training job that activates \code{AutoCache} from epoch $0$ (blue) with a training job without \code{AutoCache} (red). Figure \ref{fig:performance_analysis}(c) shows that enabling caching too early can slow down training, as caching can be more expensive than forward propagation on a small number of frozen layers.
After freezing more layers, caching activations clearly outperforms the corresponding forward propagation. As a result, \code{AutoCache} uses a profiler to determine the proper timing to enable caching. In our system, for ViT (12 layers), caching starts from 3 frozen layers, while for BERT (24 layers), caching starts from 5 frozen layers. \section{Related Works} \input{Sections/related} \section{Discussion} We defer the discussion section to the Appendix, where we discuss \textit{pretraining vs. fine-tuning}, \textit{designing better freeze algorithms}, and the \textit{versatility} of our approach. \section{Conclusion} This paper proposes \texttt{PipeTransformer}, a holistic solution that combines elastic pipeline parallelism and data parallelism for distributed training. More specifically, \texttt{PipeTransformer}\ incrementally freezes layers in the pipeline, packs remaining active layers into fewer GPUs, and forks more pipeline replicas to increase the data-parallel width. Evaluations on ViT and BERT models show that, compared to the state-of-the-art baseline, \texttt{PipeTransformer}\ attains up to a $2.83\times$ speedup without accuracy loss. \nocite{langley00}
\section{Introduction \label{section:intro}} In the past decades, significant progress \cite{zhao2017pspnet, chen2018deeplab,xiao2018unified,WangSCJDZLMTWLX19,YuanCW20,tao2020hierarchical,mohan2020efficientps} in semantic segmentation has been achieved with Deep Convolutional Neural Networks. Empirical observations \cite{raffel2019exploring, xie2019self} demonstrate that the leading performance is partially attributed to large volumes of training data; supervised learning thus requires dense pixel-level annotations, which are laborious and time-consuming to obtain. To avoid this painstaking task, researchers resort to training segmentation models on synthetic but photo-realistic large-scale datasets such as GTA5~\cite{richter2016playing} and SYNTHIA~\cite{ros2016synthia} with computer-generated annotations. However, due to cross-domain differences, these well-trained models usually undergo significant performance drops when tested on realistic datasets (e.g., Cityscapes~\cite{cordts2016cityscapes}). Therefore, unsupervised domain adaptation (UDA) methods have been widely adopted to mitigate the domain shift between rich-labeled source data (synthetic images) and unlabeled target data (real images). \begin{figure}[t] \centering \subfloat[Illustration of adaptation in domain-$\mathcal{T}$.]{\includegraphics[align=c,width=3.1in]{Figures/Single_domain_T.pdf}} \vspace{-0.1cm} \subfloat[Illustration of adaptation in domain-$\mathcal{S}$.]{\includegraphics[align=c,width=3.1in]{Figures/Single_domain_S.pdf}} \caption{Illustration of single-domain adaptation pipelines. $\mathcal{S}$ is a source image with ground-truth label $Y_{\mathcal{S}}$, and $\mathcal{T}$ is a target image. $G_{\mathcal{S}\rightarrow\mathcal{T}}$ represents image translation from domain-$\mathcal{S}$ to domain-$\mathcal{T}$ and vice versa.
$\mathcal{S}' = G_{\mathcal{S}\rightarrow\mathcal{T}}(\mathcal{S})$ and $\mathcal{T}' = G_{\mathcal{T}\rightarrow\mathcal{S}}(\mathcal{T})$ are translated images in the corresponding domain. $M_{\mathcal{S}}$ and $M_{\mathcal{T}}$ are semantic segmentation models in domain-${\mathcal{S}}$ and domain-${\mathcal{T}}$, respectively. $\hat{Y}_{\mathcal{T}}$ and $\hat{Y}_{\mathcal{T}'}$ represent the corresponding pseudo labels of $\mathcal{T}$ and $\mathcal{T}'$. Red dashed rectangles denote that visual inconsistency raised by image translation disturbs domain adaptation learning in either the supervised part or the SSL part.} \label{fig:flowchat} \vspace{-0.7cm} \end{figure} Two commonly used paradigms in unsupervised domain adaptive segmentation are image-to-image translation based methods~\cite{murez2018image, hoffman2018cycada} and self-supervised learning (SSL) based methods~\cite{zou2018unsupervised, zou2019confidence, zhang2019category,Two-phase}. The most common practice for image-to-image translation based methods is to translate synthetic data from the source domain (denoted as domain-$\mathcal{S}$) to the target domain (denoted as domain-$\mathcal{T}$)~\cite{hoffman2018cycada,chang2019all} to reduce the visual gap between the two domains. Adaptive segmentation is then trained on the translated synthetic data. However, applying image-to-image translation alone to the domain adaptation task always yields unsatisfying results. One of the leading factors is that image-to-image translation may change the image content unintentionally and introduce \emph{visual inconsistency} between raw images and translated images. Training on translated images with the uncorrected ground-truth labels of source images introduces noise that disturbs domain adaptation learning. A combination of SSL and image-to-image translation~\cite{li2019bidirectional, yang2020label, kim2020learning} has demonstrated great effectiveness in the UDA field.
SSL utilizes a well-trained segmentation model to generate a set of pseudo labels with high confidence for unlabeled target data; the adaptive segmentation training can then be divided into two parallel parts, namely a supervised part (training is performed on source data with ground-truth labels) and an SSL part (training is performed on target data with pseudo labels). In this paradigm, the most prevalent practice is to perform adaptation to align a single domain well, i.e., either the source domain (named domain-$\mathcal{S}$ adaptation)~\cite{li2019bidirectional, kim2020learning} or the target domain (named domain-$\mathcal{T}$ adaptation)~\cite{yang2020label}. However, both domain-$\mathcal{S}$ and domain-$\mathcal{T}$ adaptation heavily rely on the quality of image-to-image translation models, where visual inconsistency is always unavoidable. For domain-$\mathcal{T}$ adaptation (as shown in Figure~\ref{fig:flowchat}(a)), visual inconsistency brings in misalignment between translated source images and uncorrected ground-truth labels, which disturbs the supervised part. In contrast, domain-$\mathcal{S}$ adaptation (as shown in Figure~\ref{fig:flowchat}(b)) avoids image translation on source images, but simultaneously introduces visual inconsistency between target images and the corresponding translated images. Defective pseudo labels generated from unaligned images disturb the SSL part. Notice that the above single-domain adaptation pipelines are almost complementary in terms of the two training parts, i.e., visual inconsistency caused by image translation disturbs the training of the supervised part in domain-$\mathcal{T}$ adaptation and the SSL part in domain-$\mathcal{S}$ adaptation. In contrast, the SSL part in domain-$\mathcal{T}$ adaptation and the supervised part in domain-$\mathcal{S}$ adaptation are unaffected.
It is natural to raise a question: \emph{could we combine these two complementary adaptation pipelines into a single framework to make good use of each one's strength and make them promote each other?} Based on this idea, we propose the \emph{dual path learning} framework, which considers two pipelines from opposite domains to alleviate the unavoidable visual inconsistency raised by image translations. We name the two paths used in our framework path-$\mathcal{T}$ (adaptation is performed in domain-$\mathcal{T}$) and path-$\mathcal{S}$ (adaptation is performed in domain-$\mathcal{S}$), respectively. Path-$\mathcal{S}$ assists path-$\mathcal{T}$ in learning precise supervision from source data. Meanwhile, path-$\mathcal{T}$ guides path-$\mathcal{S}$ to generate high-quality pseudo labels, which are important for SSL, in return. It is worth noting that path-$\mathcal{S}$ and path-$\mathcal{T}$ are not two separate pipelines in our framework; interactions between the two paths are performed throughout training, which is demonstrated to be effective in our experiments. The whole system forms a closed learning loop. Once training has finished, we only retain a single segmentation model well aligned in the target domain for testing; no extra computation is required. The main contributions of this work are summarized as: \begin{itemize} \item We present a novel dual path learning (DPL) framework for domain adaptation of semantic segmentation. DPL employs two complementary and interactive single-domain pipelines (namely path-$\mathcal{T}$ and path-$\mathcal{S}$) in the training phase. At test time, only a single segmentation model well aligned in the target domain is used. The proposed DPL framework surpasses state-of-the-art methods on representative scenarios. \item We present two interactive modules to make the two paths promote each other, namely dual path image translation and dual path adaptive segmentation.
\item We introduce a novel warm-up strategy for the segmentation models which helps adaptive segmentation in the early training stage. \end{itemize} \section{Related Work} {\noindent \textbf{Domain Adaptation.}}\hspace{3pt} Domain adaptation is a broadly studied topic in computer vision. It aims to rectify the mismatch across domains and tune the models toward better generalization at test time~\cite{patel2015visual}. A variety of domain adaptation methods for image classification~\cite{saito2017maximum,Chen_2019_CVPR,tzeng2017adversarial,kang2019contrastive} and object detection~\cite{chen2018domain,bhattacharjee2020dunit} have been proposed. In this paper, we focus on the unsupervised domain adaptation of semantic segmentation. \begin{figure*}[t] \centering \subfloat[Training pipeline of DPL.]{\includegraphics[align=c,width=0.85\linewidth]{Figures/Pipeline_Overall_train.pdf}} \hspace{0.2cm} \subfloat[Testing.]{\includegraphics[align=c,width=0.1\linewidth]{Figures/Pipeline_Overall_test.pdf} } \caption{(a) Overview of DPL framework. Inputs are highlighted by orange rectangles. DPL consists of two complementary single-domain paths: path-$\mathcal{S}$ (learning is performed in \emph{source} domain) and path-$\mathcal{T}$ (learning is performed in \emph{target} domain). Dual path image translation (DPIT) and dual path adaptive segmentation (DPAS) are proposed to make the two paths interactive and promote each other. In DPIT, unpaired image translation models ($G_{\mathcal{T}\rightarrow\mathcal{S}}$ and $G_{\mathcal{S}\rightarrow\mathcal{T}}$) are supervised by a general GAN loss and a cross-domain perceptual loss. DPAS employs the proposed dual path pseudo label generation (DPPLG) module to produce pseudo labels $\hat{Y}_{*}$ of target images; then segmentation models ($M_{\mathcal{S}}$ and $M_{\mathcal{T}}$) are trained on both source images (or translated source images) with ground-truth labels and target images (or translated target images) with pseudo labels.
(b) Testing of DPL. Only $M_{\mathcal{T}}$ is used for inference.} \label{fig:DPL_flowchat} \vspace{-0.5cm} \end{figure*} {\noindent \textbf{Domain Adaptation for Semantic Segmentation.}}\hspace{3pt} Semantic segmentation needs a large volume of pixel-level labeled training data, which is laborious and time-consuming to annotate. A promising solution to reduce the labeling cost is to train segmentation networks on synthetic datasets (e.g., GTA5~\cite{richter2016playing} and SYNTHIA~\cite{ros2016synthia}) with computer-generated annotations before testing on realistic datasets (e.g., Cityscapes~\cite{cordts2016cityscapes}). Although synthetic images have a similar appearance to real images, there still exist domain discrepancies in terms of layouts, colors and illumination conditions, which always cripple the models' performance. Domain adaptation is necessary to align the synthetic and the real datasets~\cite{wu2018dcan,zou2018unsupervised,zhao2019madan,kang2020pixel}. Adversarial-based methods~\cite{hoffman2016fcns, long2018transferable,tsai2018learning} are broadly explored in unsupervised domain adaptation; they align different domains at image-level~\cite{murez2018image,hoffman2018cycada,wu2018dcan} or feature-level~\cite{tsai2018learning,huang2020contextual}. The image-level adaptation regards domain adaptation as an image synthesis problem, and aims to reduce visual discrepancy (e.g., lighting and object texture) across domains with unpaired image-to-image translation models~\cite{CycleGAN2017,liu2017unsupervised,park2020contrastive}. However, the performance is always unsatisfactory when simply applying image translation to the domain adaptation task. One reason is that image-to-image translation may change the image content involuntarily and further disturb the following segmentation training~\cite{li2019bidirectional}.
In recent years, self-supervised learning (SSL)~\cite{grandvalet2005semi,zhu2007semi} has shown tremendous potential in adaptive segmentation~\cite{zou2018unsupervised,zou2019confidence,subhani2020learning,Two-phase}. The key principle of these methods is to generate a set of pseudo labels for target images as an approximation to the ground-truth labels; the segmentation model is then updated by leveraging target domain data with pseudo labels. CRST~\cite{zou2018unsupervised} is the first work to introduce self-training into adaptive segmentation; it also alleviates the category imbalance issue by controlling the proportion of selected pseudo labels in each category. The recent TPLD~\cite{Two-phase} proposes a two-phase pseudo label densification strategy to obtain dense pseudo labels for SSL. Two works~\cite{li2019bidirectional, yang2020label}, which explore the combination of image translation and SSL, are closely related to ours. Label-Driven~\cite{yang2020label} performs a target-to-source translation, and a label-driven reconstruction module is used to reconstruct source and target images from the corresponding predicted labels. In contrast, BDL~\cite{li2019bidirectional} represents a bidirectional learning framework which alternately trains the image translation and the adaptive segmentation in the target domain. Meanwhile, BDL utilizes a single-domain perceptual loss to maintain visual consistency. We will demonstrate that this kind of design is suboptimal compared with the proposed dual path image translation module in Section~\ref{section:DPIT}. These two works demonstrate that the combination of image translation and SSL can promote adaptive learning.
Different from these single-domain adaptation methods, the proposed dual path learning framework integrates two complementary single-domain pipelines in an interactive manner to address the visual inconsistency problem by: 1) utilizing segmentation models aligned in different domains to provide cross-domain perceptual supervision for image translation; 2) combining knowledge from both source and target domain for self-supervised learning. \section{Method} We are given the source dataset $\mathcal{S}$ (synthetic data) with pixel-level segmentation labels $Y_\mathcal{S}$, and the target dataset $\mathcal{T}$ (real data) with no labels. The goal of unsupervised domain adaptation (UDA) is that, by only using $\mathcal{S}$, $Y_\mathcal{S}$ and $\mathcal{T}$, the segmentation performance can be on par with a model trained on $\mathcal{T}$ with the corresponding ground-truth labels $Y_\mathcal{T}$. The domain gap between $\mathcal{S}$ and $\mathcal{T}$ makes it difficult for the network to learn transferable knowledge at once. To address this problem, we propose a novel dual path learning framework named DPL. As shown in Figure~\ref{fig:DPL_flowchat}.(a), DPL consists of two complementary and interactive paths: path-$\mathcal{S}$ (adaptive learning is performed in the \emph{source} domain) and path-$\mathcal{T}$ (adaptive learning is performed in the \emph{target} domain). Allowing each path to provide positive feedback to the other is the key to success. To achieve this goal, we propose two modules, namely dual path image translation (DPIT) and dual path adaptive segmentation (DPAS). DPIT aims to reduce the visual gap between different domains without introducing visual inconsistency. In our design, DPIT unites general unpaired image translation models with dual perceptual supervision from two single-domain segmentation models.
Note that any unpaired image translation model can be used in DPIT; we use CycleGAN~\cite{CycleGAN2017} as our default model due to its popularity and because it provides bidirectional image translation inherently. We use $\mathcal{T}'=G_{\mathcal{T}\rightarrow\mathcal{S}}(\mathcal{T})$ and $\mathcal{S}'=G_{\mathcal{S}\rightarrow\mathcal{T}}(\mathcal{S})$ to denote translated images in path-$\mathcal{S}$ and path-$\mathcal{T}$ respectively, where $G_{\mathcal{T}\rightarrow\mathcal{S}}$ and $G_{\mathcal{S}\rightarrow\mathcal{T}}$ are the image translation models in the corresponding path. DPAS utilizes translated images from DPIT and the proposed dual path pseudo label generation (DPPLG) module to generate high-quality pseudo labels for target images; then segmentation models $M_{\mathcal{S}}$ (in path-$\mathcal{S}$) and $M_{\mathcal{T}}$ (in path-$\mathcal{T}$) are trained with both transferred knowledge in the source domain and implicit supervision in the target domain. The testing of DPL is extremely simple: we only retain $M_{\mathcal{T}}$ for inference, as shown in Figure~\ref{fig:DPL_flowchat}.(b). The training process of DPL consists of two phases: single-path warm-up and DPL training. DPL benefits from well-initialized $M_{\mathcal{S}}$ and $M_{\mathcal{T}}$, since both DPIT and DPAS rely on the quality of the segmentation models. A simple but efficient warm-up strategy can accelerate the convergence of DPL. Once the warm-up finishes, DPIT and DPAS are trained sequentially in the DPL training phase. In this section, we first describe our warm-up strategy in Section~\ref{section:warm_phase}. Then, we introduce the key components of DPL: DPIT in Section~\ref{section:DPIT} and DPAS in Section~\ref{section:DPSA}. Next, we revisit and summarize the whole training process in Section~\ref{section:trainpipeline}. Finally, the testing pipeline of DPL is presented in Section~\ref{section:testpipeline}.
\subsection{Single Path Warm-up\label{section:warm_phase}} Perceptual supervision in DPIT and pseudo label generation in DPAS rely on the quality of the segmentation models. To accelerate the convergence of DPL, a warm-up process for segmentation models $M_{\mathcal{S}}$ and $M_{\mathcal{T}}$ is required. \noindent\textbf{$\boldsymbol{M_{\mathcal{S}}}$ Warm-up.} The warm-up for ${M_{\mathcal{S}}}$ is easily conducted in a fully supervised way by using the source dataset $\mathcal{S}$ with ground-truth labels $Y_\mathcal{S}$. \noindent\textbf{$\boldsymbol{M_{\mathcal{T}}}$ Warm-up.} It is difficult to directly train $M_{\mathcal{T}}$ in a supervised manner since no labels can be accessed in the target dataset $\mathcal{T}$. A straightforward idea is to translate source images $\mathcal{S}$ to the target domain by using naive CycleGAN, and then train $M_{\mathcal{T}}$ on translated images $\mathcal{S}'$ with approximate ground-truth labels $Y_{\mathcal{S}}$. Unfortunately, naive CycleGAN does not apply any constraints to preserve visual consistency between $\mathcal{S}$ and $\mathcal{S'}$, i.e., visual content may be changed when $\mathcal{S}$ is translated to $\mathcal{S'}$. Misalignment between $\mathcal{S'}$ and $Y_{\mathcal{S}}$ can disturb the training of $M_{\mathcal{T}}$. \begin{figure}[t] \centering \includegraphics[width=2.2in]{Figures/Lable_correction_figure.pdf} \caption{Illustration of the label correction strategy. Inputs are highlighted by orange rectangles.} \label{fig:initialazion_T} \setlength{\belowcaptionskip}{-2cm} \vspace{-0.7cm} \end{figure} To address this issue, we propose a novel label correction strategy as shown in Figure~\ref{fig:initialazion_T}. The core principle is to find a revised label $Y_{\mathcal{S}'}$ for ${\mathcal{S}'}$ by considering both the ground-truth labels $Y_{\mathcal{S}}$ and the segmentation predictions of ${\mathcal{S}'}$.
Specifically, we feed $\mathcal{S'}$ into $M_{\mathcal{T}}$ (which is initialized as $M_{\mathcal{S}}$ at the beginning) to generate pseudo labels $\hat{Y}_{\mathcal{S}'}$. Then the label correction module revises the raw ground-truth labels $Y_{\mathcal{S}}$ by replacing pixel-wise labels in $Y_{\mathcal{S}}$ with high-confidence pixel-wise labels in $\hat{Y}_{\mathcal{S}'}$, which means the labels of content-changed areas are approximately corrected by reliable predictions. Formally, define the revised labels $Y_{\mathcal{S}'} = \{ Y_\mathcal{S'}^{(i,j)}\}~(1 \leq i \leq H, 1 \leq j \leq W)$ as: \begin{equation} \label{equ:correction} Y_\mathcal{S'}^{(i,j)}=\left\{ \begin{array}{ll} \hat{Y}^{(i,j)}_{\mathcal{S}'}, & \mbox{if} \ P^{(i,j,\hat{c})}(\mathcal{S}') - P^{(i,j,c)}(\mathcal{S}') > \delta\\ Y^{(i,j)}_\mathcal{S},& \mbox{else,} \end{array} \right. \end{equation} where $H$ and $W$ denote the height and width of the input image respectively, $P(\cdot)$ is the probability map predicted by the segmentation model, $\hat{c}$ and $c$ denote the category index of $\hat{Y}^{(i,j)}_{\mathcal{S}'}$ and $Y^{(i,j)}_{\mathcal{S}}$ respectively, and $\delta$ controls the correction rate; we set $\delta=0.3$ empirically. In addition, we also use $M_{\mathcal{T}}$ to generate pseudo labels $\hat{Y}_{\mathcal{T}}$ for $\mathcal{T}$. Now we have paired training data $(\mathcal{S}', Y_{\mathcal{S}'})$ and $(\mathcal{T}, \hat{Y}_{\mathcal{T}})$, which approximately lie in the target domain, for $M_{\mathcal{T}}$ training.
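The label correction rule of Equation~\ref{equ:correction} amounts to a per-pixel confidence comparison and can be sketched in a few lines of NumPy; the function name and the $(H, W, C)$ array layout are illustrative assumptions, not part of any released implementation:

```python
import numpy as np

def correct_labels(prob, y_src, delta=0.3):
    # prob : (H, W, C) softmax probability map P(S') predicted by M_T
    # y_src: (H, W) integer ground-truth labels Y_S
    h, w = y_src.shape
    y_hat = prob.argmax(axis=-1)            # pseudo labels from M_T
    rows, cols = np.mgrid[0:h, 0:w]
    p_hat = prob[rows, cols, y_hat]         # P^{(i,j,c_hat)}(S')
    p_gt = prob[rows, cols, y_src]          # P^{(i,j,c)}(S')
    # replace a ground-truth label only where the prediction is
    # more confident by a margin of delta
    return np.where(p_hat - p_gt > delta, y_hat, y_src)
```

Pixels where the prediction beats the (possibly content-inconsistent) ground-truth label by more than $\delta$ adopt the predicted label; all other pixels keep $Y_{\mathcal{S}}$.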
The overall loss is defined as: \begin{equation} \label{loss:init_seg_t} \begin{aligned} \mathcal{L}_{M_{\mathcal{T}}} &=\mathcal{L}_{seg}(\mathcal{S}', Y_\mathcal{S'}) + \mathcal{L}_{seg}(\mathcal{T}, \hat{Y}_{\mathcal{T}})\\ &+\lambda_{adv} \mathcal{L}_{adv}(\mathcal{S'},\mathcal{T}), \end{aligned} \end{equation} where $\mathcal{L}_{adv}$ represents the typical adversarial loss as used in~\cite{tsai2018learning,li2019bidirectional,yang2020label} to further align the target domain, and $\mathcal{L}_{seg}$ indicates the commonly used per-pixel segmentation loss: \begin{equation} \label{loss:seg} \mathcal{L}_{seg}(I,Y)=-\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{c=1}^{C}Y^{(i,j,c)}\log P^{(i,j,c)}(I), \end{equation} where $I$ and $Y$ denote the input image (raw image or translated image) and the corresponding labels (ground-truth labels or pseudo labels), respectively. Once the warm-up procedure is finished, we obtain preliminary segmentation models which are approximately aligned in the corresponding domain. These well-initialized models facilitate the training of DPIT and DPAS, which will be described in the next sections. \subsection{Dual Path Image Translation\label{section:DPIT}} Image-to-image translation aims to reduce the gap in visual appearance (e.g., object textures and lighting) between the source and target domain. As discussed in Section~\ref{section:intro}, the unavoidable visual inconsistency caused by image translation may mislead the subsequent adaptive segmentation learning, and thus extra constraints to maintain visual consistency are required. BDL~\cite{li2019bidirectional} introduces a perceptual loss to maintain visual consistency between paired images (i.e., raw images and corresponding translated images). The perceptual loss measures the distance of perceptual features\footnote{Perceptual feature denotes the probability map before the softmax layer of the segmentation model.} extracted from a well-trained segmentation model.
In BDL, domain adaptation is only performed in the target domain; as a result, the perceptual loss of paired images ($\mathcal{S}$, $\mathcal{S}'$) and ($\mathcal{T}$, $\mathcal{T}'$) is computed with an identical segmentation model. Notice that paired images are from two different domains ($\mathcal{S}$ and $\mathcal{T}'$ are in the source domain while $\mathcal{T}$ and $\mathcal{S}'$ are in the target domain), so using a segmentation model aligned in a single domain to extract features for perceptual loss computation may be suboptimal. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/Pipeline_Train_SSL_Pseudo.pdf} \caption{Illustration of dual path pseudo label generation (DPPLG). Input is highlighted by orange rectangle.} \label{fig:DPPLG} \vspace{-4mm} \end{figure} Now we introduce our dual path image translation (DPIT) as illustrated in Figure~\ref{fig:DPL_flowchat}.(a). DPIT is a bidirectional image translation model with cross-domain perceptual supervision. We use ${G_{\mathcal{S}\rightarrow\mathcal{T}}}$ and ${G_{\mathcal{T}\rightarrow\mathcal{S}}}$ to denote image translation in path-$\mathcal{T}$ and path-$\mathcal{S}$ respectively. CycleGAN serves as our default model since it provides bidirectional image translation inherently; however, any unpaired image translation algorithm can be used in DPIT. Different from BDL, DPIT makes use of two paths aligned in opposite domains and extracts perceptual features for paired images from their corresponding path to better maintain visual consistency. Concretely, DPIT utilizes $M_{\mathcal{S}}$ to extract perceptual features for $\mathcal{S}$ and $\mathcal{T}'$, and $M_{\mathcal{T}}$ to extract perceptual features for $\mathcal{T}$ and $\mathcal{S}'$, respectively.
Then we can formulate our dual perceptual loss $\mathcal{L}_{DualPer}$ as: \begin{equation} \label{loss:tranlation_loss} \begin{aligned} \mathcal{L}_{DualPer}(\mathcal{S}, \mathcal{S}', \mathcal{T}, \mathcal{T}')&= \mathcal{L}_{Per}(F_{\mathcal{T}}(\mathcal{S}'), F_{\mathcal{S}}(\mathcal{S}))\\ &+\mathcal{L}_{Per}(F_{\mathcal{T}}(\mathcal{T}), F_{\mathcal{S}}(\mathcal{T}')), \end{aligned} \vspace{-0.1cm} \end{equation} where $\mathcal{L}_{Per}$ is perceptual loss as in ~\cite{li2019bidirectional}, $F_{\mathcal{S}}(\cdot)$ and $F_{\mathcal{T}}(\cdot)$ represent perceptual feature extracted by $M_{\mathcal{S}}$ and $M_{\mathcal{T}}$ respectively. Besides the supervision of dual perceptual loss, DPIT is also supervised by general adversarial and reconstruction loss. The overall loss of DPIT can be formulated as: \begin{equation} \label{loss:dualGAN} \begin{aligned} \mathcal{L}_{DPIT}&= \mathcal{L}^{\mathcal{S}}_{GAN}(\mathcal{S},\mathcal{T}') + \mathcal{L}^{\mathcal{T}}_{GAN}( \mathcal{S}',\mathcal{T})\\ &+\lambda_{Recon}\mathcal{L}^{\mathcal{S}}_{Recon}(\mathcal{S}, G_{\mathcal{T}\rightarrow\mathcal{S}}(\mathcal{S}'))\\ &+\lambda_{Recon}\mathcal{L}_{Recon}^{\mathcal{T}}(\mathcal{T}, G_{\mathcal{S}\rightarrow\mathcal{T}}(\mathcal{T}'))\\ &+{\lambda}_{DualPer} \mathcal{L}_{DualPer}(\mathcal{S}, \mathcal{S}', \mathcal{T}, \mathcal{T}'), \end{aligned} \vspace{-0.1cm} \end{equation} where $\mathcal{L}^{\mathcal{S}}_{GAN}$ ($\mathcal{L}^{\mathcal{T}}_{GAN}$) and $\mathcal{L}^{\mathcal{S}}_{Recon}$ ($\mathcal{L}^{\mathcal{T}}_{Recon}$) are GAN loss and reconstruction loss as in~\cite{CycleGAN2017}, $\lambda_{Recon}$ and ${\lambda}_{DualPer}$ denote the weights of reconstruction loss and dual perceptual loss respectively. We set $\lambda_{Recon}=10$ and $\lambda_{DualPer}=0.1$ by default. 
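To make the pairing in $\mathcal{L}_{DualPer}$ explicit, the following sketch routes each image of a pair through the extractor aligned with its domain; the L1 distance inside $\mathcal{L}_{Per}$ and the function names are illustrative assumptions, since the exact metric follows~\cite{li2019bidirectional} rather than being fixed here:

```python
import numpy as np

def dual_perceptual_loss(feat_s, feat_t, s, s_trans, t, t_trans):
    # feat_s / feat_t: perceptual feature extractors of M_S and M_T
    # s, t           : source / target images
    # s_trans = G_{S->T}(s) lies in the target domain, so it goes through M_T;
    # t_trans = G_{T->S}(t) lies in the source domain, so it goes through M_S.
    dist = lambda a, b: np.abs(a - b).mean()   # L1 distance (an assumption)
    return dist(feat_t(s_trans), feat_s(s)) + dist(feat_t(t), feat_s(t_trans))
```

Each of the two terms compares a raw image and its translation through the path aligned with the respective domain, which is exactly the cross-domain supervision that distinguishes DPIT from a single-domain perceptual loss.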
\subsection{Dual Path Adaptive Segmentation \label{section:DPSA}} Once DPIT is symmetrically trained, translated images $\mathcal{S}'= {G_{\mathcal{S}\rightarrow\mathcal{T}}}(\mathcal{S})$ and $\mathcal{T}'= {G_{\mathcal{T}\rightarrow\mathcal{S}}}(\mathcal{T})$ are fed into the dual path adaptive segmentation (DPAS) module for subsequent learning. As shown in Figure~\ref{fig:DPL_flowchat}.(a), DPAS utilizes self-supervised learning in combination with well-trained image translation for adaptive segmentation learning, i.e., segmentation models are trained on both source images (or translated source images) with ground-truth labels and target images (or translated target images) with pseudo labels. The core of DPAS is to generate high-quality pseudo labels for target images by combining the predicted results from the two paths. The training process of DPAS can be formulated as two alternating steps: 1) dual path pseudo label generation; 2) dual path segmentation training. {\noindent \textbf{Dual Path Pseudo Label Generation.}}\hspace{3pt} The labels of the target dataset are unavailable in unsupervised domain adaptation tasks. Self-supervised learning (SSL) has demonstrated great success when the labels of a dataset are insufficient or noisy. The way pseudo labels are generated plays an important role in SSL. As described in Section~\ref{section:intro}, in path-$\mathcal{T}$, visual inconsistency brings in misalignment between translated source images $\mathcal{S}'$ and uncorrected ground-truth labels $Y_\mathcal{S}$, which disturbs the training of $M_\mathcal{T}$. A similar issue exists in path-$\mathcal{S}$ (see Figure~\ref{fig:flowchat}). Inspired by the observation that the two paths from opposite domains are almost complementary, we take full advantage of both paths and present a novel dual path pseudo label generation (DPPLG) strategy to generate high-quality pseudo labels, as shown in Figure~\ref{fig:DPPLG}.
Concretely, let $P_{\mathcal{S}}(\cdot) = \mbox{Softmax}(F_{\mathcal{S}}(\cdot))$ and $P_{\mathcal{T}}(\cdot) = \mbox{Softmax}(F_{\mathcal{T}}(\cdot))$ denote the probability maps predicted by $M_\mathcal{S}$ and $M_\mathcal{T}$, respectively. In path-$\mathcal{T}$, target images can be directly fed into $M_\mathcal{T}$ to generate $P_{\mathcal{T}}(\mathcal{T})$. In contrast, path-$\mathcal{S}$ requires image translation to generate $\mathcal{T}'= {G_{\mathcal{T}\rightarrow\mathcal{S}}}(\mathcal{T})$; then $P_{\mathcal{S}}(\mathcal{T}')$ can be obtained by feeding $\mathcal{T}'$ into $M_\mathcal{S}$. Finally, the enhanced probability map $P_*$, which is used for generating pseudo labels of target images, can be obtained by a weighted sum of the two separate probability maps $P_{\mathcal{T}}(\mathcal{T})$ and $P_{\mathcal{S}}(\mathcal{T}')$: \vspace{-0.2cm} \begin{equation} {P_*}=\frac{1}{2} P_{\mathcal{T}}(\mathcal{T}) + \frac{1}{2} P_{\mathcal{S}}(\mathcal{T'}). \vspace{-0.2cm} \end{equation} Following common practice~\cite{li2019bidirectional, Two-phase}, we use max probability threshold (MPT) to select the pixels with higher confidence in $P_*$ as pseudo labels of unlabeled target images. Concretely, define pseudo labels $\hat{Y}_{*} = \{ \hat{Y}_{*}^{(i,j,c)}\}~(1 \leq i \leq H, 1 \leq j \leq W, 1 \leq c \leq C)$ as: \vspace{-0.2cm} \begin{equation} \label{eq:sharepseudolabel} {\hat{Y}_*^{(i,j,c)}}=\left\{ \begin{array}{ll} 1, & {\rm if} \quad c=\mathop{argmax}\limits_{c} {({P_*^{(i,j,c)}})} \\ & {\rm and} \ {P_*^{(i,j,c)}}>{\lambda} \\ 0,& {\rm else}, \end{array}\\ \right. \vspace{-0.1cm} \end{equation} where $\lambda$ denotes the threshold to filter out pixels with low prediction confidence. We set $\lambda=0.9$ by default, following~\cite{li2019bidirectional}. Though path-$\mathcal{S}$ and path-$\mathcal{T}$ could each use pseudo labels generated by themselves, we will demonstrate the benefits of using the shared pseudo labels $\hat{Y}_*$ in Section~\ref{section:Experiments}.
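The two steps above, fusing the probability maps and applying MPT, can be sketched as follows; for brevity the one-hot output of Equation~\ref{eq:sharepseudolabel} is simplified to integer labels with an ignore index, an illustrative choice rather than the paper's exact encoding:

```python
import numpy as np

def dual_path_pseudo_labels(p_t, p_s, lam=0.9, ignore=-1):
    # p_t: (H, W, C) probability map P_T(T) from path-T
    # p_s: (H, W, C) probability map P_S(T') from path-S
    p_star = 0.5 * p_t + 0.5 * p_s          # enhanced probability map P_*
    labels = p_star.argmax(axis=-1)
    confidence = p_star.max(axis=-1)
    # max probability thresholding (MPT): keep only confident pixels
    return np.where(confidence > lam, labels, ignore)
```

Pixels whose fused confidence does not exceed $\lambda$ receive the ignore index and contribute no supervision during the SSL part.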
{\noindent \textbf{Dual Path Segmentation Training.}} Now we introduce the process of dual path segmentation training. Concretely, for path-$\mathcal{T}$, the objective is to train a well generalized segmentation model ${M_{\mathcal{T}}}$ in the target domain. Training data for ${M_{\mathcal{T}}}$ includes two parts: translated source images $\mathcal{S}' = {G_{\mathcal{S}\rightarrow\mathcal{T}}}(\mathcal{S})$ with ground-truth labels $Y_{\mathcal{S}}$, and raw target images $\mathcal{T}$ with pseudo labels $\hat{Y}_*$ generated by DPPLG. In contrast, path-$\mathcal{S}$ requires good generalization in the source domain. Similarly, ${M_{\mathcal{S}}}$ is trained on source images $\mathcal{S}$ with ground-truth labels $Y_{\mathcal{S}}$ and translated images $\mathcal{T}'={G_{\mathcal{T}\rightarrow\mathcal{S}}}(\mathcal{T})$ with the shared pseudo labels $\hat{Y}_*$. Besides the supervision from the segmentation loss, we also utilize a discriminator on top of the features of the segmentation model to further decrease the domain gap as in~\cite{hoffman2018cycada, li2019bidirectional}. The overall loss function of dual path segmentation can be defined as: \begin{equation} \label{loss:seg_t} \vspace{-0.1cm} \begin{aligned} \mathcal{L}_{DualSeg} &= \mathcal{L}_{seg}^{\mathcal{T}}(\mathcal{S'}, Y_\mathcal{S}) + \mathcal{L}_{seg}^{\mathcal{T}}(\mathcal{T}, \hat{Y}_*)\\ &+\mathcal{L}_{seg}^{\mathcal{S}}(\mathcal{S}, Y_\mathcal{S}) + \mathcal{L}_{seg}^{\mathcal{S}}(\mathcal{T'}, \hat{Y}_*)\\ &+\lambda_{adv} (\mathcal{L}_{adv}^{\mathcal{T}}(\mathcal{S'}, \mathcal{T}) + \mathcal{L}_{adv}^{\mathcal{S}}(\mathcal{S}, \mathcal{T'})), \end{aligned} \end{equation} where $\mathcal{L}_{adv}^{\mathcal{S}}$ and $\mathcal{L}_{adv}^{\mathcal{T}}$ denote the typical adversarial losses, $\mathcal{L}_{seg}^{\mathcal{S}}$ and $\mathcal{L}_{seg}^{\mathcal{T}}$ are the per-pixel segmentation losses as defined in Equation~\ref{loss:seg}, and $\lambda_{adv}$ controls the contribution of the adversarial loss.
\vspace{-0.1cm} \begin{algorithm}[b] \vspace{-0.1cm} \caption{Training process of DPL}\label{algr:train_proc} \begin{algorithmic} \Require ${\mathcal{S}}$, ${Y_\mathcal{S}}$, ${\mathcal{T}}$ \Ensure{${M_{\mathcal{T}}^{(N)}}$, ${M_{\mathcal{S}}^{(N)}}$} \State {warm-up $M^{(0)}_{\mathcal{S}}$, $M^{(0)}_{\mathcal{T}}$} \State {train DPIT with Equation~\ref{loss:dualGAN}} \For{$n \gets 1$ to $N$} DPAS\\ \hspace{\algorithmicindent}generate $\hat{Y}_*^{(n)}$ with Equation~\ref{eq:sharepseudolabel} \\ \hspace{\algorithmicindent}train $M_{\mathcal{T}}^{(n)}$ and $M_{\mathcal{S}}^{(n)}$ with Equation \ref{loss:seg_t} \EndFor \end{algorithmic} \end{algorithm} \subsection{Training Pipeline} \label{section:trainpipeline} Algorithm~\ref{algr:train_proc} summarizes the whole training process of DPL. First, ${M_{\mathcal{S}}}$ and ${M_{\mathcal{T}}}$ are initialized by the proposed warm-up strategy. Next, we train DPIT to provide well-translated images for subsequent learning. Finally, following the common practice that self-supervised learning is conducted in an iterative way~\cite{li2019bidirectional,zou2019confidence,Two-phase}, DPAS is trained $N$ times for domain adaptation. We use the superscript $(n)$ to refer to the $n$-th iteration. \subsection{Testing Pipeline} \label{section:testpipeline} As shown in Figure~\ref{fig:DPL_flowchat}.(b), the inference of DPL is extremely simple: we only retain $M_{\mathcal{T}}$ when testing on target images. Though DPL already shows superiority over the state-of-the-art methods, we explore an optional dual path testing pipeline named DPL-Dual to boost performance by considering predictions from both paths.
Concretely, we first generate probability maps $P_{\mathcal{T}}(\mathcal{T})$ and $P_{\mathcal{S}}(\mathcal{T}')$ from the two well-trained segmentation models $M_{\mathcal{T}}$ and $M_{\mathcal{S}}$ respectively, then an average function is used to generate the final probability map $P_F = (P_{\mathcal{S}}(\mathcal{T}') + P_{\mathcal{T}}(\mathcal{T}))/2$. Though DPL-Dual promotes the performance, extra computation is introduced. We recommend DPL-Dual as an optional inference pipeline when computation cost is secondary. \section{Experiments} \label{section:exp} \subsection{Datasets} Following common practice, we evaluate our framework in two common scenarios, GTA5~\cite{richter2016playing}$\rightarrow$Cityscapes~\cite{cordts2016cityscapes} and SYNTHIA~\cite{ros2016synthia}$\rightarrow$Cityscapes. GTA5 consists of 24,996 images with a resolution of $1914\times1052$, and we use the 19 common categories between GTA5 and Cityscapes for training and testing. For the SYNTHIA dataset, we use the SYNTHIA-RAND-CITYSCAPES set, which contains 9,400 images with resolution $1280\times760$ and 16 common categories with Cityscapes. Cityscapes is split into a training set, a validation set and a testing set. The training set contains 2,975 images with resolution $2048\times1024$. Following common practice, we report the results on the validation set, which contains 500 images with the same resolution. All ablation studies are performed on GTA5$\rightarrow$Cityscapes, and the comparison with the state of the art is performed on both GTA5$\rightarrow$Cityscapes and SYNTHIA$\rightarrow$Cityscapes. We use category-wise IoU and mIoU to evaluate the performance. \subsection{Network Architecture} Following common practice, we use DeepLab-V2~\cite{chen2018deeplab} with ResNet-101~\cite{he2016deep} and FCN-8s~\cite{long2015fully} with VGG16~\cite{simonyan2014very} as our semantic segmentation models.
The discriminator used in adversarial learning is similar to~\cite{radford2015unsupervised}; it has 5 convolutional layers with kernel size $4 \times 4$, channel numbers \{64, 128, 256, 512, 1\} and stride 2. Each convolutional layer except the last one is followed by a leaky ReLU~\cite{xu2015empirical} layer parameterized by 0.2. The discriminator is applied over the softmax output of the segmentation model. For DPIT, following \cite{li2019bidirectional}, we adopt the architecture of CycleGAN with 9 blocks and use the proposed dual perceptual loss to maintain visual consistency. \subsection{Implementation Details} When training DPIT, the input image is randomly cropped to the size $512 \times 256$ and the model is trained for 40 epochs. The learning rate of the first 20 epochs is 0.0002 and decreases to 0 linearly over the remaining epochs. Following \cite{CycleGAN2017, li2019bidirectional}, in Equation \ref{loss:dualGAN}, $\lambda_{Recon}$ is set to 10 and $\lambda_{DualPer}$ is set to 0.1. For DPAS training, the input images are resized to the size $1024\times512$ with batch size 4. For DeepLab-V2 with ResNet-101, we adopt SGD as the optimizer and set the initial learning rate to $5 \times 10^{-4}$, which is decreased with the `poly' learning rate policy with power 0.9. For FCN-8s with VGG16, we use the Adam optimizer with momentum $\{0.9,0.99\}$ and the initial learning rate is set to $2 \times 10^{-5}$. The learning rate is decreased with the `step' policy with step size 50000 and drop factor 0.1. For adversarial learning, $\lambda_{adv}$ is set to $1 \times 10^{-3}$ for DeepLab-V2 and $1 \times 10^{-4}$ for FCN-8s in Equations~\ref{loss:init_seg_t} and \ref{loss:seg_t}. The discriminator is trained with the Adam optimizer with initial learning rate $2 \times 10^{-4}$. The momentum parameters are set as 0.9 and 0.99. All ablation studies are conducted on the first iteration ($N=1$). We set $N=4$ when comparing with state-of-the-art methods.
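For reference, the `poly' learning rate policy used for DeepLab-V2 follows the standard form below; the schedule length \texttt{max\_iter} is a placeholder, as the total number of iterations is not stated in the text:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    # 'poly' policy: the learning rate decays smoothly from base_lr to zero
    # over the course of training; power=0.9 matches the setting above
    return base_lr * (1.0 - cur_iter / max_iter) ** power
```

At iteration 0 the function returns the initial rate $5 \times 10^{-4}$, and it reaches zero at the final iteration.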
\begin{table}[t] \centering \caption{Comparison of different image translation models.} \small \label{tab:translation_choice} \begin{tabular}{ccc} \toprule[1.0pt] \makecell{Image translation module}& mIoU(${M^{(1)}_{\mathcal{S}}}$) & mIoU(${M^{(1)}_{\mathcal{T}}}$)\\ \hline CycleGAN &41.4 &48.5\\ SPIT & 48.6&51.1\\ DPIT &\textbf{49.6}&\textbf{51.8}\\ \bottomrule \end{tabular} \vspace{-0.2cm} \end{table} \begin{table}[t] \centering \small \caption{Comparison of different pseudo label generation strategies.} \label{tab:Ablation_SSl} \setlength{\tabcolsep}{2.5pt} \begin{tabular}{cccc} \toprule[1.0pt] \makecell{Pseudo label generation strategy}& mIoU(${M^{(1)}_{\mathcal{S}}}$)& mIoU(${M^{(1)}_{\mathcal{T}}}$)\\ \hline SPPLG&46.0&50.0\\ \hline DPPLG-Max & 49.2&50.6\\ DPPLG-Joint&49.1 &50.3\\ DPPLG-Weighted&\textbf{49.6}& \textbf{51.8}\\ \bottomrule \end{tabular} \vspace{-0.5cm} \end{table} \subsection{Experiments} {\noindent \textbf{Dual Path Image Translation Improves Translation Quality.}}\hspace{3pt} \label{section:Experiments} DPIT encourages visual consistency through dual perceptual loss computed by segmentation models $M_\mathcal{S}$ and $M_\mathcal{T}$. To demonstrate the effectiveness of DPIT, we compare it with: 1) naive CycleGAN, in which no perceptual loss is used to maintain visual consistency; 2) Single Path Image Translation (SPIT) used in BDL~\cite{li2019bidirectional}, which applies CycleGAN and perceptual loss computed by single segmentation model aligned in target domain. Notice the only difference in this ablation study is that different image translation methods are used in DPL. Table~\ref{tab:translation_choice} shows the comparison. By using perceptual loss to maintain visual consistency, both SPIT and DPIT can significantly improve the adaptation performance compared with naive CycleGAN. 
The fact that our DPIT surpasses SPIT on both segmentation models ($M_\mathcal{S}$ and $M_\mathcal{T}$) demonstrates that extracting aligned perceptual features can further alleviate the visual inconsistency caused by image translation. {\noindent \textbf{The Effectiveness of Dual Path Pseudo Label Generation.}}\hspace{3pt} In our proposed DPPLG module, predictions from the two paths jointly participate in the generation of pseudo labels. We compare DPPLG with the single path pseudo label generation (SPPLG) method, i.e., path-$\mathcal{S}$ and path-$\mathcal{T}$ generate their own pseudo labels separately. Meanwhile, we study three different strategies of DPPLG: 1) DPPLG-Max, which selects the prediction with the maximum probability of the two paths; 2) DPPLG-Joint, in which the two paths generate pseudo labels separately and intersections are selected as the final pseudo labels; 3) DPPLG-Weighted, which is the default strategy as described in Section~\ref{section:DPSA}. Table \ref{tab:Ablation_SSl} shows the results. All of the DPPLG strategies perform better than SPPLG, which means the joint decision of two complementary paths can improve the quality of pseudo labels. We use DPPLG-Weighted as our pseudo label generation strategy due to its superior experimental results. \begin{figure}[t!]
\begin{minipage}[c]{0.45\linewidth} \small \centering \makeatletter\def\@captype{table}\makeatother\caption{Ablation study on stage-wise DPAS.} \label{tab:Ablation_performance} \setlength{\tabcolsep}{4pt} \begin{tabular}{cccc} \toprule[1.0pt] $M_{\mathcal{S}}$ &mIoU& $M_{\mathcal{T}}$ &mIoU\\ \hline ${M_{\mathcal{S}}^{(0)}}$ & 43.7 & ${M_{\mathcal{T}}^{(0)}}$ &48.5\\ ${M_{\mathcal{S}}^{(1)}}$ & 49.6 & ${M_{\mathcal{T}}^{(1)}}$ &51.8\\ ${M_{\mathcal{S}}^{(2)}}$ & 50.6 & ${M_{\mathcal{T}}^{(2)}}$&52.4\\ ${M_{\mathcal{S}}^{(3)}}$ & \textbf{50.7} & ${M_{\mathcal{T}}^{(3)}}$ &52.6\\ ${M_{\mathcal{S}}^{(4)}}$ & \textbf{50.7} & ${M_{\mathcal{T}}^{(4)}}$ &\textbf{52.8}\\ \bottomrule \end{tabular} \end{minipage} \hspace{.15in} \begin{minipage}[c]{0.45\linewidth} \small \centering \makeatletter\def\@captype{table}\makeatother\caption{Ablation study on $M_{\mathcal{T}}$ warm-up.} \setlength{\tabcolsep}{4pt} \label{tab:Ablation_init} \begin{tabular}{ccc} \toprule[1.0pt] Model & $\delta$ &mIoU\\ \hline $M_{\mathcal{T}}$ &0.2 &47.4\\ $M_{\mathcal{T}}$ &0.3 &\textbf{48.5}\\ $M_{\mathcal{T}}$ &0.5 & 47.3\\ \hline $M_{\mathcal{T}}$ w/ $Y_{\mathcal{S}}$& - &46.2\\ $M_{\mathcal{T}}$ w/ $\hat{Y}_{\mathcal{S}'}$ & - &44.3\\ \bottomrule \end{tabular} \end{minipage} \vspace{-0.7cm} \end{figure} \renewcommand\arraystretch{1.1} \begin{table*}[tp] \scriptsize \centering \caption{Comparison with state-of-the-art methods on GTA5$\rightarrow$Cityscapes scenario. \textcolor{red}{Red}: best result. 
\textcolor{blue}{Blue}: second best result.} \label{tab:comparison_gta5} \setlength{\tabcolsep}{2.5pt} \begin{tabular}{ccccccccccccccccccccccc} \hline \shortstack{Segmentation \\Model}&{Method} & \rotatebox{90}{road} & \rotatebox{90}{sidewalk} &\rotatebox{90}{building} & \rotatebox{90}{wall} & \rotatebox{90}{fence} & \rotatebox{90}{pole} & \rotatebox{90}{t-light} & \rotatebox{90}{t-sign} & \rotatebox{90}{vegetation } & \rotatebox{90}{terrain} & \rotatebox{90}{sky} & \rotatebox{90}{person} & \rotatebox{90}{rider} & \rotatebox{90}{car} & \rotatebox{90}{truck} & \rotatebox{90}{bus} & \rotatebox{90}{train} & \rotatebox{90}{motorbike} & \rotatebox{90}{bicycle} & mIoU\\ \hline \multirow{9}{*}{ResNet101\cite{he2016deep}} &BDL\cite{li2019bidirectional} & {{91.0}} & {{44.7}} & {{84.2}} & {{34.6}} & {{27.6}} & 30.2 & 36.0 & 36.0 & {{85.0}} & \textcolor{blue}{{43.6}} & {{83.0}} & 58.6 & {{31.6}} & {{83.3}} & {{35.3}} & {{49.7}} & 3.3 & 28.8 & 35.6 & {{48.5}} \\ &SIM \cite{wang2020differential} & 90.6 & 44.7 & 84.8 & 34.3 & 28.7 & 31.6 & 35.0 & \textcolor{blue}{37.6} & 84.7 & 43.3 & 85.3 & 57.0 & 31.5 & 83.8 & \textcolor{blue}{42.6} & 48.5 & 1.9 & 30.4 & 39.0 & 49.2 \\ &FADA\cite{wang2020classes} & 92.5& 47.5 &85.1 &37.6 &\textcolor{red}{32.8}& 33.4 &33.8 &18.4 &85.3 &37.7& 83.5 &\textcolor{red}{63.2}& \textcolor{red}{39.7} &\textcolor{red}{87.5} &32.9 &47.8& 1.6& 34.9& 39.5& 49.2\\ &Label-Driven\cite{yang2020label} & 90.8 & 41.4 & 84.7 & 35.1 &27.5&31.2&38.0&32.8&\textcolor{blue}{85.6}&42.1&84.9&59.6& \textcolor{blue}{34.4}&85.0& \textcolor{red}{42.8}&\textcolor{blue}{52.7}&3.4&30.9&38.1&49.5 \\ &Kim et al. 
\cite{kim2020learning} & \textcolor{blue}{92.9} & \textcolor{blue}{55.0} & 85.3 & 34.2 & 31.1 & 34.9 & 40.7 & 34.0 & 85.2 & 40.1 & \textcolor{red}{87.1} & 61.0 & 31.1 & 82.5 & 32.3 & 42.9 & 0.3 & 36.4 & 46.1 & 50.2 \\ &FDA-MBT \cite{yang2020fda} & 92.5 & 53.3 & 82.4 & 26.5 & 27.6 & \textcolor{blue}{36.4} & 40.6 & \textcolor{red}{38.9} & 82.3 & 39.8 & 78.0 & 62.6 & \textcolor{blue}{34.4} & 84.9 & 34.1 & \textcolor{red}{53.1} & 16.9 & 27.7 & \textcolor{blue}{46.4} & 50.5 \\ &TPLD \cite{Two-phase}& \textcolor{red}{94.2} &\textcolor{red}{60.5} &82.8 &36.6 &16.6& \textcolor{red}{39.3}& 29.0 &25.5& \textcolor{blue}{85.6} &\textcolor{red}{44.9} &84.4& 60.6 &27.4& 84.1 &37.0& 47.0 &\textcolor{red}{31.2} &36.1& \textcolor{red}{50.3} &51.2 \\ \cline{2-22} &DPL &92.5 &52.8 &\textcolor{blue}{86.0} &\textcolor{blue}{38.5} &31.7 &36.2 &\textcolor{blue}{47.3} &34.9 &85.5 &39.9 &85.2 &\textcolor{blue}{62.9} &33.9 &86.8 &37.2 &45.3 &\textcolor{blue}{20.1} &\textcolor{red}{44.1} &42.4& \textcolor{blue}{52.8}\\ &DPL-Dual& 92.8 &54.4 &\textcolor{red}{86.2} &\textcolor{red}{41.6} &\textcolor{blue}{32.7} &\textcolor{blue}{36.4} &\textcolor{red}{49.0} &34.0 &\textcolor{red}{85.8} &41.3 &\textcolor{blue}{86.0} &\textcolor{red}{63.2} &34.2 &\textcolor{blue}{87.2} &39.3 &44.5 &18.7 &\textcolor{blue}{42.6} &43.1& \textcolor{red}{53.3} \\ \hline \multirow{9}{*}{VGG16\cite{simonyan2014very}} &TPLD \cite{Two-phase}& 83.5& 49.9& 72.3& 17.6& 10.7& \textcolor{blue}{29.6}& 28.3& 9.0& 78.2& 20.1& 25.7& 47.4& 13.3& 79.6& 3.3& 19.3& 1.3& 14.3& \textcolor{red}{33.5}& 34.1 \\ &BDL \cite{li2019bidirectional} & 89.2& 40.9& 81.2& 29.1& 19.2& 14.2& 29.0& 19.6& 83.7& 35.9& 80.7& 54.7& 23.3& 82.7& 25.8& 28.0& 2.3& 25.7& 19.9& 41.3 \\ &FDA-MBT \cite{yang2020fda} & 86.1& 35.1& 80.6& 30.8& 20.4& 27.5& 30.0& 26.0& 82.1& 30.3& 73.6& 52.5& 21.7& 81.7& 24.0& 30.5& \textcolor{red}{29.9}& 14.6& 24.0& 42.2 \\ &Kim et al. 
\cite{kim2020learning} & \textcolor{red}{ 92.5}& \textcolor{red}{54.5}& \textcolor{red}{83.9}& \textcolor{blue}{34.5}& \textcolor{blue}{25.5}& \textcolor{red}{31.0}& 30.4& 18.0& \textcolor{blue}{84.1}& \textcolor{blue}{39.6}& \textcolor{blue}{83.9}& 53.6& 19.3& 81.7& 21.1& 13.6& \textcolor{blue}{17.7}& 12.3& 6.5& 42.3 \\ &SIM \cite{wang2020differential} & 88.1& 35.8& 83.1& 25.8& 23.9& 29.2& 28.8& \textcolor{red}{28.6}& 83.0& 36.7& 82.3& 53.7& 22.8& 82.3& 26.4& 38.6& 0.0& 19.6& 17.1& 42.4 \\ &Label-Driven\cite{yang2020label} & 90.1& 41.2& 82.2& 30.3& 21.3& 18.3& 33.5& 23.0& \textcolor{blue}{84.1}& 37.5& 81.4& 54.2& 24.3& 83.0& 27.6& 32.0& 8.1& \textcolor{blue}{29.7}& 26.9& 43.6 \\ &FADA\cite{wang2020classes} & \textcolor{blue}{92.3}& \textcolor{blue}{51.1}& \textcolor{blue}{83.7}& 33.1& \textcolor{red}{29.1}& 28.5& 28.0& 21.0& 82.6& 32.6& \textcolor{red}{85.3}& \textcolor{red}{55.2}& \textcolor{red}{28.8}& \textcolor{red}{83.5}& 24.4& 37.4& 0.0& 21.1& 15.2& 43.8\\ \cline{2-22} &DPL& 88.9& 43.6& 83.4& 33.8& {24.7}& 28.0& \textcolor{blue}{37.6}& \textcolor{blue}{26.2}& \textcolor{blue}{84.1}& \textcolor{red}{40.3}& 81.5& \textcolor{blue}{54.9}& 25.0& 83.0& \textcolor{blue}{27.7}& \textcolor{blue}{48.6}& 4.8& 29.1& 32.0& \textcolor{blue}{46.2} \\ &DPL-Dual& 89.2& 44.0& 83.5& \textcolor{red}{35.0}& {24.7}& 27.8& \textcolor{red}{38.3}& 25.3&\textcolor{red}{84.2}& 39.5& 81.6& 54.7& \textcolor{blue}{25.8}& \textcolor{blue}{83.3}& \textcolor{red}{29.3}& \textcolor{red}{49.0}& 5.2& \textcolor{red}{30.2}& \textcolor{blue}{32.6}& \textcolor{red}{46.5} \\ \hline \end{tabular} \end{table*} \begin{table*}[t!p] \scriptsize \centering \caption{Comparison with state-of-the-art methods on SYNTHIA$\rightarrow$Cityscapes scenario. \textcolor{red}{Red}: best result. 
\textcolor{blue}{Blue}: second best result.} \setlength{\tabcolsep}{3.5pt} \label{tab:comparison_synthia} \begin{tabular}{cccccccccccccccccccc} \hline \shortstack{Segmentation \\Model}&{Method} & \rotatebox{90}{road} & \rotatebox{90}{sidewalk} &\rotatebox{90}{building}&\rotatebox{90}{wall} & \rotatebox{90}{fence} & \rotatebox{90}{pole} & \rotatebox{90}{t-light} & \rotatebox{90}{t-sign} & \rotatebox{90}{vegetation } & \rotatebox{90}{sky} & \rotatebox{90}{person} & \rotatebox{90}{rider} & \rotatebox{90}{car} & \rotatebox{90}{bus} & \rotatebox{90}{motorbike} & \rotatebox{90}{bicycle} &\makecell[b]{ mIoU \\ (16)}&\makecell[b]{ mIoU \\ (13)}\\ \hline \multirow{9}{*}{ResNet101\cite{he2016deep}} &Kim et al. \cite{kim2020learning} & \textcolor{red}{92.6} & \textcolor{red}{53.2} & 79.2&-&-&-& 1.6 & 7.5 & 78.6 & 84.4 & 52.6 & 20.0 & 82.1 & 34.8 & 14.6 & 39.4 &-& 49.3 \\ &BDL\cite{li2019bidirectional} & {{86.0}} & \textcolor{blue}{{46.7}} & {{80.3}} &-&-&- & 14.1 & 11.6 & {{79.2}} & 81.3 & {{54.1}} & {{27.9}} & {{73.7}} & \textcolor{blue}{{42.2}} & {{25.7}} & {{45.3}} &-& {{51.4}} \\ &SIM \cite{wang2020differential} & 83.0 & 44.0 & 80.3 &-&-&-& 17.1 & 15.8 & 80.5 & 81.8 & 59.9 & \textcolor{red}{33.1} & 70.2 & 37.3 & 28.5 & \textcolor{blue}{45.8} & -&52.1 \\ &FDA-MBT \cite{yang2020fda} & 79.3 & 35.0 & 73.2 &-&-&- & 19.9 & 24.0 & 61.7 & 82.6 & \textcolor{red}{61.4} & {31.1} & 83.9 & 40.8 & \textcolor{red}{38.4} & \textcolor{red}{51.1} &-& 52.5 \\ &FADA\cite{wang2020classes} & 84.5&40.1&\textcolor{red}{83.1}&4.8&0.0&\textcolor{blue}{34.3}&{20.1}&\textcolor{blue}{27.2}&\textcolor{red}{84.8}&84.0&53.5&22.6&\textcolor{red}{85.4}&\textcolor{red}{43.7}&{26.8}&27.8&45.2&{52.5}\\ &Label-Driven\cite{yang2020label}& 85.1&44.5&81.0&-&-&-&16.4&15.2&80.1&84.8&59.4&\textcolor{blue}{31.9}&73.2&41.0&\textcolor{blue}{32.6}&44.7&-&53.1 \\ &TPLD \cite{Two-phase}&80.9 &44.3 &82.2 &\textcolor{red}{19.9}&0.3&\textcolor{red}{40.6}& 20.5& \textcolor{red}{30.1}& 77.2 &80.9& \textcolor{blue}{60.6}& 
25.5& \textcolor{blue}{84.8}& 41.1& 24.7 &43.7&\textcolor{red}{47.3}& 53.5\\ \cline{2-20} &DPL & 87.4&45.5&82.7&\textcolor{blue}{14.8}&\textcolor{red}{0.7}&33.0&\textcolor{blue}{21.9}&20.0&82.9&\textcolor{blue}{85.1}&56.4&21.7&82.1&39.5&30.8&45.2&46.9&\textcolor{blue}{53.9} \\ &DPL-Dual&\textcolor{blue}{87.5}&45.7&\textcolor{blue}{82.8}&13.3&\textcolor{blue}{0.6}&33.2&\textcolor{red}{22.0}&20.1&\textcolor{blue}{83.1}&\textcolor{red}{86.0}&56.6&21.9&83.1&40.3&29.8&45.7&\textcolor{blue}{47.0}&\textcolor{red}{54.2}\\ \hline \multirow{9}{*}{VGG16~\cite{simonyan2014very}} &CrCDA \cite{huang2020contextual} &74.5& 30.5& 78.6& \textcolor{blue}{6.6}& 0.7& 21.2& 2.3& 8.4& 77.4& 79.1& 45.9& 16.5& 73.1& 24.1& 9.6& 14.2& 35.2& 41.1\\ &TPLD \cite{Two-phase}& 81.3& 34.5& 73.3& \textcolor{red}{11.9}&0.0& 26.9& 0.2& 6.3& 79.9& 71.2& 55.1& 14.2& 73.6& 5.7& 0.5& 41.7& 36.0& 41.3 \\ &Kim et al. \cite{kim2020learning} & \textcolor{red}{89.8}& \textcolor{red}{48.6}& 78.9&-&-&-&0.0& 4.7& 80.6& 81.7& 36.2& 13.0& 74.4& 22.5& 6.5& 32.8&-& 43.8 \\ &BDL \cite{li2019bidirectional} & 72.0& 30.3& 74.5& 0.1& 0.3& 24.6& 10.2& 25.2& 80.5& 80.0& 54.7& \textcolor{blue}{23.2}& 72.7& 24.0& 7.5& 44.9& 39.0& 46.1 \\ &FADA \cite{wang2020classes} & 80.4& 35.9& \textcolor{red}{80.9}& 2.5& 0.3& \textcolor{red}{30.4}& 7.9& 22.3& \textcolor{red}{81.8}& \textcolor{red}{83.6}& 48.9& 16.8& 77.7& \textcolor{red}{31.1}& 13.5& 17.9& 39.5& 46.1 \\ &FDA-MBT \cite{yang2020fda} & \textcolor{blue}{84.2}& 35.1& 78.0& 6.1& 0.4& 27.0& 8.5& 22.1& 77.2& 79.6& 55.5& 19.9& 74.8& 24.9& \textcolor{red}{14.3}& 40.7& 40.5& 47.3 \\ &Label-Driven \cite{yang2020label}& 73.7& 29.6& 77.6& 1.0& 0.4& 26.0& 14.7& 26.6& 80.6& 81.8& \textcolor{red}{57.2}& \textcolor{red}{24.5}& 76.1& \textcolor{blue}{27.6}& \textcolor{blue}{13.6}& \textcolor{red}{46.6}& 41.1& 48.5 \\ \cline{2-20} &DPL& 82.7& 37.3& 80.1& 1.6& \textcolor{blue}{0.9}& \textcolor{blue}{29.5}& \textcolor{red}{20.5}& \textcolor{red}{33.1}& \textcolor{blue}{81.7}& 
\textcolor{blue}{82.9}& 55.6& 20.2& \textcolor{blue}{79.2}& 26.3& 6.8& 45.5& \textcolor{blue}{42.7}& \textcolor{blue}{50.2} \\ &DPL-Dual &83.5& \textcolor{blue}{38.2}& \textcolor{blue}{80.4}& 1.3& \textcolor{red}{1.1}& 29.1& \textcolor{blue}{20.2}& \textcolor{blue}{32.7}& \textcolor{red}{81.8}& \textcolor{red}{83.6}& \textcolor{blue}{55.9}& 20.3& \textcolor{red}{79.4}& 26.6& 7.4& \textcolor{blue}{46.2}& \textcolor{red}{43.0}& \textcolor{red}{50.5} \\ \bottomrule \end{tabular} \vspace{-1em} \end{table*} {\noindent \textbf{The Effectiveness of Dual Path Adaptive Segmentation.}}\hspace{3pt} We show the stage-wise results of DPAS in Table~\ref{tab:Ablation_performance}. When warm-up is finished, $M_\mathcal{S}^{(0)}$ and $M_\mathcal{T}^{(0)}$ achieve mIoU of 43.7 and 48.5, respectively. After the first iteration, $M_\mathcal{S}^{(1)}$ achieves 49.6 (a +13.5{\%} improvement), and $M_\mathcal{T}^{(1)}$ achieves 51.8 (a +6.8{\%} improvement). The large improvements of the two segmentation models demonstrate that the interactions between the two complementary paths mutually facilitate adaptive learning. Though subsequent iterations ($M_\mathcal{S}^{(2)}$-$M_\mathcal{S}^{(4)}$ and $M_\mathcal{T}^{(2)}$-$M_\mathcal{T}^{(4)}$) still improve performance, the gains are limited. {\noindent \textbf{Ablation Study on Label Correction Strategy.}}\hspace{3pt} In Section~\ref{section:warm_phase}, we propose a label correction strategy for $M_{\mathcal{T}}$ warm-up. We now study different warm-up strategies as well as hyperparameters in Table~\ref{tab:Ablation_init}. Recall that label correction is used to find a revised label $Y_{\mathcal{S}'}$ by considering both ground-truth labels $Y_\mathcal{S}$ and pseudo labels $\hat{Y}_{\mathcal{S}'}$ (see Equation~\ref{equ:correction}). We ablate two extreme cases: 1) directly leveraging ground-truth labels $Y_\mathcal{S}$ without label correction; 2) directly leveraging pseudo labels $\hat{Y}_{\mathcal{S}'}$ without label correction.
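For concreteness, a toy version of confidence-based label correction is sketched below. The thresholding rule and its link to $\delta$ are assumptions for illustration only; the paper's exact correction equation is not reproduced in this excerpt:

```python
import numpy as np

def correct_labels(y_src, probs, delta=0.3):
    """Toy label correction for M_T warm-up (assumed rule, for illustration):
    where the model's softmax confidence on the translated source image
    exceeds 1 - delta, trust the model's own prediction over the original
    ground-truth label; elsewhere keep the ground truth.  Here delta
    loosely controls how many pixels get corrected."""
    pred = probs.argmax(axis=-1)   # model's predicted class per pixel
    conf = probs.max(axis=-1)      # corresponding confidence
    return np.where(conf > 1.0 - delta, pred, y_src)
```

The two extreme cases of the ablation correspond to never correcting (keep `y_src` everywhere) and always correcting (take `pred` everywhere).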
Results in Table~\ref{tab:Ablation_init} show the superiority of our label correction module. We also study different values of $\delta$, which controls the correction rate; from the table, we find that $\delta$ is an insensitive hyperparameter that can be set to 0.3 by default. {\noindent \textbf{Comparison with State-of-the-art Methods.}}\hspace{3pt} \label{sec:experiments} We compare DPL and DPL-Dual with state-of-the-art methods on two common scenarios, GTA5$\rightarrow$Cityscapes and SYNTHIA$\rightarrow$Cityscapes. For each scenario, we report the results on two segmentation models, ResNet101 and VGG16. Table~\ref{tab:comparison_gta5} shows the results on the GTA5$\rightarrow$Cityscapes scenario: DPL achieves state-of-the-art performance on both models (with mIoU of 52.8 on ResNet101 and 46.2 on VGG16), and DPL-Dual further achieves mIoU of 53.3 on ResNet101 and 46.5 on VGG16. The domain gap between SYNTHIA and Cityscapes is much larger than that between GTA5 and Cityscapes, and their categories do not fully overlap. We list results for both the 13-category and 16-category settings for a fair comparison with state-of-the-art methods. Results are shown in Table~\ref{tab:comparison_synthia}; mIoU (13) and mIoU (16) indicate that adaptation methods are evaluated on 13 and 16 common categories, respectively. Once again, under the 13-category metric, DPL achieves state-of-the-art results on both ResNet101 and VGG16, and DPL-Dual further boosts performance. Under the 16-category metric, the performance of DPL with ResNet101 is slightly worse, since the domain shift is much larger in the \{\textit{wall, fence, pole}\} categories, while DPL with VGG16 still surpasses the state of the art with mIoU 42.7, which DPL-Dual further promotes to 43.0. \section{Conclusion} In this paper, we propose a novel dual path learning framework named DPL, which utilizes two complementary and interactive paths for domain adaptation of segmentation.
Novel techniques, namely dual path image translation and dual path adaptive segmentation, are presented to make the two paths interact with and promote each other. Meanwhile, a novel label correction strategy is proposed for the warm-up stage. Inference with DPL is extremely simple: only one segmentation model, well aligned with the target domain, is used. Experiments on the common scenarios GTA5$\rightarrow$Cityscapes and SYNTHIA$\rightarrow$Cityscapes demonstrate the superiority of our DPL over state-of-the-art methods. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Interstellar methanimine (CH$_2$NH) was first detected toward Sagittarius B2 (Sgr-B2) and is the simplest molecule to contain the carbon nitrogen double bond \citep{Godfrey1973}. {The first CH$_2$NH\ maser emission was later discovered in Sgr-B2 by \citet{Faure2018}.} Methanimine is the simplest of the ``imines,'' which are precursors to amino acids \citep{Danger2011}, and thus an important tracer of prebiotic chemistry in the universe. We report the discovery of the first CH$_2$NH\ megamaser and provide evidence for more CH$_2$NH\ masers toward compact obscured nuclei (CONs). CH$_2$NH\ has been detected in both extragalactic and Galactic environments. The first extragalactic detection of CH$_2$NH\ was toward Arp~220 with the Arecibo radio telescope \citep{Salter2008}. The line was observed in emission and hypothesized to be a maser. A maser is a source of stimulated emission. It was also identified by \citet{Martin2006} in NGC~253 and in absorption toward PKS~1830-211 \citep{Muller2011}. In the Milky~Way, CH$_2$NH\ has been detected toward the Galactic center \citep{Godfrey1973,Turner1991} with abundance enhancements toward high-mass star forming regions \citep{Dickens1997}. Compact obscured nuclei are galaxies that host dusty and optically thick galactic centers \citep[e.g.,][]{Sakamoto2010,Gonzalez-Alfonso2012,Aalto2015b, Falstad2019, Falstad2021}. Radiation at X-ray to millimeter wavelengths is strongly attenuated in these regions, where molecular gas column densities exceed $\rm{N(H_2)}=10^{25}$~cm$^{-2}$ \citep[e.g.,][]{Treister2010,Roche2015,Aalto2015a,Aalto2019} and dust temperatures are $\gtrsim100$~K \citep[e.g.,][]{Sakamoto2013,Aalto2015a,Aalto2015b,Aalto2019}. The frequency range from 5 to 80 GHz is the only frequency range where these nuclei may be optically thin \citep{Barcos-Munoz2015,Sakamoto2017,Barcos-Munoz2018,Aalto2019}. 
However, \citet{Martin2016} show that at the high frequency end of this spectral range, dust emission may still be opaque. For this reason, it is not known if an active galactic nucleus (AGN) or a nuclear starburst powers the extreme infrared luminosities ($L_{\rm{IR}}~>10^{11}$~L$_{\odot}$) and outflows of these galaxies. This paper presents a search for the 5.29~GHz CH$_2$NH\ transition toward CONs. \citet{Aalto2015b} found CH$_2$NH\ emission at millimeter wavelengths ($\sim263$~GHz) toward two CONs: IC~860 and Zw~049.057. They suggest that CH$_2$NH\ may be an important tracer of the CON environment. Astrophysical masers are important probes of galaxy nuclei: for example OH~\citep[e.g.,][]{Henkel1990}, H$_2$CO\ \citep[e.g.,][]{Baan2017}, H$_2$O\ \citep[e.g.,][]{Herrnstein1999, Reid2009}, and CH$_3$OH\ \citep[e.g.,][]{Chen2015}. Studies of these masers reveal outflows \citep[e.g., OH;][]{Baan1989}, molecular tori \citep[e.g.,][]{Lonsdale1998}, accretion disks of AGN \citep[e.g.,][]{Reid2009}, intense star formation \citep[e.g.,][]{Hagiwara2001,Brunthaler2009,Gorski2019}, and cloud-scale shocks \citep[e.g.,][]{Ellingsen2017, Gorski2017,Gorski2018}. Because masers often trace specific conditions within the interstellar medium, finding new maser species may be critical to unveiling what is hidden behind the thick dust veils of CONs. We report the detection of the 5.29~GHz CH$_2$NH~$1_{10}-1_{11}$ transition toward six galaxies: Arp~220, IC~860, Zw~049.057, IRAS~17208$-$0014, IRAS~17578$-$0400, and NGC~4418. Toward Arp~220 we provide clear evidence for a CH$_2$NH\ megamaser. \begin{table*} \centering \caption{Observational parameters.} \begin{tabular}{llllllll} \hline\hline Project Code & Source &RA & Dec. 
&$z$ & Complex Gain Calibrator & Channel Width \\ & &(J2000) &(J2000) & & &(kHz) \\ \hline 20A-501 & Zw~049.057 & 15:13:13.10 & +07:13:32.0 & 0.0130 & J1504+1029 & 125 \\ 20A-501 & IRAS~17208$-$0014 & 17:23:21.95 & $-$00:17:00.9 & 0.0428 & J1743$-$0350 & 125 \\ 20A-501 & IRAS~17578$-$0400 & 18:00:31.85 & +23:30:10.5 & 0.0134 & J1743$-$0350 & 125 \\ 20A-501 & NGC~4418 & 12:26:54.62 & $-$00:52:39.4 & 0.0073 & J1224+0330 & 125 \\ 20A-501 & IRAS~22491$-$1808 & 22:51:49.31 & $-$17:52:24.0 & 0.0778 & J2246$-$1206 & 125 \\ 15A-398 & IC~860 & 13:15:03.53 & +24:37:07.9 & 0.0129 & J1504+1029 & 125 \\ 11A-231 & Arp~220 & 15:34:57.27 & +23:30:10.5 & 0.0181 & J1513+2338 & 250 \\ \hline\hline \end{tabular} \label{tab:obsprop} \vspace{-6pt} \end{table*} \begin{table*} \centering \caption{Data cube parameters.} \begin{tabular}{lllllll} \hline\hline Source & Weighting & Molecule & Channel Width & RMS noise & Beam Dimensions & Position Angle\\ & & & km\,s$^{-1}$ & mJy~beam$^{-1}$ & &\\ \hline Zw 049.057 & 0.5 & CH$_2$NH & 20 & 0.37 & 5\farcs27$\times$3\farcs97 & \phantom{$-$}49.94$^{\circ}$\\ & & H$_2$CO & 20 & 0.38 & 5\farcs91$\times$4\farcs19 & \phantom{$-$}51.6$^{\circ}$\\ IRAS~17208$-$0014 & 0.5 & CH$_2$NH & 100 & 0.25 & 5\farcs04$\times$4\farcs81 & \phantom{$-$}15.2$^{\circ}$\\ & & H$_2$CO & 100 & 0.24 & 4\farcs87$\times$4\farcs14 & \phantom{$-$}13.4$^{\circ}$\\ IRAS~17578$-$0400 & 0.5 & CH$_2$NH & 20 & 0.37 & 4\farcs82$\times$3\farcs56 & \phantom{$-$0}6.5$^{\circ}$\\ & & H$_2$CO & 50 & 0.31 & 5\farcs20$\times$3\farcs98 & \phantom{$-$0}7.6$^{\circ}$\\ NGC~4418 & 0.5 & CH$_2$NH & 50 & 0.48 & 2\farcs06$\times$1\farcs18 & \phantom{$-$}48.4$^{\circ}$\\ & & H$_2$CO & 50 & 0.49 & 2\farcs32$\times$1\farcs28 & \phantom{$-$}48.0$^{\circ}$\\ IRAS~22491$-$1808 & 0.5 & CH$_2$NH & 50 & 0.44 & 2\farcs24$\times$1\farcs19 & \phantom{$-$}21.5$^{\circ}$\\ & & H$_2$CO & 50 & 0.44 & 2\farcs72$\times$1\farcs38 & \phantom{$-$}22.4$^{\circ}$\\ IC~860 & 0.5 & CH$_2$NH & 66 & 0.23 & 
0\farcs42$\times$0\farcs35 & $-$63.4$^{\circ}$\\ & & H$_2$CO & 66 & 0.22 & 0\farcs43$\times$0\farcs38 & $-$63.4$^{\circ}$\\ Arp~220 & 0.5 & CH$_2$NH & 18 & 0.19 & 0\farcs36$\times$0\farcs28 & $-$67.7$^{\circ}$\\ Arp~220 & uniform & CH$_2$NH & 18 & 0.33 & 0\farcs32$\times$0\farcs26 & $-$70.6$^{\circ}$\\ \hline\hline \end{tabular} \label{tab:cubeprop} \vspace{-6pt} \end{table*} \section{Observations} Zw~049.057, IRAS~17208$-$0014, and IRAS~17578$-$0400 were observed with the Karl G. Jansky Very Large Array (VLA) in C configuration, and NGC~4418 and IRAS~22491$-$1808 were observed in B configuration (Project Code 20A-501). Individual 128~MHz wide subbands were placed to target the 4.8~GHz H$_2$CO, 5.29~GHz CH$_2$NH, and 6.7~GHz CH$_3$OH\ masers with 125~kHz wide channels. Arp 220 and IC 860 were both observed with the VLA in A configuration (project codes 11A-231 and 15A-398), with 250 kHz and 125 kHz wide channels, respectively. The data were calibrated and imaged in the Common Astronomy Software Applications (CASA) package version 5.0.0 (McMullin et al. 2007). With two exceptions, 3C286 (flux density = 7.47 Jy at 5.1 GHz) was observed as the bandpass and flux density scale calibrator. For Arp~220, J1602+3326 was observed as the bandpass calibrator, and for IRAS~22491$-$1808 3C147 (flux density = 6.74 Jy at 5.5 GHz) was observed as the bandpass and flux density scale calibrator. The details of the observations are summarized in Table \ref{tab:obsprop}. The final data cubes were produced using a Briggs robustness value of 0.5 in the CASA task tclean, except for Arp~220. For Arp~220 two data cubes were made, one with a Briggs robust value of 0.5 and one uniformly weighted for the smallest synthesized beam. The frequency axes were resampled to a velocity resolution of 18~km\,s$^{-1}$ to 100~km\,s$^{-1}$ and RMS values ranged from 0.19~mJy~beam$^{-1}$ to 0.49~mJy~beam$^{-1}$. All velocities are reported in the kinematic local standard of rest (LSRK) frame. 
Continuum subtraction was performed by selecting line-free channels and fitting them with a first-order polynomial in the image domain. The continuum flux density at the line peak was determined by fitting a first-order polynomial to the entire 128 MHz subband. Five percent of the channels at the band edges, as well as channels suspected of containing line emission, were ignored. The properties of each image cube are listed in Table \ref{tab:cubeprop}. \section{Results} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{methanimine-spec.pdf} \caption{Observed CH$_2$NH\ spectra toward six CONs, including both nuclei in Arp~220. The horizontal black dashed line shows zero flux, the vertical black dashed line shows the systemic velocity of each galaxy, and the red line indicates the Gaussian best fit. Systemic velocities are adopted from \citet[][]{Aalto2015a} for IC~860 and Zw~049.057, from \citet{Martin2016} for Arp~220, from \citet{Sakamoto2013} for NGC~4418, from \citet{GarciaBurillo2015} for IRAS~17208$-$0014, and from \citet{Falstad2021} for IRAS~17578$-$0400. } \label{fig:ch2nhlines} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{formaldehyde-spec.pdf} \caption{Observed H$_2$CO\ spectra toward three CONs. The horizontal black dashed line shows zero flux, the vertical black dashed line shows the systemic velocity of each galaxy, and the red line indicates the Gaussian best fit. Systemic velocities are adopted from \citet[][]{Aalto2015a} for IC~860 and Zw~049.057 and from \citet{Falstad2021} for IRAS~17578$-$0400.} \label{fig:h2colines} \end{figure*} The 5.29 GHz CH$_2$NH~$1_{10}-1_{11}$ transition is detected in emission toward the nuclei of six galaxies: Arp~220, IC~860, Zw~049.057, IRAS~17208$-$0014, IRAS~17578$-$0400, and NGC~4418. In addition, we report a new detection of the 4.8~GHz formaldehyde (H$_2$CO) transition toward IRAS~17578$-$0400.
This increases the number of galaxies with known H$_2$CO\ 4.8~GHz emission from five (UGC~5101, IC~860, Arp~220, NGC 3079, and Zw~049.057 \citealp{Mangum2008,Mangum2013}) to six. In all observations the CH$_2$NH\ and H$_2$CO\ lines are spatially unresolved. For each galaxy the CH$_2$NH\ and H$_2$CO\ transitions were fit with a Gaussian profile. Table \ref{tab:lineprop} presents the derived peak flux density ($\rm{S_p}$), continuum flux density at the line peak, velocity full width at half maximum (FWHM), center velocity, integrated flux density, and peak brightness temperature ($\rm{T_{pk}}$) of the CH$_2$NH\ and the 4.8 GHz H$_2$CO{} transition. Figure \ref{fig:ch2nhlines} shows the observed line profiles. For all galaxies the FWHM of the line is $>100$\,km\,s$^{-1}${} except IC~860 where both the H$_2$CO\ and CH$_2$NH\ transitions are spectrally unresolved (FWHM$<66$\,km\,s$^{-1}$). If we assume isotropic radiation and that the line width is nonrelativistic: \begin{equation} \begin{aligned} L &=4 \pi D^2 \int S d \rm{\nu} \\ \rm{d}\nu &= \frac{\rm{dv}}{c}\nu_{0} \end{aligned} ,\end{equation} where $\nu_{0}$ is the rest frequency of the line, D is the luminosity distance to the object, and the integral is the integrated line flux density. The observed luminosity of an emission line is thus: \begin{equation} \begin{aligned} L &=4 \pi D^2 \frac{\nu_0}{c} \int S d\rm{v} \\ \end{aligned} .\end{equation} The luminosity of the CH$_2$NH\ transition, in units common to observational extragalactic astronomy, is calculated as \begin{equation} L_{\rm{CH_2NH}}[L_\odot]=5.53\times10^{-3}\times (D[\rm{Mpc}])^2\times \int S d\rm{v}[\rm{Jy\,km\,s^{-1}}] \end{equation} and for formaldehyde as \begin{equation} L_{\rm{H_2CO}}[L_\odot]=5.04\times10^{-3}\times (D[\rm{Mpc}])^2\times \int S d\rm{v}[\rm{Jy\,km\,s^{-1}}] .\end{equation} The integral represents the integrated line flux density in units of Jy\,km\,s$^{-1}$, and $D$ is the distance in megaparsecs. 
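The numerical coefficients above follow directly from the general luminosity expression; a short check, with standard physical constants assumed, reproduces them:

```python
import math

L_SUN = 3.828e26        # solar luminosity [W]
JY = 1.0e-26            # [W m^-2 Hz^-1]
MPC = 3.0857e22         # [m]
C_KMS = 2.99792458e5    # speed of light [km/s]

def line_luminosity(nu0_ghz, d_mpc, s_dv_jy_kms):
    """Isotropic line luminosity L = 4*pi*D^2 * (nu0/c) * Int(S dv),
    in solar luminosities, for a rest frequency in GHz, a distance in
    Mpc, and an integrated flux density in Jy km/s."""
    flux_wm2 = s_dv_jy_kms * JY * (nu0_ghz * 1.0e9) / C_KMS  # [W m^-2]
    return 4.0 * math.pi * (d_mpc * MPC) ** 2 * flux_wm2 / L_SUN
```

Evaluating at 5.29 GHz (CH$_2$NH) and 4.82966 GHz (H$_2$CO) with unit distance and unit integrated flux recovers the coefficients $5.53\times10^{-3}$ and $5.04\times10^{-3}$. For example, with the integrated flux of Arp~220~W (0.74 Jy km/s) and an assumed distance of roughly 81 Mpc (an illustrative value, not quoted in this excerpt), the function returns approximately 27~L$_{\odot}$.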
Distances are adopted from \citet{Sanders2003}. We provide upper limits toward the galaxies where neither CH$_2$NH\ nor H$_2$CO\ was detected. Among the detected emission, CH$_2$NH\ luminosity varies between 2.9~L$_{\odot}$\ and 27~L$_{\odot}$\, and the H$_2$CO\ luminosity varies between 2.8~L$_{\odot}$\ and 5.3~L$_{\odot}$. \begin{table*} \caption{Emission line parameters.} \begin{tabular}{lrrrrrrr} \hline\hline Source & $S_{\rm cont}$ & $S_{\rm p}$ & $\Delta v_{\rm FWHM}$ & $v_{\rm Center}$ & $\int{S}\,d\nu$ & $T_{\rm pk}^a$& Luminosity\\ & [mJy] & [mJy] & [km\,s$^{-1}${}]& [km\,s$^{-1}${}] & [Jy km\,s$^{-1}${}]& [K] &[L$_{\odot}${}]\\ \hline \textbf{CH$_2$NH} \\ \hline Zw 049.057 & 29.55$\pm$0.06 & 1.25$\pm$0.19 & 238$\pm$46 & 3901$\pm$19 & 0.31$\pm$0.05 & \phantom{0}2.61$\pm$0.40 & \phantom{0}6.22$\pm$0.99 \\ IRAS 17208$-$0014 & 55.14$\pm$0.18 & \phantom{0}0.44$\pm$0.13 & 375$\pm$139 & 12690$\pm$56 & 0.14$\pm$0.05 & \phantom{0}0.81$\pm$0.18 & 32.81$\pm$10.01 \\ IRAS 17578$-$0400 &31.85$\pm$0.05 & \phantom{0}0.90$\pm$0.21 & 158$\pm$53 & 3977$\pm$19 & 0.15$\pm$0.03 & \phantom{0}2.29$\pm$0.53 & \phantom{0}3.04$\pm$0.70 \\ NGC~4418 & 22.94$\pm$0.08 & 0.78$\pm$0.13 & 689$\pm$146 & 2138$\pm$56 & 0.57$\pm$0.10 & 14.02$\pm$2.34 & \phantom{0}3.50$\pm$0.60 \\ IRAS 22491$-$1808 & \phantom{0}3.22$\pm$0.04 &<1.32 & - & - & <0.066 & <19.7 & <39.30 \\ IC 860& 22.24$\pm$0.03 & \phantom{0}2.2$\pm$0.23 & <68 & 3980$\pm$68 & 0.15$\pm$0.02 & 657$\pm$69 & \phantom{0}2.88$\pm$0.38 \\ Arp 220 W &84.00$\pm$0.19 & 2.81$\pm$0.07 & 254$\pm$8\phantom{0} & 5390$\pm$28 & 0.74$\pm$0.02 & 1216$\pm$28 & 26.80$\pm$0.65 \\ Arp 220 E& \phantom{0}54.6$\pm$0.19 & 0.46$\pm$0.05 & 535$\pm$70 & 5348$\pm$24 & 0.26$\pm$0.03 & \phantom{0}229$\pm$22 & \phantom{0}9.47$\pm$0.99 \\ \hline \textbf{H$_2$CO{}} \\ \hline Zw 049.057& 31.09$\pm$0.09 & 1.79$\pm$0.22 & 164$\pm$29 & 3921$\pm$12 & 0.31$\pm$0.04 & \phantom{0}3.74$\pm$0.47 & \phantom{0}6.28$\pm$0.94 \\ IRAS 17208$-$0014& 58.01$\pm$0.16 &<1.23 & - & - & 
<0.031 & <3.19 & <5.19 \\ IRAS 17578$-$0400& 34.10$\pm$0.10 & 0.77$\pm$0.18 & 177$\pm$77 & 3998$\pm$32 & 0.15$\pm$0.04 & \phantom{0}1.95$\pm$0.46 & \phantom{0}5.45$\pm$1.98 \\ NGC~4418& 24.82$\pm$0.08 & <1.47 & - & - & <0.074 & <26.07 & <0.41 \\ IRAS 22491$-$1808& \phantom{0}3.40$\pm$0.02 & <1.32 & - & - & <0.066 & <18.5 & <35.81 \\ IC 860& 22.79$\pm$0.16 & \phantom{0}3.7$\pm$0.20 & <66 & 3852$\pm$66 & 0.24$\pm$0.02 & 1191$\pm$71 & \phantom{0}4.29$\pm$0.35 \\ \hline\hline \end{tabular} \vspace{-6pt} \tablefoot {Uncertainties reported in this table are determined from the covariant matrix of the Gaussian fit.} \tablefoottext{a}{The measured brightness temperatures are all lower limits as the transitions are unresolved toward all galaxies.} \label{tab:lineprop} \end{table*} \section{Discussion} \subsection{Evidence for maser emission} Toward all galaxies the CH$_2$NH\ and H$_2$CO\ transitions are spatially unresolved, and thus the brightness temperatures reported in this paper are lower limits. Even galaxies observed in the most extended configuration of the array, A configuration with synthesized beam dimensions 0\farcs36$\times$0\farcs28\ (Table~\ref{tab:cubeprop}), are unresolved. In the case of Arp~220, the measured brightness temperature of CH$_2$NH\ toward the western core is $>1216\pm28$~K in the Briggs weighted image cube. By uniformly weighting the visibilities the spatial resolution of the CH$_2$NH\ image cube is improved to 0\farcs32$\times$0\farcs26 and the peak brightness temperature is measured to be 1470$\pm$68~K (3.2$\pm$0.3~mJy). Attempting to de-convolve the CH$_2$NH\ source associated with the western nucleus of Arp~220 with the CASA task {\sc{imfit}}, we found an unresolved source. The line emission in Arp 220 must be superthermal because its peak brightness temperature greatly exceeds the physical temperatures (gas kinetic and dust), $\leq 300$ K, derived for Arp 220 and other CONs \citep{Sakamoto2010,Aalto2019,Zschaechner2016}. 
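The brightness-temperature lower limits quoted here follow from the standard Rayleigh-Jeans conversion of a beam-averaged flux density; a minimal sketch (physical constants assumed, Gaussian beam):

```python
import math

def brightness_temperature(s_mjy, nu_ghz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature [K] of an unresolved source
    filling a Gaussian beam: T_B = c^2 S / (2 k nu^2 Omega), with beam
    solid angle Omega = pi * bmaj * bmin / (4 ln 2)."""
    c = 2.99792458e8                      # [m/s]
    k = 1.380649e-23                      # [J/K]
    arcsec = math.pi / (180.0 * 3600.0)   # [rad]
    omega = math.pi * bmaj_arcsec * bmin_arcsec * arcsec**2 \
        / (4.0 * math.log(2.0))           # beam solid angle [sr]
    s_si = s_mjy * 1.0e-29                # [W m^-2 Hz^-1]
    return c**2 * s_si / (2.0 * k * (nu_ghz * 1.0e9) ** 2 * omega)
```

With the Briggs-weighted Arp~220~W values (2.81 mJy in a 0\farcs36$\times$0\farcs28 beam at the 5.29 GHz line frequency), this returns about 1200 K, consistent with the $>1216\pm28$ K lower limit.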
Physical temperatures of this order have been derived, for example, for NGC~4418 \citep{Sakamoto2010}, IC~860 \citep{Aalto2019}, and Arp~220 \citep{Zschaechner2016}. The high brightness temperatures measured toward IC~860 and Arp~220 can be attributed to maser emission; however, the remaining detections require higher angular resolution observations to confirm their masing nature. If the CH$_2$NH\ emission traces the innermost region (<50~pc) of the CONs, the brightness temperatures may be significantly higher. \citet{Aalto2015b,Aalto2019} show that the lines of vibrationally excited HCN (HCN-VIB) toward CONs trace structures smaller than 0\farcs2 ($\lesssim$60~pc). \citet{Costagliola2013} show that the CON of NGC~4418 has an angular diameter less than 0\farcs3 (50~pc). Assuming that the CONs are smaller than 50~pc and that the CH$_2$NH\ masers probe the CONs, the brightness temperatures toward NGC~4418, Zw~049.057, IRAS~17208$-$0014, IC~860, and Arp~220 W would respectively be 380~K, 2000~K, 6700~K, 3400~K, and 7900~K. Thus, it is likely that the 5.29~GHz line is masing toward all the CONs we have observed. The measured isotropic luminosity of the CH$_2$NH\ line is $>2$~L$_{\odot}$\ and $\lesssim30$~L$_{\odot}$\ in all cases where it is detected. A megamaser is defined as a maser 10$^6$ times as luminous as the average Milky Way maser of the same transition. For example, the cutoff for H$_2$O\ megamasers is 20~L$_{\odot}$\ (see \citealt{Hagiwara2001} for a description of this nomenclature). The Milky Way detections of the 5.29 GHz CH$_2$NH~$1_{10}-1_{11}$ transition have luminosities of $\sim1.1\times10^{-6}$~L$_{\odot}$\ and $\sim0.5\times10^{-6}$~L$_{\odot}$\ \citep{Faure2018} if one adopts a distance of 8.3~kpc \citep{Reid2014}. The least luminous detection in our study is toward IC~860, where the line luminosity is 2.88~L$_{\odot}$, and the most luminous is toward the western core of Arp~220, where the line luminosity is 26.8~L$_{\odot}$.
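Taking the brighter of the two Milky Way detections quoted above ($\sim1.1\times10^{-6}$~L$_{\odot}$) as the Galactic reference, the megamaser criterion reduces to a one-line check:

```python
def is_megamaser(l_obs_lsun, l_galactic_lsun=1.1e-6):
    """A source qualifies as a megamaser if it is at least 1e6 times as
    luminous as a typical Galactic maser in the same transition; here the
    Sgr-B2 value ~1.1e-6 L_sun is assumed as the Galactic reference."""
    return l_obs_lsun / l_galactic_lsun >= 1.0e6
```

Both the faintest detection (IC~860, 2.88~L$_{\odot}$, an amplification factor of $\sim2.6\times10^{6}$) and the brightest (Arp~220~W, 26.8~L$_{\odot}$) pass the criterion.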
All the galaxies we have observed with detections of the CH$_2$NH\ line qualify as megamasers assuming Sgr~B2 masers from \citet{Faure2018} represent the more luminous end of the distribution of Milky Way CH$_2$NH\ masers. \subsection{Non-LTE modeling} \label{sec:LVG} \cite{Baan2017} show that population inversions of H$_2$CO can be maintained by infrared pumping based on calculations with the nonlocal thermodynamic equilibrium (non-LTE) radiative transfer code {\tt RADEX} \citep{vanderTak2007}. Such models of maser emission based on a simple escape probability approximation can be reliable until the maser saturates at optical depths $\tau \leq -1$. However, \citet{Baan2017} did not discuss whether the luminosity implied by their assumed blackbody continuum at $T_{\rm bb} = 50$ K was consistent with the observed infrared emission on the same small angular scale, nor did they include the near- and mid-infrared radiation that is known to be present in galaxies' luminous CONs, and that must also excite vibrational transitions in molecules such as H$_2$CO and CH$_2$NH. \citet{Faure2018} recently investigated the CH$_2$NH maser emission in the Galactic-center molecular cloud complex Sgr B2. To do this, they computed rate coefficients for collisional excitation of CH$_2$NH by para-H$_2$ for the 15 lowest rotational levels of methanimine at temperatures up to 30 K. However, their {\tt RADEX} analysis of the methanimine emission apparently omitted the infrared continuum radiation of the source and ignored radiative excitation processes except for the 2.7 K cosmic background radiation. \citet{Faure2018} did demonstrate a robust population inversion with excitation temperature $T_{\rm ex} = -0.48$~K in the $1_{10} - 1_{11}$ transition at kinetic temperature $T=30$ K and a para-hydrogen density of $10^4$ cm$^{-3}$. The physical conditions in CONs of galaxies may be more extreme than those explored in previous analyses of centimeter-wave masers in H$_2$CO and CH$_2$NH.
Not only are the temperatures and densities higher than $30$ K and $10^4$ cm$^{-3}$, respectively, but the continuous radiation from centimeter wavelengths to the near-infrared filling these regions is extremely intense (cf. \citealt{Sakamoto2013}, \citealt{Aalto2015a}, \citealt{Aalto2015b}, \citealt{Gorski2018}, \citealt{Mangum2019}). To explore the excitation of weak maser emission in such regions, we calculated non-LTE radiative transfer models with a code that fully incorporates all the features originally intended for {\tt RADEX} as described by \cite{vanderTak2007}. The new code, {\tt GROSBETA}, retains the simplified mean-escape probability treatment of radiative transfer but allows for arbitrary continuum spectral energy distributions, incorporates chemical formation-pumping where appropriate, and solves for many different molecules in the same computation. \citet{Tabone2021} describe the code and its application to superthermal OH emission. In order to illustrate the full range of radiative effects on the excitation of a molecule such as methanimine, we expanded the spectroscopic data files to include all fundamental vibrational transitions and explored a range of plausible infrared continua (see the appendices). The non-LTE computation solves for the steady-state density in each vibration-rotation level of a molecule subject to collisional excitation at kinetic temperature $T_k$ and number densities of the main collision partners, for example $n({\rm H}_2)$. The other input parameters are the internal brightness of continuum radiation $J_{\nu}$ with dimensions [Jy sr$^{-1}$], the path length through the source $R$, and the molecular column density $N$ [cm$^{-2}$] over the FWHM of the line-of-sight velocity distribution, $\Delta V$ [km s$^{-1}$]. The effective solid angle of the source is $\Omega = \pi (R/D_{\rm L})^2$, where $D_{\rm L}$ is the luminosity distance. 
The observable flux in a line is given by \begin{equation} f_{\nu} = \Omega \Bigl( I_{\nu,{\rm core}} \exp\bigl(-\tau_{\nu}\bigr) + B_{\nu}(T_{\rm ex}) \bigl( 1 - \exp(-\tau_{\nu}) \bigr)\Bigr) \label{jb1} ,\end{equation} where $I_{\nu,{\rm core}}$ is the surface brightness of the continuum in the core, $B_{\nu}(T_{\rm ex})$ is the Planck function evaluated at the excitation temperature of the line, and $\tau_{\nu}$ is the optical depth in the line. The first term represents the amplification (absorption) of the continuum radiation when the optical depth is negative (positive) and the second term is the self-emission of the molecular source. We take the formaldehyde and methanimine maser emission in IC 860 as a test case. A crucial part of the non-LTE excitation calculation is the specification of the internal radiation field. For IC 860 we adopt observed fluxes as collected in the NASA/IPAC Extragalactic Database (NED)\footnote{https://ned.ipac.caltech.edu/, NED is funded by the US National Aeronautics and Space Administration and operated by the California Institute of Technology.}. The observed flux densities from meter to centimeter wavelengths are well fitted by a flat power-law spectrum, $S_{\nu} \propto \nu^{-0.215}$; therefore, we assume that the power-law component is very compact and contained entirely within the $0.42\times 0.35$ arcsecond projected beam area of the VLA observations presented here. The corresponding solid angle is $\Omega = 3\times 10^{-12}$ sr. Most of the power in the observed spectrum of IC 860 is contained in the submillimeter and infrared region (frequencies $3.5\times 10^{12}$ to $6.5\times 10^{13}$ Hz). The integrated flux over this frequency interval, $7.1\times10^{-12}$ W m$^{-2}$, corresponds to a luminosity of $6.3\times 10^{11}$ L$_{\odot}$. The observed power is thought to be short-wavelength (visible, ultraviolet, X-ray) light from a central starburst and/or AGN that has been absorbed and reradiated by surrounding dust.
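The behavior of Eq.~(\ref{jb1}) and the beam solid angle can be illustrated with a short script; the excitation temperature (the $-0.48$~K Galactic value of \citealt{Faure2018}) and the optical depth $\tau=-0.1$ are illustrative choices, not fitted quantities.

```python
import math

H, K, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8

def planck_bnu(t_ex, nu_hz):
    """B_nu(T) [W m^-2 Hz^-1 sr^-1]; a negative T_ex (inversion) is allowed."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (K * t_ex))

def line_flux(omega_sr, i_core, t_ex, tau, nu_hz):
    """Eq. (1): amplified (tau < 0) or absorbed (tau > 0) continuum plus the
    self-emission of the molecular source."""
    att = math.exp(-tau)
    return omega_sr * (i_core * att + planck_bnu(t_ex, nu_hz) * (1.0 - att))

# Solid angle of the 0.42" x 0.35" beam (Gaussian-ellipse area, pi/4 factor):
ARCSEC2_SR = (math.pi / 180.0 / 3600.0) ** 2
omega = math.pi / 4.0 * 0.42 * 0.35 * ARCSEC2_SR    # ~2.7e-12 sr, i.e. ~3e-12

# A 22.39 mJy continuum core amplified by a weak inversion (tau = -0.1):
i_core = 22.39e-3 * 1.0e-26 / omega                 # W m^-2 Hz^-1 sr^-1
f_line = line_flux(omega, i_core, -0.48, -0.1, 5.29e9)
f_cont = omega * i_core
print(omega, f_line / f_cont)
```

With $\tau=-0.1$ the line flux exceeds the bare continuum by the expected factor $e^{0.1}\approx1.105$; the self-emission term is negligible in this regime.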
Owing to the lack of far-infrared measurements at sub-arcsecond resolution, we do not know what fraction of the observed infrared power, $\varphi$, is contained within the solid angle $\Omega$ of the centimeter-wave radio source. However, the molecules within $R=53$~pc of the center of IC 860 must be exposed to a mean brightness $J_{\nu}$ given by \begin{equation} J_{\nu} \Omega = 22.39 \Bigl({{\nu}\over{5.29\,\mathrm{GHz}}}\Bigr)^{-0.215} 10^{-0.4 A_{\lambda}} + \varphi S_{\nu,{\rm obs}} \;\;\; \mathrm{mJy}, \label{jb2} \end{equation} where $A_{\lambda}$ is the extinction in magnitudes for a standard interstellar extinction law, normalized to visual (550 nm wavelength) extinction $A_V=400$ mag. Including the extinction ensures that the power-law component does not exceed the observed central infrared radiation. The model radiation field is presented in Appendix A.2. It can be useful to define the equivalent radiation brightness temperature $T_{\rm rad}$ such that the mean internal brightness is \begin{equation} J_{\nu} = B_{\nu}(T_{\rm rad}) \;\;\; , \end{equation} together with a function \begin{equation} y(\nu) = \Bigl( \exp(h\nu / kT_{\rm rad}) - 1\Bigr)^{-1} \;\;\; . \end{equation} For any transition at frequency $\nu$ with spontaneous transition probability $A_{u,\ell}$ from upper state $u$ of statistical weight $g_u$ to lower state $\ell$ with statistical weight $g_{\ell}$, the rate of stimulated emission in this radiation field is $y(\nu) A_{u,\ell}$, while the rate of absorption is $y(\nu) g_u A_{u,\ell}\, /\, g_{\ell}$. For example, the CH$_2$NH transition $1_{10} - 1_{11}$ at 5.29 GHz has $A_{u,\ell} = 1.55\times 10^{-9}$ s$^{-1}$, while $T_{\rm rad} = 8791$ K in the power-law component of the internal radiation field, so that $y=34625$ and the pumping rate $y A_{u,\ell} = 5.35\times 10^{-5}$ s$^{-1}$. Examples of pumping rates for two values of $\varphi = 1.0$ and $0.1$ are listed for a number of transitions in Table \ref{tab:pumping}.
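The stimulated-emission rate quoted for the 5.29~GHz line follows directly from the occupation number $y(\nu)$; a minimal numerical check:

```python
import math

H = 6.62607015e-34   # J s
K = 1.380649e-23     # J / K

def occupation(nu_hz, t_rad):
    """Photon occupation number y(nu) = 1 / (exp(h nu / k T_rad) - 1)."""
    return 1.0 / math.expm1(H * nu_hz / (K * t_rad))

# T_rad = 8791 K for the power-law component at 5.29 GHz, and
# A_ul = 1.55e-9 1/s for the 1_10 - 1_11 transition (values from the text):
y = occupation(5.29e9, 8791.0)
pump = y * 1.55e-9          # stimulated-emission rate y * A_ul  [1/s]
print(round(y), pump)
```

This reproduces $y \approx 34625$ and a pumping rate of $\approx5.4\times10^{-5}$~s$^{-1}$, matching the quoted values to within the rounding of $T_{\rm rad}$.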
This shows that induced absorption out of $J_{K_{\rm a}K_{\rm c}} = 1_{10}$, the upper level of the 5.29 GHz transition, is faster than spontaneous decay at 5.29 GHz for various rotational and vibration-rotation transitions at frequencies between 133 GHz and 50 THz. The highest pumping rates occur in the millimeter-to-submillimeter wavelength part of the spectrum. In the adopted radiation field, the vibration-rotation transitions in the infrared are not very important in the excitation of the $1_{10}$ and $1_{11}$ levels. Comparison of the rates also suggests why the centimeter-wave continuum enters the interpretation in two distinct ways: (1) the internal radiation (Eq. \ref{jb2}) felt by the molecules is so strong that stimulated emission in the 5.29 GHz transition is much faster than spontaneous emission, and (2) the observable continuum emission is strong and must be taken into account in the description of the line emission (Eq. \ref{jb1}). Finally, it is useful to compare collisional de-excitation rates with the induced radiative rates. The $1_{10}$ level has a total spontaneous decay rate of $5.64\times 10^{-5}$ s$^{-1}$ mostly in the 166.85 GHz transition. The few collision rates presented by \citet{Faure2018} suggest downward collisional rate coefficients from $1_{10}$ on the order of $10^{-11}$ cm$^3$ s$^{-1}$; therefore, in the absence of any radiative couplings to higher states, collisions could dominate the excitation of $1_{10}$ at densities rather greater than $n({\rm H}_2) \sim 10^6$ cm$^{-3}$. With a realistic description of the radiation environment, on the other hand, we see that radiative pumping out of $1_{10}$ occurs with a rate on the order of $y A \sim 10^{-3}$~s$^{-1}$. Thus, collisions are unlikely to thermalize the populations of low-excitation levels such as $1_{10}$ unless the hydrogen density exceeds $10^8$ cm$^{-3}$, although collisions can suppress the population inversion at lower densities, on the order of $10^6$ cm$^{-3}$.
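The density scales in this comparison follow from simple rate ratios; the sketch below restates the representative numbers from the text and is not a statistical-equilibrium calculation.

```python
# Density at which collisions compete with a given rate: n = rate / k_coll,
# with k_coll ~ 1e-11 cm^3 s^-1 the downward rate coefficient of Faure et al.
k_coll = 1.0e-11     # cm^3 s^-1
a_tot = 5.64e-5      # s^-1, total spontaneous decay rate of the 1_10 level
pump = 1.0e-3        # s^-1, radiative pumping rate y*A in the adopted field

n_vs_spontaneous = a_tot / k_coll    # ~5.6e6 cm^-3
n_vs_pumping = pump / k_coll         # ~1e8  cm^-3
print(f"{n_vs_spontaneous:.1e}  {n_vs_pumping:.1e}")
```

The two ratios reproduce the thresholds quoted above: collisions begin to compete with spontaneous decay near $10^6$ cm$^{-3}$, but cannot overcome radiative pumping below $\sim10^8$ cm$^{-3}$.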
\begin{figure*} \centering \includegraphics{Rad_Pump_Models.pdf} \caption{Non-LTE models showing radiative excitation of the 5.29~GHz CH$_2$NH\ line. The top row of plots shows the line radiation temperature, and the bottom row shows the optical depth. The horizontal axis in all plots is the H$_{2}$\ density. The horizontal black dotted line indicates a vertical axis value of 0.0 on each plot. The violet horizontal dotted line indicates the observed brightness temperature toward IC~860. The horizontal thick red dashed line indicates the optical depth limit of \citet{vanderTak2007}: line brightness temperatures may not be trusted if the optical depth is less than $-0.1$. Column densities for each pair of $\varphi$ are labeled at the top of each pair of columns. Temperatures of the molecular gas are labeled in the upper left corner. The line width input to the mean escape probability approximation is 100~km\,s$^{-1}$. The maser action of CH$_2$NH\ in an infrared radiation field is observed in all cases from $10^{2}~\rm{cm}^{-3}$ to $\sim10^{5}~\rm{cm}^{-3}$.} \label{fig:radex_w_background} \end{figure*} The non-LTE treatment of excitation and radiative transfer includes all processes, such as those outlined in the preceding paragraph. Thus we can see what parameter space allows for a CH$_2$NH maser of the observed intensity in IC~860. The results of these models are shown in Fig. \ref{fig:radex_w_background}. The models that reproduce the observed line brightness temperature, $T_{\rm line} \geq 657$ K, have total densities in the range $n({\rm H}_2)=10^2$ to $10^5$ cm$^{-3}$. The solutions are insensitive to kinetic temperature, which reflects the dominance of radiative pumping in the excitation. Large column densities and abundances are needed to reproduce the line intensity when $\varphi = 1$.
At lower values (e.g., $\varphi \leq 0.1$), the required column density of CH$_2$NH is much lower, $\gtrsim1\times 10^{16}$ cm$^{-2}$, so that the required abundance varies inversely with the hydrogen density. Population inversion and weak amplification ($\tau \approx -0.1$) are maintained over a wide extent of parameter space, producing line strengths of $\gtrsim100$~K, without any need for delicate tuning of density or abundance. It should be possible to constrain the value of $\varphi$ with further observations at millimeter-to-submillimeter wavelengths, both in lines and continuum, with sub-arcsecond angular resolution. For example, the model predicts that the $3_{03}-2_{12}$ transition at 35.055 GHz will also be a maser under the same conditions that explain the 5.29 GHz line intensity, but with $T_{\rm line} \sim 2000$ K, 100 K, and 25 K at $\varphi = 1, 0.1$, and $0.01$, respectively. Numerous CH$_2$NH transitions at frequencies 200 to 300 GHz are predicted to appear strongly in emission at $\varphi = 1$, but to go into absorption with $\tau \sim 1$ when $\varphi \leq 0.1$. In particular, the predicted flux ratio of the $7_{16} - 7_{07}$ transition (frequency $250.162$ GHz, excitation energy $E_u/k=97$ K) and the $4_{04} - 3_{03}$ transition (at $254.685$ GHz with $E_u/k=31$ K) is $S_{250}/S_{254}\approx 2.5$ when $\varphi=1$, but becomes negative when the radiation scaling falls to $\varphi = 0.1$. These numbers refer to fluxes measured in the same effective solid angle as the centimeter-wave masers, $\Omega = 3\times 10^{-12}$ sr. In summary, panchromatic, non-LTE radiative transfer models of both CH$_2$NH and H$_2$CO reproduce the observed fluxes of centimeter-wave maser emission at 0\farcs4 angular resolution in IC~860. The population inversions and amplification are robust over a range of densities and temperatures.
The compact, intense centimeter-wave continuum emission plays a major role both in the excitation of the masing levels and as the background flux that is amplified in the lines. The molecular excitation may be dominated by radiative processes over the entire parameter space that sustains the required population inversions in both CH$_2$NH and H$_2$CO. The models predict fluxes of additional strong lines at mm wavelengths, which could be used to better constrain the densities and abundances. \subsection{Comparing CH$_2$NH\ masers in CONs with other maser species} \subsubsection{H$_2$O\ masers} The seven galaxies reported in this paper are all classified as CONs, and CH$_2$NH~$1_{10}-1_{11}$ emission is detected toward six of these galaxies. Surveys for megamasers in AGN have low success rates, such as \citet{Sato2005}, where 90 Seyfert 2 or LINER galaxies were surveyed, resulting in a single H$_2$O\ megamaser detection. Of the $>2800$ galaxies surveyed for H$_2$O\ megamasers, 178 have clear detections \citep{Braatz2018IAUS}. Often the detection rate is on the order of 1\% \citep[e.g.,][]{Sato2005,Bennert2009,Braatz2018IAUS}. The H$_2$O\ megamaser detection rate toward Compton-thick AGN is much greater, $\sim50\%$ \citep{Castangia2019}. While the sample size is small, the CH$_2$NH\ detection rate is similar toward CONs, $\sim86\%$. The detection rates of megamasers toward CONs suggest that megamasers may be an indicator of Compton-thick environments; however, a comparison of CH$_2$NH\ emission in a sample of non-Compton-thick galaxies is needed to correctly test this. The link between the growth of supermassive black holes and megamasers is well established \citep[e.g.,][]{Reid2009}. 22~GHz H$_2$O\ maser structure has been observed in the accretion disks, jets, and outflows of AGN \citep[e.g.,][]{Henkel2005}. Compact OH megamasers trace molecular tori around AGN \citep[e.g.,][]{Lonsdale1998}.
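For context, these detection fractions carry large small-number uncertainties; the binomial standard errors below are our own rough estimates, not values from the cited surveys.

```python
import math

def detection_fraction(n_det, n_obs):
    """Detection fraction with a simple binomial standard error."""
    p = n_det / n_obs
    return p, math.sqrt(p * (1.0 - p) / n_obs)

# 6/7 CONs with CH2NH detections (this work) vs. 178 clear detections out of
# >2800 galaxies surveyed for H2O megamasers (Braatz et al. 2018):
p_con, e_con = detection_fraction(6, 7)
p_h2o, e_h2o = detection_fraction(178, 2800)
print(f"CONs: {p_con:.2f}+/-{e_con:.2f}  H2O surveys: {p_h2o:.3f}+/-{e_h2o:.3f}")
```

Even with the $\sim$13\% binomial error on a seven-galaxy sample, the CON detection fraction remains far above that of the blind H$_2$O\ megamaser surveys.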
The spectrum of the CH$_2$NH\ megamaser does not yet show extreme velocity components ($>1000$~km\,s$^{-1}$) similar to the disk 22~GHz H$_2$O\ megamasers. The CH$_2$NH\ megamasers seem to have more in common with masers observed toward starbursts, jets, or outflows \citep[e.g.,][]{Peck2003,Kondratko2005,konig2017}. The CH$_2$NH\ line width in all cases is $>100$~km\,s$^{-1}$\ except for IC~860. The unresolved CH$_2$NH\ megamaser in Arp~220 implies the source is $<103$~pc in diameter (0\farcs32 adopting a distance of 81.8~Mpc) and in IC~860 the source is $<100$~pc (0\farcs35 adopting a distance of 59.1~Mpc). Toward both the eastern and western nuclei of Arp~220, which have respective velocities of 5454~km\,s$^{-1}$\ and 5342~km\,s$^{-1}$\ \citep{Martin2016}, the CH$_2$NH\ line is blueshifted by $\sim100$~km\,s$^{-1}$. However, these properties are not unique to AGN, unlike extreme velocity components, and may also result from nuclear starbursts \citep[e.g.,][]{Brunthaler2009,konig2017,Gorski2019}. \subsubsection{36~GHz CH$_3$OH\ masers} \citet{Suzuki2016} hypothesize that CH$_2$NH\ may be abundant toward class I CH$_3$OH\ maser sources because both molecules can be formed by hydrogenation of molecules on grain surfaces. CH$_2$NH\ can be formed by hydrogenation of HCN \citep{Theule2011} and CH$_3$OH\ by hydrogenation of CO. \citet{Suzuki2016} target 12 high-mass star forming regions and two low-mass star forming regions known to have Class I CH$_3$OH\ masers. CH$_3$OH\ masers are divided into two subclasses depending on their pumping scheme. Class I CH$_3$OH\ masers are pumped via collisions and class II CH$_3$OH\ masers are radiatively pumped \citep{Menten1991a}. CH$_2$NH\ was detected toward eight of the high-mass star forming regions with a fractional abundance in the range of $\sim10^{-9}$ to $\sim10^{-8}$.
The existence of hyper-compact H\,{\sc ii}\ regions and weak H54$\beta$ emission, or lack thereof, points to an early stage of high-mass star formation. Sources with evolved H\,{\sc ii}\ regions show lower CH$_2$NH\ abundances. These results suggest that large CH$_2$NH\ abundances are related to high-mass star formation and the presence of class I CH$_3$OH\ masers. \begin{figure}[] \centering \includegraphics[width=0.48\textwidth]{LCH3OH-CH2NH-OH-comp.pdf} \caption{Relationship between the infrared luminosity of the host galaxy and CH$_3$OH, OH, or CH$_2$NH\ megamasers. Galaxies with 36~GHz CH$_3$OH\ masers from \citet{Chen2016} are plotted with purple pentagons, and galaxies with CH$_2$NH\ masers are plotted with orange diamonds. OH megamasers for six of the seven CONs (all except IRAS 17578$-$0400) from \citet{Darling2002} and \citet{Wiggins2016} are plotted with green squares. The infrared luminosities of the galaxies are adopted from \citet[][]{Sanders2003}. The purple dotted line represents the linear best fit to the CH$_3$OH\ data from \citet{Chen2016}, the orange dashed line represents the best fit to the CH$_2$NH\ data (this paper), and the green dash-dotted line is the linear best fit to the OH maser luminosities from \citet{Darling2002} and \citet{Wiggins2016}. } \label{fig:methanolmethanimine} \end{figure} Since CH$_2$NH\ abundance and class I CH$_3$OH\ masers are likely related, we compare extragalactic CH$_3$OH\ masers with CH$_2$NH\ masers. Figure \ref{fig:methanolmethanimine} shows the infrared luminosity L$_{\rm IR}$\ \citep[][]{Sanders2003} plotted against luminosities of the 36~GHz CH$_3$OH\ maser \citep{Chen2016} and 5.29~GHz CH$_2$NH\ maser (this paper) for all presently known extragalactic sources. No galaxies have yet been observed in both lines besides Arp~220.
\citet{Chen2016} note a strong correlation between the 36~GHz CH$_3$OH\ maser and infrared luminosities, ${\rm L_{methanol}}\propto {\rm L_{IR}}^{1.36}$ (R=0.92). The strong correlation with the infrared luminosity of the galaxy suggests that the 36~GHz CH$_3$OH\ maser is related to star formation processes. However, when corrected for Malmquist bias the ${\rm L_{methanol}}$--${\rm L_{IR}}$ relation is shallower, with a slope of $1.01\pm0.18$. Indeed, when the 36~GHz maser is observed with sufficient angular resolution to be spatially resolved toward star forming galaxies, the maser reveals large-scale shocks \citep[$>10$~pc; e.g.,][]{Gorski2017,Gorski2018,Gorski2019} potentially indicating cloud-cloud collisions. If the 5.29~GHz CH$_2$NH\ maser is linked to class I CH$_3$OH\ masers then a similar tight relationship with the infrared luminosity of the galaxy may be observed. We find a weak correlation (Fig. \ref{fig:methanolmethanimine}; R=0.78) with the infrared luminosity of the galaxy: \begin{equation} \rm{log}\,L_{\rm{CH_2NH}}[L_\odot]=(0.61\pm0.34)\,\rm{log}\,L_{IR}[L_\odot] - (6.32\pm4.01) .\end{equation} The CH$_2$NH\ maser appears less strongly correlated with the infrared luminosity of the host galaxy. This may be a result of having a small sample of galaxies or that the CH$_2$NH\ maser traces a different physical process than 36~GHz CH$_3$OH\ masers. \subsubsection{OH masers} The CH$_2$NH\ maser is emitted from regions smaller than 103~pc and 100~pc toward Arp~220 and IC~860, respectively. OH megamasers are also known to trace the dense molecular environment around CONs. \citet{Momjian2006}, \citet{Pihlstrom2001}, and \citet{Lonsdale1998} show, with Very Long Baseline Interferometry (VLBI) observations, complicated structures in Arp~220, IRAS 17208$-$0014, and III~Zw~35 traced by OH masers. The OH masers trace material near the sphere of influence of the supermassive black hole (e.g., $<30$~pc; \citealp{Onishi2017}).
A strong correlation between OH and CH$_2$NH\ masers may indicate the CH$_2$NH\ maser is tracing the feedback resulting from within the CON. \begin{figure}[] \centering \includegraphics[width=0.45\textwidth]{LOH-CH2NH-comp.pdf} \caption{Comparison between the luminosity of the 5.29~GHz CH$_2$NH\ megamasers (this work) and the OH megamasers from \citet{Darling2002} and \citet{Wiggins2016}. No OH megamaser has been observed yet in IRAS~17578$-$0400. All galaxies are labeled, and the best-fit linear relationship ($\rm{L}_{\rm{CH_2NH}} \propto \rm{L}_{\rm{OH}}^{0.36\pm0.05} $, R=0.97) is shown with a blue dotted line. The shaded blue area represents the uncertainty in the best fit. } \label{fig:OH-methanimine} \end{figure} For the CONs we find that the OH maser luminosity scales with infrared luminosity as (Fig. \ref{fig:methanolmethanimine}; R=0.84): \begin{equation} \rm{log}\,L_{\rm{OH}}[L_\odot]=(1.66\pm0.84)\,\rm{log}\,L_{IR}[L_\odot] - (18.2\pm10.0) .\end{equation} The OH megamaser luminosities for our sample of CONs are adopted from \citet{Wiggins2016} except for IRAS 22491$-$1808, which is adopted from \citet{Darling2002}, and IRAS~17578$-$0400, for which an OH detection is absent in the literature. This is consistent with the relationship found by surveys of IRAS galaxies by \citet{Kandalian1996,Darling2002}. We find a strong correlation between the 5.29 GHz CH$_2$NH\ and OH maser luminosities (Fig. \ref{fig:OH-methanimine}; R=0.97): \begin{equation} \rm{log}\,L_{\rm{CH_2NH}}[L_\odot]=(0.36\pm0.05)\,\rm{log}\,L_{OH}[L_\odot] - (0.45\pm0.09) .\end{equation} Toward Arp~220, compact ($<$1~pc) and diffuse OH maser emission is revealed \citep{Lonsdale1998}.
The compact OH maser regions appear to be pumped by collisions, whereas the diffuse masers are pumped via infrared radiation, though \citet{Pihlstrom2001} and \citet{Parra2005} argue that the observed differences in the line ratios between compact and diffuse phases could be a natural property of one phase characterized by clumpy unsaturated masers. The strong correlation between OH and CH$_2$NH\ maser luminosities, and the spatial coincidence within $\sim100$~parsecs, suggest that they trace similar processes in the CONs. The critical density of the upper 1$_{10}$ state of CH$_2$NH\ is estimated as $\sim1\times10^5$~cm$^{-3}$ \citep{Faure2018}, whereas OH mases at densities an order of magnitude lower, $>10^4$~cm$^{-3}$ \citep{Baan1991}. Consequently, the CH$_2$NH\ maser may trace denser structures in CONs. Parsec-scale resolution observations of the CH$_2$NH\ maser may provide insights into the nature of these massive structures. \subsection{Maser luminosity and infrared relationship} We found that the CH$_2$NH\ maser is weakly correlated with the infrared luminosity of the host galaxy. Usually, the luminosity of a radiatively pumped maser is proportional to the availability of pumping photons and stimulating photons, for example $L_{\rm{maser}} \propto L_{\rm{stim}}L_{\rm{pump}}$ \citep{Baan1989}. However, one can, perhaps naively, assume that for a low-gain maser pumped by the infrared radiation field, stimulated by the radio continuum, and unsaturated, the luminosity of the maser is proportional to the square of the infrared luminosity (e.g., $L_{\rm{maser}} \propto L_{\rm{IR}}^2$). This is because the stimulating photons from the radio continuum are proportional to the infrared flux. Saturated masers no longer grow in luminosity exponentially, so the luminosity of the maser will be proportional to the infrared luminosity (e.g., $L_{\rm{maser}} \propto L_{\rm{IR}}$).
Thus, for an unspecified number of radiatively pumped masers, we expect the luminosity of the maser to be proportional to $L_{\rm{IR}}^\alpha$ where $\alpha$ has a value between 1 and 2 (see Sect. 5.4 of \citealp{Darling2002} and references therein for a more detailed discussion). We observe that the CH$_2$NH\ masers are underluminous for infrared pumping in this scenario, with $L_{\rm{CH_2NH}} \propto L_{\rm{IR}}^{0.61\pm0.34}$. We can imagine a few scenarios that might explain this situation. First, the maser may still be radiatively pumped, but at millimeter-to-submillimeter wavelengths. A lack of infrared pumping is plausible, as much of the mid-infrared is attenuated and reemitted at longer wavelengths \citep{Aalto2015a,Aalto2019}, giving rise to a weaker correlation between the maser and the infrared luminosity of the galaxy. Our models show that pumping mainly occurs from the intense radiation at millimeter-to-submillimeter wavelengths, supporting this scenario. Second, the CH$_2$NH\ maser is not subject to the total infrared radiation field, but still radiatively pumped. Perhaps the regions responsible for the CH$_2$NH\ maser are shielded from the intense radiation from the CON environment, and they are pumped by a more local source such as a nearby star-forming region. As these regions also excite OH masers this gives rise to the strong correlation between CH$_2$NH\ and OH masers. In this scenario CH$_2$NH\ absorption lines would likely be observable in the mid-infrared. Last, the CH$_2$NH\ molecules do not experience the radiation field and the maser is collisionally pumped. The radiation field is either too heavily shielded or too dilute to pump the molecules. The CH$_2$NH\ maser is then related to the same physical process as the collisionally pumped OH masers, yielding a strong correlation. \citet{Lonsdale1998} suggest that shock fronts from molecular tori around newly formed AGN could result in these spectrally broad features.
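The fitted slopes discussed in this section can be compared directly against the $\alpha \in [1,2]$ range expected for radiatively pumped masers; the sketch below simply restates the fitted values from this work and from \citet{Chen2016}, and the $1\sigma$ consistency check is our own illustration.

```python
# Fitted maser-luminosity vs. L_IR slopes (value, 1-sigma uncertainty),
# compiled from this section; alpha in [1, 2] is the range expected for
# radiatively pumped masers (saturated ~1, unsaturated continuum-stimulated ~2).
slopes = {
    "CH2NH (this work)": (0.61, 0.34),
    "36 GHz CH3OH (Chen et al. 2016, bias-corrected)": (1.01, 0.18),
    "OH toward CONs (this work)": (1.66, 0.84),
}

results = {}
for label, (alpha, err) in slopes.items():
    # Consistent with radiative pumping if the 1-sigma interval touches [1, 2]:
    results[label] = (alpha + err) >= 1.0 and (alpha - err) <= 2.0
    print(f"{label}: alpha = {alpha} +/- {err} -> {results[label]}")
```

Only the CH$_2$NH\ slope falls below the radiatively pumped range at the $1\sigma$ level, which is the sense in which the masers are underluminous for infrared pumping.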
\section{Conclusions} We have conducted the first search for the 5.29 GHz CH$_2$NH~$1_{10}-1_{11}$ transition in a sample of galaxy nuclei using the VLA. CH$_2$NH\ emission is detected toward six out of seven galaxies with CONs: Zw~049.057, IRAS~17208$-$0014, IRAS~17578$-$0400, NGC~4418, IC~860, and Arp~220. H$_2$CO\ emission is also detected toward three galaxies: Zw~049.057, IRAS~17578$-$0400, and IC~860. In all observations, the emission is spatially unresolved. The CH$_2$NH\ emission detected toward the western core of Arp~220 has an isotropic luminosity of 27~L$_{\odot}$\ and a brightness temperature $> 1400$~K, providing evidence for the first CH$_2$NH\ megamasers. The isotropic luminosities measured toward the other galaxies range from 2 to 10~L$_{\odot}$; however, the spatial resolution of the observations only allows for lower limits of the brightness temperature. Non-LTE modeling suggests that the CH$_2$NH\ maser is pumped by the intense millimeter-to-submillimeter radiation field, though pumping from collisions cannot be excluded. Currently, the structure is measured to be smaller than 103~pc toward Arp~220 and smaller than 100~pc toward IC~860, which is consistent with the HCN-VIB emitting regions of CONs ($\lesssim$60~pc; \citealp{Aalto2015b,Aalto2019}). Our investigation reveals that the CH$_2$NH\ masers are weakly correlated with the infrared luminosity of the galaxy and strongly correlated with OH megamaser luminosities. We hypothesize that the strong correlation between CH$_2$NH\ masers and OH masers is due to collisions in the molecular tori around embedded AGN, although other explanations are possible. In this picture, CH$_2$NH\ is shielded from the intense infrared radiation field in CONs, giving rise to a weak correlation between the CH$_2$NH\ maser luminosity and the global infrared luminosity of the host galaxy.
Higher angular resolution observations are needed to reveal the physical structure of the emission and to identify the pumping mechanism. Altogether, CH$_2$NH\ megamasers provide a new tool for investigating the nuclear processes in galaxies and a potential avenue for observing parsec-scale structure in CONs. \begin{acknowledgements} S.A., M.G., K.O., S.K., and N.F. gratefully acknowledge support from ERC Advanced Grant 789410. The National Radio Astronomy Observatory is a facility of the U.S. National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \end{acknowledgements} \bibliographystyle{aa_url}
\section{Introduction} \label{sec:intro} Monte Carlo event generators have become an indispensable part of the numerical toolkit needed to interpret high-energy physics experiments at colliders~\cite{Webber:1986mc,Buckley:2011ms}. They extend the reach of analytic or numeric fixed-order calculations by providing detailed simulations of QCD parton evolution and hadronization. Both aspects are vital in order to understand the features of experimentally accessible analysis objects such as jets or photons, and to link the picture of QCD perturbation theory to the complicated reality of measurements. Due to the large dynamic range of observables at the Large Hadron Collider (LHC), the accurate description of QCD evolution plays a particularly important role. It is implemented in fully differential form by Monte-Carlo algorithms called parton showers. The high statistical precision of data from the LHC experiments, as well as the promise of yet more detailed and accurate measurements over the coming years, have spurred the development of various improved parton shower algorithms. A number of works have revisited questions on the logarithmic accuracy~\cite{ Hoche:2017kst,Dasgupta:2018nvj} of parton showers~\cite{Catani:1992ua,Catani:1990rr} and dipole showers~\cite{Gustafson:1987rq,Lonnblad:1992tz} and have led to the development of new and improved algorithms~\cite{ Dasgupta:2020fwr,Bewick:2019rbu,Forshaw:2020wrq,Nagy:2020dvz,Bewick:2021nhc}. The resummation of logarithms at higher orders in the $1/N_c$ expansion~\cite{ Platzer:2012np,Nagy:2015hwa,Isaacson:2018zdi,Platzer:2018pmd,Nagy:2019pjp,Forshaw:2019ver, Hoche:2020pxj,DeAngelis:2020rvq,Hamilton:2020rcu,Holguin:2020joq,Platzer:2020lbr}, and the possibility of including genuine higher-order matrix elements~\cite{ Hartgring:2013jma,Li:2016yez,Hoche:2017iem,Dulat:2018vuy} have become a focus of interest recently.
The combination of these various ingredients could soon enable a formally more precise simulation of QCD parton evolution, and allow for a consistent estimate of systematic uncertainties from missing higher-order effects in the perturbative expansion. In this note we will focus on the implementation of higher-order splitting kernels in parton showers. Our numerical implementation is based on a dipole shower, but the method itself is applicable to any parton shower with on-shell intermediate states. The possibility of adding next-to-leading order corrections for more inclusive observables to parton showers was explored early on~\cite{ Kato:1986sg,Kato:1988ii,Kato:1990as,Kato:1991fs,Jadach:2011kc,Gituliar:2014eba} and was revisited recently~\cite{Hoche:2017hno,Dasgupta:2021hbh}. A differential approach based on modern shower algorithms was first discussed in~\cite{Hartgring:2013jma,Li:2016yez}. The link to DGLAP evolution~\cite{ Gribov:1972ri,Lipatov:1974qm,Dokshitzer:1977sg,Altarelli:1977zs} at next-to-leading order~\cite{Curci:1980uw,Furmanski:1980cm,Floratos:1980hk,Floratos:1980hm, Heinrich:1997kv,Bassetto:1998uv} was explored in~\cite{Hoche:2017iem}, and the connection to soft-gluon resummation~\cite{Korchemsky:1992xv,Korchemsky:1993uz} was established in~\cite{Dulat:2018vuy}. Here we will address the question of how higher-order corrections obtained from hard matrix elements in the triple-collinear and double-soft limits can be combined consistently. Our procedure relies on the numerical techniques developed in~\cite{Hoche:2017iem} and~\cite{Dulat:2018vuy}, which treated the two different limits individually. We propose a subtraction method that removes soft double counting at the level of the fully differential evolution kernels for two-parton emission, and we identify the corresponding endpoint contributions, which are related to the two-loop cusp anomalous dimension~\cite{Kodaira:1981nh,Davies:1984hs,Davies:1984sp,Catani:1988vd}.
We apply the method to quark pair emission in the process $e^+e^-\to$ hadrons as an example. The manuscript is structured as follows: Section~\ref{sec:basic} introduces the basic concepts. In Sec.~\ref{sec:tc_ds} we review the techniques for the simulation of triple collinear and double soft emissions. Section~\ref{sec:or} introduces the removal of overlapping singularities, and Sec.~\ref{sec:mc} presents the modified subtraction needed for a computation in four dimensions. The endpoint contributions and their relation to the soft gluon coupling and the CMW scheme~\cite{Catani:1990rr} are discussed in Sec.~\ref{sec:cmw}. Section~\ref{sec:results} presents a first numerical analysis, and Sec.~\ref{sec:conclusion} contains an outlook. \section{Strategy for constructing an NLO parton shower} \label{sec:basic} In this section we provide a heuristic introduction to the main ideas behind a fully differential parton evolution at next-to-leading order. To this end, it is useful to revisit the basic principles of a leading-order algorithm. The one-loop matrix elements for gluon emissions off a color dipole exhibit two types of singularities~\cite{Ellis:1991qj,Dokshitzer:1991wu}: soft gluon singularities and collinear poles. Most of the existing leading-order parton shower algorithms treat these two effects in a unified way: They either employ one splitting kernel that describes the complete antenna radiation pattern, or two splitting kernels that capture the collinear monopole radiation patterns. In the first case, the collinear radiator function is matched to the soft one; in the second case, the soft radiator function is matched to the collinear ones, and potential double counting is removed through partial fractioning of eikonal terms or through angular ordering.
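The partial fractioning of eikonal terms mentioned above can be made explicit in a few lines of code. The exact identity $s_{ik}/(s_{ij}s_{jk})=\frac{s_{ik}}{s_{ij}(s_{ij}+s_{jk})}+\frac{s_{ik}}{s_{jk}(s_{ij}+s_{jk})}$ splits the eikonal factor for a gluon $j$ radiated off the dipole $(i,k)$ into two pieces, each singular in only one collinear direction. The following minimal sketch (the function names are ours and not part of any shower implementation) verifies the identity numerically:

```python
def eikonal(s_ij, s_jk, s_ik):
    """Full eikonal factor for a gluon j radiated off the (i,k) dipole."""
    return s_ik / (s_ij * s_jk)

def eikonal_i(s_ij, s_jk, s_ik):
    """Partial-fractioned piece, singular only for j parallel to i."""
    return s_ik / (s_ij * (s_ij + s_jk))

def eikonal_k(s_ij, s_jk, s_ik):
    """Partial-fractioned piece, singular only for j parallel to k."""
    return s_ik / (s_jk * (s_ij + s_jk))

# The two pieces sum to the full eikonal factor for any kinematics.
s_ij, s_jk, s_ik = 0.37, 2.91, 5.63
total = eikonal_i(s_ij, s_jk, s_ik) + eikonal_k(s_ij, s_jk, s_ik)
assert abs(total - eikonal(s_ij, s_jk, s_ik)) < 1e-12
```

Each piece can then be assigned to the collinear sector of one of the two dipole ends, which is the basis of dipole-type subtraction and shower kernels.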
To construct a parton shower at next-to-leading order accuracy, it is useful to discard this picture and instead recall that the soft gluon limit has a semi-classical origin and is thus structurally different from the collinear limit. However, the two do of course overlap in the soft-collinear region. An improved leading-order parton shower can therefore be constructed by working with three different radiator functions for each color dipole: one capturing the soft emission pattern, and one for each of the two collinear remainders obtained after subtracting the overlap with the soft function. This strategy allows the complete phase space to be covered with each evolution kernel, and it furthermore permits different evolution variables in the soft and collinear regions. Representative squared diagrams for a process with two hard partons are \begin{equation} \label{eq:lops} \left.\mathcal{F}\right|_\mathrm{1-loop,coll} \sim \vcenter{\hbox{\includegraphics[width=.15\textwidth]{fig/intro/tcds-qqqq-lo-sc-1}}}\;,\qquad \left.\mathcal{F}\right|_\mathrm{1-loop,soft} \sim \vcenter{\hbox{\includegraphics[width=.15\textwidth]{fig/intro/tcds-qqqq-lo-ss-1}}} ~+~\ldots\;, \end{equation} where the dots stand for diagrams with permutations of the hard partons. The left figure indicates a collinear emission, and the right indicates coherent soft gluon radiation. At second order in the strong coupling, the perturbative fragmentation functions will contain real-virtual and double-real corrections. We will use the emission of a quark pair as an example for the construction of a soft-collinear overlap removal in these contributions. The triple-collinear $q\to qq'\bar{q}'$ splitting function can be factorized into a collinear one-loop $q\to g$ times a collinear one-loop $g\to q$ splitting in the strongly ordered limit, while the soft function for quark-pair emission cannot be factorized into lower-order soft functions.
However, it can be factorized into a product of eikonal currents times a spin-dependent collinear one-loop $g\to q$ splitting; in fact, it is given entirely in terms of their product~\cite{Catani:1999ss}. Effective diagrams for double-real corrections at two loops may thus be approximated by iterated branchings, \begin{equation} \label{eq:lops_oas2_fac} \left.\mathcal{F}\right|_{\mathrm{2-loop,coll}} \sim\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrlo-sc-1}}}\;, \qquad\left.\mathcal{F}\right|_{\mathrm{2-loop,soft}} \sim\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrlo-ss-1}}} ~+~\ldots\;. \end{equation} In analogy to fixed-order computations in the dipole method, the calculation of the double-real corrections to this approximate picture proceeds by subtracting the approximate result in Eq.~\eqref{eq:lops_oas2_fac} from the complete matrix elements. In addition, an endpoint contribution is required, which originates from the difference between the integrated subtraction terms and the corresponding collinear mass factorization counterterms. The result is finite in four dimensions and can therefore be computed with Monte Carlo methods~\cite{Hoche:2017iem}. Using the double-real quark-pair emission triple-collinear (tc) and double-soft (ds) kernels as an example, we can write, schematically \begin{equation} \label{eq:pds} P^{(\mathrm{tc})} \sim \left[ \vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrtc-1}}} ~-~\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrlo-sc-1}}}\right]\;, \qquad P^{(\mathrm{ds})} \sim \left[ \vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrds-1}}} ~-~\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrlo-ss-1}}} ~+~\ldots~\right]\;, \end{equation} where the black blobs indicate the complete matrix elements in the triple collinear and double soft limits.
Equation~\eqref{eq:pds} is valid independently for both the differential and the endpoint contributions. For an appropriately defined leading-order parton shower, this subtraction must remove all infrared singularities associated with the vanishing of intermediate propagators. This puts stringent requirements on the leading-order shower, in particular that it must implement spin correlations and a suitable kinematics mapping~\cite{Hoche:2017iem,Dulat:2018vuy}. The above subtraction ensures that the correct splitting probabilities are reproduced in the collinear and soft region individually, but it is insufficient to guarantee the correct two-loop radiation pattern in multiple limits simultaneously, because the individual two-loop splitting functions have overlapping singularities. Each triple-collinear matrix element contains the complete double-soft result. This is reminiscent of the overlap of the double collinear and single soft matrix elements in the leading-order case. To remove the overlap, a solution similar to the leading-order case can be adopted: A combination of triple-collinear and double-soft corrections at leading color requires 1) removing the endpoint-subtracted double-soft splitting function from the endpoint-subtracted triple-collinear splitting function, and 2) adding the double-soft splitting functions for all pairs of hard partons and the soft-subtracted collinear splitting functions for all partons in order to obtain the complete radiator function for the multipole. 
In the case of quark pair emission, the genuine triple collinear contributions to this combined splitting function are given by \begin{eqnarray} \label{eq:ptcds} P^{(\mathrm{tc}-\mathrm{ds})} &\sim& \left[ \vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrtc-1}}} ~-~\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrlo-sc-1}}} ~-~\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrds-1}}} ~+~\vcenter{\hbox{\includegraphics[width=.133\textwidth]{fig/intro/tcds-qqqq-rrlo-ss-1}}} ~+~\ldots~\right]\;. \end{eqnarray} Again, this is valid independently for both the differential and the endpoint contributions. The subtraction has to be applied for every possible occurrence of the double-soft limit in the triple-collinear splitting functions. In the following sections, we will first discuss the individual triple collinear and double soft limits of the QCD matrix elements, and then develop the above described procedure in detail for quark pair emission. The gluon emission case is structurally identical but technically more involved. We postpone its discussion to a forthcoming publication. \section{Parton evolution in the triple collinear and double soft limits} \label{sec:tc_ds} In this section we summarize the ingredients needed for the consistent simulation of triple collinear and double soft splittings in a dipole-like parton shower. We note that this type of parton shower is affected by the problems discussed in~\cite{Dasgupta:2018nvj}, but the structure of our calculation is generic and can therefore be applied to any parton shower for which the phase-space factorization and splitting functions are known in $D=4-2\varepsilon$ dimensions. 
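Before specializing to the $1\to3$ kernels, it is useful to recall the basic Monte Carlo step that any such shower builds on: emission scales are generated according to a Sudakov (no-emission) form factor. The following toy sketch is our own simplification, with a single constant kernel $c$ in $\ln t$ instead of the full splitting functions; it checks the sampled no-emission fraction against the analytic Sudakov factor $\Delta(t_0,t_1)=(t_0/t_1)^c$:

```python
import random

def generate_emission_scale(t_start, t_cut, c, rng):
    """Sample the next emission scale for a toy emission density
    dP/dln(t) = c, for which Delta(t0, t1) = (t0/t1)**c.
    Returns None if the evolution reaches t_cut without emitting."""
    # Invert Delta(t, t_start) = r for a uniform random number r.
    t = t_start * rng.random() ** (1.0 / c)
    return t if t > t_cut else None

rng = random.Random(42)
t_start, t_cut, c = 1.0, 0.01, 0.5
n_events = 200_000
n_no_emission = sum(
    generate_emission_scale(t_start, t_cut, c, rng) is None
    for _ in range(n_events)
)
# Compare the Monte Carlo no-emission fraction to the analytic Sudakov.
delta = (t_cut / t_start) ** c   # = 0.1 for these parameters
assert abs(n_no_emission / n_events - delta) < 5e-3
```

In a realistic shower the acceptance step additionally weights each trial emission with the ratio of the true kernel to an overestimate (the veto algorithm), but the sampling logic is the same.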
In the triple collinear limit of partons $1$, $2$ and $3$, any QCD matrix element with more than three external partons factorizes as~\cite{Campbell:1997hg,Catani:1999ss} \begin{equation}\label{eq:tc_me_factorization} |M_{1,2,3,\ldots,k,\ldots}(p_1,p_2,p_3,\ldots)|^2\overset{\rm 123-coll}{\longrightarrow} \left(\frac{8\pi\mu^{2\varepsilon}\alpha_s}{s_{123}}\right)^2 \mathcal{T}^{ss'}_{123,\ldots}(p_{123},\ldots)\, P^{ss'}_{123}(p_1,p_2,p_3)\;. \end{equation} The corresponding spin-averaged, triple-collinear splitting functions, $\delta_{ss'}P^{ss'}_{123}/2$, are given in~\cite{Campbell:1997hg,Catani:1999ss}. The simplest of them are the quark-to-quark splitting kernels with quark pair emission. They read \begin{equation}\label{eq:p_qbpqpq} \begin{split} P_{\bar{q}_1'q_2'q_3}=&\;\frac{1}{2}C_FT_R\frac{s_{123}}{s_{12}} \left[\frac{4z_3+(z_1-z_2)^2}{z_1+z_2}-\frac{t_{12,3}^2}{s_{12}s_{123}} +(1-2\varepsilon)\left(z_1+z_2-\frac{s_{12}}{s_{123}}\right)\right]\;,\\ P_{\bar{q}_1q_2q_3}=&\;\Big[\,P_{\bar{q}_1'q_2'q_3}+P_{\bar{q}_1'q_3'q_2}\,\Big] +\Big[\,P_{\bar{q}_1q_2q_3}^{\rm(id)}+P_{\bar{q}_1q_3q_2}^{\rm(id)}\,\Big]\;, \end{split} \end{equation} where $s_{ij}=2p_ip_j$ are the scalar products of the (light-like) parton momenta, $s_{123}=s_{12}+s_{13}+s_{23}$, and where $z_i=p_in/p_{123}n$ is the light-cone momentum fraction of particle $i$ with respect to an arbitrary auxiliary vector $n$, which must not be parallel to the collinear momentum, $p_{123}=p_1+p_2+p_3$.
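For illustration, the kernel $P_{\bar{q}_1'q_2'q_3}$ of Eq.~\eqref{eq:p_qbpqpq} can be coded directly at $\varepsilon=0$, with $t_{12,3}$ as defined below. A simple algebraic property, the symmetry under exchange of the quark--antiquark pair $1\leftrightarrow2$ (including $s_{13}\leftrightarrow s_{23}$), then serves as a numerical sanity check; the function name and the sample invariants in this sketch are ours:

```python
def P_qbar_q_q(z1, z2, z3, s12, s13, s23, CF=4.0/3.0, TR=0.5):
    """Spin-averaged triple-collinear splitting function P_{qbar'_1 q'_2 q_3}
    at eps = 0, for light-cone fractions z_i (z1+z2+z3 = 1) and two-particle
    invariants s_ij; t_{12,3} follows Catani-Grazzini."""
    s123 = s12 + s13 + s23
    t123 = (2.0 * (z1 * s23 - z2 * s13) / (z1 + z2)
            + (z1 - z2) / (z1 + z2) * s12)
    return 0.5 * CF * TR * s123 / s12 * (
        (4.0 * z3 + (z1 - z2) ** 2) / (z1 + z2)
        - t123 ** 2 / (s12 * s123)
        + (z1 + z2 - s12 / s123)
    )

# Symmetry under exchange of the quark-antiquark pair (1 <-> 2):
# t_{12,3} changes sign, so the kernel is invariant.
z1, z2, s12, s13, s23 = 0.2, 0.35, 0.4, 1.3, 2.1
z3 = 1.0 - z1 - z2
a = P_qbar_q_q(z1, z2, z3, s12, s13, s23)
b = P_qbar_q_q(z2, z1, z3, s12, s23, s13)
assert abs(a - b) < 1e-12 * max(abs(a), 1.0)
```

The invariants used here are arbitrary positive numbers, which is sufficient for the algebraic check; a shower implementation would of course evaluate the kernel on physical phase-space points.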
The interference term, $P_{\bar{q}_1q_2q_3}^{\rm(id)}$, is given by \begin{equation}\label{eq:p_qbqq_id} \begin{split} P_{\bar{q}_1q_2q_3}^{\rm(id)}=&\;C_F\left(C_F-\frac{C_A}{2}\right)\bigg\{ (1-\varepsilon)\left(\frac{2s_{23}}{s_{12}}-\varepsilon\right) -\frac{s_{123}^2}{s_{12}s_{13}}\frac{z_1}{2}\left[\frac{1+z_1^2}{(1-z_2)(1-z_3)} -\varepsilon\left(1+2\frac{1-z_2}{1-z_3}\right)-\varepsilon^2\right]\\ &\qquad+\frac{s_{123}}{s_{12}}\left[\frac{1+z_1^2}{1-z_2}-\frac{2z_2}{1-z_3} -\varepsilon\left(\frac{(1-z_3)^2}{1-z_2}+1+z_1-\frac{2z_2}{1-z_3}\right)-\varepsilon^2(1-z_3)\right]\bigg\}\;. \end{split} \end{equation} Following~\cite{Catani:1999ss}, we have defined \begin{equation}\label{eq:t123} t_{12,3}=2\,\frac{z_1s_{23}-z_2s_{13}}{z_1+z_2}+\frac{z_1-z_2}{z_1+z_2}\,s_{12}\;. \end{equation} We can interpret the triple collinear branching of the combined parton $(123)$ as two subsequent splittings, $(123)\to(12)3$ and $(12)\to12$. Integration over the final-state phase space of the second splitting, renormalization and collinear mass factorization in the $\overline{\rm MS}$ scheme then lead to the integrated double-collinear time-like splitting functions at NLO accuracy~\cite{ Curci:1980uw,Furmanski:1980cm,Floratos:1980hk,Floratos:1980hm,Heinrich:1997kv,Bassetto:1998uv} \begin{equation}\label{eq:p1_qqp} \begin{split} P_{qq'}^{(T)}(z)=&\;C_F T_R\left((1+z)\log^2(z) -\left(\frac{8}{3}z^2+9z+5\right)\log(z)+\frac{56}{9}z^2+4z-8-\frac{20}{9z}\right)\;,\\ P_{q\bar{q}}^{(T)}(z)=&\;P_{qq'}^{(T)}(z)+C_F \bigg(C_F-\frac{C_A}{2}\bigg)\bigg(2p_{qq}(-z)S_2(z) +2(1+z)\log(z)+4(1-z)\bigg)\;, \end{split} \end{equation} where $p_{qq}(z)=(1+z^2)/(1-z)$, and where the auxiliary function $S_2$ is defined as \begin{equation} S_2(z)=-2\,{\rm Li}_2\!\left(\frac{1}{1+z}\right)+\frac{1}{2}\ln^2 z-\ln^2(1+z)+\frac{\pi^2}{6}\;.
\end{equation} In the double soft limit, the hard matrix element for emission of a quark-antiquark pair factorizes as~\cite{Campbell:1997hg,Catani:1999ss} \begin{equation}\label{eq:soft_factorization} |M_{1,2,3,\ldots,n}(p_1,p_2,p_3,\ldots,p_n)|^2 \overset{\rm 12-soft}{\longrightarrow}\left(4\pi\mu^{2\varepsilon}\alpha_s\right)^2 \sum_{\substack{i,j=3}}^{n}\mathcal{I}_{ij}(p_1,p_2)\,|M_{3,\ldots,n}^{(i,j)}(p_3,\ldots,p_n)|^2, \end{equation} where the color-correlated tree-level matrix element squared is given by \begin{equation}\label{eq:color_correlated_born} |M_{3,\ldots,n}^{(i,j)}(p_3,\ldots,p_n)|^2= -\langle M_{3,\ldots,n}(p_3,\ldots,p_n)|\,\hat{T}_i\hat{T}_j\,|M_{3,\ldots,n}(p_3,\ldots,p_n)\rangle\;. \end{equation} The corresponding double-soft splitting function, $\mathcal{I}_{ij}(p_1,p_2)$, is given by~\cite{Campbell:1997hg,Catani:1999ss} \begin{equation}\label{eq:ds_qbpqpq} \mathcal{I}_{ij}(p_1,p_2)=T_R\, \frac{s_{i1}s_{j2}+s_{i2}s_{j1}-s_{ij}s_{12}}{ s_{12}^2(s_{i1}+s_{i2})(s_{j1}+s_{j2})}\;. \end{equation} In contrast to the one-loop case, the $i=j$ contributions to the soft matrix element in Eq.~\eqref{eq:soft_factorization} do not vanish. In the following section, we will discuss the combination of Eqs.~\eqref{eq:tc_me_factorization} and~\eqref{eq:soft_factorization} in a fully differential parton-shower simulation. \section{Overlap removal and genuine collinear anomalous dimension} \label{sec:or} Following the general arguments outlined in Sec.~\ref{sec:basic}, we need to remove the collinear limit of the double-soft matrix element, Eq.~\eqref{eq:soft_factorization}, from the triple-collinear matrix element, Eq.~\eqref{eq:tc_me_factorization}, in order to obtain a purely collinear remainder. In this limit, we can perform the sum over spectator partons, $j$, in Eq.~\eqref{eq:soft_factorization}, while holding $i=3$ fixed. 
This yields the collinear limit of the soft factorization formula \begin{equation}\label{eq:soft_factorization_coll} |M_{1,2,3,\ldots,n}(p_1,p_2,p_3,\ldots,p_n)|^2 \overset{\rm 12-soft}{\underset{\rm 123-coll}{\longrightarrow}} \left(\frac{8\pi\mu^{2\varepsilon}\alpha_s}{s_{123}}\right)^2 \mathcal{T}^{ss}_{123,\ldots}(p_{123},\ldots)P^{\rm(ds)}_{123}(p_1,p_2,p_3)\;, \end{equation} where the double soft splitting function, $P^{\rm(ds)}_{123}$, is given by~\cite{Dulat:2018vuy} \begin{equation}\label{eq:ps_qbpqpq} P_{\bar{q}_1'q_2'a_3}^{\rm(ds)}=\frac{1}{2}C_aT_R\,\frac{s_{123}^2}{(s_{13}+s_{23})^2} \left[\frac{4z_3}{1-z_3}\frac{s_{13}+s_{23}}{s_{12}} -\left(\frac{t_{12,3}}{s_{12}}-\frac{z_1-z_2}{z_1+z_2}\right)^2\,\right]\;. \end{equation} This function can be integrated using the phase-space parametrization of~\cite{Gehrmann-DeRidder:2003pne}. Following~\cite{Hoche:2017iem}, we factor out the two-particle phase space, the integration over the three-particle invariant $y_{aij}=s_{aij}/q^2$ and the corresponding factors $(y_{aij}(1-y_{aij}))^{1-2\varepsilon}$, as well as the integration over one of the light-cone momentum fractions, which is chosen to be $\tilde{z}=(s_{ak}/q^2)/(1-y_{aij})$. We also remove the square of the normalization factor $(4\pi)^{\varepsilon}/(16\pi^2\Gamma(1-\varepsilon))\,(q^2)^{1-\varepsilon}$. The remaining one-emission phase-space integral reads \begin{equation}\label{eq:tcps_tl} \begin{split} \int{\rm d}\Phi_{+1}^{(F)}=&\; (1-\tilde{z})^{1-2\varepsilon}\tilde{z}^{-\varepsilon} \int_0^1{\rm d}\tau\,(\tau(1-\tau))^{-\varepsilon} \int_0^1{\rm d}v\,(v(1-v))^{-\varepsilon}\; \frac{\Omega(1-2\varepsilon)}{\Omega(2-2\varepsilon)} \int_0^1{\rm d}\chi\,2(4\chi(1-\chi))^{-1/2-\varepsilon}\;, \end{split} \end{equation} where $\Omega(n)=2\pi^{n/2}/\Gamma(n/2)$.
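The overlap of the two limits can also be exhibited numerically: for explicit four-momenta in which the quark pair is scaled soft, the triple-collinear splitting function of Eq.~\eqref{eq:p_qbpqpq} approaches the double-soft kernel of Eq.~\eqref{eq:ps_qbpqpq} at leading power. The following sketch (the kinematic configuration is our own choice, and only the leading-power agreement is checked) makes this explicit:

```python
import math

def dot(p, q):
    """Minkowski product with metric (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def massless(E, theta, phi):
    return (E, E*math.sin(theta)*math.cos(phi),
            E*math.sin(theta)*math.sin(phi), E*math.cos(theta))

def kernels(lam):
    """Evaluate the triple-collinear kernel (eps = 0) and the double-soft
    kernel for a quark pair with momenta scaled by lam, plus a hard quark."""
    p3 = massless(100.0, 0.0, 0.0)
    p1 = massless(2.0*lam, 0.4, 0.1)
    p2 = massless(1.0*lam, 0.7, 2.0)
    n = (1.0, 0.0, 0.0, -1.0)   # auxiliary light-cone vector
    p123 = tuple(a + b + c for a, b, c in zip(p1, p2, p3))
    z1, z2, z3 = (dot(p, n) / dot(p123, n) for p in (p1, p2, p3))
    s12, s13, s23 = 2*dot(p1, p2), 2*dot(p1, p3), 2*dot(p2, p3)
    s123 = s12 + s13 + s23
    t = 2*(z1*s23 - z2*s13)/(z1 + z2) + (z1 - z2)/(z1 + z2)*s12
    CF, TR = 4.0/3.0, 0.5
    P_tc = 0.5*CF*TR*s123/s12*((4*z3 + (z1 - z2)**2)/(z1 + z2)
                               - t**2/(s12*s123) + (z1 + z2 - s12/s123))
    P_ds = 0.5*CF*TR*s123**2/(s13 + s23)**2*(
        4*z3/(1 - z3)*(s13 + s23)/s12
        - (t/s12 - (z1 - z2)/(z1 + z2))**2)
    return P_tc, P_ds

# The ratio approaches unity as the pair becomes soft (lam -> 0).
devs = [abs(kernels(lam)[0] / kernels(lam)[1] - 1.0)
        for lam in (1e-1, 1e-2, 1e-3)]
assert devs[2] < devs[1] < devs[0] and devs[2] < 0.05
```

This is precisely the overlap that the subtraction of $P^{\rm(ds)}$ from the triple-collinear kernel removes.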
The variables $\tau$ and $v$ are given by the transformation \cite{Hoche:2017iem} \begin{equation} s_{ai}=s_{aij}(1-\tilde{z}_j)\,v\;, \qquad \tilde{z}_j=\frac{s_{jk}/q^2}{1-y_{aij}}=(1-\tilde{z})\,\tau\;. \end{equation} The azimuthal angle integration is parametrized using $\chi$, which is defined as $ s_{ij}=s_{ij,-}+\chi(s_{ij,+}-s_{ij,-})\,, $ with $s_{ij,\pm}$ being the two solutions of the quadratic equation $\cos^2\phi_{a,i}^{j,k}=1$~\cite{Gehrmann-DeRidder:2003pne}. The result is \begin{equation} \begin{split} &\frac{1}{C_aT_R}\int{\rm d}\Phi_{+1}^{(F)}P_{aq'}^{(ds)} =-\frac{1}{\varepsilon}\left(\frac{4}{3\tilde{z}}-2\tilde{z}+\frac{2\tilde{z}^2}{3}+2\ln\tilde{z}\right)\\ &\qquad-2\left({\rm Li}_2(\tilde{z})-\zeta_2\right)+3\ln^2\tilde{z} +\frac{2}{3\tilde{z}}(1-7\tilde{z}+10\tilde{z}^2-4\tilde{z}^3)\\ &\qquad+\left(\frac{8}{3\tilde{z}}-2\tilde{z}+\frac{2\tilde{z}^2}{3}\right)\ln\tilde{z} +\left(\frac{4}{3\tilde{z}}-2\tilde{z}+\frac{2\tilde{z}^2}{3}\right)\ln(1-\tilde{z}) +\mathcal{O}(\varepsilon)\,. \end{split} \end{equation} Upon including the propagator term from Eq.~\eqref{eq:soft_factorization_coll} and the phase-space factor $y_{aij}^{1-2\varepsilon}$, the leading pole is multiplied by an additional factor $-\delta(y_{aij})/2\varepsilon$. The $1/\varepsilon^2$ coefficient thus generated is removed by the renormalization of the soft component of the fragmentation function. This renormalization term is obtained as \begin{equation}\label{eq:ffpdf_ren} \mathcal{P}_{aq'}^{(ds)}(\tilde{z})=\int_{\tilde{z}}^1\frac{{\rm d}x}{x} P_{ag}^{\rm(0,s)}(x)\,P_{gq}^{(0)}(\tilde{z}/x)=\,C_a T_R\left(2\ln\tilde{z} +\frac{2\tilde{z}^2}{3}-2\tilde{z}+\frac{4}{3\tilde{z}}\right)\;, \end{equation} where $P_{ag}^{\rm(0,s)}(z)=2C_a(1-z)/z$ is the soft limit of the double-collinear splitting function for gluon emission.
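The convolution in Eq.~\eqref{eq:ffpdf_ren} is easily checked by quadrature, with integrand $P_{ag}^{\rm(0,s)}(x)\,P_{gq}^{(0)}(\tilde{z}/x)$ and $P_{gq}^{(0)}(u)=T_R\,(u^2+(1-u)^2)$; the sketch below takes $a=q$, i.e.\ $C_a=C_F$:

```python
import math
from scipy.integrate import quad

CA, TR = 4.0/3.0, 0.5          # here a = q, so C_a = C_F = 4/3

def P_ag_soft(x):
    """Soft limit of the double-collinear q -> g splitting function."""
    return 2.0 * CA * (1.0 - x) / x

def P_gq(u):
    """Leading-order g -> q splitting function."""
    return TR * (u * u + (1.0 - u) ** 2)

def renorm_term(zt):
    """Renormalization term of Eq. (ffpdf_ren) by direct quadrature."""
    val, _ = quad(lambda x: P_ag_soft(x) * P_gq(zt / x) / x, zt, 1.0)
    return val

def renorm_term_closed(zt):
    """Closed form quoted in the text."""
    return CA * TR * (2.0 * math.log(zt) + 2.0 * zt ** 2 / 3.0
                      - 2.0 * zt + 4.0 / (3.0 * zt))

for zt in (0.1, 0.37, 0.8):
    assert abs(renorm_term(zt) - renorm_term_closed(zt)) < 1e-6
```

The agreement for several values of $\tilde{z}$ confirms the closed form of the convolution.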
In order to extract the analog of the next-to-leading order splitting function $P_{qq'}$, we employ the two-loop matching condition for the fragmenting jet function~\cite{Ritzmann:2014mka}. \begin{equation}\label{eq:twoloop_ffmatch} \begin{split} \mathcal{G}_a^{i(2)}(s,z,\mu)=\mathcal{J}_{ai}^{(2)}(s,z,\mu) +\sum_j\int_z^1\frac{{\rm d}x}{x}\mathcal{J}_{aj}^{(1)}(s,z/x,\mu)D_j^{i(1)}(x,\mu) +\delta(s)D_a^{i(2)}(z,\mu)\;. \end{split} \end{equation} The complete matching term is given in~\cite{Ritzmann:2014mka,Hoche:2017iem}. Its soft-collinear analog needed for $\mathcal{G}_a^{q'(2)}$ is given by \begin{equation}\label{eq:match_tl} \int_z^1\frac{{\rm d}x}{x}\mathcal{J}_{ag}^{(1)}(s,z/x,\mu)D_g^{q(1)}(x,\mu)\,\Big|_{\rm(ds)} =2\int_{\tilde{z}}^1\frac{{\rm d}x}{x} \,2C_F\,\frac{1-x}{x}\ln(x(1-x))\;P_{gq}^{(0)}(\tilde{z}/x)\;. \end{equation} A detailed discussion will be given in Sec.~\ref{sec:cmw}. Using this technique, we obtain the soft-collinear contribution to the timelike NLO $q\to q'$ splitting function \begin{equation}\label{eq:p1_qqp_ds} \begin{split} P_{qq'}^{\rm(ds,T)}(\tilde{z})=&\ C_a T_R\bigg(2({\rm Li}_2\tilde{z}-\zeta_2)+\ln^2\tilde{z} -\left(8+6\tilde{z}-2\tilde{z}^2\right)\ln \tilde{z}\\ &\qquad\qquad\left.-\left(\frac{4}{3\tilde{z}}-2\tilde{z}+\frac{2\tilde{z}^2}{3}\right)\ln(1-\tilde{z}) -\frac{34}{9}\tilde{z}^2+\frac{46}{3}\tilde{z}-\frac{28}{3}-\frac{20}{9\tilde{z}}\right)\;. \end{split} \end{equation} In the numerical simulation this term can be obtained from the quark-pair contribution to the double soft splitting function, taken in the triple collinear limit. The corresponding methods have been discussed in detail in~\cite{Dulat:2018vuy}, and here we will therefore focus on the difference to the complete timelike NLO $q\to q'$ splitting function only. 
This difference leads to a genuine two-loop timelike collinear anomalous dimension that is given by \begin{equation} \gamma_{qq'}^{\rm(tc,T)}=\int_0^1{\rm d}z\,z\left(P_{qq'}^{\rm(T)}(z)-P_{qq'}^{\rm(ds,T)}(z)\right)= \left(\frac{11}{18}-\frac{2\pi^2}{9}\right)C_FT_R\;. \label{eq:collAnomDim} \end{equation} Based on this result, we expect the genuinely triple collinear configurations to generate a small negative correction to the leading-order radiation pattern. In the following section we will discuss how the above computation can be implemented using a four-dimensional modified subtraction scheme. \section{Computation in four dimensions} \label{sec:mc} The modified subtraction procedure needed to implement next-to-leading order corrections to the parton-shower splitting kernels was outlined in~\cite{Hoche:2017iem}. The computation of the soft contributions is performed according to the formula \begin{equation}\label{eq:tcps_mc} P_{aq'}^{\rm(ds)}(\tilde{z})=\Big(\mathrm{I}+\frac{1}{\varepsilon}\,\mathcal{P}-\mathcal{I}\Big)_{aq'}^{\rm(ds)}(\tilde{z})+ \int{\rm d}\Phi_{+1}(\mathrm{R}-\mathrm{S})_{aq'}^{\rm(ds)}(\tilde{z},\Phi_{+1})\;. \end{equation} In order to implement the algorithm, we need the approximate spin-independent splitting function, $\tilde{P}_{aq'}^{1\to3\rm(ds)}$ and the corresponding spin correlation term, $\Delta\tilde{P}_{aq'}^{1\to3\rm(ds)}$, which define the differential subtraction term according to \begin{equation}\label{eq:dsps_def_rs} \begin{split} \mathrm{R}_{aq'}^{\rm(ds)}(\tilde{z},\Phi_{+1})=&\;P_{aq'}^{1\to3\rm(ds)}(\tilde{z},\Phi_{+1})\\ \mathrm{S}_{aq'}^{\rm(ds)}(\tilde{z},\Phi_{+1})=&\;\tilde{P}_{aq'}^{1\to3\rm(ds)}(\tilde{z},\Phi_{+1}) +\Delta\tilde{P}_{aq'}^{1\to3\rm(ds)}(\tilde{z},\Phi_{+1})\;. 
\end{split} \end{equation} The two contributions to the subtraction term are given by \begin{equation}\label{eq:tcsf_qqpa} \begin{split} \tilde{P}_{aq'}^{1\to3\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j,s_{ai},s_{aj},s_{ij}) =&\;C_a T_R\frac{s_{aij}}{s_{ai}}\frac{2\tilde{z}_j}{1-\tilde{z}_j} \left(1-\frac{2}{1-\varepsilon}\frac{\tilde{z}_a\tilde{z}_i}{(\tilde{z}_a+\tilde{z}_i)^2}\right)\\ \Delta\tilde{P}_{aq'}^{1\to3\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j,s_{ai},s_{aj},s_{ij}) =&\;C_aT_R \frac{s_{aij}}{s_{ai}}\frac{4 \tilde{z}_a \tilde{z}_i \tilde{z}_j}{(1-\tilde{z}_j)^3} \left(1-2\cos^2\phi_{ai}^{jk}\right)\;. \end{split} \end{equation} We use the definition of the azimuthal angle in the soft-collinear approximation~\cite{Dulat:2018vuy} \begin{equation}\label{eq:cosphi_def} 4\,\tilde{z}_a\tilde{z}_i\cos^2\phi_{ai}^{jk} =\frac{(\tilde{z}_a s_{ij}-\tilde{z}_i s_{aj})^2}{ s_{ai}\tilde{z}_j(s_{aj}+s_{ij})(\tilde{z}_a+\tilde{z}_i)}\;. \end{equation} For $s_{ai}\to 0$, this agrees with the definition of $\cos^2\phi_{a,j}^{i,k}$ in~\cite{Hoche:2017iem}. Away from the collinear limit, $\phi_{ai}^{jk}$ is not a physical angle, as $\cos^2\phi_{ai}^{jk}$ is not bounded by one. Equation~\eqref{eq:cosphi_def} is constructed such that it reproduces the soft matrix element, hence the subtraction term $\mathrm{S}_{aq'}^{\rm(ds)}(\tilde{z},\Phi_{+1})$ provides a much better approximation of the triple collinear and double soft matrix elements, leading to substantially smaller real-emission contributions in Eq.~\eqref{eq:tcps_mc}.\footnote{The leading-order parton shower algorithm should implement spin correlations according to Eq.~\eqref{eq:tcsf_qqpa}, in order to achieve a consistent modified subtraction. 
While the parton shower we employ in Sec.~\ref{sec:results} does not include these correlations yet, we note that their phenomenological impact is negligible except for dedicated observables, and we will therefore postpone their implementation to future work.} In addition to the differential radiation pattern, the endpoint contributions need to be simulated. This is achieved by extracting the $\mathcal{O}(1)$ contributions to the NLO splitting functions that originate in the combination of the $-\delta(v)/\varepsilon$ term in the series expansion in $v$, and the $\mathcal{O}(\varepsilon)$ terms in the expansion of the differential forms of the subtraction and matching terms. They are given by \begin{equation}\label{eq:iterm_mc} \begin{split} \Delta\mathrm{I}_{aq'}^{\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j)=&\; \tilde{\mathrm{I}}_{aq'}^{\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j,\tilde{z}_a)- \tilde{\mathcal{I}}_{aq'}^{\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j,\tilde{z}_a+\tilde{z}_i)\;,\\ \end{split} \end{equation} where \begin{equation} \begin{split} \tilde{\mathrm{I}}_{aq'}^{\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j,\tilde{x})=&\; C_a T_R \left[\frac{2\tilde{z}_j}{1-\tilde{z}_j} \frac{2\,\tilde{z}_a\tilde{z}_i}{(\tilde{z}_a+\tilde{z}_i)^2} +\frac{2\tilde{z}_j}{1-\tilde{z}_j} \left(1-\frac{2\,\tilde{z}_a\tilde{z}_i}{(\tilde{z}_a+\tilde{z}_i)^2}\right) \log(\tilde{x}\,\tilde{z}_i\tilde{z}_j)\right]\;,\\ \tilde{\mathcal{I}}_{aq'}^{\rm(ds)}(\tilde{z}_a,\tilde{z}_i,\tilde{z}_j,\tilde{x})=&\; C_a \frac{2\tilde{z}_j}{1-\tilde{z}_j}\log(\tilde{x}\,\tilde{z}_j)\, P_{gq}^{(0)}\Big(\frac{\tilde{z}_a}{\tilde{z}_a+\tilde{z}_i}\Big)\;. \end{split} \end{equation} The implementation of the endpoint contributions for $q\to\bar{q}$ transitions and the needed symmetry factors was discussed in~\cite{Hoche:2017iem} and remains unchanged. The symmetry factors are reviewed in App.~\ref{sec:tagging}. 
\section{Relation to the effective soft-gluon coupling} \label{sec:cmw} In this section we provide an intuitive explanation for the origin of the familiar soft singular term $-20/(9z)$ in Eqs.~\eqref{eq:p1_qqp} and~\eqref{eq:p1_qqp_ds}, and we explain how the two-loop cusp anomalous dimension emerges naturally upon integration over the final-state phase space and summation over flavors. Since the effective soft gluon coupling should be implemented as part of the soft-collinear gluon radiation pattern~\cite{Dulat:2018vuy}, we conclude that a separation of the triple collinear splitting function into a double-soft component and a genuine triple collinear remainder yields an appropriate algorithm for parton shower evolution at next-to-leading order. We first note that the endpoint contributions of the next-to-leading order splitting functions can be extracted by means of a series expansion of the scaled propagator virtuality, $v$, \begin{equation} \frac{1}{v^{1+\varepsilon}}=-\frac{1}{\varepsilon}\,\delta(v) +\sum_{n=0}^\infty\frac{(-\varepsilon)^n}{n!}\left(\frac{\log^n v}{v}\right)_+\;. \end{equation} When this term is combined with the $\mathcal{O}(\varepsilon)$ contributions in the series expansion of phase-space factors and splitting functions, it generates characteristic logarithms, which contribute the leading transcendental terms to the anomalous dimensions.
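The role of the $-\delta(v)/\varepsilon$ term can be illustrated numerically: for a smooth test function $f$, the endpoint term $-f(0)/\varepsilon$ plus the plus-distribution integral of $(f(v)-f(0))/v$ reproduces $\int_0^1{\rm d}v\,v^{-1-\varepsilon}f(v)$ up to corrections of $\mathcal{O}(\varepsilon)$. A small sketch (the test function and the choice $\varepsilon<0$, needed for convergence of the left-hand integral, are ours):

```python
from scipy.integrate import quad

def endpoint_vs_full(f, f0, eps):
    """Compare int_0^1 dv v**(-1-eps) f(v) with the endpoint term
    -f(0)/eps plus the plus-distribution integral of (f(v)-f(0))/v.
    The two agree up to O(eps); eps must be negative for convergence."""
    full, _ = quad(lambda v: v ** (-1.0 - eps) * f(v), 0.0, 1.0, limit=200)
    plus, _ = quad(lambda v: (f(v) - f0) / v, 0.0, 1.0)
    return full, -f0 / eps + plus

eps = -0.05
full, truncated = endpoint_vs_full(lambda v: 1.0 - v, 1.0, eps)
# Exact result: Beta(-eps, 2) = -1/(eps*(1-eps)) = -1/eps - 1 - eps + ...
assert abs(full - truncated) < 2.0 * abs(eps)
```

Shrinking $|\varepsilon|$ reduces the mismatch linearly, which is the numerical counterpart of the statement that the neglected terms are the $\mathcal{O}(\varepsilon)$ plus distributions.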
In the triple collinear case we obtain \begin{equation}\label{eq:tcps_tl_pole_1} \begin{split} \int{\rm d}\Phi_{+1}^{(F)}\frac{1}{v}=&\; \tilde{z}^{-\varepsilon} \int_0^1{\rm d}\tilde{z}_j\,(\tilde{z}_j\tilde{z}_i)^{-\varepsilon} \int_0^1{\rm d}v\,\frac{(1-v)^{-\varepsilon}}{v^{1+\varepsilon}}\; \frac{\Omega(1-2\varepsilon)}{\Omega(2-2\varepsilon)} \int_0^1{\rm d}\chi\,2(4\chi(1-\chi))^{-1/2-\varepsilon}\\ =&\;-\frac{\delta(v)}{\varepsilon} \int_0^1{\rm d}\tilde{z}_j\,(\tilde{z}_j\tilde{z}_i\tilde{z})^{-\varepsilon}\, \frac{\Omega(1-2\varepsilon)}{\Omega(2-2\varepsilon)} \int_0^1{\rm d}\chi\,2(4\chi(1-\chi))^{-1/2-\varepsilon} +\ldots\;, \end{split} \end{equation} where the dots stand for plus distributions in $\ln^n v/v$. If the integrand does not depend on the azimuthal angle variable $\chi$, we can simplify this to \begin{equation}\label{eq:tcps_tl_pole_2} \begin{split} \int{\rm d}\Phi_{+1}^{(F)}\frac{1}{v} =&\;-\frac{\delta(v)}{\varepsilon} \int_0^1{\rm d}\tilde{z}_j\,(\tilde{z}_j\tilde{z}_i\tilde{z})^{-\varepsilon}\, +\ldots\;. \end{split} \end{equation} We can rewrite Eq.~\eqref{eq:tcps_tl_pole_2} as a convolution by measuring $\tilde{z}$ and integrating over $\bar{\tau}=\tilde{z}/(1-\tilde{z}_j)$ \begin{equation}\label{eq:tcps_tl_pole_3} \begin{split} \int_0^1{\rm d}\bar{\tau}\,\int{\rm d}\Phi_{+1}^{(F)}\frac{1}{v}\,\delta(x-\tilde{z}) =&\;-\frac{\delta(v)}{\varepsilon}\, \int_x^1\frac{{\rm d}\bar{\tau}}{\bar{\tau}}\,\left(x\tilde{z}_i\left(1-\frac{x}{\bar{\tau}}\right)\right)^{-\varepsilon}\, +\ldots\;. \end{split} \end{equation} Let us now consider the matching term in Eq.~\eqref{eq:twoloop_ffmatch}. 
It is given by a similar convolution, but includes only the phase-space factors for the production of the leading-order final state \begin{equation}\label{eq:mtps_pole} \begin{split} \int_0^1{\rm d}\bar{\tau}\,\int{\rm d}\Phi_{+1}^{(F,J)}\frac{1}{v}\,\delta(x-\tilde{z}) =&\;-\frac{\delta(v)}{\varepsilon}\,\int_x^1\frac{{\rm d}\bar{\tau}}{\bar{\tau}}\, \left(\left(1-\frac{x}{\bar{\tau}}\right)\frac{x}{\bar{\tau}}\right)^{-\varepsilon}\,+\ldots\;. \end{split} \end{equation} Combining the complete phase-space integral and the integral needed for the matching term leads to \begin{equation} \begin{split} &\int_0^1{\rm d}\bar{\tau}\,\left(\int{\rm d}\Phi_{+1}^{(F)} -2\int{\rm d}\Phi_{+1}^{(F,J)}\right)\,\frac{1}{v}\,\delta(x-\tilde{z})\\ &\qquad=-\frac{\delta(v)}{\varepsilon}\,\int_x^1\frac{{\rm d}\bar{\tau}}{\bar{\tau}}\, \bigg[\,1-\varepsilon\ln\left(\bar{\tau}(1-\bar{\tau})\right)+\varepsilon\ln\left(1-\frac{x}{\bar{\tau}}\right) +\mathcal{O}(\varepsilon^2)\,\bigg]\,+\ldots\;. \end{split} \end{equation} In the double soft limit, $x/\bar{\tau}\to 0$, and we nearly recover the standard double-collinear phase-space integral. 
Applying this to the approximate splitting function in Eq.~\eqref{eq:tcsf_qqpa}, we can reconstruct the leading soft enhanced term as \begin{equation} \begin{split} &\int_0^1{\rm d}\bar{\tau}\,\left(\int{\rm d}\Phi_{+1}^{(F,\rm ds)} P_{ag}^{\rm(s)}\left(\frac{x}{\bar{\tau}}\right)P_{gq'}^{(0)}(\bar{\tau},\varepsilon) -2\int{\rm d}\Phi_{+1}^{(F,J,\rm ds)} P_{ag}^{\rm(s)}\left(\frac{x}{\bar{\tau}}\right)P_{gq'}^{(0)}(\bar{\tau},0)\right)\, \frac{1}{v}\,\delta(x-\tilde{z})\\ &\quad=\mathcal{O}\Big(\frac{1}{\varepsilon}\Big)+\delta(v)\, \int_x^1\frac{{\rm d}\bar{\tau}}{\bar{\tau}}\,P_{ag}^{\rm(s)}\left(\frac{x}{\bar{\tau}}\right) \left[P_{gq'}^{(0)}(\bar{\tau},0)\big(\ln(\bar{\tau}(1-\bar{\tau}))+1\big) -P_{gq'}^{(0)}(\bar{\tau},\varepsilon)\right]\,+\ldots+\mathcal{O}(\varepsilon)\;, \end{split} \end{equation} where the dots stand for contributions that are finite in $x$, and for plus distributions in $\ln^n v/v$. The function $P_{ag}^{\rm(s)}(z)=2C_a(1-z)/z$ is the soft-collinear splitting kernel for the transition $a\to g$. Its leading term is given by $2C_a/z$, such that we can extract the leading term in $1/x$ of the finite remainder as \begin{equation}\label{eq:cmw_eq} \delta(v)\,\frac{2C_a}{x}\,T_R\,\int_x^1{\rm d}\bar{\tau}\, \Big[(1-2\bar{\tau}(1-\bar{\tau}))\ln(\bar{\tau}(1-\bar{\tau}))+2\bar{\tau}(1-\bar{\tau})\Big] \overset{x\to0}{\longrightarrow} \delta(v)\,\frac{2C_a}{x}\,T_R\left(-\frac{10}{9}+\mathcal{O}(x)\right)\;. \end{equation} The relation to the CMW scheme is now manifest: The leading term on the right-hand side of Eq.~\eqref{eq:cmw_eq} is simply the contribution from the production of a single quark pair to the $n_f$ term in the two-loop cusp anomalous dimension~\cite{Kodaira:1981nh,Davies:1984hs,Davies:1984sp,Catani:1988vd}. At finite $x$, the integral over the term in square brackets does not evaluate to a constant, because we explicitly consider resolved partons, i.e.\ we implement the measurement $\delta(x-\tilde{z})$, which keeps the lower limit of the $\bar{\tau}$ integration at $x$.
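The limit quoted in Eq.~\eqref{eq:cmw_eq} is easily confirmed by quadrature; the value $-10/9$, multiplied by $T_R$ and summed over flavors, is the quark-loop contribution to the two-loop cusp anomalous dimension:

```python
import math
from scipy.integrate import quad

def integrand(tau):
    """Bracket of Eq. (cmw_eq), with the lower endpoint taken to x -> 0."""
    w = tau * (1.0 - tau)
    return (1.0 - 2.0 * w) * math.log(w) + 2.0 * w

# The logarithmic endpoint singularities are integrable.
val, _ = quad(integrand, 0.0, 1.0)
assert abs(val - (-10.0 / 9.0)) < 1e-6
```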
In summary, we find that the above contribution from the soft-collinear splitting function correctly reproduces the expected finite remainders in the soft limit, and therefore its simulation in fully differential form induces the conventional rescaling of the soft-gluon coupling~\cite{Catani:1990rr} upon integration over $x$. As outlined in Sec.~\ref{sec:basic}, it is therefore appropriate to implement the corresponding endpoints as part of the soft gluon radiator function~\cite{Dulat:2018pmt}. We finally note that the above calculation serves only to make the origin of the soft gluon coupling explicit. We do not explicitly implement the resolved parton evolution in our numerical simulations. Instead, following the derivation in~\cite{Jadach:2003bu,Hoche:2017iem}, we obtain equivalent results from unconstrained parton evolution including tagging factors, which allows us to implement the soft physical coupling using the technique described in~\cite{Dulat:2018vuy}. This method is reviewed in App.~\ref{sec:tagging}. As a direct consequence, we are not restricted to interpreting the $q\to q q'\bar{q}'$ transitions that we implement as a contribution solely to the NLO $q\to q'$ splitting function. The same splitting also contributes to the $n_f$ part of the $q\to q$ splitting function. This is achieved by summing over all possible ways to tag the final-state partons. We have thus presented a generic technique to implement all real-emission-type $n_f$ contributions to the next-to-leading order splitting functions, as well as the corresponding $C_F-C_A/2$ interference terms.
\section{Numerical results} \label{sec:results} \begin{figure}[t] \subfigure{ \begin{minipage}{0.475\textwidth} \begin{center} \includegraphics[scale=0.7]{fig/validation/log10_y_nm.pdf}\\[-0.75mm] \includegraphics[scale=0.7]{fig/validation/log10_y_23_diff.pdf}\\[-0.75mm] \includegraphics[scale=0.7]{fig/validation/log10_y_34_diff.pdf} \end{center} \end{minipage} \label{fig:validation_ff}} \subfigure{ \begin{minipage}{0.475\textwidth} \begin{center} \includegraphics[scale=0.7]{fig/impact/log10_yc_23.pdf}\\[-0.5mm] \includegraphics[scale=0.7]{fig/impact/log10_yc_34.pdf}\\[-0.5mm] \includegraphics[scale=0.7]{fig/impact/log10_yc_45.pdf} \end{center} \end{minipage} \label{fig:impact_ff}} \caption{Durham $k_T$-jet rates in $e^+e^-\to$ hadrons at LEP. Left: Validation of the simulation of soft-subtracted triple-collinear parton splittings. Right: Impact of the soft-subtracted triple-collinear simulation. The top panel shows the ratio between the leading-order result and the leading-order simulation including soft-subtracted triple-collinear branchings. The middle and bottom panels show a comparison between the simulation of up to one soft-subtracted triple-collinear splitting and arbitrarily many (both not including the leading-order result). \label{fig:validation_impact}} \end{figure} In this section we present the first application of our algorithm to the process $e^+e^-\to$ hadrons at LEP energies. We implement a computation of the soft-subtracted $q\to qq'\bar{q}'$ and $q\to qq\bar{q}$ triple collinear splitting functions into the \textsc{Dire}\xspace parton showers, which provide two entirely independent codes within the event generation frameworks \textsc{Pythia}\xspace~\cite{Sjostrand:1985xi,Sjostrand:2014zea} and \textsc{Sherpa}\xspace~\cite{Gleisberg:2008ta,Sherpa:2019gpd}. We employ the CT10nlo PDF set~\cite{Lai:2010vv}, and use the corresponding form of the strong coupling. 
Following standard practice, we implement the CMW scheme through a rescaling of the soft gluon coupling by $1+\alpha_s(t)/(2\pi) K$, where $K=(67/18-\pi^2/6)\,C_A-10/9\,T_R\,n_f$~\cite{Catani:1990rr}. The implementation of this term in fully differential form has been discussed in~\cite{Dulat:2018vuy}. The left side in Fig.~\ref{fig:validation_impact} shows a comparison between the \textsc{Dire+Sherpa} and \textsc{Dire+Pythia} predictions for the soft-subtracted triple-collinear $q\to qq'\bar{q}'$ splittings, when considering only a single branching. The lower panel shows the deviation of the two results, normalized bin-wise to the statistical uncertainty. We find perfect agreement, suggesting that no technical problems are present. A single $1\to 3$ branching populates both the $2\to3$ jet rate, $y_{23}$, and the $3\to4$ jet rate, $y_{34}$. The $3\to 4$ jet rate is entirely given by the $R-S$ contribution in Eq.~\eqref{eq:tcps_mc}, while the $2\to 3$ jet rate also receives contributions from the $I - \mathcal{I}$ term. The contributions from the soft-subtracted triple-collinear branchings are negative, as anticipated based on Eq.~\eqref{eq:collAnomDim}, and they are of similar size for both rates. To be consistent with the renormalization group evolution of the strong coupling, we only produce b-quarks if the shower evolution variable is above the quark mass, $t>m_b^2$. The corresponding threshold effects can be seen close to $\log_{10}(4.75^2/91.2^2) = -2.6$. A similar effect for the charm quark is not visible, since the threshold at $\log_{10}(1.3^2/91.2^2) = -3.7$ is too close to the parton-shower cutoff placed at 1~GeV. The right side of Fig.~\ref{fig:validation_impact} shows the phenomenological impact of the soft-subtracted triple-collinear branchings. The upper panel displays the ratio between the pure leading-order parton evolution and the LO + $1\to 3$ evolution, indicating a difference of up to $4\%$ in the $2\to 3$ jet rate. 
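For orientation, the numerical size of the CMW rescaling quoted at the beginning of this section can be sketched as follows; the values $C_A=3$, $T_R=1/2$, $n_f=5$, and $\alpha_s\simeq 0.118$ used below are illustrative assumptions rather than the exact settings of our runs:

```python
import math

# CMW rescaling factor K = (67/18 - pi^2/6) C_A - 10/9 T_R n_f,
# evaluated for the (assumed) values C_A = 3, T_R = 1/2, n_f = 5.
C_A, T_R, n_f = 3.0, 0.5, 5.0
K = (67.0 / 18.0 - math.pi ** 2 / 6.0) * C_A - 10.0 / 9.0 * T_R * n_f

# The soft-gluon coupling is rescaled by 1 + alpha_s/(2 pi) K; with
# alpha_s ~ 0.118 this is a percent-level effect.
alpha_s = 0.118
rescaling = 1.0 + alpha_s / (2.0 * math.pi) * K

assert abs(K - 3.4541) < 1e-3
assert 1.05 < rescaling < 1.08
```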
Compared to the triple-collinear $q\to q'$ and $q\to \bar{q}$ corrections presented in \cite{Hoche:2017iem}, we find larger effects, since we not only consider the contribution to the identified final state, but the sum over all ways to tag the $qq'\bar{q}'$ or $qq\bar{q}$ final state (cf.\ the last paragraph of Sec.~\ref{sec:cmw}). Allowing for multiple $1\to 3$ branchings has a marginal effect on the $3\to 4$ jet rate, and adds a very small correction to the $4\to 5$ jet rate. These are shown in the middle and bottom panels of Fig.~\ref{fig:impact_ff}. \section{Conclusions} \label{sec:conclusion} This note introduced a method for the consistent combination of triple-collinear and double-soft corrections to parton evolution at leading order by means of subtraction at the integrand level. We argue that a subtraction technique is the most appropriate method for addressing the soft-collinear overlap, as it allows us to cleanly separate the integrands into soft enhanced and soft finite contributions. It is also supported by the fact that the effective soft-gluon coupling generated by the radiative corrections in the triple collinear limit can be obtained by including double soft corrections alone. In our algorithm, all higher-order corrections are embedded in the parton shower in fully differential form, using the appropriate transition matrix elements computed in dimensional regularization and the $\overline{\rm MS}$ scheme. The method recovers known analytic results, such as the $n_f$ contribution to the two-loop cusp anomalous dimension. While we explicitly considered only the special case of quark pair emission from quarks, we note that other triple collinear splitting functions can be treated in the same manner. 
We have implemented our new method into two independent Monte-Carlo programs in the general-purpose event generators \textsc{Pythia}\xspace and \textsc{Sherpa}\xspace for the case of $q\to q'\bar{q}'$ and $q\to qq\bar{q}$ transitions, proving the feasibility of the algorithmic considerations for numerical studies. Overall, the impact of the genuine triple-collinear corrections to the parton cascade is small for standard observables -- provided that the leading-order shower correctly reproduces the radiation pattern at $\mathcal{O}(\alpha_s^2)$ in ordered phase-space regions. This supports previous findings that the main effect of the $\mathcal{O}(\alpha_s^2)$ corrections is to reduce the uncertainties present in the leading-order simulation. \section{Acknowledgments} We thank Joshua Isaacson for comments on the manuscript. This note was supported by funding from the Swedish Research Council, contract numbers 2016-05996 and 2020-04303, and by the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE--AC02--07CH11359. We further acknowledge funding from the European Union’s Horizon 2020 research and innovation program as part of the Marie Sk{\l}odowska-Curie Innovative Training Network MCnetITN3 (grant agreement no. 722104).
\section{Introduction} To define a path integral, one needs to specify a way to enumerate the configurations to be summed over. For a non-relativistic particle, it is common to introduce a lattice of discrete time steps, sum over piecewise linear paths across these steps, and take the continuum limit of the lattice spacing going to zero \cite{Feynman1965QuantumIntegrals}. For gravity, one could similarly introduce a simplicial lattice, sum over piecewise flat geometries on the lattice characterized by the edge lengths, and take the limit of lattice refinement (\cref{fig:sll}). Historically, this method follows from Regge's insight \cite{Regge1961GeneralCoordinates} to use piecewise flat geometries to approximate curved space(times) at the classical level. Regge's classical approach is usually referred to as Regge calculus, or simplicial gravity, while the quantum path integral based on it is usually referred to as quantum Regge calculus, or simplicial quantum gravity \cite{Rocek1981QuantumCalculus, Williams1992ReggeBibliography, Loll1998DiscreteDimensions, Hamber2009QuantumApproach, Barrett2019TullioGravity}. \begin{figure} \centering \includegraphics[width=1.\textwidth]{sll1.png} \caption{Simplicial lattice refinement.} \label{fig:sll} \end{figure} As a non-perturbative path integral approach, simplicial quantum gravity has a clear merit. It is known how to couple it to the matter species of the Standard Model (see e.g., Chapter 6 of Hamber's textbook \cite{Hamber2009QuantumApproach} and references therein). On the other hand, Euclidean quantum gravity faces the conformal instability problem \cite{Gibbons1977TheThermodynamics}. This is manifested as the problem of spikes for Euclidean simplicial quantum gravity. In concrete $2D$ models, it is shown that configurations with diverging edge lengths dominate the path integral, even when the total spacetime area is bounded \cite{Ambjrn1997SpikesCalculusb}. 
One view is that only the weak coupling phase is rendered ill by the spiky configurations, but the strong coupling phase stays healthy \cite{Hamber2019VacuumGravity}. A more pessimistic view is that conformal instability poses a lethal threat to Euclidean simplicial quantum gravity. Whatever conformal instability actually implies about Euclidean quantum gravity, the case is different for the Lorentzian. For $2D$ simplicial quantum gravity it can be shown that the Lorentzian and Euclidean theories are inequivalent, and that spikes are absent in the Lorentzian where spacetime configurations are equipped with causal structures \cite{Tate2011Fixed-topologyDomain, JiaTime-spaceGravity}.\footnote{The proof of the absence of spikes in \cite{Tate2011Fixed-topologyDomain} assumes that the causal signature of simplicial lattice edges is fixed under the path integral. In \cite{JiaTime-spaceGravity} this assumption is dropped. It is shown that spikes are still absent, provided that causally irregular points with no lightcones attached are prohibited.} The question about higher dimensions is open, but the prospect that spikes are absent in the Lorentzian in general, and the fact that spacetime is Lorentzian in Nature, provide motivation to study Lorentzian simplicial quantum gravity. Apart from a few works \cite{Tate2011Fixed-topologyDomain, Tate2012Realizability1-simplex, MikovicPiecewiseGravity, AsanteEffectiveGravity, DittrichLorentzianSimplicial, JiaTime-spaceGravity}, the path integrals of Lorentzian simplicial quantum gravity have not been studied much in the past.\footnote{In this statement we mean by simplicial quantum gravity the formalism with dynamical lengths. 
The variation of simplicial quantum gravity with fixed lengths but dynamical lattice graphs has been extensively studied in the form of causal dynamical triangulation \cite{Ambjorn2012NonperturbativeGravity, Loll2020QuantumReview}.} Because of the numerical sign problem, naive Monte Carlo simulations do not work as efficiently in the Lorentzian as in the Euclidean. This has remained a major obstacle for quantitative studies of Lorentzian simplicial quantum gravity. In this work we propose to generalize simplicial quantum gravity to the complex domain. This allows us to apply the techniques of complex contour deformation developed in recent years to alleviate the sign problem \cite{AuroraScienceCollaboration2012HighThimble, AlexandruComplexProblem}. By a higher dimensional version of Cauchy's integral theorem, a path integral with a real integration contour can equally be evaluated along a complex contour if the two contours are related across a region where the integrand is holomorphic. The sign problem could be milder on the deformed contour. As reviewed in \cite{AlexandruComplexProblem}, this idea has been successfully applied to various lattice field theories of matter. It has also been applied to analyze gravitational propagators for spin-foam models in the large spin limit \cite{Han2021SpinfoamPropagator}. Here we show that the complex contour deformation method also works for Lorentzian simplicial quantum gravity. Monte Carlo simulations are performed to compute the expectation value of spacetime lengths in $1+1D$ using the holomorphic gradient flow algorithm (also called the generalized thimble algorithm) \cite{Alexandru2016SignThimbles, Alexandru2017MonteCarloModel, AlexandruComplexProblem}. It is found that the sign fluctuations are largely suppressed on suitable complex contours. As far as we know, this constitutes the first non-perturbative computation of Lorentzian simplicial gravitational path integrals. 
It opens the possibility to investigate questions about quantum gravity non-perturbatively and quantitatively using Lorentzian simplicial quantum gravity. Notably, the expectation values computed on the complex contours are directly the results of interest. There is no analytic continuation to Euclidean spacetime like in causal dynamical triangulation \cite{Ambjorn2012NonperturbativeGravity}, nor analytic continuation of parameters in the action like in causal sets \cite{Surya2019TheGravity}. These procedures face the open problem of inverse analytic continuation, which does not arise in the method used here. Besides overcoming the sign problem, another reason to consider complex simplicial quantum gravity is to study singularity resolving processes. Quantum theory assigns non-zero probabilities to certain processes characterized by boundary conditions admitting not real, but complex semi-classical solutions. A standard example is particle tunneling \cite{Turok2014OnTime, ChermanReal-TimeInstantons, Tanizaki2014Real-timeTunneling}. It is conceivable that cosmological and black hole singularity resolving processes (see e.g., \cite{Frolov1981SphericallyGravity, Frolov1989ThroughUniverse, Barrabes1996HowHole, Frolov1998BlackPhysics, Vilenkin1982CreationNothing, Hartle1983WaveUniverse, Halliwell1991Introductory1990, Bojowald2001AbsenceCosmology, Modesto2004DisappearanceGravityb, Ashtekar2005BlackParadigmb, Hayward2006FormationHoles, Hossenfelder2010ConservativeProblem, Haggard2015Quantum-gravityTunneling, Barcelo2014TheResignation, Bianchi2018WhiteHoleb, DAmbrosio2021EndEvaporation, Oriti2017BouncingCondensatesb}) fall into the same category \cite{Hartle1989SimplicalModel, Li1993ComplexMachines, Gielen2015PerfectBounce, Gielen2016QuantumSingularities, Feldbrugge2017LorentzianCosmology, Dorronsoro2017RealCosmology, Bramberger2017QuantumSingularities, DittrichLorentzianSimplicial}. 
Lorentzian simplicial quantum gravity provides a formalism to compute the probabilities for such processes. To analyze the semi-classical solutions, the formalism needs to be generalized to the complex domain. Although simplicial quantum gravity in the complex domain has been studied before \cite{Hartle1989SimplicalModel, Louko1992ReggeCosmology, Birmingham1995LensCosmology, Birmingham1998ACalculus, Furihata1996No-boundaryUniverse, Silva1999SimplicialField, Silva1999AnisotropicField, Silva2000SimplicialPhi2, daSilvaWormholesMinisuperspace}, the complex theory is reached by analytically continuing the Euclidean theory. In addition, these works concentrated on symmetry-reduced models. In this work we specify Lorentzian simplicial gravity in arbitrary dimensions and without symmetry reduction with manifestly holomorphic expressions. Upon analytic continuation, the holomorphic expressions define simplicial gravity in the complex domain. The path integrals based on this complex action encompass both Lorentzian and Euclidean simplicial quantum gravity as special cases with different integration contours. Along the way, we show that the celebrated Gauss-Bonnet theorem admits a complex generalization. This mathematical result may be of independent interest. The paper is organized as follows. In \cref{sec:lav} and \cref{sec:a}, we review the geometric quantities of lengths, volumes, and angles of Euclidean simplicial gravity, and generalize the quantities to the Lorentzian and complex domains. In \cref{sec:qg} we define simplicial gravitational path integrals in the Lorentzian and complex domains in terms of manifestly holomorphic expressions. In \cref{sec:hf} we review the holomorphic gradient flow algorithm for numerical computations of path integrals with complex actions. Starting in \cref{sec:2dsqg} we specialize to $2D$ simplicial quantum gravity and present the formulas needed for applying the holomorphic gradient flow algorithm. 
Along the way we prove a complex version of the Gauss-Bonnet theorem. In \cref{sec:nr} we present numerical results that overcome the sign problem. In \cref{sec:d} we finish with a discussion. \section{Lengths and volumes}\label{sec:lav} In simplicial gravity, the basic variable is the squared length, and the Einstein-Hilbert action is written in terms of volume and angles. (See Hamber's textbook \cite{Hamber2009QuantumApproach} for a comprehensive and lucid introduction to Euclidean simplicial quantum gravity.) In this section and next, we start by presenting length, volume and angles for simplicial geometry in the Euclidean domain, and then generalize these quantities to the Lorentzian and complex domains. \subsection{Squared length as the basic variable} Given a metric field $g_{ab}$ on a manifold, the \textbf{squared length} $\sigma$ of a line $\gamma$ segment is given by \begin{align}\label{eq:slfg} \sigma=\int_\gamma ds^2, \end{align} where $ds^2 = g_{ab} dx^a dx^b$ is the line element. In simplicial gravity each lattice edge $e$ has a squared lengths $\sigma_e$ with $\gamma$ taken along the edge. In the Euclidean domain, $\sigma\ge 0$. In the Lorentzian domain, we choose the signature convention that $\sigma>0$ for spacelike intervals, $\sigma<0$ for timelike intervals, and $\sigma=0$ for lightlike intervals. In a continuum field theory, the basic gravitational variable is usually taken to be the metric field $g_{ab}$, and the squared length is derived from $g_{ab}$ using (\ref{eq:slfg}). In contrast, in simplicial gravity the basic variable is usually taken to be the squared lengths $\sigma$ on the lattice edges. A gravitational configuration is given in terms of the squared length on the edges, from which the metric can be derived as follows. 
\begin{figure} \centering \includegraphics[width=.4\textwidth]{spl2.png} \caption{A simplex with labelled vertices $i$ and edge vectors $e_i$.} \label{fig:spl1} \end{figure} Let a $d$-simplex be given and label the vertices by $0,1,\cdots, d$ (\cref{fig:spl1}). Within the simplex we set up a coordinate system whose basis vectors $e_i$ for $i=1,\cdots, d$ point from vertex $0$ to vertex $i$. Define a dot product $\cdot$ by \begin{align}\label{eq:edots} e_i\cdot e_j = \frac{1}{2}(\sigma_{0i}+\sigma_{0j}-\sigma_{ij}), \end{align} where $\sigma_{ij}$ for $i,j = 0, 1,\cdots, d$ are the squared lengths of the edges connecting vertices $i$ and $j$. Using the metric \begin{align}\label{eq:metric} g_{ij} = \frac{1}{2}(\sigma_{0i}+\sigma_{0j}-\sigma_{ij}), \end{align} the dot product of any pair of vectors $u= u^i e_i$ and $v= v^i e_i$ can be computed as $u\cdot v = g_{ij} u^i v^j$, where the Einstein summation convention is used. The metric (\ref{eq:metric}) is the simplicial analog of the continuum metric. In the continuum, squared lengths are computed through $ds^2=g_{ab}dx^a dx^b$. On a simplicial lattice, edge squared lengths are computed through \begin{align}\label{eq:slv} \sigma=v\cdot v = g_{ij} v^i v^j, \end{align} where $v$ is the edge vector. For edges containing vertex $0$, $v=e_i$, and $v\cdot v=g_{ii}=\sigma_{0i}$. For other edges, $v=e_i-e_j$, and $v\cdot v=g_{ii}-g_{ij}-g_{ji}+g_{jj}=\sigma_{ij}$. The simplex is understood to have a homogeneous interior. For a line segment within the simplex, the squared length is computed by the same formula (\ref{eq:slv}) where $v$ is the vector for the line segment. \subsection{Complexifying strategy}\label{sec:mtd} In complexifying simplicial geometry, we adopt a ``squared length based'' methodology. After identifying a quantity of interest, such as volumes and angles, we express it as a function of the squared lengths. 
The function is chosen to agree with known expressions in the Lorentzian and/or Euclidean domains where the squared lengths take real values. In addition, the function should be holomorphic if possible to facilitate the deformations of integration contours when we study the quantum theory. Suppose the above two requirements can be met. Then we can analytically continue the domain of the function to complex squared lengths. When multi-valued functions such as the square root and the log are present, we will extend the domain to be the corresponding Riemann surfaces. As an example, consider the (linear) \textbf{length} defined by $l = \sqrt{\sigma}.$ This function is holomorphic away from the branch point $\sigma=0$. In the Euclidean domain $l>0$. In the Lorentzian domain $l>0$ for spacelike edges, and $l$ is positive imaginary for timelike edges in the current choice of the positive branch for the square root. \subsection{Volumes}\label{sec:vol} The squared length and length given above are special cases of squared volumes and volumes. In the continuum, let $s$ be a simplex defined by some unit vectors. Suppose the metric is constant in the region of the simplex. Then the squared volume for the simplex is $\mathbb{V} = \int_s \det g_{ab}(x) ~d^Dx=\frac{1}{d!}\det g_{ab}$, where $\frac{1}{d!}$ arises because this is for a simplex rather than a hypercube. On a simplicial complex, define the \textbf{squared volume} of a $d$-simplex by \begin{align}\label{eq:svol1} \mathbb{V} = \frac{1}{(d!)^2}\det g_{ij}, \end{align} where $g_{ij}$ as defined in (\ref{eq:metric}) is a function of the edge squared lengths. 
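As a minimal illustration of the Gram determinant formula (\ref{eq:svol1}) (our own check, not part of the formalism), one can evaluate it for an equilateral triangle with unit squared edge lengths:

```python
import math

# Squared volume V^2 = det(g_ij)/(d!)^2 for d = 2, with the Gram metric
# g_ij = (sigma_0i + sigma_0j - sigma_ij)/2 built from squared edge lengths.
sigma_01 = sigma_02 = sigma_12 = 1.0  # equilateral triangle
off_diag = 0.5 * (sigma_01 + sigma_02 - sigma_12)
g = [[sigma_01, off_diag],
     [off_diag, sigma_02]]
det_g = g[0][0] * g[1][1] - g[0][1] * g[1][0]
V2 = det_g / math.factorial(2) ** 2

# The area of a unit equilateral triangle is sqrt(3)/4, so V^2 = 3/16.
assert abs(V2 - 3.0 / 16.0) < 1e-12
```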
An equivalent expression that is manifestly symmetric in the squared lengths is the Cayley-Menger determinant \begin{align}\label{eq:svol} \mathbb{V} = \frac{(-1)^{d+1}}{2^d (d!)^2} \begin{vmatrix} 0 & 1 & 1 & 1 & \ldots & 1 \\ 1 & 0 & \sigma _{01} & \sigma _{02} & \ldots & \sigma _{0 d} \\ 1 & \sigma _{01} & 0 & \sigma _{12} & \ldots & \sigma _{1 d} \\ 1 & \sigma _{02} & \sigma _{12} & 0 & \ldots & \sigma _{2 d} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \sigma _{0 d} & \sigma _{1 d} & \sigma _{2 d} & \ldots & 0 \\ \end{vmatrix}. \end{align} The \textbf{volume} $V$ of a $d$-simplex is defined by \begin{align}\label{eq:vol} V =\sqrt{\mathbb{V}}. \end{align} Both $\mathbb{V}$ and $V$ are defined for complex squared lengths. In (\ref{eq:vol}) the squared volume is taken to live on the Riemann surface of the square root function. $V$ is holomorphic as a function of the squared lengths away from the branch points where $\mathbb{V}=0$. In the Euclidean domain, $\mathbb{V}>0$. In the Lorentzian domain, $\mathbb{V} \le 0$. The positive branch for the square root is chosen so that $V$ is positive imaginary or zero for Lorentzian simplices. \begin{example} In lower dimensions some familiar expressions are recovered. In $1D$ the volumes derived from (\ref{eq:svol}) and (\ref{eq:vol}) are \begin{align} \mathbb{V}=&\sigma_{01}, \\ V =& \sqrt{\sigma_{01}}, \end{align} which reproduce the length formulas. In $2D$ the volumes for a triangle $t$ derived from (\ref{eq:svol}) and (\ref{eq:vol}) are \begin{align}\label{eq:2dsvol} \mathbb{V}=&\frac{1}{16} \left(-\sigma _{01}^2-\sigma _{02}^2-\sigma _{12}^2+2 \sigma _{01} \sigma _{02}+2 \sigma _{01} \sigma _{12}+2 \sigma _{02} \sigma _{12}\right), \\ V =& \frac{1}{4} \sqrt{-\sigma _{01}^2-\sigma _{02}^2-\sigma _{12}^2+2 \sigma _{01} \sigma _{02}+2 \sigma _{01} \sigma _{12}+2 \sigma _{02} \sigma _{12}}, \label{eq:2dvol} \end{align} which reproduce Heron's formula for triangle areas. 
\qed \end{example} \subsection{Generalized triangle inequalities}\label{sec:gti} The squared lengths must obey certain generalized triangle inequalities to describe Euclidean and Lorentzian simplices. In the Euclidean domain, a simplex $s$ obeys \begin{align}\label{eq:egti} \mathbb{V} > 0 \quad \text{ for all subsimplices of $s$ including $s$ itself}. \end{align} For example, for a triangle this means the squared area and the squared lengths are positive: \begin{align} &\mathbb{V}=\frac{1}{16} \left(-\sigma _{01}^2-\sigma _{02}^2-\sigma _{12}^2+2 \sigma _{01} \sigma _{02}+2 \sigma _{01} \sigma _{12}+2 \sigma _{02} \sigma _{12}\right) >0, \\ &\sigma_{01},\sigma_{02},\sigma_{12} >0. \end{align} In the Lorentzian domain, a simplex $s$ obeys \cite{Tate2012Realizability1-simplex, AsanteEffectiveGravity} \begin{align}\label{eq:lgti} \mathbb{V}_s < 0; \text{ and } \mathbb{V}_{r}<0 \implies \mathbb{V}_{t}\leq 0 \text{ for all $t\supset r$}. \end{align} A simplex is timelike if $\mathbb{V}<0$, and spacelike if $\mathbb{V}>0$. The Lorentzian generalized inequalities (\ref{eq:lgti}) say that the simplex $s$ itself needs to be timelike. Furthermore, if any subsimplex $r$ is timelike, then all subsimplices $t$ containing $r$ cannot be spacelike. This is because a timelike subsimplex cannot be embedded in a spacelike subsimplex. \section{Angles}\label{sec:a} \subsection{Euclidean angles}\label{sec:ea} In Euclidean space, what is the angle $\theta$ bounded by two vectors $a$ and $b$? Since \begin{align}\label{eq:adotbth} a\cdot b=|a||b|\cos\theta, \quad |x|:=\sqrt{x\cdot x}, \end{align} one answer is that $\theta=\cos^{-1}\frac{a\cdot b}{|a||b|}$. Another answer is in terms of the scalar wedge product defined by \begin{align} a\wedge b=&\sqrt{(a\cdot b)^2-(a\cdot a)(b\cdot b)}. \label{eq:awedgeb1} \end{align} Using $\sin^2 \theta+\cos^2 \theta=1$, it is easy to see that for $\theta>0$, \begin{align}\label{eq:awedgebth} a\wedge b=i |a||b|\sin \theta. 
\end{align} Therefore $\theta=\sin^{-1}\frac{a\wedge b}{i|a||b|}$. The answer (\ref{eq:adotbth}) or (\ref{eq:awedgebth}) in isolation has ambiguities, because different angles can have the same $\cos$ or $\sin$ values. Within a $2\pi$ period, angles are uniquely determined when the information of $\cos^{-1}$ and $\sin^{-1}$ is combined. From (\ref{eq:adotbth}) and (\ref{eq:awedgebth}), we derive that $e^{i\theta}=\frac{1}{|a||b|}(a\cdot b+a\wedge b)$, so\footnote{This formula is related to the so-called ``geometric product'' $\vec{a}\cdot \vec{b}+\vec{a}\wedge \vec{b}$, which offers a way to encode rotations. The difference is that there $\vec{a}\wedge \vec{b}$ is a bivector instead of a scalar.} \begin{align}\label{eq:ea} \theta =& -i\log \alpha, \\ \alpha=&\frac{1}{|a||b|}(a\cdot b+a\wedge b). \end{align} This determines $\theta$ uniquely within a $2\pi$ period depending on the choice of the branch for the log function. \subsection{Complex angles}\label{sec:ca} In the general complex domain, we take \begin{align} \theta =& -i\log \alpha, \label{eq:theta} \\ \alpha=&\frac{a\cdot b+a\wedge b}{\sqrt{a\cdot a}\sqrt{b\cdot b}}=\frac{a\cdot b+\sqrt{(a\cdot b)^2-(a\cdot a)(b\cdot b)}}{\sqrt{a\cdot a}\sqrt{b\cdot b}}, \label{eq:alpha} \end{align} as the definition of \textbf{complex angles}. Equation (\ref{eq:alpha}) is one of the expressions in Sorkin's definition of Lorentzian angles in the Minkowski plane \cite{SorkinLorentzianVectors}.\footnote{In Sorkin's definition of Lorentzian triangles \cite{SorkinLorentzianVectors}, (\ref{eq:alpha}) is used for angles bounded by two spacelike vectors in the same quadrant, and angles bounded by a spacelike vector and a timelike vector. 
A different expression is used for angles bounded by two timelike vectors in the same quadrant.} Here we recognize that more generally, (\ref{eq:theta}) and (\ref{eq:alpha}) offer a unified definition for Euclidean, Lorentzian, and complex angles in all cases.\footnote{For the formula to apply to the Euclidean case, the $-i$ factor in (\ref{eq:theta}) is necessary. In comparing with other works based on Sorkin's definition one should keep in mind that the $-i$ factor is absent there. In addition, for Lorentzian angles, (\ref{eq:la}) defined below differs in the choice of square root branches from Sorkin's formula.} In terms of the edge squared lengths (\cref{fig:tri}), \begin{align} a\cdot b=& \frac{1}{2}(\sigma_{a}+\sigma_{b}-\sigma_c), \label{eq:adotb} \\ a\cdot a=&\sigma_a, \quad b\cdot b=\sigma_b, \\ a\wedge b=& \frac{1}{2} \sqrt{\sigma _{a}^2+\sigma _{b}^2+\sigma _{c}^2-2 \sigma _{a} \sigma _{b}-2 \sigma _{b} \sigma _{c}-2 \sigma _{c} \sigma _{a}}. \label{eq:awedgeb} \end{align} Therefore \begin{align}\label{eq:ca} \theta =& -i\log \alpha,\nonumber \\ \alpha=& \frac{\sigma_{a}+\sigma_{b}-\sigma_c+\sqrt{\sigma _{a}^2+\sigma _{b}^2+\sigma _{c}^2-2 \sigma _{a} \sigma _{b}-2 \sigma _{b} \sigma _{c}-2 \sigma _{c} \sigma _{a}}}{2\sqrt{\sigma_a}\sqrt{\sigma_b}}. \end{align} We take (\ref{eq:ca}) as the definition of \textbf{complex angles} in terms of complex squared lengths. This function is holomorphic away from the log and square root branch points. \begin{figure} \centering \includegraphics[width=.4\textwidth]{tri.png} \caption{A triangle with squared lengths $\sigma_{a}, \sigma_{b}, \sigma_c$.} \label{fig:tri} \end{figure} Note from (\ref{eq:2dsvol}) that the input $A$ to the numerator square root equals $-16\mathbb{V}$, where $\mathbb{V}$ is the squared volume for the triangle in \cref{fig:tri}. By the triangle inequalities of \cref{sec:gti}, $A>0$ for a Lorentzian triangle and $A<0$ for an Euclidean triangle. 
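As a quick sanity check of (\ref{eq:ca}) in the Euclidean regime, consider the following Python sketch of ours; the principal branches are those supplied by Python's \texttt{cmath}, and the function name is our own:

```python
import cmath

# Complex angle theta = -i log(alpha) built from squared edge lengths,
# evaluated with the principal branches of sqrt and log (Euclidean case).
def complex_angle(sa, sb, sc):
    disc = cmath.sqrt(sa**2 + sb**2 + sc**2
                      - 2*sa*sb - 2*sb*sc - 2*sc*sa)
    alpha = (sa + sb - sc + disc) / (2 * cmath.sqrt(sa) * cmath.sqrt(sb))
    return -1j * cmath.log(alpha)

# Equilateral triangle (all squared lengths 1): each angle is pi/3.
assert abs(complex_angle(1.0, 1.0, 1.0) - cmath.pi / 3) < 1e-12

# 3-4-5 right triangle (squared lengths 9, 16, 25): the enclosed angle is pi/2.
assert abs(complex_angle(9.0, 16.0, 25.0) - cmath.pi / 2) < 1e-12
```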
For Euclidean angles the principal branches of the log and square root functions are chosen. The complex angles then reduce to the correct Euclidean angles, since the former are obtained by generalizing the latter. The choices of branches for Lorentzian angles are specified below. \subsection*{Sum of complex angles in a triangle} The angles of an Euclidean triangle sum to $\pi$. In the complex domain, this generalizes to $(2n+1)\pi$ with $n\in \mathbb{Z}$. \begin{proposition}\label{th:sta} The complex angles sum to $(2n+1)\pi$ with $n\in \mathbb{Z}$ for a triangle of complex squared edge lengths. \end{proposition} \begin{proof} Consider a triangle with complex squared lengths $\sigma_{a}, \sigma_{b}, \sigma_c$ (\Cref{fig:tri}), and complex angles $\theta=-i\log\alpha, \theta_1=-i\log\alpha_1, \theta_2=-i\log\alpha_2$. A straightforward calculation using (\ref{eq:ca}) yields \begin{align} \alpha_1 \alpha_2=\frac{-\sigma_{a}-\sigma_{b}+\sigma_c+\sqrt{\sigma _{a}^2+\sigma _{b}^2+\sigma _{c}^2-2 \sigma _{a} \sigma _{b}-2 \sigma _{b} \sigma _{c}-2 \sigma _{c} \sigma _{a}}}{2\sqrt{\sigma_a}\sqrt{\sigma_b}}. \end{align} A similar calculation yields $\alpha \alpha_1 \alpha_2=-1$. For the complex log function, $\log(z_1 z_2)=\log z_1 +\log z_2$ up to multiples of $2\pi i$. Therefore $\theta+\theta_1+\theta_2=-i\log(-1)+2 \pi n=(2n+1)\pi$, where $n$ is an integer. \end{proof} \subsection{Lorentzian angles}\label{sec:la} In previous works \cite{SorkinLorentzianVectors, Tate2011Fixed-topologyDomain, AsanteEffectiveGravity}, not one, but multiple expressions for Lorentzian angles in terms of log and trigonometric functions were used depending on where the edges lie in the Minkowski plane. A merit of the complex angle defined above is that it unifies these multiple cases (as well as the Euclidean case) in one formula. Here we focus on convex angles, because in simplicial gravity only these arise from individual simplices. 
Non-convex angles arise from summing the convex angles of individual simplices. Here we consider the branch choice \begin{align} \theta =& -i \Log \alpha,\nonumber \\ \alpha= \frac{a\cdot b+\sqrt{(a\cdot b)^2-(a\cdot a)(b\cdot b)}}{\sqrt{a\cdot a-0i}\sqrt{b\cdot b-0i}},\label{eq:la} \end{align} for Lorentzian angles. In terms of the squared lengths, \begin{align} \alpha=& \frac{\sigma_{a}+\sigma_{b}-\sigma_c+\sqrt{\sigma _{a}^2+\sigma _{b}^2+\sigma _{c}^2-2 \sigma _{a} \sigma _{b}-2 \sigma _{b} \sigma _{c}-2 \sigma _{c} \sigma _{a}}}{2\sqrt{\sigma_a-0i}\sqrt{\sigma_b-0i}}. \end{align} Here \begin{align}\label{eq:lpb} \Log z =& \log r + i\phi, \quad z=r e^{i\phi}\text{ with }\phi\in (-\pi,\pi] \\\sqrt{z}=&\sqrt{r}e^{i\phi/2}, \quad z=r e^{i\phi}\text{ with }\phi\in (-\pi,\pi], \label{eq:sqrtb} \\\sqrt{z-0i}=&\sqrt{r}e^{i\phi/2}, \quad z=r e^{i\phi}\text{ with }\phi\in [-\pi,\pi).\label{eq:bcc} \end{align} The first two are just the principal branches of log and square root. The third one $\sqrt{z-0i}$ is negative imaginary for $z<0$. The symbol $-0i$ is a reminder that $z<0$ is continuously connected to $z>0$ through the lower complex plane instead of the upper one. The following properties hold for Lorentzian angles. \begin{proposition}\label{prop:laa} The Lorentzian convex angles $\theta$ defined by formula (\ref{eq:la}) are additive. \end{proposition} \begin{proposition}\label{prop:caba} The complex Lorentzian angle $\theta$ is related to the Lorentz boost angle $\theta_{\text{boost}}$ by \begin{align}\label{eq:caba} \theta = -i \theta_{\text{boost}}. \end{align} Here the convention is that $\theta_{\text{boost}}>0$ for a boost angle relating spacelike vectors, and $\theta_{\text{boost}}<0$ for a boost angle relating timelike vectors. \end{proposition} \begin{proposition}\label{prop:casl} Between two edges related by the reflection across a light ray, the angle $\theta$ equals \begin{align} \theta = \pi/2, \end{align} whose imaginary part vanishes. 
\end{proposition} \begin{proposition}\label{prop:fp2p} In the flat Minkowski plane, the angles around a point sum to $2\pi$. \end{proposition} \begin{proposition}\label{prop:rtheta} For a convex Lorentzian angle $\theta$, \begin{align} \Re \theta = N\pi/2, \end{align} where $N=0,1,2$ is the number of light rays enclosed within the angle. \end{proposition} \begin{proposition}\label{prop:ltri} The angles of a Lorentzian triangle sum to $\pi$. \end{proposition} These results are easier to derive after working through some examples. The examples also serve to help readers unfamiliar with Lorentzian angles \cite{SorkinLorentzianVectors} build intuition. In the Minkowski plane, a convex angle can bound $N=0,1,$ or $2$ light rays (\cref{fig:lpv}). According to whether the vectors bounding the angle are timelike or spacelike (for reasons given after the examples, we do not consider lightlike edges here), there are five cases in total. We consider them in turn. \begin{example}[Spacelike edges within the same quadrant] Consider spacelike edges $a$ and $b$ forming a triangle with squared lengths $\sigma_a=1, \sigma_b=3/4, \sigma_{ab}=-1/4$, where $\sigma_{ab}$ is the squared length for the third edge (\Cref{fig:lpv}). The complex angle $\theta$ bounded by $a$ and $b$ can be calculated using (\ref{eq:adotb}) to (\ref{eq:awedgeb}) as follows. \begin{align} a\cdot b=& \frac{1}{2}(\sigma_{a}+\sigma_{b}-\sigma_{ab})=1, \\ a\cdot a=&\sigma_a=1, \quad b\cdot b=\sigma_b=3/4, \\ a\wedge b=&\sqrt{(a\cdot b)^2-(a\cdot a)(b\cdot b)}=1/2, \\ \theta =& -i\log (\frac{a\cdot b+a\wedge b}{\sqrt{a\cdot a-0i}\sqrt{b\cdot b-0i}}) \nonumber \\ =& -i\log\sqrt{3}. \end{align} \qed \end{example} The above calculation is based on the invariant squared lengths and does not invoke any coordinate system.
Alternatively, one could introduce a coordinate system in the Minkowski plane, represent $a$ and $b$ as vectors there, and use the Minkowski inner product for $\cdot$ to calculate $\theta$. For instance, one could choose $a=(1,0)$ and $b=(1,1/2)$ in the coordinate convention $(x,t)$. Then again $a\cdot b= 1^2-0=1$, $a\cdot a= 1^2-0^2=1$, and $b\cdot b=1^2-(1/2)^2=3/4$, so one will get the same result for $\theta$. \begin{figure} \centering \includegraphics[width=.4\textwidth]{lpv.png} \caption{The Minkowski plane with four quadrants bounded by dashed light rays. The edges $a$ to $f$ are distributed in different quadrants.} \label{fig:lpv} \end{figure} The complex angle $\theta$ is related to the boost angle of Lorentz transformations. The boost angle from $a$ to $b$ is, up to a choice of sign, \begin{align}\label{eq:ba} \theta_{\text{boost}}=\cosh^{-1}(\hat{a}\cdot\hat{b}). \end{align} Here $\hat{x}:=x/\sqrt{\abs{x\cdot x}}$ denotes the normalized vector for $x$. For the vectors $a=(1,0)$ and $b=(1,1/2)$, $\hat{a}=(1,0)$ and $\hat{b}=(2/\sqrt{3},1/\sqrt{3})$. Therefore $\theta_{\text{boost}}=\cosh^{-1}(2/\sqrt{3})=\log\sqrt{3}$, where we used the elementary identity \begin{align}\label{eq:chlog} \cosh^{-1} z = \log(z+\sqrt{z^2-1}). \end{align} In this case for two spacelike edges, upon choosing the boost angle to be positive, we see that $\theta = -i \theta_{\text{boost}}.$ Using (\ref{eq:ba}) and (\ref{eq:chlog}), it is easy to check that this relation holds for all pairs of spacelike edges in the same quadrant in the Minkowski plane. We will see next that this relation also holds for timelike edges. \begin{example}[Timelike edges within the same quadrant] Consider timelike edges $c$ and $d$ forming a triangle with squared lengths $\sigma_c=-3/4, \sigma_d=-1, \sigma_{cd}=1/4$, where $\sigma_{cd}$ is the squared length for the third edge (\Cref{fig:lpv}).
The complex angle $\theta$ bounded by $c$ and $d$ can be calculated using (\ref{eq:adotb}) to (\ref{eq:awedgeb}) as follows. \begin{align} c\cdot d=& \frac{1}{2}(\sigma_{c}+\sigma_{d}-\sigma_{cd})=-1, \\ c\cdot c=&\sigma_c=-3/4, \quad d\cdot d=\sigma_d=-1, \\ c\wedge d=&\sqrt{(c\cdot d)^2-(c\cdot c)(d\cdot d)}=1/2, \\ \theta =& -i\log (\frac{c\cdot d+c\wedge d}{\sqrt{c\cdot c-0i}\sqrt{d\cdot d-0i}}) \nonumber \\ =& -i\log\frac{-1+1/2}{(-i\sqrt{3/4})(-i\sqrt{1})}=-i\log(1/\sqrt{3}). \end{align} \qed \end{example} Alternatively, setting $c=(1/2,1)$ and $d=(0,1)$ in a coordinate system $(x,t)$ and performing the calculation there leads to the same $\theta$. Note that $c$ and $b$, as well as $d$ and $a$ are related by reflection with respect to the light ray separating quadrant I and II. The same Lorentz boost transformation that maps $a$ to $b$ will map $d$ to $c$. The boost angle from $a$ to $b$ is anti-clockwise, while that from $d$ to $c$ is clockwise. Since we chose the boost angle from $a$ to $b$ to be positive, it is reasonable to choose the boost angle from $d$ to $c$ to be negative. In this case we have \begin{align}\label{eq:ba2} \theta_{\text{boost}}=-\cosh^{-1}(|\hat{c}\cdot\hat{d}|)=-\cosh^{-1}(-\hat{c}\cdot\hat{d}), \end{align} since for timelike vectors in the same quadrant $\hat{c}\cdot\hat{d}<0$, and the normalized vectors take the form $\hat{x}:=x/\sqrt{\abs{x\cdot x}}=x/\sqrt{-x\cdot x}$. From this we obtain $\hat{c}=(1/\sqrt{3},2/\sqrt{3})$ and $\hat{d}=(0,1)$, so $\theta_{\text{boost}}=-\cosh^{-1}(2/\sqrt{3})=\log(1/\sqrt{3})$. Again, $\theta = -i \theta_{\text{boost}}$. Using (\ref{eq:ba2}) and (\ref{eq:chlog}), it is not hard to check that this relation holds for all pairs of timelike edges in the same quadrant in the Minkowski plane. Since boost angles exist only between two spacelike vectors in the same quadrant and two timelike vectors in the same quadrant, we have proved \Cref{prop:caba}.
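The relations above are straightforward to check numerically. The following sketch (illustrative, not part of the original derivation; the function names are chosen here) evaluates the squared-length form of (\ref{eq:la}) with the $\sqrt{z-0i}$ branch, and reproduces the two worked examples as well as the angle sum of the triangle with squared lengths $(1, 3/4, -1/4)$:

```python
import cmath
import math

def sqrt_m0i(z):
    """sqrt(z - 0i): principal square root with the cut approached from
    below, so the result is negative imaginary for real z < 0."""
    if z >= 0:
        return complex(math.sqrt(z), 0.0)
    return complex(0.0, -math.sqrt(-z))

def lorentzian_angle(sa, sb, sc):
    """Complex angle theta = -i Log alpha between the edges of squared
    lengths sa, sb, with sc the squared length of the opposite edge.
    Assumes the discriminant is positive, as in the Lorentzian examples."""
    disc = sa**2 + sb**2 + sc**2 - 2*(sa*sb + sb*sc + sc*sa)
    alpha = (sa + sb - sc + math.sqrt(disc)) / (2 * sqrt_m0i(sa) * sqrt_m0i(sb))
    return -1j * cmath.log(alpha)  # principal Log

# Spacelike pair a, b: theta = -i log sqrt(3)
print(lorentzian_angle(1.0, 0.75, -0.25))
# Timelike pair c, d: theta = -i log(1/sqrt(3))
print(lorentzian_angle(-0.75, -1.0, 0.25))
# The three angles of the triangle (1, 3/4, -1/4): the imaginary (boost)
# parts cancel, and each light ray crossing contributes pi/2 to the real part.
s = (lorentzian_angle(1.0, 0.75, -0.25)
     + lorentzian_angle(0.75, -0.25, 1.0)
     + lorentzian_angle(-0.25, 1.0, 0.75))
print(s)  # approximately pi + 0j
```

The last check makes the counting of \Cref{prop:rtheta} concrete: the two real contributions of $\pi/2$ come from the two vertices whose angles each enclose one light ray.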
\begin{example}[A spacelike edge and a timelike edge] Consider the spacelike edge $a$ and timelike edge $c$ forming a triangle with squared lengths $\sigma_a=1, \sigma_c=-3/4, \sigma_{ac}=-3/4$, where $\sigma_{ac}$ is the squared length for the third edge (\Cref{fig:lpv}). The complex angle $\theta$ bounded by $a$ and $c$ can be calculated using (\ref{eq:adotb}) to (\ref{eq:awedgeb}) as follows. \begin{align} a\cdot c=& \frac{1}{2}(\sigma_{a}+\sigma_{c}-\sigma_{ac})=1/2, \\ a\cdot a=&\sigma_a=1, \quad c\cdot c=\sigma_c=-3/4, \\ a\wedge c=&\sqrt{(a\cdot c)^2-(a\cdot a)(c\cdot c)}=1, \\ \theta =& -i\log (\frac{a\cdot c+a\wedge c}{\sqrt{a\cdot a-0i}\sqrt{c\cdot c-0i}}) \nonumber \\ =& -i\log\frac{1/2+1}{(\sqrt{1})(-i\sqrt{3/4})}=-i\log(i\sqrt{3})=-i\log\sqrt{3}+\pi/2. \end{align} \qed \end{example} Alternatively, setting $a=(1,0)$ and $c=(1/2,1)$ in a coordinate system $(x,t)$ and performing the calculation there leads to the same $\theta$. Note the relevance of the choice of branch for the square root. Had we chosen the branch without $-0i$, the denominator would be $i\sqrt{3/4}$ instead, and the real part of $\theta$ would be $- \pi/2$. In the choice with $-0i$, we have: \begin{lemma}\label{lm:aste} The angle $\theta$ between a spacelike edge and a timelike edge obeys \begin{align} \Re\theta = \pi/2. \end{align} \end{lemma} \begin{proof} Without loss of generality let $\sigma_a>0$ and $\sigma_c<0$. Then $(a\cdot a)(c\cdot c)<0$, so $a\wedge c=\sqrt{(a\cdot c)^2-(a\cdot a)(c\cdot c)}>\abs{a\cdot c}$. Therefore the numerator of $\alpha$, $a\cdot c+a\wedge c$, is positive. The denominator $\sqrt{a\cdot a-0i}\sqrt{c\cdot c-0i}$ is negative imaginary. Therefore $\alpha$ is positive imaginary. It follows that $\theta=-i\log(i r)=-i\log r+\pi/2$ for some $r>0$. \end{proof} For the special case of two edges $a$ and $c$ related by a reflection across a light ray as the reflection axis, the angle bounded by them equals $\theta = \pi/2$.
This is the content of \Cref{prop:casl}, which is proved by noting that $\sigma_a=-\sigma_c$ and $\sigma_{ac}=0$. From these we derive that $a\cdot c = 0$, $a\wedge c= \sqrt{\sigma_a^2}$, whence $\alpha=\sqrt{\sigma_a^2}/(\sqrt{\sigma_a-0i}\sqrt{-\sigma_a-0i})=i$. Therefore $\theta=-i\log\alpha=\pi/2$. This should be expected. The boost angles from $a$ and $c$ to the light ray are equal in magnitude and opposite in sign. When added up to obtain $\Im\theta$ according to \Cref{prop:caba}, they cancel. By \Cref{prop:rtheta}, $\Re\theta=\pi/2$ because travelling from $a$ to $c$ crosses one light ray. \Cref{prop:casl} implies that in the flat Minkowski plane the angles around a point sum to $2\pi$, which is the content of \Cref{prop:fp2p}. Consider four edges right in the middle of the four quadrants. According to \cref{prop:casl}, the four angles formed by them all equal $\pi/2$, so they sum to $2\pi$. \begin{example}[Spacelike edges in different quadrants] Consider two spacelike edges $a$ and $e$ in different quadrants forming a triangle with squared lengths $\sigma_a=1, \sigma_e=3/4, \sigma_{ae}=15/4$, where $\sigma_{ae}$ is the squared length for the third edge (\Cref{fig:lpv}). The complex angle $\theta$ bounded by $a$ and $e$ can be calculated using (\ref{eq:adotb}) to (\ref{eq:awedgeb}) as follows. \begin{align} a\cdot e=& \frac{1}{2}(\sigma_{a}+\sigma_{e}-\sigma_{ae})=-1, \\ a\cdot a=&\sigma_a=1, \quad e\cdot e=\sigma_e=3/4, \\ a\wedge e=&\sqrt{(a\cdot e)^2-(a\cdot a)(e\cdot e)}=1/2, \\ \theta =& -i\log (\frac{a\cdot e+a\wedge e}{\sqrt{a\cdot a-0i}\sqrt{e\cdot e-0i}}) \nonumber \\ =& -i\log(-1/\sqrt{3})=-i\log(1/\sqrt{3})+\pi. \end{align} \qed \end{example} Again, the reader can check that the vectors $a=(1,0)$ and $e=(-1,1/2)$ lead to the same $\theta$. Note the relevance of the choice of branch for the log function. The principal branch which we chose yields $\Re\theta = \pi$ for $\alpha<0$. A different choice could result in $\Re\theta = -\pi$.
Given the branch choices for the square roots, only for the principal branch can the angles possibly be additive. To see this, note that by \Cref{lm:aste}, each light ray crossing accrues $\pi/2$ for $\Re\theta$. Since from $a$ to $e$ there are two light rays crossed, $\Re\theta$ needs to be $\pi$ if the angles are additive. \begin{example}[Timelike edges in different quadrants] Consider two timelike edges $d$ and $f$ in different quadrants forming a triangle with squared lengths $\sigma_d=-1, \sigma_f=-3/4, \sigma_{df}=-15/4$, where $\sigma_{df}$ is the squared length for the third edge (\Cref{fig:lpv}). The complex angle $\theta$ bounded by $d$ and $f$ can be calculated using (\ref{eq:adotb}) to (\ref{eq:awedgeb}) as follows. \begin{align} d\cdot f=& \frac{1}{2}(\sigma_{d}+\sigma_{f}-\sigma_{df})=1, \\ d\cdot d=&\sigma_d=-1, \quad f\cdot f=\sigma_f=-3/4, \\ d\wedge f=&\sqrt{(d\cdot f)^2-(d\cdot d)(f\cdot f)}=1/2, \\ \theta =& -i\log (\frac{d\cdot f+d\wedge f}{\sqrt{d\cdot d-0i}\sqrt{f\cdot f-0i}}) \nonumber \\ =& -i\log\frac{1+1/2}{(-i)(-i\sqrt{3/4})}=-i\log(-\sqrt{3})=-i\log(\sqrt{3})+\pi. \end{align} \qed \end{example} Alternatively, setting $d=(0,1)$ and $f=(-1/2,-1)$ in a coordinate system $(x,t)$ and performing the calculation there leads to the same $\theta$. In the above two cases $\Re \theta = \pi$. It actually holds in general that crossing two light rays makes the angle accrue a real part of $\pi$. The reason is that the log argument is negative for two light ray crossings, which yields $\Re \theta = \pi$. To see that the log argument is negative, note that for two spacelike vectors $a$ and $e$ in different quadrants, $a\cdot e= \frac{1}{2}(\sigma_{a}+\sigma_{e}-\sigma_{ae})<0$ as a consequence of the Lorentzian triangle inequality (\ref{eq:lgti}). Therefore the log argument $\frac{a\cdot e+a\wedge e}{\sqrt{a\cdot a-0i}\sqrt{e\cdot e-0i}}<0$. 
For two timelike vectors $d$ and $f$ in different quadrants, $d\cdot f= \frac{1}{2}(\sigma_{d}+\sigma_{f}-\sigma_{df})>0$ as a consequence of the Lorentzian triangle inequality (\ref{eq:lgti}). In addition, $d\wedge f=\sqrt{(d\cdot f)^2-(d\cdot d)(f\cdot f)}<\abs{d\cdot f}$. Therefore the log argument $\frac{d\cdot f+d\wedge f}{\sqrt{d\cdot d-0i}\sqrt{f\cdot f-0i}}<0$. Since a convex angle in the Minkowski plane can only enclose $0,1$ or $2$ light rays, we have proved \Cref{prop:rtheta}. For any triangle in the Minkowski plane, the three angles enclose two light rays in total. Therefore the sum of the three angles has $\pi$ as its real part. By \cref{th:sta}, the imaginary part vanishes. This proves \cref{prop:ltri}. Finally, we want to prove \cref{prop:laa}, i.e., \begin{align} \theta(a,c)=\theta(a,b)+\theta(b,c), \end{align} where $b$ lies between $a$ and $c$ in the Minkowski plane, and $\theta(x,y)=-i\log \alpha(x,y)$ denotes the convex angle defined by some vectors $x$ and $y$ according to (\ref{eq:la}). The first part of the proof is the same as Sorkin's proof for his equation (3) in \cite{SorkinLorentzianVectors}. Explicitly, since the angles are convex and $b$ lies in between $a$ and $c$, one could write $b=\lambda a +\mu c$ with $\lambda, \mu\ge 0$. This can be plugged in \begin{align} (\frac{a\cdot b+a\wedge b}{\sqrt{a\cdot a}\sqrt{b\cdot b}})( \frac{b\cdot c+b\wedge c}{\sqrt{b\cdot b}\sqrt{c\cdot c}})= \frac{a\cdot c+a\wedge c}{\sqrt{a\cdot a}\sqrt{c\cdot c}}, \end{align} i.e., $\alpha(a,b)\alpha(b,c)=\alpha(a,c)$, to eliminate $b$ and establish the identity. For the complex log function, $\theta(a,b)+\theta(b,c)=-i\log \alpha(a,b)-i\log \alpha(b,c)=-i \log (\alpha(a,b)\alpha(b,c))= -i\log \alpha(a,c)=\theta(a,c)$ up to an integer multiple of $2\pi$. However, by \Cref{prop:rtheta} and the assumption that all three angles are convex, the real part of the left hand side can only be $0,\pi/2,$ or $\pi$. The same holds for the right hand side.
Therefore the multiple of $2\pi$ has to be zero, and we established $\theta(a,b)+\theta(b,c)=\theta(a,c)$. \subsection*{Lightlike edges} When one or two of the edges that bound the angle are lightlike, the Lorentzian angle defined in (\ref{eq:la}) could diverge. In \cite{SorkinLorentzianVectors}, special care is taken to redefine such angles. We will not perform any redefinition for angles with lightlike edges in this work, because the main focus is on the quantum theory. In the path integral, a squared length is integrated over for each edge. Zero (lightlike) squared length is of measure zero, and a special redefinition just on this measure zero set is not necessary. See \cref{sec:cb} for additional discussions on the (ir)relevance of lightlike edges for the gravitational path integral. \subsection{Dihedral angles}\label{sec:da} \begin{figure} \centering \includegraphics[width=.5\textwidth]{da2.png} \caption{In $3D$, the tetrahedron simplex $s$ projects into the shaded triangle orthogonal to the hinge edge $h$. The dihedral angle $\theta_{s,h}$ projects to the triangle angle $\theta$. The faces bounding the dihedral angle project to the edges $a$ and $b$ of the triangle.} \label{fig:da} \end{figure} In simplicial gravity, curvature is captured by deficit angles, which are in turn defined in terms of dihedral angles. A dihedral angle is formed by two codimension-$1$ faces at a hinge, which is a codimension-$2$ simplex. For instance in $2D$, the dihedral angle $\theta_{s,h}$ in triangle $s$ at vertex $h$ is the angle formed by the two edges sharing $h$. In $3D$ the dihedral angle $\theta_{s,h}$ in tetrahedron $s$ at edge $h$ is the angle formed by the two triangles sharing $h$. In $4D$ the dihedral angle $\theta_{s,h}$ in $4$-simplex $s$ at triangle $h$ is the angle formed by the two tetrahedra sharing $h$, etc.
As illustrated in \Cref{fig:da}, dihedral angles can be obtained by projecting $s$ to the triangle orthogonal to $h$, and extracting the triangle angle at the vertex that $h$ projects to. Using (\ref{eq:alpha}), namely \begin{align} \theta =& -i\log \alpha,\quad \alpha=\frac{a\cdot b+\sqrt{(a\cdot b)^2-(a\cdot a)(b\cdot b)}}{\sqrt{a\cdot a}\sqrt{b\cdot b}}, \end{align} the dihedral angle can be computed from $a\cdot b$, $a\cdot a$, and $b\cdot b$ of the projected triangle. However, in simplicial gravity the input data are the squared distances $\sigma_e$ on the simplicial edges $e$. We need to express $a\cdot b$, $a\cdot a$, and $b\cdot b$ in terms of $\sigma_e$. \subsection*{Volume forms} To express $a\cdot b$, $a\cdot a$, and $b\cdot b$ in terms of $\sigma_e$, it is useful to introduce a volume form representation of the (sub)simplices \cite{Hartle1985SimplicialDiscussion}. An $n$-simplex has $n+1$ vertices. With one of the vertices labelled as $0$, the $n$ vectors $e_i, i=1,\dots, n$ starting from $0$ and pointing to the other $n$ vertices characterize the simplex (\cref{fig:spl1}). In \cref{sec:vol} we treated $e_i$ as the basis vectors in defining the metric $g_{ij}$ which equals $e_i\cdot e_j$. Let $e^i$ be the dual vectors so that $e^i(e_j)=\delta^i_j$. A $d$-simplex $s$ can be represented by the $d$-form \begin{align} \omega_s = e^1\wedge \cdots \wedge e^d. \end{align} Then an $n$-dimensional subsimplex $r$ with edge vectors $e_{r_1},\dots, e_{r_n}$ can be represented by the $n$-form \begin{align} \omega_r = e^{r_1}\wedge \cdots \wedge e^{r_n}. \end{align} The ordering of the indices $r_i$ determines an orientation for the (sub)simplex. The dot product of two $n$-forms is given by \begin{align}\label{eq:fdot} \omega_r\cdot \omega_t = (\frac{1}{n!})^2 \det(e_{r_i}\cdot e_{t_j}). \end{align} \cref{eq:fdot} conforms to the standard definition of inner products for $n$-forms.
One can check that if $e_{r_i}=e_{r_j}$ for any $i\ne j$, or if $e_{t_i}=e_{t_j}$ for any $i\ne j$, then $\omega_r\cdot \omega_t=0$, which should hold for forms. By the definition (\ref{eq:svol1}) of the squared volume, \begin{align}\label{eq:fv2} \omega_r\cdot \omega_r = (\frac{1}{n!})^2 \det(e_{r_i}\cdot e_{r_j}) = \mathbb{V}_r. \end{align} \subsection*{Vector dot products} The form representation can be used to express $a\cdot b$, $a\cdot a$, and $b\cdot b$ for the dihedral angle in terms of $\sigma_e$. Let $\omega_h$ be the $(d-2)$-form of the hinge $h$, and let \begin{align} \omega_{a} = \omega_h\wedge e, \quad \omega_{b} = \omega_h\wedge e' \end{align} be the $(d-1)$-forms of the faces associated with $a$ and $b$ (one might change the order between $\omega_h$ and $e$ ($e'$) if a different orientation is suitable). The edge vector $e$ can be written as $e=a+e_{\parallel}$, where $a$ is orthogonal to $h$ and $e_{\parallel}$ is parallel to $h$. Similarly $e'=b+e_{\parallel}'$. Since $e_{\parallel}$ and $e_{\parallel}'$ are parallel to $h$, it follows from the properties of forms that $\omega_{a} = \omega_h\wedge a$ and $\omega_{b} = \omega_h\wedge b$. Therefore \begin{align} \omega_{a} \cdot \omega_{b} =& (\omega_h\wedge a) \cdot (\omega_h\wedge b) \\=& \frac{\omega_{h} \cdot \omega_{h}}{(d-1)^2} ~ a\cdot b. \end{align} In the second line we used the definition (\ref{eq:fdot}) and noted that since $a$ and $b$ are orthogonal to $h$, $a\cdot e=b\cdot e=0$ for any edge vector $e$ of $h$. Therefore \begin{align}\label{eq:adbf} a\cdot b = (d-1)^2 ~\frac{\omega_{a} \cdot \omega_{b}}{\omega_{h} \cdot \omega_{h}}. \end{align} The other terms $a\cdot a$ and $b\cdot b$ can be obtained by setting $a=b$.
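Combining (\ref{eq:adbf}) with the form dot product (\ref{eq:fdot}) and the Gram matrix $e_i\cdot e_j$ of edge vectors already suffices to evaluate dihedral angles numerically. As a sanity check, the following minimal sketch (illustrative names; Euclidean data, so the principal branches of log and square root apply) recovers the familiar dihedral angle $\arccos(1/3)\approx 1.2310$ of a regular tetrahedron with all squared lengths equal to $1$:

```python
import cmath
import math

# Gram matrix of the edge vectors e_1, e_2, e_3 from vertex 0 of a regular
# tetrahedron: e_i . e_j = (sigma_{0i} + sigma_{0j} - sigma_{ij}) / 2.
gram = [[1.0 if i == j else 0.5 for j in range(3)] for i in range(3)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def form_dot(rows, cols):
    """omega_r . omega_t = (1/n!)^2 det(e_{r_i} . e_{t_j}), here n = 2."""
    sub = [[gram[i][j] for j in cols] for i in rows]
    return det2(sub) / math.factorial(2) ** 2

d = 3
# Hinge h = edge e_1; the faces are spanned by (e_1, e_2) and (e_1, e_3).
wh = gram[0][0]  # omega_h . omega_h = squared volume of the hinge
ab = (d - 1) ** 2 * form_dot((0, 1), (0, 2)) / wh  # a.b from the text's formula
aa = (d - 1) ** 2 * form_dot((0, 1), (0, 1)) / wh
bb = (d - 1) ** 2 * form_dot((0, 2), (0, 2)) / wh

alpha = (ab + cmath.sqrt(ab * ab - aa * bb)) / (cmath.sqrt(aa) * cmath.sqrt(bb))
theta = -1j * cmath.log(alpha)
print(theta.real, math.acos(1 / 3))  # both approximately 1.2310
```

Here $a\cdot b=1/4$ and $a\cdot a=b\cdot b=3/4$, so $a\cdot b/\sqrt{(a\cdot a)(b\cdot b)}=1/3$ as expected for the regular tetrahedron.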
The numerator of (\ref{eq:adbf}) can be expressed in squared lengths using \begin{align} \omega_{a} \cdot \omega_{b} =& \frac{1}{(d-1)!^2} \det(e_{a_i}\cdot e_{b_j}) \label{eq:ipab1} \\=& \frac{1}{(d-1)!^2} \det(\frac{1}{2}(\sigma_{0 a_i}+\sigma_{0 b_j}-\sigma_{a_i b_j})), \label{eq:ipab2} \end{align} where (\ref{eq:fdot}) and (\ref{eq:edots}) are used. Here $a_i$ is the $i$-th vertex of the subsimplex $a$, and $b_j$ is the $j$-th vertex of the subsimplex $b$. The vertex $0$ is the one fixed when specifying the $d$-simplex $s$, and the squared lengths $\sigma_{0 a_i}, \sigma_{0 b_j}, \sigma_{a_i b_j}$ are inputs to simplicial gravity. According to (\ref{eq:fv2}), the denominator $\omega_{h} \cdot \omega_{h}$ of (\ref{eq:adbf}) simply equals $\mathbb{V}_h$, which is a function of squared lengths by definition (\ref{eq:svol1}) or (\ref{eq:svol}). These formulas can then be used to express the dihedral angles in terms of squared lengths. Incidentally, there is an alternative useful expression \begin{align} \omega_{a} \cdot \omega_{b} = d^2 \pdv{\mathbb{V}}{\sigma_e}, \end{align} where $e$ is the edge connecting the two vertices of $a$ and $b$ that lie outside their common hinge $h$. This expression can be derived using (\ref{eq:svol1}), (\ref{eq:ipab1}), (\ref{eq:edots}) and (\ref{eq:metric}). \subsection{Deficit angles} In simplicial gravity, curvature is captured by deficit angles. The deficit angle at a hinge is the difference between the flat space(time) value and the actual value for the sum of dihedral angles around the hinge. At a hinge $h$ in the interior of a region (instead of on the boundary), the deficit angle is defined as \begin{align}\label{eq:da1} \delta_h =& 2\pi - \sum_{s\ni h}\theta_{s, h}, \end{align} where the sum is over all simplices $s$ containing $h$. Here $2\pi$ is the flat space(time) value.
The dihedral angles around $h$ can be obtained by projecting the simplices to the plane orthogonal to $h$ and summing the angles around the point $h$ projects to (\cref{sec:da}). In flat Euclidean space, the angles obviously sum to $2\pi$. In flat Lorentzian spacetime, they also sum to $2\pi$ according to \cref{prop:fp2p}. In the complex domain it is taken as an assumption that the flat value is $2\pi$, so that (\ref{eq:da1}) constitutes a definition of the complex deficit angle in general. If the hinge $h$ lies on the boundary of a region, the dihedral angles around it within that region can sum to less than $2\pi$ for the flat case. Suppose there are $Q_h$ regions sharing the hinge $h$. Then one way to define the deficit angle is \begin{align}\label{eq:da2} \delta_h =& \frac{2\pi}{Q_h} - \sum_{s\ni h}\theta_{s, h}. \end{align} This ensures additivity, i.e., once all the deficit angles in all regions are summed over, (\ref{eq:da1}) is recovered. \Cref{eq:da2} is taken as the general definition of the \textbf{complex deficit angle}, with $Q_h=1$ if $h$ lies in the interior of the region. \section{Quantum gravity}\label{sec:qg} Formally, gravitational path integrals take the form \begin{align}\label{eq:qgl} Z=\int \mathcal{D}g ~ e^{i \int d^dx \sqrt{-g} ( - \lambda +k R + \cdots)} \end{align} in the Lorentzian, and \begin{align}\label{eq:qge} Z=\int \mathcal{D}g ~ e^{\int d^dx \sqrt{g} ( - \lambda + k R+ \cdots)} \end{align} in the Euclidean. The dots stand for higher order terms that may be present. Here the Riemann tensor convention is \begin{align}\label{eq:rt} {R^{\rho }}_{\sigma \mu \nu }=\partial _{\mu }\Gamma _{\nu \sigma }^{\rho }-\partial _{\nu }\Gamma _{\mu \sigma }^{\rho }+\Gamma _{\mu \lambda }^{\rho }\Gamma _{\nu \sigma }^{\lambda }-\Gamma _{\nu \lambda }^{\rho }\Gamma _{\mu \sigma }^{\lambda }, \end{align} so that as usual $\lambda>0$ leads to a de Sitter spacetime in cosmology.
To give an exact meaning to these formal expressions non-perturbatively, one needs to specify a way to enumerate gravitational configurations to be summed over. \subsection{Simplicial quantum gravity} In simplicial quantum gravity, \begin{align}\label{eq:sqg1} Z=\int_C \mathcal{D}\sigma ~ e^{E[\sigma]}, \end{align} where the exponent $E$ is given below. The gravitational configurations are specified by the squared lengths $\sigma$ on edges of simplicial lattices, and the path integral measure takes the form \begin{align}\label{eq:sqgm1} \int_C \mathcal{D}\sigma = (\sum_\tau) \lim_\Gamma \prod_{e\in\Gamma} \int_{-\infty}^\infty d\sigma_e ~ \mu[\sigma] C[\sigma]. \end{align} The integration measure factor $\mu[\sigma]$ is not known \textit{a priori}. Suppose one wants to define the path integral so that even on a finite lattice (without taking the lattice refinement limit) the result is exact. Then one idea for fixing the measure is to demand discretization independence \cite{Dittrich2011PathGravityb}. This would lead to a non-local measure in $4D$ \cite{Dittrich2014DiscretizationGravity}. Alternatively, one could adopt simpler local measures and demand that the exact result be obtained only after taking the lattice refinement limit. In this case different measures could belong to the same universality class and lead to the same result in the lattice refinement limit \cite{Hamber2009QuantumApproach}. However, there seems to be no consensus on exactly which measures should be used. In analogy to the continuum measure factors $(\det g)^m$, a commonly used family of simplicial measures is the product of powers of simplicial squared volumes \begin{align} \mu[\sigma]=\prod_s \mathbb{V}_s^m \end{align} parametrized by $m$. For the Lorentzian case one could use $\mu[\sigma]=\prod_s (-\mathbb{V}_s)^m$ to make the measure positive definite, in analogy to $\prod_x (-\det g(x))^m$.
When the lattice has a fixed size this makes no essential difference from (\ref{eq:sqgm1}), since the two measures only differ by an overall constant. This can be included as a term in the integrand exponent \begin{align}\label{eq:em} E_m = m \sum_s \log \mathbb{V}_s. \end{align} Any measure factor can similarly be incorporated by setting $\mu[\sigma]=1$ and introducing an additional term in the integrand exponent. We will adopt this formulation and fix the measure to be \begin{align}\label{eq:sqgm} \int_C \mathcal{D}\sigma = (\sum_\tau) \lim_\Gamma \prod_{e\in\Gamma} \int_{-\infty}^\infty d\sigma_e ~ C[\sigma]. \end{align} The constraint $C[\sigma]$ specifies the integration contour and determines whether the theory is Euclidean or Lorentzian. It equals $1$ when the Euclidean/Lorentzian generalized triangle inequalities (\ref{eq:egti})/(\ref{eq:lgti}) are satisfied and vanishes otherwise. In the Lorentzian case, an additional constraint may be imposed so that each point of a simplicial manifold has two lightcones. This is explained in more detail in \cref{sec:lcs}. On a fixed lattice graph $\Gamma$, the gravitational configurations are summed over by integrating the squared lengths $\sigma_e$ on edges $e$. The continuum limit $\lim_\Gamma$ is taken by going to ever finer lattice graphs (\cref{fig:sll}). In practice, the lattice field theory strategy is usually adopted. Instead of taking the limit, one evaluates the path integral on a fixed graph and looks for the continuum limit by searching for universality classes. Whether topologies should be summed over in the gravitational path integral is an open question \cite{Hartle1985UnrulyGravity}. In (\ref{eq:sqgm}) the sum over topologies $\sum_\tau$ is included as an option enclosed in brackets. In (\ref{eq:sqg1}) the path integral is expressed in terms of the path exponent $E$ instead of the action $S$ to retain a unified formula for the Euclidean, Lorentzian, and general complex cases.
$E$ is related to the actions by \begin{align} E=\begin{cases} -S^E, \quad & \text{in Euclidean space,} \\ iS^L, \quad & \text{in Lorentzian spacetime.} \end{cases} \end{align} Explicitly, $E$ equals \begin{align} E=& \underbrace{- \lambda V}_{E_{CC}} + \underbrace{(-k) \sum_h \delta_h \sqrt{\mathbb{V}_h-0i}}_{E_{EH}} + \underbrace{\cdots}_{E_O}. \label{eq:pe} \end{align} $E_{O}$ stands for ``other terms'' in addition to the cosmological constant term $E_{CC}$ and the Einstein-Hilbert term $E_{EH}$. The measure factor (\ref{eq:em}) is an example. An $R^2$ term as another example is considered in \cref{sec:2dsqg}. The terms $E_{CC}$ and $E_{EH}$ are discussed below. \subsection{Cosmological constant term} The cosmological constant term equals \begin{align} E_{CC} = & -\lambda V = -\lambda \sum_s V_s. \end{align} Here $\lambda$ is the cosmological constant, and the sum is over all simplicial volumes $V_s=\sqrt{\mathbb{V}_s}$ as defined in (\ref{eq:vol}). In Euclidean space $\mathbb{V}_s>0$, so $V_s>0$. Therefore large volumes are suppressed by the exponent $E_{CC}$. This agrees with ordinary Euclidean quantum gravity. In Lorentzian spacetime $\mathbb{V}_s<0$, so $V_s=\sqrt{\mathbb{V}_s}$ as defined in (\ref{eq:vol}) is positive imaginary. This agrees with the usual convention for Lorentzian quantum gravity in which $E_{CC} = - i \lambda V^L$ with a positive Lorentzian volume $V^L=\sum_s \abs{V_s}$. \subsection{Einstein-Hilbert term}\label{sec:eht} The Einstein-Hilbert term equals \begin{align}\label{eq:eht} E_{EH} = & -k \sum_h \delta_h \sqrt{\mathbb{V}_h-0i} . \end{align} Here $k>0$ is the gravitational coupling constant, the sum is over all hinges $h$, $\mathbb{V}_h$ is the squared volume of the hinge $h$, and $\delta_h$ is its deficit angle. The notation $\sqrt{z-0i}$ is as defined in (\ref{eq:bcc}): \begin{align} \sqrt{z-0i}=&\sqrt{r}e^{i\phi/2}, \quad z=r e^{i\phi}\text{ with }\phi\in [-\pi,\pi).
\end{align} The point is that $\sqrt{z-0i}$ is negative imaginary for $z<0$. In the Euclidean domain $\mathbb{V}_h>0$, so $\sqrt{\mathbb{V}_h-0i}=\sqrt{\mathbb{V}_h}>0$. In addition, (\ref{eq:la}) agrees with (\ref{eq:ea}). Then $E_{EH} = -k \sum_h \delta_h V_h$ is minus the Einstein-Hilbert term of Euclidean simplicial quantum gravity in the convention of \cite{Hamber2009QuantumApproach}. This in turn yields in the continuum limit \begin{align} Z=\int \mathcal{D}g ~ e^{\int d^dx \sqrt{g} (- k R)} \end{align} for the pure gravity path integral. Note the extra minus sign in contrast to (\ref{eq:qge}). Since the Einstein-Hilbert term is unbounded from below, it is unclear if this sign choice is a bad one. In a follow-up work, we will point out a different branch choice for the angle formula (\ref{eq:la}) which reproduces the Einstein-Hilbert term with the conventional sign in the Euclidean.\footnote{I am very grateful to Bianca Dittrich and Jos{\'e} Padua-Arg{\"u}elles for discussions that clarified the sign conventions of the Einstein-Hilbert term and the mistakes I made regarding the alternatives for the Einstein-Hilbert term in a previous version of the manuscript. The discussions also clarified how one should interpret Sorkin's Lorentzian Regge action \cite{SorkinLorentzianVectors} so that it is holomorphic. The details of this interpretation will be reported elsewhere.} For a Lorentzian path integral, (\ref{eq:la}) is used to define the deficit angle $\delta_h$ according to (\ref{eq:da2}). We have \begin{align}\label{eq:eheld1} E_{EH} = & ik\sum_{h\text{ timelike}} \delta_h \abs{V_h} - k\sum_{h\text{ spacelike}} \delta_h \abs{V_h}, \end{align} where $\sum_h$ is expanded into a sum over timelike and spacelike hinges (lightlike hinges do not contribute to the exponent since $\mathbb{V}_h=0$), and $\abs{V_h}$ is the modulus of $V_h=\sqrt{\mathbb{V}_h}$.
Sorkin showed that in the convention where the convex angles $\theta$ are positive in the Euclidean plane, $\sum_{h\text{ timelike}} \delta_h \abs{V_h}$ reproduces $\int d^dx \sqrt{-g} R$ in the continuum limit \cite{Sorkin1974DevelopmentFields}.\footnote{In this statement $R$ is as defined from (\ref{eq:rt}). Note that Sorkin used an opposite sign convention for $R$ in the original paper \cite{Sorkin1974DevelopmentFields}.} In the convention of this work the convex angles are also positive in the Euclidean plane. Therefore the first term $ik\sum_{h\text{ timelike}} \delta_h \abs{V_h}$ reproduces the commonly used path integral exponent $E_{EH}=iS_{EH}=ik\int d^dx \sqrt{-g} R$ of (\ref{eq:qgl}). For the second term, Sorkin pointed out that $\sum_{h\text{ spacelike}} \tilde{\delta_h} \abs{V_h}$ reproduces $\int d^dx \sqrt{-g} R$ in the continuum limit when $\tilde{\delta_h}$ is positive for a spacelike Lorentz boost deficit angle \cite{Sorkin1974DevelopmentFields}. By \cref{prop:caba}, $\delta_h$ here is negative imaginary. Therefore the second term $-k\sum_{h\text{ spacelike}} \delta_h \abs{V_h}$ also reproduces the commonly used path integral exponent $E_{EH}=iS_{EH}=ik\int d^dx \sqrt{-g} R$ of (\ref{eq:qgl}). \subsection{Lightcone structures}\label{sec:lcs} In ordinary classical spacetime, each point has two lightcones attached to it. In simplicial gravity, a point can have more or fewer than two lightcones (\cref{fig:2dlcs}). \begin{figure} \centering \includegraphics[width=.3\textwidth]{2dlcs.png} \caption{Irregular lightcone structure in $2D$. The point at the center has six light rays (dashed lines) and three lightcones, if spacelike (s) and timelike (t) edges are as assigned.} \label{fig:2dlcs} \end{figure} It is an open question whether such spacetime configurations with irregular lightcone structures should be included in the gravitational path integral. When they are included, the exponent becomes complex rather than staying imaginary.
This is because the constant $2\pi$ in the exponent is cancelled exactly when the angles enclose four light rays, as in ordinary flat spacetime (\cref{prop:rtheta}). Depending on the sign choice for the exponent, a spacetime configuration with irregular lightcone structure is either suppressed or enhanced by the additional non-vanishing real part of the exponent. In \cite{Louko1995ComplexChange}, reasons are offered to prefer the enhancement (suppression) of configurations with fewer (more) than four light rays. The exponent (\ref{eq:eheld1}) with the extra minus sign conforms with the opposite choice. As will be reported in detail elsewhere, a different branch choice for the angle formula (\ref{eq:la}) reverses the enhancement/suppression. If irregular lightcone structures are allowed in Nature, observing the enhancement/suppression effects could in principle help us to determine the branch choice. \section{Holomorphic flow}\label{sec:hf} Analytic calculations for the non-perturbatively defined gravitational path integral are hard. In the Euclidean, one usually proceeds numerically with Markov Chain Monte Carlo simulations. The efficiency of this method relies on the positivity of the path integrand. In the Lorentzian, however, the path integrand is complex. This leads to the sign problem: the phases of the complex numbers summed over can fluctuate wildly and cancel each other out, which reduces the efficiency of Markov Chain Monte Carlo simulations. The sign problem is not restricted to quantum gravity, but is also encountered in quantum theories of matter. Several methods have been developed to overcome the sign problem (see e.g., \cite{AlexandruComplexProblem, Berger2019ComplexPhysics, Gattringer2016ApproachesTheory} and references therein). The basic idea of the complex path methods is to deform the integration contour into the complex domain to reduce the phase fluctuations.
This idea is demonstrated to work for several models, including low-dimensional Thirring models, real-time scalar field theories, and Hubbard models \cite{AlexandruComplexProblem}. It has also been applied to analyze gravitational propagators for spin-foam models in the large spin limit \cite{Han2021SpinfoamPropagator}. As reviewed in \cite{AlexandruComplexProblem}, there are several different ways to implement the general idea of complex path deformation to overcome the sign problem. In later sections we apply the ``holomorphic gradient flow'' algorithm, also called the ``generalized thimble'' algorithm \cite{Alexandru2016SignThimbles, Alexandru2017MonteCarloModel}, to Lorentzian simplicial quantum gravity. This section summarizes the algorithm. \subsection{Flow equations}\label{sec:fe} The celebrated Cauchy integration theorem indicates that up to a sign the integral of a complex function $f(z)$ does not change value if the integration contour is deformed through a region where $f(z)$ is holomorphic. Cauchy's theorem admits a multi-dimensional generalization \cite{AlexandruComplexProblem} which applies to path integrals of multiple variables. The holomorphic gradient flow algorithm exploits this to find deformed contours where the sign problem is mitigated. Consider a path integral with a holomorphic integrand of the form \begin{align}\label{eq:pi1} Z =& \int \mathcal{D}\sigma ~ e^{E[\sigma]}, \end{align} where $\mathcal{D}\sigma$ denotes integration over the variables $\sigma_e$ labelled by the lattice edges $e$. The \textbf{flow equations} are \begin{align}\label{eq:fe} \frac{d \sigma_{e}}{dt}=&-\overline{\partial_e E} \quad \forall e, \end{align} where $\partial_{e}$ is a shorthand for $\pdv{}{\sigma_{e}}$, and the overline stands for complex conjugation.
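As a toy illustration of the flow equations (\ref{eq:fe}), the following sketch integrates them for a single variable with the made-up exponent $E(\sigma)=i\sigma^2$, starting from a point on the original real contour (an illustration only; the production simulations are in Julia and involve many variables):

```python
# Minimal sketch: one complex variable sigma with the made-up
# holomorphic exponent E(sigma) = i*sigma^2, integrated with forward
# Euler steps d(sigma) = -conj(dE/dsigma) * dt.
def E(s):
    return 1j * s * s

def dE(s):                       # dE/dsigma
    return 2j * s

def flow(s, T, steps=10000):
    dt = T / steps
    for _ in range(steps):
        s = s - dt * dE(s).conjugate()
    return s

s0 = 1.0 + 0.0j                              # point on the original (real) contour
traj = [flow(s0, t) for t in (0.0, 0.1, 0.2, 0.3)]
re_E = [E(s).real for s in traj]             # decreases monotonically along the flow
im_E = [E(s).imag for s in traj]             # stays (approximately) constant
```

Running this, one finds $\Re E$ strictly decreasing and $\Im E$ constant up to integration error, anticipating the general property derived below; the flowed points rotate toward the steepest descent direction of $e^{i\sigma^2}$.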
For any point $\zeta$ in the original integration contour, the solution to (\ref{eq:fe}) as a function of the flow time $t$ defines the \textbf{holomorphic gradient flow} (or holomorphic flow in short) for $\zeta$. Solving (\ref{eq:fe}) for the whole original integration contour yields a deformation of the integration contour as a function of $t$. \begin{figure} \centering \includegraphics[width=.6\textwidth]{hfcb.png} \caption{Schematic illustration of the flow region and its boundary. The original contour at the bottom is deformed into the contour at the top. The integral along these contours plus on the dashed boundaries is zero, if the function being integrated over is holomorphic inside. If the integral on the dashed boundaries is negligibly small, then the integrals on the two contours are equal up to a sign.} \label{fig:hfcb} \end{figure} If the integral along the boundary of the flowed region is negligible, then up to a sign (\ref{eq:pi1}) can be evaluated on the flowed contour (\cref{fig:hfcb}). This could reduce the phase fluctuations for the complex numbers integrated over, because only a smaller region on the flowed contour contributes significantly to the integral, and the phase fluctuations could be small in this smaller region. To see this, we look at the real part $E_R$ and the imaginary part $E_I$ of $E$. By (\ref{eq:fe}), \begin{align}\label{eq:DreEDt} \dv{E_R}{t}=&\frac{1}{2}(\dv{E}{t}+\overline{\dv{E}{t}})=\frac{1}{2}\sum_e (\partial_e E \dv{\sigma_e}{t}+\overline{\partial_e E \dv{\sigma_e}{t}})=-\sum_e\abs{\partial_e E}^2\le 0, \\\dv{E_I}{t}=&\frac{1}{2i}(\dv{E}{t}-\overline{\dv{E}{t}})=\frac{1}{2i}\sum_e (\partial_e E \dv{\sigma_e}{t}-\overline{\partial_e E \dv{\sigma_e}{t}})= 0. \end{align} Therefore the real part of the exponent decreases monotonically through the flow, while the imaginary part stays constant. For sufficiently long flow time, the magnitude of the integrand is exponentially suppressed for most points on the deformed contour.
Only points close to the critical points of the flow obeying \begin{align} \partial_e E=0 \quad \forall e \end{align} contribute significantly. If the phase fluctuations among the points that contribute significantly are small enough, Markov Chain Monte Carlo simulation can be performed efficiently. \subsection{Numerical algorithm}\label{sec:na} As a summary of \cref{sec:fe}, suppose: \begin{itemize} \item The holomorphic flow traverses a region where the path integrand is holomorphic; \item The boundary of the flow region has a negligible contribution to the path integral. \end{itemize} Then the original path integral can be equally evaluated along the contour at any flow time $t=T$. To compute the path integral on the flowed contour, one could use the holomorphic gradient flow algorithm \cite{Alexandru2016SignThimbles, Alexandru2017MonteCarloModel}. The idea is to parametrize the flowed contour by its preimage in the original contour, and perform Markov Chain Monte Carlo simulation using weights on the flowed contour. Specifically, the algorithm proceeds as follows: \begin{enumerate} \item Start with a configuration $\zeta$ in the original contour. Evolve it under the holomorphic flow by time $T$ to obtain $\phi=\phi(\zeta)$. \item Draw a new configuration $\zeta'=\zeta+\delta\zeta$ on the original contour, where $\delta\zeta$ is a random vector drawn from a symmetric distribution. Again evolve $\zeta'$ under the flow by time $T$ to obtain $\phi'=\phi'(\zeta')$. \item Accept $\zeta'$ with probability $P = \min\{1, e^{\Re E_{\text{eff}}(\phi')-\Re E_{\text{eff}}(\phi)} \}$, where $E_{\text{eff}}$ is defined in (\ref{eq:Eeff}). \item Repeat steps 2 and 3 until a sufficient ensemble of configurations is generated.
\item Compute the expectation values using \begin{align}\label{eq:expo} \ev{O}=&\frac{\ev{O e^{i\varphi(\zeta)}}_{\Re E_{\text{eff}}}}{\ev{e^{i\varphi(\zeta)}}_{\Re E_{\text{eff}}}}, \end{align} where $\ev{\cdot}_{\Re E_{\text{eff}}}$ stands for the average using the ensemble just generated, and $\varphi$ is defined in (\ref{eq:vphi}). \end{enumerate} In steps 1 and 2, the evolution can be conducted by numerically integrating the ODEs (\ref{eq:fe}). If the complexified theory has its domain on Riemann surfaces, as is the case for simplicial quantum gravity, branches need to be recorded as part of the numerical integration algorithm to make sure the system flows continuously on the Riemann surfaces. In step 3, \begin{align}\label{eq:Eeff} E_{\text{eff}}(\phi) =& E(\phi(\zeta)) + \log \det J(\zeta), \quad J_{ee'} = \pdv{\phi_e}{\zeta_{e'}}, \end{align} where $\phi_e$ and $\zeta_{e}$ are the values $\phi$ and $\zeta$ take on the edge $e$. The Jacobian can be obtained (see Appendix A of \cite{AlexandruComplexProblem}) by integrating \begin{align}\label{eq:jcb} \frac{d J_{ee'}}{dt}= \sum_{e''}\overline{H_{ee''}J_{e''e'}},\quad H_{ee'}:=- \partial_{e'}\partial_e E, \quad J_{ee'}(0)=\delta_{ee'}. \end{align} The function $e^{E_{\text{eff}}}$ is the integrand of the final integral to be computed, since \begin{align} Z=&\int_{M_0} e^{E(\zeta)} d\zeta \\=& \int_{M_T} e^{E(\phi)} d\phi \\=& \int_{M_0} e^{E(\phi(\zeta))} \det J ~d\zeta, \label{eq:ppi} \end{align} where we reparametrized the flowed manifold $M_T$ by points $\zeta$ of the original manifold $M_0$ in the last step. Now the integrand equals $e^{E_{\text{eff}}}$ for $E_{\text{eff}}$ defined in (\ref{eq:Eeff}). Expanding $E_{\text{eff}}$ in real and imaginary parts yields $e^{E_{\text{eff}}}= e^{\Re E_{\text{eff}} + i \varphi}$, where \begin{align}\label{eq:vphi} \varphi= \Im E_{\text{eff}}=\Im E + \arg\det(J).
\end{align} This explains steps 3 and 5, in which we sample (\ref{eq:ppi}) according to the magnitude $e^{\Re E_{\text{eff}}}$ of the integrand, and treat the phase $e^{i\varphi}$ as part of the observable in (\ref{eq:expo}). This algorithm can alleviate the sign problem because as $T\rightarrow\infty$, the flowed manifold approaches a combination of steepest descent contours (Lefschetz thimbles) on each of which $\varphi$ is constant \cite{AlexandruComplexProblem}. However, the usefulness of the algorithm is not guaranteed because of ``trapping'' for the Monte Carlo sampling. As noted below (\ref{eq:DreEDt}), $\Re E$ decreases monotonically under the holomorphic flow, so $\Re E_{\text{eff}}$ also tends to decrease. As $T$ is increased, the probability weight $e^{\Re E_{\text{eff}}}$ develops peaks around the stationary points where $\partial_e E=0$, separated by valleys where $e^{\Re E_{\text{eff}}}$ is exponentially suppressed. Consequently it can be hard for the Markov chain to travel between the peak regions to generate a sufficient sample. In practice, we need to find a flow time $T$ large enough so that the phase fluctuation in $\varphi$ is sufficiently suppressed to tame the sign problem, and small enough so that the trapping of the Markov chain is sufficiently weak. More sophisticated algorithms such as the tempering algorithms \cite{Fukuma2017ParallelThimbles, Alexandru2017TemperedThimbles} involving multiple flow times/chains have been developed to avoid the trapping issue. In principle general Markov Chain Monte Carlo algorithms for multimodal distributions can also be applied. \section{2D simplicial quantum gravity}\label{sec:2dsqg} We apply the holomorphic gradient flow method to overcome the sign problem for Lorentzian simplicial gravitational path integrals. We focus on the 2D case for this initial study on the topic. The relevant expressions for the holomorphic flow equation and the Jacobian equation are given in this section.
Along the way we prove a complex version of the Gauss-Bonnet theorem, which may be of independent interest. The numerical results are presented in the next section. In $2D$, we consider the path integral \begin{align} Z =& \int \mathcal{D}\sigma ~ e^E, \\ E = & - \lambda V-k \sum_v \delta_v + a \sum_v \frac{\delta_v^2}{A_v} + m \sum_t \log \mathbb{V}_t. \end{align} The first (cosmological constant) and second (Einstein-Hilbert) terms are as in (\ref{eq:pe}) specialized to $2D$. The fourth term is the measure factor term of (\ref{eq:em}). The third term $a \sum_v \delta_v^2/A_v$ is the $R^2$ term \cite{Hamber1986Two-dimensionalGravity}. Here $a$ is the coupling constant, and $A_v$ is the area share of vertex $v$: \begin{align} A_v =& \frac{1}{3}\sum_{t\ni v} V_t = \frac{1}{3}\sum_{t\ni v} \sqrt{\mathbb{V}_t}, \end{align} where the sum is over triangles $t$ containing vertex $v$, and $\mathbb{V}_t$ is the squared volume for triangle $t$ calculated according to (\ref{eq:svol1}) or (\ref{eq:svol}). The letter $A$ instead of $V$ is used for $A_v$ to distinguish it from the hinge (vertex in $2D$) volume $V_h=V_v$, which is usually set to $1$ in $2D$. \subsection{Complex Gauss-Bonnet theorem} The Einstein-Hilbert term $E_{EH}=-k \sum_v \delta_v$ can actually be left out of the path integration because it is topological. In the Euclidean domain, the celebrated Gauss-Bonnet theorem says that $E_{EH}=-2\pi k \chi$, where $\chi$ is a topological invariant that is fixed by the simplicial complex, and does not depend on the particular length assignments. The same holds in the Lorentzian domain. A nice proof can be found in \cite{SorkinLorentzianVectors}, and a slight generalization that accounts for multiple boundary components can be found in \cite{JiaTime-spaceGravity}. That a version of the Gauss-Bonnet theorem exists in the complex domain was suggested by Louko and Sorkin \cite{Louko1995ComplexChange}, but they left it as an open question to investigate.
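Before stating the complex version, the Euclidean statement is easy to check numerically: on a fixed lattice, $\sum_v \delta_v$ is insensitive to the edge lengths. The sketch below (an illustration; the flat triangle fan and the random vertex positions are made up for the test) computes angles from squared lengths via the law of cosines and verifies $\sum_v \delta_v = 2\pi\chi$ with $\chi=1$ for a disk:

```python
# Sketch: discrete Gauss-Bonnet check on a triangle fan (a disk, chi = 1).
# The fan around pts[0] is embedded flat in the plane, so the total
# deficit angle must equal 2*pi*chi = 2*pi for any vertex positions.
import math, random

def angle(sa, sb, sc):
    """Euclidean angle between edges of squared lengths sa, sb,
    with opposite squared length sc (law of cosines)."""
    return math.acos((sa + sb - sc) / (2.0 * math.sqrt(sa * sb)))

def total_deficit(pts):
    n = len(pts) - 1
    sq = lambda p, q: (p[0] - q[0])**2 + (p[1] - q[1])**2
    acc = [0.0] * len(pts)                  # accumulated angles per vertex
    for i in range(n):                      # triangle (0, a, b) of the fan
        a, b = 1 + i, 1 + (i + 1) % n
        s0a, s0b, sab = sq(pts[0], pts[a]), sq(pts[0], pts[b]), sq(pts[a], pts[b])
        acc[0] += angle(s0a, s0b, sab)
        acc[a] += angle(s0a, sab, s0b)
        acc[b] += angle(s0b, sab, s0a)
    # interior vertex: 2*pi - angles; boundary vertex (Q_v = 2): pi - angles
    return (2*math.pi - acc[0]) + sum(math.pi - acc[v] for v in range(1, len(pts)))

random.seed(0)
for _ in range(5):                          # random flat fans, all give 2*pi
    pts = [(0.0, 0.0)] + [((1 + random.random()) * math.cos(2*math.pi*k/6),
                           (1 + random.random()) * math.sin(2*math.pi*k/6))
                          for k in range(6)]
    assert abs(total_deficit(pts) - 2*math.pi) < 1e-9
```

The invariance is exact here because each flat triangle's angles sum to $\pi$, so the total deficit reduces to a purely combinatorial count, which is the mechanism behind the proof below.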
Here we prove a complex version of the Gauss-Bonnet theorem, which generalizes the Euclidean and Lorentzian versions. It implies that on a fixed simplicial lattice, $E_{EH}$ is constant when the Lorentzian or Euclidean contour is continuously deformed into the complex domain. Therefore $E_{EH}$ can be taken out of the path integral in the holomorphic gradient flow algorithm. \begin{theorem}[Complex Gauss-Bonnet]\label{th:cgb} On a fixed simplicial lattice, any continuous deformation of the path integration contour in the complex domain will not change the value of the Einstein-Hilbert term $E_{EH}$. If the deformation is continuously connected to the Lorentzian or the Euclidean contour, \begin{align}\label{eq:cgbt1} E_{EH}/(-k) = 2\pi \chi, \quad \chi=V-E+T, \end{align} where $V,E,T$ are the vertex, edge, and triangle numbers of the simplicial lattice, and $\chi$ is the Euler number. This simple result assumes that each boundary vertex is shared by two regions. More generally, when the number of regions sharing a vertex $v$ is $Q_v$, \begin{align}\label{eq:cgbt2} E_{EH}/(-k) = 2\pi \chi, \quad \chi= V^{\mathrm{o}} + \frac{1}{2} V^\partial - E + T + \sum_{v\in \partial} \frac{1}{Q_v}, \end{align} where the bulk and boundary elements are labelled by superscripts $\mathrm{o}$ and $\partial$, and the sum $\sum_{v\in \partial}$ is over all boundary vertices. \end{theorem} \begin{proof} In $2D$, the Einstein-Hilbert term equals \begin{align} E_{EH}/(-k)=\sum_v \delta_v=&(\sum_v 2\pi/Q_v - \sum_a \theta_a) \\ =& (\sum_v 2\pi/Q_v - \pi N).\label{eq:cgb1} \end{align} In the first line we used the definition (\ref{eq:da2}) of the deficit angle. $\sum_a$ is over all triangular angles of the $2D$ simplicial complex. In the second line we grouped the angles into triangles and applied \cref{th:sta}. Here $N$ is some integer. This shows that $E_{EH}$ can only take values from a discrete set labelled by $N$.
Under a continuous deformation of the contour, a holomorphic function such as $E_{EH}$ can only change value continuously. Yet we just showed that the codomain of $E_{EH}$ is a discrete set. Therefore $E_{EH}$ cannot change value under a continuous deformation of the contour. The claims (\ref{eq:cgbt1}) and (\ref{eq:cgbt2}) can be proved by the same argument as in \cite{SorkinLorentzianVectors} and \cite{JiaTime-spaceGravity}. In the Lorentzian and Euclidean domains, \begin{align}\label{eq:gb1} E_{EH}/(-k\pi)=&2V^{\mathrm{o}} + \sum_{v\in \partial} \frac{2}{Q_v} - T, \\ 0=&-2 E^{\mathrm{o}} - E^\partial + 3T, \label{eq:gb2} \\ 0=&V^{\partial} - E^\partial. \label{eq:gb3} \end{align} Equation (\ref{eq:gb1}) uses the fact that in the interior of the region, $Q_v=1$, and that in the Lorentzian and Euclidean domains the angles of a triangle sum to $\pi$ (\cref{prop:ltri}), whence $N=T$. Equations (\ref{eq:gb2}) and (\ref{eq:gb3}) are simple facts about the simplicial lattice. Each bulk edge is shared by two faces, each boundary edge is shared by one face, and each face has three edges, so (\ref{eq:gb2}) follows. The boundary is formed by a vertex-edge-vertex-edge... chain, so (\ref{eq:gb3}) follows. Adding up (\ref{eq:gb1}) through (\ref{eq:gb3}) yields (\ref{eq:cgbt2}). Specializing to $Q_v=2$ for all $v$ yields (\ref{eq:cgbt1}). \end{proof} \subsection{Flow equations} Because of \cref{th:cgb}, $\partial_e E_{EH}=0$, so the flow equations (\ref{eq:fe}) become \begin{align} \frac{d \sigma_{e}}{dt}=&-\overline{\partial_e E}=-\overline{\partial_e E_{CC}}-\overline{\partial_e E_{R^2}}-\overline{\partial_e E_{m}}. \end{align} For the cosmological constant term $E_{CC}$, \begin{align} \partial_e E_{CC} =& -\lambda \partial_e V \\ =& -\lambda \sum_t \partial_e V_t.
\end{align} For the $R^2$ term $E_{R^2}$, \begin{align} \partial_{e} E_{R^2} =& a \sum_v \partial_{e}(\frac{\delta_v^2}{A_v}) \\=& a \sum_v [\frac{2\delta_v \partial_{e}\delta_v}{A_v}-\frac{\delta_v^2 \partial_{e}A_v}{A_v^2}]. \end{align} For the measure term $E_m$, \begin{align} \partial_{e} E_{m} =& m \sum_t \partial_{e} \log \mathbb{V}_t \\=& m \sum_t \mathbb{V}_t^{-1}\partial_{e} \mathbb{V}_t. \end{align} Therefore \begin{align} \frac{d \sigma_{e}}{dt}=& -\overline{\partial_e E_{CC}}-\overline{\partial_e E_{R^2}}-\overline{\partial_e E_{m}} \\ =& \lambda \sum_t \overline{\partial_e V_t} - a \sum_v \overline{(\frac{2\delta_v \partial_{e}\delta_v}{A_v}-\frac{\delta_v^2 \partial_{e}A_v}{A_v^2})}-m \sum_t\overline{\mathbb{V}_t^{-1}\partial_{e} \mathbb{V}_t}.\label{eq:2dfe} \end{align} To be applied, this formula needs to be expressed in terms of the squared lengths. While $\delta_v$, $A_v$, and $\mathbb{V}_t$ are known in terms of the squared lengths from their definitions, the derivative terms are given below. \subsection*{Volume terms} For $\partial_e V_t$ and $\partial_e A_v$, a straightforward calculation using the definitions yields \begin{align} \partial_e V_t=&\frac{\partial_e \mathbb{V}_t}{2 \sqrt{\mathbb{V}_t}}=\frac{\partial_e \mathbb{V}_t}{2 V_t}, \label{eq:2dvbe} \\\partial_e \mathbb{V}_t=&\frac{1}{8} \left(-\sigma _e+\sigma _{e1}+\sigma _{e2}\right), \label{eq:2dv2be} \\\partial_e A_v =& \frac{1}{3}\sum_{t\ni v} \partial_e V_t=\frac{1}{3}\sum_{t\ni v, e} \partial_e V_t, \end{align} where $e1, e2$ are the other two edges of the triangle $t$.
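The closed form (\ref{eq:2dv2be}) is easy to cross-check numerically. The sketch below assumes the Euclidean-signature Heron formula $16\,\mathbb{V}_t = 2\sigma_a\sigma_b + 2\sigma_b\sigma_c + 2\sigma_c\sigma_a - \sigma_a^2 - \sigma_b^2 - \sigma_c^2$ as a stand-in for the paper's squared-volume formula (an assumption for the illustration; the Lorentzian convention may differ by an overall sign):

```python
# Sketch: verify d(V_t^2)/d(sigma_e) = (-sigma_e + sigma_e1 + sigma_e2)/8
# against a central finite difference, assuming the Euclidean-signature
# Heron formula for the squared triangle volume (area squared).
def sq_vol(sa, sb, sc):
    return (2*sa*sb + 2*sb*sc + 2*sc*sa - sa*sa - sb*sb - sc*sc) / 16.0

def d_sq_vol(se, se1, se2):      # closed form for d(V_t^2)/d(sigma_e)
    return (-se + se1 + se2) / 8.0

sa, sb, sc, h = 1.0, 2.0, 2.5, 1e-6
fd = (sq_vol(sa + h, sb, sc) - sq_vol(sa - h, sb, sc)) / (2 * h)
assert abs(fd - d_sq_vol(sa, sb, sc)) < 1e-8
```

Since the squared volume is quadratic in the squared lengths, the central difference agrees with the closed form up to rounding error.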
\subsection*{Angle terms} For $\partial_{e}\delta_v$, \begin{align} \delta_v =& 2\pi/Q_v - \sum_{t\ni v}\theta_{t,v}, \\\partial_{e}\delta_v =& -\sum_{t\ni v} \partial_{e} \theta_{t,v} = -\sum_{t\ni v, e} \partial_{e} \theta_{t,v}.\label{eq:dabl2} \end{align} For edges $a$ and $b$ in triangle $t$ meeting at vertex $v$, (\ref{eq:ca}) implies \begin{align} \frac{\partial \theta _{t,v}}{\partial \sigma_{a}} = & \frac{\sigma _a-\sigma _b+\sigma _c}{4i \sigma _a (a\wedge b) }=\frac{\sigma _a-\sigma _b+\sigma _c}{8 \sigma _a V_t }, \label{eq:abe1} \\ \frac{\partial \theta _{t,v}}{\partial \sigma_{b}} = & \frac{-\sigma _a+\sigma _b+\sigma _c}{4i \sigma _b (a\wedge b)}=\frac{-\sigma _a+\sigma _b+\sigma _c}{8 \sigma _b V_t}, \label{eq:abe2} \\ \frac{\partial \theta _{t,v}}{\partial \sigma_{v}} = & \frac{i}{2 a\wedge b} = \frac{-1}{4 V_t}. \label{eq:abe3} \end{align} Here we noted that \begin{align}\label{eq:awedgeb2} a\wedge b=& -2 i V_t, \end{align} where $V_t$ in terms of squared lengths is given in (\ref{eq:2dvol}). These can be used to express (\ref{eq:dabl2}) fully in the squared lengths. \subsection{Jacobian} The Jacobian flow equation is given in (\ref{eq:jcb}) as \begin{align} \frac{d J_{ee'}}{dt}= \sum_{e''}\overline{H_{ee''}J_{e''e'}},\quad H_{ee'}:=- \partial_{e'}\partial_e E, \quad J_{ee'}(0)=\delta_{ee'}. \end{align} Specialized to simplicial quantum gravity in $2D$, \begin{align} H_{ee'}=& - \partial_{e'}\partial_e E = - \partial_{e'}\partial_e E_{CC}- \partial_{e'}\partial_e E_{R^2}-\partial_{e'}\partial_e E_{m}, \end{align} where the Einstein-Hilbert term drops out by \cref{th:cgb}. \subsection*{The cosmological constant term} The cosmological constant term is \begin{align} \partial_{e'}\partial_e E_{CC} =& -\lambda \sum_t\partial_{e'}\partial_e V_t \\ =& -\lambda \sum_{t\ni e, e'} \partial_{e'}\partial_e V_t, \label{eq:2dECCbee} \end{align} where it was noted that $\partial_{e'}\partial_e V_t= 0$ if the triangle $t$ does not contain both $e$ and $e'$.
By (\ref{eq:2dvbe}) and (\ref{eq:2dv2be}), \begin{align} \partial_{e'}\partial_e V_t=&\frac{1}{2 \sqrt{\mathbb{V}_t}} ( \frac{-1}{2 \mathbb{V}_t} \partial_e \mathbb{V}_t \partial_{e'} \mathbb{V}_t + \partial_{e'} \partial_e \mathbb{V}_t), \label{eq:2dvbee} \\\partial_e \mathbb{V}_t=&\frac{1}{8} \left(-\sigma _e+\sigma _{e1}+\sigma _{e2}\right), \\\partial_{e'} \partial_e \mathbb{V}_t=& \begin{cases} \frac{-1}{8}, \quad & e = e', \\\frac{1}{8}, & e \ne e'. \end{cases} \label{eq:2dv2bee} \end{align} Plugging these into (\ref{eq:2dECCbee}) yields an expression in terms of squared lengths. Regarding computational complexity, it is relevant to note that $\partial_{e'}\partial_e E_{CC}$ is quasi-local. Because the sum $\sum_{t\ni e, e'}$ in (\ref{eq:2dECCbee}) is over triangles $t$ that contain both $e$ and $e'$, if $e$ and $e'$ are neither identical nor adjacent then $\partial_{e'}\partial_e E_{CC}=0$. \subsection*{The $R^2$ term} For the $R^2$ term, \begin{align}\label{eq:2dER2bee} \partial_{e} \partial_{e'} E_{R^2} =& \sum_v \frac{a }{A_v^3} [2 \delta_vA_v\left(-\delta_v^{(0,1)} A_v^{(1,0)}-\delta_v^{(1,0)} A_v^{(0,1)}+\delta_v^{(1,1)} A_v\right)\nonumber \\ &+\delta_v^2 \left(2 A_v^{(0,1)} A_v^{(1,0)}-A_vA_v^{(1,1)}\right)+2 \delta_v^{(0,1)} \delta_v^{(1,0)} A_v^2], \end{align} where $f^{(i,j)}$ is the shorthand for $\partial_{e}^i \partial_{e'}^j f$. \begin{figure} \centering \includegraphics[width=.4\textwidth]{ql2d.png} \caption{The edges that $A_v$ and $\delta_v$ depend on are thickened. They are all within one edge away from $v$, and are all within two edges away from each other. A pair of edges (e.g., $e$ and $e'''$) more than two edges away will not find any vertex $v$ whose $A_v$ and $\delta_v$ depend on them both.
Even a pair of edges (e.g., $e$ and $e''$) two edges away may not find any vertex $v$ whose $A_v$ and $\delta_v$ depend on them both.} \label{fig:ql2d} \end{figure} We see that $\partial_{e} \partial_{e'} E_{R^2}$ is quasi-local, in the sense that $\partial_{e} \partial_{e'} E_{R^2}=0$ when $e$ and $e'$ are more than two edges away (meaning the shortest lattice graph path touching both $e$ and $e'$ has more than two edges) (\cref{fig:ql2d}). This is because $\partial_e\delta_v=\partial_e A_v=0$ if $e$ is more than one edge away from $v$. If $e$ and $e'$ are more than two edges away, then at least one of them is more than one edge away from $v$ for any $v$, whence all terms on the right hand side of (\ref{eq:2dER2bee}) vanish. \subsubsection*{Volume terms} By the definition of $A_v$, \begin{align} A_v^{(1,0)}=&\partial_e A_v = \frac{1}{3}\sum_{t\ni v} \partial_e V_t=\frac{1}{3}\sum_{t\ni v,e} \partial_e V_t, \\ A_v^{(0,1)}=&\partial_{e'} A_v = \frac{1}{3}\sum_{t\ni v} \partial_{e'} V_t=\frac{1}{3}\sum_{t\ni v,e'} \partial_{e'} V_t, \\ A_v^{(1,1)}=&\partial_e \partial_{e'} A_v = \frac{1}{3}\sum_{t\ni v} \partial_e \partial_{e'} V_t=\frac{1}{3}\sum_{t\ni v,e,e'} \partial_e \partial_{e'} V_t. \end{align} These can be expressed in terms of squared lengths using (\ref{eq:2dvbe}), (\ref{eq:2dv2be}), (\ref{eq:2dvbee}), and (\ref{eq:2dv2bee}): \begin{align} \partial_e V_t =& \frac{\partial_e \mathbb{V}_t}{2 \sqrt{\mathbb{V}_t}}, \\ \partial_e \mathbb{V}_t=&\frac{1}{8} \left(-\sigma _e+\sigma _{e1}+\sigma _{e2}\right), \\ \partial_{e'}\partial_e V_t =&\frac{1}{2 \sqrt{\mathbb{V}_t}} ( \frac{-1}{2 \mathbb{V}_t} \partial_e \mathbb{V}_t \partial_{e'} \mathbb{V}_t + \partial_{e'} \partial_e \mathbb{V}_t), \\\partial_{e'} \partial_e \mathbb{V}_t=& \begin{cases} \frac{-1}{8}, \quad & e = e', \\\frac{1}{8}, & e \ne e'.
\end{cases} \end{align} \subsubsection*{Angle terms} \begin{figure} \centering \includegraphics[width=.4\textwidth]{tri2.png} \caption{Triangle $t$ with edges $e_a, e_b, e_c$ whose squared lengths are $\sigma_a, \sigma_b, \sigma_c$. Edges $e_a$ and $e_b$ bound the angle $\theta_{t,v}$.} \label{fig:tri2} \end{figure} The terms $\delta_v^{(1,0)}$ and $\delta_v^{(0,1)}$ can be expressed in squared lengths using (\ref{eq:dabl2})--(\ref{eq:abe3}) (with labels specified in \cref{fig:tri2}): \begin{align} \partial_{e}\delta_v =& -\sum_{t\ni v} \partial_{e} \theta_{t,v} = -\sum_{t\ni v, e} \partial_{e} \theta_{t,v}, \\\frac{\partial \theta _{t,v}}{\partial \sigma_{a}} = & \frac{\sigma _a-\sigma _b+\sigma _c}{4i \sigma _a (a\wedge b) }=\frac{\sigma _a-\sigma _b+\sigma _c}{8 \sigma _a V_t }, \\ \frac{\partial \theta _{t,v}}{\partial \sigma_{b}} = & \frac{-\sigma _a+\sigma _b+\sigma _c}{4i \sigma _b (a\wedge b)}=\frac{-\sigma _a+\sigma _b+\sigma _c}{8 \sigma _b V_t}, \\ \frac{\partial \theta _{t,v}}{\partial \sigma_{v}} = & \frac{i}{2 a\wedge b} = \frac{-1}{4 V_t}. \end{align} Here $\sigma_v$ denotes the squared length of the edge opposite to $v$, i.e.\ $\sigma_c$ in \cref{fig:tri2}. For the second derivative, \begin{align} \delta_v^{(1,1)}=\partial_{e}\partial_{e'}\delta_v =& -\sum_{t\ni v, e, e'} \partial_{e}\partial_{e'} \theta_{t,v}.
\end{align} For $e,e'$ ordered as $e_a,e_b,e_c$ (\cref{fig:tri2}), the Hessian matrix is \begin{align} \partial_{e}\partial_{e'}\theta_{t,v} = \frac{1}{32 V_t^3} \left( \begin{array}{ccc} \frac{ X}{4 \sigma _a^2 } & -\sigma _c & \frac{ \left(-\sigma _a+\sigma _b+\sigma _c\right)}{2} \\ -\sigma _c & \frac{ Y}{4 \sigma _b^2 } & \frac{ \left(\sigma _a-\sigma _b+\sigma _c\right)}{2} \\ \frac{ \left(-\sigma _a+\sigma _b+\sigma _c\right)}{2} & \frac{ \left(\sigma _a-\sigma _b+\sigma _c\right)}{2} & \frac{ \left(\sigma _a+\sigma _b-\sigma _c\right)}{2} \\ \end{array} \right), \end{align} where \begin{align} X=&\sigma _a^3+\sigma _a^2 \left(\sigma _c-3 \sigma _b\right)+3 \sigma _a \left(\sigma _b^2-\sigma _c^2\right)-\left(\sigma _b-\sigma _c\right){}^3, \\ Y=X(\sigma _a \leftrightarrow \sigma _b)=&\sigma _b^3+\sigma _b^2 \left(\sigma _c-3 \sigma _a\right)+3\sigma _b \left( \sigma _a^2- \sigma _c^2\right)-\left(\sigma _a-\sigma _c\right){}^3. \end{align} The volume and angle derivative terms above can be plugged into (\ref{eq:2dER2bee}) to express it in terms of squared lengths. \subsection*{The measure term} By the definition of $E_m$, \begin{align}\label{eq:2dEmbee} \partial_{e'}\partial_e E_{m} =& m \sum_{t\ni e, e'}\partial_{e'}\partial_e \log \mathbb{V}_t \\=& m \sum_{t\ni e, e'} \frac{1}{\mathbb{V}_t^2}(\mathbb{V}_t \partial_{e'}\partial_e \mathbb{V}_t- \partial_e \mathbb{V}_t \partial_{e'} \mathbb{V}_t). \end{align} The previous formulas (\ref{eq:2dv2be}) and (\ref{eq:2dv2bee}) can then be used to express this in terms of squared lengths. \section{Numerical results}\label{sec:nr} In this section we present results of numerical simulations for the path integral \begin{align} Z =& \int \mathcal{D}\sigma ~ e^E, \quad E = - \lambda V + a \sum_v \frac{\delta_v^2}{A_v} + m \sum_t \log \mathbb{V}_t, \end{align} parameterized by $p=(\lambda, a, m)$. These constants and the squared lengths are taken dimensionless in this section for simplicity.
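The observables reported in this section are estimated by phase reweighting following (\ref{eq:expo}): the residual phase is treated as part of the observable, and its sample average quantifies the remaining sign fluctuations. A minimal sketch, with made-up sample arrays standing in for a Markov chain:

```python
# Sketch of phase reweighting: given samples O_k of an observable and
# phases phi_k = Im E_eff, drawn with probability weight exp(Re E_eff),
# estimate <O> = <O e^{i phi}> / <e^{i phi}> together with the average
# phase |<e^{i phi}>|.  The sample arrays below are made-up stand-ins.
import cmath

def reweight(obs, phases):
    num = sum(o * cmath.exp(1j * p) for o, p in zip(obs, phases))
    den = sum(cmath.exp(1j * p) for p in phases)
    return num / den, abs(den) / len(phases)   # (<O>, average phase)

obs    = [1.0, 2.0, 3.0, 4.0]
phases = [0.3, 0.3, 0.3, 0.3]          # constant phase: no sign problem
mean, avg_phase = reweight(obs, phases)
# with a constant phase the factors cancel: mean ~ 2.5, avg_phase ~ 1
```

With strongly fluctuating phases the denominator shrinks toward zero and the estimator degrades, which is precisely what a small average phase signals.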
We compute the expectation value for the squared length $\ev{\sigma_e}=Z^{-1}\int \mathcal{D}\sigma ~ \sigma_e e^E$. According to (\ref{eq:expo}), \begin{align} \ev{\sigma_e}=&\frac{\ev{\sigma_e e^{i\varphi }}_{\Re E_{\text{eff}}}}{\ev{e^{i\varphi }}_{\Re E_{\text{eff}}}}, \end{align} where $\ev{\cdot}_{\Re E_{\text{eff}}}$ is the average using the ensemble just generated, and the phase $\varphi$ is the imaginary part of $E_{\text{eff}}$. When $\varphi$ fluctuates wildly, the sign problem is severe. The task is to choose $T$ so that on the flowed contour the phase fluctuation is reduced. We can quantify the performance of the algorithm in alleviating the sign problem by the average phase \begin{align} \Phi=\abs{\ev{e^{i\varphi }}_{\Re E_{\text{eff}}}}=\abs{\frac{\int \mathcal{D}\sigma ~ e^{i\varphi+ \Re E_{\text{eff}}}}{\int \mathcal{D}\sigma ~ e^{\Re E_{\text{eff}}}}}. \end{align} The closer $\Phi$ is to $1$, the smaller the sign fluctuations, and hence the better the performance. In the cases considered below, complex contours are found where $\Phi>0.9$. \subsection{Numerical setup} The algorithm is as presented in \cref{sec:na}. Some specific parameter choices are introduced for the numerical results to be presented in the next subsection. \begin{itemize} \item Minimum and maximum acceptance rates: $r_\text{min}=0.2, r_\text{max}=0.7$. \item Minimum jump size: $j_\text{min}=0.001$; default jump size: $j_\text{default}=1.0$; jump factor: $j_\text{factor}=10.0^{1/2}$. \item Minimum $\Re E_{\text{eff}}$: $E_\text{min}=-100.0$. \item Maximum volume: $V_\text{max}=10000.0$. \item Boundary squared lengths: $\sigma_1=1.0$, $\sigma_2=2.0$. \end{itemize} For any fixed flow time $T$, we use a Metropolis–Hastings algorithm to generate an ensemble of configurations according to the probability weight $e^{\Re E_{\text{eff}}}$. In each step we randomly pick an edge $e$, and propose a shift of $\sigma_e$ according to a Gaussian probability distribution.
The variance, i.e., the ``jump size'', is dynamical. The acceptance rate is checked every ten steps. If it is below $r_\text{min}$ (above $r_\text{max}$), the jump size is decreased (increased) by a factor of $j_\text{factor}$. To prevent an unbounded decrease of the jump size, a minimum value $j_\text{min}$ is set. When this value is reached the jump size is reset to a default value $j_\text{default}$. A proposal is rejected if the Lorentzian triangle inequality is violated. In addition, it is rejected if the number of light rays at any vertex is different from $2$, which fits the intuition that at the boundaries of the tube there is only one lightcone within the tube region. With these constraints, the dynamical edges can still be either timelike or spacelike. A lower bound $E_\text{min}$ is imposed on $\Re E_{\text{eff}}$ in the numerical integration for the holomorphic flow from $t=0$ to the designated flow time $t=T$. Since a proposal with too small $\Re E_{\text{eff}}$ will not be accepted anyway, it improves the efficiency of the algorithm to simply truncate the integrator at the lower bound and move on to the next proposal. An upper bound $V_\text{max}$ on the total spacetime volume (the absolute value of the Lorentzian volume) is imposed to prevent the Markov chain from running away when it does not find a peak region. In the cases studied the runaway only happens in the uninteresting cases with $T=0.0$. \begin{figure} \centering \includegraphics[width=.4\textwidth]{tube} \caption{The tube lattice, with the crossed edges identified.} \label{fig:tube} \end{figure} The numerical simulation is performed on a tube lattice in a symmetry-reduced setting (\cref{fig:tube}). The squared lengths of the top boundary are fixed at $\sigma_2$, and those of the bottom boundary are fixed at $\sigma_1$. The six remaining edges are dynamical, and they take the same $\sigma$. \subsection{Results} We consider four sets of coupling constants $p$.
The numerical simulations are performed using the Julia programming language \cite{Bezanson2017Julia:Computing} on a personal computer. All Markov chains are obtained within about an hour. Those of small flow time $T$ are obtained within minutes. The results for the average phase $\Phi$ and the expectation value $\ev{\sigma}$ are presented in the figures for the simulations. The interest is not really in obtaining results for $\ev{\sigma}$, but in seeing that the sign fluctuations can be suppressed using the algorithm. In all cases considered, we are able to identify a flow time $T$ for which the sign problem is significantly ameliorated so that $\Phi>0.9$. \subsection*{Case I} For $p=(10.0, 1.0, 0.0)$, we consider $T=0.0, 0.1, 0.2$ (\cref{fig:01-tube-hom-T=0.0} to \cref{fig:01-tube-hom-T=0.2-burnin}). As the flow time $T$ is increased the phase fluctuations are more and more suppressed. For $T=0.2$, the phase fluctuations are strongly suppressed, with average phase $\Phi\approx 0.992$. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{01-tube-hom-T=0.0.png} \caption{Case I. $p=(10.0, 1.0, 0.0)$. With $T=0.0$ the phase fluctuation is not suppressed.} \label{fig:01-tube-hom-T=0.0} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{01-tube-hom-T=0.1.png} \caption{Case I. With $T=0.1$ the phase fluctuation is large.} \label{fig:01-tube-hom-T=0.1} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{01-tube-hom-T=0.2.png} \caption{Case I. With $T=0.2$ the phase fluctuation is largely suppressed after the chain finds the peak region.} \label{fig:01-tube-hom-T=0.2} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{01-tube-hom-T=0.2burnin.png} \caption{Case I.
After dropping the burn-in steps, the phase fluctuation is largely suppressed.} \label{fig:01-tube-hom-T=0.2-burnin} \end{figure} \subsection*{Case II} For $\lambda$ changed to $1.0$, the phase fluctuations are still largely suppressed with $T=0.2$ (\cref{fig:03-tube-hom-T=0.2} and \cref{fig:03-tube-hom-T=0.2-burnin}). Decreasing the cosmological constant $\lambda$ has the effect of increasing the dominating volumes, which meets the naive expectation coming from the Euclidean case. \subsection*{Case III} For $\lambda$ staying at $10.0$ and $a$ changed to $3.0$, we consider $T=0.2, 0.3, 0.6$ (\cref{fig:04-tube-hom-T=0.2} to \cref{fig:04-tube-hom-T=0.6burnin}). The phase fluctuations are no longer sufficiently suppressed with $T=0.2$. It takes $T=0.6$ to suppress the phase fluctuation enough to reach $\Phi>0.9$. Increasing $a$ has the effect of increasing the dominating volumes, just as decreasing $\lambda$ did. This could be explained by noting that $a$ and $\lambda$ have opposite length dimensions which result in opposite scaling effects \cite{Hamber2013InconsistenciesConstantb}. \subsection*{Case IV} For $\lambda$ staying at $10.0$, $a$ staying at $1.0$, and $m$ changed to $-0.75$ (this is the value for the DeWitt measure in $2D$ \cite{Hamber2009QuantumApproach}), we consider $T=0.0, 0.1, 0.2$ (\cref{fig:05-tube-hom-T=0.0} to \cref{fig:05-tube-hom-T=0.1burnin}). With $T=0.0$, the Markov chain ran into the branch point $\sigma=0.25$ of the path integrand, for which $V_t=0$ for the lower triangles. This situation is ill-posed (see \cref{sec:cb} for further discussions). Introducing a positive $T$ removes the problem. At $T=0.1$ the phase fluctuations become sufficiently suppressed so that $\Phi\approx 0.993$. \begin{figure} \centering \includegraphics[width=.9\textwidth]{03-tube-hom-T=0.2.png} \caption{Case II.
Changing $\lambda$ to $1.0$ shifts the peak region.} \label{fig:03-tube-hom-T=0.2} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{03-tube-hom-T=0.2burnin.png} \caption{Case II. Dropping burn-in steps, one sees more clearly that decreasing $\lambda$ increases the dominating volumes.} \label{fig:03-tube-hom-T=0.2-burnin} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{04-tube-hom-T=0.2.png} \caption{Case III. Increasing $a$ to $3.0$ causes larger phase fluctuations. For $T=0.2$ the Markov chain did not find a peak region.} \label{fig:04-tube-hom-T=0.2} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{04-tube-hom-T=0.3.png} \caption{Case III. For $T=0.3$ the Markov chain still did not find a peak region.} \label{fig:04-tube-hom-T=0.3} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{04-tube-hom-T=0.6.png} \caption{Case III. For $T=0.6$ the Markov chain found a peak region.} \label{fig:04-tube-hom-T=0.6} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{04-tube-hom-T=0.6burnin.png} \caption{Case III. After dropping the burn-in steps, one sees more clearly that increasing $a$ increases the dominating volumes.} \label{fig:04-tube-hom-T=0.6burnin} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{05-tube-hom-T=0.0.png} \caption{Case IV. For $m=-0.75$ with $T=0.0$, $\sigma$ approaches the branch point at $0.25$ where $V_t=0$. The result is invalid without a $T>0$ flow.} \label{fig:05-tube-hom-T=0.0} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{05-tube-hom-T=0.1.png} \caption{Case IV. With $T=0.1$, the phase fluctuation is largely suppressed after the chain finds the peak region.} \label{fig:05-tube-hom-T=0.1} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{05-tube-hom-T=0.1burnin.png} \caption{Case IV.
After dropping the burn-in steps, it becomes clearer that the phase fluctuation is largely suppressed.} \label{fig:05-tube-hom-T=0.1burnin} \end{figure} \subsection{Contour boundaries}\label{sec:cb} As mentioned in \cref{sec:na}, to apply the holomorphic gradient flow algorithm we need that: 1) the holomorphic flow traverse a region where the path integrand is holomorphic; 2) the boundary of the flow region make negligible contribution to the path integral. For simplicial quantum gravity, the boundaries are set by the branch point singularities of the path integrand, the generalized triangle inequalities, and bounds on the total volume (if they are imposed). Within the region bounded by these boundaries, the path integrand is holomorphic, so the first requirement is met. We now check the second requirement that the boundary of the flow region make negligible contribution to the path integral. Firstly, the total volume bounds need not pose any problem when we can choose to impose them at places that are known to make negligible contribution. For instance, in the above examples the upper bound is set away from the peak regions. Secondly, the boundaries of the generalized triangle inequalities are set where the Lorentzian volumes vanish, i.e., $\mathbb{V}_t=0$. Yet this coincides with one of the square-root branch point singularities (see (\ref{eq:2dvol}) and (\ref{eq:ca})). Therefore altogether we only need to consider the boundaries set by the branch point singularities of the path integrand. Along such boundaries the contribution to the path integral is infinitely suppressed. To see this, note from (\ref{eq:DreEDt}) that $\dv{E_R}{t}=-\sum_e\abs{\partial_e E}^2\le 0$, i.e., the real part of the path exponent $E$ decays monotonically at a rate determined by $\abs{\partial_e E}$ along the holomorphic flow. Using the formulas of \cref{sec:2dsqg}, one can check that $\abs{\partial_e E}\rightarrow\infty$ at the branch point singularities.
Therefore at the boundaries set by these branch points, the path integrand is infinitely exponentially suppressed. They make negligible contributions to the path integral. In \cref{fig:05-tube-hom-T=0.0}, it was observed that a boundary branch point makes a non-negligible contribution at the starting point of the flow. In this case we could simply deform the contour infinitesimally along the holomorphic flow to avoid branch point artifacts. After this is done, sensible results are obtained in \cref{fig:05-tube-hom-T=0.1} and \cref{fig:05-tube-hom-T=0.1burnin}. \section{Discussion}\label{sec:d} We have provided a definition of complex simplicial gravity, which reduces to Euclidean and Lorentzian simplicial gravity in special cases. The complex formalism enabled us to perform Monte Carlo simulations for Lorentzian simplicial quantum gravity. The numerical sign problem is overcome by deforming the integration contour into the complex domain. The complex formalism also sets the path for further studies of singularity-resolving processes with complex semi-classical solutions, generalizing previous studies in the symmetry-reduced setting \cite{Hartle1989SimplicalModel, Louko1992ReggeCosmology, Birmingham1995LensCosmology, Birmingham1998ACalculus, Furihata1996No-boundaryUniverse, Silva1999SimplicialField, Silva1999AnisotropicField, Silva2000SimplicialPhi2, daSilvaWormholesMinisuperspace, DittrichLorentzianSimplicial}, and making a clear connection to the Lorentzian theory. The numerical simulations for Lorentzian simplicial quantum gravity performed here are in a very simple setting. They are on a short tube lattice, in $1+1D$, with symmetry reduction, and for pure gravity. Future works should extend to larger lattices, higher dimensions, without symmetry reduction, and with matter coupling. The physics theory side of these generalizations is understood.
From the present work it is clear how to define complex simplicial quantum gravity on larger lattices in higher dimensions without symmetry reduction. From previous works it is clear how to couple to the matter species of the Standard Model (see e.g., Chapter 6 of Hamber's textbook \cite{Hamber2009QuantumApproach} and references therein). The numerics side of these generalizations still needs to be understood better. It is unclear to what extent the holomorphic gradient flow algorithm adopted here will remain efficient. Some other techniques may be needed, such as the tempered thimbles, the learnifolds, and the path optimization algorithms reviewed in \cite{AlexandruComplexProblem} and further developed in, e.g., \cite{Fukuma2021WorldvolumeMethod, Fukuma2021StatisticalAlgorithm, Lawrence2021NormalizingProblem, Wynen2021MachineProblems}. Using the numerical tools, one could study the refinement (continuum) limit of the theory. One could investigate questions about the fate of black hole and cosmological singularities \cite{Frolov1981SphericallyGravity, Frolov1989ThroughUniverse, Barrabes1996HowHole, Frolov1998BlackPhysics, Hartle1983WaveUniverse, Halliwell1991Introductory1990, Bojowald2001AbsenceCosmology, Modesto2004DisappearanceGravityb, Ashtekar2005BlackParadigmb, Hayward2006FormationHoles, Hossenfelder2010ConservativeProblem, Haggard2015Quantum-gravityTunneling, Barcelo2014TheResignation, Bianchi2018WhiteHoleb, DAmbrosio2021EndEvaporation, Oriti2017BouncingCondensatesb, Hartle1989SimplicalModel, Li1993ComplexMachines, Gielen2015PerfectBounce, Gielen2016QuantumSingularities, Feldbrugge2017LorentzianCosmology, Dorronsoro2017RealCosmology, Bramberger2017QuantumSingularities, DittrichLorentzianSimplicial}. From a path integral perspective, if a process can be characterized by a set of path integral configurations, the formalism assigns a probability to it (which may or may not have meaning to cognitive beings such as us). 
Simplicial quantum gravity provides a formalism to compute and compare the probabilities for such processes. \section*{Acknowledgement} I am very grateful to Bianca Dittrich and Jos{\'e} Padua-Arg{\"u}elles for comments and questions that helped improve the manuscript, to Seth Asante, Lee Smolin and Bianca Dittrich for valuable discussions on simplicial quantum gravity, to Erik Schnetter and Dustin Lang for timely help on computation matters, and to Lucien Hardy, Achim Kempf, Laurent Freidel, and Robert Mann for valuable discussions on quantum gravity in general. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. \bibliographystyle{unsrt}
\section{Introduction and Main Results}\label{results} We study the $(1+1)$-dimensional quasilinear wave equation \begin{align} g(x)w_{tt}-w_{xx}+h(x)(w_t^3)_t=0 \quad \text{for } (x,t)\in\R\times\R, \label{quasi} \end{align} and we look for real-valued, time-periodic and spatially localized solutions $w(x,t)$. At the end of this introduction we give a motivation for this equation arising in the study of localized electromagnetic waves modeled by Kerr-nonlinear Maxwell equations. We also cite some relevant papers. To the best of our knowledge for \eqref{quasi} in its general form no rigorous existence results are available. A first result is given in this paper by taking an extreme case where $h(x)$ is a spatial delta distribution at $x=0$. Our basic assumption on the coefficients $g$ and $h$ is the following: \begin{align} g\in\Leb{\infty}{\R} \text{ even, } g \not \equiv 0 \text{ and } h(x)=\gamma\delta_0(x) \mbox{ with } \gamma\neq0 \tag{C0}\label{C0} \end{align} where $\delta_0$ denotes the delta-distribution supported in $0$. We have two prototypical examples for the potential $g$: a step potential (Theorem~\ref{step}) and a periodic step potential (Theorem~\ref{w is a weak solution in expl exa}). The general version is given in Theorem~\ref{w is a weak solution general} below. \begin{thm}\label{step} For $a,b,c>0$ let \[ g(x)\coloneqq\left\{\begin{array}{ll} -a, & \mbox{ if }\, \abs{x}>c,\\ b, & \mbox{ if }\, \abs{x}<c. \end{array}\right. \] For every frequency $\omega$ such that $\sqrt{b}\omega c \frac{2}{\pi}\in \frac{2\N+1}{2\N+1}$ and $\gamma<0$ there exist infinitely many nontrivial, real-valued, spatially localized and time-periodic weak solutions of \eqref{quasi} with period $T=\frac{2\pi}{\omega}$. For each solution $w$ there are constants $C,\rho>0$ such that $\abs{w(x,t)}\leq C\e^{-\rho\abs{x}}$. 
\end{thm} \begin{thm}\label{w is a weak solution in expl exa} For $a,b>0$, $a\not =b$ and $\Theta\in(0,1)$ let \[ g(x)\coloneqq\left\{\begin{array}{ll} a, & \mbox{ if }\, \abs{x}<\pi\Theta,\\ b, & \mbox{ if }\, \pi\Theta<\abs{x}<\pi \end{array}\right. \] and extend $g$ as a $2\pi$-periodic function to $\R$. Assume in addition \begin{align*} \sqrt{\frac{b}{a}}\,\frac{1-\Theta}{\Theta}\in \frac{2\N+1}{2\N+1}. \end{align*} For every frequency $\omega$ such that $4\sqrt{a}\Theta\omega\in \frac{2\N+1}{2\N+1}$ there exist infinitely many nontrivial, real-valued, spatially localized and time-periodic weak solutions of \eqref{quasi} with period $T=\frac{2\pi}{\omega}$. For each solution $w$ there are constants $C,\rho>0$ such that $\abs{w(x,t)}\leq C\e^{-\rho\abs{x}}$. \end{thm} Weak solutions of \eqref{quasi} are understood in the following sense. We write $D\coloneqq{\R\times\T_T}$ and denote by $\T_T$ the one-dimensional torus with period $T$. \begin{defn}\label{Defn of weak Sol to (quasi)} Under the assumption \eqref{C0} a function $w\in\SobH{1}{{\R\times\T_T}}$ with $\partial_tw(0,\cdot)\in\Leb{3}{\T_T}$ is called a weak solution of \eqref{quasi} if for every $\psi\in\Contc{\infty}{{\R\times\T_T}}$ \begin{align} \int_D-g(x)\partial_tw\,\partial_t\psi +\partial_xw\,\partial_x\psi\dd{(x,t)} -\gamma\int_{0}^{T} (\partial_t w(0,t))^3 \partial_t \psi(0,t)\dd{t}=0. \label{WeakEquation for (quasi)} \end{align} \end{defn} Theorem~\ref{step} and Theorem~\ref{w is a weak solution in expl exa} are special cases of Theorem~\ref{w is a weak solution general}, which applies to much more general potentials $g$. In Section~\ref{details_example_step} and Section~\ref{explicit example Bloch Modes_WR} of the Appendix we will show that the special potentials $g$ from these two theorems satisfy the conditions \eqref{spectralcond} and \eqref{FurtherCond_phik} of Theorem~\ref{w is a weak solution general}.
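For the step potential of Theorem~\ref{step}, the quantities entering the sign conditions of Theorem~\ref{w is a weak solution general} can be computed in closed form: the decaying solution of $-u''-k^2\omega^2g(x)u=0$ on $(0,\infty)$ is a pure exponential with rate $\mu=k\omega\sqrt{a}$ for $x>c$, oscillates with frequency $\nu=k\omega\sqrt{b}$ on $(0,c)$, and matching values and derivatives at $x=c$ determines $u'(0)/u(0)$. The following sketch implements this matching numerically (our own illustrative computation; the rigorous verification for Theorem~\ref{step} is carried out in the appendix):

```python
import math

def phi_prime0(k, omega, a, b, c):
    """Derivative at 0 of the decaying solution of -u'' - k^2 w^2 g(x) u = 0
    on (0, inf), normalized by u(0) = 1, for the step potential
    g = b on (0, c), g = -a on (c, inf).  Obtained by matching
    u = exp(-mu (x - c)) for x > c to the oscillatory interior solution."""
    mu = k * omega * math.sqrt(a)   # decay rate outside
    nu = k * omega * math.sqrt(b)   # oscillation frequency inside
    num = nu * math.sin(nu * c) - mu * math.cos(nu * c)
    den = math.cos(nu * c) + (mu / nu) * math.sin(nu * c)
    return num / den

# Example: a = b = 1, c = pi/2, omega = 1, so sqrt(b)*omega*c*(2/pi) = 1
# is a ratio of odd integers.  Then nu*c = k*pi/2 and cos(nu*c) = 0 for
# odd k, so the quotient reduces to nu^2/mu = k > 0.
for k in (1, 3, 5, 7):
    assert phi_prime0(k, 1.0, 1.0, 1.0, math.pi / 2) > 0
```

In this example the sequence of derivatives at $0$ contains (only) positive elements, matching the case $\gamma<0$ in Theorem~\ref{step}.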
The basic preparations and assumptions for Theorem~\ref{w is a weak solution general} will be given next. Since we are looking for time-periodic solutions, it is appropriate to make the Fourier ansatz $w(x,t)=\sum_{k\in{\Z_{odd}}} w_k(x) \e^{\i k\omega t}$ with ${\Z_{odd}}\coloneqq2\Z+1$. The reason for dropping even Fourier modes is that the $0$-mode belongs to the kernel of the wave operator $L=g(x)\partial_t^2 - \partial_x^2$. The restriction to odd Fourier modes generates $T/2=\pi/\omega$-antiperiodic functions $w$ and is therefore compatible with the structure of \eqref{quasi} and in particular the cubic nonlinearity. Notice the decomposition $(Lw)(x,t)=\sum_{k\in {\Z_{odd}}} L_k w_k(x) \e^{\i k\omega t}$ with self-adjoint operators \begin{align*} L_k = -\frac{d^2}{dx^2} - k^2\omega^2 g(x): H^2(\R)\subset L^2(\R)\rightarrow L^2(\R). \end{align*} Clearly $L_k=L_{-k}$ so that it suffices to study $L_k$ for $k\in {\N_{odd}}$. Our first assumption is concerned with the spectrum $\sigma(L_k)$: \begin{align}\label{spectralcond} \forall\,k\in {\N_{odd}}, 0\not \in \sigma_{\mathrm{ess}}(L_k)\cup \sigma_{\mathrm{D}}(L_k), \tag{C1} \end{align} where by $\sigma_{\mathrm{D}}(L_k)$ we denote the spectrum of $L_k$ with an extra Dirichlet condition at $0$, i.e., the spectrum of $L_k$ restricted to $\{\varphi\in\SobH{2}{\R}~|~\varphi(0)=0\}$. This is the same as the spectrum of $L_k$ restricted to functions which are odd around $x=0$. \begin{lemma} \label{exp_decaying_sol} Under the assumptions \eqref{C0} and \eqref{spectralcond} there exists for every $k\in {\N_{odd}}$ a function $\Phi_k\in H^2(0,\infty)$ with $L_k\Phi_k=0$ on $(0,\infty)$ and $\Phi_k(0)=1$. \end{lemma} \begin{proof} We have either that $0$ is in the point spectrum (but not the Dirichlet spectrum) or that $0$ is in the resolvent set of $L_k$. In the first case there is an eigenfunction $\Phi_k\in H^2(\R)$ with $L_k\Phi_k=0$ and w.l.o.g. $\Phi_k(0)=1$.
In the second case $0\in \rho(L_k)$ so that there exists a unique solution $\tilde \Phi_k$ of $L_k \tilde \Phi_k = 1_{[-2,-1]}$ on $\R$. Clearly, if restricted to $(0,\infty)$, the function $\tilde\Phi_k$ solves $L_k \tilde\Phi_k=0$ on $(0,\infty)$. Moreover, $\tilde\Phi_k(0)\not =0$ since otherwise $\tilde\Phi_k$ would be an odd eigenfunction of $L_k$ which is excluded due to $0\in \rho(L_k)$. Thus a suitably rescaled version of $\tilde\Phi_k$ satisfies the claim of the lemma. \end{proof} Our second set of assumptions concerns the structure of the decaying fundamental solution according to Lemma~\ref{exp_decaying_sol}. \begin{align}\label{FurtherCond_phik} \mbox{There exist $\rho, M>0$ such that for all } k\in {\N_{odd}}\colon |\Phi_k(x)|\leq Me^{-\rho x} \mbox{ on } [0,\infty). \tag{C2} \end{align} Now we can formulate our third main theorem as a generalization of Theorem~\ref{step} and Theorem~\ref{w is a weak solution in expl exa}. The fact that the solutions we find can be well approximated by truncating the Fourier series in time is explained in Lemma~\ref{lemma_approximation} below. Moreover, a further extension yielding infinitely many different solutions is given in Theorem~\ref{multiplicity abstract} in Section~\ref{infinitely_many_breathers}. \begin{thm}\label{w is a weak solution general} Assume \eqref{C0}, \eqref{spectralcond} and \eqref{FurtherCond_phik} for a potential $g$ and a frequency $\omega>0$. Then \eqref{quasi} has a nontrivial, $T$-periodic weak solution $w$ in the sense of Definition~\ref{Defn of weak Sol to (quasi)} with $T=\frac{2\pi}{\omega}$ provided \begin{itemize} \item[(i)] $\gamma<0$ and the sequence $\left(\Phi'_k(0)\right)_{k\in{\N_{odd}}}$ has at least one positive element, \item[(ii)] $\gamma>0$ and the sequence $\left(\Phi'_k(0)\right)_{k\in{\N_{odd}}}$ has at least one negative element.
\end{itemize} Moreover, there is a constant $C>0$ such that $\abs{w(x,t)}\leq C\e^{-\rho\abs{x}}$ for all $(x,t)\in \R^2$ with $\rho$ as in \eqref{FurtherCond_phik}. \end{thm} \begin{rmk} (a) \label{remark_Dr} It turns out that the above assumptions can be weakened as follows: it suffices to verify \eqref{spectralcond} and \eqref{FurtherCond_phik} and (i), (ii) for all integers $k\in r\cdot {\Z_{odd}}$ for some $r\in {\N_{odd}}$. We will prove this observation in Section~\ref{infinitely_many_breathers}. (b) Our variational approach also works if we consider \eqref{quasi} with Dirichlet boundary conditions on a bounded interval $(-l,l)$ instead of the real line. There are many possible results. For illustration purposes we just formulate the simplest one. E.g., if we assume that $\frac{\omega l}{\pi}\in\frac{{\Z_{odd}}}{4\Z}$ then \begin{align*} w_{tt}-w_{xx}+\gamma\delta_0(x)(w_t^3)_t=0 \mbox{ on } (-l,l)\times\R \mbox{ with } w(\pm l,t)=0 \mbox{ for all } t \end{align*} has a nontrivial, real-valued time-periodic weak solution with period $T=\frac{2\pi}{\omega}$ both for $\gamma>0$ and $\gamma<0$. The operator $L_k=-\frac{d^2}{dx^2}-\omega^2k^2$ is now a self-adjoint operator on $H^2(-l,l)\cap H_0^1(-l,l)$. The assumption $\frac{\omega l}{\pi}\in\frac{{\Z_{odd}}}{4\Z}$ guarantees \eqref{spectralcond} for all $k\in{\Z_{odd}}$. The functions $\Phi_k$ are given by $\Phi_k(x)=\frac{\sin(\omega k(l-x))}{\sin(\omega kl)}$ so that $\Phi_k'(0)=-\omega k\cot(\omega kl)$. The assumption $\frac{\omega l}{\pi}\in\frac{{\Z_{odd}}}{4\Z}$ now guarantees that the sequence $\{\cot(\omega kl)~|~k\in{\Z_{odd}}\}$ is finite and does not contain $0$ or $\pm\infty$. Moreover $\frac{\omega l}{\pi}=\frac{2p+1}{4q}$ yields $\Phi_{k}'(0)\Phi_{k+2q}'(0)<0$, i.e., we also have the required sign-change which allows for both signs of $\gamma$. \end{rmk} We observe that the growth of $\left(\Phi'_k(0)\right)_{k\in{\Z_{odd}}}$ is connected to regularity properties of our solutions.
\begin{thm}\label{w is even more regular} Assume \eqref{C0}, \eqref{spectralcond} and \eqref{FurtherCond_phik} and additionally $\Phi'_k(0) = O(k)$. Then the weak solution $w$ from Theorem \ref{w is a weak solution general} belongs to $\SobHper{1+\nu}{\T_T,\Leb{2}{\R}}\cap\SobHper{\nu}{\T_T,\SobH{1}{\R}}$ for any $\nu\in(0,\frac{1}{4})$. \end{thm} Here, for $\nu\in\R$ the fractional Sobolev spaces of time-periodic functions are defined by \begin{align*} \SobHper{\nu}{\T_T,\Leb{2}{\R}}&\coloneqq\left\{ u(x,t)=\sum_{k\in\Z}\hat{u}_k(x)\e^{\i\omega kt} ~\bigg|~ \sum_{k\in\Z}\left(1+\abs{k}^2\right)^{\nu}\norm{\hat{u}_k}^2_\Leb{2}{\R}<\infty \right\}, \\ \SobHper{\nu}{\T_T,\SobH{1}{\R}}&\coloneqq\left\{ u(x,t)=\sum_{k\in\Z}\hat{u}_k(x)\e^{\i\omega kt} ~\bigg|~ \sum_{k\in\Z}\left(1+\abs{k}^2\right)^{\nu}\norm{\hat{u}_k}^2_\SobH{1}{\R}<\infty \right\}. \end{align*} We briefly motivate \eqref{quasi} and point to related literature. Consider Maxwell's equations in the absence of charges and currents \begin{align*} \nabla\cdot\mathbf{D}&=0, &\nabla\times\mathbf{E}\,=&-\partial_t\mathbf{B}, &\mathbf{D}=&\varepsilon_0\mathbf{E}+\mathbf{P}(\mathbf{E}), \\ \nabla\cdot\mathbf{B}&=0, &\nabla\times\mathbf{H}=&\,\partial_t\mathbf{D}, &\mathbf{B}=&\mu_0\mathbf{H}. \end{align*} We assume that the dependence of the polarization $\mathbf{P}$ on the electric field $\mathbf{E}$ is instantaneous and that it is the sum of a linear and a cubic term given by $\mathbf{P}(\mathbf{E})=\varepsilon_0\chi_1(\mathbf{x})\mathbf{E}+\varepsilon_0\chi_3(\mathbf{x})\abs{\mathbf{E}}^2\mathbf{E}$, cf. \cite{agrawal}, Section~2.3 (for simplicity we do not consider the more general case where, instead of a scalar factor multiplying $\abs{\mathbf{E}}^2\mathbf{E}$, the coefficient $\chi_3$ is an $\mathbf{x}$-dependent tensor of type $(1,3)$).
Here $\varepsilon_0, \mu_0$ are constants such that $c^2=(\varepsilon_0\mu_0)^{-1}$ with $c$ being the speed of light in vacuum and $\chi_1, \chi_3$ are given material functions. By direct calculations one obtains the quasilinear curl-curl-equation \begin{align} 0=\nabla\times\nabla\times\mathbf{E} +\partial_t^2\left( V(\mathbf{x})\mathbf{E}+\Gamma(\mathbf{x})\abs{\mathbf{E}}^2\mathbf{E}\right), \label{curlcurl} \end{align} where $V(\mathbf{x})=\mu_0\varepsilon_0\left(1+\chi_1(\mathbf{x})\right)$ and $\Gamma(\mathbf{x})=\mu_0\varepsilon_0\chi_3(\mathbf{x})$. Once \eqref{curlcurl} is solved for the electric field $\mathbf{E}$, the magnetic induction $\mathbf{B}$ is obtained by time-integration from $\nabla\times\mathbf{E}=-\partial_t\mathbf{B}$ and it will satisfy $\nabla\cdot\mathbf{B}=0$ provided it does so at time $t=0$. By construction, the magnetic field $\mathbf{H}=\frac{1}{\mu_0} \mathbf{B}$ satisfies $\nabla\times\mathbf{H}=\partial_t\mathbf{D}$. In order to complete the full set of nonlinear Maxwell's equations one only needs to check Gauss's law $\nabla\cdot\mathbf{D}=0$ in the absence of external charges. This will follow directly from the constitutive equation $\mathbf{D}=\varepsilon_0(1+\chi_1(\mathbf{x}))\mathbf{E}+\varepsilon_0\chi_3(\mathbf{x})\abs{\mathbf{E}}^2\mathbf{E}$ and the two different specific forms of $\mathbf{E}$ given next: \begin{align*} \mathbf{E}(\mathbf{x},t)&=(0,u(x_1-\kappa t,x_3),0)^T &&\hspace*{-2cm}\mbox{ polarized wave traveling in $x_1$-direction } \\ \mathbf{E}(\mathbf{x},t)&=(0,u(x_1,t),0)^T &&\hspace*{-2cm}\mbox{ polarized standing wave} \end{align*} In the first case $\mathbf{E}$ is a polarized wave independent of $x_2$ traveling with speed $\kappa$ in the $x_1$ direction and with profile $u$. 
If additionally $V(\mathbf{x})=V(x_3)$ and $\Gamma(\mathbf{x})=\Gamma(x_3)$ then the quasilinear curl-curl-equation \eqref{curlcurl} turns into the following equation for $u=u(\tau,x_3)$ with the moving coordinate $\tau=x_1-\kappa t$: \begin{align*} -u_{x_3 x_3} + (\kappa^2 V(x_3)-1) u_{\tau\tau} + \kappa^2\Gamma(x_3)(u^3)_{\tau\tau}=0. \end{align*} Setting $u=w_\tau$ and integrating once w.r.t. $\tau$ we obtain \eqref{quasi}. \medskip In the second case $\mathbf{E}$ is a polarized standing wave which is independent of $x_2, x_3$. If we assume furthermore that $V(\mathbf{x})=V(x_1)$ and $\Gamma(\mathbf{x})=\Gamma(x_1)$ then this time the quasilinear curl-curl-equation \eqref{curlcurl} for $u=w_t$ turns (after one time-integration) directly into \eqref{quasi}. \medskip In the literature, \eqref{curlcurl} has mostly been studied by considering time-harmonic waves $\mathbf{E}(\mathbf{x},t)= \mathbf{U}(\mathbf{x})e^{\i\kappa t}$. This reduces the problem to the stationary elliptic equation \begin{equation} \label{curl_curl_stat} 0=\nabla\times\nabla\times\mathbf{U} -\kappa^2\left( V(\mathbf{x})\mathbf{U}+\Gamma(\mathbf{x})\abs{\mathbf{U}}^2\mathbf{U}\right) \mbox{ in } \R^3. \end{equation} In this case $\mathbf{E}$ is no longer real-valued. This may be justified by extending the ansatz to $\mathbf{E}(\mathbf{x},t)= \mathbf{U}(\mathbf{x})e^{\i\kappa t}+c.c.$ and by either neglecting higher harmonics generated from the cubic nonlinearity or by assuming the time-averaged constitutive relation $\mathbf{P}(\mathbf{E})=\varepsilon_0\chi_1(\mathbf{x})\mathbf{E}+\varepsilon_0\chi_3(\mathbf{x})\frac{1}{T}\int_0^T\abs{\mathbf{E}}^2\,dt \mathbf{E}$ with $T=2\pi/\kappa$, cf. \cite{stuart_1993}, \cite{Sutherland03}. For results on \eqref{curl_curl_stat} we refer to \cite{BDPR_2016}, \cite{Mederski_2015} and in particular to the survey \cite{Bartsch_Mederski_survey}. Time-harmonic traveling waves have been found in a series of papers \cite{stuart_1990, stuart_1993,stuart_zhou_2010}.
The number of results for monochromatic standing polarized wave profiles $U(\mathbf{x})=(0,u(x_1),0)$ with $u$ satisfying $0=-u''-\kappa^2\left( V(x_1)u+\Gamma(x_1)|u|^2u\right)$ on $\R$ is too large to cite so we restrict ourselves to Cazenave's book \cite{cazenave}. \medskip Our approach differs substantially from the approaches by monochromatic waves described above. Our ansatz $w(x,t)=\sum_{k\in{\Z_{odd}}} w_k(x) \e^{\i k\omega t}$ with ${\Z_{odd}}\coloneqq2\Z+1$ is automatically polychromatic since it couples all integer multiples of the frequency $\omega$. A similar polychromatic approach is considered in \cite{PelSimWeinstein}. The authors seek spatially localized traveling wave solutions of the 1+1-dimensional quasilinear Maxwell model, where in the direction of propagation $\chi_1$ is a periodic arrangement of delta functions. Based on a multiple scale approximation ansatz, the field profile is expanded into infinitely many modes which are time-periodic in both the fast and slow time variables. Since the periodicities in the fast and slow time variables differ, the field becomes quasiperiodic in time. To a certain extent the authors of \cite{PelSimWeinstein} analytically deal with the resulting system for these infinitely many coupled modes through bifurcation methods, with a rigorous existence proof still missing. However, numerical results from \cite{PelSimWeinstein} indicate that spatially localized traveling waves could exist. \medskip With our case of allowing $\chi_1$ to be a bounded function but taking $\chi_3$ to be a delta function at $x=0$ we consider an extreme case. On the other hand our existence results (possibly for the first time) rigorously establish localized solutions of the full nonlinear Maxwell problem \eqref{curlcurl} without making the assumption of either neglecting higher harmonics or of assuming a time-averaged nonlinear constitutive law. 
\medskip The existence of localized breathers of the quasilinear problem \eqref{quasi} with bounded coefficients $g, h$ remains generally open. We can, however, provide specific functions $g$, $h$ for which \eqref{quasi} has a breather-type solution that decays to $0$ as $|x|\to \infty$. Let \begin{align*} b(x) \coloneqq (1+x^2)^{-1/2}, \quad h(x) \coloneqq \frac{1-2x^2}{1+x^2}, \quad g(x) \coloneqq \frac{1-2x^2}{(1+x^2)^2} \end{align*} and consider a time-periodic solution $a$ of the ODE \begin{align*} -a'' - (a'^3)' =a \end{align*} with minimal prescribed period $T\in (0,2\pi)$. Then $w(x,t) \coloneqq a(t)b(x)$ satisfies \eqref{quasi}. Note that $g$ and $h$ are sign-changing and $w$ is not exponentially localized. We found this solution by inserting the ansatz for $w$ with separated variables into \eqref{quasi}. We then defined $b(x)\coloneqq(1+x^2)^{-1/2}$ and set $g(x)\coloneqq -b''(x)/b(x)$ and $h(x)\coloneqq -b''(x)/b(x)^3$. The remaining equation for $a$ then turned out to be the above one. \medskip The paper is structured as follows: In Section~\ref{variational_approach} we develop the variational setting and give the proof of Theorem~\ref{w is a weak solution general}. The proof of the additional regularity results of Theorem~\ref{w is even more regular} is given in Section~\ref{further_regularity}. In Section~\ref{infinitely_many_breathers} we give the proof of Theorem~\ref{multiplicity abstract} on the existence of infinitely many different breathers. In Section~\ref{approximation} we show that our breathers can be well approximated by truncation of the Fourier series in time. Finally, in the Appendix we give details on the background and proof of Theorem~\ref{step} (Section~\ref{details_example_step}) and Theorem~\ref{w is a weak solution in expl exa} (Section~\ref{explicit example Bloch Modes_WR}) as well as a technical detail on a particular embedding of H\"older spaces into Sobolev spaces (Section~\ref{embedding}).
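The separable example above can be checked numerically. The sketch below (helper names are ours) verifies by finite differences that for $w=a(t)b(x)$ with $g=-b''/b$ and $h=-b''/b^3$ the left-hand side of \eqref{quasi} factorizes as $-b''(x)\left(a''+a+(a'^3)'\right)$, which vanishes exactly when $a$ solves the ODE; the profile $a(t)=\sin t$ used here is an arbitrary smooth function, not a solution of the ODE, since the factorization holds for any smooth $a$:

```python
import math

def b(x):
    return (1.0 + x * x) ** -0.5

def d2(f, s, step=1e-4):
    # central second difference
    return (f(s + step) - 2.0 * f(s) + f(s - step)) / step ** 2

# coefficients from the separation ansatz: g = -b''/b, h = -b''/b^3
def g_coef(x): return -d2(b, x) / b(x)
def h_coef(x): return -d2(b, x) / b(x) ** 3

# arbitrary smooth time profile (need not solve the ODE for this identity)
def a(t):   return math.sin(t)
def ap(t):  return math.cos(t)    # a'
def app(t): return -math.sin(t)   # a''

def residual(x, t):
    """g(x) w_tt - w_xx + h(x) (w_t^3)_t for w = a(t) b(x)."""
    w_tt = app(t) * b(x)
    w_xx = a(t) * d2(b, x)
    wt3_t = 3.0 * ap(t) ** 2 * app(t) * b(x) ** 3   # (a'^3)' b^3
    return g_coef(x) * w_tt - w_xx + h_coef(x) * wt3_t

def factored(x, t):
    """-b''(x) * (a'' + a + (a'^3)'), the claimed factorization."""
    return -d2(b, x) * (app(t) + a(t) + 3.0 * ap(t) ** 2 * app(t))

assert abs(residual(0.7, 1.3) - factored(0.7, 1.3)) < 1e-8
assert abs(h_coef(1.0) + 0.5) < 1e-4   # matches (1-2x^2)/(1+x^2) at x = 1
```

At $x=1$ one indeed recovers $h(1)=-\tfrac12$, in agreement with the closed form $(1-2x^2)/(1+x^2)$ stated above.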
\section{Variational Approach and Proof of Theorem~\ref{w is a weak solution general}} \label{variational_approach} The main result of our paper is Theorem~\ref{w is a weak solution general} which will be proved in this section. It is a consequence of Lemma~\ref{breathers} and Theorem~\ref{J attains a minimum and its properties} below. \medskip Formally \eqref{quasi} is the Euler-Lagrange-equation of the functional \begin{equation} \label{def_I} I(w)\coloneqq\int_D-\frac{1}{2}g(x)\abs{\partial_tw}^2+\frac{1}{2}\abs{\partial_xw}^2\dd{(x,t)} -\frac{1}{4}\gamma\int_{0}^{T}\abs{\partial_tw(0,t)}^4\dd{t} \end{equation} defined on a suitable space of $T$-periodic functions. Instead of directly searching for a critical point of this functional we first rewrite the problem into a nonlinear Neumann boundary value problem under the assumption that $w$ is even in $x$. In this case \eqref{quasi} amounts to the following linear wave equation on the half-axis with nonlinear Neumann boundary conditions: \begin{gather} \begin{cases} g(x) w_{tt}-w_{xx}=0 & \text{for } (x,t)\in(0,\infty)\times\R,\\ 2w_x(0_+,t)=\gamma\left(w_t(0,t)^3\right)_t & \text{for }t\in\R \end{cases}\label{nonlinNeuBVP} \end{gather} where solutions $w\in\SobH{1}{[0,\infty)\times\T_T}$ with $\partial_tw(0,\cdot)\in\Leb{3}{\T_T}$ of \eqref{nonlinNeuBVP} are understood in the sense that \begin{align} 2\int_{D_+}-g(x)\partial_tw\,\partial_t\psi +\partial_xw\,\partial_x\psi\dd{(x,t)} -\gamma\int_{0}^{T} (\partial_t w(0,t))^3 \partial_t \psi(0,t)\dd{t}=0 \label{WeakEquation for nlinNeuBVP} \end{align} for all $\psi\in\Contc{\infty}{[0,\infty)\times\T_T}$ with $D_+=(0,\infty)\times\T_T$. It is clear that evenly extended solutions $w$ of \eqref{WeakEquation for nlinNeuBVP} also satisfy \eqref{WeakEquation for (quasi)}. To see this note that every $\psi\in\Contc{\infty}{\R\times\T_T}$ can be split into an even and an odd part $\psi=\psi_{e}+\psi_{o}$ both belonging to $\Contc{\infty}{\R\times\T_T}$. 
Testing with $\psi_o$ in \eqref{WeakEquation for (quasi)} produces zeroes in all spatial integrals due to the evenness of $w$ and also in the temporal integral since $\psi_{o}(0,\cdot)\equiv 0$ due to oddness. Testing with $\psi_e$ in \eqref{WeakEquation for (quasi)} produces twice the spatial integrals appearing in \eqref{WeakEquation for nlinNeuBVP}. In the following we concentrate on finding solutions of \eqref{nonlinNeuBVP} for the linear wave equation with nonlinear Neumann boundary conditions. Motivated by the linear wave equation in \eqref{nonlinNeuBVP} we make the ansatz that \begin{equation} \label{ansatz} w(x,t)=\sum_{k\in{\Z_{odd}}}\frac{\hat{\alpha}_k}{k}\Phi_k(\abs{x})e_k(t), \end{equation} where $e_k(t)\coloneqq\frac{1}{\sqrt{T}}\e^{\i\omega kt}$ denotes the $\Leb{2}{\T_T}$-orthonormal Fourier base of $\T_T$, and where $\Phi_k$ are the decaying fundamental solutions $\Phi_k$ of $L_k$, cf. Lemma~\ref{exp_decaying_sol}. Such a function $w$ will always solve the linear wave equation in \eqref{nonlinNeuBVP} and we will determine real sequences $\hat{\alpha}= (\hat\alpha_k)_{k\in {\Z_{odd}}}$ such that the nonlinear Neumann condition is satisfied as well. The additional factor $\frac{1}{k}$ is only for convenience, since $\partial_t$ generates a multiplicative factor $\i\omega k$. \medskip The convolution between two sequences $\hat{z},\hat{y}\in\R^\Z$ is defined pointwise (whenever it converges) by $(\hat{z}*\hat{y})_k\coloneqq \sum_{l\in\Z}\hat{z}_l\hat{y}_{k-l}$. \medskip In order to obtain real-valued functions $w$ by the ansatz \eqref{ansatz} we require the sequence $\hat{\alpha}$ to be real and odd in $k$, i.e., $\hat{\alpha}_k\in \R$ and $\hat{\alpha}_k = -\hat{\alpha}_{-k}$. 
Since \eqref{ansatz} already solves the wave equation in \eqref{nonlinNeuBVP}, it remains to find $\hat{\alpha}$ such that \begin{align*} 2w_x(0_+,t) = 2\sum_{k\in {\Z_{odd}}} \frac{\hat{\alpha}_k}{k} \Phi_k'(0)e_k(t) \stackrel{!}{~=~} \frac{1}{T}\sum_{k\in {\Z_{odd}}} \gamma\omega^4 k (\hat\alpha*\hat\alpha*\hat\alpha)_k e_k(t) = \gamma(w_t(0,t)^3)_t, \end{align*} where we have used $\Phi_k(0)=1$. As the above identity needs to hold for all $t\in \R$ we find \begin{equation} \label{euler_lagrange_alpha} (\hat\alpha*\hat\alpha*\hat\alpha)_k = \frac{2T\Phi_k'(0)}{\gamma\omega^4 k^2} \hat \alpha_k \quad\mbox{ for all } k \in {\Z_{odd}}. \end{equation} This will be accomplished by searching for critical points $\hat{\alpha}$ of the functional \begin{align*} J(\hat{z})\coloneqq\frac{1}{4} (\hat{z}*\hat{z}*\hat{z}*\hat{z})_0+\frac{T}{\gamma \omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat{z}_k^2 \end{align*} defined on a suitable Banach space of real sequences $\hat{z}$ with $\hat{z}_k = -\hat{z}_{-k}$. Indeed, computing (formally) the Fr\'{e}chet derivative of $J$ at $\hat{\alpha}$ we find \begin{equation} J'(\hat{\alpha})[\hat{y}]=\left(\hat{\alpha}*\hat{\alpha}*\hat{\alpha}*\hat{y}\right)_0+\frac{2T}{\gamma\omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat{\alpha}_k\hat{y}_k. \label{frechet} \end{equation} Let us indicate how \eqref{frechet} amounts to \eqref{euler_lagrange_alpha}. For fixed $k_0\in {\Z_{odd}}$ we define the test sequence $\hat{y}\coloneqq(\delta_{k,k_0}-\delta_{k,-k_0})_{k\in {\Z_{odd}}}$ which has exactly two non-vanishing entries at $k_0$ and at $-k_0$. Thus, $\hat{y}$ belongs to the same space of odd, real sequences as $\hat{\alpha}$ and can therefore be used as a test sequence in $J'(\hat{\alpha})[\hat{y}]=0$. After a short calculation using $\hat{\alpha}_k=-\hat{\alpha}_{-k}$, $\Phi_k'=\Phi_{-k}'$ we obtain \eqref{euler_lagrange_alpha} for $k_0$.
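For the reader's convenience we spell out this short calculation. Since $\hat{\alpha}*\hat{\alpha}*\hat{\alpha}$ is again odd in $k$ (being the triple convolution of an odd sequence), testing with $\hat{y}=(\delta_{k,k_0}-\delta_{k,-k_0})_{k\in{\Z_{odd}}}$ gives
\begin{align*}
0=J'(\hat{\alpha})[\hat{y}]
&=\sum_k(\hat{\alpha}*\hat{\alpha}*\hat{\alpha})_k\hat{y}_{-k}
+\frac{2T}{\gamma\omega^4}\left(\frac{\Phi'_{k_0}(0)}{k_0^2}\hat{\alpha}_{k_0}-\frac{\Phi'_{-k_0}(0)}{k_0^2}\hat{\alpha}_{-k_0}\right)\\
&=-2(\hat{\alpha}*\hat{\alpha}*\hat{\alpha})_{k_0}+\frac{4T\Phi'_{k_0}(0)}{\gamma\omega^4 k_0^2}\hat{\alpha}_{k_0},
\end{align*}
and division by $-2$ yields precisely \eqref{euler_lagrange_alpha} for $k=k_0$.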
\medskip It turns out that a real Banach space of real-valued sequences which is suitable for $J$ can be given by \begin{align*} \Dom{J}\coloneqq\left\{ \hat{z}\in\R^{\Z_{odd}} ~\big|~ \NORM{\hat{z}}<\infty,~ \hat{z}_k=-\hat{z}_{-k} \right\} \mbox{ where } \NORM{\hat{z}}\coloneqq \norm{\hat{z}*\hat{z}}_\seq{2}^\frac{1}{2}. \end{align*} The relation between the function $I$ defined in \eqref{def_I} and the new functional $J$ is formally given by \begin{align*} I\left(\sum_{k\in{\Z_{odd}}}\frac{\hat{z}_k}{k}\Phi_k(\abs{x})e_k(t)\right) =-\frac{\gamma\omega^4}{T} J\left(\hat{z}\right). \end{align*} \begin{lemma} \label{Charakterization Dom(J)} The space $(\Dom{J},\NORM{\cdot})$ is a separable, reflexive, real Banach space and isometrically embedded into the real Banach space $\Leb{4}{\T_T,\i\R}$ of purely imaginary-valued measurable functions. Moreover, for $\hat{u}, \hat{v}, \hat{w}, \hat{z} \in \Dom{J}$ we have \begin{align} (\hat{u}*\hat{u}*\hat{u}*\hat{u})_0 & = \NORM{\hat{u}}^4, \\ \abs{(\hat{u}*\hat{v}*\hat{w}*\hat{z})_0} & \leq \NORM{\hat{u}}\,\NORM{\hat{v}}\,\NORM{\hat{w}}\,\NORM{\hat{z}}, \label{conv_multilinear} \\ \norm{\hat{z}}_{\seq{2}} &\leq\NORM{\hat{z}}. \label{l2_l4} \end{align} \end{lemma} \begin{proof} We first recall the correspondence between real-valued sequences $\hat{z}\in\seq{2}$ with $\hat{z}_k=-\hat{z}_{-k}$ and purely imaginary-valued functions $z\in\Leb{2}{\T_T,\i\R}$ by setting \begin{align*} \hat{z}_k\coloneqq\skp{z}{e_k}_{L^2(\T_T)} \mbox{ and } z(t)\coloneqq\sum_{k\in\Z}\hat{z}_ke_k(t). \end{align*} Parseval's identity provides the isometry $\norm{z}_{\Leb{2}{\T_T}}=\|\hat{z}\|_\seq{2}$. The following identity \begin{align*} T\norm{z}_{\Leb{4}{\T_T}}^4 = T\int_0^T z(t)^4\,dt = (\hat{z}*\hat{z}*\hat{z}*\hat{z})_0 = \|\hat{z}*\hat{z}\|_\seq{2}^2 = \NORM{\hat{z}}^4 \end{align*} shows that $\NORM{\cdot}$ is indeed a norm on $\Dom{J}$ and it provides the isometric embedding of $\Dom{J}$ into a subspace of $L^4(\T_T,\i\R)$.
By Parseval's equality and H\"older's inequality we see that \begin{align*} \norm{\hat{z}}_{\seq{2}}=\norm{z}_{\Leb{2}{\T_T}}\leq T^\frac{1}{4}\norm{z}_{\Leb{4}{\T_T}}=\NORM{\hat{z}} \end{align*} so that $\Dom{J}$ is indeed a subspace of $l^2$. Finally, for any $\hat{u}, \hat{v}, \hat{w}, \hat{z} \in \Dom{J}$ we see that \begin{align*} \abs{(\hat{u}*\hat{v}*\hat{w}*\hat{z})_0} = T \abs{\int_0^T u(t)v(t)w(t)z(t) \,dt} \leq T\norm{u}_{L^4}\norm{v}_{L^4}\norm{w}_{L^4}\norm{z}_{L^4} = \NORM{\hat{u}}\,\NORM{\hat{v}}\,\NORM{\hat{w}}\,\NORM{\hat{z}}. \end{align*} This finishes the proof of the lemma. \end{proof} For $\frac{T}{2}$-anti-periodic functions $\psi\colon D\to \R$ of the space-time variable $(x,t)\in D$ we use the notation \begin{equation} \label{ansatz_phi} \psi(x,t)=\sum_{k\in{\Z_{odd}}}\hat{\psi}_k(x)e_k(t) = \sum_{k\in{\Z_{odd}}} \frac{1}{k} \Psi_k(x) e_k(t) \end{equation} with $\frac{1}{k}\Psi_k(x)=\hat{\psi}_k(x)\coloneqq\skp{\psi(x,\cdot)}{e_k}_\Leb{2}{\T_T}$. The Parseval identity and the definition of $\NORM{\cdot}$ immediately lead to the following lemma. \begin{lemma} \label{characterization} For $\psi:D\to \R$ as in \eqref{ansatz_phi} the following holds: \begin{itemize} \item[(i)] $\|\psi_x\|_{L^2(D)}^2=\sum_{k} \frac{1}{k^2} \|\Psi_k'\|_{\Leb{2}{\R}}^2$, \item[(ii)] $\|\psi_t\|_{L^2(D)}^2=\omega^2\sum_{k} \|\Psi_k\|_{\Leb{2}{\R}}^2$, \item[(iii)] $T\|\psi_t(0,\cdot)\|_\Leb{4}{\T_T}^4 = \omega^4\NORM{\hat{y}}^4$ where $\hat{y}_k = \Psi_k(0)$ for $k\in {\Z_{odd}}$. \end{itemize} \end{lemma} The next result gives some estimates on the growth of norms of $\Phi_k$. It serves as a preparation for the proof of regularity properties for functions $w$ as in \eqref{ansatz} stated in Lemma~\ref{breathers}. \begin{lemma} \label{norm_estimates} Assume \eqref{C0}, \eqref{spectralcond} and \eqref{FurtherCond_phik}.
Then \begin{equation} \label{phi_k_estimates} \|\Phi_k\|_{L^2(0,\infty)}= O(1), \quad \|\Phi_k'\|_{L^2(0,\infty)}=O(k), \quad \|\Phi_k'\|_{L^\infty(0,\infty)} = O(k^\frac{3}{2}). \end{equation} In particular $|\Phi_k'(0)|=O(k^\frac{3}{2})$. \label{asymptotik} \end{lemma} \begin{proof} The first part of \eqref{phi_k_estimates} is a direct consequence of \eqref{FurtherCond_phik}. \medskip Multiplying $L_k\Phi_k=0$ with $\Phi_k$, $\Phi_k'$ and integrating from $a\geq 0$ to $\infty$ we get \begin{align} \int_a^\infty -\omega^2 k^2 g(x)\Phi_k(x)^2+\Phi_k'(x)^2\,dx & = -\Phi_k(a)\Phi_k'(a), \label{mult1}\\ \int_a^\infty -2 \omega^2 k^2 g(x) \Phi_k(x)\Phi_k'(x)\,dx & = -\Phi_k'(a)^2, \label{mult2} \end{align} respectively. Applying the Cauchy-Schwarz inequality to \eqref{mult2} and using the first part of \eqref{phi_k_estimates} we find \begin{equation} \label{mult3} \|\Phi_k'\|_{L^\infty(0,\infty)}^2 \leq O(k^2) \|\Phi_k'\|_{L^2(0,\infty)} \end{equation} and from \eqref{mult1}, \eqref{mult3} we get \begin{align*} \|\Phi_k'\|_{L^2(0,\infty)}^2 & \leq O(k^2) + \|\Phi_k\|_{L^\infty(0,\infty)} \|\Phi_k'\|_{L^\infty(0,\infty)}\\ & \leq O(k^2) + \|\Phi_k\|_{L^\infty(0,\infty)} O(k) \|\Phi_k'\|_{L^2(0,\infty)}^\frac{1}{2}. \end{align*} The $L^\infty$-assumption in \eqref{FurtherCond_phik} leads to \begin{align*} \|\Phi_k'\|_{L^2(0,\infty)}^2 \leq O(k^2) + O(k)\|\Phi_k'\|_{L^2(0,\infty)}^\frac{1}{2} \leq O(k^2) +C_\epsilon O(k^\frac{4}{3}) + \epsilon \|\Phi_k'\|_{L^2(0,\infty)}^2, \end{align*} where we have used Young's inequality with exponents $4/3$ and $4$. This implies the second inequality in \eqref{phi_k_estimates}. Inserting this into \eqref{mult3} we obtain the third inequality in \eqref{phi_k_estimates}. \end{proof} \begin{lemma}\label{breathers} Assume \eqref{C0}, \eqref{spectralcond} and \eqref{FurtherCond_phik}. 
For $\hat{\alpha}\in\Dom{J}$ and $w:D\to\R$ as in \eqref{ansatz} we have $w_x, w_t \in L^2(D)$, $w_t(0,\cdot)\in\Leb{4}{\T_T}$ and there are constants $C>0$ and $\rho>0$ such that $\abs{w(x,t)}\leq C\e^{-\rho\abs{x}}$. \end{lemma} \begin{rmk} The lemma does not require $\hat{\alpha}$ to be a critical point of $J$. The smoothness and decay of $w$ as in \eqref{ansatz} are simply a consequence of $\hat{\alpha} \in \Dom{J}$ and \eqref{FurtherCond_phik}. \end{rmk} \begin{proof} We use the characterization from Lemma~\ref{characterization}. Let us begin with the estimate for $\norm{\partial_t w}_{\Leb{2}{D}}$. By Lemma~\ref{norm_estimates} we have $\sup_k \|\Phi_k\|_{L^2(0,\infty)}<\infty$ so that \begin{align*} \norm{\partial_t w}_{\Leb{2}{D}}^2 &= 2\omega^2 \sum_k \hat{\alpha}_k^2\norm{\Phi_k}_\Leb{2}{0,\infty}^2 \leq 2\omega^2\Bigl(\sup_k\norm{\Phi_k}_\Leb{2}{0,\infty}\Bigr)^2 \norm{\hat{\alpha}}_\seq{2}^2 \\ & \leq 2\omega^2\Bigl(\sup_k\norm{\Phi_k}_\Leb{2}{0,\infty}\Bigr)^2 \NORM{\hat{\alpha}}^2<\infty, \end{align*} which finishes our first goal. Next we estimate $\norm{\partial_x w}_{\Leb{2}{D}}$. Here we use again Lemma~\ref{norm_estimates} to find \begin{align*} \norm{\partial_x w}_{\Leb{2}{D}}^2 = 2\sum_k\frac{\hat{\alpha}_k^2}{k^2}\norm{\Phi_k'}_\Leb{2}{0,\infty}^2 \leq C\|\hat\alpha\|_{l^2}^2 \leq C\NORM{\hat\alpha}^2<\infty \end{align*} which finishes our second goal. Next we show that $w_t(0,\cdot)\in\Leb{4}{\T_T}$. Using $\Phi_k(0)=1$ we observe that \begin{align*} T\norm{w_t(0,\cdot)}_\Leb{4}{\T_T}^4 =T\int_{0}^T\Bigl(\sum_{k\in{\Z_{odd}}}\i\omega\hat{\alpha}_k\Phi_k(0)e_k(t)\Bigr)^4\dd{t}\\ =\omega^4\NORM{\hat{\alpha}}^4<\infty. \end{align*} Finally we show the uniform-in-time exponential decay of $w$. By construction $w$ is even in $x$, hence we only consider $x>0$.
By \eqref{FurtherCond_phik} we see that \begin{align*} \abs{w(x,t)} \leq\sum_k\frac{\abs{\hat{\alpha}_k}}{\abs{k}}\abs{\Phi_k(x)} \leq\sum_k\frac{\abs{\hat{\alpha}_k}}{\abs{k}}Ce^{-\alpha x} \leq \|\hat\alpha\|_{l^2}\left(\sum_k \frac{1}{k^2}\right)^{1/2} C e^{-\alpha x} \leq \tilde C e^{-\alpha x} \end{align*} which finishes the proof of the lemma. \end{proof} In the following result we will show that minimizers of $J$ on $\Dom{J}$ exist, are solutions of \eqref{euler_lagrange_alpha} and indeed correspond to weak solutions of \eqref{quasi}. \begin{thm}\label{J attains a minimum and its properties} Assume \eqref{C0}, \eqref{spectralcond} and \eqref{FurtherCond_phik}. Then the functional $J$ is well defined on its domain $\Dom{J}$, Fr\'{e}chet-differentiable, bounded from below and attains its negative minimum provided \begin{itemize} \item[(i)] $\gamma<0$ and the sequence $\left(\Phi'_k(0)\right)_{k\in{\N_{odd}}}$ has at least one positive element, or \item[(ii)] $\gamma>0$ and the sequence $\left(\Phi'_k(0)\right)_{k\in{\N_{odd}}}$ has at least one negative element. \end{itemize} For every critical point $\hat{\alpha}\in\Dom{J}$ the corresponding function $w(x,t)\coloneqq\sum_{k\in{\Z_{odd}}}\frac{\hat{\alpha}_k}{k}\Phi_k(\abs{x})e_k(t)$ is a nontrivial weak solution of \eqref{quasi}. \end{thm} \begin{proof} Note that $J(\hat z) = \frac{1}{4} \NORM{\hat{z}}^4 + J_1(\hat z)$, where $J_1(\hat z)= \sum_k a_k \hat z_k^2$ with $a_k = \frac{T\Phi_k'(0)}{\gamma\omega^4 k^2}$. By Lemma~\ref{norm_estimates} the sequence $(a_k)_k$ is converging to $0$ as $|k|\to \infty$, so in particular it is bounded. Due to \eqref{l2_l4} one finds that $J$ is well defined and continuous on $\Dom{J}$, and moreover, that for $\hat{z}\in \Dom{J}$ \begin{align*} J(\hat{z}) \geq\frac{1}{4}\NORM{\hat{z}}^4 -\sup_k |a_k| \sum_k\hat{z}_k^2 \geq\frac{1}{4}\NORM{\hat{z}}^4 - \sup_k |a_k|\NORM{\hat{z}}^2. \end{align*} This implies that $J$ is coercive and bounded from below.
The weak lower semi-continuity of $J$ follows from the convexity and continuity of the map $\hat z \mapsto \NORM{\hat{z}}^4$ and the weak continuity of $J_1$. To see the latter take an arbitrary $\epsilon>0$. Then there is $k_0\in \N$ such that $|a_k|\leq \epsilon$ for $|k|> k_0$ and this implies the inequality \begin{align} \label{estimate_J_1} |J_1(\hat z)- J_1(\hat{y})| \leq \sup_k |a_k| \sum_{|k|\leq k_0} |\hat z_k^2-\hat{y}_k^2| + \epsilon (\|\hat z\|_{l^2}^2 + \|\hat{y}\|_{l^2}^2) \quad\forall\,\hat{z},\hat{y}\in\Dom{J}. \end{align} Since $(\Dom{J},\NORM{\cdot})$ continuously embeds into $l^2$ any weakly convergent sequence in $(\Dom{J},\NORM{\cdot})$ also weakly converges in $l^2$ and in particular pointwise. This pointwise convergence together with the boundedness of the sequence and \eqref{estimate_J_1} yields the weak continuity of $J_1$ and thus the weak lower semi-continuity of $J$. As a consequence, cf. Theorem 1.2 in \cite{struwe}, we get the existence of a minimizer. In order to check that the minimizer is nontrivial it suffices to verify that $J$ attains negative values. Here we distinguish between case (i) and (ii) in the assumptions of the theorem. In case (i) when $\gamma<0$ we find an index $k_0$ such that $\Phi_{k_0}'(0)>0$. In case (ii) when $\gamma>0$ we choose $k_0$ such that $\Phi_{k_0}'(0)<0$. In both cases we obtain that $\Phi_{k_0}'(0)/\gamma<0$. If we set $\hat{y}\coloneqq(\delta_{k,k_0}-\delta_{k,-k_0})_{k\in {\Z_{odd}}}$ then $\hat{y}$ has exactly two non-vanishing entries, namely $+1$ at $k_0$ and $-1$ at $-k_0$. Hence $\hat{y}\in\Dom{J}$. Using the property $\Phi_{k_0}'=\Phi_{-k_0}'$ we find for $t\in \R$ \begin{align*} J(t \hat{y})=t^4\frac{1}{4}\NORM{\hat{y}}^4 +2t^2\frac{T\Phi'_{k_0}(0)}{\gamma\omega^4k_0^2} \end{align*} which is negative by the choice of $k_0$ provided $t>0$ is sufficiently small. Thus, $\inf_{\Dom{J}}J<0$ and every minimizer $\hat\alpha$ is nontrivial.
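In fact, $J(t\hat{y})$ can be evaluated explicitly: the convolution $\hat{y}*\hat{y}$ has exactly the entries $(\hat{y}*\hat{y})_{2k_0}=(\hat{y}*\hat{y})_{-2k_0}=1$ and $(\hat{y}*\hat{y})_0=-2$, so that $\NORM{\hat{y}}^4=\norm{\hat{y}*\hat{y}}_{\seq{2}}^2=6$ and
\begin{align*}
J(t\hat{y})=\frac{3}{2}t^4+2t^2\frac{T\Phi'_{k_0}(0)}{\gamma\omega^4k_0^2}<0
\qquad\mbox{for}\quad 0<t^2<-\frac{4T\Phi'_{k_0}(0)}{3\gamma\omega^4k_0^2},
\end{align*}
the bound on the right being positive by the choice of $k_0$.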
Next we show for every critical point $\hat\alpha$ of $J$ that $w(x,t)\coloneqq\sum_{k\in{\Z_{odd}}}\frac{\hat{\alpha}_k}{k}\Phi_k(\abs{x})e_k(t)$ is a weak solution of \eqref{quasi}. The regularity properties $w\in\SobH{1}{{\R\times\T_T}}$, $\partial_tw(0,\cdot)\in\Leb{4}{\T_T}$ and the exponential decay have already been shown in Lemma~\ref{breathers}. We skip the standard proof that $J\in\Cont{1}{\Dom{J},\R}$ and that its Fr\'{e}chet-derivative is given by \eqref{frechet}. We will show that \eqref{WeakEquation for (quasi)} holds for any $\psi$ as in \eqref{ansatz_phi} with even functions $\Psi_k\in H^1(\R)$, $\Psi_k=-\Psi_{-k}$ such that $\psi_x, \psi_t \in L^2(D)$ and $\psi_t(0,\cdot)\in L^4(\T_T)$ as described in Lemma~\ref{characterization}. We begin by deriving expressions and estimates for the functionals \begin{align*} H_1(\psi) = \int_D g(x) w_t \psi_t\,d(x,t), \quad H_2(\psi) = \int_D w_x \psi_x \,d(x,t), \quad H_3(\psi) = \int_{0}^{T} w_t(0,t)^3\psi_t(0,t)\,dt. \end{align*} In a first step we assume that the sum in \eqref{ansatz_phi} is finite in order to justify the exchange of summation and integration in the following.
Then, starting with $H_1$ we find \begin{align*} H_1(\psi) &= -\omega^2 \int_D g(x)\sum_{k,l} \hat{\alpha}_k\Phi_k(\abs{x}) \Psi_l(|x|) e_k(t)e_l(t) \dd{(x,t)} \\ &=-2\omega^2 \sum_k \hat{\alpha}_k\int_0^\infty g(x)\Phi_k(x)\Psi_{-k}(x)\dd{x}\\ &=2\omega^2 \sum_k \hat{\alpha}_k\int_0^\infty g(x)\Phi_k(x)\Psi_k(x)\dd{x}, \\ |H_1(\psi)| &\leq 2\omega^2 \|g\|_{L^\infty(\R)} \Bigl(\sum_k \hat{\alpha}_k^2 \|\Phi_k\|^2_{L^2(0,\infty)}\Bigr)^\frac{1}{2} \Bigl(\sum_k\|\Psi_k\|_{L^2(0,\infty)}^2\Bigr)^\frac{1}{2}= \|g\|_{L^\infty(\R)} \|w_t\|_{L^2(D)}\|\psi_t\|_{L^2(D)} \end{align*} and similarly for $H_2$ we find using \eqref{eq:bloch} \begin{align*} H_2(\psi) &= \int_D \sum_{k,l} \frac{\hat{\alpha}_k}{k}\Phi_k'(\abs{x}) \frac{1}{l}\Psi_l'(|x|) e_k(t)e_l(t) \dd{(x,t)} \\ &= 2\sum_k \frac{\hat{\alpha}_k}{-k^2} \int_0^\infty \Phi_k'(x)\Psi_{-k}'(x)\dd{x}\\ &= 2\sum_k \frac{\hat{\alpha}_k}{k^2} \int_0^\infty \Phi_k'(x)\Psi_k'(x)\dd{x} \\ &= 2\omega^2\sum_k \hat{\alpha}_k \int_0^\infty g(x) \Phi_k(x)\Psi_k(x)\dd{x} - 2\sum_k \frac{\hat{\alpha}_k}{k^2}\Phi_k'(0)\Psi_k(0), \\ |H_2(\psi)| &\leq 2\Bigl(\sum_k \frac{\hat{\alpha}_k^2}{k^2}\|\Phi_k'\|_{L^2(0,\infty)}^2 \Bigr)^\frac{1}{2} \Bigl(\sum_k \frac{1}{k^2}\|\Psi_k'\|_{L^2(0,\infty)}^2 \Bigr)^\frac{1}{2} = \|w_x\|_{L^2(D)}\|\psi_x\|_{L^2(D)}. \end{align*} Moreover, considering $H_3$ and setting $\hat{y}_k \coloneqq \Psi_k(0)$ for $k\in {\Z_{odd}}$ one sees \begin{align*} H_3(\psi) &= \omega^4 \int_{0}^{T}\Bigl(\sum_k \hat{\alpha}_ke_k(t)\Bigr)^3\Bigl(\sum_l \Psi_l(0)e_l(t)\Bigr)\dd{t} \\ &= \frac{\omega^4}{T} (\hat{\alpha}*\hat{\alpha}*\hat{\alpha}*\hat{y})_0,\\ \abs{H_3(\psi)} & \leq \frac{\omega^4}{T} \NORM{\hat{\alpha}}^3 \NORM{\hat{y}} = \norm{w_t(0,\cdot)}_\Leb{4}{\T_T}^3\norm{\psi_t(0,\cdot)}_\Leb{4}{\T_T}. \end{align*} Hence $H_1, H_2$ and $H_3$ are bounded linear functionals of the variable $\psi$ as in \eqref{ansatz_phi} with $\psi_x, \psi_t \in L^2(D)$ and $\psi_t(0,\cdot)\in \Leb{4}{\T_T}$. 
For such $\psi$ we use the above formulae for $H_1, H_2, H_3$ and compute the linear combination \begin{align*} -H_1(\psi)+H_2(\psi)-\gamma H_3(\psi) = -2\sum_{k} \frac{\hat{\alpha}_k}{k^2} \Phi_k'(0)\Psi_k(0) - \frac{\gamma \omega^4}{T} (\hat{\alpha}*\hat{\alpha}*\hat{\alpha}*\hat{y})_0=0 \end{align*} due to the Euler-Lagrange equation for the functional $J$, i.e., the vanishing of $J'(\hat{\alpha})[\hat{y}]$ in \eqref{frechet} for all $\hat{y} \in \Dom{J}$. The last equality means that $w$ is a weak solution of \eqref{quasi}. \end{proof} \section{Further Regularity} \label{further_regularity} Here we prove Theorem~\ref{w is even more regular}. We observe first that in the example of a periodic step-potential in Theorem~\ref{w is a weak solution in expl exa} we find that not only $\Phi'_k(0)=O(k^\frac{3}{2})$ holds (as Lemma~\ref{norm_estimates} shows) but even $\Phi'_k(0)=O(k)$ is satisfied. It is exactly this weaker growth that we can exploit in order to prove additional smoothness of the solutions of \eqref{quasi}. We begin by defining for $\nu>0$ the Banach space of sequences \begin{align*} \seqsobh{\nu}\coloneqq\Bigl\{\hat{z}\in\seq{2} \mbox{ s.t. } \|\hat{z}\|_{h^\nu}^2 \coloneqq \sum_k (1+k^2)^{\nu}\abs{\hat{z}_k}^2<\infty\Bigr\}. \end{align*} Moreover, we use the isometric isomorphism between $h^\nu$ and \begin{align*} H^{\nu}(\T_T) = \Bigl\{z(t)=\sum_k \hat{z}_k e_k(t) \text{ s.t. } \hat{z}\in\seqsobh{\nu} \Bigr\} \end{align*} by setting $\|z\|_{H^\nu} \coloneqq \|\hat z\|_{h^\nu}$. We also use the Morrey embedding $\SobH{1+\nu}{\T_T}\to \Cont{0,\frac{1}{2}+\nu}{\T_T}$ for $\nu \in (0,1/2)$ and the following embedding: $\Cont{0,\nu}{\T_T}\to \SobH{\tilde\nu}{\T_T}$ for $0<\tilde\nu<\nu\leq 1$, cf. Lemma~\ref{unusual_embedd} in the Appendix. The latter embedding means that $\hat{z} \in h^{\tilde\nu}$ provided $z\in\Cont{0,\nu}{\T_T}$ and $0<\tilde\nu<\nu\leq 1$. 
\begin{thm}\label{smoothness alpha} Assume \eqref{C0}, \eqref{spectralcond}, \eqref{FurtherCond_phik} and in addition $\Phi'_k(0) = O(k)$. For every $\hat{\alpha}\in \Dom{J}$ with $J'(\hat{\alpha})=0$ we have $\hat{\alpha}\in h^\nu$ for every $\nu\in (0,1/4)$. \end{thm} \begin{proof} Let $\hat{\alpha}\in \Dom{J}$ with $J'(\hat{\alpha})=0$. Recall from \eqref{euler_lagrange_alpha} that \begin{equation} \label{el} (\hat{\alpha}*\hat{\alpha}*\hat{\alpha})_k = \hat{\eta}_k \hat{\alpha}_k \quad\mbox{ where }\quad \hat{\eta}_k \coloneqq \frac{2T\Phi'_k(0)}{\gamma\omega^4k^2} \mbox{ for } k \in {\Z_{odd}} \end{equation} so that $|\hat{\eta}_k| \leq C/\abs{k}$. If we define the convolution of two $T$-periodic functions $f,h\in\Leb{2}{\T_T}$ on the torus $\T_T$ as \begin{align*} \left(f*h\right)(t)\coloneqq\frac{1}{\sqrt{T}}\int_{0}^{T}f(s)h(t-s)\dd{s} \end{align*} and if we set \begin{align*} \alpha(t) \coloneqq \sum_k \hat{\alpha}_k e_k(t), \quad \eta (t) \coloneqq \sum_k \hat{\eta}_k e_k(t) \end{align*} then the equation \begin{equation} \label{el_equiv} \alpha^3=\alpha*\eta \end{equation} for the $T$-periodic function $\alpha\in\Leb{4}{\T_T}$ is equivalent to the equation \eqref{el} for the sequence $\hat\alpha\in \Dom{J}$. We will analyze \eqref{el_equiv} with a bootstrap argument. \emph{Step 1:} We show that $\alpha \in\Cont{0,\frac{1}{6}}{\T_T}$. The right hand side of \eqref{el_equiv} is an $\SobH{1}{\T_T}$-function since \begin{align*} \norm{\alpha*\eta}_\SobH{1}{\T_T}^2 =\norm{\hat{\alpha}\hat{\eta}}_\seqsobh{1}^2 \leq \sum_{k\in{\Z_{odd}}} (1+k^2)\hat{\alpha}_k^2\frac{C^2}{k^2} \leq 2C^2 \|\hat{\alpha}\|_{l^2}^2 <\infty. \end{align*} Therefore, using \eqref{el_equiv} we see that $\alpha^3\in H^1(\T_T)$ and by the Morrey embedding that $\alpha^3\in\Cont{0,\frac{1}{2}}{\T_T}$. Since the inverse of the mapping $x\mapsto x^3$ is given by $x\mapsto |x|^{-\frac{2}{3}}x$, which is a $\cont{0,\frac{1}{3}}(\R)$-function, we obtain $\alpha\in\Cont{0,\frac{1}{6}}{\T_T}$.
\emph{Step 2:} We fix $q\in (0,1)$ and show that if $\alpha\in \Cont{0,\nu_n}{\T_T}$ for some $\nu_n\in (0,1/2)$ solves \eqref{el_equiv} then $\alpha \in \Cont{0,\nu_{n+1}}{\T_T}$ with $\nu_{n+1}= \frac{q\nu_n}{3}+\frac{1}{6}$. For the proof we iterate the process from Step 1 and we start with $\alpha\in\Cont{0,\nu_n}{\T_T}$. Then, according to Lemma~\ref{unusual_embedd} of the Appendix, $\alpha\in\SobH{q\nu_n}{\T_T}$ and hence $\hat{\alpha} \in \seqsobh{q\nu_n}$. Then as before the convolution of $\alpha$ with $\eta$ generates one more weak derivative, namely \begin{align*} \norm{\alpha*\eta}_\SobH{1+q\nu_n}{\T_T}^2 =\norm{\hat{\alpha}\hat{\eta}}_\seqsobh{1+q\nu_n}^2 \leq \sum_k(1+k^2)^{1+q\nu_n}\hat{\alpha}_k^2\frac{C^2}{k^2} \leq 2C^2 \|\hat{\alpha}\|_{h^{q\nu_n}}^2<\infty. \end{align*} Hence by \eqref{el_equiv} we conclude $\alpha^3\in\SobH{1+q\nu_n}{\T_T}$ and by the Morrey embedding $\alpha^3\in\Cont{0,\frac{1}{2}+q\nu_n}{\T_T}$ provided $q\nu_n \in (0,1/2)$. As in Step 1 this implies $\alpha\in\Cont{0,\nu_{n+1}}{\T_T}$ with $\nu_{n+1} =\frac{1}{6}+\frac{q\nu_n}{3}$. \medskip Starting with $\nu_1=1/6$ from Step 1 we see by Step 2 that $\nu_n\nearrow\frac{1}{2(3-q)}$. Since $q\in (0,1)$ can be chosen arbitrarily close to $1$ this finishes the proof. \end{proof} With this preparation the proof of Theorem~\ref{w is even more regular} is now immediate. \begin{proof}[Proof of Theorem~\ref{w is even more regular}] Let $w(x,t) = \sum_{k\in {\Z_{odd}}} \frac{\hat{\alpha}_k}{k} \Phi_k(|x|)e_k(t)$ with $\hat{\alpha} \in \Dom{J}$ such that $J'(\hat{\alpha})=0$. Recall from assumption \eqref{FurtherCond_phik} that $C\coloneqq \sup_k\norm{\Phi_k}_\Leb{2}{0,\infty}^2<\infty$. Likewise, from Lemma~\ref{norm_estimates} we have $\norm{\Phi_k'}_\Leb{2}{0,\infty}^2 \leq \tilde Ck^2$ for all $k\in {\Z_{odd}}$ and some $\tilde C>0$.
Therefore, using Theorem~\ref{smoothness alpha} we find for all $\nu<\frac{1}{4}$ \begin{align*} \norm{\partial_t^{1+\nu} w}_\Leb{2}{D}^2 =2\omega^{2+2\nu}\sum_k\hat{\alpha}_k^2|k|^{2\nu}\norm{\Phi_k}_\Leb{2}{0,\infty}^2 \leq2\omega^{2+2\nu}C \|\hat{\alpha}\|_{h^\nu}^2 <\infty \end{align*} and likewise \begin{align*} \norm{\partial_t^\nu w_x}_\Leb{2}{D}^2 =2\omega^{2\nu}\sum_k\hat{\alpha}_k^2|k|^{2\nu-2}\norm{\Phi_k'}_\Leb{2}{0,\infty}^2 \leq 2\omega^{2\nu}\tilde C\|\hat{\alpha}\|_{h^\nu}^2 <\infty. \end{align*} This establishes the claim. \end{proof} \section{Existence of Infinitely Many Breathers} \label{infinitely_many_breathers} In this section we extend Theorem~\ref{w is a weak solution general} by the following multiplicity result. \begin{thm}\label{multiplicity abstract} Assume \eqref{C0}, \eqref{spectralcond} and \eqref{FurtherCond_phik}. Then \eqref{quasi} has infinitely many nontrivial, $T$-periodic weak solutions $w$ in the sense of Definition~\ref{Defn of weak Sol to (quasi)} with $T=\frac{2\pi}{\omega}$ provided \begin{itemize} \item[(i)] $\gamma<0$ and there exists an integer $l_-\in {\N_{odd}}$ such that for infinitely many $j\in \N$ the sequence $\Bigl(\Phi'_{m\cdot l_-^j}(0)\Bigr)_{m\in{\N_{odd}}}$ has at least one positive element, \item[(ii)] $\gamma>0$ and there exists an integer $l_+\in {\N_{odd}}$ such that for infinitely many $j\in \N$ the sequence $\Bigl(\Phi'_{m\cdot l_+^j}(0)\Bigr)_{m\in{\N_{odd}}}$ has at least one negative element. \end{itemize} \end{thm} \begin{rmk} \label{remark_infinitely} In the above Theorem, conditions \eqref{spectralcond} and \eqref{FurtherCond_phik} can be weakened: instead of requiring them for all $k\in {\N_{odd}}$ it suffices to require them for $k\in l_-^j{\N_{odd}}$, $k\in l_+^j{\N_{odd}}$ respectively. We prove this observation together with the one in Remark~\ref{remark_Dr} at the end of this section.
\end{rmk} We start with an investigation of the types of symmetries which are compatible with our equation. The Euler-Lagrange equation \eqref{euler_lagrange_alpha} for critical points $\hat{\alpha}\in\Dom{J}$ of $J$ takes the form $(\hat{\alpha}*\hat{\alpha}*\hat{\alpha})_k = \hat{\eta}_k \hat{\alpha}_k$ with $\hat{\eta}_k \coloneqq \frac{2T\Phi'_k(0)}{\gamma\omega^4k^2}$ for $k \in {\Z_{odd}}$. Next we describe subspaces of $\Dom{J}$ which are invariant under triple convolution and pointwise multiplication with $(\hat\eta_k)_{k\in {\Z_{odd}}}$. It turns out that these subspaces are made of sequences $\hat{z}$ where only the entries with index congruent to $r$ modulo $2r$ are occupied. \begin{defn} For $r\in {\N_{odd}}, p \in {\N_{even}}$ with $r<p$ let \begin{align*} \Dom{J}_{r,p} = \{\hat{z}\in \Dom{J}:\forall\,k\in\Z \mbox{ with } k\not\equiv r \bmod p \colon\hat{z}_k=0 \}. \end{align*} \end{defn} \begin{lemma} For $r\in {\N_{odd}}, p\in {\N_{even}}$ with $r<p$ and $p\not = 2r$ we have $\Dom{J}_{r,p}=\{0\}$. \end{lemma} \begin{proof} Let $\hat{z}\in \Dom{J}_{r,p}$. For all $k\not \in r+p\Z$ we have $\hat{z}_k=0$ by definition of $\Dom{J}_{r,p}$. Let therefore $k=r+pl_1$ for some $l_1\in \Z$. Then $-k=-r-pl_1 \not \in r+p\Z$ because otherwise $2r=-p(l_1+l_2)=p|l_1+l_2|$ for some $l_2\in \Z$. Since by assumption $p>r$ we get $|l_1+l_2|<2$. But clearly $|l_1+l_2|\not \in \{0,1\}$ since $r\not= 0$ and $p\not = 2r$ by assumption. By this contradiction we have shown $-k\not \in r+p\Z$ so that necessarily $0=\hat z_{-k}=-\hat z_{k}$. This shows $\hat z=0$. \end{proof} In the following we continue by only considering $\mathcal{D}_r \coloneqq\Dom{J}_{r,2r}$ for $r\in{\N_{odd}}$. \begin{prop} \label{zwei_eingenschaften} Let $r\in{\N_{odd}}$. \begin{itemize} \item[(i)] The elements $\hat z \in \mathcal{D}_r$ are exactly those elements of $\Dom{J}$ which generate $\frac{T}{2r}$-antiperiodic functions $\sum_{k\in {\Z_{odd}}} \frac{\hat z_k}{k}\Phi_k(x)e_k(t)$.
\item[(ii)] If $\hat z\in \mathcal{D}_r$ then $(\hat z*\hat z*\hat z)_k=0$ for all $k\not\in r+2r\Z$. \end{itemize} \end{prop} \begin{proof} (i) An element $\hat z\in \Dom{J}$ generates a $\frac{T}{2r}$-antiperiodic function $z(x,t)= \sum_{k\in {\Z_{odd}}} \frac{\hat z_k}{k}\Phi_k(x)e_k(t)$ if and only if $z(x,t+\frac{T}{2r})=-z(x,t)$. Comparing the Fourier coefficients we see that this is the case if and only if for all $k\in{\Z_{odd}}$ we have $\hat z_k\bigl(\exp(\frac{\i\omega kT}{2r})+1\bigr)=0$, i.e., either $k\in r+2r\Z$ or $\hat z_k=0$. This is exactly the condition that $\hat z \in \mathcal{D}_r$. \\ (ii) Let $\hat z\in \mathcal{D}_r$ and assume that there is $k\in\Z$ such that $0\not = (\hat z*\hat z*\hat z)_k=\sum_{l,m} \hat z_l\hat z_{m-l}\hat z_{k-m}$. So there are $l_0, m_0\in {\Z_{odd}}$ such that $\hat z_{l_0}, \hat z_{m_0-l_0}, \hat z_{k-m_0}\not =0$ which means by the definition of $\mathcal{D}_r$ that $l_0, m_0-l_0, k-m_0\in r+2r\Z$. Thus $k = l_0+m_0-l_0+k-m_0 \in 3r+2r\Z = r+2r\Z$. \end{proof} \begin{proof}[Proof of Theorem~\ref{multiplicity abstract}] We give the proof in case (i); for case (ii) the proof only needs a trivial modi\-fication. Let $r=l^j$ where $j$ is an index such that the sequence $\Bigl(\Phi'_{k\cdot l^j}(0)\Bigr)_{k\in {\N_{odd}}}$ has a positive element (we have changed the notation from $l_-$ to $l$ for the sake of readability). Since $\mathcal{D}_r$ is a closed subspace of $\Dom{J}$ we have as before in Theorem~\ref{J attains a minimum and its properties} the existence of a minimizer $\hat\alpha^{(r)}\in \mathcal{D}_r$, i.e., $J(\hat\alpha^{(r)})=\min_{\mathcal{D}_r}J<0$. Moreover, $\hat\alpha^{(r)}$ satisfies the restricted Euler-Lagrange-equation \begin{equation} 0=J'\left(\hat\alpha^{(r)}\right)\left[\hat{x}\right]=\left(\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat{x}\right)_0+\frac{2T}{\gamma\omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat\alpha^{(r)}_k\hat{x}_k \qquad\forall\,\hat{x}\in \mathcal{D}_r.
\label{frechet_symmetric} \end{equation} We need to show that \eqref{frechet_symmetric} holds for every $\hat z\in \Dom{J}$. If for an arbitrary $\hat{z}\in\Dom{J}$ we define $\hat{x}_k\coloneqq\hat{z}_k$ for $k\in r+2r\Z$ and $\hat{x}_k\coloneqq0$ else then $\hat{x}\in \mathcal{D}_r$. If we furthermore define $\hat{y}\coloneqq\hat{z}-\hat{x}$ then $\hat{y}_k=0$ for all $k\in r+2r\Z$. This implies in particular that \begin{align*} \sum_k\frac{\Phi'_k(0)}{k^2}\hat\alpha^{(r)}_k\hat{y}_k = 0 \end{align*} and by using (ii) of Proposition~\ref{zwei_eingenschaften} also \begin{align*} \left(\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat y\right)_0=\sum_{k}\left(\hat\alpha^{(r)}*\hat\alpha^{(r)}*\hat\alpha^{(r)}\right)_k\hat{y}_{-k}=0. \end{align*} This implies $J'(\hat\alpha^{(r)})[\hat{y}]=0$ and since by \eqref{frechet_symmetric} also $J'(\hat\alpha^{(r)})[\hat{x}]=0$ we have succeeded in proving that $J'(\hat\alpha^{(r)})=0$. \medskip It remains to show the multiplicity result. For this purpose we only consider $r=l^{j_m}$ for $j_m\to \infty$ as $m\to\infty$ where $j_m$ is an index such that the sequence $\Bigl(\Phi'_{l^{j_m}k}(0)\Bigr)_{k\in {\N_{odd}}}$ has a positive element. First we observe that $\mathcal{D}_{l^{j_m}}\supsetneq \mathcal{D}_{l^{j_{m+1}}}$. Assume for contradiction that the set $\{\hat\alpha^{(l^{j_m})} \mid m\in\N\}$ is finite. Then we have a subsequence $(j_{m_n})_{n\in \N}$ such that $\hat\alpha = \hat\alpha^{(l^{j_{m_n}})}$ is constant in $n$. But then \begin{align*} \hat\alpha \in \bigcap_{n\in\N} \mathcal{D}_{l^{j_{m_n}}} = \bigcap_{j\in\N} \mathcal{D}_{l^j}=\{0\}, \end{align*} in contradiction to $J(\hat\alpha)<0$. This contradiction shows the existence of infinitely many distinct critical points of the functional $J$ and finishes the proof of the theorem. \end{proof} \begin{proof}[Proof of Remark~\ref{remark_Dr} and Remark~\ref{remark_infinitely}] The proof of Theorem~\ref{multiplicity abstract} works on the basis that it suffices to minimize the functional $J$ on $\mathcal{D}_r$.
In this way a $\frac{T}{2r}$-antiperiodic breather is obtained. For $\hat z\in \mathcal{D}_r$ only the entries $\hat z_k$ with $k\in r{\Z_{odd}}$ may be nonzero while all other entries vanish. Therefore, \eqref{spectralcond} and \eqref{FurtherCond_phik} and the values of $\Phi_k'(0)$ are only relevant for $k\in r{\Z_{odd}}$. In the special case of Remark~\ref{remark_infinitely} we take $r=l_\pm^j$. \end{proof} \section{Approximation by Finitely Many Harmonics}\label{approximation} Here we give some analytical results on finite dimensional approximation of the breathers obtained in Theorem~\ref{w is a weak solution general}. The finite dimensional approximation is obtained by cutting off the ansatz \eqref{ansatz} and only considering harmonics of order $|k|\leq N$. Here a summand in the series \eqref{ansatz} of the form $\Phi_k(|x|)e_k(t)$ is called a harmonic since it satisfies the linear wave equation in \eqref{nonlinNeuBVP}. We will prove that $J$ restricted to spaces $\Dom{J^{(N)}}$ of cut-off ansatz functions still attains its minimum and that the sequence of the corresponding minimizers converges up to a subsequence to a minimizer of $J$ on $\Dom{J}$. \begin{defn} Let $N\in{\N_{odd}}$. Define \begin{align*} J^{(N)}\coloneqq J|_\Dom{J^{(N)}},\qquad \Dom{J^{(N)}}\coloneqq\left\lbrace \hat{z}\in\Dom{J} ~\big|~ \forall\,\abs{k}>N\colon\hat{z}_k=0 \right\rbrace \end{align*} \end{defn} \begin{lemma} \label{lemma_approximation} Under the assumptions of Theorem~\ref{w is a weak solution general} the following holds: \begin{enumerate} \item[(i)] For every $N\in {\N_{odd}}$ sufficiently large there exists $\hat{\alpha}^{(N)}\in\Dom{J^{(N)}}$ such that $J(\hat{\alpha}^{(N)})=\inf J^{(N)}<0$ and $\lim_{N\to\infty}J(\hat{\alpha}^{(N)})=\inf J$.
\item[(ii)] There is $\hat{\alpha}\in\Dom{J}$ such that up to a subsequence (again denoted by $(\hat{\alpha}^{(N)})_N$) we have \begin{align*} \hat{\alpha}^{(N)}\to\hat{\alpha} \qquad \text{ in }~\Dom{J} \end{align*} and $J(\hat{\alpha})=\inf J$. \end{enumerate} \end{lemma} \begin{rmk} The Euler-Lagrange-equation for $\hat{\alpha}^{(N)}$ reads: \begin{align*} 0=J'\left(\hat{\alpha}^{(N)}\right)[\hat{y}]=\left(\hat{\alpha}^{(N)}*\hat{\alpha}^{(N)}*\hat{\alpha}^{(N)}*\hat{y}\right)_0+\frac{2T}{\gamma\omega^4}\sum_k\frac{\Phi'_k(0)}{k^2}\hat{\alpha}^{(N)}_k\hat{y}_k \qquad \forall\,\hat{y}\in\Dom{J^{(N)}}. \end{align*} This amounts to satisfying \eqref{WeakEquation for (quasi)} in Definition~\ref{Defn of weak Sol to (quasi)} for functions $\psi(x,t)= \sum_{k\in {\Z_{odd}}, |k|\leq N} \hat \psi_k(x) e_k(t)$ with $\hat\psi_k\in H^1(\R)$. Clearly, in general $\hat{\alpha}^{(N)}$ is not a critical point of $J$. \end{rmk} \begin{proof} (i) We choose $N\in{\N_{odd}}$ so large that the finite sequence $\left(\Phi_k'(0)\right)_{\abs{k}\leq N}$ contains an element with the sign assumed in Theorem~\ref{w is a weak solution general}. The restriction of $J$ to the $\frac{N+1}{2}$-dimensional space $\Dom{J^{(N)}}$ preserves coercivity. The continuity of $J^{(N)}$ therefore guarantees the existence of a minimizer $\hat\alpha^{(N)}\in\Dom{J^{(N)}}$. As before we see that $J(\hat\alpha^{(N)})=\inf J^{(N)}<0$, so in particular $\hat{\alpha}^{(N)}\neq0$. Next we observe that $\Dom{J^{(N)}}\subset\Dom{J}$, i.e., $J(\hat{\alpha}^{(N)})\geq\inf J= J(\hat\beta)$ for a minimizer $\hat{\beta}\in\Dom{J}$ of $J$. Let us define $\hat{\beta}^{(N)}_k=\hat{\beta}_k$ for $\abs{k}\leq N$ and $\hat{\beta}^{(N)}_k=0$ otherwise. Since the Fourier series $\beta(t) = \sum_k \hat\beta_k e_k(t)$ converges in $L^4(\T_T)$, cf. Theorem~4.1.8 in \cite{GrafakosClass}, we see that $\hat{\beta}^{(N)}\rightarrow\hat{\beta}$ in $\Dom{J}$.
By the minimality of $\hat{\alpha}^{(N)}\in \Dom{J^{(N)}}$ and continuity of $J$ we conclude \begin{align*} \inf_{\Dom{J}}J\leq J(\hat{\alpha}^{(N)})\leq J(\hat{\beta}^{(N)})\longrightarrow J(\hat{\beta})=\inf_{\Dom{J}}J. \end{align*} Hence $\lim_{N\to\infty} J(\hat{\alpha}^{(N)})=\inf J$ as claimed. \medskip \noindent (ii) Since $\Dom{J^{(N)}}\subset\Dom{J^{(N+1)}}\subset\Dom{J}$ we see that $J(\hat{\alpha}^{(N)})\geq J(\hat{\alpha}^{(N+1)})\geq\inf J$ so that in particular the sequence $(J(\hat{\alpha}^{(N)}))_N$ is bounded. By coercivity of $J$ we conclude that $(\hat{\alpha}^{(N)})_N$ is bounded in $\Dom{J}$ so that there is $\hat{\alpha}\in\Dom{J}$ and a subsequence (again denoted by $(\hat{\alpha}^{(N)})_N$) such that \begin{align*} \hat{\alpha}^{(N)}\rightharpoonup\hat{\alpha} \qquad \text{ in }~\Dom{J}. \end{align*} By part (i) and weak lower semi-continuity of $J$ we obtain \begin{align*} \inf J=\lim_{N\to\infty}J(\hat{\alpha}^{(N)})\geq J(\hat{\alpha}), \end{align*} i.e., $\hat{\alpha}$ is a minimizer of $J$. Recall that $J(\cdot)= \frac{1}{4}\NORM{\cdot}^4+J_1(\cdot)$ where $J_1$ is weakly continuous, cf. proof of Theorem~\ref{J attains a minimum and its properties}. Therefore, since $\hat{\alpha}^{(N)}\rightharpoonup\hat{\alpha}$ and $J(\hat{\alpha}^{(N)})\to J(\hat{\alpha})$ we see that $\NORM{\hat\alpha^{(N)}}\to \NORM{\hat\alpha}$ as $N\to \infty$. Since $\Dom{J}$ is strictly uniformly convex, we obtain the norm-convergence of $(\hat\alpha^{(N)})_N$ to $\hat\alpha$. \end{proof} \section{Appendix}\label{appendix} \subsection{Details on exponentially decreasing fundamental solutions for step potentials}\label{details_example_step} Here we consider a second-order ordinary differential operator \begin{align*} L_k \coloneqq - \frac{d^2}{dx^2} -k^2\omega^2 g(x) \end{align*} with $g$ as in Theorem~\ref{step}. Clearly, $L_k$ is a self-adjoint operator on $L^2(\R)$ with domain $H^2(\R)$. Moreover, $\sigma_{ess}(L_k)=[k^2\omega^2 a,\infty)$. 
By the assumption on $\omega$ we have \begin{align*} \sqrt{b}\omega c \frac{2}{\pi} = \frac{p}{q} \mbox{ with } p,q \in {\N_{odd}}. \end{align*} Hence, with $k\in q{\N_{odd}}$, $k\sqrt{b}\omega c$ is an odd multiple of $\pi/2$. In the following we shall see that $0$ is not an eigenvalue of $L_k$ for $k\in q{\N_{odd}}$ so that \eqref{spectralcond} as in Remark~\ref{remark_infinitely} is fulfilled. A potential eigenfunction $\phi_k$ for the eigenvalue $0$ would have to look like \begin{equation} \label{ansatz_ef} \phi_k(x) = \begin{cases} -A\sin(k\omega\sqrt{b}c) e^{k\omega\sqrt{a}(x+c)}, & ~\phantom{-c<}x<-c,\\ A\sin(k\omega \sqrt{b} x)+B\cos(k\omega\sqrt{b}x), & -c<x<c,\\ A\sin(k\omega\sqrt{b}c) e^{-k\omega\sqrt{a}(x-c)}, & \phantom{-}c<x \end{cases} \end{equation} with $A,B\in \R$ to be determined. Note that we have used $\cos(k\omega\sqrt{b}c)=0$. The $C^1$-matching of $\phi_k$ at $x=\pm c$ leads to the two equations \begin{align*} -Bk\omega\sqrt{b}\sin(k\omega\sqrt{b}c) &= -Ak\omega\sqrt{a}\sin(k\omega\sqrt{b}c),\\ Bk\omega\sqrt{b}\sin(k\omega\sqrt{b}c) &= -Ak\omega\sqrt{a}\sin(k\omega\sqrt{b}c) \end{align*} and since $\sin(k\omega\sqrt{b}c)=\pm 1$ this implies $A=B=0$, so that there is no eigenvalue $0$ of $L_k$. Next we need to find the fundamental solution $\phi_k$ of $L_k$ that decays to zero at $+\infty$ and is normalized by $\phi_k(0)=1$. Here we can use the same ansatz as in \eqref{ansatz_ef} and just ignore the part of $\phi_k$ on $(-\infty,0)$. Now the normalization $\phi_k(0)=1$ leads to $B=1$ and the $C^1$-matching at $x=c$ leads to $A=\sqrt{\frac{b}{a}}B=\sqrt{\frac{b}{a}}$ so that the decaying fundamental solution is completely determined. We find that \begin{align*} \abs{\phi_k(x)} \leq \left\{ \begin{array}{ll} A+B, & 0\leq x \leq c \vspace{\jot}\\ A, & c<x\leq 2c \vspace{\jot}\\ A e^{-\frac{1}{2} k\omega\sqrt{a}x}, & x>2c \end{array}\right.
\end{align*} so that $|\phi_k(x)|\leq (A+B)e^{-\rho_k x}\leq Me^{-\rho x}$ on $[0,\infty)$ with $\rho_k = \frac{1}{2} k\omega\sqrt{a}$, $\rho=\frac{1}{2}\omega\sqrt{a}$ and $M=A+B$. This shows that also \eqref{FurtherCond_phik} holds. Finally, since $\phi_k'(0)=\frac{bk\omega}{\sqrt{a}}>0$ the existence of infinitely many breathers can only be shown for $\gamma<0$. At the same time, due to $|\phi_k'(0)|=O(k)$, Theorem~\ref{w is even more regular} applies. \subsection{Details on Bloch Modes for periodic step potentials} \label{explicit example Bloch Modes_WR} Here we consider a second-order periodic ordinary differential operator \begin{align*} L \coloneqq - \frac{d^2}{dx^2} + V(x) \end{align*} with $V\in L^\infty(\R)$ which we assume to be even and $2\pi$-periodic. Moreover, we assume that $0$ does not belong to the spectrum of $L:H^2(\R)\subset L^2(\R)\to L^2(\R)$. We first describe what Bloch modes are and why they exist. Later we show that this is the situation which occurs in Theorem~\ref{w is a weak solution general} and we verify conditions \eqref{spectralcond} and \eqref{FurtherCond_phik}. \medskip A function $\Phi\in\Cont{1}{\R}$ which is twice almost everywhere differentiable such that \begin{equation} \label{eq:bloch} L\Phi=0 \quad\text{ a.e. in } \R, \qquad \Phi(\cdot+2\pi)=\rho\Phi(\cdot) \end{equation} with $\rho\in (-1,1)\setminus\{0\}$ is called the (exponentially decreasing for $x\to \infty$) Bloch mode of $L$ and $\rho$ is called the Floquet multiplier. The existence of $\Phi$ is guaranteed by the assumption that $0\notin\sigma(L)$. This is essentially Hill's theorem, cf. \cite{Eastham}. Note that $\Psi(x)\coloneqq\Phi(-x)$ is a second Bloch mode of $L$, which is exponentially increasing for $x\to \infty$. The functions $\Phi$ and $\Psi$ form a fundamental system of solutions for the operator $L$ on $\R$.
Next we explain how $\Phi$ is constructed, why it can be taken real-valued and why it does not vanish at $x=0$ so that we can assume w.l.o.g. $\Phi(0)=1$. \medskip According to Theorem~1.1.1 in \cite{Eastham} there are linearly independent functions $\Psi_{1},\Psi_{2}\colon\R\rightarrow\C$ and Floquet-multipliers $\rho_{1},\rho_{2}\in\C$ such that $L\Psi_{j}=0$ a.e. on $\R$ and $\Psi_{j}(x+2\pi)=\rho_{j}\Psi_{j}(x)$ for $j=1,2$. We define $\phi_{j}$, $j=1,2$ as the solutions to the initial value problems \begin{align*} \begin{cases} L\phi_1=0,\\ \phi_{1}(0)=1,\quad \phi_{1}'(0)=0, \end{cases} \quad\text{and}\qquad \begin{cases} L\phi_2=0,\\ \phi_{2}(0)=0,\quad \phi_{2}'(0)=1 \end{cases} \end{align*} and consider the Wronskian \begin{align} \label{wronskian} W(x)\coloneqq\begin{pmatrix} \phi_{1}(x) & \phi_{2}(x) \\ \phi'_{1}(x) & \phi'_{2}(x) \end{pmatrix} \end{align} and the monodromy matrix \begin{align} \label{monodromy} A\coloneqq W(2\pi)=\begin{pmatrix} \phi_{1}(2\pi) & \phi_{2}(2\pi) \\ \phi'_{1}(2\pi) & \phi'_{2}(2\pi) \end{pmatrix}. \end{align} Then $\det A=1$ is the Wronskian determinant of the fundamental system $\phi_1, \phi_2$ and the Floquet multipliers $\rho_{1,2} = \frac{1}{2} \left(\tr(A)\pm \sqrt{\tr(A)^2-4}\right)$ are the eigenvalues of $A$ with corresponding eigenvectors $v_{1}=(v_{1,1}, v_{1,2})\in \C^2$ and $v_{2}=(v_{2,1}, v_{2,2})\in\C^2$. Thus, $\Psi_{j}(x)=v_{j,1}\phi_{1}(x)+v_{j,2}\phi_{2}(x)$. By Hill's theorem (see \cite{Eastham}) we know that \begin{align*} 0\in\sigma(L) \qquad\Leftrightarrow\qquad \abs{\tr(A)}\leq 2. \end{align*} Due to the assumption that $0\not\in \sigma(L)$ we see that $\rho_1, \rho_2$ are real with $\rho_1, \rho_2\in\R\setminus\{-1,0,1\}$ and $\rho_1\rho_2=1$, i.e., one of the two Floquet multipliers has modulus smaller than one and the other one has modulus bigger than one. W.l.o.g. we assume $0<|\rho_2|<1<|\rho_1|$.
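For illustration, this dichotomy can be checked numerically on a hypothetical monodromy matrix (the matrix below is made up for the sketch and is not derived from any particular potential $V$): any real $2\times 2$ matrix with determinant $1$ and $|\tr(A)|>2$ has two real Floquet multipliers with $\rho_1\rho_2=1$, matching the closed form $\rho_{1,2}=\frac{1}{2}(\tr(A)\pm\sqrt{\tr(A)^2-4})$.

```python
import numpy as np

# Hypothetical monodromy matrix with det(A) = 1 and tr(A) = 3 > 2,
# i.e. the case 0 not in sigma(L).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

rho = np.sort(np.linalg.eigvals(A))          # Floquet multipliers = eigenvalues of A
tr = np.trace(A)
closed_form = np.sort([(tr + s * np.sqrt(tr**2 - 4)) / 2 for s in (+1.0, -1.0)])

print(np.isreal(rho).all())           # True: both multipliers are real
print(np.isclose(rho.prod(), 1.0))    # True: rho_1 * rho_2 = det(A) = 1
print(np.allclose(rho, closed_form))  # True: matches the closed form
```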
Furthermore, since $\rho_1, \rho_2$ are real and $A$ has real entries we can choose $v_1, v_2$ to be real and so $\Psi_1, \Psi_2$ are both real valued. As a result we have found a real-valued Bloch mode $\Psi_2(x)$ which is exponentially decreasing as $x\to \infty$ due to $|\rho_2|<1$. Let us finally verify that $\Psi_2(0)\not = 0$ so that we may assume by rescaling that $\Psi_2(0)=1$. Assume for contradiction that $\Psi_2(0)=0$. Since the potential $V(x)$ is even in $x$ this implies that $\Psi_2$ is odd and hence (due to the exponential decay at $+\infty$) in $L^2(\R)$. But this contradicts that $0\not\in\sigma(L)$. \medskip Now we explain how the precise choice of the data $a, b>0, \Theta\in (0,1)$ and $\omega$ for the step-potential $g$ in Theorem~\ref{w is a weak solution in expl exa} allows us to fulfill the conditions \eqref{spectralcond} and \eqref{FurtherCond_phik}. Let us define \begin{align*} \tilde g(x)\coloneqq \begin{cases} a, & x \in [0,2\Theta\pi),\\ b, & x \in (2\Theta\pi,2\pi) \end{cases} \end{align*} and extend $\tilde g$ as a $2\pi$-periodic function to $\R$. Then $\tilde g(x) = g(x-\Theta\pi)$, and the corresponding exponentially decaying Bloch modes $\tilde\phi_k$ and $\phi_k$ are similarly related by $\tilde\phi_k(x) = \phi_k(x-\Theta\pi)$. For the computation of the exponentially decaying Bloch modes it is, however, more convenient to use the definition of $\tilde g$ instead of $g$. Now we will calculate the monodromy matrix $A_k$ from \eqref{monodromy} for the operator $L_k$.
For a constant value $c>0$ the solution of the initial value problem \begin{align*} -\phi''(x)-k^2\omega^2c\phi(x)=0, \quad\phi(x_0)=\alpha,\quad\phi'(x_0)=\beta \end{align*} is given by \begin{align*} \begin{pmatrix}\phi(x)\\\phi'(x)\end{pmatrix} = T_k(x-x_0,c)\begin{pmatrix}\alpha\\\beta\end{pmatrix} \end{align*} with the propagation matrix \begin{align*} T_k(s,c)\coloneqq\begin{pmatrix} \cos(k\omega\sqrt{c}s) & \frac{1}{k\omega\sqrt{c}}\sin(k\omega\sqrt{c}s) \\ -k\omega\sqrt{c}\sin(k\omega\sqrt{c}s) & \cos(k\omega\sqrt{c}s) \end{pmatrix}. \end{align*} Therefore we can write the Wronskian as follows \begin{align*} W_k(x) &=\begin{cases} T_k(x,a) & x\in[0,2\Theta\pi] \\ T_k(x-2\Theta\pi,b)T_k(2\Theta\pi,a) & x\in[2\Theta\pi,2\pi] \end{cases} \end{align*} and the monodromy matrix as \begin{align*} A_k=W_k(2\pi)=T_k(2\pi(1-\Theta),b)T_k(2\Theta\pi,a). \end{align*} To get the exact form of $A_k$ let us use the notation \begin{align*} l\coloneqq\sqrt{\frac{b}{a}}\,\frac{1-\Theta}{\Theta},\qquad m\coloneqq2\sqrt{a}\Theta\omega. \end{align*} Hence \begin{align*} A_k = & \sin(kml\pi)\sin(km\pi) \\ & \cdot \begin{pmatrix} \cot(kml\pi)\cot(km\pi)-\sqrt{\frac{a}{b}} & \frac{1}{k\omega\sqrt{a}} \cot(kml\pi) + \frac{1}{k\omega\sqrt{b}}\cot(km\pi) \\ -k\omega\sqrt{b}\cot(km\pi)-k\omega\sqrt{a}\cot(kml\pi) & -\sqrt{\frac{b}{a}}+\cot(kml\pi)\cot(km\pi) \end{pmatrix} \end{align*} and \begin{align*} \tr(A_k)= 2\cos(kml\pi)\cos(km\pi)-\Bigl(\sqrt{\frac{a}{b}}+\sqrt{\frac{b}{a}}\Bigr)\sin(kml\pi)\sin(km\pi). \end{align*} In order to verify \eqref{spectralcond} we aim for $\abs{\tr(A_k)}>2$. However, instead of showing $\abs{\tr(A_k)}>2$ for all $k\in{\Z_{odd}}$ we may restrict to $k\in r\cdot{\Z_{odd}}$ for fixed $r\in{\N_{odd}}$ according to Remark~\ref{remark_Dr}. Next we will choose $r\in{\N_{odd}}$.
Due to the assumptions from Theorem~\ref{w is a weak solution in expl exa} we have \begin{equation} \label{cond_on_lr} l=\frac{\tilde p}{\tilde q}, ~~2m=\frac{p}{q} \in\frac{{\N_{odd}}}{{\N_{odd}}}. \end{equation} Therefore, by setting $r=\tilde q q$\footnote{Instead of $r=\tilde q q$ we may have chosen any odd multiple of $\tilde q q$, e.g. $r=(\tilde q q)^j$ for any $j\in \N$. This is important for the applicability of Theorem~\ref{multiplicity abstract} to obtain infinitely many breathers.} we obtain $\cos(km\pi)=\cos(kml\pi)=0$ and $\sin(km\pi), \sin(kml\pi)\in\{\pm1\}$ for all $k\in r\cdot {\Z_{odd}}$. Together with $a\not =b$ this implies $|\tr(A_k)|=\sqrt{\frac{a}{b}}+\sqrt{\frac{b}{a}}>2$ so that \eqref{spectralcond} holds and $A_k$ takes the simple diagonal form \begin{align*} A_k = \begin{pmatrix} -\sqrt{\frac{a}{b}}\sin(kml\pi)\sin(km\pi) & 0 \\ 0 & -\sqrt{\frac{b}{a}}\sin(kml\pi)\sin(km\pi) \end{pmatrix}. \end{align*} In the following we assume w.l.o.g. $0<a<b$, i.e., the Floquet multiplier with modulus less than $1$ is $\rho_k = -\sqrt{\frac{a}{b}}\sin(kml\pi)\sin(km\pi)$. Note that $|\rho_k|=\sqrt{a/b}$ is independent of $k$. Furthermore the Bloch mode $\tilde \phi_k$ that is decaying to $0$ at $+\infty$ and normalized by $\tilde\phi_k(\Theta\pi)=1$ is deduced from the upper left element of the Wronskian, i.e., \begin{align*} \tilde \phi_k(x) = \frac{1}{\cos(k\omega\sqrt{a}\Theta\pi)}\left\{ \begin{array}{ll} \cos(k\omega\sqrt{a}x), & x\in (0,2\Theta\pi), \vspace{\jot}\\ \cos(k\omega\sqrt{b}(x-2\Theta\pi))\cos(k\omega\sqrt{a}2\Theta\pi) & \\ -\sqrt{\frac{a}{b}}\sin(k\omega\sqrt{b}(x-2\Theta\pi))\sin(k\omega\sqrt{a}2\Theta\pi), & x\in (2\Theta\pi,2\pi) \end{array} \right. \end{align*} and on shifted intervals of length $2\pi$ one has $\tilde\phi_k(x+2j\pi)= \rho_k^{j}\tilde\phi_k(x)$ for $j\in\N$.
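The monodromy computation above can be checked numerically. The sketch below uses hypothetical data $a=1$, $b=9$, $\Theta=\frac12$, $\omega=\frac12$ (so that $l=3$ and $2m=1$ are both quotients of odd integers, hence $r=1$) and $k=3$; none of these values are prescribed by the paper.

```python
import numpy as np

# hypothetical parameters satisfying the parity conditions of \eqref{cond_on_lr}
a, b, Theta, omega, k = 1.0, 9.0, 0.5, 0.5, 3

def T(s, c):
    """Propagation matrix T_k(s, c) of -phi'' - k^2 omega^2 c phi = 0."""
    w = k * omega * np.sqrt(c)
    return np.array([[np.cos(w * s),       np.sin(w * s) / w],
                     [-w * np.sin(w * s),  np.cos(w * s)]])

# monodromy matrix A_k = T_k(2 pi (1 - Theta), b) T_k(2 Theta pi, a)
A_k = T(2 * np.pi * (1 - Theta), b) @ T(2 * Theta * np.pi, a)

m = 2 * np.sqrt(a) * Theta * omega
l = np.sqrt(b / a) * (1 - Theta) / Theta
trace_formula = (2 * np.cos(k * m * l * np.pi) * np.cos(k * m * np.pi)
                 - (np.sqrt(a / b) + np.sqrt(b / a))
                 * np.sin(k * m * l * np.pi) * np.sin(k * m * np.pi))

print(np.isclose(np.linalg.det(A_k), 1.0))       # Wronskian determinant is 1
print(np.isclose(np.trace(A_k), trace_formula))  # closed-form trace matches
print(abs(np.trace(A_k)) > 2)                    # 0 lies in a spectral gap

# the Floquet multiplier of modulus < 1 satisfies |rho_k| = sqrt(a/b)
rho_k = min(np.abs(np.linalg.eigvals(A_k)))
print(np.isclose(rho_k, np.sqrt(a / b)))
```

With these values one finds $\tr(A_k)=\sqrt{a/b}+\sqrt{b/a}=\frac{10}{3}$, in line with the diagonal form above.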
Notice that by \eqref{cond_on_lr} the expression $k\omega\sqrt{a}\Theta\pi=k\frac{p}{q}\frac{\pi}{4}$ is an odd multiple of $\pi/4$ since $k\in q\tilde q{\Z_{odd}}$ and hence $|\cos(k\omega\sqrt{a}\Theta\pi)|=1/\sqrt{2}$. Therefore $\|\phi_k\|_{L^\infty(0,\infty)}=\|\tilde\phi_k\|_{L^\infty(\Theta\pi,\infty)}\leq \|\tilde\phi_k\|_{L^\infty(0,2\pi)}\leq \sqrt{2}(1+\sqrt{a/b})$. Thus we have shown that $|\phi_k(x)|\leq M e^{-\rho x}$ for $x\in [0,\infty)$ with $M>0$ and $\rho=\frac{1}{4\pi}(\ln b-\ln a)>0$. Finally, let us compute \begin{align*} \phi_k'(0)=\tilde\phi_k'(\Theta\pi)= -k\omega\sqrt{a}\tan(k\omega\sqrt{a}\Theta\pi)\in\{\pm k\omega\sqrt{a}\}. \end{align*} This shows that $|\phi_k'(0)|=O(k)$ holds which allows us to apply Theorem~\ref{w is even more regular}. It also shows that the estimate $|\phi_k'(0)|=O(k^\frac{3}{2})$ from Lemma~\ref{norm_estimates} can be improved in special cases. To see that $\phi_k'(0)$ is alternating in $k$, observe that moving from $k\in r{\Z_{odd}}$ to $k+2r\in r{\Z_{odd}}$ the argument of $\tan$ changes by $2r\omega\sqrt{a}\Theta\pi$ which is an odd multiple of $\pi/2$. Since $\tan(x+{\Z_{odd}}\frac{\pi}{2})=-1/\tan(x)$ we see that the sequence $\phi_k'(0)$ is alternating for $k\in r{\Z_{odd}}$. This shows in particular that for any $j\in\N$ the sequence $(\phi_{hr^j}'(0))_{h\in {\N_{odd}}}$ contains infinitely many positive and negative elements, and hence Theorem~\ref{multiplicity abstract} for the existence of infinitely many breathers is applicable. This concludes the proof of Theorem~\ref{w is a weak solution in expl exa} since we have shown that the potential $g$ satisfies the assumptions \eqref{spectralcond} and \eqref{FurtherCond_phik} from Theorem~\ref{w is a weak solution general}. \subsection{Embedding of H\"older-spaces into Sobolev-spaces}\label{embedding} \begin{lemma} \label{unusual_embedd} For $0<\tilde\nu<\nu<1$ there is the continuous embedding $\Cont{0,\nu}{\T_T}\to \SobH{\tilde\nu}{\T_T}$.
\end{lemma} \begin{proof} Let $z(t)= \sum_k \hat z_k e_k(t)$ be a function in $\Cont{0,\nu}{\T_T}$. We need to show the finiteness of the spectral norm $\|z\|_{H^{\tilde\nu}}$. For this we use the equivalence of the spectral norm $\|\cdot\|_{H^{\tilde\nu}}$ with the Slobodeckij norm, cf. Lemma~\ref{equivalence}. Therefore it suffices to check the estimate \begin{align*} \int_{\T_T} \int_{\T_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2\tilde\nu}} \dd{t}\dd{\tau} \leq \|z\|_{C^\nu(\T_T)}^2 \int_{\T_T} \int_{\T_T} |t-\tau|^{-1+2(\nu-\tilde\nu)} \dd{t}\dd{\tau} \leq C(\nu,\tilde\nu)\|z\|_{C^\nu(\T_T)}^2 \end{align*} where the double integral is finite due to $\nu>\tilde\nu$. \end{proof} For $0<s<1$ recall the definition of the Slobodeckij-seminorm for a function $z:\T_T \to \R$ \begin{align*} [z]_s \coloneqq \left(\int_{\T_T} \int_{\T_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2s}} \dd{t}\dd{\tau}\right)^{1/2}. \end{align*} \begin{lemma} \label{equivalence} For functions $z\in H^s(\T_T)$, $0<s<1$ the spectral norm $\|z\|_{H^s} = (\sum_k (1+k^2)^s |\hat z_k|^2)^{1/2}$ and the Slobodeckij norm $\NORM{z}_{H^s}\coloneqq (\|z\|_{L^2(\T_T)}^2+ [z]_s^2)^{1/2}$ are equivalent. \end{lemma} \begin{proof} The Slobodeckij space and the spectrally defined fractional Sobolev space are both Hilbert spaces. Hence, by the open mapping theorem, it suffices to verify the estimate $\NORM{z}_{H^s} \leq C\|z\|_{H^s}$.
By direct computation we get \begin{align*} \int_{\T_T} \int_{\T_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2s}}\dd{t}\dd{\tau} &= \int_0^{T} \int_{-\tau}^{T-\tau} \frac{|z(x+\tau)-z(\tau)|^2}{|x|^{1+2s}} \dd{x}\dd{\tau}\\ &= \int_0^{T} \left(\int_0^{T-\tau} \frac{|z(x+\tau)-z(\tau)|^2}{x^{1+2s}}\dd{x} + \int_{T-\tau}^{T} \frac{|z(x+\tau)-z(\tau)|^2}{(T-x)^{1+2s}}\dd{x}\right)\dd{\tau} \\ &= \int_0^{T}\int_0^{T} \frac{|z(x+\tau)-z(\tau)|^2}{g(x,\tau)^{1+2s}}\dd{x}\dd{\tau} \end{align*} with \begin{align*} g(x,\tau) = \left\{ \begin{array}{ll} x & \mbox{ if } 0\leq x\leq T-\tau, \vspace{\jot}\\ T-x & \mbox{ if } T-\tau \leq x \leq T. \end{array} \right. \end{align*} Since $g(x,\tau) \geq \dist(x,\partial\T_T)$ and due to Parseval's identity we find \begin{align*} \int_{\T_T} \int_{\T_T} \frac{|z(t)-z(\tau)|^2}{|t-\tau|^{1+2s}}\dd{t}\dd{\tau} & \leq \int_{\T_T} \frac{\| \widehat{z(\cdot+x)}-\hat z\|_{l^2}^2}{\dist(x,\partial\T_T)^{1+2s}}\dd{x} \\ &= \int_{\T_T} \sum_k \frac{|\exp(\i k\omega x)-1|^2 |\hat z_k|^2}{\dist(x,\partial\T_T)^{1+2s}}\dd{x} \\ &= 4 \int_0^{T/2}\sum_k \frac{1-\cos(k\omega x)}{x^{1+2s}} |\hat z_k|^2 \dd{x} \\ & \leq 4\tilde C \sum_k k^{2s} |\hat z_k|^2 \end{align*} with $\tilde C=\int_0^\infty \frac{1-\cos(\omega\xi)}{\xi^{1+2s}}\dd{\xi}$. This finishes the proof. \end{proof} \section*{Acknowledgment} Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 258734477 – SFB 1173 \bibliographystyle{plain}
\section{Introduction} Scaling up the model capacity has been shown to be promising for achieving better performance on a variety of tasks, including natural language understanding~\cite{brown2020language,raffel2019exploring} and visual representation learning~\cite{dosovitskiy2020image,bao2021beit}.
The continued growth in model size and parameters brings higher computational cost, while large dense models have almost hit the boundary of hardware capacity. In pursuit of better computational efficiency, sparse Mixture-of-Experts (MoE) is proposed as an efficient alternative to dense models~\cite{lepikhin2020gshard,fedus2021switch,riquelme2021scaling,lewis2021base}. In sparsely-gated MoE transformers, the feed-forward network (FFN) sub-layer is replaced by a set of experts with independent parameters. The sparsity of MoE comes from the experts and the gated routing network. The gated routing network calculates the routing score between each input token and each expert and activates the experts with the top-k routing scores. Most experts will not be activated, thus forming a sparse structure. Since the computation cost is only proportional to the activated top-k sub-network, sparsely activated MoE models could scale up model parameters without significantly increasing computational cost. With affordable computational overhead, MoE models could achieve better performance than dense models on various tasks such as neural machine translation~\cite{lewis2019bart,conneau2019cross,lepikhin2020gshard}, image recognition~\cite{riquelme2021scaling} and speech recognition~\cite{kumatani2021building}. Recent studies have reached a consensus that more experts mean more parameters and larger model capacity, which always bring improvements. However, some studies show that more trainable parameters and sparse conditional computation may introduce overfitting~\cite{xue2021go,lou2021sparse,xue2022one}, especially for downstream tasks with limited data. As depicted in Figure~\ref{fig:overfit}, as the number of experts grows, overfitting gradually becomes apparent in the machine translation task. Moreover, we find that enlarging the size of the MoE will not always lead to improvement. There seems to exist a performance upper bound when scaling up experts with limited data.
Moreover, we observe a counterintuitive phenomenon in Figure~\ref{fig:overfit}: the 64-expert MoE, despite having more parameters and larger model capacity, has higher training loss than the 32-expert MoE. This implies that large-scale MoE not only suffers from overfitting, but also has other hidden problems that affect training. According to our analysis, the probability of each expert getting a token reduces proportionally as the number of experts grows. With the same data, each expert will get less diverse samples. This may prevent the expert layers from learning sufficiently. Insufficient data to match more parameters is also a major cause of overfitting. Therefore, we want to explore ways in which experts could get diverse samples and learn abundant knowledge, thereby alleviating overfitting and sparse data allocation. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig/overfit_new.pdf} \caption{A simple demonstration of loss curves of MoE models on the WMT-14 English-to-German translation task. We show the loss curves of MoE baseline models with different numbers of experts. The value in the box represents the minimum loss.} \label{fig:overfit} \end{figure} In this work, we propose Mixture of Expert Clusters (MoEC), a general optimizing strategy for MoE models. We make the routing probabilities of neighboring experts close to each other to form the clustered expert structure. The inductive bias is that the similarity of intra-cluster experts is high while the similarity of inter-cluster experts is low. Experts within a cluster tend to receive tokens with similar hidden states and could ``share'' similar tokens. Moreover, we propose a cluster-level expert dropout strategy for the expert cluster structure. Several experts in each cluster will be randomly dropped, and the dropped experts will not participate in the routing stage. The activated experts will be selected from the remaining experts in the cluster.
Implementing dropout within clusters ensures that tokens are always dispatched to suitable experts, no matter which experts are randomly dropped. We evaluate our MoEC on machine translation and natural language understanding tasks. Experiment results show that MoEC outperforms dense models and baseline MoE models. This indicates that MoEC retains the advantages of the sparse structure of MoE while alleviating the overfitting and sparse data allocation problems. Our contributions are summarized as follows: \begin{itemize} \item We point out the overfitting and sparse data allocation problems for large-scale MoE models; experts getting less diverse samples could be the common cause of both problems. \item We propose to build expert clusters via variance-based constraints, which allows experts to get a more diverse set of similar tokens. We also implement cluster-level expert dropout as a regularization method. \item We conduct experiments on translation and natural language understanding tasks. MoEC improves performance and alleviates the problems caused by scaling up experts. \item We find that there exists a performance upper bound for scaling up MoE models with limited data, and MoEC raises this upper bound. \end{itemize} \section{Related Work} In the context of modern deep learning architectures, scaling up transformers using sparse Mixture of Experts (MoE) has proven effective for achieving state-of-the-art performance on various NLP and CV tasks~\cite{shazeer2017outrageously,lepikhin2020gshard,riquelme2021scaling,fedus2021switch}. Compared with dense transformers, an MoE model contains several experts (feed-forward networks), and a router to select top-k experts for input tokens. It increases the model capacity by such conditional computation while maintaining computational efficiency. To further explore the potential of MoE, some studies focus on the router assignment algorithm~\cite{lewis2021base,roller2021hash,dai2022stablemoe}.
Besides, some works focus on optimizing the training methods of MoE models.~\citet{dua2021tricks} applied a temperature heating mechanism for sparse MoE models on the translation task.~\citet{chi2022representation} proposed a dimension reduction to estimate the routing scores between tokens and experts on a low-dimensional hyper-sphere. Our work is also proposed to optimize the MoE model. Instead of changing the model structure or routing strategy, MoEC establishes expert clusters, which allows experts to be assigned a more diverse set of similar tokens. Although MoE models have achieved promising results, they are proven to have overfitting problems~\cite{fedus2021switch,wu2022residual,xue2022one} on downstream tasks with limited data. To mitigate overfitting, some works use knowledge distillation to distill MoE models into small-sized MoE models or dense models~\cite{xue2022one,dai2022stablemoe}. Another approach is to apply the dropout strategy during training.~\citet{fedus2021switch} set a small dropout rate at non-expert layers and a larger dropout rate at expert layers.~\citet{liu2022gating} proposed gating dropout, which allows some tokens to ignore the gated routing network and stay on their local machines to reduce cross-machine communication. In our work, we propose cluster-level expert dropout: randomly selected experts in each cluster are dropped so that they do not participate in the routing stage. \section{Preliminary} \label{sec:preliminary} To build MoE transformers, it is a common practice to replace feed-forward network (FFN) sub-layers with a set of experts. The experts share the same structure as the FFN layer in the dense transformer model. We denote the hidden representation of an input token $x$ as $h$, and the embedding of the $i$-th expert as $e_i$. The router computes the routing score $s_i=h^\mathrm{T}e_i$ to measure the similarity between $h$ and each expert in the set $E$.
Then, the router utilizes a gating function $\alpha(\cdot)$ to compute the gated value of expert $i$. \begin{equation} \alpha_i = \left\{ \begin{aligned} \frac{\exp(s_i)}{\sum_{j=1}^{E}\exp(s_j)},\quad \textit{softmax}\ gating \\ \frac{1}{1+\exp(-s_i)},\quad \textit{sigmoid}\ gating \end{aligned} \right. \end{equation} The gating value $\alpha_i$ represents the probability of dispatching the input token to expert $i$. The top-k gated values are used to dispatch the token $x$ according to $\alpha_i$, and the corresponding k expert networks are conditionally activated. We denote the set of selected top-k indices as $K$. \begin{equation} y=\sum_{i\in K} \alpha_i \cdot E_i(x) \end{equation} where $E_i(x)$ is the $i$-th expert network, which is a feed-forward network. The output of the gated routing network is the linear combination of each expert's computation on the token, weighted by the gate values. \section{Method} \label{sec:Method} In this work, our goal is to give experts access to more diverse training samples, thus learning abundant knowledge and mitigating overfitting and sparse data allocation. We constrain neighboring experts to have close routing probabilities, forming a clustered expert structure, and apply a variance-based clustering loss to implement this constraint. Then, we further propose a cluster-level expert dropout strategy. In our work, we use top-1 gating: only the expert with the largest routing score is activated, and we choose softmax as the gating function. Experts in a cluster are distributed on the same device to reduce communication costs. \subsection{Mixture of Expert Clusters} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig/sys.pdf} \caption{Illustration of a conventional MoE layer and our proposed MoEC layer. The similarity between hidden states $H_i$ is represented by the color.} \label{fig:overview} \end{figure} We illustrate our MoEC (Mixture of Expert Clusters) in Figure~\ref{fig:overview}.
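For concreteness, the scoring and top-1 softmax gating described above can be sketched in plain Python (a minimal toy illustration with made-up dimensions and values, not the actual implementation):

```python
import math

def softmax(scores):
    """Softmax gating of Equation 1: normalize routing scores to probabilities."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_top1(h, expert_embeddings):
    """Compute s_i = h^T e_i for every expert and pick the top-1 expert."""
    scores = [sum(hj * ej for hj, ej in zip(h, e)) for e in expert_embeddings]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs

# Toy example: a 4-dimensional token hidden state routed over 3 experts.
h = [0.5, -0.2, 0.1, 0.7]
experts = [[0.1, 0.0, 0.3, 0.2],
           [0.4, -0.1, 0.2, 0.9],
           [-0.3, 0.2, 0.0, -0.5]]
best, probs = route_top1(h, experts)
```

Only the selected expert's FFN is then run on the token, which is what keeps the computation sparse as the expert count grows.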
For conventional MoE, the routing probabilities of tokens are not constrained. The router always dispatches input tokens to their best-matched experts, while other similar tokens have little chance of being selected. When scaling up the number of experts, the sparse data distribution causes each expert to get less diverse tokens, so the expert layer cannot get adequately trained. Moreover, the amount of data is insufficient to match the growing number of parameters, which is the main reason for overfitting. To solve the problems of conventional MoE, our MoEC allows each expert to get richer and more diverse tokens. We impose variance-based constraints on the routing stage, aiming to make neighboring experts have similar routing probabilities for input tokens, thus forming expert clusters that attract tokens with similar hidden states. In MoEC, experts get a more diverse set of similar input tokens by ``sharing'' input tokens with other experts in the cluster. Compared with previous MoE-related work, our training objective adds an extra term, the clustering loss. The overall training objective is to minimize: \begin{equation} \mathscr{L}=\mathscr{L}_{task} + \mathscr{L}_{balance} + \mathscr{L}_{cluster} \end{equation} $\mathscr{L}_{task}$ is determined by the specific task. In our work, we employ the label-smoothed cross-entropy loss for neural machine translation, the masked language modeling loss for language model pre-training, and the negative log-likelihood loss (NLL loss) or mean-squared error loss (MSE loss) for GLUE tasks. In the following, we introduce $\mathscr{L}_{balance}$ and $\mathscr{L}_{cluster}$. \textbf{Load Balancing Loss.} During training, there exists a load imbalance issue between experts~\cite{shazeer2017outrageously,lepikhin2020gshard}: most tokens are dispatched to a small number of experts, while many other experts do not get sufficiently trained at all.
Besides, imbalanced assignments result in a computational bottleneck in the MoE layer and thus limit computational efficiency. We follow the work in~\cite{fedus2021switch} and add a balance loss to the training objective to encourage a balanced load across experts. Given $N$ experts indexed by $i=1$ to $N$, the balance loss is computed as follows: \begin{equation} \mathscr{L}_{balance}=\alpha N \cdot \sum_{i=1}^{N} f_{i} \cdot p_{i} \end{equation} where $f_i$ is the fraction of tokens dispatched to expert $i$. We denote the number of tokens dispatched to the $i$-th expert as $Count_i$. Given a batch $B$ with $T$ tokens, $f_i$ = $Count_i / T$. $p_i$ is the fraction of the routing probability allocated to expert $i$ in the batch $B$. It is calculated by averaging the probability of routing token $x$ to expert $i$ over the batch $B$. \begin{equation} p_{i} = \frac{1}{T} \sum_{x \in B} \alpha_i(x) \end{equation} where $\alpha_i(x)$ is the gating function depicted in Equation 1, which represents the probability of dispatching token $x$ to expert $i$. The balance loss in Equation 4 encourages uniform routing since it is minimized under a uniform distribution. To control the impact of the balance loss during training, a hyper-parameter $\alpha$ is applied as a multiplicative coefficient. Throughout this work, we use $\alpha=10^{-2}$, which is sufficiently large to ensure load balancing while small enough not to overwhelm the primary cross-entropy objective. \textbf{Clustering Loss.} In our work, we find that the sparse allocation of data severely hinders the adequate training of MoE layers and exacerbates overfitting. To allow experts to get rich and diverse tokens and mitigate the impact of sparse allocation, we design the clustering loss. This loss constrains certain adjacent experts to share similar routing probabilities for tokens, thus forming a cluster-like distribution.
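Before detailing the clustering loss, the load-balancing term of Equations 4 and 5 can be sketched as follows (a toy illustration with fabricated batch statistics; variable names are ours):

```python
def balance_loss(assignments, probs, num_experts, alpha=1e-2):
    """Load-balancing loss of Equation 4: alpha * N * sum_i f_i * p_i.

    assignments: top-1 expert index chosen for each token in the batch.
    probs: per-token routing probabilities over the experts (T x N).
    """
    T = len(assignments)
    f = [assignments.count(i) / T for i in range(num_experts)]          # Count_i / T
    p = [sum(row[i] for row in probs) / T for i in range(num_experts)]  # mean alpha_i(x)
    return alpha * num_experts * sum(fi * pi for fi, pi in zip(f, p))

# Toy batch of 4 tokens over 2 experts: uniform routing vs skewed routing.
uniform = balance_loss([0, 1, 0, 1], [[0.5, 0.5]] * 4, num_experts=2)
skewed = balance_loss([0, 0, 0, 0], [[0.9, 0.1]] * 4, num_experts=2)
```

The skewed batch yields a larger penalty than the uniform one, which is exactly the pressure toward balanced expert loads.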
For input tokens originally dispatched to their best-matched experts, the clustering loss gives them more opportunities to access other experts in the cluster. As a result, experts are assigned a more diverse set of similar tokens, thus alleviating the problem of sparse allocation. In an MoE model with $N$ experts, the clustering loss guides the experts to form $m$ clusters ($N$ should be divisible by $m$), and each cluster contains $L=\frac{N}{m}$ experts. We use $E_{i}^{j}$ to represent the $j$-th expert in the $i$-th cluster, while $p_{i}^{j}$ represents the routing probability allocated to $E_{i}^{j}$ ($i=0,1,...,m-1; j=0,1,...,L-1$). According to the size and number of clusters, $p_{i}^{0},p_{i}^{1},...,p_{i}^{L-1}$ compose a vector $\tilde{P_i} \in \mathbb{R}^{L}$ representing the routing probabilities of the $L$ experts in the $i$-th cluster, and we denote their mean value as $\overline{p_i}$. We define the clustering loss as follows: \begin{equation} \begin{aligned} \mathscr{L}_{cluster} &=\beta N \cdot \emph{C}_{intra} \cdot \emph{C}_{inter} \\ &=\beta N \cdot \frac{\sum_{i=0}^{m-1}\delta(\tilde{P_i})}{m} \cdot e^{-\mu\frac{\max{\{\overline{p_i}\}}- max_2\{\overline{p_i}\}}{\max{\{\overline{p_i}\}}}} \end{aligned} \end{equation} As can be seen from Equation 6, the clustering loss is composed of two parts: the variance-based intra-cluster constraint $\emph{C}_{intra}$ and the difference-based inter-cluster constraint $\emph{C}_{inter}$. $\delta(\tilde{P_i})=\frac{(p_{i}^{0}-\overline{p_i})^2+(p_{i}^{1}-\overline{p_i})^2+\cdots+(p_{i}^{L-1}-\overline{p_i})^2}{L} $ represents the variance of the routing probabilities in the $i$-th cluster. We compute the mean variance over the $m$ clusters as the intra-cluster constraint $\emph{C}_{intra}$, which is minimized when the routing probabilities of experts within the same cluster are balanced.
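As an illustration, the intra-cluster term $\emph{C}_{intra}$ (the mean per-cluster variance of routing probabilities) can be computed as follows; the contiguous cluster layout and the probability values are made-up for the example:

```python
def intra_cluster_constraint(probs, cluster_size):
    """Mean variance of routing probabilities within each cluster (C_intra).

    probs: routing probabilities for N experts, assumed grouped so that
    experts [i*L, (i+1)*L) form cluster i, with L = cluster_size.
    """
    clusters = [probs[i:i + cluster_size] for i in range(0, len(probs), cluster_size)]
    variances = []
    for p in clusters:
        mean = sum(p) / len(p)
        variances.append(sum((x - mean) ** 2 for x in p) / len(p))
    return sum(variances) / len(variances)

# Toy example: 4 experts in 2 clusters of size 2.
uniform_within = [0.4, 0.4, 0.1, 0.1]   # balanced inside each cluster
skewed_within = [0.7, 0.1, 0.1, 0.1]    # one dominant expert in cluster 0
low = intra_cluster_constraint(uniform_within, cluster_size=2)
high = intra_cluster_constraint(skewed_within, cluster_size=2)
```

A routing distribution that is uniform inside every cluster drives this term to zero, while a single dominant expert inside a cluster raises it, which is the pressure that forms the clustered structure.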
Besides, we use $\emph{C}_{inter}$ to measure the probability difference between the dispatched cluster and the sub-optimal cluster. $\max{\{\cdot \}}$ denotes the maximum value of $\overline{p_i}$ ($i$=0,1,...,$m$-1) and $max_2\{\cdot \}$ denotes the second largest value. $\emph{C}_{inter}$ is minimized when the probability of a token being dispatched to a sub-optimal cluster is low. $\mu$ is the coefficient used to control the value of $\emph{C}_{inter}$. When we set $\mu=0$, the probability difference between clusters is not considered; we can also set $\mu$ to a non-zero value to activate $\emph{C}_{inter}$. We conduct in-depth experiments and analysis on this choice in the Experiments section. To minimize the clustering loss, the probability distribution within each cluster should be uniform, and the probability difference between clusters should be more apparent (optional). In the initial training steps, the variance among experts is very high, so the clustering loss dominates the optimization and guides the rapid formation of expert clusters. Once the intra-cluster variance is stable, the clustering loss becomes relatively small and merely maintains the expert clusters. Similar to the practice in the balance loss, a hyper-parameter $\beta$ is applied. The value of $\beta$ should be relatively small, because a large $\beta$ imposes a strong clustering constraint that makes experts in a cluster too similar; these experts would lose their individual characteristics, and the contribution of multiple similar experts would be only approximately equal to that of one expert. In our work, we set the value of $\beta$ to $10^{-2}$ by default. Experiments on the selection of $\beta$ can be found in Appendix A. \subsection{Cluster-level expert dropout} When applying large-scale MoE models to tasks with limited data, overfitting issues naturally arise.
Previous MoE-related work~\cite{raffel2019exploring,fedus2021switch} used dropout~\cite{srivastava2014dropout} at each layer to prevent overfitting. Here, cluster-level expert dropout acts as a regularization technique completely different from traditional dropout: it does not drop parameters, but drops some experts in each cluster, which makes the dispatching of tokens more random. \textbf{Implementation in clusters.} First, our cluster-level expert dropout works at the routing stage, so it is only implemented at expert layers. For the experts in a cluster, we randomly drop some of them by deleting their expert ids from the candidate expert list when calculating the routing probability. Thus, the corresponding experts are ignored in the routing stage. Denoting the dropout rate as $\gamma$, only the remaining $N(1-\gamma)$ experts participate in the calculation of the routing probability during training. The dimension of the probability vector $P$ decreases from $\mathbb{R}^{N}$ to $\mathbb{R}^{N\cdot(1-\gamma)}$. All clusters implement the dropout simultaneously. It gives tokens more opportunities to be dispatched to other experts in the same cluster, instead of being repeatedly dispatched to the expert with the highest probability. From another perspective, each expert receives more diverse tokens without adding training data. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig/cluster-level.pdf} \caption{Illustration of global-level expert dropout and cluster-level expert dropout. The similarity between hidden states $H_i$ is represented by the color.} \label{fig:cluster_level} \end{figure} \textbf{Cluster-level expert dropout vs traditional expert dropout.} Traditional expert dropout is recommended in~\citet{fedus2021switch}. It is a dropout technique~\cite{srivastava2014dropout} that regularizes MoE models by acting on the feed-forward layers to reduce overfitting caused by too many parameters.
By setting a relatively small dropout rate at non-expert layers (0.1), expert dropout increases the dropout rate by an explicit amount for the interim feed-forward computation at each expert layer (0.4). Our expert dropout acts completely differently: we perform random dropout on the candidate list of experts during the routing stage. It does not reduce the number of parameters during training but allocates tokens more diversely and flexibly. While traditional expert dropout is usually used for fine-tuning on downstream tasks, our cluster-level expert dropout is a general regularization mechanism. In addition, our dropout can be applied together with the expert dropout of~\citet{fedus2021switch}, and they can work together to improve the performance of MoE. \textbf{Why is cluster-level better?} It is natural to think that expert dropout could be implemented at the global level, which provides more opportunities for tokens to access other sub-optimal experts. But for global-level expert dropout, as shown in Figure~\ref{fig:cluster_level}, if a random dropout happens to drop the suitable experts, tokens may be dispatched to less relevant experts. Inappropriate dispatching may negatively impact the learning of experts. In MoEC, we address this problem by exploiting the cluster-like structure to design a cluster-level expert dropout. Cluster-level dropout gives tokens the option to be randomly re-dispatched while confining the routing results to a reasonable range. No matter how random the dropout is, tokens are always dispatched to experts with similar routing probabilities. \section{Experiments} \begin{table*}[htbp] \centering \caption{\textbf{The performance on machine translation and GLUE tasks for baselines and MoEC models.} WMT-14 is measured on the test set, while GLUE tasks are measured on the development sets. We report results averaged over a set of seeds (see Appendix C).
All experiments are conducted with 64 experts.} \resizebox{\linewidth}{!}{ \begin{tabular}{lcccccccccc} \toprule & \multicolumn{1}{c}{\textbf{NMT}} & \multicolumn{8}{c}{\textbf{GLUE Tasks}}\\ \cmidrule(lr){2-2}\cmidrule(lr){3-10} & \textbf{WMT14 En-De}&\textbf{MNLI}&\textbf{CoLA}&\textbf{SST-2}&\textbf{QQP}&\textbf{QNLI}&\textbf{MRPC}&\textbf{STS-B}&\textbf{GLUE Avg} \\ \midrule Dense & 27.10 &85.97&57.10&92.87&91.20&92.23&87.50 &89.18 &85.16\\ MoE Baseline & 30.59 & 87.27& 75.60 & 93.30& 91.37 & 92.33 & 86.30 &88.28 &87.78 \\ \midrule MoEC (w/o expert dropout)& 32.21 &87.37 & 75.93& \textbf{93.43} & \textbf{91.45} & 92.40 & 88.07 &89.11 &88.25\\ MoEC & \textbf{32.50} & \textbf{87.37}&\textbf{76.80} & 93.37& 91.40& \textbf{92.45} & \textbf{88.23} &\textbf{89.24} &\textbf{88.41}\\ \bottomrule \end{tabular} \label{tab:main_results}} \end{table*} We name our model MoEC (Mixture of Expert Clusters) and evaluate its performance on bilingual machine translation and natural language understanding tasks. We use the X-MoE model from~\citet{chi2022representation} as our backbone architecture, which has shown better performance than prior MoE models such as Switch Transformers~\cite{fedus2021switch} on widely-used cross-lingual understanding benchmarks. \subsection{Evaluation Dataset} \textbf{WMT 2014 English-to-German} The Ninth Workshop on Statistical Machine Translation (WMT 2014) released a collection of datasets used in shared tasks including machine translation. We add additional news-commentary-v12 data from WMT-17 for training and validation. The total training data contains 3.96M English-to-German sentence pairs.
\noindent\textbf{GLUE} The General Language Understanding Evaluation~\cite{wang2018glue} benchmark is a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks, including MNLI~\cite{williams2017broad}, CoLA~\cite{warstadt2019neural}, SST-2~\cite{socher2013recursive}, QQP, QNLI~\cite{rajpurkar2016squad}, MRPC~\cite{dolan2005automatically} and STS-B~\cite{cer2017semeval}. We do not perform experiments on RTE because previous work~\cite{chen2022task} demonstrated that MoE is not suitable for this task. It is worth mentioning that we pre-train our model on the BooksCorpus~\cite{zhu2015aligning} and English Wikipedia corpus~\cite{wikidump} for 120k steps before fine-tuning on GLUE tasks. \subsection{Experiments Setup} \textbf{Model Architecture} For our MoEC and all baseline models, we follow the recommended settings in~\cite{vaswani2017attention} and use Transformer-big as the unified backbone architecture for the WMT 2014 English-German translation task. For GLUE tasks, we use Transformer-base as the backbone architecture. For MoE layers, we apply the 64-expert MoE model with 3 FFN sub-layers in the 3rd encoder block and the 3rd decoder block. More detailed model hyper-parameters can be found in Appendix B. \noindent\textbf{Baselines} We consider two baselines in our experiments. The first is the \textbf{dense transformer}~\cite{vaswani2017attention}. For the other, we follow the work in~\cite{chi2022representation} and apply X-MoE as our~\textbf{MoE baseline}. It serves as a strong baseline that shows better performance than Switch Transformer~\cite{fedus2021switch} on widely-used cross-lingual understanding benchmarks. The MoE baseline estimates routing scores between tokens and experts on a low-dimensional hypersphere and adds a learnable temperature scalar in the gating function. For a fair comparison, the two baseline methods are built with the same settings as MoEC, which can be found in Appendix B.
\noindent\textbf{MoEC Hyper-parameters} For MoEC, several unique hyper-parameters are introduced. For the clustering loss, we set $\beta$ to $10^{-2}$ according to the experimental results (see Appendix A) and set $\mu=0$ by default. For the cluster size (the number of experts in a cluster) and the expert dropout rate, we present detailed experiments in the following sections. \noindent\textbf{Training Hyper-parameters} For a fair comparison, the dense model, the MoE baseline model, and the MoEC model share the same training hyper-parameters. All models are trained with the Adam optimizer~\cite{kingma2014adam} ($\beta_1=0.9,\beta_2=0.98$). The learning rate is set to $5\times 10^{-4}$ with 4000 warm-up steps and an inverse square root scheduler~\cite{raffel2019exploring}. Batch size, training steps, and dropout rate are set per task and recorded in Appendix C. \subsection{Experiments results} We train the dense models, baseline MoE models, and MoEC models on several widely-used evaluation tasks, and the results are shown in Table~\ref{tab:main_results}. Compared with dense models, MoE models exhibit significant performance improvements, which benefit from the large model capacity. Besides, MoEC brings a notable improvement over the MoE baseline even without applying the dropout strategy to experts: on WMT-14, it gives a 1.62 BLEU score boost. The advantage can be attributed to the clustered distribution of experts, which endows experts with more diverse and appropriate training samples. Moreover, with the application of the cluster-level expert dropout strategy, the performance of MoEC is further improved. As shown in Figure~\ref{fig:val_loss}, the MoE baseline severely suffers from overfitting on WMT-14, while our MoEC shows an excellent ability to mitigate overfitting. The overfitting phenomenon on the validation set is almost eliminated, and the validation loss is relatively lower.
It shows that when MoEC resolves the sparse allocation of data, each expert gets more abundant and diverse training samples. In this way, the training data of each expert is kept sufficient, thereby alleviating overfitting. Furthermore, we find that MoEC converges slightly more slowly, because each expert needs to learn from more diverse training samples, which takes more steps before the expert is sufficiently trained. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig/val_loss.pdf} \caption{Loss curves on the WMT-14 validation set. All experiments are conducted with 64 experts for a fair comparison. The validation loss that rises with increasing training steps indicates the overfitting phenomenon. Our MoEC shows an excellent ability to mitigate overfitting.} \label{fig:val_loss} \end{figure} \subsection{Detailed analysis of expert clusters} Next, we conduct a detailed analysis of expert clusters. Figure~\ref{fig:MoEC} shows the fraction of tokens dispatched to cluster 0 (experts 0$\sim$3) during training and inference. During training, the experts in cluster 0 get similar input tokens, which is the effect of the balance loss and clustering loss. During inference, the routing probabilities of the experts in the cluster vary, which indicates that they still retain their own characteristics. They learn more fine-grained knowledge, which is the advantage of multiple similar experts over a single expert. For WMT-14, the BLEU score of MoE with 16 experts is 30.49, while the BLEU score of MoEC with 16 clusters (cluster size = 4) is 32.16. It shows that multiple similar experts have an obvious advantage over a single expert. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig/moec.pdf} \caption{Fraction of tokens dispatched to Experts 0$\sim$3 (i.e., $f_i$ mentioned above) of the 64-expert MoEC (cluster size = 4) during training and inference.
The graph on the left shows the fraction of tokens dispatched to cluster 0 during training, while the one on the right shows the fraction during inference.} \label{fig:MoEC} \end{figure} The cluster size also has a critical impact on the learning of MoEC, so we conduct experiments on different cluster sizes. As depicted in Table~\ref{tab:cluster-size}, the best performance is obtained when cluster size = 8. Compared to the MoE baseline with 64 experts, expert clusters bring about a 1.62 BLEU score improvement. When the cluster size is relatively small, less data is shared among experts, and the improvement brought by MoEC is not fully exploited. As a special case, when cluster size = 1, a single expert cannot be called a cluster, and MoEC is equivalent to the MoE baseline. When the cluster size is large, the data shared among experts increases, but the similarity and correlation of these data become lower, which adversely impacts the ``professionalism'' of each expert. When we expand the cluster size to 16, the performance of MoEC is even lower than that of the MoE baseline, which means that an excessively large cluster size suppresses the advantages of the MoE structure and hurts the performance. \begin{table}[htbp] \centering \caption{\textbf{The performance of MoEC with different cluster sizes on WMT-14.} All experiments were conducted with 64 experts.
For a fair comparison, all methods do not employ dropout on experts.} \begin{tabular}{ccc} \toprule \textbf{Cluster size}& \textbf{Number of clusters} & \textbf{BLEU}\\ \midrule 1 &64& 30.59 \\ 4 &16& 32.16 \\ 8 &8 & \textbf{32.21} \\ 16&4 & 29.98 \\ \bottomrule \end{tabular} \label{tab:cluster-size} \end{table} \subsection{Expert dropout: Cluster-level vs global-level} \begin{table}[htbp] \centering \caption{\textbf{Cluster-level vs global-level expert dropout on WMT-14.} All experiments are conducted on the 64-expert MoEC with cluster size = 8. Under this setting, the BLEU score of MoEC without expert dropout is 32.21.} \begin{tabular}{ccc} \toprule \textbf{Dropout rate}&\textbf{cluster-level}&\textbf{global-level} \\ \midrule 0 & 32.21 & 32.21 \\ 0.25 & 32.32 & 31.88 \\ 0.5 & \textbf{32.50} & 31.53 \\ 0.75 & 32.02 & 29.73 \\ \bottomrule \end{tabular} \label{tab:dropout} \end{table} In Table~\ref{tab:dropout}, we experiment on WMT-14 with different cluster-level expert dropout rates. We find that cluster-level dropout enhances the generalization performance of MoEC: such a regularization method brings a 0.29 BLEU score improvement. Experimental results show that 0.5 is a good choice for the dropout rate. Besides, it is obvious that global-level expert dropout hurts the performance. For cluster-level expert dropout, when the best-matched expert for an input token is dropped, the routing decision is still made among the remaining experts in the cluster. Regardless of how the dropped experts are selected, there are always experts left in each cluster, which ensures that suitable experts are always available. But for the global-level variant, due to the random distribution of experts, if all matched experts are dropped, the token is routed to an inappropriate expert. It could cause experts to be distracted by low-relevance data, thus negatively impacting the learning of knowledge.
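The guarantee can be made concrete with a small sketch: under cluster-level dropout, every cluster always retains candidate experts, whereas a global mask may eliminate an entire cluster. The layout (8 experts, cluster size 4) and all names are illustrative, not the actual implementation:

```python
import random

def cluster_level_drop(num_experts, cluster_size, rate, rng):
    """Drop a fraction `rate` of experts independently inside every cluster,
    so each cluster keeps at least one candidate expert for routing."""
    keep_per_cluster = max(1, round(cluster_size * (1 - rate)))
    kept = []
    for start in range(0, num_experts, cluster_size):
        cluster = list(range(start, start + cluster_size))
        kept.extend(rng.sample(cluster, keep_per_cluster))
    return sorted(kept)

rng = random.Random(0)
kept = cluster_level_drop(num_experts=8, cluster_size=4, rate=0.5, rng=rng)
# Both clusters (experts 0-3 and 4-7) still have surviving candidates, so a
# token matched to cluster 0 is never forced into the unrelated cluster 1.
# A global-level mask over all 8 experts has no such per-cluster guarantee.
```

Whatever the random seed, each cluster contributes exactly `keep_per_cluster` surviving experts, which is the property that keeps re-dispatching confined to similar experts.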
Take Figure~\ref{fig:cluster_level} as a simple example (setting the dropout rate to 0.5). For global-level expert dropout, if both expert 1 and expert 2 are dropped, $H_n$ can only be dispatched to expert 3 or expert 4. This inappropriate allocation could hurt the performance of the model. \subsection{Role of the inter-cluster constraint coefficient $\emph{C}_{inter}$} We further explore whether the inter-cluster constraint $\emph{C}_{inter}$ (in Equation 6) helps improve performance. As depicted in Figure~\ref{fig:vs}, when dropout = 0.75 or cluster size = 4, setting $\mu$ to 1 gets better results. In other cases, i.e., when there are sufficient experts in the cluster, it is better not to apply the inter-cluster constraint, by setting $\mu$ to 0. Intra-cluster constraints have already made the other experts in the cluster have higher routing probabilities, while inter-cluster constraints further widen the routing probability gap between clusters. This causes the entropy of the routing probability distribution to become too small, which is not conducive to the learning of the gated network. We find that the inter-cluster constraint benefits MoEC when the cluster size is small or the expert dropout rate is high. In these cases, the number of experts in the cluster is small, and the intra-cluster constraint alone is not enough to form a globally reasonable routing probability distribution, so the assistance of constraints between clusters is needed. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig/moec_mu.pdf} \caption{Two sets of experiments on the inter-cluster constraint $\emph{C}_{inter}$. All experiments are performed on WMT-14 En-De. The figure on the left shows experiments with different expert dropout rates (cluster size = 8), and the figure on the right shows experiments with different cluster sizes (without expert dropout).
} \label{fig:vs} \end{figure} \subsection{Raising the upper bound of MoE} \begin{table}[htbp] \centering \caption{\textbf{Results of scaling up MoEC on WMT-14 (BLEU).}} \begin{tabular}{cccc} \toprule \textbf{Expert num} &\textbf{MoE baseline}&\textbf{MoEC}&\textbf{Benefits}\\ \midrule 16 & 30.49 &30.50&+0.01\\ 32 & \textbf{30.81} &30.84&+0.03\\ 64 & 30.59 & \textbf{32.50}&+1.91\\ 128 & 30.21 & 32.40&\textbf{+2.19}\\ \bottomrule \end{tabular} \label{tab:scale-up} \end{table} In general, a higher number of experts means higher model capacity and better performance. However, for tasks with limited data, there exists a performance upper bound when scaling up MoE models. We take a deep dive into the ability of MoEC to raise this upper bound. As shown in Table~\ref{tab:scale-up}, for the MoE baseline, 32 experts is the upper bound, which means that continuing to increase the number of experts does not bring any gain to the model. Our MoEC not only has a performance advantage over the MoE baseline with the same number of experts, but also raises the upper bound from 32 to 64 experts. As the number of experts increases, MoEC brings larger gains, because it can fully exercise its ability to solve the severe overfitting and sparse allocation problems. With these two problems mitigated, the superiority of the large-scale MoE model can be better exploited, thereby raising the upper bound of MoE models. With the help of MoEC, we can try to build sparse models with more experts. \section{Conclusion} In our work, we point out the overfitting and sparse data allocation problems of large-scale MoE models and propose a novel training strategy, MoEC, which organizes experts into clusters. Each expert gets more abundant and diverse training samples; in this way, the training data of each expert is kept sufficient, thereby alleviating overfitting. We also propose cluster-level expert dropout as a regularization method.
We conduct experiments on machine translation and natural language understanding tasks. Experimental results show that MoEC improves performance and alleviates the problems caused by scaling up experts, without changing the model structure or routing strategy. The superiority of the large-scale MoE model can be better exploited with MoEC, thereby raising the upper bound of MoE models. With the help of MoEC, we can try to build sparse models with more experts. \bigskip \bigskip
\section{Introduction} \label{Introduction} Data-centric AI is an emerging topic that focuses on engineering data to develop AI applications with off-the-shelf machine learning (ML) models \cite{landingai}. Previous efforts are mainly model-centric AI, which assumes a static environment in which 1) data collection and engineering are done, and 2) continuously developing ML models to achieve high performance on test sets is the main target \cite{eyuboglu2022dcbench}. However, real-world AI applications face more complicated scenarios, which cannot be adequately addressed by model-centric AI alone. For instance, researchers have to spend a lot of time on data preparation, including data labeling \cite{chew2019smart}, error detection \cite{krishnan2017boostclean}, etc. Meanwhile, they also need to monitor data to detect distribution drift so as to update models in time \cite{huang2021modelci}. Treating these issues only from a model view leads to sub-optimal solutions. Therefore, to further improve and democratize AI applications, many efforts are now turning to data-centric AI, or to combining model-centric and data-centric approaches \cite{landingai}. Though the concept of data-centric AI has been proposed very recently, many pioneering studies whose core contributions lie in data engineering have already appeared \cite{sener2018active, xu2021dataclue}. Among them, one vital direction is active learning (AL) \cite{ren2020survey}. The motivation of AL is to reduce manual labeling efforts while maintaining and even improving ML models' performance \cite{wang2014new, sener2018active, gal2017deep, ducoffe2018adversarial, caramalau2021sequential, ash2020deep, agarwal2020contextual, vzliobaite2013active, loy2012stream}. Specifically, it is well-known that ML models are very data-hungry. Therefore, to reach a high performance (e.g., accuracy) that meets application requirements, practitioners often need to label a large amount of data during data collection.
This process is extremely time-consuming and labor-intensive and thus often becomes the bottleneck of ML application development. To cope with this issue, AL selects the most representative yet diverse training samples from a large training data pool by utilizing AL strategies. It then sends only the selected samples to an oracle (e.g., human annotators) for labeling, and ML models are trained only on these sub-datasets. By doing so, we can still obtain an ML model with competitive performance while substantially reducing labeling and training costs. However, utilizing AL methods is a non-trivial task. Essentially, applying AL to AI application development is not simply searching for, selecting, or implementing AL algorithms. Instead, users have to build a backend to run the AL pipeline, tailored to their applications and their own environment (e.g., a private cluster or AWS). In other words, they need to undertake much repetitive engineering work with boilerplate code. Moreover, users have to consider efficiency and cost, as AL often runs on a vast dataset, and some AL algorithms (e.g., committee-based \cite{dagan1995committee, melville2004diverse}) require running more than one ML model for data selection. Overlooking these issues results in long processing times and additional cost. Though several open-source AL tools \cite{modal, deepal, libact, alipy} lower the barrier to applying AL, they cannot meet the efficiency requirement. To address these issues, we propose to build an efficient backend for AL. Our AL system, named Active-Learning-as-a-Service (ALaaS) (see Figure \ref{fig:system-arch}), is able to run AL strategies on large datasets efficiently by utilizing a single device or multiple distributed devices. Specifically, it adopts a server-client architecture to perform AL tasks. As a result, the system can be easily installed on both laptops and the public cloud.
After installation, users can start the system with a simple configuration file by following our templates. The system then runs AL tasks in an efficient pipeline manner. Meanwhile, further acceleration techniques such as data caching and batching \cite{crankshaw2017clipper, zhang2020mlmodelci, zhang2020hysia} are utilized to speed up the AL process. In addition, our system also considers accessibility and modularity, so that non-experts can use the AL strategies stored in our AL zoo with ease, and experts can contribute more advanced AL strategies for more scenarios. Experiments show that our ALaaS outperforms all other baselines in terms of latency and throughput. Further ablation studies demonstrate the effectiveness of our design and reveal more insightful conclusions. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/workflow_v2.pdf} \caption{ALaaS architecture. Our system adopts a server-client architecture, which can be deployed easily. It also supports various AL strategies, different model zoos, and serving engines.} \label{fig:system-arch} \end{figure*} \section{Related Work} This section presents the related work in three categories: active learning (AL) algorithms and tools, data-centric AI, and MLOps. \subsection{AL Algorithms and Tools} We categorize AL strategies into three classes: diversity-based, uncertainty-based, and hybrid sampling. Diversity-based methods \cite{yang2015multi, sener2018active} are designed to select the most informative samples that represent the whole dataset. Uncertainty-based methods \cite{wang2014new, roth2006margin, gal2017deep} aim to select the samples that cannot be identified confidently by the current ML models and then use these samples to further improve the models. Hybrid methods \cite{huang2010active, beluch2018power} combine both of the above. Our system supports all of these methods and runs them more efficiently.
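As a concrete illustration of the uncertainty-based family, least confidence sampling \cite{sequential1994david} scores each unlabeled sample by the probability of its most likely class and queries the $k$ lowest-scoring ones. A minimal Python sketch (a generic illustration with invented names, not the ALaaS implementation):

```python
# Least-confidence sampling sketch (generic illustration, not ALaaS code):
# rank unlabeled samples by how unsure the model is about its top
# prediction, then pick the k least confident ones for labeling.

def least_confidence_select(probs, k):
    """probs: per-sample class-probability lists from the current model.
    Returns indices of the k samples with the lowest top-class probability."""
    confidence = [(max(p), i) for i, p in enumerate(probs)]
    confidence.sort()  # least confident samples come first
    return [i for _, i in confidence[:k]]

# Example: sample 1 (top probability 0.4) is the most uncertain, then sample 2.
probs = [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25], [0.6, 0.3, 0.1]]
print(least_confidence_select(probs, 2))  # -> [1, 2]
```

Diversity-based strategies replace the confidence score with a coverage criterion (e.g., distance to already-selected samples), and hybrid methods combine both signals.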
Many open-source AL tools have been developed to benefit both academia and industry, including ModAL \cite{modal}, DeepAL \cite{deepal}, Libact \cite{libact}, and ALiPy \cite{alipy}. Our ALaaS is inspired by these tools and further improves AL efficiency and accessibility by adopting the MLOps concept. The detailed comparison is summarized in Table~\ref{tab:open-source-tool-compare}. \begin{table}[t] \centering \caption{Comparison of Active Learning (AL) open-source tools. Our ALaaS provides a Machine-Learning-as-a-Service experience and greatly improves AL efficiency.} \label{tab:open-source-tool-compare} \vspace{7pt} \centering \adjustbox{max width=\textwidth}{ \begin{tabular}{lcccccc} \toprule \begin{tabular}[c]{@{}c@{}}AL \\Open-source Tool\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pipelined \\Data Processing\end{tabular} & \begin{tabular}[c]{@{}c@{}}Elastic \\AL Serving\end{tabular} & \begin{tabular}[c]{@{}c@{}}Server-Client \\Architecture\end{tabular} & PyPI Install & Data Cache & AL Zoo \\ \midrule DeepAL \cite{deepal} & & & & & & \checkmark \\ ModAL \cite{modal} & & & & \checkmark & & \checkmark \\ ALiPy \cite{alipy} & & & & \checkmark & & \checkmark \\ libact \cite{libact} & & & & \checkmark & & \checkmark \\ \textbf{ALaaS (Ours)}& \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \bottomrule \end{tabular}} \end{table} \subsection{Data-centric AI} Data-centric AI is proposed to improve AI application performance by engineering datasets rather than only focusing on models. The recent data-centric AI competition and workshop \cite{landingai} from Landing.ai demonstrate many exciting studies from both academia and industry. Inspired by this pioneering work, many data-centric methods have been proposed for different areas, including NLP \cite{xu2021dataclue, seo2021automatic}, CV \cite{huang2021ymir, chakrabortyfirst}, robotics \cite{lin2022roboflow}, etc.
Also, a new benchmark \cite{eyuboglu2022dcbench} has been built to push forward data-centric AI research. To the best of our knowledge, ALaaS is the first MLOps system for efficient AL from the data-centric view. \subsection{MLOps} MLOps (Machine Learning Operations) aims to streamline ML model development and reduce AI application maintenance cost. Many MLOps systems have been proposed for both data-centric AI and model-centric AI. From a data-centric view, labeling tools (e.g., labelme \cite{russell2008labelme}), data cleaning tools (e.g., ActiveClean \cite{krishnan2016activeclean}), data drift monitors, and so on, can all be regarded as MLOps systems. From a model-centric view, there are model store systems \cite{vartak2016modeldb}, model continuous integration tools \cite{zhang2020mlmodelci, renggli2019continuous}, training platforms \cite{jiang2020unified}, deployment platforms \cite{chen2018tvm}, etc. Different from these systems, ALaaS is designed specifically to run AL tasks more efficiently. In addition, tech giants have started to build end-to-end cloud platforms for MLOps (e.g., TFX \cite{baylor2017tfx}, SageMaker \cite{das2020amazon}, Ludwig \cite{molino2019ludwig}). Our ALaaS can serve as a plugin complementary to these systems. \section{System Design and Architecture} This section first highlights our Active-Learning-as-a-Service (ALaaS) with three key features, then details the design of the core modules of the system, as shown in Figure \ref{fig:system-arch}. \subsection{ALaaS Highlights} We highlight three key features of our system: efficiency, accessibility, and modularity. These features are also our design principles, which lead the implementation to consider both experts (e.g., data scientists and machine learning (ML) engineers) and non-experts (e.g., customers with little domain knowledge) at all times.
\textbf{Efficiency.} Active learning (AL) often faces large-scale datasets to be labeled \cite{ren2020survey}, and some AL strategies even employ multiple computationally intensive deep learning (DL) models (e.g., Query-By-Committee \cite{dagan1995committee, melville2004diverse}). Thus, it is critical to process these datasets and models efficiently to accelerate ML application development and reduce users' cost of applying AL. Our ALaaS offers a highly efficient AL service by employing a range of optimization techniques, including pipelined processing \cite{narayanan2019pipedream}, adoption of ML serving backends \cite{trtserving}, etc. \textbf{Accessibility.} To further lower the application barrier and improve the adoption rate, an AL system should ensure that AL non-experts can easily use it with minimal effort and without writing much code. Our ALaaS follows this principle and enables a smooth user experience by implementing a containerized AL service with rich configuration templates to help users get started quickly. \textbf{Modularity.} AL is evolving fast, especially driven by advances in deep learning, which requires a large amount of data to train. Making AL accessible should not hinder its advanced use by AL or ML experts. Therefore, our system is designed in a highly modular manner, enabling experts to prototype, extend, and deploy state-of-the-art (SOTA) AL methods with ease. \subsection{ALaaS Architecture} The system adopts a server-client architecture to abstract complex AL algorithms into web-based services, enabling an out-of-the-box user experience. Besides, our system provides a modular data manager and an AL strategy zoo, decoupling two key processes in AL utilization: large-scale data operations (e.g., data indexing and storage) and AL strategy development and selection. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/server_client_arch_v3.pdf} \caption{A deployed ALaaS system.
The AL client sends data URIs to the AL server, where the data will be downloaded. Then the AL server sends data samples to different workers for AL processing.} \label{fig:server-client-design} \end{figure*} \textbf{Server-Client}. The server-client architecture makes AL accessible to users at different levels, ranging from domain experts to ML beginners with almost no background knowledge. It can be deployed to a personal laptop as well as a public cloud. We take an ALaaS deployment on AWS \cite{aws} (see Figure \ref{fig:server-client-design}) as an example to detail the whole workflow. First, users only need to prepare a configuration file, including basic settings like the dataset path and AL method, by following the provided templates, as shown in Figure \ref{fig:config-example}. Then, with very few lines of code (LoCs), users can start both the AL client and the AL server. Next, users push their unlabeled datasets, which can be stored either on the local disk or in AWS S3 \cite{awss3}, to the AL server. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/server_client_api_example_v3.pdf} \caption{An AL service can be easily configured and started with YML files.} \label{fig:config-example} \end{figure*} After getting the dataset Uniform Resource Identifier (URI) from the AL client, the AL server downloads the dataset and processes it with the specified AL strategy in a pipeline manner, as shown in Figure \ref{pipeline-design}. With this frustratingly simple optimization, processing can become 10x faster than with other open-source platforms (see Section \ref{compare-open-source}). Meanwhile, the AL server indexes every sample in the dataset by assigning it a unique ID with the help of the data manager. These IDs are later utilized by the AL strategies. Finally, the server distributes the downloaded samples to optimized inference workers with ML serving backends to perform inference.
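The pipelined processing mentioned above can be pictured as a producer-consumer chain: while the inference worker scores one batch, the next batch is already being downloaded, so the stages overlap instead of running strictly one after another. A toy sketch of this stage-level parallelism (invented stage bodies, not the actual ALaaS code):

```python
# Toy sketch of stage-level pipelining (not the actual ALaaS code): a
# download stage and a scoring stage run in parallel, connected by a
# bounded queue, so batch i+1 downloads while batch i is being scored.
import queue
import threading

def download(batches, out_q):
    for b in batches:
        out_q.put([x * 2 for x in b])  # stand-in for fetch + preprocess
    out_q.put(None)                     # end-of-stream marker

def score(in_q, results):
    while (b := in_q.get()) is not None:
        results.append(sum(b))          # stand-in for model inference

batches = [[1, 2], [3, 4], [5, 6]]
q, results = queue.Queue(maxsize=2), []
producer = threading.Thread(target=download, args=(batches, q))
consumer = threading.Thread(target=score, args=(q, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # -> [6, 14, 22]
```

The bounded queue caps memory use, and running several consumer threads per stage is the natural extension to multiple inference workers.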
According to the pre-defined AL strategy, the AL server then makes decisions and generates a report including the URIs of the selected samples to be labeled. As a result, the AL server only needs to return these URIs to the AL client, avoiding transferring the selected samples themselves. \begin{figure*}[b] \centering \includegraphics[width=1.0\textwidth]{fig/alaas_pipeline_v2.pdf} \caption{Dataflow comparison between conventional pool-based active learning methods (a), (b) and the proposed ALaaS (c). These workflows show how data flows through machines in multiple rounds of AL with different methods. A red box represents a data sample at the download stage, a blue box a data sample at the process stage, and a green box a data sample at the AL inference stage; a box with a diagonal fill indicates no processing. The numbers inside the boxes indicate different rounds of AL.} \label{pipeline-design} \end{figure*} \textbf{Data Manager.} The data manager manages the lifecycle of datasets in our system. First, it accepts users' datasets and persists their metadata (e.g., name, owner, etc.) for data housekeeping. Second, while the system is running, it indexes data samples to avoid redundant data movement and batches data for efficient GPU processing. Meanwhile, it provides rich data transformation functions for different tasks such as NLP, CV, and audio. Moreover, for different kinds of AL methods, the data manager can attach the corresponding processing methods to improve usability. \textbf{AL Strategy Zoo.} The AL strategy zoo abstracts and stores many AL strategies, including uncertainty-based, Bayesian, density-based, batch-mode, etc. It also provides a base class for advanced users to inherit from and extend AL to new scenarios. \textbf{Other Utilities.} To further lower the barrier of using AL and improve efficiency, the system offers many useful utility functions.
For example, as shown in Figure \ref{fig:system-arch}, the \textbf{model repository} is designed to connect to public model hubs like HuggingFace \cite{Wolf_Transformers_State-of-the-Art_Natural_2020} and TorchHub \cite{pytorchhub} and obtain pre-trained models from them. Second, as shown in Figure \ref{fig:server-client-design}, a data cache is employed to improve AL computation efficiency, and workers with a \textbf{serving engine} call different ML serving backends to speed up ML model inference. \section{System Evaluation} This section presents a quantitative evaluation of our system. We first compare our system with other open-source platforms. Then we benchmark our system from different perspectives to demonstrate its efficiency and accessibility. \subsection{Evaluation setup} \textbf{Hardware \& Software.} We evaluate the system on AWS EC2 and a MacBook laptop. The backend inference software is Triton Inference Server \cite{trtserving}. \textbf{Dataset.} We use the CIFAR-10 dataset \cite{cifar} to conduct experiments. It includes 50,000 training images and 10,000 test images. \textbf{Model.} We use the widely deployed ResNet-18 \cite{he2016deep} model to evaluate system performance as well as to benchmark different AL strategies and AL settings. \subsection{Comparison with other AL open-source tools} \label{compare-open-source} The first experiment compares the efficiency of ALaaS with that of the other baselines. \textbf{Settings.} In this experiment, we simulate a one-round AL process, which applies AL methods to scan the whole dataset and generate a sub-pool of samples that will be used to further improve an existing ML model. Specifically, we first train an ML model with 10,000 randomly selected images from the CIFAR-10 training set as the initial model. Next, we use the different AL tools to serve the trained model on an AWS 3x.large CPU/GPU EC2 instance.
For all tools, we use the same AL strategy, least confidence sampling \cite{sequential1994david}. Finally, we utilize these tools to select 10,000 samples from the remaining 40,000 images in the training set and compare their latency and throughput. \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=0.4\textwidth]{exp/exp_acc_budget.pdf} \caption{Top-1 and top-5 accuracy (ACC) with different AL budgets.} \label{fig:ablation-al-budget} \end{wrapfigure} \textbf{Results \& Insights.} The results are shown in Table \ref{tab:open-source-tool-perf-eval}. Compared to the other tools, our ALaaS achieves the lowest latency and highest throughput while still maintaining the same accuracy. This efficiency improvement can be attributed to two factors. First, our ALaaS implements stage-level parallelism, which greatly reduces device idle time. Second, ALaaS adopts existing ML inference servers to accelerate model inference. Furthermore, we evaluate the intermediate results of our ALaaS with different budgets. As shown in Figure \ref{fig:ablation-al-budget}, as the budget increases, more samples are selected and the accuracy also increases. This further proves the effectiveness of our system. \begin{table}[h] \centering \caption{Performance evaluation among different AL open-source tools.
Compared to all baselines, ALaaS has the lowest latency and highest throughput.} \label{tab:open-source-tool-perf-eval} \vspace{7pt} \adjustbox{max width=\textwidth}{ \begin{tabular}{lcccc} \toprule AL Open-source Tool & Top-1 Accuracy (\%) & Top-5 Accuracy (\%) & One-round AL Latency (sec) & End-to-end Throughput (Image/sec) \\ \midrule DeepAL \cite{deepal} & 86.90 & 89.67 & 2287.00 $\pm$ 179.37 & 17.49 \\ ModAL \cite{modal} & 86.90 & 85.72 & 2006.95 $\pm$ 37.98 & 19.93 \\ ALiPy \cite{alipy} & 86.90 & 83.46 & 2410.85 $\pm$ 77.81 & 16.59 \\ libact \cite{libact} & 85.14 & 81.23 & 1771.33 $\pm$ 109.77 & 22.58 \\ \textbf{ALaaS (Ours)} & 86.90 & 88.12 & 552.45 $\pm$ 30.385 & 72.40 \\ \bottomrule \end{tabular}} \end{table} \subsection{ALaaS Characterization} We further benchmark our ALaaS with different system settings. The first experiment evaluates the different AL strategies re-implemented in our system. The second experiment explores the impact of batch size on system efficiency. \subsubsection{AL strategy impact} Our ALaaS already provides many out-of-the-box AL strategies in its strategy zoo. This experiment evaluates these strategies, as re-implemented in ALaaS, from the accuracy and efficiency perspectives to provide more insights. All settings are the same as in the previous experiment. \textbf{Results \& Insights.} The accuracy of the different methods is shown in Figure \ref{fig:ablation-al-strategy-acc}. Core-set \cite{sener2018active} achieves the highest accuracy, unsurprisingly, as it is designed for CNNs in computer vision (CV) tasks. Meanwhile, K-Center Greedy (KCG) \cite{alcluster2004nguyen} and Least Confidence (LC) \cite{sequential1994david} achieve the second- and third-highest accuracy despite being proposed long ago. This tells us that even in the deep learning (DL) era, traditional methods still play a vital role and can cooperate with DL very well. The throughput is shown in Figure \ref{fig:ablation-al-strategy-throughput}.
LC has the highest throughput, while Core-set achieves the lowest. Combining the accuracy (Figure \ref{fig:ablation-al-strategy-acc}) and throughput (Figure \ref{fig:ablation-al-strategy-throughput}) results, we can conclude that the accuracy improvement of Core-set comes from its computationally heavy design, while LC balances the trade-off between accuracy and efficiency well. In summary, ALaaS provides many methods with clear accuracy and efficiency reports, so users can choose among them according to their own scenarios. \begin{figure}[h] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_acc_strategy.pdf} \caption{Top-1 and top-5 accuracy (ACC)} \label{fig:ablation-al-strategy-acc} \end{subfigure}\hfill% \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_qps_strategy.pdf} \caption{AL query throughput} \label{fig:ablation-al-strategy-throughput} \end{subfigure}\hfill% \caption{Performance of one-round AL for ResNet-18 \cite{he2016deep} on the CIFAR-10 dataset \cite{cifar} using different AL strategies (i.e., Least Confidence (LC) \cite{sequential1994david}, Margin Confidence (MC) \cite{margconf2001tobias}, Ratio Confidence (RC) \cite{settles2009active}, Entropy Sampling (ES) \cite{settles2009active}, K-Center Greedy (KCG) \cite{alcluster2004nguyen}, K-Means Sampling (KMeans) \cite{alcluster2004nguyen}, Core-set \cite{sener2018active}, and Diverse Mini-Batch (DBAL) \cite{diverse2019fedor}). The lower-bound baseline uses the random sampling (Random) strategy, while the upper-bound baseline uses the entire dataset for training.} \label{fig:ablation-al-strategy} \end{figure} \subsubsection{Batch size impact} \textbf{Settings.} We evaluate the impact of batch size (BS) in two deployment scenarios: a private server and the AWS cloud. Specifically, we first store the CIFAR-10 dataset on a private FTP server and on AWS S3, respectively.
We then start ALaaS on a laptop to simulate the end-to-end AL process, including downloading data from other devices, pre-processing the data, and selecting samples with an AL strategy. The other settings are the same as in the first experiment. \textbf{Results \& Insights.} Our ALaaS manages the whole process steadily and efficiently in both environments and across different batch sizes, as shown in Figure \ref{fig:ablation-al-infer-bs}. From the figure, we also observe several interesting phenomena. First, BS = 1 and BS = 2 have very close throughput. Second, the increase from BS = 2 to BS = 8 is the most dramatic. Third, after BS = 16, the increase stops. We attribute this to the fact that transmission time accounts for a large proportion of the total processing time when the batch size is small, so the throughput improvement is marginal at the beginning. Then batch computation time becomes the largest part of the total processing time, so the improvement is dramatic. Finally, when the batch size reaches the computation capacity, the increase stops. \begin{figure}[t] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_ftp_param_acc.pdf} \caption{Images stored on private FTP server} \label{fig:ablation-al-infer-bs-ftp} \end{subfigure}\hfill% \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_s3_param_acc.pdf} \caption{Images stored on AWS S3} \label{fig:ablation-al-infer-bs-s3} \end{subfigure}\hfill% \caption{End-to-end throughput of one-round pool-based AL for ResNet-18 \cite{he2016deep} on CIFAR-10 \cite{cifar} with different AL inference batch sizes.
Storing images on a private FTP server (Figure \ref{fig:ablation-al-infer-bs-ftp}) and on S3 (Figure \ref{fig:ablation-al-infer-bs-s3}) both show a monotonic increase of end-to-end throughput over the inference batch size.} \label{fig:ablation-al-infer-bs} \end{figure} \section{Conclusion} This paper presents a new MLOps system, named ALaaS, for data-centric AI. ALaaS adopts the philosophy of Machine-Learning-as-a-Service and implements a server-client architecture, so users can use AL as a web service. Meanwhile, it abstracts the AL process into multiple components and develops several modules, including a data manager, an AL strategy zoo, and utility functions, to support them. More importantly, ALaaS employs stage-level parallelism (pipelining), caching, and batching to improve AL running efficiency. Experiments show that our system has lower latency and higher throughput than all other baselines. We release our code to facilitate AL research.
AL aims to reduce manual labeling efforts while maintaining and even improving AI models' performance. Specifically, AL selects the most representative or diverse training samples from a large training data pool by utilizing carefully designed strategies and then only sends the selected samples to an oracle (e.g., human annotators) to do the labeling. By doing so, the labeling samples will be reduced a lot, whereas AI models can still obtain a high accuracy as the noise and unimportant samples have been removed. However, utilizing these AL methods is a non-trivial task. Essentially, applying AL to DL applications development is not just simply searching for several AL strategies and implementing the algorithms. Instead, researchers and practitioners have to build an efficient infrastructure for running AL tailored for their applications in their own environment (e.g., a private cluster and AWS), facing full of repetitive engineering work and boilerplate code, as AL always runs on a vast dataset. Besides, given the limited budget, they must wisely choose a suitable AL strategy to reduce human labor costs while maximizing model performance as well as decide the spent allocation on human labeling and running AL. Unfortunately, though many open-source AL python tools lower the barrier to applying AL, they have not considered either efficiency or selection issues. To address these issues, we propose to build an efficient system for AL, inspired by the recently proposed data-centric AI, which aims to build an AI system for data engineering. The AL system should consider efficiency and is able to run AL strategies on large datasets efficiently by utilizing single or distributed multiple devices. Further, it should automatically select AL strategies for different applications and allocate the users' budget. 
\begin{figure*}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\textwidth]{fig/workflow_v1.pdf}} \caption{ALaaS architecture.} \label{system-arch} \end{center} \vskip -0.2in \end{figure*} \section{Related Work} This section presents the relate work, which is classified into three categories, active learning (AL) algorithms and tools, data centric AI, and MLOps. \subsection{AL Algorithms and Tools} \subsection{Data Centric AI} \subsection{MLOps} \section{System Design} \subsection{System Overview} \subsection{Figures} You may want to include figures in the paper to illustrate your approach and results. Such artwork should be centered, legible, and separated from the text. Lines should be dark and at least 0.5~points thick for purposes of reproduction, and text should not appear on a gray background. Label all distinct components of each figure. If the figure takes the form of a graph, then give a name for each axis and include a legend that briefly describes each curve. Do not include a title inside the figure; instead, the caption should serve this function. Number figures sequentially, placing the figure number and caption \emph{after} the graphics, with at least 0.1~inches of space before the caption and 0.1~inches after it, as in \cref{icml-historical}. The figure caption should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. You may float figures to the top or bottom of a column, and you may set wide figures across both columns (use the environment \texttt{figure*} in \LaTeX). Always place two-column figures at the top or bottom of the page. \subsection{Algorithms} If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic'' environments to format pseudocode. These require the corresponding stylefiles, algorithm.sty and algorithmic.sty, which are supplied with this package. \cref{alg:example} shows an example. 
\begin{algorithm}[tb] \caption{Bubble Sort} \label{alg:example} \begin{algorithmic} \STATE {\bfseries Input:} data $x_i$, size $m$ \REPEAT \STATE Initialize $noChange = true$. \FOR{$i=1$ {\bfseries to} $m-1$} \IF{$x_i > x_{i+1}$} \STATE Swap $x_i$ and $x_{i+1}$ \STATE $noChange = false$ \ENDIF \ENDFOR \UNTIL{$noChange$ is $true$} \end{algorithmic} \end{algorithm} \subsection{Tables} You may also want to include tables that summarize material. Like figures, these should be centered, legible, and numbered consecutively. However, place the title \emph{above} the table with at least 0.1~inches of space before the title and the same after it, as in \cref{sample-table}. The table title should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. \begin{table}[t] \caption{Classification accuracies for naive Bayes and flexible Bayes on various data sets.} \label{sample-table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule Data set & Naive & Flexible & Better? \\ \midrule Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\ Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\ Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\ Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\ Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\ Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\ Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\ Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} Tables contain textual material, whereas figures contain graphical material. Specify the contents of each row and column in the table's topmost row. Again, you may float tables to a column's top or bottom, and set wide tables across both columns. Place two-column tables at the top or bottom of the page. 
\section{Benchmarks} \section{Introduction} \label{Introduction} Data-centric AI is an emerging topic that focuses on engineering data to develop AI applications with the off-the-shelf machine learning (ML) models \cite{landingai}. Previous efforts are mainly model-centric AI that assumes a static environment. In this environment, 1) the data collection and engineering are done, 2) and continuously developing ML models to achieve high performance on test sets is the main target \cite{eyuboglu2022dcbench}. However, real-world AI applications are facing more complicated scenarios, which can not be adequately addressed by model-centric AI. For instance, researchers have to spend a lot of time on data preparation, including data labeling \cite{chew2019smart}, error detection \cite{krishnan2017boostclean}, etc. Meanwhile, they also need to monitor data to detect the distribution drift so as to update models in time \cite{huang2021modelci}. Treating these issues only from a model view will lead to a sub-optimal solution. Therefore, to further improve and democratize AI applications, a lot of efforts are now turning to data-centric or combining model-centric and data-centric \cite{landingai}. Though the concept of data-centric AI is proposed very recently, many pioneering studies whose core contributions lie in data engineering have already been proposed \cite{sener2018active, xu2021dataclue}. Among them, one vital direction is active learning (AL) \cite{ren2020survey}. The motivation of AL is to reduce manual labeling efforts while maintaining and even improving ML models' performance \cite{wang2014new, sener2018active, gal2017deep, ducoffe2018adversarial, caramalau2021sequential, ash2020deep, agarwal2020contextual, vzliobaite2013active, loy2012stream}. Specifically, it is well-known that ML models are very data-hungry. 
Therefore, to reach a high performance (e.g., accuracy) that meets application requirements, people always need to label a large amount of data during data collection. This process is extremely time-consuming and labor-intensive and thus often becomes the bottleneck of ML application development. To cope with the issue, AL selects the most representative yet diverse training samples from a large training data pool by utilizing AL strategies. Then it only sends the selected samples to an oracle (e.g., human annotators) to label. Next, ML models will only be trained on these sub-datasets. By doing so, we can still obtain an ML model with competitive performance but save labeling and training costs a lot. However, utilizing AL methods is a non-trivial task. Essentially, applying AL to AI application development is not simply searching for, selecting, or implementing the AL algorithms. Instead, users have to build a backend to run the AL pipeline, tailored for their applications in their own environment (e.g., a private cluster and AWS). In other words, they need to undertake much repetitive engineering work with boilerplate code. Moreover, users have to consider the efficiency and cost issues, as AL often runs on a vast dataset, and some AL algorithms (e.g., committee-based \cite{dagan1995committee, melville2004diverse}) require running more than one ML model for data selection. Under-consideration will result in a long process time and additional cost. Though several open-source AL tools \cite{modal, deepal, libact, alipy} lower the barrier to applying AL, they can not meet the efficiency requirement. To address these issues, we propose to build an efficient backend for AL. Our AL system, named Active-Learning-as-a-Service (ALaaS) (see Figure \ref{fig:system-arch}), is able to run AL strategies on large datasets efficiently by utilizing single or distributed multiple devices. Specifically, it adopts server-client architecture to perform AL tasks. 
As a result, the system can be easily installed on both laptops and the public cloud. After installation, users can start the system with a simple configuration file by following our templates. The system then runs AL tasks in an efficient pipeline manner. Meanwhile, further acceleration techniques such as data caching and batching \cite{crankshaw2017clipper, zhang2020mlmodelci, zhang2020hysia} are utilized to speed up the AL process. In addition, our system emphasizes accessibility and modularity, so that non-experts can use the AL strategies stored in our AL zoo with ease, while experts can propose more advanced AL strategies for new scenarios. Experiments show that ALaaS outperforms all other baselines in terms of latency and throughput. Further ablation studies demonstrate the effectiveness of our design and reveal more insights.

\begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/workflow_v2.pdf} \caption{ALaaS architecture. Our system adopts a server-client architecture, which can be deployed easily. It also supports various AL strategies, different model zoos, and serving engines.} \label{fig:system-arch} \end{figure*}

\section{Related Work}

This section presents related work in three categories: active learning (AL) algorithms and tools, data-centric AI, and MLOps.

\subsection{AL Algorithms and Tools}

We categorize AL strategies into three classes: diversity-based, uncertainty-based, and hybrid sampling. Diversity-based methods \cite{yang2015multi, sener2018active} select the samples that best represent the whole dataset. Uncertainty-based methods \cite{wang2014new, roth2006margin, gal2017deep} select the samples that current ML models cannot identify confidently, and then use these samples to further improve the models. Hybrid methods \cite{huang2010active, beluch2018power} combine the two approaches.
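To make the diversity-based family concrete, here is a minimal k-center-greedy-style selector on plain feature vectors. This is an illustrative sketch, not the exact Core-set implementation of \cite{sener2018active}:

```python
import numpy as np

def k_center_greedy(features, budget, seed_idx=0):
    """Greedily pick `budget` points that spread over the feature space.

    Starting from `seed_idx`, repeatedly add the pool point whose
    distance to its nearest already-selected center is largest.
    """
    selected = [seed_idx]
    # dist[i] = distance from point i to its nearest selected center.
    dist = np.linalg.norm(features - features[seed_idx], axis=1)
    while len(selected) < budget:
        nxt = int(dist.argmax())
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Two tight clusters of 1-D "embeddings"; a budget of 2 covers both.
feats = np.array([[0.0], [0.1], [10.0], [10.1]])
print(k_center_greedy(feats, budget=2))  # [0, 3]
```

Uncertainty-based scorers are cheap per-sample computations, while diversity-based selectors like this one repeatedly scan the pool, which is one reason efficiency matters at scale.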
Our system supports all of these methods and runs them more efficiently. Many open-source AL tools have been developed to benefit both academia and industry, including ModAL~\cite{modal}, DeepAL~\cite{deepal}, libact~\cite{libact}, and ALiPy~\cite{alipy}. ALaaS is inspired by these tools and further improves AL efficiency and accessibility by adopting the MLOps concept. A detailed comparison is summarized in Table~\ref{tab:open-source-tool-compare}.

\begin{table}[t]
\centering
\caption{Comparison of Active Learning (AL) open-source tools. Our ALaaS provides a Machine-Learning-as-a-Service experience and greatly improves AL efficiency.}
\label{tab:open-source-tool-compare}
\vspace{7pt}
\adjustbox{max width=\textwidth}{
\begin{tabular}{lcccccc}
\toprule
\begin{tabular}[c]{@{}c@{}}AL \\Open-source Tool\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pipelined \\Data Processing\end{tabular} & \begin{tabular}[c]{@{}c@{}}Elastic \\AL Serving\end{tabular} & \begin{tabular}[c]{@{}c@{}}Server-Client \\Architecture\end{tabular} & PyPI Install & Data Cache & AL Zoo \\
\midrule
DeepAL \cite{deepal} & & & & & & \checkmark \\
ModAL \cite{modal} & & & & \checkmark & & \checkmark \\
ALiPy \cite{alipy} & & & & \checkmark & & \checkmark \\
libact \cite{libact} & & & & \checkmark & & \checkmark \\
\textbf{ALaaS (Ours)} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\bottomrule
\end{tabular}}
\end{table}

\subsection{Data-centric AI}

Data-centric AI aims to improve AI application performance by engineering datasets rather than focusing only on models. The recent data-centric AI competition and workshop \cite{landingai} from Landing.ai showcased many exciting studies from both academia and industry. Inspired by this pioneering work, many data-centric methods have been proposed for different areas, including NLP \cite{xu2021dataclue, seo2021automatic}, CV \cite{huang2021ymir, chakrabortyfirst}, robotics \cite{lin2022roboflow}, etc.
Also, a new benchmark \cite{eyuboglu2022dcbench} has been built to push forward data-centric AI research. To the best of our knowledge, ALaaS is the first MLOps system for efficient AL from the data-centric view.

\subsection{MLOps}

MLOps (Machine Learning Operations) aims to streamline ML model development and reduce AI application maintenance costs. Many MLOps systems have been proposed for both data-centric and model-centric AI. From a data-centric view, labeling tools (e.g., labelme \cite{russell2008labelme}), data cleaning tools (e.g., ActiveClean \cite{krishnan2016activeclean}), data drift monitors, and so on can all be regarded as MLOps systems. From a model-centric view, there are model store systems \cite{vartak2016modeldb}, model continuous integration tools \cite{zhang2020mlmodelci, renggli2019continuous}, training platforms \cite{jiang2020unified}, deployment platforms \cite{chen2018tvm}, etc. Different from these systems, ALaaS is designed specifically for running AL tasks more efficiently. In addition, tech giants have started to build end-to-end cloud platforms for MLOps (e.g., TFX \cite{baylor2017tfx}, SageMaker \cite{das2020amazon}, Ludwig \cite{molino2019ludwig}). ALaaS can serve as a plugin complementary to these systems.

\section{System Design and Architecture}

This section first highlights three key features of Active-Learning-as-a-Service (ALaaS), then details the design of the system's core modules as shown in Figure~\ref{fig:system-arch}.

\subsection{ALaaS Highlights}

We highlight three key features provided by our system: efficiency, accessibility, and modularity. These features are also our design principles, guiding the implementation to serve both experts (e.g., data scientists and machine learning (ML) engineers) and non-experts (e.g., customers with little domain knowledge).
\textbf{Efficiency.} Active learning (AL) often faces large-scale datasets to be labeled \cite{ren2020survey}, and some AL strategies even employ multiple computationally intensive deep learning (DL) models (e.g., query-by-committee \cite{dagan1995committee, melville2004diverse}). Thus, processing these datasets and models efficiently is critical to accelerating ML application development and reducing users' costs. ALaaS offers a highly efficient AL service by employing many optimization techniques, including pipelined processing \cite{narayanan2019pipedream} and the adoption of ML serving backends \cite{trtserving}.

\textbf{Accessibility.} To further lower the application barrier and improve adoption, an AL system should ensure that AL non-experts can use it with minimal effort and little code. ALaaS follows this principle and enables a smooth user experience by implementing a containerized AL service with rich configuration templates that help users get started quickly.

\textbf{Modularity.} AL is evolving fast, especially driven by advances in deep learning, which requires a large amount of data to train. Making AL accessible should not hinder its advanced use by AL or ML experts. Therefore, our system is designed in a highly modular manner, enabling experts to prototype, extend, and deploy state-of-the-art (SOTA) AL methods with ease.

\subsection{ALaaS Architecture}

The system adopts a server-client architecture to abstract complex AL algorithms into web-based services, enabling an out-of-the-box user experience. Besides, our system provides a modular data manager and an AL strategy zoo, decoupling two key processes of AL utilization: large-scale data operations (e.g., data indexing and storage), and AL strategy development and selection.

\begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/server_client_arch_v3.pdf} \caption{A deployed ALaaS system.
The AL client sends data URIs to the AL server, where the data are downloaded. The AL server then sends data samples to different workers for AL processing.} \label{fig:server-client-design} \end{figure*}

\textbf{Server-Client.} The server-client architecture makes AL accessible to users of all levels, from domain experts to ML beginners with almost no prior knowledge. It can be deployed on a personal laptop as well as on a public cloud. We take an ALaaS deployment on AWS \cite{aws} (see Figure~\ref{fig:server-client-design}) as an example to detail the whole workflow. First, users only need to prepare a configuration file with basic settings such as the dataset path and the AL method, following the provided templates, as shown in Figure~\ref{fig:config-example}. Then, with very few lines of code (LoCs), users can start both the AL client and the AL server. Next, users push their unlabeled datasets, which can be stored either on local disk or in AWS S3 \cite{awss3}, to the AL server.

\begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/server_client_api_example_v2.pdf} \caption{An AL service can be easily configured and started with YML files.} \label{fig:config-example} \end{figure*}

After receiving the dataset Uniform Resource Identifiers (URIs) from the AL client, the AL server downloads the dataset and processes it with the specified AL strategy in a pipeline manner, as shown in Figure~\ref{pipeline-design}. With this simple optimization, processing can become up to 10x faster than with the other open-source platforms (see Section~\ref{compare-open-source}). Meanwhile, the AL server indexes every sample in the dataset by assigning it a unique ID with the help of the data manager; these IDs are later utilized by the AL strategies. Finally, the server distributes the downloaded samples to optimized inference workers with an ML serving backend to perform inference.
According to the pre-defined AL strategy, the AL server then makes its selection and generates a report containing the URIs of the samples to be labeled. As a result, the server only needs to return URIs to the AL client, which avoids downloading the selected samples from the server.

\begin{figure*}[b] \centering \includegraphics[width=1.0\textwidth]{fig/alaas_pipeline_v2.pdf} \caption{Dataflow comparison between conventional pool-based AL methods (a), (b) and the proposed ALaaS (c). These workflows show how data flow across machines in multiple rounds of AL. A red box represents a data sample at the download stage, a blue box a sample at the processing stage, a green box a sample at the AL inference stage, and a box with diagonal fill indicates no processing. The numbers inside the boxes indicate the AL rounds.} \label{pipeline-design} \end{figure*}

\textbf{Data Manager.} The data manager handles the lifecycle of datasets in our system. First, it accepts users' datasets and persists their metadata (e.g., name, owner) for data housekeeping. Second, while the system is running, it indexes data samples to avoid redundant data movement and batches data for efficient GPU processing. Meanwhile, it provides rich data transformation functions for different tasks such as NLP, CV, and audio. Moreover, the data manager can attach the processing methods corresponding to different kinds of AL strategies to improve usability.

\textbf{AL Strategy Zoo.} The AL strategy zoo abstracts and stores many AL strategies, including uncertainty-based, Bayesian, density-based, batch-mode, etc. It also provides a base class that advanced users can inherit from to extend AL to new scenarios.

\textbf{Other Utilities.} To further lower the barrier to using AL and improve efficiency, the system offers many useful utility functions.
For example, as shown in Figure~\ref{fig:system-arch}, the \textbf{model repository} connects to public model hubs such as HuggingFace \cite{Wolf_Transformers_State-of-the-Art_Natural_2020} and TorchHub \cite{pytorchhub} to obtain pre-trained models. Also, as shown in Figure~\ref{fig:server-client-design}, a data cache is employed to improve AL computation efficiency, and workers with a \textbf{serving engine} call different ML serving backends to speed up model inference.

\section{System Evaluation}

This section presents a quantitative evaluation of our system. We first compare our system with other open-source platforms. We then benchmark it from different perspectives to demonstrate its efficiency and accessibility.

\subsection{Evaluation setup}

\textbf{Hardware \& Software.} We evaluate the system on AWS EC2 and on a MacBook laptop. The backend inference software is Triton Inference Server \cite{trtserving}.

\textbf{Dataset.} We use the CIFAR-10 dataset \cite{cifar}, which includes 50,000 training images and 10,000 test images.

\textbf{Model.} We use the widely deployed ResNet-18 model \cite{he2016deep} to evaluate system performance and to benchmark different AL strategies and settings.

\subsection{Comparison with other AL open-source tools} \label{compare-open-source}

The first experiment compares the efficiency of ALaaS with that of the other baselines.

\textbf{Settings.} We simulate a one-round AL process, which applies an AL method to scan the whole dataset and generate a sub-pool of samples that will be used to further improve an existing ML model. Specifically, we first train an initial ML model on 10,000 randomly selected images from the CIFAR-10 training set. Next, we use the different AL tools to serve the trained model on an AWS 3x.large CPU/GPU EC2 instance.
For all tools, we use the same AL strategy, least confidence sampling \cite{sequential1994david}. Finally, we utilize these tools to select 10,000 samples from the remaining 40,000 images in the training set and compare their latency and throughput.

\begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=0.4\textwidth]{exp/exp_acc_budget.pdf} \caption{Top-1 and top-5 accuracy (ACC) with different AL budgets.} \label{fig:ablation-al-budget} \end{wrapfigure}

\textbf{Results \& Insights.} The results are shown in Table~\ref{tab:open-source-tool-perf-eval}. Compared to the other tools, ALaaS achieves the lowest latency and highest throughput while maintaining the same accuracy. This efficiency improvement can be attributed to two factors. First, ALaaS implements stage-level parallelism, which greatly reduces device idle time. Second, ALaaS adopts existing ML inference servers to accelerate model inference. Furthermore, we evaluate the intermediate results of ALaaS under different budgets. As shown in Figure~\ref{fig:ablation-al-budget}, as the budget increases, more samples are selected and the accuracy increases accordingly. This further demonstrates the effectiveness of our system.

\begin{table}[h] \centering \caption{Performance evaluation among different AL open-source tools.
Compared to all baselines, ALaaS has the lowest latency and highest throughput.}
\label{tab:open-source-tool-perf-eval}
\vspace{7pt}
\adjustbox{max width=\textwidth}{
\begin{tabular}{lcccc}
\toprule
AL Open-source Tool & Top-1 Accuracy (\%) & Top-5 Accuracy (\%) & One-round AL Latency (sec) & End-to-end Throughput (Image/sec) \\
\midrule
DeepAL \cite{deepal} & 86.90 & 89.67 & 2287.00 $\pm$ 179.37 & 17.49 \\
ModAL \cite{modal} & 86.90 & 85.72 & 2006.95 $\pm$ 37.98 & 19.93 \\
ALiPy \cite{alipy} & 86.90 & 83.46 & 2410.85 $\pm$ 77.81 & 16.59 \\
libact \cite{libact} & 85.14 & 81.23 & 1771.33 $\pm$ 109.77 & 22.58 \\
\textbf{ALaaS (Ours)} & 86.90 & 88.12 & 552.45 $\pm$ 30.385 & 72.40 \\
\bottomrule
\end{tabular}}
\end{table}

\subsection{ALaaS Characterization}

We further benchmark ALaaS under different system settings. The first experiment evaluates the different AL strategies re-implemented in our system; the second explores the impact of the batch size on system efficiency.

\subsubsection{AL strategy impact}

ALaaS provides many out-of-the-box AL strategies in its strategy zoo. This experiment evaluates these re-implemented strategies from the accuracy and efficiency viewpoints to provide more insights. All settings are the same as in the previous experiment.

\textbf{Results \& Insights.} The accuracy of the different methods is shown in Figure~\ref{fig:ablation-al-strategy-acc}. Core-set \cite{sener2018active} unsurprisingly achieves the highest accuracy, as it is designed for CNNs in computer vision (CV) tasks. Meanwhile, K-Center Greedy (KCG) \cite{alcluster2004nguyen} and Least Confidence (LC) \cite{sequential1994david} achieve the second- and third-highest accuracy, despite being proposed long ago. This tells us that even in the deep learning (DL) era, traditional methods still play a vital role and can cooperate with DL very well. The throughput is shown in Figure~\ref{fig:ablation-al-strategy-throughput}.
LC has the highest throughput, while Core-set achieves the lowest. Combining the accuracy (Figure~\ref{fig:ablation-al-strategy-acc}) and throughput (Figure~\ref{fig:ablation-al-strategy-throughput}) results, we conclude that the accuracy improvement of Core-set comes at the price of its heavy design, while LC balances the trade-off between accuracy and efficiency well. In summary, ALaaS provides many methods with clear accuracy and efficiency reports, so users can choose among them according to their own scenarios.

\begin{figure}[h] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_acc_strategy.pdf} \caption{Top-1 and top-5 accuracy (ACC)} \label{fig:ablation-al-strategy-acc} \end{subfigure}\hfill% \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_qps_strategy.pdf} \caption{AL query throughput} \label{fig:ablation-al-strategy-throughput} \end{subfigure}\hfill% \caption{Performance of one-round AL for ResNet-18 \cite{he2016deep} on the CIFAR-10 dataset \cite{cifar} using different AL strategies (i.e., Least Confidence (LC) \cite{sequential1994david}, Margin Confidence (MC) \cite{margconf2001tobias}, Ratio Confidence (RC) \cite{settles2009active}, Entropy Sampling (ES) \cite{settles2009active}, K-Center Greedy (KCG) \cite{alcluster2004nguyen}, K-Means Sampling (KMeans) \cite{alcluster2004nguyen}, Core-set \cite{sener2018active}, and Diverse Mini-Batch (DBAL) \cite{diverse2019fedor}). The lower-bound baseline uses the random sampling (Random) strategy, while the upper-bound baseline uses the entire dataset for training.} \label{fig:ablation-al-strategy} \end{figure}

\subsubsection{Batch size impact}

\textbf{Settings.} We evaluate the impact of the batch size (BS) in two deployment scenarios: a private server and the AWS cloud. Specifically, we store the CIFAR-10 dataset on a private FTP server and on AWS S3, respectively.
We then start ALaaS on a laptop to simulate the end-to-end AL process, including downloading data from the other devices, pre-processing the data, and selecting samples with an AL strategy. The other settings are the same as in the first experiment.

\textbf{Results \& Insights.} ALaaS manages the whole process in both environments steadily and efficiently across batch sizes, as shown in Figure~\ref{fig:ablation-al-infer-bs}. The figure also reveals several interesting phenomena. First, BS = 1 and BS = 2 have very close throughput. Second, the increase from BS = 2 to BS = 8 is the most dramatic. Third, beyond BS = 16, the increase stops. We attribute this to the fact that transmission time accounts for a large proportion of the total processing time when the batch size is small, so the throughput improvement is marginal at the beginning. Batch computation time then becomes the largest part of the total processing time, so the improvement becomes dramatic. Finally, when the batch size reaches the computation capacity, the increase stops.

\begin{figure}[t] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_ftp_param_acc.pdf} \caption{Images stored on a private FTP server} \label{fig:ablation-al-infer-bs-ftp} \end{subfigure}\hfill% \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{exp/exp_s3_param_acc.pdf} \caption{Images stored on AWS S3} \label{fig:ablation-al-infer-bs-s3} \end{subfigure}\hfill% \caption{End-to-end throughput of one-round pool-based AL for ResNet-18 \cite{he2016deep} on CIFAR-10 \cite{cifar} with different AL inference batch sizes.
Storing images on a private FTP server (Figure~\ref{fig:ablation-al-infer-bs-ftp}) and on S3 (Figure~\ref{fig:ablation-al-infer-bs-s3}) both show a monotonic increase of end-to-end throughput with the inference batch size.} \label{fig:ablation-al-infer-bs} \end{figure}

\section{Conclusion}

This paper presents a new MLOps system, named ALaaS, for data-centric AI. ALaaS adopts the philosophy of Machine-Learning-as-a-Service and implements a server-client architecture, so that users can use AL as a web service. Meanwhile, it abstracts the AL process into multiple components and develops several modules, including a data manager, an AL strategy zoo, and utility functions, to support them. More importantly, ALaaS employs stage-level parallelism (a pipeline manner), caching, and batching to improve AL running efficiency. Experiments show that our system achieves lower latency and higher throughput than all baselines. We release our code to facilitate AL research.
\newpage
\section{Introduction}

The problem of output feedback stabilization is one of deep interest in control theory. It consists of stabilizing the state of a dynamical system that is only partially known at a target point. Although a vast literature tackles this topic (see \cite{AndrieuPraly2009} and references therein), some fundamental problems remain mostly open. The issue can be formulated in the following manner. Let $n$, $m$ and $p$ be positive integers, $f:\mathbb{R}^n\times\mathbb{R}^p\to\mathbb{R}^n$ and $h:\mathbb{R}^n\to\mathbb{R}^m$. For all $u\in C^0(\mathbb{R}_+, \mathbb{R}^p)$, consider the following observation-control system:
\begin{equation}\label{E:system_general}
\left\{
\begin{aligned}
&\dot{x}=f(x, u) \\
&y= h(x)
\end{aligned}
\right.
\end{equation}
where $x$ is the state of the system, $u$ is the control (or input) and $y$ is the observation (or output). We assume that $f$ is continuous and uniformly locally Lipschitz with respect to $x$. We may as well assume that the target point at which we aim to stabilize $x$ is the origin $0\in\mathbb{R}^n$, that $h(0)=0$, and that $f(0, 0)=0$.
\begin{dfntn}[Dynamic output feedback stabilizability]\label{def:stab_out}
System~\eqref{E:system_general} is said to be \emph{locally} (resp. \emph{globally}) \emph{stabilizable by means of a dynamic output feedback} if and only if the following holds. There exist two continuous maps $\nu:\mathbb{R}^q\times\mathbb{R}^p\times\mathbb{R}^m\to\mathbb{R}^q$ and $\varpi:\mathbb{R}^q\times\mathbb{R}^m\to\mathbb{R}^p$ for some positive integer $q$ such that $(0, 0)\in\mathbb{R}^n\times\mathbb{R}^q$ is a locally (resp.
globally) asymptotically stable\footnote{Recall that a dynamical system is said to be asymptotically stable at an equilibrium point with some basin of attraction if and only if each initial condition in the basin of attraction yields at least one solution to the corresponding Cauchy problem, each solution converges to the equilibrium point, and the equilibrium point is Lyapunov stable.} equilibrium point of the following system:
\begin{equation}\label{E:system_stab}
\left\{
\begin{aligned}
&\dot{x}=f(x, u) \\
&y= h(x)
\end{aligned}
\right.
,\qquad
\left\{
\begin{aligned}
&\dot{w}=\nu(w, u, y) \\
&u= \varpi(w, y).
\end{aligned}
\right.
\end{equation}
Additionally, if for any compact set $\mathcal{K}_x\subset\mathbb{R}^n$, there exist two continuous maps $\nu:\mathbb{R}^q\times\mathbb{R}^p\times\mathbb{R}^m\to\mathbb{R}^q$ and $\varpi:\mathbb{R}^q\times\mathbb{R}^m\to\mathbb{R}^p$ for some positive integer $q$, and a compact set $\widehat{\mathcal{K}}_w\subset\mathbb{R}^q$ such that $(0, 0)\in\mathbb{R}^n\times\mathbb{R}^q$ is an asymptotically stable equilibrium point of \eqref{E:system_stab} with basin of attraction containing $\mathcal{K}_x\times\widehat{\mathcal{K}}_w$, then \eqref{E:system_general} is said to be \emph{semi-globally stabilizable by means of a dynamic output feedback}.
\end{dfntn}
Without loss of generality, we assume that if \eqref{E:system_stab} is locally asymptotically stable at $(0, 0)$, then the value of the control at the target point is zero: $\varpi(0, 0)=0\in\mathbb{R}^p$. A common strategy to achieve dynamic output feedback stabilization consists in finding a stabilizing state feedback, designing an observer that estimates the state from the dynamics of the output, and applying as input the state feedback evaluated at the observer's estimate.
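In the linear case $\dot{x}=Ax+Bu$, $y=Cx$, this observer-based strategy takes the following classical form, recalled here only to fix ideas: with a stabilizing feedback gain $K$ and an observer gain $L$, one sets
\begin{equation*}
\dot{\hat{x}}=A\hat{x}+Bu+L(y-C\hat{x}), \qquad u=K\hat{x},
\end{equation*}
so that the estimation error $\varepsilon=x-\hat{x}$ satisfies $\dot{\varepsilon}=(A-LC)\varepsilon$, and the closed-loop dynamics are block-triangular with spectrum $\sigma(A+BK)\cup\sigma(A-LC)$.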
For linear systems, this corresponds to the so-called separation principle, which consists of designing ``separately'' a stabilizing state feedback law and a state observer. This strategy is known to fail in general for nonlinear systems. In \cite{jouan} and \cite{TeelPraly1994, TeelPraly1995}, the authors proved, under a \emph{complete uniform observability} assumption, that any system semi-globally stabilizable by means of a static state feedback is also semi-globally stabilizable by means of a dynamic output feedback.
\begin{dfntn}[Observability]\label{def:obs}
System~\eqref{E:system_general} is observable for some input $u\in C^0(\mathbb{R}_+, \mathbb{R}^p)$ in time $T>0$ if and only if any two solutions $x, \tilde{x}$ of \eqref{E:system_general} whose corresponding outputs $y$ and $\tilde{y}$ are equal almost everywhere on $[0, T]$ must also be equal almost everywhere on $[0, T]$.
\end{dfntn}
The complete uniform observability required in \cite{TeelPraly1994, TeelPraly1995} implies observability for all inputs and all times. However, as proved in \cite{Gauthier_book}, complete uniform observability is not generic for nonlinear systems when the dimension of the output is less than or equal to the dimension of the input. The problem of dynamic output feedback stabilization remains open when singular inputs (that is, inputs that make the system unobservable) exist. Crucially, observability singularities prevent the application of classical tried-and-tested methods. Yet such difficulties occur in practical engineering systems, where original strategies need to be explored \cite{pasillasxbs,Rapaport,9172770,ludo,flaya2019,surroop2020adding,combes2016adding}, leading to a renewal of interest in the issue in recent years. The main difficulties arise when the control input corresponding to the equilibrium point of \eqref{E:system_stab} is singular.
Indeed, a contradiction may occur between the stabilization of the state at the target point and the fact that the observer system may fail to properly estimate the state near the target. Various techniques, on which this paper focuses, have been introduced to remove this inconsistency. Most of them rely on a modification of the input that helps the observer system estimate the state even near the target point. This strategy was employed in \cite{Coron1994} (which achieved local stabilization by using a periodic time-dependent perturbation), in \cite{ShimTeel2002} (which achieved practical stabilization by using a ``sample and hold'' time-dependent perturbation), and more recently in \cite{brivadis:hal-03180479} (which achieved global stabilization for a specific class of systems). Let us also mention the works \cite{combes2016adding, flaya2019, surroop2020adding}, which also rely on a high-frequency excitation of the input. Adopting another point of view, in line with \cite{MarcAurele, brivadis2019avoiding}, we are interested in time-independent perturbations of the feedback laws. Another important tool in stabilization theory is the use of weakly contractive systems, whose flows are non-expanding \cite{andrieu:hal-02611605,Manchester_contraction2014,Jafarpour_contraction2020,jq,mazenc}. The fact that the error between the state and the observer has contractive dynamics has proven to be a powerful tool in \cite{MarcAurele, sacchelli2019dynamic}, because it guarantees that the error is non-increasing, whatever the observability properties of the system. More precisely, in \cite{MarcAurele}, a feedback perturbation strategy is used in conjunction with the contraction property of a quantum control system to achieve stabilization at an unobservable target. In \cite{sacchelli2019dynamic}, the authors proved that for non-uniformly observable state-affine systems, detectability at the target is a sufficient condition to set up a separation principle.
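In the simplest instance of such contractive error dynamics, relevant to the dissipative observers considered below, the estimation error satisfies a linear equation $\dot{\varepsilon}=A(t)\varepsilon$ with $A(t)+A(t)'\leq 0$, so that
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t}|\varepsilon|^2=\langle (A(t)+A(t)')\varepsilon, \varepsilon\rangle\leq 0,
\end{equation*}
and the error norm is non-increasing regardless of the observability properties of the system along the trajectory.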
In order to access such contraction properties for a wider class of systems, we turn to embedding techniques. Embedding the original nonlinear system into a bilinear system and designing an observer with a dissipative error system is the second guideline that we aim to follow. In the present paper, we wish to combine the insights provided by these previous works in order to come up with solutions to the problem of dynamic output feedback stabilization at an unobservable target. Embedding techniques have to rely on the structure of the system. Inspired by an example from \cite{Coron1994}, we focus here on systems with linear conservative dynamics and nonlinear observation maps. The strategies developed in the paper match the restrictions we impose on the systems, but aim to illustrate a more general framework in which to attack stabilization at unobservable targets. A direct method to linearize the output is to consider it as an additional state coordinate. If an observer with a dissipative error system can be found for the new embedded system, we prove that a feedback law perturbation approach can be efficient for dynamic output feedback stabilization. However, the existence of such an observer is not guaranteed. We attempt to overcome this difficulty by proposing embeddings into infinite-dimensional unitary systems with linear output. As illustrated in \cite{Celle-etal.1989}, this idea helps in designing dissipative observers for a wider array of nonlinear systems. Furthermore, infinite-dimensional observers are not limited by topological obstructions that may occur in the finite-dimensional context (see Section~\ref{sec:coron}). Following this approach in the context of output feedback stabilization yields a coupled ODE--PDE system that demands an \emph{ad hoc} functional framework. We set up this strategy on a two-dimensional system presenting an archetypal singularity at the target point.
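Before describing the content of the paper, let us illustrate the nature of the observability singularity on a toy example (purely illustrative, and simpler than the two-dimensional system studied below): for the scalar system
\begin{equation*}
\dot{x}=u, \qquad y=x^2,
\end{equation*}
the constant input $u\equiv 0$ is singular since, for this input, any solution $x$ and its opposite $-x$ generate the same output. Hence the system is unobservable precisely for the input value that a stabilizing feedback must apply at the target point $x=0$.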
\bigskip

\noindent \textbf{Content.} Necessary conditions for dynamic output feedback stabilization are discussed in Section~\ref{sec:nc}. The finite-dimensional strategy is explored in Section~\ref{sec:finite} (topological obstructions are raised in Section~\ref{sec:coron}, while a positive stabilization result is proved in Section~\ref{sec:converse}). In Section~\ref{sec:infinite}, we set up preliminaries for our infinite-dimensional strategy. In the final Section~\ref{sec:example_infinie}, these concepts are applied to achieve a stabilization result on a coupled ODE--PDE state-observer system.

\bigskip

\noindent \textbf{Notations.} Denote by $\mathbb{R}$ (resp. $\mathbb{R}_+$) the set of real (resp. non-negative real) numbers, by $\mathbb{C}$ the set of complex numbers and by $\mathbb{Z}$ (resp. $\N$) the set of integers (resp. non-negative integers). The Euclidean norm over $\mathbb{R}^n$ (or $\mathbb{C}^n$) is denoted by $|\cdot|$ for any $n\in\N$. The real and imaginary parts of $z\in\mathbb{C}$ are denoted by $\Re z$ and $\Im z$, respectively. For any matrix $A\in\mathbb{R}^{n\times m}$, the transpose of $A$ is denoted by $A'$. For any normed vector space $(Z, \|\cdot\|_Z)$, we denote by $B_Z(x, r)$ (resp. $\bar{B}_Z(x, r)$) the open (resp. closed) ball of $Z$ centered at $x\in Z$ of radius $r>0$ for the norm $\|\cdot\|_Z$. The identity operator over $Z$ is denoted by $\mathbbm{I}_Z$. For all $k\in\N\cup\{\infty\}$ and every interval $U\subset\mathbb{R}$, $C^k(U, Z)$ denotes the set of $k$-times continuously differentiable functions from $U$ to $Z$.

\medskip

For any Hilbert space $Z$, denote by $\psX{\cdot}{\cdot}$ the inner product over $Z$ and by $\norm{\cdot}$ the induced norm. If $Y$ is also a Hilbert space, then $\mathscr{L}(Z, Y)$ denotes the space of bounded linear maps from $Z$ to $Y$, and $\lin(\XX) = \mathscr{L}(Z, Z)$.
We identify the Hilbert spaces with their dual spaces via the canonical isometry, so that the adjoint of $\mathcal{C}\in\mathscr{L}(Z, Y)$, denoted by $\mathcal{C}^*$, lies in $\mathscr{L}(Y, Z)$. Let us recall the characterization of the strong and weak topologies on $Z$. A sequence $(x_n)_{n\geq0}\in Z^\N$ is said to be strongly convergent to some $x^\star\in Z$ if $\norm{x_n-x^\star}\to 0$ as $n\to+\infty$, and we shall write $x_n\to x^\star$ as $n\to+\infty$. It is said to be weakly convergent to $x^\star$ if $\psX{x_n-x^\star}{\psi}\to 0$ as $n\to+\infty$ for all $\psi\in Z$, and we shall write $x_n\overset{w}{\rightharpoonup} x^\star$ as $n\to+\infty$. The strong topology on $Z$ is finer than the weak topology (see, \emph{e.g.,~} \cite{Brezis} for more properties on these usual topologies). \section{Necessary conditions}\label{sec:nc} The aim of the paper is to discuss dynamic output feedback stabilization strategies in the presence of observability singularities, in contrast to \cite{TeelPraly1994, TeelPraly1995}, where uniform observability is assumed. However, in trying to weaken the observability assumption in the context of dynamic output feedback stabilization, one first needs to check that the goal is still achievable. In this short section, necessary conditions for dynamic output feedback stabilizability are discussed. These should be put in perspective with the sufficient conditions that can be found in the literature, as well as those we exhibit in the paper. \begin{dfntn}[State feedback stabilizability]\label{def:stab} System~\eqref{E:system_general} is said to be \emph{locally} (resp. \emph{globally}) \emph{stabilizable by means of a (static) state feedback} if and only if there exists a continuous map $\phi:\mathbb{R}^n\to\mathbb{R}^p$ such that $0\in\mathbb{R}^n$ is a locally (resp. globally) asymptotically stable equilibrium point of \begin{equation}\label{E:system_stab_state} \left\{ \begin{aligned} &\dot{x}=f(x, u) \\ &u=\phi(x). \end{aligned} \right.
\end{equation} Additionally, if for any compact set $\mathcal{K}_x\subset\mathbb{R}^n$, there exists a continuous map $\phi:\mathbb{R}^n\to\mathbb{R}^p$ such that $0\in\mathbb{R}^n$ is an asymptotically stable equilibrium point of \eqref{E:system_stab_state} with basin of attraction containing $\mathcal{K}_x$, then \eqref{E:system_general} is said to be \emph{semi-globally stabilizable by means of a static state feedback}. \end{dfntn} The problem of dynamic state feedback stabilization of \eqref{E:system_general} is equivalent to the dynamic output feedback stabilization in the case where $h(x)=x$. Therefore, dynamic state feedback stabilizability of \eqref{E:system_general} is a necessary condition for dynamic output feedback stabilizability. One may wonder if \emph{static} state feedback stabilizability of \eqref{E:system_general} is necessary for dynamic output feedback stabilizability. In \cite{AndrieuPraly2009}, the authors answer in the positive if a sufficiently regular selection function can be found. We recall their result below. \begin{thrm}[\!\!{\cite[Lemma 1, (1)]{AndrieuPraly2009}}] Assume that \eqref{E:system_stab} is locally asymptotically stable at $(0, 0)$ with basin of attraction\footnote{In \cite[Lemma 1, (1)]{AndrieuPraly2009}, the authors state only a global version of the result, that is, $\mathcal{U}_x=\mathbb{R}^n$ and $\mathcal{U}_\wdyn=\mathbb{R}^q$. However, the proof remains identical in the other cases. } $\mathcal{U}_x\times\mathcal{U}_\wdyn$. Let $V$ be a $C^\infty(\mathcal{U}_x\times\mathcal{U}_\wdyn, \mathbb{R}_+)$ strict proper Lyapunov function of \eqref{E:system_stab}. If there exists a selection map $\mathcal{U}_x\ni x\mapsto \phi(x)\in \argmin_{\mathcal{U}_\wdyn}V(x,\cdot)$ which is locally Hölder of order strictly larger than $\frac{1}{2}$, then \eqref{E:system_stab_state} is locally asymptotically stable at $0$ with basin of attraction containing $\mathcal{U}_x$.
\end{thrm} Therefore, up to the existence of a sufficiently regular selection map, this result implies that the following local (resp. semi-global, global) condition is necessary for the local (resp. semi-global, global) stabilizability of \eqref{E:system_general} by means of a dynamic output feedback. \begin{condition}[State feedback stabilizability --- local, semi-global, global]\label{hyp:state_feedback} System~\eqref{E:system_general} is locally (resp. semi-globally, globally) stabilizable by means of a static state feedback. \end{condition} In \cite{Coron1994}, J.-M. Coron stated two additional conditions and proved that, together with local static state feedback stabilizability, they are sufficient for local dynamic output feedback stabilizability, provided that one allows the output feedback to depend on time (which we do \emph{not} allow in this paper). The two following conditions are weaker versions of the ones of \cite{Coron1994}. We prove that these two conditions are necessary to ensure dynamic output feedback stabilizability. The first one, known as \emph{$0$-detectability}, is also used by E. Sontag in \cite{sontag1981conditions} in the context of abstract nonlinear regulation theory. Before stating this condition, let us recall the following. For any input $u\in C^0(\mathbb{R}_+, \mathbb{R}^p)$ and any initial condition $x_0\in\mathbb{R}^n$, there exists exactly one maximal solution of \eqref{E:system_general}, defined on $[0, T(x_0, u))$. This solution, denoted by $\varphi_t(x_0, u)$, is such that $\varphi_0(x_0, u) = x_0$ and $\frac{\partial\varphi_t(x_0, u)}{\partial t} = f( \varphi_t(x_0, u), u(t))$ for all $t\in[0, T(x_0, u))$. \begin{condition}[0-detectability --- local, global]\label{hyp:distinguish} Let $\mathcal{X}_0 = \{x_0\in\mathbb{R}^n\mid \forall t\in[0, T(x_0, 0)),$ $h(\varphi_t(x_0, 0))=0\}$. Then $0\in\mathcal{X}_0$ is a locally (resp.
globally) asymptotically stable equilibrium point of the vector field $\mathcal{X}_0\ni x\mapsto f(x, 0)$. \end{condition} \begin{thrm}\label{th:necessary_distinguish} If \eqref{E:system_general} is locally (resp. semi-globally, globally) stabilizable by means of a dynamic output feedback, then Condition~\ref{hyp:distinguish} holds locally (resp. globally, globally). \end{thrm} \begin{proof} The set $\mathcal{X}_0$ is invariant for the vector field $x\mapsto f(x, 0)$ and $0\in\mathcal{X}_0$. Let $x_0\in\mathcal{X}_0$. Assume that \eqref{E:system_general} is locally stabilizable by means of a dynamic output feedback, and that $(x_0, 0)$ is in the basin of attraction of $(0, 0)$ for \eqref{E:system_stab}. Then $t\mapsto (\varphi_t(x_0, 0), 0)$ is a trajectory of \eqref{E:system_stab} with initial condition $(x_0, 0)$. Hence $\varphi_t(x_0, 0)$ is well-defined for all $t\geq0$ and tends towards $0$ as $t$ goes to infinity. Moreover, by stability of $(0, 0)$ for \eqref{E:system_stab}, for all $R>0$ there exists $r>0$ such that, if $x_0\in B_{\mathbb{R}^n}(0, r)\cap\mathcal{X}_0$, then $\varphi_t(x_0, 0)\in B_{\mathbb{R}^n}(0, R)$ for all $t\geq0$. If we assume that \eqref{E:system_general} is globally stabilizable by means of a dynamic output feedback, then the arguments still hold for any $x_0\in\mathbb{R}^n$. If \eqref{E:system_general} is only semi-globally stabilizable by means of a dynamic output feedback, we first choose the compact set $\mathcal{K}_x$ of Definition~\ref{def:stab_out} so that it contains $x_0$. \end{proof} \begin{condition}[Indistinguishability $\Rightarrow$ common stabilizability --- local, global]\label{hyp:common} For all $x_0$, $\tilde{x}_0$ in some neighborhood of $0\in\mathbb{R}^n$ (resp.
for all $x_0$, $\tilde{x}_0$ in $\mathbb{R}^n$), if for all $u\in C^0(\mathbb{R}_+, \mathbb{R}^p)$ with $T(x_0, u) = +\infty$ it holds that $h(\varphi_t(x_0, u)) = h(\varphi_t(\tilde{x}_0, u))$ for all $t\in[0, T(\tilde{x}_0, u))$, then there exists $v\in C^0(\mathbb{R}_+, \mathbb{R}^p)$ such that $\varphi_t(x_0, v)$ and $\varphi_t(\tilde{x}_0, v)$ are well-defined for all $t\in\mathbb{R}_+$ and tend towards $0$ as $t$ goes to infinity. \end{condition} \begin{thrm}\label{th:necessary_common} If \eqref{E:system_general} is locally (resp. semi-globally, globally) stabilizable by means of a dynamic output feedback, then Condition~\ref{hyp:common} holds locally (resp. globally, globally). \end{thrm} \begin{proof} Let $x_0, \tilde{x}_0\in \mathbb{R}^n$ be such that, for all $u\in C^0(\mathbb{R}_+, \mathbb{R}^p)$ with $T(x_0, u) = +\infty$, it holds that $h(\varphi_t(x_0, u)) = h(\varphi_t(\tilde{x}_0, u))$ for all $t\in[0, T(\tilde{x}_0, u))$. Assume that \eqref{E:system_general} is locally stabilizable by means of a dynamic output feedback, and that $(x_0, 0)$, $(\tilde{x}_0, 0)$ are in the basin of attraction of $(0, 0)$ for \eqref{E:system_stab}. Let $(x, w)$ be a solution of \eqref{E:system_stab} starting from $(x_0, 0)$. Set $v = \varpi(w, h(x))$. Then $T(x_0, v) = +\infty$ and $\varphi_t(x_0, v) \to 0$ as $t\to+\infty$. Let $\tilde{x}(t) = \varphi_t(\tilde{x}_0, v)$ for all $t\in[0, T(\tilde{x}_0, v))$. Since $h(\varphi_t(x_0, v)) = h(\varphi_t(\tilde{x}_0, v))$ for all $t\in[0, T(\tilde{x}_0, v))$, $(\tilde{x}, w)$ is a solution of \eqref{E:system_stab} starting from $(\tilde{x}_0, 0)$. Hence $T(\tilde{x}_0, v)=+\infty$ and $\varphi_t(\tilde{x}_0, v) \to 0$ as $t\to+\infty$. If we assume that \eqref{E:system_general} is globally stabilizable by means of a dynamic output feedback, then the arguments still hold for any $x_0, \tilde{x}_0\in\mathbb{R}^n$.
If \eqref{E:system_general} is only semi-globally stabilizable by means of a dynamic output feedback, we first choose the compact set $\mathcal{K}_x$ of Definition~\ref{def:stab_out} so that it contains $x_0$ and $\tilde{x}_0$. \end{proof} \begin{rmrk} In \cite{sacchelli2019dynamic}, the authors consider the problem of dynamic output feedback stabilization of \emph{dissipative} state-affine systems, that is, systems of the form \begin{equation}\label{E:diss} \left\{ \begin{aligned} &\dot{x}= A(u)x+B(u) \\ &y= C x \end{aligned} \right. \end{equation} where there exists some positive definite matrix $P\in\mathbb{R}^{n\times n}$ such that, for all inputs $u$ in some admissible set, \begin{align}\label{eq:diss} PA(u)+A(u)'P\leq0. \end{align} For such systems, Conditions~\ref{hyp:state_feedback} (local) and~\ref{hyp:distinguish} are proved to be sufficient to achieve dynamic output feedback stabilization, which implies, by Theorem~\ref{th:necessary_common}, that Condition~\ref{hyp:common} is also satisfied. In this paper, we therefore focus on systems that are not in the form of \eqref{E:diss}-\eqref{eq:diss}. \end{rmrk} \section{An illustrative example}\label{sec:finite} \subsection{An obstruction by J.-M. Coron}\label{sec:coron} Consider the case where \eqref{E:system_general} is single-input single-output and $f$ is a linear map, so that it can be written in the form of \begin{equation}\label{E:system_linear} \left\{ \begin{aligned} &\dot{x}=Ax + bu, \\ &y=h(x). \end{aligned} \right. \end{equation} where $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^{n\times 1}$ and $h:\mathbb{R}^n\to\mathbb{R}$. If $h$ is nonlinear and is not an invertible transformation of a linear map, then the usual theory of linear systems cannot be applied. Condition~\ref{hyp:state_feedback} reduces to the stabilizability of the pair $(A, b)$. If it holds, then \eqref{E:system_linear} is globally stabilizable by a linear static state feedback. In \cite{Coron1994}, J.-M.
Coron introduced the following illustrative one-dimensional example: \begin{equation}\label{E:system_coron} \dot x = u,\qquad y = x^2. \end{equation} He proved that \eqref{E:system_coron} is not locally stabilizable by means of a dynamic output feedback, unless one introduces a time-dependent component in the feedback law. The difficulty with this system comes from the unobservability of the target point $0$. Indeed, \eqref{E:system_coron} is not observable for the constant input $u\equiv0$ in any time $T>0$, since the initial conditions $x_0$ and $-x_0$ are indistinguishable. In particular, the system is not uniformly observable, and consequently the results of \cite{jouan, TeelPraly1994, TeelPraly1995} cannot be applied. To overcome this issue, \cite{Coron1994} introduced time-dependent output feedback laws, and proved by this means the local stabilizability of \eqref{E:system_coron}. This system can also be stabilized by means of ``dead-beat'' or ``sample-and-hold'' techniques (see \cite{nevsic1998input}, \cite{ShimTeel2002}, respectively). A generalization of \eqref{E:system_coron} in higher dimension is \begin{equation}\label{E:system_rot} \left\{ \begin{aligned} &\dot{x}=Ax + bu, \\ &y=h(x) \end{aligned} \right. \end{equation} for a skew-symmetric matrix $A$ and $h$ radially symmetric\footnote{ Up to a change of scalar product, one may also consider the case where $PA+A'P=0$ for some positive definite matrix $P\in\mathbb{R}^{n\times n }$ and $h$ such that $(x_1'Px_1 = x_2'Px_2) \Rightarrow (h(x_1) = h(x_2))$. }. Again, the constant input $u\equiv0$ makes the system unobservable in any time $T>0$: since the Euclidean norm is conserved along the unforced flow and $h$ is radially symmetric, any initial conditions $x_0$, $\tilde{x}_0$ in $\mathbb{R}^n$ satisfying $|x_0|=|\tilde{x}_0|$ yield $h(\varphi_t(x_0, 0))=h(x_0)=h(\tilde{x}_0)=h(\varphi_t(\tilde{x}_0, 0))$ for all $t\in\mathbb{R}_+$. Condition~\ref{hyp:state_feedback}~(global) reduces to the stabilizability of $(A, b)$ and Condition~\ref{hyp:distinguish}~(global) is always satisfied.
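This unobservability is easy to check numerically. The sketch below (the matrix $A$, the output $h$ and the initial conditions are illustrative choices, not data from the paper) integrates the unforced flow for a skew-symmetric $A$ and confirms that two initial conditions of equal norm produce identical outputs:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric, n = 2

def flow(x0, t, steps=20000):
    """Integrate x' = Ax with classical RK4 (input u identically 0)."""
    x, dt = np.array(x0, dtype=float), t / steps
    for _ in range(steps):
        k1 = A @ x
        k2 = A @ (x + 0.5 * dt * k1)
        k3 = A @ (x + 0.5 * dt * k2)
        k4 = A @ (x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

h = lambda x: 0.5 * np.dot(x, x)          # radially symmetric output

x0 = np.array([1.0, 0.0])
x0_tilde = np.array([0.0, -1.0])          # same norm, different state
gap = max(abs(h(flow(x0, t)) - h(flow(x0_tilde, t))) for t in (0.5, 1.0, 2.0))
```

Since $A$ is skew-symmetric, $|x(t)|$ is conserved under $u\equiv0$, so `gap` vanishes up to integration error: the output carries no information separating the two states.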
Let us state a necessary condition for the stabilizability of \eqref{E:system_rot} by means of a dynamic output feedback. \begin{thrm}\label{th:impossible} If \eqref{E:system_rot} is locally stabilizable by means of a dynamic output feedback, then $A$ is invertible. \end{thrm} \begin{proof} The proof is an adaptation of the one given in \cite{Coron1994} in the one-dimensional context. Assume that $(0, 0)$ is a locally asymptotically stable equilibrium point of \begin{equation}\label{E:system_rot_stab} \left\{ \begin{aligned} &\dot{x}=A x+ bu, \\ &y= h(x) \end{aligned} \right. ,\qquad \left\{ \begin{aligned} &\dot{w}=\nu(w, u, y) \\ &u= \varpi(w, y) \end{aligned} \right. \end{equation} for some positive integer $q$ and two continuous maps $\nu:\mathbb{R}^q\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}^q$ and $\varpi:\mathbb{R}^q\times\mathbb{R}\to\mathbb{R}$. Set $F:\mathbb{R}^n\times\mathbb{R}^q\ni(x, w)\mapsto\left(A x+ b\varpi\left(w, h(x)\right), \nu\left(w, \varpi\left(w, h(x)\right), h(x)\right)\right)$. Then, according to \cite[Theorem 52.1]{krasnosel1984geometrical} (see~\cite{coron1994relations} when one does not have uniqueness of the solutions to the Cauchy problem), the index of $-F$ at $(0, 0)$ is $1$. Assume, for the sake of contradiction, that $A$ is not invertible. Let $\mathcal{N}$ be a one-dimensional subspace of $\ker A$. Denote by $\Sigma$ the reflection through the hyperplane $\mathcal{N}^\perp$, that is, $\Sigma = \mathbbm{I}_{\mathbb{R}^n} - 2vv'$ for some unit vector $v\in\mathcal{N}$. Then $\det \Sigma = -1$, $A\Sigma=A$ and $h(\Sigma x)=h(x)$. Hence $(x, w)\mapsto -F(\Sigma x, w)$ has index $-1$ at $(0, 0)$, while $F(\Sigma x, w) = F(x, w)$ implies that this index is also $1$. Thus $1=-1$, which is a contradiction. \end{proof} According to the spectral theorem, we have the following immediate corollary. If $n=1$, we recover the result of J.-M. Coron in \cite{Coron1994}.
\begin{crllr}\label{cor:impossible} If $n$ is odd and $A$ is skew-symmetric, then \eqref{E:system_rot} is not locally stabilizable by means of a dynamic output feedback. \end{crllr} \subsection{Converse theorem: a positive result of output feedback stabilization}\label{sec:converse} One of the main results of this paper is the following theorem, which is a converse of Theorem~\ref{th:impossible} in the case where $h(x) = \frac{1}{2}|x|^2$. The proof relies on the guidelines described in the introduction, that is, an embedding into a bilinear system, an observer design with a dissipative error system and a feedback perturbation. Consider the special case for system~\eqref{E:system_rot}: \begin{equation}\label{E:system_rot2} \left\{ \begin{aligned} &\dot{x}=Ax + bu, \\ &y=h(x)=\frac{1}{2}|x|^2. \end{aligned} \right. \tag{\ref{E:system_rot}'} \end{equation} \begin{thrm}\label{th:finite} If $A$ is skew-symmetric and invertible and $(A, b)$ is stabilizable, then \eqref{E:system_rot2} is semi-globally stabilizable by means of a dynamic output feedback. \end{thrm} \begin{rmrk} The dynamic output feedback is explicitly given in \eqref{E:system_sep_rot}. It is easily implementable, and does not use time-dependent feedback laws. \end{rmrk} The proof of Theorem~\ref{th:finite} is the object of this section. We follow the same steps as in \cite{ludo}, with a very similar embedding strategy. The main difference is the observability analysis developed in Section~\ref{sec:obs_finie}: here the target is unobservable, while in \cite{ludo} it was observable. \subsubsection{Embedding into a bilinear system of higher dimension}\label{sec:embedding-finite} Consider the map \fonction{\uptau}{\mathbb{R}^n}{\mathbb{R}^{n+1}}{x}{\left(x, \frac{1}{2}\abs{x}^2\right).} If $x$ is a solution of \eqref{E:system_rot}, then $\frac{1}{2}\frac{\diff}{\diff t}\abs{x}^2 = x'Ax + x'bu = x'bu$ since $A$ is skew-symmetric.
Hence $z = \uptau(x)$ defines an embedding of \eqref{E:system_rot} into \begin{equation}\label{E:system_plonge_finie} \left\{ \begin{aligned} &\dot{z}=\mathcal{A}(u)z + \mathcal{B} u \\ &y=\mathcal{C}z. \end{aligned} \right. \end{equation} where $\mathcal{A}(u) = \begin{pmatrix} A & 0\\ ub' & 0 \end{pmatrix}$, $\mathcal{B} = \begin{pmatrix} b\\ 0 \end{pmatrix}$ and $\mathcal{C} = \begin{pmatrix} 0&\cdots&0&1 \end{pmatrix}$ and with initial conditions in $\mathcal{T}=\uptau(\mathbb{R}^n)$. Moreover, the semi-trajectory $z$ remains in $\mathcal{T}$. We denote by $\uppi:\mathbb{R}^{n+1}\to\mathbb{R}^n$ the projection operator given by $z =(z_1,\dots,z_{n+1}) \mapsto (z_1,\dots,z_{n})$. Note that $\uppi$ is a left-inverse of $\uptau$: \begin{equation}\label{def:uppi} \uppi(\uptau(x)) = x,\qquad \forall x\in\mathbb{R}^n. \end{equation} In the following, to ease notations, we often use the shorthand $\ubar{z}$ for $\uppi(z)$. \subsubsection{Observer design with dissipative error system} Let us introduce a Luenberger observer with dynamic gain for \eqref{E:system_plonge_finie}. In order to make the error system dissipative, set $ \mathcal{L}_\alpha(u)=\begin{pmatrix} bu\\ \alpha \end{pmatrix}\in\mathbb{R}^{n+1} $ for some positive constant $\alpha$ to be fixed later. The corresponding observer system is given by \begin{equation}\label{E:system_observer_open} \left\{ \begin{aligned} &\dot{\varepsilon}= \left(\mathcal{A}(u) -\mathcal{L}_\alpha(u)\mathcal{C}\right)\varepsilon \\ &\dot{\etath}= \mathcal{A}(u) \etath+ \mathcal{B} u -\mathcal{L}_\alpha(u)\mathcal{C}\varepsilon \end{aligned} \right. \end{equation} where $z = \hat{\etat} - \varepsilon$ satisfies \eqref{E:system_plonge_finie}, $\hat{\etat}$ is the estimate of the state produced by the observer system and $\varepsilon$ is the error between this estimate and the actual state of the system.
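As a sanity check of the embedding (all numerical data below are assumptions chosen for illustration), one can integrate \eqref{E:system_rot} and \eqref{E:system_plonge_finie} side by side with the same input and verify that $z(t)=\uptau(x(t))$ up to discretization error:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric plant matrix
b = np.array([0.0, 1.0])
tau = lambda x: np.append(x, 0.5 * np.dot(x, x))

def calA(u):
    """The matrix A(u) of the embedded bilinear system."""
    M = np.zeros((3, 3))
    M[:2, :2] = A
    M[2, :2] = u * b
    return M

B = np.append(b, 0.0)                      # the vector B of the embedding
u = lambda t: np.sin(t)                    # arbitrary smooth test input
x = np.array([1.0, -0.5])
z = tau(x)                                 # start on the manifold T = tau(R^n)
dt, T = 1e-4, 2.0
for k in range(int(T / dt)):
    uk = u(k * dt)
    x = x + dt * (A @ x + b * uk)          # explicit Euler on the original system
    z = z + dt * (calA(uk) @ z + B * uk)   # ... and on the embedded one
mismatch = np.linalg.norm(z - tau(x))
```

The two integrations agree up to the first-order error of the Euler scheme, reflecting that the semi-trajectory of the bilinear system stays on $\mathcal{T}$.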
Note that for all $u\in\mathbb{R}$, \begin{equation}\label{E:dissipative} \mathcal{A}(u) - \mathcal{L}_\alpha(u)\mathcal{C} = \begin{pmatrix} A & -bu\\ ub' & -\alpha \end{pmatrix} = \begin{pmatrix} A & -b u\\ ub' & 0 \end{pmatrix} - \alpha \mathcal{C}'\mathcal{C}. \end{equation} It implies that the $\varepsilon$-subsystem of \eqref{E:system_observer_open} is dissipative, that is, for all inputs $u\in C^0(\mathbb{R}_+, \mathbb{R})$, the solutions of \eqref{E:system_observer_open} satisfy \begin{equation}\label{E:eps_decroit} \frac{\diff |\varepsilon|^2}{\diff t} = 2\varepsilon'\left(\mathcal{A}(u) - \mathcal{L}_\alpha(u)\mathcal{C}\right)\varepsilon = -2\alpha |\mathcal{C}\varepsilon|^2 \leq 0. \end{equation} This is the first key fact of the strategy applied below. \subsubsection{Feedback perturbation and closed-loop system} \label{sec_FeedbackPert} Because $(A, b)$ is stabilizable, there exists $K\in\mathbb{R}^{1\times n}$ such that $A+bK$ is Hurwitz (in particular, $(K, A)$ is detectable). Since $A$ is skew-symmetric, its eigenvalues are purely imaginary. Hence, the Hautus tests for stabilizability and controllability (resp. detectability and observability) coincide. Therefore, $(A, b)$ is controllable and $(K, A)$ is observable. With a separation principle in mind, a natural strategy for dynamic output feedback stabilization of \eqref{E:system_rot} would be to combine the Luenberger observer~\eqref{E:system_observer_open} with the state feedback law $\phi:x\mapsto Kx$. However, this strategy fails due to the unobservability at the target. To overcome this difficulty, we rather consider a perturbed feedback law $\phi_\delta:x\mapsto Kx + \frac{\delta}{2}\abs{x}^2$ for some positive constant $\delta$ to be fixed later. This is the second key fact of the strategy.
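The dissipation inequality \eqref{E:eps_decroit} is easy to probe numerically. In the following minimal sketch (the system data and the random input are arbitrary choices), $|\varepsilon|$ should never increase, whatever the input:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.0, 1.0])
alpha = 2.0

def err_matrix(u):
    """The error-dynamics matrix A(u) - L_alpha(u) C of (E:dissipative)."""
    M = np.zeros((3, 3))
    M[:2, :2] = A
    M[:2, 2] = -u * b
    M[2, :2] = u * b
    M[2, 2] = -alpha
    return M

rng = np.random.default_rng(0)
eps = np.array([1.0, -2.0, 0.5])
norms = [np.linalg.norm(eps)]
dt = 1e-3
for _ in range(5000):
    u = rng.uniform(-3.0, 3.0)             # arbitrary piecewise-constant input
    M = err_matrix(u)
    eps = eps + dt * (M @ (eps + 0.5 * dt * (M @ eps)))   # midpoint RK2 step
    norms.append(np.linalg.norm(eps))
drift = max(norms[i + 1] - norms[i] for i in range(len(norms) - 1))
```

The non-dissipative part of the matrix is skew-symmetric, so the only change in $|\varepsilon|^2$ comes from the $-2\alpha|\mathcal{C}\varepsilon|^2$ term; `drift` stays at the level of the integration error.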
For all $\delta>0$, denote by $\mathcal{D}_\delta$ the basin of attraction of $0\in\mathbb{R}^n$ for the vector field $\mathbb{R}^n\ni x\mapsto Ax + b\phi_\delta(x)$. Since the linearization of this vector field at $0$ is $x\mapsto (A+bK)x$, the origin is locally asymptotically stable for all $\delta>0$, so that $\mathcal{D}_\delta$ is well-defined. As stated in the following lemma, the drawback of this perturbation is to pass from a globally stabilizing state feedback to a semi-globally stabilizing one. \begin{lmm}\label{lem:delta} For any compact set $\mathcal{K}_x\subset\mathbb{R}^n$, there exists $\delta_0>0$ such that for all $\delta\in(0, \delta_0)$, $\mathcal{K}_x\subset\mathcal{D}_\delta$. \end{lmm} \begin{proof} Since $A+bK$ is Hurwitz, there exists $P\in\mathbb{R}^{n\times n}$ positive definite such that $P(A+bK) + (A+bK)'P < -2 \mathbbm{I}_{\mathbb{R}^n}$ (recall that $'$ denotes the transpose operation). Set $V:\mathbb{R}^n\ni x\mapsto x'Px$, which is positive definite, and let $r>0$ be such that $\mathcal{K}_x\subset\{V\leq r\}$. Let $\rho>0$ be such that $\{V\leq r\}\subset B_{\mathbb{R}^n}(0, \rho)$. Then, for all $x\in\{V\leq r\}$, \begin{align*} \frac{\partial V}{\partial x}(x)(Ax+b\phi_\delta(x)) &=2x'P(A+bK)x + \delta |x|^2 x'Pb \\ &\leq (-2+ \delta |x||Pb|) |x|^2\\ &\leq (-2+ \delta \rho |Pb|) |x|^2. \end{align*} Set $\delta_0 = \frac{1}{\rho |Pb|}$ and let $\delta\in(0, \delta_0)$. Then \begin{align*} \frac{\partial V}{\partial x}(x)(Ax+b\phi_\delta(x)) < -|x|^2 \end{align*} for all $x\in\{V\leq r\}$. In particular, the sublevel set $\{V\leq r\}$ is positively invariant and $V$ is a strict Lyapunov function on it. Hence, $0\in\mathbb{R}^n$ is a locally asymptotically (even exponentially) stable equilibrium point of the vector field $\mathbb{R}^n\ni x\mapsto Ax + b\phi_\delta(x)$ with basin of attraction containing $\{V\leq r\}\supset\mathcal{K}_x$. \end{proof} Hence, for any compact set $\mathcal{K}_x\subset\mathbb{R}^n$ there exists $\delta_0>0$ such that if $\delta\in(0, \delta_0)$, then $\mathcal{K}_x\subset\mathcal{D}_\delta$.
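The threshold $\delta_0 = \frac{1}{\rho|Pb|}$ of Lemma~\ref{lem:delta} can be probed on a concrete pair. In the sketch below, the matrices are illustrative assumptions; $P$ was computed by hand and solves the Lyapunov equation $P(A+bK)+(A+bK)'P=-2\mathbbm{I}$ exactly for this choice:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.0, 1.0])
K = np.array([0.0, -1.0])                  # A + bK has spectrum in Re < 0
P = np.array([[3.0, 1.0], [1.0, 2.0]])     # solves P(A+bK) + (A+bK)'P = -2I

M = A + np.outer(b, K)
lyap_residual = np.linalg.norm(P @ M + M.T @ P + 2.0 * np.eye(2))

rho = 2.0
delta = 0.9 / (rho * np.linalg.norm(P @ b))    # strictly below delta_0

def Vdot(x):
    """Derivative of V = x'Px along x' = Ax + b*phi_delta(x)."""
    u = K @ x + 0.5 * delta * np.dot(x, x)     # perturbed feedback phi_delta
    return 2.0 * x @ P @ (A @ x + b * u)

# sample the punctured ball |x| <= rho and record the worst decay rate
worst = max(
    Vdot(r * np.array([np.cos(th), np.sin(th)])) / r**2
    for r in np.linspace(0.1, rho, 20)
    for th in np.linspace(0.0, 2 * np.pi, 73)
)
```

With $\delta\rho|Pb| = 0.9$, the estimate of the proof gives $\dot V(x)\leq(-2+0.9)|x|^2$ on the ball, and the sampled `worst` normalized rate indeed stays below $-1$.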
On system~\eqref{E:system_plonge_finie}, we choose the feedback law \begin{equation} \lambda_\delta(z)= \begin{pmatrix} K& \delta \end{pmatrix}z, \end{equation} which satisfies $\phi_\delta=\lambda_\delta\circ\uptau$. The corresponding closed-loop system is given by \begin{equation}\label{E:system_observer_closed} \left\{ \begin{aligned} &\dot{\varepsilon}= \left(\mathcal{A}(\lambda_\delta(\etath)) -\mathcal{L}_\alpha(\lambda_\delta(\etath))\mathcal{C}\right)\varepsilon, \\ &\dot{\etath}= \mathcal{A}(\lambda_\delta(\etath)) \etath+ \mathcal{B} \lambda_\delta(\etath) -\mathcal{L}_\alpha(\lambda_\delta(\etath))\mathcal{C}\varepsilon. \end{aligned} \right. \end{equation} We are now able to exhibit a coupled system in the form of~\eqref{E:system_stab} (with $w=\hat{\etat}$) with which we intend to prove semi-global dynamic output feedback stabilization of~\eqref{E:system_rot}: \begin{equation}\label{E:system_sep_rot} \left\{ \begin{aligned} &\dot{x}=A x+ bu, \\ &y= \frac{1}{2}\abs{x}^2 \end{aligned} \right. ,\qquad \left\{ \begin{aligned} &\dot{\etath}=\mathcal{A}(u) \etath+ \mathcal{B} u -\mathcal{L}_\alpha(u)\left(\mathcal{C}\etath-y\right) \\ &u= \lambda_\delta(\etath). \end{aligned} \right. \end{equation} It is now sufficient to prove, in the following sections, the theorem below, which implies Theorem~\ref{th:finite}. \begin{thrm}\label{th:explicit} For any compact set $\mathcal{K}_x\times\widehat{\mathcal{K}}_w\subset\mathbb{R}^n\times\mathbb{R}^{n+1}$, there exist $\delta_0>0$ and $\alpha_0>0$ such that for all $\delta\in(0, \delta_0)$ and all $\alpha\in(\alpha_0, +\infty)$, $(0, 0)\in\mathbb{R}^n\times\mathbb{R}^{n+1}$ is a locally asymptotically stable equilibrium point of \eqref{E:system_sep_rot} with basin of attraction containing $\mathcal{K}_x\times\widehat{\mathcal{K}}_w$.
\end{thrm} \subsubsection{Boundedness of trajectories} Since $\mathbb{R}^n\ni x\mapsto \frac{1}{2}|x|^2$ and $\phi_\delta$ are locally Lipschitz continuous functions, according to the Cauchy-Lipschitz theorem, for any initial condition $(x_0, \hat{\etat}_0)\in\mathbb{R}^n\times\mathbb{R}^{n+1}$, there exists exactly one maximal solution $(x, \hat{\etat})$ of \eqref{E:system_sep_rot} such that $(x(0), \hat{\etat}(0)) = (x_0, \hat{\etat}_0)$. Before going into the proof of Theorem~\ref{th:explicit}, we need to ensure the existence of global solutions. \begin{lmm}\label{lem:bound} For any compact set $\mathcal{K}_x\times\widehat{\mathcal{K}}_w\subset\mathbb{R}^n\times\mathbb{R}^{n+1}$, there exist $\delta_0>0$ and $\alpha_0>0$ such that for all $\delta\in(0, \delta_0)$ and all $\alpha\in(\alpha_0, +\infty)$, \eqref{E:system_sep_rot} has a unique global solution $(x, \hat{\etat})$ for each initial condition $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\widehat{\mathcal{K}}_w$. Moreover, $(x, \hat{\etat})$ is bounded and $\ubar{\hat{\etat}}$ remains in a compact subset of $\mathcal{D}_\delta$. \end{lmm} \begin{proof} Let $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\widehat{\mathcal{K}}_w$ and $(x, \hat{\etat})$ be the corresponding maximal solution of \eqref{E:system_sep_rot}. Set $z=\uptau(x)$ and $\varepsilon=\hat{\etat}-z$, so that $(\varepsilon, \hat{\etat})$ is the maximal solution of \eqref{E:system_observer_closed} starting from $(\varepsilon_0, \hat{\etat}_0)$, where $\varepsilon_0 = \hat{\etat}_0-\uptau(x_0)$. Then, it is sufficient to prove that $(\varepsilon, \hat{\etat})$ is a global solution, $(\varepsilon, \hat{\etat})$ is bounded and $\ubar{\hat{\etat}}$ remains in a compact subset of $\mathcal{D}_\delta$. According to \eqref{E:eps_decroit}, $\varepsilon$ is bounded since $|\varepsilon|$ is non-increasing.
Moreover, $\hat{\etat}_{n+1} = \varepsilon_{n+1} + \frac{1}{2}|\ubar{z}|^2 = \varepsilon_{n+1} + \frac{1}{2}|\ubar{\hat{\etat}}-\ubar{\varepsilon}|^2$. Then, it remains to show that there exist $\delta_0>0$ and $\alpha_0>0$ such that for all $\delta\in(0, \delta_0)$ and all $\alpha\in(\alpha_0, +\infty)$, for all initial conditions $(\varepsilon_0, \hat{\etat}_0)\in\left(\widehat{\mathcal{K}}_w - \uptau(\mathcal{K}_x)\right)\times\widehat{\mathcal{K}}_w$, $\ubar{\hat{\etat}}$ remains in a compact subset of $\mathcal{D}_\delta$. Since $A+bK$ is Hurwitz, there exists $P\in\mathbb{R}^{n\times n}$ positive definite such that $P(A+bK) + (A+bK)'P < -2 \mathbbm{I}_{\mathbb{R}^n}$. Then $V:\mathbb{R}^n\ni x\mapsto x'Px$ is a strict Lyapunov function for system~\eqref{E:system_rot} with feedback law $\phi$. For all $r>0$, set $D(r)=\{x\in\mathbb{R}^n\mid V(x)\leq r\}$. Let $\rho'>\rho>0$ and $r'>r>0$ be such that $B_{\mathbb{R}^{n+1}}(0,\rho)$ contains $\widehat{\mathcal{K}}_w - \uptau(\mathcal{K}_x)$ and $\widehat{\mathcal{K}}_w$, and such that $B_{\mathbb{R}^{n}}(0,\rho)\subset D(r)\subset D(r')\subset B_{\mathbb{R}^{n}}(0,\rho')$. According to Lemma~\ref{lem:delta}, there exists $\delta_0>0$ such that for all $\delta\in(0, \delta_0)$, $\mathcal{D}_\delta$ contains the closure of $B_{\mathbb{R}^n}(0,\rho')$. In the following, we show that there exists $\alpha_0>0$ such that, if $\alpha>\alpha_0$, then $\ubar{\hat{\etat}}$ remains in $B_{\mathbb{R}^n}(0,\rho')$.
For all $\etath, \varepsilon$ in $\mathbb{R}^{n+1}$, define \begin{align*} \mu^1_\delta(\etath) &=\mathcal{A}(\phi_\delta(\ubar{\etath}))\etath+\mathcal{B}\phi_\delta(\ubar{\etath}),\\ \mu^2_\delta(\etath) &=(\mathcal{A}(\lambda_\delta(\etath))-\mathcal{A}(\phi_\delta(\ubar{\etath})))\etath+\mathcal{B}(\lambda_\delta(\etath)-\phi_\delta(\ubar{\etath})),\\ \mu^3_{\delta, \alpha}(\varepsilon,\etath)& =-\mathcal{L}_\alpha(\lambda_\delta(\etath))\mathcal{C}\varepsilon, \end{align*} so that the solutions of \eqref{E:system_observer_closed} satisfy \begin{equation} \dot{\etath}=\mu^1_\delta(\etath)+\mu^2_\delta(\etath)+\mu^3_{\delta, \alpha}(\varepsilon,\etath). \end{equation} In particular, $$ \dot{\ubar{\hat{\etat}}} = A\ubar{\hat{\etat}} + \lambda_\delta(\etath) b - \lambda_\delta(\etath) \varepsilon_{n+1} b. $$ By continuity of $(\etath, \delta)\mapsto\lambda_\delta(\etath)$, $$ \overline{M}:=\sup_{\substack{ \varepsilon,\etath\in B_{\mathbb{R}^{n+1}}(0,\rho') \\ \delta\in[0, \delta_0]} } |A\ubar{\hat{\etat}} + \lambda_\delta(\etath) b - \lambda_\delta(\etath)\varepsilon_{n+1} b| <\infty. $$ Let $T_0 =\frac{\rho'-\rho}{\overline{M}}$. Since $|\varepsilon|$ is non-increasing, any trajectory of \eqref{E:system_observer_closed} starting in $B_{\mathbb{R}^{n+1}}(0,\rho)\times B_{\mathbb{R}^{n+1}}(0,\rho)$ will be such that $\ubar{\etath}$ remains in $B_{\mathbb{R}^{n}}(0,\rho')$ over the time interval $[0,T_0]$. It remains to show that $\ubar{\hat{\etat}}$ does not exit $B_{\mathbb{R}^n}(0,\rho')$ after time $T_0$. Note that $\mu^1_\delta(\etath_1) = \mu^1_\delta(\etath_2)$ if $\uppi(\etath_1)=\uppi(\etath_2)$.
Then, $$ \underline{m}:= - \max_{ \substack{ \uppi(\hat{\etat})\in\partial D(r') \\ \etath\in B_{\mathbb{R}^{n+1}}(0,\rho') } } \big(L_{\mu_0^1}V\circ\uppi\big)(\etath) = - \max_{ \substack{ \uppi(\hat{\etat})\in\partial D(r') \\ \etath\in B_{\mathbb{R}^{n+1}}(0,\rho') } } \frac{\partial V}{\partial x}\left(\uppi(\hat{\etat})\right)(A+bK)\uppi(\hat{\etat}) >0. $$ Notice that $ (\mu^1_\delta-\mu^1_0+\mu^2_\delta)(\etath) = \delta \etath_{n+1} \begin{pmatrix} b\\ b'\ubar{\etath} \end{pmatrix}$. Hence, without loss of generality, one can assume that $\delta_0>0$ is (small enough) such that for all $\delta\in(0, \delta_0)$, $$ \max_{ B_{\mathbb{R}^{n+1}}(0,\rho') } |L_{\mu^1_\delta-\mu^1_0+\mu^2_\delta}V\circ\uppi|\leq \frac{1}{3}\underline{m}. $$ Fix $\delta\in(0, \delta_0)$. Assume for the sake of contradiction that $\ubar{\hat{\etat}}$ leaves $D(r')$ for the first time at $T'_0>T_0$. Then \begin{align*} 0 &\leq \frac{\diff}{\diff t}V\left(\uppi(\etath(t))\right)\Big|_{t=T'_0}\\ &= (L_{\mu_0^1}V\circ\uppi)(\etath(T'_0)) + (L_{\mu^1_\delta-\mu^1_0+\mu^2_\delta}V\circ\uppi)(\etath(T'_0)) + \frac{\partial V\circ\uppi}{\partial \etath}(\etath(T'_0))\mu^3_{\delta, \alpha}(\varepsilon(T'_0), \etath(T'_0)) \\ &\leq - \frac{2}{3}\underline{m} + \frac{\partial V\circ\uppi}{\partial \etath}(\etath(T'_0))\mu^3_{\delta, \alpha}(\varepsilon(T'_0), \etath(T'_0)). \end{align*} Now, we show that there exists $\alpha_0>0$ large enough such that for all $\alpha>\alpha_0$, \begin{equation}\label{E:size_pert_h} \frac{\partial V\circ\uppi}{\partial \etath}(\etath(T'_0))\mu^3_{\delta, \alpha}(\varepsilon(T'_0), \etath(T'_0)) \leq \frac{1}{3}\underline{m}, \end{equation} which, combined with the previous inequality, yields $0\leq-\frac{1}{3}\underline{m}<0$, a contradiction. By definition of $\mathcal{L}_\alpha$, $\uppi$ and $\mu^3_{\delta, \alpha}$, $$ \frac{\partial V\circ\uppi}{\partial \etath}(\etath)\mu^3_{\delta, \alpha}(\varepsilon, \etath) = - \varepsilon_{n+1} \lambda_\delta(\etath) \frac{\partial V}{\partial x} (\uppi(\etath)) b.
$$ Let $Q=\max_{ \etath\in \bar{B}_{\mathbb{R}^{n+1}}(0,\rho') } |\lambda_\delta(\etath) \frac{\partial V}{\partial x} (\uppi(\etath)) b| $, so that $|\lambda_\delta(\etath(T'_0)) \frac{\partial V}{\partial x} (\uppi(\etath(T'_0))) b|\leq Q$. Recall that $$ \dot{\varepsilon}_{n+1}=-\alpha \varepsilon_{n+1}+\lambda_\delta(\etath)b'\ubar{\varepsilon} $$ and thus, for all $t\geq0$, $$ \varepsilon_{n+1}(t)=e^{-\alpha t}\varepsilon_{n+1}(0)+\int_{0}^{t}e^{-\alpha(t-s)}\lambda_\delta(\etath(s))b'\ubar{\varepsilon}(s)\diff s. $$ Moreover, $\varepsilon(t)\in B_{\mathbb{R}^{n+1}}(0, \rho')$ and $\ubar{\etath}(t)\in B_{\mathbb{R}^{n}}(0, \rho')$ for all $t\in[0, T'_0]$ and $$ \lambda_\delta(\etath)=\begin{pmatrix} K&\delta \end{pmatrix} \etath = K \ubar{\etath} + \delta\left(\varepsilon_{n+1} + \frac{1}{2}|\ubar{\hat{\etat}}-\ubar{\varepsilon}|^2\right) . $$ Hence, $$ |\lambda_\delta(\etath)| \leq \rho' \left(|K| + \delta(1+2\rho') \right). $$ As a consequence, for all $t\in[0, T'_0]$, $$ |\varepsilon_{n+1}(t)|\leq \rho' \left(e^{-\alpha t}+\frac{\rho'^2|b|}{\alpha}\left(|K| + \delta(1+2\rho') \right)\right) . $$ Thus there exists $\alpha_0>0$ such that if $\alpha>\alpha_0$, then $|\varepsilon_{n+1}(T'_0)|\leq \dfrac{\underline{m}}{3 Q}$. Fix $\alpha>\alpha_0$. Then \eqref{E:size_pert_h} holds, which concludes the proof of the lemma. \end{proof} \subsubsection{Observability analysis}\label{sec:obs_finie} The following lemma is a crucial step of the proof of Theorem~\ref{th:finite} that emphasizes the usefulness of the feedback perturbation described above. Indeed, one can easily see that its proof fails if $\delta=0$ (since the matrix $\mathcal{Q}$ defined below is not invertible in this case). \begin{lmm}\label{lem:observability} Let $(z_0,\etath_0)\in \left(\mathcal{T}\times \mathbb{R}^{n+1}\right)\setminus\{(0,0)\}$.
Let $(\varepsilon,\etath)$ be the semi-trajectory of \eqref{E:system_observer_open} with initial condition $(\etath_0-z_0,\etath_0)$. Then, for all $T>0$, \eqref{E:system_plonge_finie} is observable in time $T$ for the input $u=\lambda_\delta(\etath)$. \end{lmm} \begin{proof} Let $\omega_0\in\ker(\mathcal{C})\setminus\{0\}$, and consider $\omega$ a solution of the dynamical system \begin{equation}\label{E:omega} \dot{\omega}=\mathcal{A}(\lambda_\delta(\etath))\omega \end{equation} with initial condition $\omega_0$. To prove the result, it is sufficient to show that $\mathcal{C}\omega$ has a non-zero derivative of some order at $t=0$ if $(z_0,\etath_0)\neq(0, 0)$. Indeed, this implies that for all initial conditions $z_0\neq\tilde{z}_0$ in $\mathbb{R}^{n+1}$, if $z$ (resp. $\tilde{z}$) is the solution of \eqref{E:system_plonge_finie} with initial condition $z_0$ (resp. $\tilde{z}_0$), then $\omega = z - \tilde{z}$ is a solution to \eqref{E:omega} starting at $\omega_0\neq0$ and $\mathcal{C}\omega$ is not constantly equal to zero on any time interval $[0, T]\subset\mathbb{R}_+$. We prove this fact by contradiction: assume that \begin{equation}\label{E:omega_der} \mathcal{C}\omega^{(k)}(0)=\omega_{n+1}^{(k)}(0)=0\qquad \forall k\in \N, \end{equation} for some $\omega(0)\neq0$, and prove that $(z_0, \etath_0) = (0, 0)$. Let $u = \lambda_\delta(\etath)$. Then $\dot\omega_{n+1}=ub'\ubar{\omega}$ and $\dot{\ubar{\omega}} = A\ubar{\omega}$. Hence \begin{equation} \label{leibniz} 0 = \omega_{n+1}^{(k+1)}(0) = \sum_{i=0}^k \binom{k}{i} u^{(i)}(0) b'A^{k-i}\ubar{\omega}(0) \end{equation} for all $k\in\N$, where $\binom{k}{i}$ denotes the usual binomial coefficient. The proof goes through the following three steps. \textbf{Step 1: Show that $\boldsymbol{u^{(k)}(0)=0}$ for all $\boldsymbol{k\in\N}$.} Assume, for the sake of contradiction, that the derivatives of $u$ at $0$ do not all vanish, and let $p\in\N$ be the smallest integer such that $u^{(p)}(0)\neq0$. 
Since $u^{(i)}(0)=0$ for all $i<p$, equation \eqref{leibniz} applied at order $p+k$ yields \begin{equation} \label{leibniz2} \sum_{i=0}^k \binom{p+k}{p+i} u^{(p+i)}(0) b'A^{k-i}\ubar{\omega}(0) = 0 \end{equation} for all $k\in\N$. Since $(A, b)$ is controllable and $\ubar{\omega}(0)\neq0$, there exists $q\in\{0,\dots,n\}$ such that $ b'A^{q}\ubar{\omega}(0)\neq0$ and $b'A^{i}\ubar{\omega}(0)=0$ for all $i\in\{0,\dots,q-1\}$. Then \begin{equation} \label{leibniz3} 0 = \sum_{i=0}^q \binom{p+q}{p+i} u^{(p+i)}(0) b'A^{q-i}\ubar{\omega}(0) =\binom{p+q}{p} u^{(p)}(0) b'A^{q}\ubar{\omega}(0), \end{equation} which is a contradiction since both factors $u^{(p)}(0)$ and $b'A^{q}\ubar{\omega}(0)$ are non-zero. \textbf{Step 2: Find $\boldsymbol{\mathcal{Q}\in\mathbb{R}^{(n+2)\times(n+2)}}$ (invertible) such that $\boldsymbol{\mathcal{Q}\begin{pmatrix} \etath(0)\\ \varepsilon_{n+1}(0) \end{pmatrix}=0}$.} For all $k\in\N$, $$ 0 = u^{(k)}(0) = \begin{pmatrix} K& \delta \end{pmatrix}\etath^{(k)}(0). $$ Moreover, \begin{align*} \begin{pmatrix} \dot{\ubar{\etath}}\\ \dot{\etath}_{n+1}\\ \dot{\varepsilon}_{n+1} \end{pmatrix} = \begin{pmatrix} A & -bu & 0\\ b'u & 0 & -\alpha\\ 0 & 0 & -\alpha \end{pmatrix} \begin{pmatrix} \ubar{\etath}\\ \etath_{n+1}\\ \varepsilon_{n+1} \end{pmatrix} + u \begin{pmatrix} b\\ 0\\ b' \ubar{\varepsilon} \end{pmatrix}. \end{align*} Hence, for all $k\geq1$, $ \ubar{\etath}^{(k)}(0) = A^k\ubar{\etath}(0) $ and $ \etath_{n+1}^{(k)}(0) = \varepsilon_{n+1}^{(k)}(0) = (-\alpha)^k\varepsilon_{n+1}(0) $.\\ Thus $\begin{pmatrix} KA^k& \delta(-\alpha)^{k} \end{pmatrix}\begin{pmatrix} \ubar{\etath}(0)\\ \varepsilon_{n+1}(0) \end{pmatrix} = 0$ for all $k\geq1$. By setting \begin{equation} \mathcal{Q} = \begin{pmatrix} K & \delta & 0\\ K A & 0 & - \delta \alpha\\ \vdots & \vdots & \vdots \\ K A^{n+1} & 0 & \delta(-\alpha)^{n+1} \end{pmatrix} \end{equation} we get that $\mathcal{Q}\begin{pmatrix} \etath(0)\\ \varepsilon_{n+1}(0) \end{pmatrix}=0$. \textbf{Step 3: Conclusion.} In Appendix~\ref{app:det}, we check that $\mathcal{Q}$ is invertible. Hence, $\etath(0) = 0$ and $\varepsilon_{n+1}(0) = 0$. 
Thus, $\frac{1}{2}\abs{\ubar{z}(0)}^2 = z_{n+1}(0) = \etath_{n+1}(0) - \varepsilon_{n+1}(0) = 0$, \emph{i.e.,~} $(z_0, \etath_0) = (0, 0)$, which contradicts the hypothesis of the lemma. \end{proof} On the basis of Lemmas~\ref{lem:bound} and~\ref{lem:observability}, we are now in a position to prove Theorem~\ref{th:explicit}. Let $\mathcal{K}_x\times\widehat{\mathcal{K}}_w\subset\mathbb{R}^n\times\mathbb{R}^{n+1}$ be a compact set, and $\delta_0>0$ and $\alpha_0>0$ be as in Lemma~\ref{lem:bound}. Fix $\delta\in(0, \delta_0)$ and $\alpha\in(\alpha_0, +\infty)$. Let $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\widehat{\mathcal{K}}_w$ and $(x, \hat{\etat})$ be the corresponding solution of \eqref{E:system_sep_rot}. Set $z=\uptau(x)$, $\varepsilon=\hat{\etat}-z$ so that $(\varepsilon, \hat{\etat})$ is the solution of \eqref{E:system_observer_closed} starting from $(\varepsilon_0, \hat{\etat}_0)$, $\varepsilon_0=\hat{\etat}_0-\uptau(x_0)$. We need to show the two following statements: \medskip \begin{itemize}[wide=0pt, labelwidth=\widthof{2. (Attractivity)~}] \item[1. (Stability)\hfill] $(0, 0)$ is a stable equilibrium point of~\eqref{E:system_sep_rot}, \item[2. (Attractivity)] and its basin of attraction contains $\mathcal{K}_x\times\widehat{\mathcal{K}}_w$. \end{itemize} \medskip We prove the former in Section~\ref{sec:stability} and the latter in Section~\ref{sec:attractivity}. \subsubsection{Stability}\label{sec:stability} Let $R>0$. We seek $r>0$ such that, if $|x_0|, |\hat{\etat}_0|\leq r$, then $|x(t)|, |\hat{\etat}(t)| \leq R$ for all $t\in\mathbb{R}_+$. We have \begin{align*} \dot x &= A x + b \lambda_\delta(\hat{\etat}) \\ &= A x + b \lambda_\delta(\uptau(x)+\varepsilon) \\ &= Ax + b\phi_\delta(x) + b \begin{pmatrix} K&\delta \end{pmatrix} \varepsilon. \end{align*} Fix $\eta>0$ such that $R-\eta\sqrt{1+\frac{\eta^2}{2}}>0$. 
Since the origin of $\dot{x} = Ax + b\phi_\delta(x)$ is locally asymptotically stable, there exist two positive constants $r_x<\eta$ and $r_\eps\leq R-\eta\sqrt{1+\frac{\eta^2}{2}}$ such that, if $|x_0|\leq r_x$ and $|\varepsilon(t)|\leq r_\eps$ for all $t\in\mathbb{R}_+$, then $|x(t)|\leq \eta$ for all $t\in\mathbb{R}_+$. Let $r\in(0, r_x)$ be such that $r+r\sqrt{1+\frac{r^2}{2}}\leq r_\eps$. Assume that $|x_0|, |\hat{\etat}_0|\leq r$. Then, $$ |\varepsilon_0| \leq |\hat{\etat}_0|+|\uptau(x_0)| = |\hat{\etat}_0|+|x_0|\sqrt{1+\frac{|x_0|^2}{2}} \leq r + r\sqrt{1+\frac{r^2}{2}} \leq r_\eps. $$ According to \eqref{E:eps_decroit}, $|\varepsilon|$ is non-increasing. Hence, for all $t\in\mathbb{R}_+$, $|x(t)|\leq\eta\leq R$ and $$|\hat{\etat}(t)|\leq |\uptau(x(t))| + |\varepsilon(t)|\leq\eta\sqrt{1+\frac{\eta^2}{2}}+r_\eps\leq R.$$ \subsubsection{Attractivity}\label{sec:attractivity} According to \eqref{E:eps_decroit}, $\frac{\diff |\varepsilon|^2}{\diff t}= -2\alpha |\mathcal{C}\varepsilon|^2$. By LaSalle's invariance principle, the $\omega$-limit set of $\varepsilon$ is the largest invariant subset of $\ker \mathcal{C}$. Since $\varepsilon$ satisfies~\eqref{E:omega} on this set, Lemma~\ref{lem:observability} guarantees that either $\varepsilon\equiv0$, or $(\varepsilon_0, \hat{\etat}_0)=(0, 0)$, which also implies $\varepsilon\equiv0$. Therefore, the $\omega$-limit set of $\varepsilon$ reduces to $\{0\}$, \emph{i.e.,~} $\varepsilon\to0$. Since $\hat{\etat}_{n+1} = \varepsilon_{n+1} + \frac{1}{2}|\ubar{\hat{\etat}}-\ubar{\varepsilon}|^2$, it remains to prove that $\ubar{\hat{\etat}}\to0$. First, notice that $$ |\mu^2_\delta(\etath)| = |\lambda_\delta(\etath)-\phi_\delta(\ubar{\etath})|\sqrt{|b|^2+|b'\ubar{\hat{\etat}}|^2} $$ and $$ |\mu^3_{\delta, \alpha}(\varepsilon,\etath)| = \sqrt{\alpha^2+|b|^2\lambda_\delta(\etath)^2}|\mathcal{C}\varepsilon|. $$ Since $\mathcal{C}\varepsilon\to 0$ and $\etath$ is bounded, $|\mu^3_{\delta, \alpha}(\varepsilon,\etath)|\to 0$. 
Likewise, \begin{align*} \lambda_\delta(\etath)-\phi_\delta(\ubar{\etath}) &=\delta\left(\hat{\etat}_{n+1}-\frac{1}{2}|\ubar{\hat{\etat}}|^2\right)\\ &=\delta\left(\varepsilon_{n+1}+z_{n+1}-\frac{1}{2}|\ubar{\varepsilon}|^2-\frac{1}{2}|\ubar{z}|^2-\ubar{\varepsilon}'\ubar{z}\right)\\ &= \delta\left(\varepsilon_{n+1}-\frac{1}{2}|\ubar{\varepsilon}|^2-\ubar{\varepsilon}'\ubar{z}\right). \end{align*} Since $\varepsilon\to 0$ and $z$ is bounded, $\mu^2_\delta(\etath)\to0$. According to the converse Lyapunov theorem \cite[Theorem 1]{teel2000smooth}, there exists a strict proper Lyapunov function $V_\delta$ for system~\eqref{E:system_rot} with feedback law $\phi_\delta:x\mapsto K x+\frac{\delta}{2}|x|^2$ over the basin of attraction $\mathcal{D}_\delta$. For all $r>0$, set $D(r)=\{x\in\mathcal{D}_\delta\mid V_\delta(x)\leq r\}$. In order to prove that $\ubar{\hat{\etat}}\to0$, we show that for all $r>0$, there exists $T(r)\geq0$ such that $\ubar{\hat{\etat}}(t)\in D(r)$ for all $t\geq T(r)$. According to Lemma~\ref{lem:bound}, there exists a compact set $\mathcal{K}\subset \mathcal{D}_\delta$ such that $\ubar{\hat{\etat}}\in \mathcal{K}$. If $r>0$ is such that $\mathcal{K}\subset D(r)$, then $T(r)=0$ satisfies the statement. Otherwise, let $0<r<R$ be such that $\mathcal{K}\not \subset D(r)$ and $\mathcal{K}\subset D(R)$. Then $$ \bar{m}:=-\max_{x\in D(R)\setminus D(r)} \frac{\partial V_\delta}{\partial x}(x)(Ax+b\phi_\delta(x)) >0. $$ Since $|\mu^2_\delta(\etath(t))|\to 0$ and $|\mu^3_{\delta, \alpha}(\varepsilon(t),\etath(t))|\to 0$, there exists $T_1(r)>0$ such that for all $t\geq T_1(r)$, if $\ubar{\hat{\etat}}(t)\not\in D(r)$, then $$ \frac{\diff}{\diff t}V_\delta(\ubar{\hat{\etat}})<-\frac{\bar{m}}{2}. $$ First, this implies that if $\ubar\etath(t)\in D(r)$ for some $t\geq T_1(r)$, then $\ubar\etath(s)\in D(r)$ for all $s\geq t$. 
Second, for all $t\geq0$, \begin{align*} V_\delta(\ubar{\hat{\etat}}(T_1(r)+t)) &= V_\delta(\ubar{\hat{\etat}}(T_1(r))) + \int_0^{t}\frac{\diff}{\diff s}V_\delta(\ubar{\hat{\etat}}(T_1(r)+s))\diff s \\ &\leq R - \frac{\bar{m}}{2}t \tag{while $\ubar{\hat{\etat}}(T_1(r)+t)\notin D(r).$} \end{align*} Set $T_2(r) = \frac{2R-r}{\bar{m}}$ and $T(r) = T_1(r) + T_2(r)$. Then for all $t\geq T(r)$, $\ubar{\hat{\etat}}(t)\in D(r)$, which concludes the proof of convergence, and therefore the proof of Theorem~\ref{th:explicit}. \section{An infinite-dimensional perspective}\label{sec:infinite} Guided by the illustrative example of Section~\ref{sec:finite}, we aim to provide more general results, based on the same two principles: embedding into a dissipative system, and feedback perturbation. The embedding strategy used in Section~\ref{sec:embedding-finite} appears to be hard to generalize, so a different strategy must be found. In \cite{Celle-etal.1989}, the authors introduce a technique for the synthesis of observers for nonlinear systems. The method is based on representation theory, and embedding into bilinear unitary systems. It is far more general than the embedding found in Section~\ref{sec:embedding-finite}. The price to pay is that the observer system can be infinite-dimensional. In the rest of the paper, we apply this strategy in the context of dynamic output feedback stabilization at an unobservable target. In this section, we exhibit some general results when such an embedding exists. \subsection{Infinite-dimensional framework and statement of the main result} Since our goal is to introduce an infinite-dimensional strategy for output feedback stabilization, Definition~\ref{def:stab_out} must be amended. When dealing with infinite-dimensional systems, it is necessary to fix a suitable functional framework. Moreover, we would also like to take piecewise constant feedback laws into account. For these reasons, we introduce the following definitions. 
\begin{dfntn}\label{def:framework} Let $(Z,\|\cdot \|_{Z})$ be a Hilbert space and $\mathcal{D}\subset Z$ be a dense subspace. Let $(\mathcal{E}(u):\mathcal{D}\to Z)_{u\in\mathbb{R}^p}$ be a family of (potentially) unbounded linear operators. Let $\mathcal{B}:\mathbb{R}^m\to Z$ and $\varpi:Z\times\mathbb{R}^m\to\mathbb{R}^p$ be two continuous maps. For all $k\in\N$, set $t_k = k\Delta$ for some $\Delta>0$. We call \emph{infinite-dimensional piecewise constant dynamic output feedback} of system~\eqref{E:system_general} the system \begin{equation}\label{E:system_stab_infinie_0} \left\{ \begin{aligned} &\dot x = f(x, u)\\ &y = h(x) \end{aligned} \right. ,\qquad \left\{ \begin{aligned} &\dot{\hat{\etat}}= \mathcal{E}(u) \hat{\etat} + \mathcal{B}(y)\\ &u(t_k) = \varpi(\hat{\etat}(t_k^-), y(t_k^-))\\ &u(t) = u(t_k),\qquad t\in[t_k,t_{k+1}) \end{aligned} \right. \end{equation} where $\varpi(\hat{\etat}(t_k^-), y(t_k^-)) = \lim_{\substack{t\to t_k\\t< t_k}}\varpi(\hat{\etat}(t), y(t))$. \end{dfntn} \begin{dfntn}\label{def:stab_inf} Let $\mathcal{K}_x\subset\mathbb{R}^n$ be a compact set. System~\eqref{E:system_general} is said to be \emph{stabilizable over $\mathcal{K}_x$ by means of an infinite-dimensional piecewise constant dynamic output feedback} if there exists a feedback in the form of \eqref{E:system_stab_infinie_0} as in Definition~\ref{def:framework}, a bounded set $\widehat{\mathcal{Z}}\subset Z$ and $\hat{\etat}^\star\in\widehat{\mathcal{Z}}$ such that the following conditions are satisfied: \begin{enumerate}[label=\textit{(\roman*)}] \item For all $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\widehat{\mathcal{Z}}$, there exists at least one solution $(x, \hat{\etat})$ of \eqref{E:system_stab_infinie_0} in $C^0(\mathbb{R}_+, \mathbb{R}^n\times\dom)$. 
\label{defi} \item For all $R_x, R_{\hat{\etat}}>0$, there exist $r_x, r_{\hat{\etat}}>0$ such that for all $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\widehat{\mathcal{Z}}$, if $|x_0|<r_x$ and $\norm{\hat{\etat}_0-\hat{\etat}^\star}<r_{\hat{\etat}}$, then any solution $(x, \hat{\etat})$ of \eqref{E:system_stab_infinie_0} starting from $(x_0, \hat{\etat}_0)$ satisfies $|x(t)|<R_x$ and $\norm{\hat{\etat}(t)-\hat{\etat}^\star} < R_{\hat{\etat}}$ for all $t\geq0$. \label{defii} \item Any solution $(x, \hat{\etat})$ of \eqref{E:system_stab_infinie_0} with initial condition in $\mathcal{K}_x\times\widehat{\mathcal{Z}}$ is such that $x(t)\to0$ and $\hat{\etat}(t)\overset{w}{\rightharpoonup}\hat{\etat}^\star$ as $t$ goes to infinity. \label{defiii} \end{enumerate} Furthermore, this property is said to be \emph{semi-global} if it holds for any compact $\mathcal{K}_x\subset \mathbb{R}^n$. \end{dfntn} \begin{rmrk} If $Z$ is finite-dimensional, then \ref{defi}-\ref{defii}-\ref{defiii} is equivalent to the usual definition of asymptotic stability of~\eqref{E:system_stab_infinie_0} at $(0, 0)$ with basin of attraction containing $\mathcal{K}_x\times\widehat{\mathcal{Z}}$ (except that the feedback is now piecewise constant). However, when $Z$ is infinite-dimensional (the case of interest in this section), the convergence of trajectories towards the equilibrium point holds only in the weak topology. Hence, \ref{defi}-\ref{defii}-\ref{defiii} is not equivalent to the usual definition of asymptotic stability of the infinite-dimensional system~\eqref{E:system_stab_infinie_0}. \end{rmrk} In the following three sections, we introduce some general tools that can be used to prove stabilizability by means of an infinite-dimensional piecewise constant dynamic output feedback. 
Their development is motivated by the investigation of the following two-dimensional system presenting an archetypal singularity at the target point: \begin{equation}\label{E:system_depart0} \left\{ \begin{aligned} &\dot{x}= \rot x + b u \\ &y= h(x) \end{aligned} \right. \qquad \text{with}\ \rot = \begin{pmatrix} 0&-1 \\ 1&0 \end{pmatrix}, \ b = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \ \text{and}\ h:\mathbb{R}^2\to\mathbb{R}^m. \end{equation} Note that \eqref{E:system_depart0} is in the form of~\eqref{E:system_rot} as soon as $h$ is radially symmetric. Of course, our analysis is of interest only if~\eqref{E:system_depart0} is not uniformly observable. In particular, with $h(x) = \frac{1}{2}|x|^2$, we recover a subcase of \eqref{E:system_rot2} that was investigated in Theorem~\ref{th:finite}. The finite-dimensional strategy developed in Section~\ref{sec:finite} was specific to this output, and we wish to explore stabilization for different output maps that present other observability singularities. We can now state the main stabilization result obtained on system~\eqref{E:system_depart0}, which is the main matter of Section~\ref{sec:example_infinie}. The method relies on unitary irreducible representations of the group induced by the dynamics. Their corresponding special functions, which are Bessel functions, play a major role. Recall that the Bessel function of the first kind of order $k\in\mathbb{Z}$ is given by: \begin{equation}\label{Bessel_def} J_k:\mathbb{R}\ni r \mapsto \frac{1}{2\pi}\int_0^{2\pi}e^{ir\sin(s) - iks}\diff s\in\mathbb{R}. \end{equation} The first positive zero of the derivative of $J_1$ is denoted by $j_1$. 
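As a purely numerical aside (not part of the argument), the integral formula \eqref{Bessel_def} can be checked against a standard implementation of the Bessel functions of the first kind, and the constant $j_1$ can be evaluated. The Python sketch below relies on SciPy; the helper name \texttt{J} is ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jnp_zeros

def J(k, r):
    # (1/2pi) * integral over [0, 2pi] of exp(i r sin(s) - i k s) ds;
    # the imaginary part cancels by symmetry, so only the real part is kept.
    val, _ = quad(lambda s: np.cos(r * np.sin(s) - k * s), 0.0, 2.0 * np.pi)
    return val / (2.0 * np.pi)

# The integral definition agrees with the classical Bessel functions J_k.
for k in range(4):
    for r in (0.5, 1.0, 2.3):
        assert abs(J(k, r) - jv(k, r)) < 1e-7

# j_1: first positive zero of the derivative of J_1.
j1 = jnp_zeros(1, 1)[0]
print(j1)  # ≈ 1.8412
```

In particular $j_1\approx1.8412$, so in the setting of the theorem below the admissible compact sets sit inside a ball of radius roughly $1.84/\mu$.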
\begin{thrm}\label{cor:main} If for all $r>0$ and all $\theta\in\S^1$, $\mathfrak{h}(h(r \cos\theta, r \sin\theta)) = \sum_{k\in I} c_k J_k(\mu r) e^{-ik\theta} $ for some map $\mathfrak{h}:\mathbb{R}^m\to\mathbb{C}$, $\mu>0$, $(c_k)_{k\in I}\in \mathbb{C}^{I}$ and $I\subset\mathbb{Z}$ finite, then \eqref{E:system_depart0} is stabilizable over any compact set $\mathcal{K}$ in $B_{\mathbb{R}^2}(0, \frac{j_1}{\mu})$ by means of an infinite-dimensional piecewise constant dynamic output feedback. \end{thrm} The idea behind this result is that using representation theory, we are able to exhibit an embedding of System~\eqref{E:system_depart0} into a dissipative system over a Hilbert space. If the output of \eqref{E:system_depart0} can be transformed into a linear form of this space, this allows us to write a classical Luenberger observer. This strategy is responsible for the particular form of $\mathfrak{h}\circ h$ in terms of Bessel functions: it corresponds precisely to the composition of the embedding with linear forms, as will be made explicit later. In particular, this approach allows us to recover output stabilization results with the output $h(x)=\frac{1}{2}|x|^2$ that was discussed in Section 3, as illustrated by the following corollary. \begin{crllr}\label{cor:x2inf} If $h(x)=\frac{1}{2}|x|^2$, then \eqref{E:system_depart0} is semi-globally stabilizable by means of an infinite-dimensional piecewise constant dynamic output feedback. \end{crllr} The proofs of Theorem~\ref{cor:main} and Corollary~\ref{cor:x2inf} are discussed in Section~\ref{sec:example_infinie} as an application of Theorem~\ref{th:infinite}. As for the case $h(x) = \frac{1}{2}|x|^2$ that was treated via a finite-dimensional strategy in Section~\ref{sec:finite}, one can devise a similar strategy for radially symmetric output maps from which $\frac{1}{2}|x|^2$ can be extracted, at least locally around the target, by inversion. 
However, this is impossible if the output is not radially symmetric. For instance, if $h(r \cos\theta, r \sin\theta) = J_2(\mu r)\cos(2\theta)$ for some $\mu>0$, then the output is not radially symmetric and the system is unobservable at the target point. Still, according to Theorem~\ref{cor:main}, system~\eqref{E:system_depart0} is stabilizable over any compact set in $B_{\mathbb{R}^2}(0, \frac{j_1}{\mu})$ by means of an infinite-dimensional embedding-based dynamic output feedback. \subsection{Embedding into unitary systems and observer design} Let $(Z, \|\cdot\|_Z)$ be a separable Hilbert space and $\mathcal{D}$ be a dense subspace of $Z$. For all $u\in\mathbb{R}^p$, let $\mA(u) : \dom \to Z$ be the skew-adjoint generator of a strongly continuous unitary group on $Z$ and $\mC\in \mathscr{L}(Z, \mathbb{C}^{\mathfrak{m}})$ for some positive integer $\mathfrak{m}$. Let $u:\mathbb{R}_+\to\mathbb{R}^p$ be piecewise constant, \emph{i.e.,~} such that $u(t) = u(t_k)$ for all $t\in[t_k, t_{k+1})$ for some sequence $(t_k)_{k\in\N}$ in $\mathbb{R}_+$ with constant positive increment $t_{k+1} - t_k = \Delta$. Let $z_0\in Z$. Consider the non-autonomous linear abstract Cauchy problem with linear measured output \begin{equation}\label{E:system_plonge_infinie} \begin{aligned} \begin{cases} \dot z = \mA(u(t)) z\\ z(0) = z_0 \end{cases} \end{aligned} \qquad \mathfrak{y} = \mathcal{C} z. \end{equation} According to \cite[Chapter 5, Theorem 4.8]{Pazy}, the family $(\mA(u(t)))_{t\in\mathbb{R}_+}$ is the generator of a unique evolution system on $Z$ that we denote by $(\sg_{t}(\cdot, u))_{t\in\mathbb{R}_+}$. For any $z_0\in Z$,~\eqref{E:system_plonge_infinie} admits a unique solution $z\in C^0(\mathbb{R}_+, Z)$ given by $z(t) = \sg_{t}(z_0, u)$ for all $t \in\mathbb{R}_+$. Moreover, if $z_0\in\dom$, then $z\in C^0(\mathbb{R}_+, \dom)$ and is continuously differentiable (with values in $Z$) on $[t_k, t_{k+1}]$ for all $k\in\N$. 
The reader may refer to \cite[Chapter~5]{Pazy}, \cite[Chapter~VI.9]{engel2001one} or \cite{ito} for more details on the theory of evolution equations. For such systems, a Luenberger observer with constant gain $\alpha>0$ can be built as follows: \begin{equation} \begin{aligned} \begin{cases} \dot \hat{\etat} = \mA(u(t)) \hat{\etat} - \alpha\mC^*(\mC\hat{\etat}-\mathfrak{y})\\ \hat{\etat}(0) = \hat{\etat}_0\in Z. \end{cases} \end{aligned} \label{obs} \end{equation} Set $\varepsilon = \hat{\etat}-z$ and $\varepsilon_0 = \hat{\etat}_0-z_0$. From now on, $\hat{\etat}$ represents the state estimation made by the observer system and $\varepsilon$ the error between this estimation and the actual state of the system. Then $\hat{\etat}$ satisfies~\eqref{obs} if and only if $\varepsilon$ satisfies \begin{equation} \begin{aligned} \begin{cases} \dot \varepsilon = (\mA(u(t)) - \alpha\mC^*\mC)\varepsilon\\ \varepsilon(0) = \varepsilon_0. \end{cases} \end{aligned} \label{eps} \end{equation} Since $\mC\in\mathscr{L}(Z, \mathbb{C}^{\mathfrak{m}})$, \cite[Chapter 5, Theorem 2.3]{Pazy} ensures that $(\mA(u(t))-\alpha\mC^*\mC)_{t\ge 0}$ is also a stable family of generators of strongly continuous semigroups, and generates an evolution system on $Z$ denoted by $(\sgeps_t(\cdot, u))_{t\in\mathbb{R}_+}$. Then, systems \eqref{obs} and~\eqref{eps} have unique solutions $\hat{\etat}$ and $\varepsilon$, respectively, in $C^0(\mathbb{R}_+, Z)$. Moreover, $\hat{\etat}(t) =\sg_t(z_0, u)+\sgeps_t(\varepsilon_0, u)$ and $\varepsilon(t) = \sgeps_t(\varepsilon_0, u)$ for all $t \in\mathbb{R}_+$. If $(\hat{\etat}_0, \varepsilon_0)\in\dom^2$, then $\hat{\etat}, \varepsilon\in C^0(\mathbb{R}_+, \dom)$, and both are continuously differentiable (with values in $Z$) on $[t_k, t_{k+1}]$ for all $k\in\N$. 
This infinite-dimensional Luenberger observer has been investigated in \cite{Celle-etal.1989} (see also \cite{brivadis:hal-02529820}), in which it is proved that $\varepsilon(t)\cvf0$ as $t$ goes to infinity if $u$ is a \emph{regularly persistent input}. Our goal is to embed the original system \eqref{E:system_general} into a unitary system, and to use this observer design in the context of dynamic output feedback stabilization. In Section~\ref{sec:rep}, we exhibit an explicit embedding of \eqref{E:system_depart0} into \eqref{E:system_plonge_infinie}. \begin{dfntn}[Embedding] An injective map $\uptau:\mathbb{R}^n\to Z$ is said to be an embedding\footnote{ This definition does not coincide with the usual notion of embedding in differential topology.} of \eqref{E:system_general} into the unitary system \eqref{E:system_plonge_infinie} if there exists $\mathfrak{h}:\mathbb{R}^m\to\mathbb{C}^\mathfrak{m}$ such that the following diagram is commutative for all $t\in\mathbb{R}_+$ and any piecewise constant input $u:\mathbb{R}_+\to \mathbb{R}^p$: \begin{equation}\label{diagram} \xymatrix{ \mathbb{R}^n \ar[d]_\uptau \ar[r]^{\varphi_t(\cdot, u)} & \mathbb{R}^n \ar[d]^\uptau \ar[r]^h & \mathbb{R}^m \ar[r]^{\mathfrak{h}} & \mathbb{C}^\mathfrak{m} \\ Z \ar[r]_{\mathbb{T}_t(\uptau(\cdot), u)} & Z \ar[urr]_\mathcal{C} } \end{equation} \emph{i.e.}, for all $x_0\in\mathbb{R}^n$, $\uptau(\varphi_t(x_0, u)) = \mathbb{T}_t(\uptau(x_0), u)$ and $\mathfrak{h}(h(x_0)) = \mathcal{C} \uptau(x_0)$. \end{dfntn} Here, the map $\mathfrak{h}$ is a degree of freedom that may be chosen to find an embedding of \eqref{E:system_general} into~\eqref{E:system_plonge_infinie}. Let $u:\mathbb{R}_+\to \mathbb{R}^p$ be piecewise constant, $z_0,\, \varepsilon_0\in\mathcal{D}$, $z(t) = \sg_t(z_0, u)$ and $\varepsilon(t) = \sgeps_t(\varepsilon_0, u)$ for all $t\in\mathbb{R}_+$. 
For all $k\in\N$, $\mathcal{A}(u(t_k))$ is skew-adjoint, hence for all $t\in[t_k, t_{k+1})$, \begin{align} &\frac12\frac{\diff \norm{z}^2}{\diff t}(t) = \Re\psX{\mathcal{A}(u(t))z(t)}{z(t)} = 0, \label{E:etat_constant} \\ &\frac{1}{2}\frac{\diff \norm{\varepsilon}^2}{\diff t}(t) =\Re\psX{\mathcal{A}(u(t))\varepsilon(t)}{\varepsilon(t)} -\alpha\Re\psX{\mathcal{C}^*\mathcal{C}\varepsilon(t)}{\varepsilon(t)} = -\alpha \norm{\mathcal{C}\varepsilon(t)}^2 \leq 0. \label{E:eps_decroit_infinie} \end{align} Thus $\norm{z}$ is constant and $\norm{\varepsilon}$ is non-increasing. If there exists a positive constant $\beta$ such that for all $x\in\mathcal{D}$ and all $u\in\mathbb{R}^p$, \begin{equation}\label{plong:eqCCA} \norm{\mathcal{C}^*\mathcal{C}\mathcal{A}(u)x} \leq \beta \norm{x}, \end{equation} then \begin{align*} \frac12\frac{\diff}{\diff t}\|\mathcal{C}\varepsilon(t)\|_{\mathbb{C}^\mathfrak{m}}^2 &= \langle\mathcal{C}\varepsilon(t), \mathcal{C}\dot\varepsilon(t)\rangle_{\mathbb{C}^\mathfrak{m}}\\ &= \langle\mathcal{C}\varepsilon(t), \mathcal{C}\mathcal{A}(u(t))\varepsilon(t)\rangle_{\mathbb{C}^\mathfrak{m}} - \alpha \langle\mathcal{C}\varepsilon(t), \mathcal{C}\mC^*\mathcal{C}\varepsilon(t)\rangle_{\mathbb{C}^\mathfrak{m}}\\ &= \psX{\varepsilon(t)}{\mathcal{C}^*\mathcal{C}\mathcal{A}(u(t))\varepsilon(t)} - \alpha \|\mathcal{C}^*\mathcal{C}\varepsilon(t)\|_Z^2\\ &\leq \beta \|\varepsilon(0)\|_Z^2 \end{align*} since $\norm{\varepsilon}$ is non-increasing. Then, $t\mapsto\norm{\mathcal{C}\varepsilon(t)}^2$ is non-negative, integrable over $\mathbb{R}_+$ (by \eqref{E:eps_decroit_infinie}), and has a bounded derivative (by \eqref{plong:eqCCA} and the computation above). Hence, according to Barbalat's lemma, $\mathcal{C}\varepsilon(t)\to0$ as $t\to+\infty$. Inequality~\eqref{E:eps_decroit_infinie} is similar to \eqref{E:eps_decroit}, and will be a key argument to achieve the dynamic output feedback stabilization. 
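For intuition, the conservation and dissipation relations \eqref{E:etat_constant}--\eqref{E:eps_decroit_infinie} can be observed numerically on a finite-dimensional analogue of \eqref{E:system_plonge_infinie}--\eqref{obs}, taking $\mathcal{A}(u)=uS$ with $S\in\mathbb{R}^{3\times3}$ skew-symmetric (so the flow is unitary) and $\mathcal{C}$ a row vector. All numerical values below (matrices, gain $\alpha$, input $u$) are illustrative choices of ours, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

S = np.array([[0., -1.,  0.],
              [1.,  0., -2.],
              [0.,  2.,  0.]])          # skew-symmetric: u*S generates a unitary flow
C = np.array([[1., 0., 0.]])            # bounded linear output map
alpha, dt = 1.0, 0.05                   # observer gain and sampling step

z = np.array([1.0, -0.5, 0.3])          # embedded state
eps = np.array([0.8, 0.6, -0.4])        # observer error
norms_z, norms_eps, out = [], [], []
for k in range(2000):
    u = 1.0 + 0.5 * np.sin(0.1 * k)     # input frozen on each interval [t_k, t_{k+1})
    A = u * S
    z = expm(dt * A) @ z                             # ||z|| is conserved
    eps = expm(dt * (A - alpha * C.T @ C)) @ eps     # error dynamics (eps)
    norms_z.append(np.linalg.norm(z))
    norms_eps.append(np.linalg.norm(eps))
    out.append(abs((C @ eps).item()))

assert max(norms_z) - min(norms_z) < 1e-9                              # (E:etat_constant)
assert all(a >= b - 1e-10 for a, b in zip(norms_eps, norms_eps[1:]))   # (E:eps_decroit_infinie)
assert out[-1] < 1e-2                                                  # C*eps -> 0 (Barbalat)
```

Each step integrates the frozen dynamics exactly via the matrix exponential, so the monotonicity checks are not polluted by discretization error.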
\subsection{Embedding inversion: from the embedded system's weak observer to the original system's observer} In Section~\ref{sec:finite}, a crucial argument was the existence of a left-inverse $\uppi$ to the embedding $\uptau$ (see \eqref{def:uppi}). Now, $Z$ being infinite-dimensional, we need to make precise the notion of left-inverse, and, moreover, the convergence of the observer $\hat{\etat}$ to the embedded state $z$ will hold only in the weak topology of $Z$, namely, $\varepsilon\cvf0$. This is an important issue, which causes difficulties in achieving output feedback stabilization. However, in this section, we show that if the original state $x$ remains bounded, and if the embedding $\uptau$ is injective and analytic, then $\hat{x}=\uppi(\hat{\etat})$ is actually an observer of $x$ in the usual topology of $\mathbb{R}^n$, namely, $\hat{x}-x\to0$. This is summarized in Corollary~\ref{cor:conv_faible_forte}, which is an important result of the paper. \begin{dfntn}[Strong left-inverse] Let $(Z, \|\cdot\|_Z)$ be a normed vector space, $\mathcal{K}_x\subset\mathbb{R}^n$ and $\uptau:\mathbb{R}^n\to Z$. A map $\uppi:Z\to\mathcal{K}_x$ is called a \emph{strong left-inverse} of $\uptau$ on $\mathcal{K}_x$ if and only if there exists a class $\Kclass_\infty$ function\footnote{A class $\Kclass_\infty$ function is a continuous function $\rho^*:\mathbb{R}_+\to\mathbb{R}_+$ such that $\rho^*(0)=0$, $\rho^*$ is strictly increasing and tends to infinity at infinity.} $\rho^*$ and $Q\in\mathscr{L}(Z,\mathbb{C}^q)$ for some positive integer $q$ such that, for all $(x, \xi)\in\mathcal{K}_x\times Z$, \begin{equation}\label{E:ps_inv} |\uppi(\xi)-x| \leq \rho^*(|Q(\xi-\uptau(x))|). 
\end{equation} \end{dfntn} \begin{rmrk} If $\uppi$ is a strong left-inverse of $\uptau$ on $\mathcal{K}_x$, then \eqref{E:ps_inv} implies that $\uppi$ is also a left-inverse in the usual sense: for all $x\in\mathcal{K}_x$, $\uppi(\uptau(x))=x$. In particular, $\uptau$ is injective over $\mathcal{K}_x$. \end{rmrk} The reason for which we look for a strong left-inverse of $\uptau$ is the following lemma, which follows directly from \eqref{E:ps_inv} and the fact that $Q\in\mathscr{L}(Z,\mathbb{C}^q)$. \begin{lmm}\label{lem:ps_inv} Let $(Z, \|\cdot\|_Z)$ be a normed vector space, $\mathcal{K}_x\subset\mathbb{R}^n$ and $\uptau:\mathbb{R}^n\to Z$. Let $\uppi:Z\to\mathcal{K}_x$ be a strong left-inverse of $\uptau$ on $\mathcal{K}_x$. Let $(x_n)_{n\in\N}$ and $(\xi_n)_{n\in\N}$ be two sequences in $\mathcal{K}_x$ and $Z$, respectively. If $\xi_n - \uptau(x_n) \overset{w}{\rightharpoonup} 0$ as $n$ goes to infinity, then $\abs{\uppi(\xi_n) - x_n}\cv0$ as $n$ goes to infinity. \end{lmm} This justifies the denomination of \emph{strong} left-inverse, in the sense that it allows one to pass from weak convergence in the infinite-dimensional space $Z$ to (usual) convergence in the finite-dimensional space $\mathbb{R}^n$. The following theorem states sufficient conditions for the existence of a strong left-inverse. \begin{thrm}\label{th:ps_inv} Let $Z$ be a separable Hilbert space, $\uptau:\mathbb{R}^n\to Z$ be an analytic map and $\mathcal{K}_x\subset\mathbb{R}^n$ be a compact set. If $\uptau|_{\mathcal{K}_x}$ is injective, then $\uptau$ has a continuous strong left-inverse on $\mathcal{K}_x$. \end{thrm} \begin{proof} Let $(e_k)_{k\in\N}$ be a Hilbert basis of $Z$. 
For all $i\in\N$, let $$E_i = \{(x_a, x_b) \in \mathbb{R}^n\times\mathbb{R}^n\mid \forall k\in\{0,\dots,i-1\},\ \psX{\uptau(x_a)-\uptau(x_b)}{e_k} = 0\}.$$ Then $(E_i)_{i\in\N}$ is a non-increasing family of analytic sets. According to \cite[Chapter 5, Corollary 1]{Narasimhan}, $(E_i \cap \mathcal{K}_x^2)_{i\in\N}$ is stationary, \emph{i.e.}, there exists $q\in\N$ such that $E_q \cap \mathcal{K}_x^2 = E_i \cap \mathcal{K}_x^2$ for all $i\geq q$. Hence, \begin{align} E_q \cap \mathcal{K}_x^2 &= \bigcap_{k\in\N} E_k\cap \mathcal{K}_x^2\nonumber\\ &= \{(x_a, x_b) \in \mathcal{K}_x^2\mid \uptau(x_a)=\uptau(x_b)\} \tag{since $(e_k)_{k\in\N}$ is a Hilbert basis of $Z$}\\ &= \{(x_a, x_a) \mid x_a\in \mathcal{K}_x\}\nonumber \end{align} since $\uptau$ is injective on $\mathcal{K}_x$. Let $Q : Z \ni \xi \mapsto (\psX{\xi}{e_k})_{k\in\{0,\dots,q-1\}} \in \mathbb{C}^q$ and $\tilde{\uptau} = Q \circ \uptau$. Then $\tilde{\uptau}$ is continuous and injective on $\mathcal{K}_x$. Indeed, for all $(x_a, x_b)\in\mathcal{K}_x^2$, if $\tilde{\uptau}(x_a) = \tilde{\uptau}(x_b)$, then $(x_a, x_b)\in E_q \cap \mathcal{K}_x^2$ which yields $x_a=x_b$. Hence, combining \cite[Lemma 6]{Bernard-etal.2017} and \cite[Theorem 1]{AndrieuPraly2006}, there exists a continuous map $\tilde{\uppi}:\mathbb{C}^q\to\mathcal{K}_x$ and a class $\Kclass_\infty$ function $\rho^*$ such that for all $(x, \mathfrak{z})\in\mathcal{K}_x\times\mathbb{C}^q$, $|\tilde{\uppi}(\mathfrak{z})-x| \leq \rho^*(|\mathfrak{z}-\tilde{\uptau}(x)|). $ Set $\uppi = \tilde{\uppi}\circ Q$. Then $\uppi$ is continuous and for all $(x, \xi)\in\mathcal{K}_x\times Z$, \[ |\uppi(\xi)-x| \leq \rho^*(|Q(\xi)-\tilde{\uptau}(x)|) = \rho^*(|Q(\xi-\uptau(x))|). 
\] \end{proof} Applying Theorem~\ref{th:ps_inv}, then Lemma~\ref{lem:ps_inv}, we get the following important corollary in our context. \begin{crllr}\label{cor:conv_faible_forte} Let $\uptau:\mathbb{R}^n\to Z$ be an analytic embedding of \eqref{E:system_general} into the unitary system \eqref{E:system_plonge_infinie} and $\mathcal{K}_x$ be a compact subset of $\mathbb{R}^n$. Then $\uptau$ has a continuous strong left-inverse $\uppi$ on $\mathcal{K}_x$. Let $x_0\in\mathcal{K}_x$, $\etath_0\in Z$ and $u:\mathbb{R}_+\to\mathbb{R}^p$ be piecewise constant. Denote by $x$ and $\etath$ the corresponding solutions of \eqref{E:system_general} and \eqref{obs}, respectively. Set $z=\uptau(x)$ and $\hat{x} = \uppi(\etath)$. Assume that $x(t)\in\mathcal{K}_x$ for all $t\in\mathbb{R}_+$. If $\etath - z\cvf0$, then $\hat{x} - x\cv0$. \end{crllr} \begin{rmrk} Beyond the problem of output feedback stabilization, Corollary~\ref{cor:conv_faible_forte} may be used in the context of observer design. In \cite{Celle-etal.1989}, after embedding the original finite-dimensional system into an infinite-dimensional unitary system, the authors investigate only the convergence of the infinite-dimensional observer. Corollary~\ref{cor:conv_faible_forte} states that if the infinite-dimensional observer converges and if the original system's state trajectory remains bounded, then an observer can be built for the original system, by using a strong left-inverse of the embedding. \end{rmrk} \subsection{Feedback perturbation and closed-loop system} In order to set up a separation principle to solve the dynamic output feedback stabilization problem of \eqref{E:system_general}, let us assume that Condition~\ref{hyp:state_feedback}~(semi-global) holds and that \eqref{E:system_general} admits an analytic embedding into the unitary system \eqref{E:system_plonge_infinie}. Let $\mathcal{K}_x$ be a compact subset of $\mathbb{R}^n$.
Denote by $\phi$ a locally asymptotically stabilizing state feedback of \eqref{E:system_general} with basin of attraction containing $\mathcal{K}_x$ and by $\uptau$ an embedding of \eqref{E:system_general} into \eqref{E:system_plonge_infinie}. According to Theorem~\ref{th:ps_inv}, there exists $\uppi:Z\to\mathcal{K}_x$, a strong left-inverse of $\uptau$ on $\mathcal{K}_x$. Then, a natural way to build a dynamic output feedback would be to combine \eqref{E:system_general}-\eqref{obs} with the control input $u = \phi(\uppi(\hat{\etat}))$, and to ensure that the state $x$ of \eqref{E:system_general} remains in $\mathcal{K}_x$. However, due to the unobservability of the original system at the target, we propose, as in Section~\ref{sec_FeedbackPert}, to add a perturbation to this feedback law. In \cite{Celle-etal.1989}, the convergence of the error system \eqref{eps} to $0$, when it holds, is only in the weak topology of $Z$. Therefore, the perturbation added to the feedback law must be chosen to vanish when the observer state $\hat{\etat}$ of \eqref{obs} tends towards $\uptau(0)$ in the weak topology. For this reason, let us define a weak norm on $Z$. \begin{dfntn}[Weak norm] Let $(e_k)_{k\in\mathbb{Z}}$ be a Hilbert basis of $Z$. For all $\xi\in Z$, set $$\mathcal{N}(\xi) = \sqrt{\sum_{k\in\mathbb{Z}} \frac{|\psX{\xi}{e_k}|^2}{k^2+1}}.$$ Then $\mathcal{N}$ defines a norm on $Z$, which we call the \emph{weak norm}. \end{dfntn} Note that $\mathcal{N}$ is not equivalent to $\|\cdot\|_Z$, but satisfies \begin{equation}\label{E:def_nu} \mathcal{N}(\cdot)\leq \nu\|\cdot\|_Z \quad \text{ with }\quad \nu = \sqrt{\sum_{k\in\mathbb{Z}} \frac{1}{k^2+1}} <+\infty. \end{equation} Moreover, $\mathcal{N}$ induces a metric on bounded sets of $Z$ endowed with the weak topology.
More precisely, for any bounded sequence $(\xi_n)_{n\in\N}$ in $Z$, $\mathcal{N}(\xi_n) \to 0$ as $n$ goes to infinity if and only if $\xi_n\overset{w}{\rightharpoonup} 0$ as $n$ goes to infinity. Now, for some positive constant $\delta$ to be fixed (small enough) later, we can add the perturbation $\hat{\etat}\mapsto\delta \mathcal{N}^2(\etath-\uptau(0))$ to the feedback law. Finally, the following coupled system is an infinite-dimensional piecewise constant dynamic output feedback of system~\eqref{E:system_general}: \begin{equation}\label{E:system_stab_infinie} \left\{ \begin{aligned} &\dot x = f(x, u)\\ &y = h(x) \end{aligned} \right. ,\qquad \left\{ \begin{aligned} &\dot{\etath}= \mathcal{A}(u) \etath -\alpha\mathcal{C}^*(\mathcal{C}\etath-\mathfrak{h}(y))\\ &u(t_k) = \phi(\uppi(\etath(t_k^-))) + \delta\mathcal{N}^2(\etath(t_k^-)-\uptau(0))\\ &u(t)=u(t_k),\qquad t\in[t_k,t_{k+1}) \end{aligned} \right. \end{equation} where $t_k = k\Delta$ for all $k\in\N$, for some $\Delta>0$. \section{Revisiting the illustrative example} \label{sec:example_infinie} In this section, we illustrate the use of infinite-dimensional embeddings in the context of output feedback stabilization on a two-dimensional example with linear dynamics and nonlinear observation map. Let $h:\mathbb{R}^2\to\mathbb{R}^m$. We consider the problem of stabilization by means of an infinite-dimensional embedding-based dynamic output feedback of the following system: \begin{equation}\label{E:system_depart} \left\{ \begin{aligned} &\dot{x}= \rot x + b u \\ &y= h(x) \end{aligned} \right. \qquad \text{with}\ \rot = \begin{pmatrix} 0&-1 \\ 1&0 \end{pmatrix} \ \text{and}\ b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \end{equation} Since $(A, b)$ is stabilizable, there exists $K\in\mathbb{R}^{1\times2}$ such that $A+bK$ is Hurwitz. Moreover, $A$ is skew-symmetric. Hence $\kappa = |K|$ can be chosen arbitrarily small.
Then, the state feedback law $\phi:x\mapsto Kx$ is such that \eqref{E:system_depart} with $u = \phi(x)$ is globally asymptotically stable at $0$. Note that \eqref{E:system_depart} does not exactly fit the form of~\eqref{E:system_rot} since $h$ is not necessarily radially symmetric. Of course, our analysis is of interest only if~\eqref{E:system_depart} is not uniformly observable. In Example~\ref{ex:Jk}, we exhibit a non-radially symmetric $h$ that makes the system non-uniformly observable, and on which our (infinite-dimensional) embedding-based strategy does apply. In the following, we give some sufficient conditions on $h$ allowing the design of a stabilizing infinite-dimensional dynamic output feedback. The main result of this section, Theorem~\ref{th:infinite} (stated in Section~\ref{sec:obs_infinie}), relies on three main hypotheses: the existence of an embedding of~\eqref{E:system_depart} into~\eqref{E:system_plonge_infinie}, and two observability assumptions. For each of these assumptions, we provide examples of output maps $h$ satisfying these hypotheses. We wish to underline that, in the rest of the paper, every occurrence of $f(x,u)$ is understood as $f(x,u)=Ax+bu$. \subsection{Unitary representations and embeddings} \label{sec:rep} In \cite{Celle-etal.1989}, the authors investigated the problem of observer design for~\eqref{E:system_depart} by means of infinite-dimensional embeddings. We briefly recall their strategy, which relies on representation theory (see, \emph{e.g.,~} \cite{vilenkin1978special, barut}).
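As a quick numerical sanity check (a sketch in Python; the particular gain $K = (0, -\kappa)$ is an illustrative assumption, not a choice made in the paper), one can verify that $A+bK$ is Hurwitz even for arbitrarily small $\kappa$:

```python
import numpy as np

# Rotation dynamics A and input vector b of the example system
A = np.array([[0.0, -1.0], [1.0, 0.0]])
b = np.array([[0.0], [1.0]])

# Illustrative gain K = (0, -kappa): trace(A+bK) = -kappa < 0 and
# det(A+bK) = 1 > 0, so A+bK is Hurwitz for every kappa > 0, however small.
for kappa in (1.0, 0.1, 1e-3):
    K = np.array([[0.0, -kappa]])
    eigs = np.linalg.eigvals(A + b @ K)
    assert np.all(eigs.real < 0)
```

This reflects the fact that $A$ is skew-symmetric (marginally stable), so an arbitrarily weak damping term suffices to stabilize it.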
The Lie group $\mathbb{G}$ of system~\eqref{E:system_depart} (the group of flows generated by the dynamical system \eqref{E:system_depart} with constant inputs) is isomorphic to $\mathbb{R}^2 \rtimes_\mathcal{R} H$, where $H\simeq \{e^{tA}, t\in\mathbb{R}_+\} \simeq \S^1$ is the group of rotations (isomorphic to the unit circle), $\mathcal{R}:\S^1\ni\theta\mapsto e^{\theta A}$ is an automorphism of $\mathbb{R}^2$ and $\rtimes_\mathcal{R}$ denotes the outer semi-direct product with respect to $\mathcal{R}$. Hence $\mathbb{G}$ is the group of motions of the plane. According to \cite[Section IV.2]{vilenkin1978special}, its unitary irreducible representations are given by a family $(\rho_\mu)_{\mu >0}$, where for each $\mu>0$, \fonction{\rho_\mu}{\mathbb{G}}{\mathscr{L}(L^2(\S^1, \mathbb{C}))}{(x, \vartheta)}{ \left( \xi\in L^2(\S^1, \mathbb{C}) \mapsto \left(\S^1\ni s\mapsto e^{i\mu (1, 0) e^{s A'} x} \xi(s-\vartheta)\right) \right).} Let $Z = L^2(\S^1, \mathbb{C})$ be the space of complex-valued square-integrable functions over $\S^1$. Then $Z$ is a Hilbert space endowed with the scalar product defined by $\psX{\xi}{\zeta} = \frac{1}{2\pi}\int_0^{2\pi}\xi(s)\bar{\zeta}(s)\diff s$ and the induced norm $\|\cdot\|_Z$. Since $\S^1$ is compact, the constant function $\mathbbold{1}:s\mapsto1$ lies in $Z$. Let $\mu>0$, to be fixed later. Set \fonction{\uptau_\mu}{\mathbb{R}^2}{Z}{x}{\rho_\mu(x, 0)\mathbbold{1}.} Note that $\uptau_\mu$ depends on $\mu$, but from now on we omit this dependence in the notation and write $\uptau$ instead of $\uptau_\mu$. Since $\rho_\mu$ is a unitary representation, $\|\uptau(x)\|_Z=1$ for all $x\in\mathbb{R}^2$ and $\uptau(0) = \mathbbold{1}$. For all $x=(x_1, x_2)=(r\cos\theta, r \sin\theta)$ in $\mathbb{R}^2$, we have \begin{equation}\label{E:deftau} \uptau(x):\S^1\ni s\mapsto e^{i\mu( x_1\cos(s)+ x_2\sin(s)) }= e^{i\mu r\cos(s-\theta) }.
\end{equation} If $x, \tilde{x}\in\mathbb{R}^2$ are such that $\uptau(x) = \uptau(\tilde{x})$, then $(x_1 - \tilde{x}_1)\cos(s) + (x_2 - \tilde{x}_2)\sin(s) = 0$ for all $s\in\S^1$, hence $x = \tilde{x}$. Thus $\uptau$ is injective. Let $x$ be a solution of \eqref{E:system_depart} and set $z = \uptau(x)$. Then \begin{align*} \dot z &= i \mu \left(\dot{x}_1 \cos(s) + \dot{x}_2 \sin(s)\right) z\\ &= i \mu \left(-x_2 \cos(s) + x_1 \sin(s) + u \sin(s) \right) z\\ &= - \frac{\partial z}{\partial s} + i u \mu \sin(s) z\\ &= \mathcal{A}(u) z \end{align*} with $\mathcal{A}(u) = -\frac{\partial}{\partial s} + i u \mu \sin(s)$ defined on the dense domain $\mathcal{D} = H^1(\S^1, \mathbb{C}) = \{f\in Z\mid f'\in Z\}$. The operator $\mathcal{A}(u)$ is the skew-adjoint generator of a strongly continuous unitary group on $Z$ for any $u\in\mathbb{R}$. In order to make $\uptau$ an embedding of \eqref{E:system_depart} into \eqref{E:system_plonge_infinie}, we need the output map to be of the form $\mathfrak y = \mathcal{C} z$. This is where the degree of freedom $\mathfrak{h}$ introduced in \eqref{E:system_stab_infinie} may be employed. More specifically, we make the following first assumption on the observation map $h$. \begin{assumption}[Linearizable output map]\label{ass:h} There exist $\mathfrak{h}:\mathbb{R}^m\to\mathbb{C}^\mathfrak{m}$ and $\mathcal{C}\in \mathscr{L}(Z, \mathbb{C}^\mathfrak{m})$ such that $\mathfrak{h}(h(x)) = \mathcal{C}\uptau(x)$ for all $x\in\mathbb{R}^2$. \end{assumption} If Assumption~\ref{ass:h} holds for System~\eqref{E:system_depart}, then the closed-loop infinite-dimensional piecewise constant dynamic output feedback system can take the form of \eqref{E:system_stab_infinie}. For all $k\in\mathbb{Z}$, let \fonction{e_k}{\S^1}{\mathbb{C}}{s}{e^{iks}.} The family $(e_k)_{k\in\mathbb{Z}}$ forms a Hilbert basis of $Z$. In the rest of the paper, the weak norm $\mathcal{N}$ is always defined with respect to this Hilbert basis.
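The coefficients $\psX{\uptau(x)}{e_k}$ in this basis, computed next, admit a closed form in terms of Bessel functions of the first kind; this identity can be cross-checked numerically (a sketch assuming Python with numpy and scipy; the test point $(\mu, r, \theta)$ is arbitrary):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_k

mu, r, theta = 1.3, 0.7, 0.4            # arbitrary test point x = (r cos t, r sin t)
N = 512
s = 2*np.pi*np.arange(N)/N
tau = np.exp(1j*mu*r*np.cos(s - theta))  # tau(x) sampled on the circle

for k in range(-4, 5):
    # <tau(x), e_k> by quadrature (rectangle rule, spectrally accurate here)
    lhs = np.mean(tau*np.exp(-1j*k*s))
    # Closed form: i^k J_k(mu r) e^{-ik theta}
    rhs = (1j)**k*jv(k, mu*r)*np.exp(-1j*k*theta)
    assert abs(lhs - rhs) < 1e-10
```

The rectangle rule is adequate because the integrand is smooth and periodic, so its quadrature error decays faster than any power of $1/N$.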
Then, for all $x=(r \cos\theta, r \sin\theta)\in\mathbb{R}^2$ and all $k\in\mathbb{Z}$, \begin{align} \psX{\uptau(x)}{e_k} &= \frac{1}{2\pi} \int_0^{2\pi} e^{i\mu r\cos(s-\theta)-iks}\diff s\nonumber\\ &= \frac{1}{2\pi} e^{-ik\theta + ik\frac{\pi}{2}}\int_0^{2\pi} e^{i\mu r\sin(s)-iks}\diff s\nonumber\\ &=i^k J_k(\mu r) e^{-ik\theta} \label{eq_Bessel} \end{align} where $J_k$ denotes the Bessel function of the first kind of order $k\in\mathbb{Z}$ (see \eqref{Bessel_def}). Since $(e_k)_{k\in\mathbb{Z}}$ is a Hilbert basis of $Z$, we get the following interpretation of the linearizable output map assumption in the case of \eqref{E:system_depart}. \begin{prpstn}[Outputs satisfying Assumption~\ref{ass:h}]\label{prop:Jk} If there exist a map $\mathfrak{h}:\mathbb{R}^m\to\mathbb{C}$ and a sequence $(c_k)_{k\in\mathbb{Z}}\in l^2(\mathbb{Z}, \mathbb{C})$ such that $\mathfrak{h}(h(r \cos\theta, r \sin\theta)) = \sum_{k\in\mathbb{Z}} c_k J_k(\mu r) e^{-ik\theta}$, then \eqref{E:system_depart} satisfies Assumption~\ref{ass:h}. \end{prpstn} \begin{xmpl}\label{ex:Jk} The three observation maps $h(x) = J_0(\mu|x|)-1$ (with $\mathfrak{h}(y) = y+1$), $h(x) = J_2(\mu|x|)\cos(2\theta)$ (with $\mathfrak{h}(y) = y$) and $h(x) = |x|$ (with $\mathfrak{h}(y) = J_0(\mu y)$) are suitable. In each of these cases, the constant input $u\equiv0$ makes \eqref{E:system_depart} unobservable. Moreover, $h(x) = J_0(\mu|x|)-1$ and $h(x)=|x|$ are radially symmetric but $h(x) = J_2(\mu|x|)\cos(2\theta)$ is not. If $h(x) = |x|$, then \eqref{E:system_depart} is a subcase of system~\eqref{E:system_rot}.
\end{xmpl} \begin{rmrk}\label{rem:gr} According to the Gelfand--Raïkov theorem, the set of finite linear combinations of pure positive-type functions (functions of the form $(x, \vartheta)\mapsto\psX{\rho_\mu(x, \vartheta)\xi}{\xi}$, where $\mu>0$ and $\xi\in Z$) is dense, for the uniform convergence on compact sets, in the set of continuous bounded complex-valued functions on $\mathbb{G}$. Hence, the set of functions $(r \cos\theta, r \sin\theta) \mapsto \sum_{\ell\in I_1} \sum_{k\in I_2} c_k J_k(\mu_\ell r) e^{-ik\theta}$, where $I_1$ and $I_2$ are finite subsets of $\mathbb{Z}$, $\mu_\ell>0$ and $c_k\in\mathbb{C}$, is dense, for the uniform convergence on compact sets of $\mathbb{R}^2$, in the set of continuous bounded complex-valued functions on $\mathbb{R}^2$. In the examples of applications of our results, we will focus on output maps $h$ of the form $\mathfrak{h}(h(x)) = \sum_{k\in I} c_k J_k(\mu r) e^{-ik\theta}$ for some $\mathfrak{h}:\mathbb{R}^m\to\mathbb{C}^\mathfrak{m}$ and some fixed $\mu>0$. \end{rmrk} \subsection{Explicit strong left-inverse} With the strategy developed in the previous section in mind, we now explicitly construct a strong left-inverse $\uppi$ of $\uptau$ defined in \eqref{E:deftau} over some compact set. By Corollary~\ref{cor:conv_faible_forte}, we already know that a strong left-inverse $\uppi$ exists; however, we would like to give an explicit expression. This can be done by employing the relationship, shown in equation \eqref{eq_Bessel}, between the Bessel functions of the first kind (see \eqref{Bessel_def}) and the embedding $\uptau$. Indeed, let $j_1$ denote the first zero of $J_1'$. Then $J_1$ is increasing over $[0, j_1]$. Denote by $J_1^{-1}$ its inverse over $[0, j_1]$. Let $\Phi:\mathbb{C}\ni x_1+ix_2\mapsto(x_1, x_2)\in\mathbb{R}^2$ be the canonical bijection. Let $j\in(0, j_1)$.
For all $\zeta\in\mathbb{C}$, let \begin{equation}\label{E:def_f} \mathfrak{f}(\zeta) = \begin{cases} 0 & \text{if } \zeta=0\\ \Phi\left(\frac{i\bar{\zeta}}{\mu |\zeta|} J_1^{-1}(|\zeta|)\right) & \text{if } 0<|\zeta|\leq J_1(j)\\ \Phi\left(\frac{i\bar{\zeta}}{\mu |\zeta|} j_1\right) & \text{if } |\zeta|\geq J_1(j_1) \end{cases} \end{equation} If $J_1(j)< |\zeta| < J_1(j_1)$, define $\mathfrak{f}(\zeta)$ such that $\mathfrak{f}$ is continuously differentiable and globally Lipschitz over $\mathbb{C}$. Denote by $\ell_{\mathfrak{f}}$ its Lipschitz constant. Let $e_1\in Z$ be defined by $e_1(s) = e^{is}$ for all $s\in\S^1$. Let \begin{equation}\label{E:definv} \displaystyle \begin{array}{lrcl} \uppi: & Z & \longrightarrow & \mathbb{R}^2 \\ & \xi & \longmapsto & \mathfrak{f}(\psX{\xi}{e_1}) \end{array} \end{equation} \begin{lmm}\label{lem:explicit_pi} The map $\uppi$ is a strong left-inverse of $\uptau$ over $\bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})$. \end{lmm} \begin{proof} Set $\mathcal{K}_x = \bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})$. According to \eqref{E:def_f}, $\uppi(\xi)\in\mathcal{K}_x$ for all $\xi\in Z$ such that $|\psX{\xi}{e_1}|\leq J_1(j)$. Let $x=(r \cos\theta, r \sin\theta)$ in $\mathcal{K}_x$. Then, with \eqref{eq_Bessel}, \begin{align*} \psX{\uptau(x)}{e_1} = ie^{-i\theta}J_1(\mu r) \in B_{\mathbb{C}}\left(0, J_1\left(j\right)\right). \end{align*} Hence $\uppi(\uptau(x))=\Phi(re^{i\theta})=x$. Let $\xi\in Z$. We have \begin{align*} |\uppi(\xi)-x| = |\uppi(\xi)-\uppi(\uptau(x))| = |\mathfrak{f}(\psX{\xi}{e_1})-\mathfrak{f}(\psX{\uptau(x)}{e_1})| \leq \ell_{\mathfrak{f}} |\psX{\xi - \uptau(x)}{e_1}|. \end{align*} Hence $\uppi$ is a strong left-inverse of $\uptau$ over $\mathcal{K}_x$. \end{proof} \begin{rmrk} Letting $\mu$ tend towards $0$, the domain of the left-inverse tends towards $\mathbb{R}^2$. This will be of use to achieve semi-global stabilization.
\end{rmrk} \subsection{Well-posedness and boundedness of trajectories} We now check the well-posedness of the closed-loop system \eqref{E:system_stab_infinie}. In a second step, since $\uppi(\xi)$ is meaningful only if $|\psX{\xi}{e_1}|\leq J_1(j)$, we show that, by selecting the (perturbation) parameter $\delta$ sufficiently small, $\hat{\etat}$ remains in this domain along the trajectories of the closed-loop system. \begin{lmm}\label{lem:well} For all $\mu, \alpha, \delta ,\Delta >0$ and all $x_0, \hat{x}_0$ in $\bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})$, the system~\eqref{E:system_stab_infinie} (with $\uppi$ as in Lemma~\ref{lem:explicit_pi}) admits a unique solution $(x, \hat{\etat})\in C^0(\mathbb{R}_+, \mathbb{R}^2\times \mathcal{D})$ such that $x(0) = x_0$ and $\hat{\etat}(0) = \uptau(\hat{x}_0)$. Moreover, for all $k\in\N$, $(x, \hat{\etat})\vert_{[t_k, t_{k+1}]}\in C^1([t_k, t_{k+1}], \mathbb{R}^2\times Z)$. \end{lmm} \begin{proof} Let $\mathcal{K}_x = \bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})$ and $x_0$, $\hat{x}_0$ in $\mathcal{K}_x$. Set $z_0 = \uptau(x_0)\in\mathcal{D}$ and $\varepsilon_0 = \uptau(\hat{x}_0) - \uptau(x_0)\in\mathcal{D}$. The well-posedness of system~\eqref{E:system_stab_infinie} is equivalent to the well-posedness of the following system: \begin{equation}\label{E:syst_hyp} \left\{ \begin{aligned} &\dot{z}= \mathcal{A}(u)z \\ &\dot{\varepsilon}= (\mathcal{A}(u) - \alpha\mathcal{C}^*\mathcal{C})\varepsilon \\ &u(t_k) = \phi(\uppi(z(t_k^-)+\varepsilon(t_k^-))) + \delta\mathcal{N}^2(z(t_k^-)+\varepsilon(t_k^-)-\mathbbold{1})\\ &u(t) = u(t_k),\qquad t\in[t_k,t_{k+1}) \\ &z(0)=z_0,\ \varepsilon(0)=\varepsilon_0 \end{aligned} \right. \end{equation} where $\mathcal{A}(u) = -\frac{\partial}{\partial s} + i\mu u \sin$ and $\mathcal{C}\in \mathscr{L}(Z, \mathbb{C}^\mathfrak{m})$. For all $u\in\mathbb{R}$, $\mathcal{A}(u)$ is the generator of a strongly continuous semigroup on $Z$.
Since $\mathcal{C}$ is bounded, according to \cite[Chapter 3, Theorem 1.1]{Pazy}, $\mathcal{A}(u)-\alpha \mathcal{C}^*\mathcal{C}$ is also the generator of a strongly continuous semigroup on $Z$. Thus, reasoning by induction on $k\in\N$, \eqref{E:syst_hyp} admits a unique solution $(z, \varepsilon)\in C^0(\mathbb{R}_+, Z^2)$. Moreover, since $z_0, \varepsilon_0\in\mathcal{D}$, $(z, \varepsilon)\in C^0(\mathbb{R}_+, \mathcal{D}^2)$ and is continuously differentiable (with values in $Z^2$) on $[t_k, t_{k+1}]$ for all $k\in\N$. Setting $x=\uppi(z)$ and $\etath=z+\varepsilon$, we get the statement. \end{proof} Now that existence and uniqueness of solutions of \eqref{E:system_stab_infinie} have been proved, let us show that the trajectories are bounded. \begin{lmm}\label{lem:borne} For all $\mu>0$, all $R_2\in(0, \frac{j}{\mu})$ and all $R_1\in(0, R_2)$, there exist $R_0\in(0, R_1)$, $\delta_0>0$ and $\Delta_0>0$ such that for all $x_0, \hat{x}_0$ in $B_{\mathbb{R}^2}(0, R_0)$, all $\alpha>0$, all $\delta\in(0, \delta_0)$ and all $\Delta\in(0, \Delta_0)$, the unique solution $(x, \hat{\etat})\in C^0(\mathbb{R}_+, \mathbb{R}^2\times\mathcal{D})$ of \eqref{E:system_stab_infinie} such that $x(0) = x_0$ and $\hat{\etat}(0) = \uptau(\hat{x}_0)$ satisfies $|x(t)| < R_1$, $|\psX{\hat{\etat}(t)}{e_1}| < J_1(\mu R_2)$ and $|\uppi(\hat{\etat}(t))| < R_2$ for all $t\in\mathbb{R}_+$. \end{lmm} \begin{proof} For any bounded $u:\mathbb{R}^+\to \mathbb{R}$, we can decompose the dynamics of $x$ as $\dot x = (A+bK)x + b(u - Kx)$. Thus for any initial condition $x_0\in \mathbb{R}^2$, the variation of constants formula yields $$ x(t)=e^{t(A+bK)}x_0+\int_0^te^{(t-s)(A+bK)}b(u(s) - Kx(s))\diff s. $$ Since $A+bK$ is Hurwitz, $\|e^{t(A+bK)}\|\leq 1$ and there exist constants $\gamma_1,\gamma_2>0$ such that $\left|e^{t(A+bK)}b\right|\leq \gamma_1e^{-\gamma_2 t}$ for all $t\geq 0$.
Then there exists $M>0$ (depending only on $A$, $b$ and $K$) such that \begin{equation}\label{E:varcons} |x(t)| \leq |x_0| + M\sup_{s\in[0, t]} |u(s) - Kx(s)|\qquad \forall t\in\mathbb{R}_+. \end{equation} We use this expression to bound $|x|$. Recall $\kappa = |K|$. Denote by $\ell_{\inv}$ the global Lipschitz constant of $\uppi$. Since $J_0(0)=1$, we can pick $R_0 \in(0, R_1)$, $\delta_0>0$ and $\Delta_0>0$ small enough so that the following inequalities are satisfied (with $\nu$ defined by \eqref{E:def_nu}): \begin{align} & R_0 + M\left( 2\kappa \ell_{\inv}\sqrt{2(1-J_0(\mu R_0))} + 16\nu^2\delta_0 + \kappa \Delta_0 (R_1 + 3\kappa\ell_{\inv} + 16\nu^2\delta_0) \right) < R_1, \label{Ineq1} \\ &2\sqrt{2(1-J_0(\mu R_0))} + J_1(\mu R_1) < J_1(\mu R_2). \label{Ineq2} \end{align} Let $\delta\in(0, \delta_0)$, $\Delta\in(0, \Delta_0)$, $x_0, \hat{x}_0\in B_{\mathbb{R}^2}(0, R_0)$, $(x, \hat{\etat})$ as in Lemma~\ref{lem:well}, $z = \uptau(x)$, $\varepsilon = \hat{\etat}-z$, $t_k=k\Delta$ for $k\in\N$, $u(t_k) = \phi(\uppi(\etath(t_k^-))) + \delta\mathcal{N}^2(\etath(t_k^-)-\mathbbold{1})$ and $u(t) = u(t_k)$ for $t\in[t_k, t_{k+1})$. Let $k\in\N$ and $t\in[t_k, t_{k+1})$. The expression of $u$ allows us to split $|u- Kx|$ into three parts (recall that $\uppi(z(t_k))=x(t_k)$): \begin{align}\label{E:control_maj} | u(t) - Kx(t) | \leq \kappa |\uppi(\hat{\etat}(t_k)) - \uppi(z(t_k))| + \kappa|x(t_k) - x(t)| + \delta \mathcal{N}^2(\hat{\etat}(t_k)-\mathbbold{1}). \end{align} We bound these terms from right to left.
First, \begin{align} \mathcal{N}^2(\hat{\etat}(t_k)-\mathbbold{1}) &\leq\nu^2\norm{\hat{\etat}(t_k)-\mathbbold{1}}^2 \tag{by \eqref{E:def_nu}}\\ &\leq\nu^2\left(\norm{\varepsilon(t_k)} + \norm{z(t_k)-\mathbbold{1}}\right)^2\tag{by the triangle inequality}\\ &\leq\nu^2\left(\norm{\varepsilon_0} + 2\right)^2\tag{since $\norm{\varepsilon}$ is non-increasing and $\norm{\uptau(x(t))}=1$} \end{align} Since $\norm{\varepsilon_0}\leq\norm{\hat{\etat}_0}+\norm{z_0}=1+1$, this yields \begin{equation}\label{E:term_right} \mathcal{N}^2(\hat{\etat}(t_k)-\mathbbold{1}) \leq 16\nu^2. \end{equation} This allows us to bound $u(t_k)$: $$ |u(t_k)| \leq \kappa|\uppi(\etath(t_k))|+\delta\mathcal{N}^2(\hat{\etat}(t_k)-\mathbbold{1}) \leq \kappa\ell_{\inv}(\norm{\varepsilon(t_k)}+\norm{z(t_k)})+ 16\nu^2\delta\leq 3\kappa\ell_{\inv} + 16\nu^2\delta. $$ Then, with another variation of constants, we obtain (since $|b|=1$, and $\Delta$ small enough) \begin{equation}\label{E:term_middle} |x(t_k) - x(t)|\leq |(e^{(t-t_k)A}-\mathbbm{I}_{\mathbb{R}^2})x(t_k)|+|u(t_k)|(t-t_k) \leq \Delta \left(|x(t_k)| + 3\kappa\ell_{\inv} + 16\nu^2\delta \right). \end{equation} Finally, note that for any $x_0\in B_{\mathbb{R}^2}(0,R_0)$, $$ \norm{\uptau(x_0)-\mathbbold{1}} = \left(\norm{\uptau(x_0)}^2+1 - 2\psX{\uptau(x_0)}{\mathbbold{1}}\right)^{\frac{1}{2}} = \sqrt{2(1-J_0(\mu |x_0|))} \leq \sqrt{2(1-J_0(\mu R_0))}. $$ Then $$ \norm{\varepsilon_0} \leq \norm{\hat{\etat}_0-\mathbbold{1}} + \norm{z_0-\mathbbold{1}} \leq 2\sqrt{2(1-J_0(\mu R_0))}, $$ which implies \begin{equation}\label{E:term_left} |\uppi(\hat{\etat}(t_k)) - \uppi(z(t_k))| \leq \ell_{\inv} \norm{\varepsilon(t_k)} \leq \ell_{\inv} \norm{\varepsilon_0} \leq 2\ell_{\inv}\sqrt{2(1-J_0(\mu R_0))}.
\end{equation} In conclusion, $\kappa\times\eqref{E:term_left}+\kappa\times\eqref{E:term_middle}+\delta\times\eqref{E:term_right}$ implies that \eqref{E:control_maj} becomes \begin{align} | u(t) - Kx(t)| &\leq 2\kappa \ell_{\inv}\sqrt{2(1-J_0(\mu R_0))} + \kappa \Delta \left(|x(t_k)| + 3\kappa\ell_{\inv} + 16\nu^2\delta\right) + 16\nu^2\delta. \label{Ineq_e} \end{align} Assume for the sake of contradiction that there exists $t\in\mathbb{R}_+$ such that $|x(t)|>R_1$. Let $T = \min\{t\in\mathbb{R}_+\mid |x(t)|=R_1\}$. Then \eqref{E:varcons} at $t=T$ combined with \eqref{Ineq1} and \eqref{Ineq_e} yields \begin{align*} R_1 \leq R_0 + M\left( 2\kappa \ell_{\inv}\sqrt{2(1-J_0(\mu R_0))} + \kappa \Delta (R_1 + 3\kappa\ell_{\inv} + 16\nu^2\delta) + 16\nu^2\delta \right) < R_1, \end{align*} which is a contradiction. Thus $|x(t)|<R_1$ for all $t\in\mathbb{R}_+$. Furthermore, for all $t\in\mathbb{R}_+$, \begin{align*} \left|\psX{\hat{\etat}(t)}{e_1}\right| &\leq |\psX{\varepsilon(t)}{e_1}| + |\psX{z(t)}{e_1}|\\ &\leq \norm{\varepsilon(t)} + |\psX{\uptau(x(t))}{e_1}|\\ &\leq \norm{\varepsilon_0} + J_1(\mu |x(t)|)\\ &\leq 2\sqrt{2(1-J_0(\mu R_0))} + J_1(\mu R_1)\\ &< J_1(\mu R_2), \end{align*} where the last inequality is \eqref{Ineq2}. Hence $|\psX{\hat{\etat}(t)}{e_1}| < J_1(\mu R_2)$ for all $t\in\mathbb{R}_+$. Finally, since $J_1(\mu R_2) < J_1(j)$, $|\uppi(\hat{\etat}(t))| = |\mathfrak{f}(\psX{\hat{\etat}(t)}{e_1})| = \frac{1}{\mu} J_1^{-1}\left(\left|\psX{\hat{\etat}(t)}{e_1}\right|\right) < R_2. $ \end{proof} For any compact set of initial conditions, taking $\mu$, $\delta$ and $\Delta$ sufficiently small ensures that trajectories are bounded. This is shown in the following corollary.
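Before stating it, note that the small-$\mu$ feasibility of \eqref{Ineq2}, which is the crux of the argument, can be probed numerically (a sketch assuming Python with scipy; the values $\beta_1 = 2$ and $\beta_2 = 2\sqrt{2}+3$ are those chosen in the proof of the corollary below):

```python
import numpy as np
from scipy.special import j0, j1  # Bessel functions J_0 and J_1

R0 = 1.0
beta1, beta2 = 2.0, 2*np.sqrt(2) + 3     # R1 = beta1*R0, R2 = beta2*R0
for mu in (0.01, 0.05, 0.1):
    # Left- and right-hand sides of inequality (Ineq2)
    lhs = 2*np.sqrt(2*(1 - j0(mu*R0))) + j1(mu*beta1*R0)
    rhs = j1(mu*beta2*R0)
    assert lhs < rhs
```

This matches the first-order expansions: the left-hand side behaves like $(\sqrt{2}+\beta_1/2)\mu R_0$ and the right-hand side like $(\beta_2/2)\mu R_0$ as $\mu\to0$.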
\begin{crllr}\label{cor:semiglob} For all $R_0>0$, there exist $\mu_0, \delta_0, \Delta_0>0$ and $R_2>R_1>R_0$ such that for all $x_0, \hat{x}_0$ in $B_{\mathbb{R}^2}(0, R_0)$, all $\mu\in(0, \mu_0)$, all $\delta\in(0, \delta_0)$, all $\Delta\in(0, \Delta_0)$, and all $\alpha>0$, the unique solution $(x, \hat{\etat})\in C^0(\mathbb{R}_+, \mathbb{R}^2\times \mathcal{D})$ of \eqref{E:system_stab_infinie} such that $x(0) = x_0$ and $\hat{\etat}(0) = \uptau(\hat{x}_0)$ satisfies $|x(t)| < R_1$, $|\psX{\hat{\etat}(t)}{e_1}| < J_1(\mu R_2)$ and $|\uppi(\hat{\etat}(t))| < R_2$ for all $t\in\mathbb{R}_+$. \end{crllr} \begin{proof} Let $\beta_2> \beta_1 >1$ to be fixed later, and let $R_1 = \beta_1 R_0$ and $R_2 = \beta_2 R_0$. Then there exist $\mu_0, \delta_0,\Delta_0>0$ small enough such that \eqref{Ineq1} holds for all $\mu\in(0, \mu_0)$, $\delta\in(0, \delta_0)$ and $\Delta\in(0, \Delta_0)$. Recall the following asymptotic expansions of the Bessel functions of the first kind at $0$: \begin{align*} J_0(r) = 1 - \frac{r^2}{4} + o(r^2),\qquad J_1(r) = \frac{r}{2} + o(r). \end{align*} Then for all $\mu>0$, \begin{align*} 2\sqrt{2(1-J_0(\mu R_0))} + J_1(\mu R_1) = \mu R_0 \left(\sqrt{2} + \frac{\beta_1}{2}\right) + o(\mu),\qquad J_1(\mu R_2) = \mu R_0\frac{\beta_2}{2} + o(\mu). \end{align*} Hence, if $\beta_2>2\sqrt{2}+\beta_1$, then there exists $\mu_0>0$ such that \eqref{Ineq2} holds for all $\mu\in(0, \mu_0)$. Set $\beta_1 = 2$ and $\beta_2 = 2\sqrt{2}+3$. Then there exist $\mu_0, \delta_0, \Delta_0>0 $ such that $\mu_0 R_2<j$ and \eqref{Ineq1} and \eqref{Ineq2} are satisfied for all $\mu\in(0, \mu_0)$, $\delta\in(0, \delta_0)$ and $\Delta\in(0, \Delta_0)$. Reasoning as in the proof of Lemma~\ref{lem:borne}, the result follows.
\end{proof} \subsection{Observability analysis}\label{sec:obs_infinie} To achieve state estimation in the embedded system in the presence of an observability singularity at the target, we need to check that enough information is still accessible during the stabilization process. This takes the form of two final assumptions on the linear output map $\mathcal{C}\in\mathscr{L}(Z, \mathbb{C}^\mathfrak{m})$ (obtained from the function $h$ in Assumption~\ref{ass:h}), echoing the discussions of Section~\ref{sec:nc}, now in the infinite-dimensional context. The \emph{short time 0-detectability} assumption states that the measured output distinguishes the target point from its neighbors. The \emph{isolated observability singularity} assumption concerns the separation of the unobservable input $u\equiv0$ from the other singular inputs of the infinite-dimensional system. For each assumption, we discuss examples of suitable output maps $h$. Following Remark~\ref{rem:gr}, we investigate the case where at least one of the components of the output map is in the linear span of a finite number of elements of the Hilbert basis. This component is used to ensure the two observability properties. \begin{assumption}[Short time 0-detectability]\label{ass:detec} Let $u:\mathbb{R}_+\to\mathbb{R}$ be constant over $[t_k, t_{k+1})$ for all $k\in\N$, where $t_{k+1}-t_k = \Delta$ is a positive constant. Let $x$ be a solution of \eqref{E:system_depart} bounded by $\frac{j}{\mu}$. If there exists a subsequence $(t_{k_n})_{n\in\N}$ such that $u(t_{k_n})\cvl{n\to+\infty} 0$ and $\mathcal{C}\uptau(x(t_{k_n}+t')) \cvl{n\to+\infty} \mathcal{C}\uptau(0)$ for all $t'\in[0, \Delta]$, then $x(t_{k_{n}}) \cvl{n\to+\infty} 0$. \end{assumption} \begin{rmrk} Assumption~\ref{ass:detec} implies the necessary Condition~\ref{hyp:distinguish}~(local).
Indeed, if $x$ is a solution of \eqref{E:system_depart} with $u=0$ and $h(x(t))=0$ for all $t\geq0$, then for any positive increasing sequence $(t_n)_{n\in\N}\to+\infty$, $u(t_n) = 0$ and $\mathcal{C}\uptau(x(t_n+t)) = \mathfrak{h}(0)$ for all $n\in\N$ and all $t\geq0$. Hence, according to Assumption~\ref{ass:detec}, $x(t_n)\to0$ and Condition~\ref{hyp:distinguish}~(local) is satisfied. Moreover, if $\mathfrak{h}$ has a continuous inverse in a neighborhood of $0$, then Assumption~\ref{ass:detec} implies (for piecewise constant inputs only) the input/output-to-state stability condition (see \emph{e.g.,~} \cite{krichman2001input}), which states that any solution $x$ of \eqref{E:system_depart} such that $u(t)\to0$ and $y(t)\to0$ is such that $x(t)\to0$ as $t\to+\infty$. This condition has proved to be of interest in the context of output feedback stabilization. \end{rmrk} \begin{prpstn}[Outputs satisfying Assumption~\ref{ass:detec}]\label{prop:detec} If one of the components of $\mathcal{C}$ (seen as a $\mathfrak{m}$-tuple of linear forms on $Z$) has the form $\psX{\cdot}{\zeta}$ where $\zeta = \sum_{p\in I}c_pe_p\in Z\setminus\{0\}$, with $I\subset\mathbb{Z}$ finite, then \eqref{E:system_depart} satisfies Assumption~\ref{ass:detec}. \end{prpstn} \begin{proof} Since $x$ is bounded, it suffices to show that its only accumulation point is $0$. With no loss of generality, we may assume that $x(t_{k_n})$ tends towards some $x^\star = (r^\star\cos \theta^\star, r^\star\sin \theta^\star)\neq0$. According to \eqref{eq_Bessel}, we have $\psX{\uptau(x(t_{k_n}+t'))}{\zeta}=\sum_{p\in I} c_p J_p(\mu r(t_{k_n}+t')) e^{-ip\theta(t_{k_n}+t')}$, with $x=(r \cos\theta , r \sin\theta )$. As $n$ goes to $+\infty$, since $\mathcal{C}\uptau(x(t_{k_n}+t'))$ tends towards $\mathcal{C}\uptau(0)$, $\psX{\uptau(x(t_{k_n}+t'))}{\zeta}$ tends towards $c_0$ if $0\in I$, or towards $0$ otherwise. In particular, for $t'=0$, we get that $\sum_{p\in I} c_p J_p(\mu r^\star) e^{-ip\theta^\star} = c_0$ if $0\in I$, or $0$ otherwise.
Thus $c_0 J_0(\mu r^\star)=c_0$ if $0\in I$, and $c_p J_p(\mu r^\star)=0$ for $p\in I\setminus\{0\}$. Since $u(t) = u(t_{k_n})$ for all $t\in[t_{k_n}, t_{k_n+1}]$ tends towards $0$ as $n$ goes to $+\infty$, Duhamel's formula implies that for all $t'\in[0, \Delta]$, \begin{equation*} x(t_{k_n}+t') - e^{t' A}x(t_{k_n}) \cvl{n\to+\infty} 0, \end{equation*} \emph{i.e.,~} \begin{equation*} r(t_{k_n}+t') - r(t_{k_n})\cvl{n\to+\infty} 0 \ \text{and}\ e^{i\theta(t_{k_n}+t')} - e^{i(\theta(t_{k_n})+t')}\cvl{n\to+\infty} 0. \end{equation*} Hence, since $I$ is finite, for all $t'\in[0, \Delta]$, $\sum_{p\in I} c_p J_p(\mu r(t_{k_n}))e^{-ip \theta(t_{k_n})} e^{-ip t'}$ tends towards $c_0$ if $0\in I$, or towards $0$ otherwise, as $n$ goes to $+\infty$. Denote by $j_0$ the first zero of $J_0$. Then $J_p(r)\neq0$ for any $r\in (-j_0, j_0)\setminus \{0\}$ and any $p\in\mathbb{Z}$. Pick $p\in I$ such that $c_p\neq0$. If $p\neq0$, then $J_p(\mu r^\star)=0$; if $p=0$, then $J_0(\mu r^\star)=1$, while $J_0<1$ on $(0, j_0)$. In both cases, since $\mu r^\star\leq j<j_1<j_0$, we get $r^\star=0$, a contradiction. \end{proof} \begin{rmrk} In the above definition of $\zeta$, if there exist $k_1, k_2\in\mathbb{Z}$ with $|k_1|\neq |k_2|$, $c_{k_1}\neq0$ and $c_{k_2}\neq0$, then $j_0=+\infty$ is a suitable choice due to Bourget's hypothesis, proved by Siegel in \cite{siegel2014einige}. \end{rmrk} One can easily check that condition~\eqref{plong:eqCCA} is satisfied if $\mathcal{C} = \langle\cdot, \zeta \rangle_Z$ for some $\zeta\in\mathcal{D}$ (because $\mathcal{C}^*\mathcal{C}\mathcal{A} = \langle\cdot, \mathcal{A}^*\zeta\rangle\zeta$). Thus, if $\mathcal{C}$ has the form $\langle\cdot,\zeta\rangle_Z$ with $\zeta$ as in Proposition~\ref{prop:detec}, solutions of \eqref{eps} are such that $\mathcal{C}\varepsilon(t)\to0$ as $t\to+\infty$. In order to obtain the convergence of $\varepsilon$ towards $0$, we need an additional observability assumption. Let us recall the usual definition of approximate observability of \eqref{E:system_plonge_infinie} (see, \emph{e.g.,~} \cite{TW2009}).
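As a numerical complement to the proof of Proposition~\ref{prop:detec} (a sketch assuming Python with scipy; the orders and the grid are arbitrary), the fact that no $J_p$ vanishes on $(0, j_0)$, with $j_0$ the first positive zero of $J_0$, can be probed as follows:

```python
import numpy as np
from scipy.special import jv, jn_zeros

j0_first = jn_zeros(0, 1)[0]            # first positive zero of J_0, about 2.4048
r = np.linspace(1e-6, j0_first - 1e-6, 2000)
for p in range(0, 6):
    # J_p > 0 on (0, j_0): the first positive zero of J_p increases with p,
    # so it lies beyond j_0 for every order p >= 0 (and J_{-p} = (-1)^p J_p).
    assert np.all(jv(p, r) > 0)
```

The strict positivity checked here is what forces $r^\star = 0$ in the proof above, since $\mu r^\star$ stays below $j_0$.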
\begin{dfntn}[Approximate observability]\label{def:app_obs} System~\eqref{E:system_plonge_infinie} is said to be \emph{approximately observable} in some time $T>0$ for some constant input $u$ if and only if \begin{equation} (\forall t\in[0, T],\ \mC\mathbb{T}_t(z_0, u) = 0) \Longrightarrow z_0=0. \end{equation} \end{dfntn} Since \eqref{E:system_plonge_infinie} is a linear system, Definition~\ref{def:app_obs} coincides with Definition~\ref{def:obs} in the finite-dimensional context. \begin{assumption}[Isolated observability singularity]\label{ass:inobs} Let $u\in[-u_{\max}, u_{\max}]$ where $u_{\max} = \kappa \frac{j}{\mu} + 16 \nu^2 \delta$. If $u\neq0$, then the constant input $u$ makes \eqref{E:system_plonge_infinie} approximately observable in any positive time. \end{assumption} In \cite[Example 1]{Celle-etal.1989}, the authors investigated the observability of \eqref{E:system_plonge_infinie} in the case where $\mu=1$ and $\mC = \psX{\cdot}{\mathbbold{1}}$ (\emph{i.e.,~} $\mathfrak{h}\circ h(x) = J_0(|x|)$, see Example~\ref{ex:Jk}). Using a similar method, we prove the following. \begin{prpstn}[Outputs satisfying Assumption~\ref{ass:inobs}]\label{prop:obs} If one of the components of $\mC$ has the form $\psX{\cdot}{\zeta}$ where $\zeta = \sum_{k\inI}c_ke_k\inZ\setminus\{0\}$, with $I\subset\mathbb{Z}$ finite, and $\mu u_{\max}<j_0$ for some $j_0>0$, then \eqref{E:system_depart} satisfies Assumption~\ref{ass:inobs}. \end{prpstn} \begin{proof} Let $z_0\inZ$, $u\in\mathbb{R}\setminus\{0\}$ and $z(t) = \mathbb{T}_t(z_0, u)$ be the unique corresponding solution of \eqref{E:system_plonge_infinie}.
The method of characteristics yields \begin{align} \psX{z(t)}{\zeta} &= \frac{1}{2\pi}\int_0^{2\pi} e^{-i\mu u \int_0^t\sin(s-\sigma)\diff \sigma}z_0(s-t) \sum_{k\inI}\bar{c}_ke^{-iks} \diff s \nonumber \\ &= \frac{1}{2\pi}\int_0^{2\pi} \left( \sum_{k\inI}\bar{c}_k e^{-i\mu u \cos(s)-iks} \right) e^{i\mu u \cos(s-t)}z_0(s-t) \diff s \nonumber \\ &= \left(\psi*\uppsi_0\right)(t) \nonumber \end{align} where $*$ denotes the convolution product over $Z$, $\psi:s\mapsto \sum_{k\inI}\bar{c}_k e^{-i\mu u \cos(s)-iks}$ and $\uppsi_0:s\mapsto e^{i\mu u \cos(s)}z_0(s)$. Hence, according to Parseval's theorem, \begin{align*} \frac{1}{2\pi} \int_0^{2\pi}|\psX{z(t)}{\zeta}|^2\diff t =\norm{\psi*\uppsi_0}^2 =\normA{\hat{\psi}\cdot\hat{\uppsi}_0}{\hat{Z}}^2 =\sum_{\ell\in\mathbb{Z}} |\psX{\psi}{e_\ell}|^2 |\psX{\uppsi_0}{e_\ell}|^2, \end{align*} where $\hat{\psi}$ (resp. $\hat{\uppsi}_0$) denotes the Fourier series coefficients of $\psi$ (resp. $\uppsi_0$) in $Z = L^2(\S^1, \mathbb{C})\subset L^1(\S^1, \mathbb{C})$ and $\hatZ = l^2(\mathbb{Z}, \mathbb{C})$. Hence, it is sufficient to show that there exists $j_0>0$ such that, if $\mu|u|<j_0$, then $\psX{\psi}{e_\ell}\neq0$ for all $\ell\in\mathbb{Z}$. Indeed, this yields that if $\opcz(t) = 0$ for all $t\in[0, 2\pi]$, then $\uppsi_0 = 0$, \emph{i.e.,~}$z_0=0$, and thus $u$ makes \eqref{E:system_plonge_infinie} approximately observable in time $2\pi$. Note that \begin{align} \psX{\psi}{e_\ell} = \frac{1}{2\pi}\int_0^{2\pi} \sum_{k\inI}\bar{c}_k e^{-i\mu u \cos(s)-i(k+\ell)s} \diff s = \sum_{k\inI}\bar{c}_k i^k J_{k+\ell}(\mu u). \tag{by \eqref{eq_Bessel}} \end{align} Set $d_k = \bar{c}_k i^k$ and $F_\ell(r) = \sum_{k\inI}d_k J_{k+\ell}(r)$ for all $r\in\mathbb{R}$. Since $F_\ell$ is analytic for each $\ell\in\mathbb{Z}$, its zeros are isolated. Hence, for all $L>0$, there exists $j_0>0$ such that, if $|\ell|<L$, then $F_\ell(r) \neq 0$ for all $r\in(-j_0, j_0)\setminus\{0\}$.
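The Bessel-coefficient formula for $\psX{\psi}{e_\ell}$ rests on a Jacobi--Anger-type integral, which can be checked numerically. A minimal sketch follows; the phase convention below is ours and may differ from the one fixed by \eqref{eq_Bessel}, and the values of $r$ and $m$ are illustrative:

```python
# Sanity check of the Jacobi-Anger-type integral behind <psi, e_l>:
#   (1/2pi) * int_0^{2pi} exp(-i r cos s - i m s) ds = (-i)^m J_m(r).
# The phase convention here is an assumption and may differ from eq_Bessel.
import numpy as np
from scipy.special import jv

r, m = 1.3, 2                           # illustrative values
N = 4096                                # rectangle rule on a periodic grid
s = 2.0 * np.pi * np.arange(N) / N      # is spectrally accurate here
lhs = np.mean(np.exp(-1j * r * np.cos(s) - 1j * m * s))
rhs = (-1j) ** m * jv(m, r)
assert abs(lhs - rhs) < 1e-10
```

The rectangle rule on a uniform periodic grid is exact up to aliasing for such smooth integrands, which is why a modest grid already matches the Bessel value to near machine precision.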
Now, let $k_{\min} = \min\{k\inI\mid d_k\neq0\}$ and let us prove that there exists $j_0>0$ such that $F_\ell(r) \neq 0$ for all $r\in(-j_0, j_0)\setminus\{0\}$ and all $\ell\geq-k_{\min}$. (One can reason similarly for $\ell\leq \max\{k\inI\mid d_k\neq0\}$). We have $F_\ell(r) = d_{k_{\min}}J_{k_{\min}+\ell}(r)\left(1 + \sum_{k\inI\setminus\{k_{\min}\}} \frac{d_k}{d_{k_{\min}}}\frac{J_{k+\ell}(r)}{J_{k_{\min}+\ell}(r)}\right)$. According to \cite{neuman2004inequalities}, $\left|J_{k+\ell}(r)\right| \leq \frac{1}{(k+\ell)!}\left(\frac{|r|}{2}\right)^{k+\ell}$ for all $r\in\mathbb{R}$. Moreover, according to \cite{laforgia1986inequalities}, if $|r|\leq 1$, then $$\left|J_{k_{\min}+\ell}(r)\right|\geq |r|^{k_{\min}+\ell}J_{k_{\min}+\ell}(1) \geq \frac{|r|^{k_{\min}+\ell}}{(k_{\min}+\ell)!2^{k_{\min}+\ell}}\left(1 - \frac{1}{2(k_{\min}+\ell+1)}\right). $$ Hence \begin{align*} |F_\ell(r)| &\geq \left|d_{k_{\min}}\right|\left|J_{k_{\min}+\ell}(r)\right| \left(1 - \sum_{k\inI\setminus\{k_{\min}\}} \frac{\left|d_k\right|}{\left|d_{k_{\min}}\right|}\frac{\left|J_{k+\ell}(r)\right|}{\left|J_{k_{\min}+\ell}(r)\right|}\right)\\ &\geq \left|d_{k_{\min}}\right|\left|J_{k_{\min}+\ell}(r)\right| \left(1 - 2\sum_{k\inI\setminus\{k_{\min}\}} \frac{\left|d_k\right|}{\left|d_{k_{\min}}\right|} \left(\frac{|r|}{2}\right)^{k-k_{\min}} \right). \end{align*} Hence, there exists $j_0>0$ such that, if $0<|r|<j_0$, $|F_\ell(r)|\geq \frac{\left|d_{k_{\min}}\right|}{2} \left|J_{k_{\min}+\ell}(r)\right|$ for all $\ell\in\mathbb{Z}$. Choosing $j_0\leq\min\{r>0\mid J_0(r)=0\}$, one has $J_{k_{\min}+\ell}(r)\neq0$ for all $\ell\in\mathbb{Z}$, hence $F_\ell(r)\neq0$. In particular, if $\zeta = e_k$ for some $k\inI$, then $j_0 = \min\{r>0\mid J_0(r)=0\}$ is a suitable choice. Indeed, $J_k(r)\neq0$ for all $r\in (-j_0, j_0)\setminus\{0\}$ and all $k\in\mathbb{Z}$. Hence, if $\mu u_{\max} < j_0$, then $u$ makes \eqref{E:system_plonge_infinie} approximately observable in time $2\pi$.
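The Bessel facts used in this argument (no $J_n$ vanishes strictly between $0$ and the first positive zero of $J_0$, together with slightly simplified forms of the bounds of \cite{neuman2004inequalities} and \cite{laforgia1986inequalities}) can be verified numerically; a sketch, with illustrative orders and grids:

```python
# Numerical check of the Bessel-function facts used above:
#  (1) J_n(r) != 0 for 0 < r < j_0, with j_0 the first positive zero of J_0;
#  (2) |J_n(r)| <= (|r|/2)^n / n!          (upper bound, checked on (0,1]);
#  (3) J_n(r)  >= r^n J_n(1) for 0 < r <= 1 (lower bound).
# The range of orders n and the grids are illustrative assumptions.
import math
import numpy as np
from scipy.special import jv, jn_zeros

j0 = jn_zeros(0, 1)[0]                        # ~2.404825...
r1 = np.linspace(1e-3, j0 - 1e-3, 1000)
r2 = np.linspace(1e-3, 1.0, 500)
for n in range(0, 10):
    assert np.all(np.abs(jv(n, r1)) > 0.0)                                          # (1)
    assert np.all(np.abs(jv(n, r2)) <= (r2 / 2.0) ** n / math.factorial(n) + 1e-15)  # (2)
    assert np.all(jv(n, r2) >= r2 ** n * jv(n, 1.0) - 1e-15)                         # (3)
```

Fact (1) reflects the interlacing of Bessel zeros: the first positive zero of $J_n$ increases with $|n|$, so none occurs below $j_0$.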
Let $\upzeta(t) = \mathbb{T}_t^*(\zeta, u)$, \emph{i.e.,~} the solution of $\dot\upzeta = \mathcal{A}(u(t))^*\upzeta$, $\upzeta(0)=\zeta$. Since $\zeta$ is analytic, $t\mapsto\upzeta(t)$ is analytic as the unique solution of an analytic system. Hence $t\mapsto\psX{z(t)}{\zeta} = \psX{z_0}{\upzeta(t)}$ is also analytic. Thus $u$ makes \eqref{E:system_plonge_infinie} approximately observable in any positive time, because $t\mapsto\psX{z(t)}{\zeta}$ vanishes on $[0, T]$ for some $T>0$ if and only if it vanishes on $[0, 2\pi]$. \end{proof} \begin{rmrk} It is always possible to make $\mu u_{\max}<j_0$ by choosing $\kappa j$ and $\mu\delta$ small enough. As explained in Remark~\ref{rem:gr}, the considered set of such maps $\mC$ is sufficient to approximate any output map $h$. Moreover, if $\psX{\cdot}{e_{k_1}}$ and $\psX{\cdot}{e_{k_2}}$ are two of the linear forms of $\mC$ with $|k_1|\neq |k_2|$, then $j_0=+\infty$ is a suitable choice due to Bourget's hypothesis, proved by Siegel in \cite{siegel2014einige}. \end{rmrk} We are now in a position to state the main result of Section~\ref{sec:example_infinie}. \begin{thrm}\label{th:infinite} Let $\mathcal{K}_x\subset\mathbb{R}^2$ be a compact set. Let $R_0>0$ be such that $\mathcal{K}_x\subset B_{\mathbb{R}^2}(0, R_0)$. Let $\mu_0, \delta_0, \Delta_0>0$ be as in Corollary~\ref{cor:semiglob}. Suppose there exists $\mu\in(0, \mu_0)$ such that for all $\delta\in(0, \delta_0)$ and all $\Delta\in(0, \Delta_0)$, Assumptions~\ref{ass:h},~\ref{ass:detec} and~\ref{ass:inobs} are satisfied. Then system \eqref{E:system_depart} is stabilizable over $\mathcal{K}_x$ by means of an infinite-dimensional piecewise constant dynamic output feedback. Moreover, the closed-loop system is explicitly given by \eqref{E:system_stab_infinie} for any $\alpha>0$ and with $\uptau$ as in \eqref{E:deftau} and $\uppi$ as in \eqref{E:definv}.
\end{thrm} Combining Propositions~\ref{prop:Jk},~\ref{prop:detec} and~\ref{prop:obs}, we obtain Theorem~\ref{cor:main} as an immediate corollary. Regarding Corollary~\ref{cor:x2inf}, the proof is as follows. \begin{proof}[Proof of Corollary~\ref{cor:x2inf}] Let $\mathcal{K}_x\subset\mathbb{R}^2$ be a compact set. Let $R_0>0$ be such that $\mathcal{K}_x\subset B_{\mathbb{R}^2}(0, R_0)$. Let $\mu_0, \delta_0, \Delta_0>0$ be as in Corollary~\ref{cor:semiglob}. According to Example~\ref{ex:Jk} and Proposition~\ref{prop:detec}, Assumptions~\ref{ass:h} and~\ref{ass:detec} are satisfied for any $\mu\in(0,\mu_0)$ by considering $\mathfrak{h}:y\mapsto J_0(\mu\sqrt{2y})$. Moreover, by choosing $\kappa j<j_0$ and $\delta<\frac{j_0-\kappa j}{16\nu^2\mu}$, Assumption~\ref{ass:inobs} is also satisfied according to Proposition~\ref{prop:obs}. Hence, Theorem~\ref{th:infinite} applies on $\mathcal{K}_x$. \end{proof} \subsection{Proof of Theorem~\ref{th:infinite}} Let $\mathcal{K}_x$ be a compact subset of $\mathbb{R}^2$. Let $R_0>0$ be such that $\mathcal{K}_x\subset B_{\mathbb{R}^2}(0, R_0)$ and $\mu_0, \delta_0, \Delta_0>0$ be as in Corollary~\ref{cor:semiglob}. This implies the statement \ref{defi} of Definition~\ref{def:stab_inf}, with $\Xi=\uptau(\mathcal{K}_x)$. Only \ref{defii} and \ref{defiii} remain to be proved. Let $\alpha>0$, $\uptau$ be as in \eqref{E:deftau} and $\uppi$ be as in \eqref{E:definv}. Let $x_0$ and $\hat{x}_0$ be in $\mathcal{K}_x$, $(x, \hat{\etat})$ be the corresponding solution of \eqref{E:system_plonge_infinie}, $z=\uptau(x)$, $\varepsilon = \hat{\etat}-z$ and $u = \phi(\uppi(\hat{\etat})) + \delta\mathcal{N}^2(\hat{\etat}-\mathbbold{1})$.
Remark that \begin{align*} \norm{\varepsilon} - \ell_{\plong}|x| &\leq \norm{\hat{\etat}-\mathbbold{1}} + \norm{z-\mathbbold{1}} - \ell_{\plong}|x|\\ &=\norm{\hat{\etat}-\mathbbold{1}} + \norm{\uptau(x)-\uptau(0)} - \ell_{\plong}|x|\\ &\leq \norm{\hat{\etat}-\mathbbold{1}} \end{align*} and \begin{align*} \norm{\hat{\etat}-\mathbbold{1}} \leq \norm{\varepsilon} + \norm{z-\mathbbold{1}} \leq \norm{\varepsilon} + \ell_{\plong}|x| \end{align*} where $\ell_{\plong}$ is the Lipschitz constant of $\uptau$ over $\mathcal{K}_x$. Hence proving statement \ref{defii} of Definition~\ref{def:stab_inf} reduces to proving (again with $\Xi=\uptau(\mathcal{K}_x)$) that for some $\mu, \delta$ and $\Delta$ small enough, \begin{enumerate}[label = \textit{(ii')}] \item \label{defii2} For all $R_x, R_{\varepsilon}>0$, there exist $r_x, r_{\varepsilon}>0$ such that for all $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\uptau(\mathcal{K}_x)$, if $|x_0|<r_x$ and $\norm{\varepsilon_0}<r_{\varepsilon}$, then $|x(t)|<R_x$ and $\norm{\varepsilon(t)}<R_{\varepsilon}$ for all $t\geq0$. \end{enumerate} Since $\uptau$ is continuous, if $x\to0$, then $\uptau(x)\to\mathbbold{1}$, and, a fortiori, $\uptau(x)\overset{w}{\rightharpoonup}\mathbbold{1}$. Hence proving statement \ref{defiii} of Definition~\ref{def:stab_inf} reduces to proving that for some $\mu, \delta$ and $\Delta$ small enough, \begin{enumerate}[label = \textit{(iii')}] \item \label{defiii2} $x(t)\to0$ and $\varepsilon(t)\cvf0$ as $t$ goes to infinity. \end{enumerate} We prove \ref{defii2} in Section~\ref{sec:stability_infinie} and \ref{defiii2} in Section~\ref{sec:attractivity_infinie}. \subsubsection{Stability}\label{sec:stability_infinie} In order to prove stability, we reason as in Section~\ref{sec:stability}. Let $R_x, R_{\varepsilon}>0$.
We seek $r_x, r_{\varepsilon}>0$ such that for all $(x_0, \hat{\etat}_0)\in\mathcal{K}_x\times\uptau(\mathcal{K}_x)$, if $|x_0|<r_x$ and $\norm{\varepsilon_0}<r_{\varepsilon}$, then $|x(t)|<R_x$ and $\norm{\varepsilon(t)}<R_{\varepsilon}$ for all $t\geq0$. Since $\norm{\varepsilon}$ is non-increasing, choosing $r_\varepsilon \leq R_\varepsilon$ proves the stability of $\varepsilon$. The dynamics of $x(t)$ can be written as: \begin{align} \dot x(t) &= (A+bK)x(t) + \delta \mathcal{N}^2(\uptau(x(t))-\mathbbold{1}) b\label{ligne1}\\ &\quad + bK(x(t_k) - x(t)) +\delta\mathcal{N}^2(\uptau(x(t_k))-\mathbbold{1})b -\delta \mathcal{N}^2(\uptau(x(t))-\mathbbold{1})b\label{ligne2}\\ &\quad + bK(\uppi(\hat{\etat}(t_k))-x(t_k)) +\delta\mathcal{N}^2(\hat{\etat}(t_k)-\mathbbold{1})b -\delta \mathcal{N}^2(\uptau(x(t_k))-\mathbbold{1})b\label{ligne3} \end{align} First, we show that \eqref{ligne1} is a locally exponentially stable dynamical system when $\delta$ is small enough. Indeed, $A+bK$ is Hurwitz and for all $\xi_1, \xi_2$ in $Z$, \begin{align} \left|\mathcal{N}^2(\xi_1) - \mathcal{N}^2(\xi_2)\right| &\leq\sum_{k\in\N} \frac{1}{k^2+1} \left||\psX{\xi_1}{e_k}|^2 - |\psX{\xi_2}{e_k}|^2\right|\nonumber\\ &\leq\sum_{k\in\N} \frac{1}{k^2+1} |\psX{\xi_1-\xi_2}{e_k}|\left(|\psX{\xi_1}{e_k}|+|\psX{\xi_2}{e_k}|\right)\nonumber\\ &\leq \nu^2 \|\xi_1-\xi_2\|_Z(\|\xi_1\|_Z+\|\xi_2\|_Z).\label{Nlip} \end{align} Hence, for all $x_1, x_2$ in $\mathbb{R}^2$, \begin{align*} \left|\delta \mathcal{N}^2(\uptau(x_1)-\mathbbold{1}) - \delta \mathcal{N}^2(\uptau(x_2)-\mathbbold{1})\right| \leq 4\delta \nu^2\ell_{\plong}|x_1-x_2|. \end{align*} Let $P\in\mathbb{R}^{2\times 2}$ be positive definite such that $P(A+bK) + (A+bK)'P < -2 \mathbbm{I}_{\mathbb{R}^2}$.
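Such a matrix $P$ can be obtained by solving a Lyapunov equation with a strict margin. A minimal numerical sketch follows; the values of $A$, $b$ and $K$ below are illustrative assumptions (a rotation-type drift and a stabilizing gain), not data taken from the closed-loop system above:

```python
# Sketch: compute P = P' > 0 with P(A+bK) + (A+bK)'P < -2I by solving
# a Lyapunov equation with margin.  A, b, K are illustrative assumptions
# (rotation-type drift, stabilizing gain), not the paper's data.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation-type drift
b = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.0]])              # A + bK has spectrum in Re < 0
Acl = A + b @ K

# Solve Acl' P + P Acl = -3 I  (strictly beyond the -2 I requirement).
P = solve_continuous_lyapunov(Acl.T, -3.0 * np.eye(2))

assert np.min(np.linalg.eigvalsh((P + P.T) / 2.0)) > 0.0             # P > 0
assert np.max(np.linalg.eigvalsh(P @ Acl + Acl.T @ P + 2.0 * np.eye(2))) < 0.0
```

Since $A+bK$ is Hurwitz and the right-hand side $-3\mathbbm{I}$ is negative definite, the Lyapunov solution is automatically symmetric positive definite, and the residual margin of $-\mathbbm{I}$ gives the strict inequality required above.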
If $\chi$ is a solution of \eqref{ligne1}, then we have \begin{align*} \frac{\diff}{\diff t} (\chi'P\chi)(t) \leq -2|\chi(t)|^2 + 4\delta \nu^2\ell_{\plong}|Pb||\chi(t)|^2. \end{align*} Thus, by choosing $\delta<\min(\delta_0, \frac{1}{4\nu^2\ell_{\plong}|Pb|})$, we get $\frac{\diff}{\diff t} (\chi'P\chi)(t) \leq -|\chi(t)|^2 $, hence the local exponential stability of \eqref{ligne1}. Now, we show that the perturbation term \eqref{ligne2} preserves stability. If $\chi$ is a solution of \eqref{ligne1}-\eqref{ligne2}, then, with the variation of constants, \begin{align*} \chi(t) = e^{(t-t_k)(A+bK)}\chi(t_k) +\int_{t_k}^t e^{(t-s)(A+bK)} b\left(K(\chi(t_k)-\chi(s))+ \delta\mathcal{N}^2(\uptau(\chi(t_k))-\mathbbold{1})\right)\diff s. \end{align*} Hence, for all $k\in\N$ and all $t\in[t_k, t_{k+1})$, \begin{align*} |\chi(t)-\chi(t_k)| &\leq \left\|\mathbbm{I}_{\mathbb{R}^2} - e^{(t_k-t)(A+bK)} \right\| |\chi(t)|\\ &\quad+ \int_{t_k}^t \left\|e^{(t_k-s)(A+bK)}\right\| \left( \kappa |\chi(t_k)-\chi(s)| + 4\delta \nu^2\ell_{\plong}|\chi(t_k)|\right) \diff s\\ &\leq\left( \left\|\mathbbm{I}_{\mathbb{R}^2} - e^{(t_k-t)(A+bK)} \right\| + 4\delta \nu^2\ell_{\plong} \int_{t_k}^t \left\|e^{(t_k-s)(A+bK)}\right\| \diff s\right) |\chi(t)|\\ &\quad+ \int_{t_k}^t \left\|e^{(t_k-s)(A+bK)}\right\| \kappa |\chi(t_k)-\chi(s)|\diff s\\ &\quad+ \int_{t_k}^t \left\|e^{(t_k-s)(A+bK)}\right\| 4\delta \nu^2\ell_{\plong} |\chi(t)-\chi(t_k)| \diff s. \end{align*} Since $\left\|\mathbbm{I}_{\mathbb{R}^2}-e^{(t_k-t)(A+bK)}\right\|\leq \Delta\|A+bK\|e^{\Delta\|A+bK\|}$ and $\int_{t_k}^{t} \left\|e^{(t_k-s)(A+bK)}\right\|\diff s\leq M$ for some $M>0$ independent of $t$ and $k$ (as in the proof of \eqref{E:varcons}), we obtain \begin{multline*} (1 - 4\delta \nu^2\ell_{\plong} M)|\chi(t)-\chi(t_k)| \leq( \Delta\|A+bK\|e^{\Delta\|A+bK\|} + 4\delta \nu^2\ell_{\plong} M) |\chi(t)|\\ + \int_{t_k}^t \left\|e^{(t_k-s)(A+bK)}\right\| \kappa |\chi(t_k)-\chi(s)|\diff s.
\end{multline*} Choosing $\delta<\frac{1}{8\nu^2\ell_{\plong} M}$, we obtain by Grönwall's lemma, \begin{align}\label{gronwall} |\chi(t)-\chi(t_k)|\leq 2\left(\Delta\|A+bK\|e^{\Delta\|A+bK\|}+4\delta \nu^2\ell_{\plong} M\right)e^{2\Delta\kappa M}|\chi(t)|. \end{align} Using the fact that with $\delta$ as above $\frac{\diff}{\diff t} (\chi'P\chi)(t) \leq -|\chi(t)|^2 $ if $\chi$ satisfies \eqref{ligne1}, we also get if $\chi$ satisfies \eqref{ligne1}-\eqref{ligne2} that \begin{align}\label{lyapgronwall} \frac{\diff}{\diff t} (\chi'P\chi)(t) \leq -|\chi(t)|^2 + |\chi(t)| |Pb| \left(\kappa |\chi(t_k)-\chi(t)| +4\delta \nu^2\ell_{\plong}|\chi(t_k)-\chi(t)| \right). \end{align} Combining \eqref{gronwall} and \eqref{lyapgronwall} shows that \eqref{ligne1}-\eqref{ligne2} is still locally exponentially stable by choosing $\delta$ and $\Delta$ small enough. Finally, we show that the perturbation term \eqref{ligne3} preserves stability. Since the dynamical system \eqref{ligne1}-\eqref{ligne2} is locally exponentially stable, there exist $r_x>0$ and $\eta>0$ such that, if $|x_0|\leq r_x$ and the perturbation term \eqref{ligne3} is bounded by $\eta$, then $|x(t)|\leq R_x$ for all $t\in\mathbb{R}_+$. Since we have \begin{align*} |bK(\uppi(\hat{\etat}(t_k))-x(t_k))| \leq \kappa\ell_{\inv} \norm{\varepsilon(t_k)} \leq \kappa\ell_{\inv} r_\varepsilon \end{align*} and (by \eqref{Nlip}), \begin{align*} \left|\delta\mathcal{N}^2(\hat{\etat}(t_k)-\mathbbold{1}) -\delta \mathcal{N}^2(\uptau(x(t_k))-\mathbbold{1}) \right| \leq \delta\nu^2\|\varepsilon(t_k)\|_Z(\|\hat{\etat}(t_k)\|_Z+\|\uptau(x(t_k))\|_Z) \leq \delta\nu^2r_\varepsilon(r_\varepsilon+2) \end{align*} we obtain the desired result by choosing \begin{equation} \kappa\ell_{\inv} r_\varepsilon + \delta\nu^2r_\varepsilon(r_\varepsilon+2) \leq \eta.
\end{equation} \subsubsection{Attractivity}\label{sec:attractivity_infinie} \noindent \textbf{Step 1: Show that $\boldsymbol{\varepsilon\overset{w}{\rightharpoonup} 0}$.} Let $\Omega$ be the set of weak limit points of $(\varepsilon(t))_{t\in\mathbb{R}_+}$. According to \eqref{E:eps_decroit_infinie}, $\varepsilon$ is bounded. Hence, by Alaoglu's theorem, $\Omega$ is not empty. It remains to show that $\Omega = \{0\}$. Let $\varepsilon^\star\in\Omega$ and an increasing sequence $(t_n)_{n\in\N} \to+\infty $ such that $\varepsilon(t_{n})\overset{w}{\rightharpoonup}\varepsilon^\star$ as $n\to+\infty$. For all $n\in\N$, let $k_n$ be such that $t_n\in[t_{k_n}, t_{k_n+1})$. Again, after passing to a subsequence, we may assume that $(t_{k_n})$ is increasing. Let $p\in\N$ be an integer to be fixed later. Consider the sequences $(u(t_{k_n})), (u(t_{k_n-1})),\dots,(u(t_{k_n-p}))$ (with $t_\ell=0$ if $\ell<0$). According to Corollary~\ref{cor:semiglob}, $|u|$ is bounded by $\kappa \frac{j}{\mu} + 16 \nu^2 \delta$. Moreover, combining \eqref{E:etat_constant} and \eqref{E:eps_decroit_infinie}, $\hat{\etat}$ is also bounded. Hence, after passing to a subsequence, we may assume that $(u(t_{k_{n}})), (u(t_{k_{n}-1})),\dots,(u(t_{k_{n}-p}))$ converge and $\hat{\etat}(t_{k_n-p})$ converges weakly to some $\hat{\etat}^\star\in Z$. If there exists $j\in\{0,\dots,p\}$ such that $(u(t_{k_{n}-j}))_{n\in\N}$ does not converge to $0$, then the arguments used with persistence assumptions (such as the ones found in \cite{Celle-etal.1989}) remain valid. According to Assumption~\ref{ass:inobs}, $(u(t_{k_{n}-j}))_{n\in\N}$ has an accumulation point $u^\star$ that makes the system~\eqref{E:system_plonge_infinie} approximately observable in any positive time. Since $u(t) = u(t_{k_{n}-j})$ for all $t\in[t_{k_{n}-j}, t_{k_{n}-j+1})$ we obtain by \cite[Theorem 3.5]{brivadis:hal-02529820} (see also \cite[Theorem 7, Step 4]{Celle-etal.1989}) that $\varepsilon^\star=0$.
Now, assume that $u(t_{k_{n}-j})\to0$ as $n\to+\infty$ for all $j\in\{0,\dots,p\}$. The system is sufficiently explicit to allow computation of $\hat{\etat}(t_{k_{n}-j})$ from the knowledge of $\hat{\etat}(t_{k_{n}-p})$ for all $j\in\{0,\dots,p-1\}$. The assumption $u(t_{k_{n}-j})\to0$ as $n\to+\infty$ then implies strong constraints on the weak limit $\hat{\etat}^\star$ of $\hat{\etat}(t_{k_n-p})$. With $p=2$, we actually obtain that the only possible case corresponds to $(\hat{\etat}(t_n),\varepsilon(t_n))\cvf(\mathbbold{1},0)$. This occupies the remainder of the step. Passing to the limit in the expression of $u(t_{k_n-p})$, we get the existence of $\psn^2_\infty\in\mathbb{R}_+$ such that $\mathcal{N}^2(\hat{\etat}(t_{k_n-p})-\mathbbold{1}) \to \psn^2_\infty$ and \begin{align} K \mathfrak{f}(\psX{\hat{\etat}^\star}{e_1}) + \delta \psn^2_\infty = 0. \end{align} Using the method of characteristics, one can show that for all $t, t'\in\mathbb{R}_+$ and almost all $s\in\S^1$, \begin{align}\label{E:uconst} z(t+t', s) = \mathcal{I}(t+t', t, s)z(t, s-t'), \end{align} where $\mathcal{I}(t+t', t, s) = e^{-i\mu \int_{t}^{t+t'}u(\sigma)\sin(s-\sigma)\diff \sigma}$. Then, according to Duhamel's formula, \begin{align}\label{E:xihat} \hat{\etat}(t+t', s) = \mathcal{I}(t+t', t, s)\hat{\etat}(t, s-t') - \alpha \int_{t}^{t+t'} \mathcal{I}(t+t', \sigma, s) \big(\left(\mC^*\mC\varepsilon(\sigma)\right)(s-t')\big)\diff \sigma. \end{align} Since $u(t) = u(t_{k_{n}-j})$ for all $t\in[t_{k_{n}-j}, t_{k_{n}-j+1})$ and $u(t_{k_{n}-j})\to0$ as $n\to+\infty$ for all $j\in\{0,\dots,p\}$, we have that $\mathcal{I}(t_{k_{n}-p}+t', t_{k_{n}-p}, s)\to1$ as $n\to+\infty$, uniformly in $s\in\S^1$, for all $t'\in[0, p\Delta]$.
Then \begin{multline}\label{E:ineqzhat} \norm{\hat{\etat}(t_{k_{n}-p}+t', \cdot) - \hat{\etat}(t_{k_{n}-p}, \cdot-t')} \leq \sup_{s\in\S^1}|\mathcal{I}(t_{k_{n}-p}+t', t_{k_{n}-p}, s) - 1| \norm{\hat{\etat}(t_{k_{n}-p})} \\ + \alpha \sup_{\substack{\sigma\in[t_{k_{n}-p}, t_{k_{n}-p}+t']\\ s\in\S^1}}|\mathcal{I}(t_{k_{n}-p}+t', \sigma, s)| \norm{\int_{t_{k_{n}-p}}^{t_{k_{n}-p}+t'}\mC^*\mC\varepsilon(\sigma)\diff \sigma} \end{multline} tends towards $0$ as $n$ goes to $+\infty$, since $\hat{\etat}$ and $\varepsilon$ are bounded and $t\mapsto\norm{\mC\varepsilon(t)}$ is integrable over $\mathbb{R}_+$ (see \eqref{E:eps_decroit_infinie}). Hence, for all $t'\in[0, p\Delta]$, \begin{align}\label{E:cvzhat1} \psX{\hat{\etat}(t_{k_{n}-p}+t',\cdot)}{e_1} \to \psX{\hat{\etat}^\star(\cdot-t')}{e_1} = e^{-it'} \psX{\hat{\etat}^\star}{e_1} \end{align} and \begin{align}\label{E:cvzhat2} \mathcal{N}^2(\hat{\etat}(t_{k_n-p}+t', \cdot)-\mathbbold{1}) \to \mathcal{N}^2(\hat{\etat}^\star(\cdot-t')-\mathbbold{1}) = \psn^2_\infty \end{align} as $n$ goes to $+\infty$. According to \eqref{E:cvzhat1} and \eqref{E:cvzhat2} for $t'=j\Delta$ and $j\in\{0,\dots,p\}$, and since $u(t_{k_{n}-j})\to0$, we get \begin{equation}\label{syst:absurde} K \mathfrak{f}(e^{-ij\Delta} \psX{\hat{\etat}^\star}{e_1}) + \delta \psn^2_\infty = 0,\qquad \forall j\in\{0,\dots,p\}. \end{equation} For all $t\in\mathbb{R}$ and all $\zeta\in B_{\mathbb{C}}\left(0, J_1\left(j\right)\right)$, we have by \eqref{E:def_f}, $ K \mathfrak{f}(e^{i t} \zeta) = K \mathfrak{R}(t) \mathfrak{f}(\zeta) $ where $\mathfrak{R}(t) = \begin{pmatrix} \cos t &\sin t\\ -\sin t&\cos t \end{pmatrix}$. Equations \eqref{syst:absurde} with $p=2$ can be rewritten as the matrix equality \begin{equation*} \begin{pmatrix} K & 1\\ K \mathfrak{R}(\Delta) & 1\\ K \mathfrak{R}(2\Delta) & 1\\ \end{pmatrix} \begin{pmatrix} \mathfrak{f}(\psX{\hat{\etat}^\star}{e_1})\\ \delta \psn^2_\infty \end{pmatrix} = 0.
\end{equation*} Since the square matrix on the left-hand side is invertible for $\Delta\in(0, \pi)$ (its determinant is, up to sign, $2|K|^2\sin\Delta\,(1-\cos\Delta)$, which is nonzero for $\Delta\in(0,\pi)$ since $K\neq0$), we get that $\psn^2_\infty = 0$, \emph{i.e.,~} $\hat{\etat}(t_{k_{n}-p}) \cvf \mathbbold{1}$. Combining this with \eqref{E:ineqzhat}, we have $\hat{\etat}(t_{k_{n}}-t') \cvf \mathbbold{1}$ as $n$ goes to $+\infty$ for all $t'\in[0, p\Delta]$. In particular, $\mC \hat{\etat}(t_{k_{n}}-t') \to \mC\uptau(0)$. Since $\mC\varepsilon(t)\to 0$ as $t\to+\infty$ by \eqref{E:eps_decroit_infinie}, we obtain $\mC\uptau(x(t_{k_{n}}-t')) \to \mC\uptau(0)$ as $n\to+\infty$. Hence, by Assumption~\ref{ass:detec}, $x(t_{k_{n}})\to0$, \emph{i.e.,~} $z(t_{k_{n}})\to\mathbbold{1}$. Thus $\varepsilon(t_{k_{n}})\cvf 0$, \emph{i.e.,~} $\varepsilon^\star=0$. \medskip \noindent \textbf{Step 2: Show that $\boldsymbol{x\cv0}$.} Recall that $x$ satisfies the following dynamics: \begin{align*} \dot{x} = (A+bK) x + bK(\uppi(\hat{\etat})-x) + \delta\mathcal{N}^2(\hat{\etat}-\mathbbold{1}) b. \end{align*} Since $A+bK$ is Hurwitz, there exists $P\in\mathbb{R}^{2\times 2}$ positive definite such that $P(A+bK) + (A+bK)'P < -2 \mathbbm{I}_{\mathbb{R}^2}$. Set $V:\mathbb{R}^2\ni x\mapsto x'Px$. Then \begin{align*} \frac{\diff}{\diff t} V(x) &\leq -2 |x|^2 + 2|x| |Pb| \kappa |\uppi(\hat{\etat})-x| + 2|x||Pb|\delta\mathcal{N}^2(\hat{\etat}-\mathbbold{1})\\ &\leq -2 |x|^2 + 2|Pb|\frac{j}{\mu}\left(\kappa |\uppi(\hat{\etat})-x|+ \delta\mathcal{N}^2(\hat{\etat}-\mathbbold{1})\right). \end{align*} We have \begin{align*} \mathcal{N}(\hat{\etat}-\mathbbold{1}) \leq \mathcal{N}(\varepsilon) + \mathcal{N}(z-\mathbbold{1}) \leq \mathcal{N}(\varepsilon) + \nu\norm{z-\mathbbold{1}} \leq \mathcal{N}(\varepsilon) + \nu \ell_{\plong}|x| \end{align*} where $\ell_{\plong}$ is the Lipschitz constant of $\uptau$ over $\mathcal{K}_x$.
Hence, if $\delta\leq\frac{\mu}{4|Pb|j\nu^2\ell_{\plong}^2}$ (which we can assume without loss of generality by replacing $\delta_0$ by $\min(\delta_0, \frac{\mu}{4|Pb|j\nu^2\ell_{\plong}^2})$, since diminishing $\delta$ does not affect the previous arguments), then \begin{align*} \frac{\diff}{\diff t} V(x) &\leq -|x|^2 + 2|Pb|\frac{j}{\mu}\left(\kappa|\uppi(\hat{\etat})-x| + 2\delta\mathcal{N}^2(\varepsilon)\right). \end{align*} Recall that $|x|$ and $|\uppi(\hat{\etat})|$ are bounded by $\frac{j}{\mu}$. Moreover, $\mathcal{N}(\varepsilon(t))\to0$ as $t\to+\infty$ by Step~1, and $\uppi(\hat{\etat})-x\to0$ as $t\to+\infty$ since $\uppi$ is a strong left-inverse of $\uptau$ (see Corollary~\ref{cor:conv_faible_forte}). For all $r>0$, set $D(r)=\{x\in\mathbb{R}^2\mid V(x)\leq r\}$. In order to prove that $x\to0$, we show that for all $r>0$, there exists $T(r)\geq0$ such that $x(t)\in D(r)$ for all $t\geq T(r)$. If $r>0$ is such that $\bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})\subset D(r)$ then $T(r)=0$ satisfies the statement. Let $0<r<R$ be such that $\bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})\not \subset D(r)$ and $\bar{B}_{\mathbb{R}^2}(0, \frac{j}{\mu})\subset D(R)$. Since $\mathcal{N}(\varepsilon(t))\to 0$ and $\uppi(\hat{\etat}(t))-x(t)\to 0$, there exist $T_1(r)>0$ and $\bar{m}>0$ such that for all $t\geq T_1(r)$, if $x(t)\not\in D(r)$, then $ \frac{\diff}{\diff t}V(x)<-\bar{m}. $ First, this implies that if $x(t)\in D(r)$ for some $t\geq T_1(r)$, then $x(s)\in D(r)$ for all $s\geq t$. Second, for all $t\geq0$, \begin{align*} V(x(T_1(r)+t)) &= V(x(T_1(r))) + \int_0^{t}\frac{\diff}{\diff \tau}V(x(T_1(r)+\tau))\diff \tau \leq R - \bar{m}t \tag{while $x(T_1(r)+t)\notin D(r)$.} \end{align*} Set $T_2(r) = \frac{R-r}{\bar{m}}$ and $T(r) = T_1(r) + T_2(r)$. Then for all $t\geq T(r)$, $x(t)\in D(r)$, which concludes the proof. \section{Conclusion} The goal of the paper was to illustrate new approaches to tackle the problem of output stabilization at an unobservable target.
As we explained, Luenberger observers can be employed, as long as some embedding is provided. To mitigate the effect of the observability singularity, we rely on the dissipativity of the error system and perturbations of the feedback law. The promise of such strategies is well exemplified by Theorem~\ref{th:finite}, where we were able to set up this approach. However, while classical output linearization methods allow one to introduce linear observers, the dissipativity is not guaranteed. Relying on representation theory, we explored, for a specific control system, a different approach to obtain an embedding of the dynamics into a unitary infinite-dimensional system. This allowed us to design an observer for many nonlinear outputs of the original system while guaranteeing the dissipative nature of the error system, with Theorem~\ref{cor:main} also covering some of the cases treated in Theorem~\ref{th:finite}. Beyond the method we explored in the present article, we wish to stress that topological obstructions to output feedback stabilization can be lifted when infinite-dimensional observers are considered. More precisely, the obstruction brought up in \cite{Coron1994} regarding the stabilizability of $\dot x = u,\, y = x^2$ and extended in Corollary~\ref{cor:impossible} vanishes if one extends the usual definition of dynamic output feedback stabilizability by allowing infinite-dimensional states fed by the output, as in Definition~\ref{def:stab_inf}. Therefore, new infinite-dimensional embedding techniques for output feedback stabilization, either based on the more general framework of \cite{Celle-etal.1989}, or on other infinite-dimensional observers, need to be investigated. \subsubsection*{Acknowledgements} The authors would like to thank Vincent Andrieu for many fruitful discussions.\\ This research was partially funded by the French Grant ANR ODISSE (ANR-19-CE48-0004-01) and by the ANR SRGI (ANR-15-CE40-0018).
\section{Introduction} The Yang-Mills set of Dyson-Schwinger equations can be solved through a class of exact solutions of the 1-point function, similarly to the $\phi^4$ theory \cite{Frasca:2015yva}. Indeed, both theories can be mapped onto each other. Adding quarks to the Lagrangian makes the theory no longer amenable to an exact treatment. But, notwithstanding such a difficulty, full QCD can be treated with the identical approach through Dyson-Schwinger equations. The technique we use is due to Bender, Milton and Savage \cite{Bender:1999ek}. This method has the important advantage that the differential form of the Dyson-Schwinger equations is retained. This is especially useful when, as in our case, we have exact solutions for the 1-point and 2-point equations in the classical case. Therefore, starting from the results for Yang-Mills theory without quarks, the set of Dyson-Schwinger equations for quantum chromodynamics (QCD) becomes amenable to a perturbative treatment in the strong coupling limit, provided the 't Hooft limit $N\rightarrow\infty,\ Ng^2=constant,\ Ng^2\gg 1$ is taken. Our main conclusion is that the low-energy limit of QCD yields a confining non-local Nambu-Jona-Lasinio approximation \cite{Frasca:2019ysi}, once confinement is imposed by requiring the quark propagator to be off-shell, due to the behaviour of the quark mass function \cite{Roberts:1994dr,Gribov:1998kb}. Therefore, no free quark is observable, as it is no longer a state of the theory. In this paper we will follow the derivation given in Ref.\cite{Frasca:2019ysi}. \section{Bender-Milton-Savage method} The principal point of the Bender-Milton-Savage (BMS) technique is to derive the Dyson-Schwinger equations while retaining their PDE form \cite{Bender:1999ek}. In this way, vertices are never introduced and there is no need to move to momentum space, which would yield cumbersome integral expressions.
We consider the partition function of a given theory, e.g., a scalar field theory to fix the ideas, given by \begin{equation} Z[j]=\int[D\phi]e^{iS(\phi)+i\int d^4xj(x)\phi(x)}. \end{equation} For the 1P-function, one has \begin{equation} \left\langle\frac{\delta S}{\delta\phi(x)}\right\rangle=j(x), \end{equation} being \begin{equation} \left\langle\ldots\right\rangle=\frac{\int[D\phi]\ldots e^{iS(\phi)+i\int d^4xj(x)\phi(x)}}{\int[D\phi]e^{iS(\phi)+i\int d^4xj(x)\phi(x)}}. \end{equation} Then, we set $j=0$. We differentiate this equation again with respect to $j$ to get the equation for the 2P-function. We assume the following definition of the nP-functions \begin{equation} \langle\phi(x_1)\phi(x_2)\ldots\phi(x_n)\rangle=\frac{\delta^n\ln(Z[j])}{\delta j(x_1)\delta j(x_2)\ldots\delta j(x_n)}. \end{equation} This will yield \begin{equation} \frac{\delta G_k(\ldots)}{\delta j(x)}=G_{k+1}(\ldots,x). \end{equation} This procedure can be iterated to any desired order, giving, in principle, the whole hierarchy of the Dyson-Schwinger equations in PDE form \cite{Frasca:2015yva}. This is advantageous when the solutions for the 1P- and 2P-functions are known in the classical case. \section{1P and 2P functions of QCD} For our computations, we choose the Landau gauge, which simplifies the computations and decouples the ghost field.
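Before turning to QCD, the starting identity $\left\langle\delta S/\delta\phi\right\rangle=j$ of the previous section can be made concrete in a zero-dimensional Euclidean toy model, where the path integral collapses to an ordinary integral. A sketch follows; the parameter values are illustrative:

```python
# Zero-dimensional Euclidean analogue of <dS/dphi> = j for the action
# S = (1/2) m^2 phi^2 + (lambda/4) phi^4: integrating by parts in
# Z(j) = int dphi exp(-S + j phi) gives the first Dyson-Schwinger equation
#   m^2 <phi> + lambda <phi^3> = j.
import numpy as np
from scipy.integrate import quad

m2, lam, j = 1.0, 0.5, 0.7          # illustrative parameters

def boltzmann(phi):
    return np.exp(-0.5 * m2 * phi**2 - 0.25 * lam * phi**4 + j * phi)

def moment(n):
    num, _ = quad(lambda phi: phi**n * boltzmann(phi), -np.inf, np.inf)
    den, _ = quad(boltzmann, -np.inf, np.inf)
    return num / den

lhs = m2 * moment(1) + lam * moment(3)
assert abs(lhs - j) < 1e-6
```

The same integration-by-parts manipulation, performed functionally and followed by repeated differentiation with respect to $j$, is what generates the hierarchy used below.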
By applying the BMS technique to the QCD partition function \cite{Frasca:2019ysi}, one has for the 1P-functions \begin{eqnarray} &&\partial^2G_{1\nu}^{a}(x)+gf^{abc}( \partial^\mu G_{2\mu\nu}^{bc}(0)+ \nonumber \\ &&\partial^\mu G_{1\mu}^{b}(x)G_{1\nu}^{c}(x)- \partial_\nu G_{2\mu}^{\nu bc}(0) \nonumber \\ &&-\partial_\nu G_{1\mu}^{b}(x)G_{1}^{\mu c}(x)) \nonumber \\ &&+gf^{abc}\partial^\mu G_{2\mu\nu}^{bc}(0)+gf^{abc}\partial^\mu(G_{1\mu}^{b}(x)G_{1\nu}^{c}(x)) \nonumber \\ &&+g^2f^{abc}f^{cde}(G_{3\mu\nu}^{\mu bde}(0,0) +G_{2\mu\nu}^{bd}(0)G_{1}^{\mu e}(x) \nonumber \\ &&+G_{2\nu\rho}^{eb}(0)G_{1}^{\rho d}(x) +G_{2\mu\nu}^{de}(0)G_{1}^{\mu b}(x)+ \nonumber \\ &&G_{1}^{\mu b}(x)G_{1\mu}^{d}(x)G_{1\nu}^{e}(x)) \nonumber \\ &&=g\sum_{q,i}\gamma_\nu T^aS_{q}^{ii}(0)+g\sum_{q,i}{\bar q}_1^i(x)\gamma_\nu T^a q_1^i(x), \end{eqnarray} and for the quarks \begin{equation} (i\slashed\partial-{\hat M}_q)q_{1}^{i}(x)+g{\bm T}\cdot\slashed{\bm G}_1(x) q_{1}^{i}(x) = 0, \end{equation} the mass function being given by \begin{equation} {\hat M}_q^i=m_qI-g{\bm T}\cdot\slashed{\bm W}^{i}_q(x,x). \end{equation} Here and in the following, Greek indexes ($\mu,\nu,\ldots$) are for the space-time and Latin indexes ($a, b,\ldots$) for the gauge group. It is not difficult to see that, as expected, contributions from higher-order correlation functions appear in the equations for the lower-order correlation functions. This is peculiar to the Dyson-Schwinger scheme. We will show in the following sections that this is harmless. We can obtain a reduced set of such equations by using the selected solutions \cite{Frasca:2019ysi} \begin{equation} G_{1\nu}^a(x)\rightarrow\eta_\nu^a\phi(x), \end{equation} $\phi(x)$ being a scalar field, and we introduce the $\eta$-symbols with the following properties \begin{eqnarray} \eta_\mu^a\eta^{a\mu} &=& N^2-1, \nonumber \\ \eta_\mu^a\eta^{b\mu} &=& \delta_{ab}, \nonumber \\ \eta_\mu^a\eta_\nu^a &=& \left(g_{\mu\nu}-\delta_{\mu\nu}\right)/2.
\end{eqnarray} This yields the set of reduced 1P-function equations \begin{eqnarray} &&\partial^2\phi(x)+2Ng^2\Delta(0)\phi(x)+Ng^2\phi^3(x) \nonumber \\ &&=\frac{1}{N^2-1}\left[g\sum_{q,i}\eta^{a\nu}\gamma_\nu T^aS_{q}^{ii}(0)\right. \nonumber \\ &&\left.+g\sum_{q,i}{\bar q}_1^i(x)\eta^{a\nu}\gamma_\nu T^a q_1^i(x)\right] \nonumber \\ &&(i\slashed\partial-{\hat M}_q^i)q_{1}^{i}(x)+g{\bm T}\cdot\slashed\eta\phi(x) q_{1}^{i}(x) = 0. \end{eqnarray} We now give the equations for the 2P-functions. We make the choice for the gluon 2P-function \begin{equation} G_{2\mu\nu}^{ab}(x-y)=\left(\eta_{\mu\nu}-\frac{\partial_\mu\partial_\nu}{\partial^2}\right)\Delta_\phi(x-y) \end{equation} where $\eta_{\mu\nu}$ is the Minkowski metric, and $\Delta_\phi(x-y)$ is the propagator of the scalar field $\phi$ introduced above, onto which the equations are mapped. Then, the set of remapped 2P-function equations is \begin{eqnarray} &&\partial^2\Delta_\phi(x-y)+2Ng^2\Delta_\phi(0)\Delta_\phi(x-y)+3Ng^2\phi^2(x)\Delta_\phi(x-y) \nonumber \\ &&=g\sum_{q,i}{\bar Q}^{ia}_\nu(x-y)\gamma^\nu T^a q_{1}^{i}(x) \nonumber \\ &&+g\sum_{q,i}{\bar q}_1^{i}(x)\gamma^\nu T^a Q^{ia}_\nu(x-y) + \delta^4(x-y)\nonumber \\ &&\partial^2 P^{ad}_2(x-y)=\delta_{ad}\delta^4(x-y) \nonumber \\ &&(i\slashed\partial-{\hat M}_q^i)S^{ij}_q(x-y) \nonumber \\ &&+g{\bm T}\cdot\slashed\eta\phi(x) S^{ij}_q(x-y)=\delta_{ij}\delta^4(x-y) \nonumber \\ &&\partial^2W_{q\nu}^{ai}(x-y)+2Ng^2\Delta_\phi(0)W_{q\nu}^{ai}(x-y)+3Ng^2\phi^2(x)W_{q\nu}^{ai} \nonumber \\ &&=g\sum_{j}{\bar q}_1^{j}(x)\gamma_\nu T^a S^{ji}_q(x-y)\nonumber \\ &&(i\slashed\partial-{\hat M}_q^i)Q^{ia}_\mu(x-y)+g{\bm T}\cdot\slashed\eta\phi(x) Q^{ia}_\mu(x-y) \nonumber \\ &&+gT^a\gamma_\mu\Delta_\phi(x-y) q_{1}^{i}(x)=0. \end{eqnarray} \section{'t Hooft limit} The 't Hooft limit corresponds to solving the theory when \cite{tHooft:1973alw, tHooft:1974pnl} \begin{equation} N\rightarrow\infty,\qquad Ng^2={\rm constant}, \qquad Ng^2\gg 1.
\end{equation} We are assuming an SU(N) gauge group and so $N$ is the number of colors. Therefore, we are able to evaluate our set of Dyson-Schwinger equations in this limit. We need a perturbation series for a coupling formally running to infinity, as the one proposed in Ref.~\cite{Frasca:2013tma}. To this aim, we re-scale $x\rightarrow\sqrt{Ng^2}x$. So, e.g., the equation for the gluon field becomes \begin{eqnarray} \partial^2\phi(x')+2\Delta_\phi(0)\phi(x')+3\phi^3(x')&=& \\ \frac{1}{\sqrt{Ng^2}\sqrt{N}(N^2-1)}\left[\sum_{q,i}\eta\cdot\gamma\cdot TS_{q}^{ii}(0)+\right.&& \nonumber \\ \left.\sum_{q,i}{\bar q}_1^i(x')\eta\cdot\gamma\cdot T q_1^i(x')\right].&&\nonumber \end{eqnarray} Then, formally taking the 't Hooft limit yields the 1P-equations at the leading order \begin{eqnarray} \partial^2\phi_0(x)+2Ng^2\Delta_\phi(0)\phi_0(x)+3Ng^2\phi_0^3(x)=0,& \nonumber \\ (i\slashed\partial-{\hat M}_q^i){\hat q}_{1}^{i}(x)=0.& \end{eqnarray} From these equations we see that the effect of the interactions is on the masses. We can solve this set of equations as \begin{eqnarray} \phi_0(x)=\sqrt{\frac{2\mu^4}{m^2+\sqrt{m^4+2Ng^2\mu^4}}}\times \nonumber \\ {\rm sn}\left(p\cdot x+\chi,\kappa\right), \end{eqnarray} with sn a Jacobi elliptic function, $\mu$ and $\chi$ arbitrary integration constants, and $m^2=2Ng^2\Delta_\phi(0)$. Then, \begin{equation} \kappa=\frac{-m^2+\sqrt{m^4+2Ng^2\mu^4}}{-m^2-\sqrt{m^4+2Ng^2\mu^4}}. \end{equation} This holds provided that the following dispersion relation is satisfied \begin{equation} p^2=m^2+\frac{Ng^2\mu^4}{m^2+\sqrt{m^4+2Ng^2\mu^4}}.
\end{equation} In the same limit, we get the set of 2P-equations \begin{eqnarray} \partial^2\Delta_\phi(x,y)+2Ng^2\Delta_\phi(0)\Delta_\phi(x-y)+3Ng^2\phi_0^2(x)\Delta_\phi(x-y) \nonumber \\ =g\sum_{q,i}{\bar Q}^{ia}_\nu(x,y)\gamma^\nu T^a {\hat q}_{1}^{i}(x) \nonumber \\ +g\sum_{q,i}{\bar{\hat q}}_1^{i}(x)\gamma^\nu T^a Q^{ia}_\nu(x,y) + \delta^4(x-y) \nonumber \\ \partial^2 P^{ad}_2(x-y)=\delta_{ad}\delta^4(x-y) \nonumber \\ (i\slashed\partial-{\hat M}_q^i){\hat S}^{ij}_q(x-y)=\delta_{ij}\delta^4(x-y) \nonumber \\ \partial^2W_{q\nu}^{ai}(x,y)+2Ng^2\Delta_\phi(0)W_{q\nu}^{ai}(x,y)+3Ng^2\phi_0^2(x)W_{q\nu}^{ai}(x,y)\nonumber \\ =g\sum_{j}{\bar {\hat q}}_1^{j}(x)\gamma_\nu T^a {\hat S}^{ji}_q(x-y) \nonumber \\ (i\slashed\partial-{\hat M}_q^i){\hat Q}^{ia}_\mu(x,y)+gT^a\gamma_\mu\Delta_\phi(x-y) {\hat q}_{1}^{i}(x)=0. \end{eqnarray} In order to solve this set of equations, we consider \begin{eqnarray} \partial^2\Delta_0(x-y)+[m^2+3Ng^2\phi_0^2(x)]\Delta_0(x-y)&=& \nonumber \\ \delta^4(x-y)&& \end{eqnarray} that admits the following solution in momentum space \cite{Frasca:2015yva,Frasca:2013tma} \begin{eqnarray} \Delta_0(p)=M{\hat Z}(\mu,m,Ng^2)\frac{2\pi^3}{K^3(\kappa)}\times \nonumber \\ \sum_{n=0}^\infty(-1)^n\frac{e^{-(n+\frac{1}{2})\pi\frac{K'(\kappa)}{K(\kappa)}}} {1-e^{-(2n+1)\frac{K'(\kappa)}{K(\kappa)}\pi}}\times \nonumber \\ (2n+1)^2\frac{1}{p^2-m_n^2+i\epsilon} \end{eqnarray} with \begin{equation} M=\sqrt{m^2+\frac{Ng^2\mu^4}{m^2+\sqrt{m^4+2Ng^2\mu^4}}}, \end{equation} and ${\hat Z}(\mu,m,Ng^2)$ a given constant. This gives rise to a gap equation for the mass shift $m$ on the theory spectrum $m_n$ \cite{Frasca:2017slg}.
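The elliptic background entering these equations can be checked numerically. In the massless limit $m\to 0$, the formulas above give $\kappa=-1$ (interpreting the second argument of sn as the elliptic parameter), amplitude $\mu(2/Ng^2)^{1/4}$ and $p^2=\mu^2\sqrt{Ng^2/2}$. The sketch below (illustrative values, writing $\lambda$ for $Ng^2$ and reducing to a single variable with $\chi=0$) integrates $\ddot\phi=-\lambda\phi^3$ and verifies that the orbit returns to its initial data after one sn period $4K(-1)/p$:

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import solve_ivp

lam, mu = 2.0, 1.0                       # lam stands for Ng^2; values are illustrative
A = mu * (2.0 / lam) ** 0.25             # amplitude of the sn solution at m = 0
p = mu * (lam / 2.0) ** 0.25             # from the dispersion relation p^2 = mu^2 sqrt(lam/2)

# real period of sn(u, -1) is 4K(-1); imaginary-modulus transformation: K(-1) = K(1/2)/sqrt(2)
T = 4.0 * ellipk(0.5) / np.sqrt(2.0) / p

def rhs(t, y):                           # phi'' = -lam * phi^3 (massless limit, chi = 0)
    return [y[1], -lam * y[0] ** 3]

# sn-type initial data: phi(0) = 0, phi'(0) = A*p*cn(0)*dn(0) = A*p
sol = solve_ivp(rhs, [0, T], [0.0, A * p], rtol=1e-10, atol=1e-12)
phi_T, dphi_T = sol.y[0, -1], sol.y[1, -1]
print(phi_T, dphi_T)                     # close to the initial data (0, A*p): periodic orbit
```

The returned values reproduce the initial conditions to integrator accuracy, confirming that the sn profile with the quoted amplitude and dispersion relation solves the leading-order equation.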
Given the gluon propagator, one gets \begin{eqnarray} {\hat S}^{ij}_q(x,y)=\delta_{ij}(i\slashed\partial-{\hat M}_q^i)^{-1}\delta^4(x-y)\nonumber \\ {\hat Q}^{ia}_\mu(x,y)=-g\int d^4y'\sum_j{\hat S}^{ij}_q(x-y')T^a\gamma_\mu\Delta_0(y',y) {\hat q}_{1}^{j}(y') \nonumber \\ W_{q\nu}^{ai}(x,y)=g\int d^4y'\Delta_0(x-y')\sum_{j}{\bar {\hat q}}_1^{j}(y')\gamma_\nu T^a {\hat S}^{ji}_q(y'-y) \nonumber \\ \Delta(x,y)=\Delta_0(x-y)+ \nonumber \\ g\int d^4y'\Delta_0(x-y')\left[\sum_{q,i}{\bar {\hat Q}}^{ia}_\nu(y',y)\gamma^\nu T^a {\hat q}_{1}^{i}(y')\right. \nonumber \\ \left.+\sum_{q,i}{\bar{\hat q}}_1^{i}(y')\gamma^\nu T^a {\hat Q}^{ia}_\nu(y',y)\right]. \end{eqnarray} Finally, the quark propagator can be obtained by this set of equations as \begin{eqnarray} (i\slashed\partial-{\hat M}_q^i){\hat q}_{1}^{i}(x)= 0\nonumber \\ (i\slashed\partial-{\hat M}_q^i){\hat S}^{ij}_q(x-y)=\delta_{ij}\delta^4(x-y), \end{eqnarray} given the mass matrix \begin{eqnarray} \label{eq:se} {\hat M}_q^i=m_qI- g^2\int d^4y'\Delta_0(x-y')T^a\gamma^\nu\times \nonumber \\ \sum_{k}{\bar {\hat q}}_1^{k}(y')\gamma_\nu T^a {\hat S}^{ki}_q(y'-x). \end{eqnarray} This can be solved by iteration starting from the free quark propagator. When the on-shell condition fails, we will have quark confinement. 
\section{Non-local NJL approximation} From eq.(\ref{eq:se}) we can define the self-energy \begin{eqnarray} \Sigma(x,x)=g^2\int d^4y'\Delta_0(x-y')T^a\gamma^\nu\times \nonumber \\ \sum_{k}{\bar {\hat q}}_1^{k}(y')\gamma_\nu T^a {\hat S}^{ki}_q(y'-x). \end{eqnarray} From this, we can introduce a non-local Nambu-Jona-Lasinio (nlNJL) model \cite{Frasca:2019ysi} \begin{equation} (i\slashed\partial-m_q+\Sigma^i_{NJL}(x,x)){\hat S}^{ij}_q(x-y)=\delta_{ij}\delta^4(x-y) \end{equation} where the quark self-energy is now computed at the first iteration using the free quark propagator, giving \begin{eqnarray} \Sigma^i_{NJL}(x,x)=g^2\int d^4y'\Delta_0(x-y')T^a\gamma^\nu\times \nonumber \\ \sum_{j}{\bar {\hat q}}_0^{j}(y')\gamma_\nu T^a {\hat S}^{ji}_{0q}(y'-x) \end{eqnarray} or, in momentum space, \begin{eqnarray} \Sigma^i_{NJL}(p)=g^2\int\frac{d^4p_1}{(2\pi)^4}\Delta_0(p_1)T^a\gamma^\nu\times \nonumber \\ \sum_{j}{\bar {\hat q}}_0^{j}(p)\gamma_\nu T^a {\hat S}^{ji}_{0q}(p_1-p). \end{eqnarray} This nlNJL model gives rise, as usual, to a mass gap equation for the quarks, implying the formation of a condensate. This generally guarantees a pole in the quark propagator. Failing to find such a pole means that the quark propagator has no physical mass states and the quarks are confined \cite{Roberts:1994dr,Gribov:1998kb}. Then, at very low energies \begin{equation} M_q=m_q-{\rm Tr}\Sigma^i_{NJL}(0) \end{equation} where the trace is over flavor, color and spinor indexes. This yields \begin{eqnarray} M_q=m_q+\frac{N_f(N^2-1)Ng^2}{2}\times \\ \int\frac{d^4p}{(2\pi)^4}\Delta_0(p)\frac{M_q}{p^2+M^2_q}. \nonumber \end{eqnarray} We consider the gluon propagator neglecting the mass shift, as this is generally small as shown in \cite{Frasca:2017slg}, \begin{equation} \Delta_0(p)=\sum_{n=0}^\infty\frac{B_n}{p^2+m_n^2}.
\end{equation} Therefore, we have to compute \begin{eqnarray} M_q=m_q+\frac{N_f(N^2-1)Ng^2}{2}\int\frac{d^4p}{(2\pi)^4} \times \\ \sum_{n=0}^\infty\frac{B_n}{p^2+m_n^2}\frac{M_q}{p^2+M^2_q}. \nonumber \end{eqnarray} The corresponding integral can be evaluated exactly when a cut-off $\Lambda$ is used, as usual for Nambu-Jona-Lasinio models, which are generally expected not to be renormalizable \cite{Klevansky:1992qe}. This yields \begin{eqnarray} M_q=m_q+\frac{N_f(N^2-1)Ng^2}{16\pi^2} \sum_{n=0}^\infty\frac{B_nM_q}{2(m_n^2-M_q^2)}\times \nonumber \\ \left[m_n^2\ln\left(1+\frac{\Lambda^2}{m_n^2}\right)-M_q^2\ln\left(1+\frac{\Lambda^2}{M_q^2}\right)\right]. \end{eqnarray} This equation is amenable to a numerical treatment provided that $M_q\ge m_q$ and $M_q\ll\Lambda$, where $\Lambda$ is the nlNJL-model cut-off. The ultraviolet cut-off represents, at least, the boundary of the region where asymptotic freedom starts to set in (generally taken at $\Lambda\approx 1\ {\rm GeV}$). We normalize the mass function by taking $x=m_0/\Lambda$ and $y=M_q/\Lambda$, having set $m_n=(2n+1)m_0$. The mass gap $m_0$ can be assumed to be that of the $\sigma$ meson or f(500), which we fix to $m_0=0.417\ {\rm GeV}$ \cite{Zyla:2020zbs}. Then, the function to study is \begin{eqnarray} \label{eq:y} y=\frac{m_q}{\Lambda}+\kappa\alpha_s\sum_{n=0}^\infty\frac{B_ny}{(2n+1)^2x^2-y^2}\times \nonumber \\ \left[(2n+1)^2x^2\ln\left(1+\frac{1}{(2n+1)^2x^2}\right)-y^2\ln\left(1+\frac{1}{y^2}\right)\right]. \end{eqnarray} From this, we derive the mass function \begin{eqnarray} \mu(\alpha_s,y)=y- \frac{m_q}{\Lambda}-\kappa\alpha_s\sum_{n=0}^\infty\frac{B_ny}{(2n+1)^2x^2-y^2}\times \\ \left[(2n+1)^2x^2\ln\left(1+\frac{1}{(2n+1)^2x^2}\right)-y^2 \ln\left(1+\frac{1}{y^2}\right)\right], \nonumber \end{eqnarray} which we plot in fig.~\ref{fig1} in order to locate its zeros.
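The logarithmic structure above follows from the elementary one-loop Euclidean integral with cut-off. As a cross-check, the snippet below (illustrative values for $m_n$, $M_q$ and $\Lambda$; only a single mode of the gluon propagator) compares the closed form with direct numerical integration after the angular integration has been performed:

```python
import numpy as np
from scipy.integrate import quad

def loop_closed(a, b, L2):
    """Closed form of int d^4p/(2pi)^4 1/((p^2+a)(p^2+b)) with cut-off Lambda^2 = L2."""
    return (a * np.log(1 + L2 / a) - b * np.log(1 + L2 / b)) / (16 * np.pi**2 * (a - b))

def loop_numeric(a, b, L2):
    # angular integration done: d^4p/(2pi)^4 -> s ds / (16 pi^2), with s = p^2
    val, _ = quad(lambda s: s / ((s + a) * (s + b)), 0, L2)
    return val / (16 * np.pi**2)

# m_n = 0.417 GeV (n = 0 mode), M_q = 0.30 GeV and Lambda = 1 GeV are illustrative
mn2, Mq2, L2 = 0.417**2, 0.30**2, 1.0**2
print(loop_closed(mn2, Mq2, L2), loop_numeric(mn2, Mq2, L2))   # the two values agree
```

Summing such terms over $n$ with the weights $B_n$ reproduces the right-hand side of the gap equation, which can then be solved iteratively for $M_q$.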
\begin{figure}[H] \includegraphics[width=.49\textwidth]{surf0}\hfil \includegraphics[width=.49\textwidth]{curves} \caption{Zeros of the quark mass function $\mu$ given by the red curve for the 3D figure (above) and the corresponding profiles with the physical limit in red (below).}\label{fig1} \end{figure} Since a zero curve exists, there is a range of physical parameters where chiral symmetry is broken. At energies higher than 1 GeV, asymptotic freedom starts to set in and quarks retain their bare masses. At increasing $\alpha_s$, the effective quark mass becomes unphysical, overcoming the cut-off. So, we have no solutions and the quark propagator has no physical poles for a free quark. A confinement condition can be straightforwardly obtained by taking $M_q=\Lambda$, the limit of the physical region. This yields \begin{equation} \alpha_s=\min_{q=u,d,s}\frac{1-\frac{m_q}{\Lambda}}{\kappa\xi}, \end{equation} with $\kappa=N_fN(N^2-1)/8\pi$, $\xi=\xi(m_0/\Lambda)$ a function depending only on the mass gap and the UV cut-off, obtainable from eq.(\ref{eq:y}), and $\alpha_s=g^2/4\pi$. \section{Conclusions} We have derived the set of Dyson-Schwinger equations, up to the 2P-functions, for QCD with the Bender-Milton-Savage technique. Then, we were able to solve them in the 't Hooft limit. We recognized that the low-energy limit is given by a non-local Nambu-Jona-Lasinio approximation. Consequently, in the low-energy limit, we were able to show that the model is confining. A confinement condition was also obtained.
\section{Introduction} Exploring active matter has become a popular subject in contemporary physics leading to many new insights into a large variety of intriguing systems. Active systems are ubiquitous in nature, thereby drawing interest from different scientific communities (physics, chemistry, biology, material science, ecology, robotics) and offering a wealth of surprising dynamic phenomena \cite{Ramaswamy2010, Vicsek2012, Romanczuk2012b, Marchetti2013, Bechinger2016, Beer2019}. Moreover, they provide many new challenges for our understanding of non-equilibrium systems. Realizations of active matter systems range from intracellular processes and bacterial suspensions \cite{Ramaswamy2010, Beer2019, Zhang2010, Koch2011}, artificial Janus particles \cite{Walther2013,Nishiguchi2015,Buttinoni2013} to schools of fish and flocks of birds \cite{Katz2011,Cavagna2010}. More recent reviews have focused on the prominent role of models with alignment interaction \cite{Chate2020} and on anisotropic, self-propelled particles \cite{Bar2020} as well as on the large variety of computational approaches to active matter \cite{Shaebani2020} and on a roadmap outlining a multitude of promising directions for the field \cite{Gompper2020}. Among various collective states that characterize active matter, one phenomenon of general interest is meso-scale turbulence. Meso-scale turbulence is reported in various experimental studies, for example for suspensions of \textit{Bacillus subtilis} \cite{Dombrowski2004,Cisneros2007}, \textit{Escherichia coli} \cite{Ishikawa2011,Liu2012} and \textit{Serratia marcescens} \cite{Steager2008}. The main feature (and difference to ordinary inertial turbulence) of meso-scale turbulence in bacterial suspensions is the appearance of a characteristic length scale \cite{Dombrowski2004,Ishikawa2011,Wensink2012a,Bratanov2015,Zhang2009,James2018,James2018b}. 
A continuum model that agrees with experimental findings for wild-type \textit{Bacillus subtilis} suspensions was presented in \cite{Wensink2012a,Dunkel2013a,Dunkel2013b,Heidenreich2016,Reinken2018a}. When solving this model numerically for a broad range of parameters, the observed velocity statistic is close to Gaussian \cite{Ariel2018} in agreement with experimental findings \cite{Wensink2012a}. Recent work on discrete models with self-propelled rods revealed that effective polar alignment is often observed in self-propelled rods with steric repulsion \cite{Grossmann2020, Jayaram2020}. It is important to note that the model for meso-scale turbulence analyzed in this paper is appropriate for the description of polar fluids and does not apply to active turbulence in so called “active nematics”, see e.\ g.\ \cite{Doostmohammadi2018, Alert2020}, wherein a different kind of turbulence without a characteristic length-scale, compare discussion in \cite{Bar2020}, is observed. Current experiments on engineering of vortex lattices in bacterial suspensions \cite{Nishiguchi2018} indeed showed that a typical length-scale is controlling the behavior in meso-scale turbulence. As a result, the continuum model of meso-scale turbulence described in detail above allowed to reproduce the characteristics of these experiments \cite{Reinken2020}. However, in recent experiments on \textit{Bacillus subtilis} suspensions anomalous velocity statistics have been observed. For example, swarms of very short or very long cells (compared to the wild-type) show anomalous velocity statistics \cite{Ilkanaiv2017}. Moreover, adding sublethal concentrations of antibiotics to wild-type swarms produces anomalous velocity statistics \cite{Benisty2015}. The deviations from normal statistics are quantified by measuring the kurtosis $\kappa$ (scaled fourth moment), where $\kappa\neq3$ indicates anomalous statistics. 
Such anomalous statistics are not reported for the theory presented in \cite{Wensink2012a,Dunkel2013b}. Hence, a theory accounting for anomalous statistics in meso-scale turbulent systems is still lacking. We present and analyse a minimal model based on \cite{Wensink2012a,Dunkel2013b}, exhibiting meso-scale turbulence and anomalous statistics. The main idea is summarized as follows: The model introduced in \cite{Wensink2012a,Dunkel2013b} assumes constant density and constant self-propulsion speed. We relax these assumptions by allowing for velocity variations mediated by density variations. This is a very natural assumption, as a dependency of the speed on the density is reported for \textit{Bacillus subtilis} suspensions in several experimental studies \cite{Ariel2018,Sokolov2007}. More specifically, we borrow ideas from motility-induced phase separation (MIPS) \cite{Cates2010,Fily2012,Bialke2013} to model density variations. Combining ideas from meso-scale turbulence and MIPS is intriguing from a general point of view as well. In bacterial suspensions steric interactions (through volume exclusion), alignment (through elongated shapes) and hydrodynamic interactions (through self-propulsion in the surrounding medium) are assumed to be present simultaneously. Considering steric interactions and alignment individually gives rise to MIPS \cite{Cates2010,Fily2012,Bialke2013} and global order in Vicsek-like models \cite{Vicsek1995,Toner1995,Toner1998,Toner2005} respectively, while meso-scale turbulence results from the combination of alignment and hydrodynamics \cite{Wensink2012a,Dunkel2013b,Heidenreich2016,Reinken2018a}. However, a theory aiming to comprehensively describe the dynamics of bacterial suspensions needs to incorporate all three of these effects and the interplay between them. 
Recently, several studies focusing on the interplay of steric interactions and alignment in active matter \cite{VanDamme2019,Grossmann2020,Jayaram2020,Sese-Sansa2018,Shi2018,Geyer2019,Barre2014,Theers2018,VanDerLinden2019} report interesting and sometimes contradicting results. However, hydrodynamic interactions are commonly neglected. Our model features elements from all three of these prominent theories of active matter (MIPS, global order, meso-scale turbulence) and connects them in a minimalist fashion. Hence, our work contributes to the current discussion on how to connect different branches of active matter and provides an insight into the expected dynamics. We remark that in this study we mainly focus on the interplay between MIPS and the finite-wavelength instability arising due to hydrodynamic interactions, while global order will be of less relevance. The structure of the paper is as follows: In the second section we propose a phenomenological model based on continuum models well established in the literature, which combines their central features. In the third section we present a linear stability analysis, hinting at the expected dynamics. In the fourth section we present numerical solutions of our model, sketch a phase portrait and discuss the observed anomalous velocity statistics. \section{Modeling Approach}\label{sec:Modeling} We present our phenomenological model in the following steps: First, we revisit two continuum models. The first model describes MIPS, while the second model describes meso-scale turbulence. Based on these models, we propose a minimal phenomenological model that combines their main features. When discussing continuum models of active matter, it is helpful to keep the microscopic picture in mind. We consider active particles that self-propel with some speed along their individual axes. Coarse-graining gives an averaged polarization, which, multiplied with the speed, coincides with the macroscopic velocity. 
Hence, the polarization plays a dual role of both order parameter and velocity field \cite{Ramaswamy2010,Marchetti2013,Fodor2018a}. \subsection{Revisiting established models}\label{sec:BaseModels} A minimal hydrodynamic model to describe MIPS was presented in \cite{Fily2012,Bialke2013}. It consists of coupled equations for the particle density $\rho$ and polarization density field $\mathbf{p}$ \begin{subequations}\label{eq:MIPS} \begin{align}\label{eq:MIPS_Continuity} \partial_t\rho &= -\nabla \cdot [v(\rho)\mathbf{p}] + D\Delta\rho,\\ \label{eq:MIPS_Pol} \partial_t\mathbf{p} &= -\frac{1}{2}\nabla [v(\rho)\rho] + D\Delta\mathbf{p} - D_r\mathbf{p}, \end{align} \end{subequations} where $D$ and $D_r$ are diffusion coefficients. The polarization density ${\bf p}$ is given by the averaged orientation of the self-propelled particles. The coupling between the density $\rho$ and the polarization density $\mathbf{p}$ is achieved through a density-dependent speed $v(\rho)$, with \begin{equation}\label{eq:v_rho_linear} v(\rho) = v_0 - \zeta\rho \end{equation} modelling a decrease of the self-propulsion speed at very high density (jamming) \cite{Ariel2018,Beer2020}. A transition from a homogeneous density profile to phase separation is encountered for sufficiently high densities (or alternatively a strong enough damping constant $\zeta$). The phase separation can be understood by assuming a slow variation of $\mathbf{p}$ in time and space. Setting the corresponding derivatives in Eq.\ \eqref{eq:MIPS_Pol} to zero and substituting the resulting expression for $\mathbf{p}$ into Eq.\ \eqref{eq:MIPS_Continuity} gives \begin{equation}\label{eq:MIPS_effective} \partial_t\rho = \nabla\cdot\mathcal{D}(\rho)\nabla\rho,\quad \mathcal{D}(\rho) = D+\frac{v(\rho)\left[v(\rho)+v'(\rho)\rho\right]}{2D_r}. \end{equation} Above a critical density, the sign of the effective diffusion coefficient $\mathcal{D}(\rho)$ changes due to $v'(\rho)<0 $, triggering phase separation. 
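This sign change is easy to exhibit numerically. The sketch below (all parameter values are illustrative) locates the density interval where $\mathcal{D}(\rho)<0$ for the linear speed law of Eq.~\eqref{eq:v_rho_linear}:

```python
import numpy as np
from scipy.optimize import brentq

v0, zeta, D, Dr = 1.0, 1.0, 0.01, 1.0      # illustrative parameters

v = lambda rho: v0 - zeta * rho             # density-dependent speed, Eq. (v_rho_linear)

def D_eff(rho):
    # effective diffusion D(rho) = D + v(rho)[v(rho) + v'(rho) rho]/(2 D_r),
    # with v'(rho) = -zeta, so the bracket is v0 - 2 zeta rho
    return D + v(rho) * (v0 - 2 * zeta * rho) / (2 * Dr)

# D_eff changes sign twice between rho = 0 and jamming at rho = v0/zeta
rho_lo = brentq(D_eff, 0.0, 0.75)
rho_hi = brentq(D_eff, 0.75, 1.0)
print(rho_lo, rho_hi)                       # interval where D_eff < 0 (phase separation)
```

Between `rho_lo` and `rho_hi` the effective diffusion is negative and the homogeneous profile is unstable, in line with the MIPS mechanism described above.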
For more details refer to \cite{Fily2012,Bialke2013,Fodor2018a}. Taking a different approach, in \cite{Wensink2012a,Dunkel2013b} the authors present a model which reproduces the statistical features of meso-scale turbulence as observed in dense suspensions of \textit{Bacillus subtilis}. This model was proposed on a phenomenological basis and was later derived from a microscopic microswimmer model \cite{Heidenreich2016,Reinken2018a}. It can be regarded as the combination of a (simplified) Toner-Tu model \cite{Toner1995,Toner1998} and a fourth-order term as in the Swift-Hohenberg equation \cite{Swift1977}. The time-dependent polarization density evolves according to \begin{subequations}\label{eq:Incomp} \begin{align}\label{eq:pIncomp} &\partial_t \mathbf{p} +\lambda_0 (\mathbf{p}\cdot\nabla)\mathbf{p} =-\nabla P -(A+C|\mathbf{p}|^2)\mathbf{p}+\Gamma_0\Delta\mathbf{p}-\Gamma_2\Delta^2\mathbf{p},\\ \label{eq:divFree} &\nabla\cdot\mathbf{p}=0. \end{align} \end{subequations} The density is assumed to be constant, which leads to the incompressibility condition Eq.\ \eqref{eq:divFree} and introduces the Lagrange multiplier $P$ enforcing this condition. Rewriting model \eqref{eq:Incomp} in potential form \begin{equation}\label{eq:PotentialForm} \partial_t \mathbf{p} +\lambda_0 (\mathbf{p}\cdot\nabla)\mathbf{p} =-\frac{\delta\mathcal{F}}{\delta\mathbf{p}}, \quad\nabla\cdot\mathbf{p}=0, \end{equation} with \begin{equation} \mathcal{F} = P(\nabla\cdot\mathbf{p}) + \frac{1}{2}A|\mathbf{p}|^2 + \frac{1}{4}C|\mathbf{p}|^4 + \frac{1}{2}\Gamma_0(\nabla\mathbf{p})^2 + \frac{1}{2}\Gamma_2(\nabla\nabla\mathbf{p})^2 \end{equation} shows that the dynamics of $\mathbf{p}$ is governed by pure relaxational dynamics derived from $\mathcal{F}$ and a convective part $\lambda_0 (\mathbf{p}\cdot\nabla)\mathbf{p}$. For the bulk terms in $\mathcal{F}$ we distinguish between two regimes: For $A>0$ and $C>0$ these terms stabilize a disordered state. 
For $A<0$ and $C>0$ they represent a double-well potential, forcing a nonzero magnitude of the polarization density. Physically, the rotational symmetry is broken spontaneously and a globally ordered state with $|\mathbf{p}|=\sqrt{-A/C}$ is stable in this case. Hence, for $\Gamma_0>0$ and $\Gamma_2=0$ the model \eqref{eq:Incomp} reduces to the Toner-Tu theory. However, if $\Gamma_0<0$ and $\Gamma_2>0$ a finite-wavelength instability is introduced, similar as in the Swift-Hohenberg equation \cite{Swift1977}. This instability destabilizes both the disordered and globally ordered state simultaneously, leading to meso-scale turbulence. Setting $\Gamma_0<0$ is justified by physical arguments. Indeed, model \eqref{eq:Incomp} can be derived from microscopic considerations using a coarse-graining procedure \cite{Heidenreich2016,Reinken2018a}, which indeed leads to $\Gamma_0<0$ due to activity and hydrodynamics. The competition between alignment and hydrodynamics sets the effective length scale of the evolving pattern. In this derivation it also becomes apparent that the coefficients in model \eqref{eq:Incomp} should, in general, also depend on density. \subsection{Extended model} Two major assumptions underlie model \eqref{eq:Incomp}: That density is constant and that the particles propel with constant speed $v_0$ along their orientation. We relax these assumptions and replace them by expressions motivated by model \eqref{eq:MIPS}. First, we assume that the velocity is density-dependent, i.e. we replace the constant speed $v_0$ with a density-dependent speed $v(\rho)$. Such an assumption is very natural in realistic active matter systems as excluded volume as well as collective effects lead to a density-dependent speed \cite{Buttinoni2013,Ariel2018,Sokolov2007,Fily2012}. Next, we have to replace the incompressibility condition by an evolution equation for the density. 
A natural choice is a continuity equation consisting of the divergence of the mass flux and a diffusive term \begin{equation}\label{eq:continuity} \partial_t \rho = -\nabla\cdot[v(\rho)\mathbf{p}]+D\Delta\rho. \end{equation} Moreover, this equation agrees with the ones derived by \cite{Fily2012,Bialke2013} for MIPS (see Eq.\ \eqref{eq:MIPS_Continuity}) as well as the one derived for model \eqref{eq:Incomp} (in a suitably defined limit). In model \eqref{eq:Incomp} the coupling to the (degenerate) density equation is accomplished via the Lagrange multiplier $P$ acting as a pressure. As we replace the incompressibility condition Eq.\ \eqref{eq:divFree} with the continuity equation Eq.\ \eqref{eq:continuity}, an explicit coupling in terms of the density is needed. We choose \begin{equation}\label{eq:pressure} P(\rho) = \frac{1}{2}v(\rho)\rho. \end{equation} Such a term appears naturally when coarse-graining microscopic models that incorporate self-propulsion. Indeed, this term is reported for all comparable systems we are aware of \cite{Fily2012,Bialke2013,Grossmann2020,Jayaram2020,Geyer2019,Speck2014,Speck2015}, see also Eq.\ \eqref{eq:MIPS_Pol}. While the details of the underlying microscopic model and coarse-graining procedure (especially the choice of an appropriate closure) might introduce additional terms to the dynamics of the polarization density $\mathbf{p}$, a term as in Eq.\ \eqref{eq:pressure} will always be present. Moreover, one can think of Eq.\ \eqref{eq:pressure} as a low-order approximation of the pressure, disregarding higher-order coupling between $\rho$ and $\mathbf{p}$. We note that Eq.\ \eqref{eq:pressure} can only formally be regarded as a pressure. Determining the pressure of active fluids is in general a complicated task \cite{Takatori2014,Solon2015a,Solon2015c}, especially when accounting for hydrodynamic interactions.
Alternatively, a virial expansion \cite{Falasco2016} or a treatment as in the Toner-Tu theory \cite{Toner1995, Toner1998} is possible. As briefly mentioned in section \ref{sec:BaseModels}, all coefficients of model \eqref{eq:Incomp} are, in principle, dependent on the density. An appropriate rescaling reduces the possible density-dependent parameters to $\lambda_0,\Gamma_0$ and $A$. Experimental and numerical findings agree that the characteristic length scale set by $\Gamma_0$ in the turbulent regime does not depend on the (overall) density \cite{Sokolov2012}. Hence, we drop that dependency. Furthermore, for simplicity we assume $\lambda_0$ to be independent of density as well. Note though that earlier studies suggest a non-monotone dependence of $\lambda_0$ on $\rho$ \cite{Heidenreich2016}. This leaves $A$ as the only density-dependent parameter. In fact, the transition from a dilute, disordered state to a dense, globally ordered state in the Vicsek model can be explained by a change of sign of $A$ through an increased density. Altogether, our phenomenological model in its most general form is given by \begin{subequations}\label{eq:Dynamics} \begin{align} \partial_t \rho &= -\nabla\cdot[v(\rho)\mathbf{p}]+D\Delta\rho,\\ \partial_t \mathbf{p} +\lambda_0 (\mathbf{p}\cdot\nabla)\mathbf{p} &=-\frac{1}{2}\nabla [v(\rho)\rho] - \left[A(\rho)+C|\mathbf{p}|^2\right]\mathbf{p} +\Gamma_0\Delta\mathbf{p}-\Gamma_2\Delta^2\mathbf{p}. \end{align} \end{subequations} Overall, our model is essentially a minimal model that incorporates the main features of the models presented in the previous section. Moreover, the three instabilities can be tuned independently through the coupling terms and $\Gamma_0$. \section{Stability Analysis}\label{sec:stability} As a first insight into the dynamics expected from model \eqref{eq:Dynamics} we perform a linear stability analysis for the steady states of the model. 
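Complementary to the linear analysis, the full model Eq.~\eqref{eq:Dynamics} can also be integrated directly. The following minimal one-dimensional sketch (the paper itself works in two dimensions; all parameter values are illustrative) uses spectral derivatives and an explicit Euler step, and checks that the continuity equation conserves total mass in a periodic box; the quartic $\Gamma_2$ damping makes the explicit scheme stiff, so a small time step is used:

```python
import numpy as np

# 1D toy discretization of Eq. (Dynamics); all parameter values are illustrative
N, L = 64, 32.0
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)            # spectral wavenumbers

v0, zeta = 1.0, 0.5
D, A, C = 0.1, 0.5, 0.0                            # A > 0, C = 0 as in the text
lam0 = 0.5
G0, G2 = -0.3, 0.01                                # Gamma_0 < 0: finite-wavelength instability

def dxf(f, n=1):                                   # n-th spectral derivative (periodic)
    return np.real(np.fft.ifft((1j * k) ** n * np.fft.fft(f)))

rng = np.random.default_rng(1)
rho = 1.0 + 1e-3 * rng.standard_normal(N)          # perturbed uniform density
p = 1e-3 * rng.standard_normal(N)                  # small random polarization

dt = 0.01
mass0 = rho.sum() * dx
for _ in range(200):                               # explicit Euler with small dt
    vr = v0 - zeta * rho
    drho = -dxf(vr * p) + D * dxf(rho, 2)
    dp = (-0.5 * dxf(vr * rho) - (A + C * p**2) * p
          + G0 * dxf(p, 2) - G2 * dxf(p, 4) - lam0 * p * dxf(p))
    rho, p = rho + dt * drho, p + dt * dp

print(abs(rho.sum() * dx - mass0))                 # total mass conserved (periodic box)
```

Since the density update is a total derivative, the zero mode of `drho` vanishes identically and the total mass is conserved to rounding accuracy.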
Clearly, a trivial steady state is given by $(\rho,\mathbf{p})=(\rho_0,\mathbf{p}_0)$, where $\rho_0$ and $\mathbf{p}_0$ are uniform in space. In the following we distinguish between the case $\mathbf{p}_0=0$ (disorder) and $|\mathbf{p}_0|>0$ (global polar order). Due to the special structure of the stability matrix (see Appendix \ref{app:stability} for details), the dispersion relations for the disordered state can be computed analytically. This is also possible for the polar state. However, finding the dispersion relation, i.e.\ solving for the eigenvalues of the stability matrix, produces lengthy expressions offering little insight. Hence, we only present numerical results for the polar state. \subsection{Disordered State}\label{sec:stab_dis} Linearizing Eq.\ \eqref{eq:Dynamics} around the steady state $(\rho,\mathbf{p}) \equiv (\rho_0,0)$ and expanding perturbations into Fourier modes reveals the dispersion relations \begin{subequations}\label{eq:Dispersion} \begin{align} \label{eq:sigma1} \sigma_1(k) &= -A(\rho_0)-\Gamma_0 k^2-\Gamma_2 k^4,\\ \label{eq:sigma2} \sigma_{2,3}(k) &= \frac{1}{2}\left[-A(\rho_0)-(\Gamma_0+D)k^2-\Gamma_2k^4 \pm \sqrt{r(k)}\right], \end{align} \end{subequations} where $k=|\mathbf{k}|$ is the magnitude of the wavevector $\mathbf{k}=(k_1,k_2)$. See Appendix \ref{app:stability} for details. Furthermore, we introduced \begin{equation} \begin{aligned} r(k) =& A(\rho_0)^2+\left[2A(\rho_0)(\Gamma_0-D)-4\gamma\right]k^2 +\left[2A(\rho_0)\Gamma_2+(\Gamma_0-D)^2\right]k^4\\ &+2(\Gamma_0-D)\Gamma_2 k^6 +\Gamma_2^2 k^8, \end{aligned} \end{equation} where the constant $\gamma$, quantifying the coupling to the density-dependent propulsion speed, is given by \begin{equation}\label{eq:coupling_combined} \gamma = \frac{1}{2}v(\rho_0)\left[v(\rho_0)+v'(\rho_0)\rho_0\right]. \end{equation} The first eigenvalue $\sigma_1(k)$ does not contain any coupling terms nor any contributions from the density equation. 
Furthermore, the corresponding eigenvector is given by \begin{equation} \mathbf{v}_1 = (0,-k_2,k_1), \end{equation} which is independent of density. Hence, $\sigma_1(k)$ solely affects the stability of the polarization density $\mathbf{p}$. Moreover, the same dispersion relation is found in model \eqref{eq:Incomp}, see \cite{Wensink2012a,Dunkel2013b}. Therefore, model \eqref{eq:Dynamics} inherits the finite-wavelength instability of the polarization for sufficiently small $\Gamma_0<0$ from model \eqref{eq:Incomp}. This instability is characterized by a band of unstable modes bounded away from zero, as can be seen in figure \ref{fig:Disp}\textbf{A}. It is straightforward to compute the critical parameter from Eq.\ \eqref{eq:sigma1} as \begin{equation}\label{eq:Gamm0crit} \Gamma_0^c = -\sqrt{4A(\rho_0)\Gamma_2}. \end{equation} The qualitative behavior of the other eigenvalues $\sigma_{2,3}(k)$ cannot be read off directly from Eq.\ \eqref{eq:sigma2}. Instead, performing a small-wavenumber expansion \begin{equation}\label{eq:sigma2approx} \sigma_2(k) \approx -\left(D + \frac{\gamma}{A(\rho_0)}\right)k^2 = -\mathcal{D}(\rho_0)k^2 \end{equation} reveals a long-wavelength instability for $\mathcal{D}(\rho_0)<0$. From this expression and Eq.\ \eqref{eq:coupling_combined} we can calculate critical parameters by specifying $v(\rho)$. The resulting long-wavelength instability is pictured in figure \ref{fig:Disp}\textbf{B}. As expected from our modeling approach, this instability is similar to the one reported for MIPS, see section \ref{sec:BaseModels} and \cite{Fily2012,Speck2015}. Note that $\mathcal{D}(\rho_0)$ coincides with $\mathcal{D}(\rho)$ introduced in Eq.\ \eqref{eq:MIPS_effective} for the choice $\rho=\rho_0$. While both instabilities have been studied in great detail separately, to the authors' knowledge a situation as depicted in figure \ref{fig:Disp}\textbf{C}, where both instabilities are present at the same time, has not been studied yet.
As we will show in the next section, this leads to interesting dynamics. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth, height=\textheight,keepaspectratio]{stability_all_new} \caption{Dispersion relations for the disordered state (\textbf{A}-\textbf{C}) and the polar state in the direction $\mathbf{k}_\perp$ perpendicular to $\mathbf{p}_0$ (\textbf{D}-\textbf{F}). For \textbf{A},\textbf{D} we set $\Gamma_0<\Gamma_0^c$ and $\zeta<\zeta^c$. For \textbf{B},\textbf{E} we set $\Gamma_0>\Gamma_0^c$ and $\zeta>\zeta^c$ and for \textbf{C},\textbf{F} we set $\Gamma_0<\Gamma_0^c$ and $\zeta>\zeta^c$. In all subplots $v(\rho)$ is chosen as Eq.\ \eqref{eq:v_rho_linear}.} \label{fig:Disp} \end{figure} \subsection{Polar State} There is an additional long-wavelength instability of the disordered state for $A(\rho_0)<0$. As discussed in section \ref{sec:BaseModels}, a steady state with polar order, i.e.\ $(\rho,\mathbf{p}) \equiv (\rho_0,\mathbf{p}_0)$ and $|\mathbf{p}_0| = \sqrt{-A(\rho_0)/C}$, emerges in this situation. The stability analysis of the polar state can be carried out similarly to the disordered case. However, the resulting dispersion relations are lengthy and intricate, hampering an intuitive interpretation. Numerical computation of the eigenvalues reveals instabilities similar to the disordered state. There is a finite-wavelength instability for $\Gamma_0\leq\Gamma_0^c$ perpendicular to $\mathbf{p}_0$ (see figure \ref{fig:Disp}\textbf{D}) and a long-wavelength instability above a critical coupling constant, also perpendicular to $\mathbf{p}_0$ (see figure \ref{fig:Disp}\textbf{E}). Finally, both instabilities can be present as in the disordered case, see figure \ref{fig:Disp}\textbf{F}. Interestingly, the numerically computed critical values for both instabilities are the same as in the disordered case. Hence, the disordered and polar states lose stability simultaneously, indicating the existence of a new dynamical attractor.
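Before turning to simulations, the linear predictions above can be checked numerically. The following sketch (Python/NumPy, with illustrative parameter values rather than those used later in the simulations) evaluates Eqs.\ \eqref{eq:sigma1} and \eqref{eq:sigma2} and verifies that $\sigma_1$ first touches zero at $\Gamma_0^c$ from Eq.\ \eqref{eq:Gamm0crit}, while $\sigma_2$ becomes positive at small $k$ once $\mathcal{D}(\rho_0)=D+\gamma/A(\rho_0)<0$:

```python
import numpy as np

# Dispersion relations Eqs. (sigma1)-(sigma2) for the disordered state;
# parameter values are illustrative, not those used in the simulations.
def sigma1(k, A0, G0, G2):
    return -A0 - G0*k**2 - G2*k**4

def sigma2(k, A0, G0, G2, D, gamma):
    # larger branch of Eq. (sigma2); r(k) as defined in the text
    r = (A0**2 + (2*A0*(G0 - D) - 4*gamma)*k**2
         + (2*A0*G2 + (G0 - D)**2)*k**4
         + 2*(G0 - D)*G2*k**6 + G2**2*k**8)
    return (0.5*(-A0 - (G0 + D)*k**2 - G2*k**4 + np.sqrt(r + 0j))).real

A0, G2, D = 1.0, 1.0, 1.0
k = np.linspace(1e-4, 2.0, 4000)

# finite-wavelength instability: sigma1 first touches zero at Gamma_0^c
G0c = -np.sqrt(4*A0*G2)                      # Eq. (Gamm0crit)
print(sigma1(k, A0, G0c, G2).max())          # ~ 0, attained at k = 1

# long-wavelength instability: sigma2 > 0 at small k once D + gamma/A0 < 0
gamma = -2.0
print(sigma2(0.01, A0, 0.0, G2, D, gamma))   # ~ -(D + gamma/A0) k^2 > 0
```

For $\gamma=-2$ the effective diffusion coefficient is $D+\gamma/A(\rho_0)=-1$, so the small-$k$ growth rate is positive, in line with Eq.\ \eqref{eq:sigma2approx}.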
\section{Numerical solution of the model equations}\label{sec:pp} We now explore the dynamics produced by model \eqref{eq:Dynamics} numerically. For this purpose, we have to specify the coupling terms $v(\rho)$ and $A(\rho)$. As the model is complex, we aim for simple coupling terms in order to ease the numerical burden and reduce the number of possible parameters. Hence, we choose $v(\rho) = v_0 - \zeta\rho$, i.e.\ we study the linear case of Eq.\ \eqref{eq:v_rho_linear}. A monotone decrease with density is motivated by crowding effects, i.e.\ self-propulsion is counteracted by steric hindrance in dense areas. The linear model Eq.\ \eqref{eq:v_rho_linear} with coefficient $\zeta$ was discussed in \cite{Fily2012,Bialke2013,Speck2014,Speck2015} and derived as a first-order approximation from microscopic considerations. Other monotonically decreasing functions of density have been studied in the literature as well (see \cite{Cates2010} and \cite{Geyer2019} for an exponential or hyperbolic tangent dependence, respectively). In addition, to further simplify the analysis, we choose $A(\rho) \equiv A > 0$. While such a choice might seem arbitrary at first glance, complex non-equilibrium dynamics can be observed for sufficiently small $\Gamma_0$, even for $A>0$, due to local shear stresses. A thorough analysis of the influence of the potential terms with coefficients $A$ and $C$ on the dynamics of model \eqref{eq:Incomp} can be found in \cite{Dunkel2013b}. Therein, the authors conclude that the main difference is an absence of jets for $A>0$. From a general point of view, the choice $A(\rho) \equiv A > 0$ disregards the polar state and its effects on the dynamics, which allows us to focus on the interplay between meso-scale turbulence and MIPS and to reduce the number of parameters by setting $C=0$. First, we will present a numerical phase portrait. As the model includes several parameters, we have to restrict ourselves to a low-dimensional cut in parameter space.
We study the model in the space spanned by $\Gamma_0$ and $\zeta$. These parameters can be used to control the instabilities independently. Furthermore, we can compare critical values computed numerically with the ones found from the stability analysis in section \ref{sec:stab_dis}. The phenomenological parameters can be related to physical properties by examining microscopic models. While $\Gamma_0$ is determined by the characteristics of the surrounding fluid and the activity (see \cite{Heidenreich2016,Reinken2018a}), the parameter $\zeta$ depends on the details of the repulsive interactions (see \cite{Bialke2013}). We then give a qualitative description of the dynamical phases encountered when the different instabilities are present. Finally, we discuss the anomalous velocity statistics observed in more detail. \subsection{Phase Portrait}\label{sec:phase_portrait} To obtain a phase portrait, we numerically solve Eq.\ \eqref{eq:Dynamics} for slightly perturbed homogeneous initial conditions. The exact simulation setup can be found in Appendix \ref{app:phase_identifiers}; details on the numerical implementation are provided in Appendix \ref{app:numerics}. To distinguish phases numerically, we introduce two quantifiers: the enstrophy as a measure for the presence of vortices and the modality of the density distribution to detect clustering. Details can be found in Appendix \ref{app:phase_identifiers}. Using these indicators, the phase portrait in figure \ref{fig:phase_portrait} is calculated numerically, where the different colors correspond to the phases. Phase boundaries expected from the linear stability analysis are depicted as yellow lines. They can be obtained by calculating critical parameters, which we already determined for $\Gamma_0^c$ in Eq.\ \eqref{eq:Gamm0crit}.
Similarly, the critical damping parameter $\zeta^c$ can be obtained by using Eq.\ \eqref{eq:sigma2approx} and Eq.\ \eqref{eq:v_rho_linear}, giving \begin{equation}\label{eq:zetacrit} \zeta^c = \frac{1}{4\rho_0}\left(3v_0 - \sqrt{v_0^2-16A(\rho_0)D}\right)\approx \frac{v_0}{2\rho_0}, \end{equation} where the last approximation holds if the product $A(\rho_0)D$ is small. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{phase_portrait_regions.png} \caption{Phase portrait in the $\zeta$-$\Gamma_0$ plane. Phases are color coded: grey = disordered (D), green = Isotropic Turbulence (IT), red = Motility-Induced Clustering (MIC) and blue = Isotropic Turbulence with Clustering (ITC). Additionally, region ITC is subdivided into three subregions. Simulation parameters are listed in table \ref{tab:sim_parameters} in Appendix \ref{app:sim_parameters}.} \label{fig:phase_portrait} \end{figure} \subsection{Qualitative phase descriptions}\label{sec:pp_simple} Model \eqref{eq:Dynamics} produces a wealth of new dynamics, which cannot be covered entirely within this work. We therefore only give a qualitative description of the phases characterized by our coarse numerical measures.\\ \textit{Disordered State} We start our analysis of the phase portrait around the disordered state $(\rho,\mathbf{p}) = (\rho_0,\mathbf{0})$. As expected from the stability analysis, the disordered state is stable for $\zeta<\zeta^c$ and $\Gamma_0>\Gamma_0^c$, which encompasses region $D$ of the phase diagram in figure \ref{fig:phase_portrait}.
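As a consistency check of Eq.\ \eqref{eq:zetacrit}: $\zeta^c$ should be precisely a root of the effective diffusion coefficient $\mathcal{D}(\rho_0)=D+\gamma/A(\rho_0)$, with $\gamma$ from Eq.\ \eqref{eq:coupling_combined} evaluated for the linear propulsion law. A short numerical sketch (illustrative parameter values):

```python
import numpy as np

# Consistency check of Eq. (zetacrit): zeta^c is a root of the effective
# diffusion coefficient D_eff = D + gamma/A(rho_0) for v(rho) = v0 - zeta*rho.
# Parameter values are illustrative.
v0, rho0, A0, D = 1.0, 1.0, 1.0, 0.01

def D_eff(zeta):
    v = v0 - zeta*rho0                 # v(rho_0)
    gamma = 0.5*v*(v - zeta*rho0)      # Eq. (coupling_combined) with v' = -zeta
    return D + gamma/A0

zeta_c = (3*v0 - np.sqrt(v0**2 - 16*A0*D))/(4*rho0)
print(zeta_c, D_eff(zeta_c))           # ~ 0.521 and ~ 0; close to v0/(2 rho0)
```

For small $A(\rho_0)D$ the computed root indeed lies close to the approximation $v_0/(2\rho_0)$.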
We do not include snapshots of the dynamics in figure \ref{fig:snapshots}, since there are no notable dynamics or features to report.\\ \begin{figure}[!h] \centering \includegraphics[width=\linewidth, height=\textheight,keepaspectratio]{snapshots_updated} \caption{Typical snapshots of the dynamics for different points in the phase portrait: \textbf{A} Isotropic Turbulence; \textbf{B},\textbf{D} Motility-Induced Clustering and Motility-Induced Phase Separation; \textbf{C},\textbf{E} Isotropic Turbulence with Clustering. The rescaled density values are given by the colorbar. Streamlines of the mass flux are only plotted in the dilute phase for better visibility. Snapshots in \textbf{A}-\textbf{C} are taken at $t=30$, showing quasi-static states. In \textbf{D} and \textbf{E} snapshots at times $t=30,70,150$ are shown to visualize long-time dynamics without and with turbulence, respectively. Simulation parameters are detailed in Appendix \ref{app:sim_parameters}.} \label{fig:snapshots} \end{figure} \textit{Isotropic Turbulence} For $\zeta\ll\zeta^c$ and $\Gamma_0<\Gamma_0^c$, Isotropic Turbulence is observed (region IT in figure \ref{fig:phase_portrait}). This state is governed by vortex-like structures which split and merge but exhibit a characteristic length scale, see figure \ref{fig:snapshots}\textbf{A}. This length scale can be obtained from the dominant mode of Eq.\ \eqref{eq:sigma1}, which reveals $\Lambda=2\pi \sqrt{-2\Gamma_2/\Gamma_0}$. Numerically, the length scale manifests itself as a dip in the spatial (time averaged) velocity correlation function and as a peak in the power spectrum (not plotted here, see \cite{Wensink2012a,Dunkel2013b}) as these quantities are linked by the Wiener-Khinchin theorem \cite{frisch1995turbulence}. The density stays almost constant throughout the simulation (narrow distribution around $\rho_0$, see figure \ref{fig:snapshots}\textbf{A} and Appendix \ref{app:phase_identifiers}).
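The expression for $\Lambda$ follows from maximizing $\sigma_1(k)$ over $k$; a quick numerical check of this dominant-mode calculation (illustrative parameter values):

```python
import numpy as np

# The fastest-growing mode of Eq. (sigma1) sets the vortex scale Lambda;
# illustrative parameters with Gamma_0 below the critical value.
A0, G0, G2 = 1.0, -3.0, 1.0
k = np.linspace(1e-3, 3.0, 300001)
k_star = k[np.argmax(-A0 - G0*k**2 - G2*k**4)]   # maximizer of sigma_1(k)
Lambda = 2*np.pi*np.sqrt(-2*G2/G0)               # length scale from the text
print(k_star, 2*np.pi/Lambda)                    # both ~ sqrt(-G0/(2 G2))
```

The numerically found maximizer agrees with $2\pi/\Lambda=\sqrt{-\Gamma_0/(2\Gamma_2)}$ to grid resolution.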
As briefly discussed in the introduction of this section, we label this state as Isotropic Turbulence (IT) to account for the absence of jets commonly found in bacterial turbulence, see \cite{Dunkel2013b}.\\ \textit{Motility-Induced Clustering and Motility-Induced Phase Separation} Taking $\zeta\geq\zeta^c$ and $\Gamma_0\gg\Gamma_0^c$ leads to Motility-Induced Clustering (MIC) and Motility-Induced Phase Separation (MIPS), to be found in region MIC of the phase portrait. The dynamics are characterized by the emergence of dense clusters with $\mathbf{v}=0$ surrounded by a dilute phase, see figure \ref{fig:snapshots}\textbf{B}. We term the generic case MIC, but refer to MIPS when clusters coarsen over time, eventually reaching a completely phase-separated state, see figure \ref{fig:snapshots}\textbf{D}. In MIPS, clusters have an almost perfectly spherical shape and their number decreases monotonically.\\ \textit{Isotropic Turbulence with Clustering} In the lower right corner of the phase portrait (region ITC), a combination of the two states discussed previously is encountered. To be precise, we observe a phase separation into a dense phase with $\mathbf{v}=0$ and a dilute phase with $\mathbf{v}\neq0$. The dilute phase shows dynamics similar to isotropic turbulence. That is, we encounter vortices with a characteristic length scale in the dilute part of the simulation domain. Snapshots can be seen in figure \ref{fig:snapshots}\textbf{C} and \textbf{E}. Note that the interfaces between dilute and dense phases are highly irregular. Furthermore, we observe fluctuations in the number of clusters and their shape.\\ Most of the dynamics discussed previously can be expected from the linear stability analysis. However, there are two noteworthy exceptions which we label as regions $\text{ITC}_2$ and $\text{ITC}_3$ in figure \ref{fig:phase_portrait}. To understand the dynamics in these regimes, we have to study the nucleation and coarsening processes in more detail.
\subsubsection{Nucleation through turbulence} Insights into region $\text{ITC}_2$ can be gained by studying the nucleation of clusters. Nucleation and coarsening in MIPS are well studied in the literature, see for example \cite{Speck2014,Speck2015,Gonnella2015,Stenhammar2014,Wittkowski2014,Stenhammar2013,Patch2017}. A central result is that these processes in MIPS are quite similar to passive gas-liquid phase separation. As this classical transition is known to be a first-order phase transition (with the critical point being a notable exception), the same applies to MIPS, see \cite{Levis2017,Solon2018a} for an extensive study. Accordingly, clustering and phase separation can occur via two different mechanisms: either nucleation and growth (in the metastable region) or spinodal decomposition (in the spinodal region). In the latter case there is no energy barrier to form a new phase. Hence, small perturbations start growing almost instantly. Therefore, the boundaries of the spinodal region, also known as the spinodal, can be detected by a linear stability analysis. In our case this corresponds to the vertical yellow line in figure \ref{fig:phase_portrait}. To the right of that line we observe spinodal decomposition, either with or without the presence of turbulence, see figure \ref{fig:snapshots_nucleation}\textbf{A}. \begin{figure}[!h] \centering \includegraphics[width=\linewidth, height=\textheight,keepaspectratio]{nucleation} \caption{Nucleation dynamics. \textbf{A} Spinodal decomposition precedes turbulence (region $\text{ITC}_1$). \textbf{B} Nucleation (and growth) is triggered by density inhomogeneities due to the turbulent motion (region $\text{ITC}_2$). Depicted are the same simulation runs as in figure \ref{fig:snapshots} \textbf{D} and \textbf{C} respectively at $t=4,20,24$.} \label{fig:snapshots_nucleation} \end{figure} However, the spinodal region is accompanied by a metastable region. There, perturbations have to overcome an energy barrier to form nuclei.
In classical gas-liquid phase separation this energy barrier can be overcome over time by thermal fluctuations. As this is not possible in our model, we do not observe nucleation and growth in the absence of turbulence. In contrast, the turbulent motion for $\Gamma_0<\Gamma_0^c$ leads to local density inhomogeneities, which can overcome the energy barrier and trigger nucleation. This is what is found in region $\text{ITC}_2$ and illustrated in figure \ref{fig:snapshots_nucleation}\textbf{B}. At $t=4$, no clustering is observed, whereas for spinodal decomposition clusters appear throughout the entire simulation domain, see figure \ref{fig:snapshots_nucleation}\textbf{A}. Turbulence sets in at $t\approx20$, which leads to the formation of a few nuclei at random sites, see $t=24$. We want to point out that this is a surprising result. Since inertial turbulence is associated with mixing, one would expect turbulence to inhibit nucleation. This mechanism offers a possible explanation for the shape of the phase boundary between regions IT and $\text{ITC}_2$. Clearly, nucleation and growth are only possible in the metastable regime. The extent of the metastable region can be numerically estimated by checking for hysteresis. We initiate simulation runs either with a homogeneous density profile or with a completely phase-separated state consisting of a single droplet (in the non-turbulent regime). The lower boundary of the metastable region is reached when the droplet loses stability. From our simulations this point can be estimated to be at $\zeta_b \approx 0.68$. The upper boundary is provided by the spinodal at $\zeta^c=1.25$. While checking for hysteresis, we made another important observation. Close to the spinodal, a droplet with slightly higher density than the average density $\rho_0$ is sufficient to trigger nucleation. At the binodal, only droplets with a density close enough to the maximal density are stable.
Additionally, the maximal density increases when decreasing $\zeta$ due to Eq.\ \eqref{eq:v_rho_linear}. Hence, when moving away from the spinodal, larger density inhomogeneities have to be provided to trigger nucleation. Furthermore, from our simulations we observe that the amplitude of density variations, i.e.\ $\rho_{max}-\rho_{min}$, is proportional to $\Gamma_0^c-\Gamma_0$, see Appendix \ref{app:di}. Hence, for $\Gamma_0\approx\Gamma_0^c$, only small density inhomogeneities are observed. These are enough to trigger nucleation close to the spinodal, but are not sufficient close to the binodal. Altogether, these observations explain the shape of the phase boundary in the metastable regime of the phase portrait. \subsubsection{Altered Ostwald Ripening} We will now focus on region $\text{ITC}_3$ of figure \ref{fig:phase_portrait}, i.e.\ we want to answer the question of why we observe enstrophy above the critical value for the onset of turbulence $\Gamma_0^c$. To explain this, we focus on the coarsening kinetics. Coarsening in MIPS is similar to Ostwald ripening, see \cite{Cates2010,Gonnella2015,Stenhammar2014}. That is, clusters grow at the expense of smaller clusters, which dissolve and redeposit onto larger ones. What that process typically looks like for classical MIPS (model \eqref{eq:MIPS}) is shown in figure \ref{fig:snapshots_interface}\textbf{A}. During the dissolution process, the mass flux streamlines (which indicate the direction of mass transport) are perpendicular to the dissolving surface. Almost immediately after the cluster disappears, the streamlines rearrange, leaving no trace of the dissolved cluster, see $t=19$. \begin{figure}[!h] \centering \includegraphics[width=\linewidth, height=\textheight,keepaspectratio]{interface_instabilities} \caption{Ostwald Ripening without (\textbf{A}) and with (\textbf{B}) the convective term $\lambda_0(\mathbf{p}\cdot\nabla)\mathbf{p}$.
Streamlines are perpendicular to the surface of the dissolving cluster for \textbf{A} whereas they are tangential in \textbf{B}. The remaining parameters are chosen as listed in table \ref{tab:sim_parameters_interface} in Appendix \ref{app:sim_parameters}.} \label{fig:snapshots_interface} \end{figure} However, the presence of the convective term $\lambda_0(\mathbf{p}\cdot\nabla)\mathbf{p}$ in our simulations (see model \eqref{eq:Dynamics}) seems to alter the dissolution dynamics. The snapshots in figure \ref{fig:snapshots_interface}\textbf{A} are produced by setting $\lambda_0=0$, whereas the snapshots in figure \ref{fig:snapshots_interface}\textbf{B} are obtained for $\lambda_0=3$ (all other parameters are unchanged). For the latter choice, the streamlines appear tangential to the cluster surface, leading to a vortex after the cluster disappears. As $\lambda_0=3$ is the parameter used to compute the phase diagram, this behaviour is observed in the entire region ITC. In region $\text{ITC}_3$ (as opposed to regions $\text{ITC}_1$ and $\text{ITC}_2$), these vortices are transient, i.e.\ they vanish some time $\Delta t$ after the cluster has dissolved. This is expected as the finite-wavelength instability is inactive in this regime, i.e.\ turbulence is not self-sustained. However, as other clusters dissolve, new vortices are formed constantly, leading to an increased enstrophy over a long time, see figure \ref{fig:add_figs} in Appendix \ref{app:di}. While focusing on coarsening dynamics, we made another interesting observation concerning the phase portrait: For some simulation runs, there appears to be no coarsening (at least on the simulation time scale). That is, the number of clusters does not decrease nor does the size of the largest cluster increase, see Appendix \ref{app:coarsening} for details. The dynamics for these cases are shown in figure \ref{fig:snapshots}\textbf{B} and \textbf{C} without and with the presence of turbulence respectively. 
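The enstrophy invoked here as an indicator of vortical motion (see also Appendix \ref{app:phase_identifiers}) is half the mean-square vorticity of the flow field. A minimal sketch of its computation on a periodic grid, tested on a Taylor-Green field with known vorticity (the test field is illustrative and independent of the model equations):

```python
import numpy as np

# Enstrophy (mean-square vorticity / 2) of a 2D velocity field on a periodic
# grid, using spectral derivatives; the Taylor-Green test field is illustrative.
def enstrophy(vx, vy, L=2*np.pi):
    n = vx.shape[0]
    kvec = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky = np.meshgrid(kvec, kvec, indexing='ij')
    d = lambda f, kk: np.real(np.fft.ifft2(1j*kk*np.fft.fft2(f)))
    omega = d(vy, kx) - d(vx, ky)      # vorticity
    return 0.5*np.mean(omega**2)

n = 128
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
vx, vy = np.cos(X)*np.sin(Y), -np.sin(X)*np.cos(Y)   # vorticity -2 cos(x)cos(y)
print(enstrophy(vx, vy))   # 0.5 (exact for this single-mode field)
```

For the Taylor-Green field the vorticity is $-2\cos x\cos y$, so the mean enstrophy is exactly $1/2$, which the spectral computation reproduces to machine precision.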
Altogether, the linear stability analysis provides useful intuition about the expected phases. However, it fails to cover metastable regimes and nonlinear effects. Nevertheless, these effects and the apparent arrest of coarsening pose an intriguing research opportunity, which we will pursue in the future. \subsection{Anomalous velocity statistics}\label{sec:anomalous} The main motivation to study the combined model \eqref{eq:Dynamics} was to observe and explain anomalous velocity statistics. To quantify the deviations from Gaussian statistics, we compute the kurtosis $\kappa$ of the standardized velocities $\hat v_i = (v_i-\overline{\langle v\rangle})/\sigma(v_i)$, where $\overline{\langle v\rangle}$ and $\sigma(v_i)$ are the mean and the standard deviation of the velocity component $v_i$, respectively. The kurtosis coincides with the fourth moment for a standardized distribution. In the lower left corner of figure \ref{fig:phase_portrait_kurtosis} (almost) normal statistics are observed, i.e.\ a kurtosis $\kappa\approx3$ is reported. This is not surprising since this region coincides with the turbulent regime (region IT in figure \ref{fig:phase_portrait}), where we expect normal statistics as in the incompressible model \eqref{eq:Incomp}. However, strongly anomalous statistics ($\kappa\geq6$) are reported for a large part of the phase portrait. The anomalous region matches the clustering phases (ITC and MIC) of figure \ref{fig:phase_portrait}, indicating that the deviation from normal statistics stems from clustering. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{phase_portrait_kurtosis} \caption{Kurtosis in the $\zeta$-$\Gamma_0$ plane. For $\zeta<\zeta^c$ and $\Gamma_0>\Gamma_0^c$ no velocity statistics are computed as $\mathbf{v}\equiv0$ (up to numerical errors).
Simulation runs with a kurtosis higher than 6 are yellow.} \label{fig:phase_portrait_kurtosis} \end{figure} Indeed, that hypothesis is supported by investigating the velocity statistics in more detail. Compared to the (almost) normal statistics in the IT regime, a clear peak around zero is visible in the ITC phase in figure \ref{fig:anomalous}\textbf{A}. This peak can be explained by a subpopulation argument: Computing the velocity statistics in the dense clusters and dilute, turbulent regimes separately reveals a clear split, see figure \ref{fig:anomalous}\textbf{B}. Details on the thresholding can be found in Appendix \ref{app:coarsening}. As expected, the statistics in the dilute part are similar to the ones reported for IT, whereas in the dense phase $\mathbf{v}\approx 0$. Combining both distributions gives the blue curve in figure \ref{fig:anomalous}\textbf{A}. Note that the statistics in the dilute phase are not perfectly Gaussian. The offset could be due to the interface between phases and the interactions of turbulence and clustering. Moreover, the statistics in the IT phase already show a slight deviation from normal statistics, i.e.\ they are only approximately Gaussian. \begin{figure}[!h] \centering \includegraphics[width=\linewidth, height=\textheight,keepaspectratio]{anomalous_rescaled_new} \caption{Velocity statistics \textbf{A} Typical distribution functions of the standardized velocity components for Isotropic Turbulence (IT) and Isotropic Turbulence with Clustering (ITC) and a normal distribution as comparison. \textbf{B} Velocity distribution for the dense and dilute regions (dotted lines) for Isotropic Turbulence with Clustering. Data obtained as described in Appendix \ref{app:sim_parameters}.} \label{fig:anomalous} \end{figure} Furthermore, we want to point out that figure \ref{fig:anomalous}\textbf{B} was produced in the spinodal regime (region $\text{ITC}_1$), i.e.\ when clusters form due to spinodal decomposition. 
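The subpopulation picture can be mimicked with synthetic data: superposing a narrow velocity component (clusters, $\mathbf{v}\approx0$) and a unit-variance Gaussian component (turbulent dilute phase) yields a kurtosis well above the Gaussian value $3$. A sketch with an illustrative $50\%$ dense-area fraction:

```python
import numpy as np

# Synthetic two-population velocity sample: a narrow component (clusters,
# v ~ 0) plus a unit Gaussian (turbulent dilute phase); fractions illustrative.
rng = np.random.default_rng(0)
n, phi = 200_000, 0.5                      # phi = dense-area fraction
dense = rng.normal(0.0, 0.02, int(phi*n))
dilute = rng.normal(0.0, 1.0, n - int(phi*n))
v = np.concatenate([dense, dilute])
v_hat = (v - v.mean())/v.std()             # standardized velocities
kurt = np.mean(v_hat**4)                   # kurtosis = fourth moment
print(kurt)   # ~ 6 for phi = 0.5, well above the Gaussian value 3
```

For equal fractions this toy mixture gives a kurtosis close to $6$, of the same order as the anomalous values in figure \ref{fig:phase_portrait_kurtosis}, even though neither component is individually anomalous.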
The subpopulation argument works less well in the metastable regime, i.e.\ when nucleation is triggered by turbulence (region $\text{ITC}_2$). We speculate that in this case the interactions between clustering and turbulence are more pronounced, leading to stronger correlations. Also note that the transition from normal to anomalous statistics gets smeared out for lower values of $\zeta$. This hints at a possible continuous transition at the nucleation point. Altogether, the statistical properties of the system in the metastable regime require a more fundamental analysis and deeper understanding of the interactions in the competing processes. \section{Conclusion, Discussion and Outlook}\label{sec:discussion} We have presented and studied a phenomenological model combining ideas from MIPS and a fourth-order theory proposed to describe active turbulence. The underlying theories are distinct in the type of main instability they describe. While MIPS is driven by a global (long-wavelength) instability, meso-scale turbulence is characterized by a specific length scale (short-wavelength instability). Our model inherits both instabilities, which results in rich dynamics and an interesting interplay between the two. While we showed that turbulence can trigger nucleation, the effect of turbulence on the coarsening process is not yet clear. Important alterations concerning the shape of the phase boundaries, fluctuations and cluster fluidity are evident from our simulations. However, these aspects need to be investigated further to quantify their importance and physical origin. Furthermore, we observed situations where coarsening appears to be frozen on a certain length scale. As this happens also in the absence of turbulence, we speculate that such an arrested phase separation might be connected to the loss of pure relaxation dynamics by the convective term.
Spatial segregation into static clusters surrounded by swarming bacteria was reported for colonies of \textit{Bacillus subtilis} when adding (sublethal concentrations of) antibiotics in \cite{Benisty2015}. Motile cells are diluted, resulting in a lower density of swarming bacteria. Our simulations show qualitatively similar results as clusters with $\mathbf{v}=0$ are surrounded by a dilute, turbulent phase. Moreover, we observe anomalous statistics with a kurtosis up to 6 or larger, which was also reported in \cite{Benisty2015}. Furthermore, a recent experimental study \cite{Grobas2020} indicates that the coexistence of swarming dynamics and MIPS might explain the stress-induced transition to biofilm. Connecting phenomenological transport coefficients with experimentally accessible parameters is not straightforward. The derivation of model \eqref{eq:Incomp} presented in \cite{Heidenreich2016,Reinken2018a} provides guidance for most parameters. However, obtaining the exact form of $v(\rho)$ for biological systems from microscopic considerations is fairly complicated. Alternatively, the local $v(\rho)$ can be fitted to (global) experimental data reported in \cite{Ariel2018,Aranson2007} for example. This data suggests a non-monotone dependence of velocity on density, in contrast to the simple linear assumption we used in our numerical study. Nevertheless, preliminary simulations show that our general results still hold for more complicated functions $v(\rho)$. Our phenomenological approach allows us to understand anomalous statistics in meso-scale turbulent systems without specifying the microscopic details. Hence, our model might be applicable for different experimental setups. For example, anomalous velocity statistics were reported in \cite{Ilkanaiv2017} (different aspect ratio of mutated \textit{Bacillus subtilis}) and \cite{Beer2020} (monolayer swarming of \textit{Bacillus subtilis} for low density).
However, different mechanisms could be underlying these observations. For example, a chemical response could lead to velocity variations while leaving the density unchanged. Such a situation could possibly be modelled by allowing $v(\rho)$ to depend on an external scalar field instead of the density. Altogether, our results provide a simple explanation for how anomalous statistics can arise in meso-scale turbulence. While addressing that topic, new questions are raised concerning the role of turbulence in nucleation and coarsening. This poses a challenging research opportunity we will pursue in the future. \section*{Acknowledgments} We are grateful to Henning Reinken, Michael Wilczek and Nir Gov for helpful discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through grants HE 5995/3-1 (SH, VMW and AB), BA 1222/7-1 (MB and GA) and SFB 910 (projects B4 (HS) and B5 (MB)). GA and AB are thankful for partial support from the Israel Science Foundation grant 373/16. \clearpage
\section{Introduction} With increasing ease of collecting data and low-cost storage, there is increasing interest in using data-enabled methods to develop models for prediction and control~\cite{abraham2017model,mamakoukas2021derivative,hewing2020learning, asadi2021gaussian,piche2000neural,kabzan2019learning,kocijan2004gaussian}. Such models can be optimized to best fit the data and methods are also available to estimate the error bounds on the predictions, e.g., using predicted time derivatives of the observables~\cite{mamakoukas2021derivative}. However, the conditions under which such data-enabled models can achieve sufficient precision remain unclear. A major challenge is that the model (i.e., the relationship between the input and the measurable outputs) can be dependent on the system's internal states, which are hidden in the sense that they are not directly measured, nor can they be inferred using standard observer designs, since these require prior knowledge of the system dynamics. Several approaches are available to address the lack of direct access to the hidden states. One approach is to represent the dynamics through Markov models with a predefined number of hidden states, and then minimize the model prediction error~\cite{tarbouriech2020active,yoon2019hidden,pohle2017selecting}. A difficulty is that the optimal selection of the number of hidden states can be computationally expensive, and there is no guarantee that the resulting models will achieve the desired precision. A second class of approaches to handle the lack of direct access to the hidden states is to model the system dynamics (flow) in a lifted observable space (with generalized functions of the observables) using Koopman operator theory \cite{schmid2008dynamic,mezic2005spectral}. Recent techniques include sparse identification of nonlinear dynamical systems (SINDy)~\cite{brunton2016discovering} and linearization via Dynamic Mode Decomposition (DMD)~\cite{kutz2016dynamic}.
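The operator-regression idea underlying DMD can be sketched in a few lines: given snapshot pairs of a dynamical system, the best-fit one-step linear operator is obtained by least squares. A minimal sketch on toy linear data (illustrative; not taken from the cited works):

```python
import numpy as np

# Minimal DMD-style regression: recover the one-step operator of a toy
# linear system from snapshot pairs (illustrative, not from the cited works).
rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
snaps = [rng.standard_normal(2)]
for _ in range(50):
    snaps.append(A_true @ snaps[-1])
X = np.array(snaps).T                          # states as columns
A_dmd = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # least-squares fit x_{k+1} = A x_k
print(np.round(A_dmd, 6))                      # recovers A_true
```

Since the toy data are exactly linear, the regression recovers the true operator; for nonlinear systems the same regression is performed on a dictionary of observables, which raises the selection issue discussed next.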
Nevertheless, with a finite number of states, there is uncertainty about how to select a sufficient set of generalized observable functions to achieve a specified level of prediction precision. A third class of approaches is to use time history of the input and output data to find forward models, e.g., with (i)~transfer function models in the frequency domain~\cite{devasia2017iterative,yan2021mimo}; (ii)~autoregressive models with exogenous input (ARX)~\cite{ljung1987theory} as well as nonlinear ARX (NARX)~\cite{kocijan2004gaussian,pham2010hybrid}; (iii)~time-delayed information in the Koopman operator framework~\cite{kamb2020time}; and (iv)~fitting a relation between the time-delayed output data and the inverse input \cite{butterworth2012analysis,blanken2020kernel,aarnoudse2021control}. Again, determining the type of data needed to capture the input-output relationship (with high precision) when models are not available a priori remains uncertain. When the precision of the inverse is not sufficient, it can be improved using iterative techniques, with the inverse of the plant considered as the learning operator \cite{ghosh2001iterative,fine2009model,teng2015comparison,spiegel2021iterative}. Nevertheless, increasing the precision of the inverse model can improve iterative learning control (ILC) convergence. The goal of this article is to identify the type of output data needed to develop inverse (output-to-input) operators, with a desired level of precision. Rather than the two-step process of first learning forward models and second using model-predictive control (MPC) to optimally select the control input, the proposed approach seeks to solve the inverse problem of directly finding the input for a given output, e.g., similar to~\cite{dev96ac,willems2005note}. In particular, the relative degree of the system is used to identify the number of time derivatives that need to be added to input-output data to facilitate precision data-enabled learning of the inverse operator.
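The relative degree referenced here, made precise in Assumption \ref{assum:relative_degree} below, can be computed for a state-space realization as the smallest $r$ with $CA^{r-1}B\neq0$. A minimal sketch with a hypothetical third-order system (the example system is illustrative, not from this paper):

```python
import numpy as np

# Relative degree of a SISO LTI realization: smallest r with C A^(r-1) B != 0.
def relative_degree(A, B, C, tol=1e-9):
    M = B
    for r in range(1, A.shape[0] + 1):
        if abs((C @ M).item()) > tol:
            return r
        M = A @ M
    return None   # not well-defined for r > n

# hypothetical example: G(s) = (s + 2)/(s^3 + 9 s^2 + 23 s + 15),
# three poles and one zero, so r = 3 - 1 = 2
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-15., -23., -9.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[2., 1., 0.]])
print(relative_degree(A, B, C))   # 2
```

The count matches the pole-zero difference, consistent with the definition of $r$ used in the following section.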
Previous works on inversion of system dynamics, using known models of the system, have shown that the impact of neglecting the boundary conditions of the internal states can be made arbitrarily small~\cite{zou1999preview,zou2007precision} by choosing a sufficiently large time history of the desired output and its derivatives. This motivates the proposed data-enabled algorithm to learn the inverse operator directly from input-output data (without the need to explicitly capture the hidden state dynamics) by using time-delayed observations of the output, along with the output's time derivatives. The main contribution of this paper is to propose a Koopman-type time-delay and output-derivative-based data-enabled inverse operator that minimizes the impact of the hidden state dependency and achieves precision (illustrated with a simulation example). Overall, the work provides insight into the need for including derivative features and time history to achieve precision in Koopman-type inverse operators. Even for forward Koopman-type operators (which only depend on past observable outputs), it is shown that the output-derivative at the current time instant needs to be included for precision prediction. \section{Problem formulation and solution} The inverse operator is developed for a linear time-invariant (LTI) single-input-single-output (SISO) system. Let the system be \begin{align} \Dot{x}(t)&=Ax(t)+Bu(t)\label{eq:X_dynamics}\\ y(t)&=Cx(t)\label{eq:output} \end{align} with states $x(t)\in \mathbb{R}^{n}$, input $u(t)\in \mathbb{R}$ and output $y(t)\in \mathbb{R}$ with matrices $A \in \mathbb{R}^{n\times n}$, $B\in \mathbb{R}^{n \times 1}$ and $C \in \mathbb{R}^{1\times n}$. \begin{assumption}[System properties] The system described in (\ref{eq:X_dynamics}) and (\ref{eq:output}) is stable (i.e., $A$ is Hurwitz), hyperbolic (no zeros on the imaginary axis), and has relative degree $r \le n$ (i.e., the difference between the number of poles and the number of zeros).
\label{assum:relative_degree} \end{assumption} \begin{assumption} The desired output $y_d$, specified in inverse operator problems, is sufficiently smooth, and has bounded time derivatives up to the relative degree $r$. \end{assumption} \subsection{Hidden state dependency} The system state $x$ can be split into state components $\xi$ that directly depend on the output and its time derivatives \begin{align} \xi(t) &= \begin{bmatrix}y(t),\Dot{y}(t),\dots,\frac{d^{r-1} y(t)}{d t^{r-1}}\end{bmatrix}'\in \mathbb{R}^{r\times 1} \label{eq:xi_def} \end{align} and internal states $\eta$, \begin{equation} \begin{bmatrix} \xi(t)\\ \eta(t) \end{bmatrix}=Sx(t) \label{eq:coord_trans} \end{equation} such that in the new coordinates, (\ref{eq:X_dynamics}) can be written as, e.g., see \cite{marino1995nonlinear}, Example 4.1.3, \begin{align} \Dot{\xi}(t)&=A_1\xi(t)+A_2\eta(t)+B_1u(t) \label{eq:xi_dynamic}\\ \Dot{\eta}(t)&=A_3y(t)+A_4\eta(t) \label{eq:eta_dynamic} \end{align} where \begin{equation*} B_1= \begin{bmatrix} 0\\ 0 \\\vdots\\b_{n-r} \end{bmatrix}\in\mathbb{R}^{r\times 1},\quad A_3=\begin{bmatrix} 0\\0\\\vdots \\ 1/b_{n-r} \end{bmatrix}, \end{equation*} \begin{equation*} A_4=\begin{bmatrix} 0&1&\dots& 0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1\\ -b_0/b_{n-r}&-b_1/b_{n-r}&\dots&-b_{n-r-1}/b_{n-r} \end{bmatrix}, \end{equation*} and the eigenvalues of matrix $A_4$ are the zeros of the transfer function of system (\ref{eq:X_dynamics}) and (\ref{eq:output}), \begin{equation} G(s) = \frac{Y(s)}{U(s)}= \frac{b_0+b_1s+\dots+b_{n-r}s^{n-r}}{a_0+a_1s+\dots+a_{n-1}s^{n-1}+s^n}. \label{eq:transfer_func} \end{equation} Note that the internal state $\eta$ is only driven by the output $y = \xi_1$.
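As a quick numerical check of this companion-form structure, the eigenvalues of $A_4$ built from the numerator coefficients can be compared with the roots of the numerator polynomial. The following Python sketch uses illustrative coefficients (numerator $b(s) = 6 + 5s + s^2$, zeros at $s=-2,-3$), not the example system considered later:

```python
import numpy as np

# Companion matrix A4 from numerator coefficients [b_0, ..., b_{n-r}];
# its eigenvalues should equal the zeros of G(s).
b = np.array([6.0, 5.0, 1.0])            # illustrative: b(s) = 6 + 5s + s^2
n_r = len(b) - 1                         # n - r = number of zeros

A4 = np.zeros((n_r, n_r))
A4[:-1, 1:] = np.eye(n_r - 1)            # superdiagonal of ones
A4[-1, :] = -b[:-1] / b[-1]              # last row: -b_i / b_{n-r}

zeros_from_A4 = np.sort(np.linalg.eigvals(A4))
zeros_from_G = np.sort(np.roots(b[::-1]))  # roots of b_0 + b_1 s + b_2 s^2
print(zeros_from_A4, zeros_from_G)
```

The same construction applies for any numerator; for a nonminimum-phase system the eigenvalues of $A_4$ simply land in the right half plane.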
Moreover, due to the relative degree $r$ assumption, the input $u$ is directly related to the $r^{th}$ derivative of the output, and therefore, the $r^{th}$ row of (\ref{eq:xi_dynamic}) can be written as \begin{equation} \begin{split} y^{(r)}(t)\triangleq \frac{d^{r}y(t)}{dt^{r}}&=C A^rx(t) +CA^{r-1}Bu(t)\\ &=CA^r S^{-1}\begin{bmatrix} \xi(t)\\\eta(t) \end{bmatrix}+b_{n-r}u(t) \\ &= A_{\xi} \xi(t) +A_{\eta} \eta(t) + b_{n-r}u(t), \end{split} \label{eq:rel_degree_connection} \end{equation} and the matrices $A_1$ and $A_2$ in (\ref{eq:xi_dynamic}) are given by \begin{equation*} A_1=\begin{bmatrix} \begin{matrix} 0&1&\dots& 0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1 \end{matrix} \\ \hline \\[-0.1in] A_{\xi} \end{bmatrix}, ~~ A_2=\begin{bmatrix} \begin{matrix} 0&0&\dots& 0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&0 \end{matrix} \\ \hline \\[-0.1in] A_{\eta} \end{bmatrix}, \end{equation*} where $A_{\xi}$ and $A_{\eta}$ are the last rows of matrices $A_1$ and $A_2$, respectively. \subsection{Research problem} \vspace{-0.1in} The desired output and its derivatives, $(y_d^{(r)}, \xi_d)$, can be used to predict the inverse input $u_d$ from (\ref{eq:rel_degree_connection}), as \begin{equation} u_d(t) = b_{n-r}^{-1}\left[ y_d^{(r)}(t)- A_{\xi} \xi_d(t) - A_{\eta} \eta_d(t) \right], \label{eq:inv_model} \end{equation} which depends on the internal states $\eta$ that are hidden, i.e., not directly measured. The goal is to minimize the hidden state effects on the inverse model, by addressing the following research problems. \begin{enumerate}[label=(\roman*)] \item Finding the hidden state from the output: Develop an operator that maps the time history of the output $y$ with length $T$ to an estimate of the hidden state $\eta$ at time $t$ \begin{equation} \hat\eta(t) = \hat{\mathbb{H}}[y(t-T:t)].
\label{op_internal_eta} \end{equation} \item Koopman-type inverse operator: Using the operator in (\ref{op_internal_eta}), develop a data-enabled Koopman-type inverse operator $\hat{\mathbb{G}}^{-1}$ that uses the history of the desired output and its time derivatives to predict the inverse input as \begin{align} \hat{u}_d(t) &= \hat{\mathbb{G}}^{-1}[y_d(t-T:t),\xi_d(t),y^{(r)}_d(t)]\label{op_inverse_min_phase}. \end{align} \item Inverse operator precision: Quantify the dependence of the error $\|\hat{u}_d(t)-u_d(t)\|_2$ on each argument of $\hat{\mathbb{G}}^{-1}$. \end{enumerate} \subsection{Solution} \subsubsection{Finding the hidden state from output} If the system is minimum-phase ($A_4$ is Hurwitz), i.e., (\ref{eq:transfer_func}) has no zeros in the right half plane, then $\eta(t)$ can be obtained from the history of the output by solving (\ref{eq:eta_dynamic}), \begin{equation} \begin{split} \eta(t) &= \int_{-\infty}^t e^{A_4(t-\tau)}A_3y(\tau)d\tau\\ &\triangleq \mathbb{H}[y(-\infty:t)]. \end{split} \label{eq:unknown_state_by_integral} \end{equation} In practice, such an operator is hard to capture in a data-enabled way since it requires an infinite window. Therefore, an estimate $\hat{\eta}$ is defined using an approximate operator $\hat{\mathbb{H}}$ with a finite time history length $T$, \begin{equation} \begin{split} \hat{\eta}(t) &\triangleq\int_{t-T}^t e^{A_4(t-\tau)}A_3y(\tau)d\tau\\ &\triangleq \hat{\mathbb{H}}[y(t-T:t)]. \end{split} \label{eq_approx_unknown_state} \end{equation} The approximate operator $\hat{\mathbb{H}}$ approaches the exact operator ${\mathbb{H}}$ exponentially as the time history $T$ increases.
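This exponential convergence can be illustrated numerically for a scalar internal dynamics. The following Python sketch uses illustrative values $A_4=-2$, $A_3=1$ and the bounded output $y(t)=\sin t$, and compares the finite-window estimate (\ref{eq_approx_unknown_state}) against a very long window standing in for (\ref{eq:unknown_state_by_integral}):

```python
import numpy as np

# Scalar illustration of the truncation error of the finite-window estimate:
# internal dynamics eta_dot = a4*eta + y with illustrative a4 = -2 and
# bounded output y(t) = sin(t), evaluated at t0 = 0.
a4, t0 = -2.0, 0.0

def trapezoid(f, x):
    # simple trapezoidal quadrature on a uniform grid
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def eta_hat(T, n=200001):
    # finite-window integral  int_{t0-T}^{t0} e^{a4 (t0 - tau)} y(tau) d tau
    tau = np.linspace(t0 - T, t0, n)
    return trapezoid(np.exp(a4 * (t0 - tau)) * np.sin(tau), tau)

eta_exact = eta_hat(40.0)   # a 40 s window stands in for the infinite window
errors = [abs(eta_hat(T) - eta_exact) for T in (1.0, 2.0, 3.0)]
print(errors)  # shrinks roughly like e^{-2 T}
```

Here $M=1$, $\|A_3\|_2=1$, $\kappa_1=1$ and $\alpha_1=2$, so the truncation error stays below $0.5\,e^{-2T}$, consistent with the bound derived next.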
\begin{lemma} \label{lemma_internal_state_estimate} If the output trajectory is bounded, \begin{equation} M=\max_{\tau\in(-\infty, t-T]}\|y(\tau)\|_2 < \infty, \label{eq_output_bound} \end{equation} then the error in computing the hidden state $\eta(t)$ decays exponentially with the time history $T$, i.e., there exist positive scalars $\alpha_1>0, \beta_1>0$ such that \begin{equation} \begin{split} \|\Delta \eta(t)\|_2\triangleq\|\eta(t)-\hat{\eta}(t)\|_{2} \le \beta_1 e^{-\alpha_1 T}. \end{split} \label{eq_eta_err_bound_exp} \end{equation} \end{lemma} \begin{pf} Since the system is assumed to be minimum phase, the eigenvalues of matrix $A_4$, which are the zeros of the transfer function of system (\ref{eq:X_dynamics}) and (\ref{eq:output}), lie in the open left half of the complex plane, i.e., the matrix $A_4$ is Hurwitz. Then, there exist positive scalars $\kappa_1 >0, \alpha_1 >0$ such that~\cite{desoer1975feedback} \begin{equation} \|e^{A_4t}\|_{2}\le \kappa_1 e^{-\alpha_1 t}. \label{eq:exponeital_decay} \end{equation} Then, from (\ref{eq:unknown_state_by_integral},\ref{eq_approx_unknown_state}), the approximation error can be bounded as \begin{equation} \begin{split} \|\eta(t)-\hat{\eta}(t)\|_{2}&=\left\| \int_{-\infty}^{t-T}e^{A_4(t-\tau)}A_3y(\tau)d\tau \right\|_{2}\\ &\le M\|A_3\|_{2} \int_{-\infty}^{t-T}\kappa_1 e^{-\alpha_1 (t-\tau)} d\tau \\ & \qquad {\mbox{using (\ref{eq_output_bound}, \ref{eq:exponeital_decay})} } \\ &= M\|A_3\|_{2} \int_{T}^{+\infty}\kappa_1 e^{-\alpha_1 \tau'} d\tau'\\ &=M\|A_3\|_{2} \frac{\kappa_1}{\alpha_1}e^{-\alpha_1 T}. \end{split} \end{equation} The result follows with \begin{equation} \beta_1 = M\|A_3\|_{2} \frac{\kappa_1}{\alpha_1} .
\label{eq_output_bound_2} \end{equation} \end{pf} \subsubsection{Koopman-type inverse operator} Given an estimate $\hat{\eta}$ of the internal state $\eta$, the inverse operator prediction in (\ref{op_inverse_min_phase}) can be estimated as \begin{align} \hat{u}_d(t) &=b_{n-r}^{-1}\left[ y_d^{(r)}(t)- A_{\xi} \xi_d(t) -A_{\eta} \hat{\eta}_d(t) \right] \nonumber \\ & = b_{n-r}^{-1}\left[ y_d^{(r)}(t)- A_{\xi} \xi_d(t) -A_{\eta} \hat{\mathbb{H}}[y_d(t-T:t)] \right] \nonumber \\ & \qquad {\mbox{using (\ref{eq_approx_unknown_state})} } \nonumber \\ &\triangleq \hat{\mathbb{G}}^{-1}[y^{(r)}_d(t), \xi_d(t), y_d(t-T:t)]. \label{eq:inv_op_derivation} \end{align} \vspace{0.1in} \begin{remark} \label{rem_inverse_preicison} In addition to a sufficient time history (large $T$) of the output to accurately find the internal state (to let $\Delta \eta \longrightarrow 0$), information about the derivatives of the output (up to the relative degree $r$ at time $t$, i.e., $y_d^{(r)}(t), \xi_d(t)$) is also needed for precisely computing the inverse input $u_d$ in (\ref{op_inverse_min_phase}), as illustrated in Fig.~\ref{fig_inverse_hidden_state_depend_demo}. \end{remark} \vspace{-0.1in} \begin{figure}[!ht] \centering \includegraphics[width=0.95\columnwidth]{Images/inverse_demo.png} \caption{The inverse operator's dependence on the hidden state is removed by use of past output history and current time derivatives of the output. } \label{fig_inverse_hidden_state_depend_demo} \end{figure} \vspace{-0.01in} \subsubsection{Koopman-type forward operators using output history\newline} The output $y$ can be related to the input as \begin{equation} \begin{split} {y}(t+T_f) &=C\int_{-\infty}^{t+T_f}e^{A(t+T_f-\tau)}Bu(\tau)d\tau \end{split} \end{equation} and approximated by \begin{equation} \begin{split} \hat{y}(t+T_f) &=C\int_{t-T}^{t+T_f}e^{A(t+T_f-\tau)}Bu(\tau)d\tau .
\end{split} \label{forward_map_approx_u_hist} \end{equation} Therefore, using arguments similar to the proof of Lemma~\ref{lemma_internal_state_estimate}, the error in computing the output using just the history of the input $u$ tends to zero as the time history of the input increases, i.e., as $T \rightarrow \infty$. Thus, it is possible to find a map that only depends on the input and its past history, \begin{equation} \begin{split} \hat{y}(t+T_f) & = \hat{\mathbb{G}}_u[u(t-T:t+T_f)], \end{split} \end{equation} which justifies the use of ARX models to capture forward linear system models using past input history (augmented by the output history). In contrast, with Koopman-type operators where past history of the observable output is used to predict future values, the forward model prediction can be written as \begin{equation} \begin{split} &\hat{y}(t+T_f)\\ &=Ce^{AT_f}\hat{x}(t)+C\int_{t}^{t+T_f}e^{A(t+T_f-\tau)}Bu(\tau)d\tau\\ &=Ce^{AT_f} S^{-1} \begin{bmatrix} \xi(t)\\ \hat{\eta}(t) \end{bmatrix} +C\int_{t}^{t+T_f} \!\!\! e^{A(t+T_f-\tau)}Bu(\tau)d\tau\\ & \qquad {\mbox{using (\ref{eq:coord_trans})} } \\ &=Ce^{AT_f} S^{-1} \begin{bmatrix} \xi(t)\\ \hat{\mathbb{H}}[y(t-T:t)] \end{bmatrix} \\ & \qquad \qquad \qquad \qquad +C\int_{t}^{t+T_f} \!\!\! e^{A(t+T_f-\tau)}Bu(\tau)d\tau\\ & \qquad {\mbox{using (\ref{eq_approx_unknown_state})} } \\ &\triangleq \hat{\mathbb{G}}[y(t-T:t), \xi(t), u(t:t+T_f)]. \end{split} \end{equation} Therefore, past history of the output can also be used to develop Koopman-type forward operators, provided access is available to current time derivatives of the output $\xi(t)$. \subsubsection{Inverse operator precision} The inverse operator depends not only on the past history of the output (to remove the hidden state $\eta$ dependency) but also on the output and its time derivatives at the current time instant $t$. The impact of the time history $T$ and of the output and its time derivatives on the precision of the operator is quantified in the next lemma.
\begin{lemma} \label{Lemma_prediction_error} The prediction error of the inverse operator is bounded, i.e., there exist positive scalars $L_1>0, L_2>0, L_3 >0$ such that the error between the predicted input $\hat{u}_d(t)$ and the true input $u_d(t)$ satisfies \begin{equation} \begin{split} &\|\hat{u}_d(t)-u_d(t)\|_{2}\\ &\le L_1\|\Delta y^{(r)}_d(t)\|_2 + L_2\|\Delta \xi_d(t)\|_2+L_3 e^{-\alpha_1 T}. \end{split} \label{eq:inv_u_err} \end{equation} \end{lemma} \begin{pf} From (\ref{eq:inv_model}) and (\ref{eq:inv_op_derivation}), \begin{equation} \begin{split} &\|\hat{u}_d(t)-u_{d}(t)\|_{2}\\ &\le | b_{n-r}^{-1}| \left[\|\Delta y^{(r)}_d(t)\|_2+\|A_{\xi}\|_2\|\Delta \xi_d(t)\|_2 \right. \\ & \qquad \qquad \qquad \qquad\left. + \|A_{\eta}\|_2\|\Delta \eta_d(t)\|_2\right], \end{split} \label{eq:quasi_err_step1} \end{equation} where $\Delta y^{(r)}_d(t) \triangleq \hat{y}^{(r)}_d(t)-y^{(r)}_d(t)$, $\Delta \xi_d(t)\triangleq \hat{\xi}_d(t)-\xi_d(t)$ and $\Delta \eta_d(t)\triangleq \hat{\eta}_d(t)-\eta_d(t)$. The result follows from (\ref{eq_eta_err_bound_exp}) with \begin{equation} L_1 = |b^{-1}_{n-r}|, \quad L_2 = L_1\|A_{\xi}\|_2, \quad L_3 = L_1\|A_{\eta}\|_2 \beta_1 . \end{equation} \end{pf} \vspace{0.1in} \begin{remark}[Data-enabled algorithm] \label{rem_Data_based_algorithm} Known values of the desired output and its derivatives, specified with a sampling period $\Delta t$ and time history $T$, can be used to estimate a discrete-time inverse operator from (\ref{eq:inv_op_derivation}) as \begin{align} \hat{u}_d[m] &= \mathbb{G}_d^{-1}[y_d[m-m_T: 1:m],\xi_d[m],y^{(r)}_d[m]], \label{eq_data_inverse} \end{align} where $[m]$ indicates the value at time $t_m=m \Delta t$, and $m_T = T/{\Delta t}$. Data-enabled algorithms can be used to learn the operator $\mathbb{G}_d^{-1}$, since (\ref{eq_data_inverse}) maps a finite number of variables (desired output and its time derivatives) to the inverse input at time $t_m$.
\end{remark} \section{Simulation results} In this section, an example system is introduced, followed by the data-enabled learning of the inverse operator. \subsection{Example system} Consider the following two-mass-spring-damper system, where the input $u$ is the force acting on mass $m_2$ and its displacement $x_2$ is the output $y$, as shown in Fig.~\ref{fig:example_sys}. \begin{figure}[!ht] \centering \includegraphics[width=0.75\columnwidth]{Images/mass_spring_damper_sys.png} \caption{Two-mass-spring-damper example system.} \label{fig:example_sys} \end{figure} The corresponding state-space model can be written as \begin{align} \frac{d}{dt} X &= AX+Bu\\ y=x_2&=CX \end{align} where $X\triangleq \begin{bmatrix} x_1&\Dot{x}_1&x_2&\Dot{x}_2 \end{bmatrix}'$, $C=\begin{bmatrix} 0&0&1&0 \end{bmatrix}$, \begin{equation} A=\begin{bmatrix} 0&1&0&0\\ -\frac{k_1+k_2}{m_1}&-\frac{c_1+c_2}{m_1}&\frac{k_2}{m_1}&\frac{c_2}{m_1}\\ 0&0&0&1\\ \frac{k_2}{m_2}&\frac{c_2}{m_2}&-\frac{k_2}{m_2}&-\frac{c_2}{m_2} \end{bmatrix} , B= \begin{bmatrix} 0\\0\\0\\ a/m_2 \end{bmatrix}, \end{equation} $m_1=10,m_2=5,k_1=110,c_1=68,a=k_1/2,k_2 = 75$ and $c_2=60$ in SI units. The relative degree of the system is $r=2$ and the input-output relation is given by \begin{equation} \begin{split} \ddot{y}(t)&= -25 y(t) -12 \dot{y}(t) +25 x_1(t) +12 \dot{x}_1(t) + {11 u(t)}. \end{split} \label{eq_yddot_example} \end{equation} \subsection{Preliminary selections}\label{subsec_pre} Selection of the data-enabled model types to evaluate, the sampling time (which needs to be sufficiently small to reduce discretization error), the evaluation metric, and sufficiently smooth output trajectories for model evaluation are described below. \vspace{-0.1in} \begin{enumerate}[label=(\roman*)] \item A two-layer feedforward neural-net (created through the MATLAB function \texttt{feedforwardnet()} with the default activation function) is used to learn the inverse operator from data.
\item For the two-layer neural-net, each model pool consists of 5 candidates with different numbers $N\in\{5,10,20,40,80\}$ of neurons in the hidden layer. \item The sampling frequency is varied from $5$ Hz to $20$ Hz, which is substantially higher than the system bandwidth of 1.7 Hz. \begin{figure}[!ht] \centering \includegraphics[width=0.95\columnwidth]{Images/filter.png} \caption{Filter process to generate desired trajectories. } \label{fig:sig_filter} \end{figure} \item The inverse operator is assessed using 10 different desired trajectories $y_{d,k}(t), 1\le k \le 10, t\in [0,10]$ with a fixed prediction sampling time of $0.01$ s. Each desired trajectory $y_{d,k}$ used for assessment needs to be sufficiently smooth to investigate the impact of different orders of the output's time derivatives on the inverse operator, although from (\ref{eq:inv_u_err}) the expectation is that only output derivatives up to the $r^{th}$ order ($r=2$ for this example) are required. Therefore, nominal trajectories $y_{0,k}$ (specified in the appendix) are filtered as shown in Fig.~\ref{fig:sig_filter}, to obtain desired outputs $y_{d,k}$ and their derivatives as \begin{equation} \begin{bmatrix} y_{d,k}\\\Dot{y}_{d,k}\\\Ddot{y}_{d,k}\\y^{(3)}_{d,k}\\y^{(4)}_{d,k} \end{bmatrix}(t) = \begin{bmatrix} 1&0&0&0&0\\ -a&a&0&0&0\\ a^2&-2a^2&a^2&0&0\\ -a^3&3a^3&-3a^3&a^3&0\\ a^4&-4a^4&6a^4&-4a^4&a^4 \end{bmatrix} \begin{bmatrix} y_{d,k}\\y_{3,k}\\y_{2,k}\\y_{1,k}\\y_{0,k} \end{bmatrix}(t) \label{eq_filter_compu} \end{equation} where $a=2\pi$ (cut-off frequency of 1 Hz), which is less than the system's bandwidth of 1.7 Hz, and example trajectories are shown in Fig.~\ref{fig_test_traj_demo2}.
\item For a given time history $T$ and sampling time $\Delta t$, as in Remark~\ref{rem_Data_based_algorithm}, the evaluation metrics for the data-enabled inverse operator with $N$ neurons in the hidden layer are selected as the mean $e_{u,N}$ and maximum $\overline{e}_{u,N}$ normalized prediction errors over the ten evaluation trajectories $y_{d,k}(\cdot)$, i.e., \begin{equation} e_{u,N} = \frac{1}{10}\sum_{k=1}^{10}\frac{\max_{m} |\hat{u}_k[m]-u_{d,k}[m]|}{\max_{m} |u_{d,k}[m]|}\times 100\% \label{metric_1} \end{equation} \begin{equation} \overline{e}_{u,N} = \max_{k=1,\dots,10}\frac{\max_{m} |\hat{u}_k[m]-u_{d,k}[m]|}{\max_{m} |u_{d,k}[m]|}\times 100\%, \label{metric_2} \end{equation} where the ideal inverse $u_{d,k}$ was found using (\ref{eq:inv_model}), with $\eta_d$ obtained through (\ref{eq:unknown_state_by_integral}). Moreover, the smallest normalized prediction error over the different numbers of neurons in the hidden layer is defined as \begin{equation} e_{u} = e_{u, N^*}, \quad \bar{e}_{u} = \bar{e}_{u, N^*} \quad {\mbox{where} }\quad N^* = \argmin_N{e_{u, N}} \label{metric_3} \end{equation} to quantify the precision of the inverse operator. \end{enumerate} \begin{figure}[!ht] \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.44\columnwidth]{Images/demo2.png} & \includegraphics[width=0.45\columnwidth]{Images/demo6.png} \end{tabular} \vspace{-0.1in} \caption{Comparison of the example filtered desired output $y_{d,k}$ and nominal trajectories $y_{0,k}$ for $k=2$ (triangular) and $k=6$ (sinusoidal). } \label{fig_test_traj_demo2} \end{figure} \vspace{-0.1in} \subsection{Data Collection} \vspace{-0.1in} The inverse operators are trained using input-output data collected from simulations. Both noisy and noise free output data are used to assess the impact of noise.
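Following Remark~\ref{rem_Data_based_algorithm}, each training sample pairs a window of the sampled output and its current time derivatives with the input at the same instant. A minimal Python sketch of assembling such a dataset is given below; the sinusoidal signals are illustrative stand-ins for the collected simulation data:

```python
import numpy as np

# Assemble training pairs for the discrete-time inverse operator:
# features = [y[m-m_T], ..., y[m], ydot[m], yddot[m]],  target = u[m].
def build_dataset(y, ydot, yddot, u, dt=0.05, T=3.2):
    m_T = int(round(T / dt))                 # history length in samples
    X, U = [], []
    for m in range(m_T, len(y)):
        features = np.concatenate((y[m - m_T:m + 1],   # m_T + 1 history values
                                   [ydot[m], yddot[m]]))
        X.append(features)
        U.append(u[m])
    return np.asarray(X), np.asarray(U)

# Illustrative signals over 10 s at 20 Hz (not actual simulation data)
t = np.arange(0.0, 10.0, 0.05)
X, U = build_dataset(np.sin(t), np.cos(t), -np.sin(t), np.cos(2.0 * t))
print(X.shape, U.shape)   # one row per usable time step
```

Any regression model (e.g., the two-layer neural net used here) can then be fit to map each row of `X` to the corresponding entry of `U`.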
The input signal $u$ applied to the system is constructed by concatenating $20$ cycles of $p_{(f_i,\alpha_i)}(\cdot)$ ($i=1,2,3,\dots,20$) with different parameters, which are tabulated in Table~\ref{tab:excitation_sig}. \begin{equation} p_{(f_i,\alpha_i)}(t)=\alpha_i [ 4\sin{(\pi c t^2)}+s(t)+r(t) ] \label{eq:excitation_sig} \end{equation} where $c = f_i/10$, \begin{equation*} \text{s}(t) =\begin{cases} 1 & 2\le t < 4\\ -0.9 & 4\le t < 6\\ 0.5 & 6\le t < 8\\ 0 & \text{otherwise}, \end{cases} \quad \text{r}(t)=\begin{cases} 0.4t & 0\le t < 1\\ 0.4 & 1\le t < 9\\ \text{r}(10-t) & 9\le t \le 10. \end{cases} \end{equation*} \begin{table}[!ht] \centering \caption{Parameters of $p_{(f_i,\alpha_i)}$ in Eq.~\eqref{eq:excitation_sig}. } \begin{tabular}{|c|c|c|c|c|c|} \hline Cycle $\#$, $i$ & $f_i$ & $\alpha_i$ & Cycle $\#$, $i$ & $f_i$ & $\alpha_i$\\ \hline 1 & 6 & 0.75 &11 & 1 & 0.25 \\ 2 & 3 & 0.5 &12 & 0.5 & 0.25\\ 3 & 2 & 0.5 &13 & 1 & -0.1\\ 4 & 0.5 & 0.5 &14 & 0.5 & -0.05\\ 5 & 0.5 & 0.3 &15 & 0.5 & 0.1\\ 6 & 0.3 & 0.3 &16 & 0.5 & -0.1\\ 7 & 0.1 & 0.3 &17 & 2 & 0.25\\ 8 & 0.5 & -0.3 &18 & 1 & 0.1\\ 9 & 0.3 & -0.3 &19 & 0.5 & 0.05\\ 10 & 0.1 & -0.3 &20 & 1 & 0.5\\ \hline \end{tabular} \label{tab:excitation_sig} \end{table} For the noisy case, additive white Gaussian noise with a signal-to-noise ratio of 20 is added separately to the output and each of its time derivatives. Simulations were done in MATLAB with \texttt{ode45()} at a sampling rate of 100 Hz (to be consistent with the evaluation metrics from (\ref{metric_1}) to (\ref{metric_3})). Input, output, and the output's time derivatives (up to the fourth order) were collected. The second-order derivative was obtained from~(\ref{eq_yddot_example}).
Third- and fourth-order derivatives for training purposes were estimated from the data using finite differences as \begin{align} \begin{bmatrix} y^{(3)}[m]\\ y^{(4)}[m] \end{bmatrix} &=\frac{1}{12(\Delta t)}\begin{bmatrix} -1&8&0&-8&1 \\ -\frac{1}{\Delta t}&\frac{16}{\Delta t}&-\frac{30}{\Delta t}&\frac{16}{\Delta t}&-\frac{1}{\Delta t} \end{bmatrix}\begin{bmatrix} \ddot{y}[m+2]\\\ddot{y}[m+1]\\\ddot{y}[m]\\\ddot{y}[m-1]\\\ddot{y}[m-2] \end{bmatrix}. \nonumber \end{align} \begin{figure}[!t] \centering \includegraphics[width=0.75\columnwidth]{Images/relative_degree_identify.png} \vspace{-0.1in} \caption{Identifying the relative degree $r$ from input-output data, based on the discontinuity in the $r^{th}$ derivative of the output for a step input. } \label{fig_relative_degree_id} \end{figure} \vspace{0.1in} \subsection{Reducing impact of hidden states using output history} To investigate the reduction of the impact of the hidden states on the prediction precision of data-enabled inverse operators, the performance of the data-enabled inverse operators was assessed for different time histories $T$ of the output. In this part of the study, the number of time derivatives of the output used was the same as the relative degree of the example system. The relative degree $r=2$ can be established by applying a step input --- a corresponding discontinuity will appear in ${y}^{(r)}$, while the lower-order derivatives ($y,\dot{y}$ in this example) remain continuous, as seen in Fig.~\ref{fig_relative_degree_id}. Then, from (\ref{eq_data_inverse}), \begin{align} \hat{u}_d[m] &= \mathbb{G}^{-1}_d[y_d[m-m_T:1:m],\Dot{y}_d[m],\Ddot{y}_d[m]].
\label{eq:experiment_data_inverse} \end{align} The inverse operator's prediction error $e_u$ (\ref{metric_3}) was obtained for varying output time history $T$ ([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, $\dots$] s), for different sampling times $\Delta t \in \left\{ 0.05~\text{s}, 0.1~\text{s}, 0.2~\text{s} \right\}$, and for different numbers $N$ of neurons in the hidden layer, and plotted in Fig.~\ref{fig_inv_exponential_decay} for the case without noise in the training data. The associated prediction errors are tabulated in Table~\ref{tab_excitation_sig_mean} for the fastest sampling time $\Delta t = 0.05$ s. The precision of the inverse operator improves with larger output time history $T$, as seen in Table~\ref{tab_excitation_sig_mean}, where the evaluation values of the two-layer neural net with different numbers $N$ of neurons in the hidden layer are listed. Note that typically $N^*\le 20$ yields good precision for this application, from Table~\ref{tab_excitation_sig_mean}. Over all selections of neuron numbers $N$, the variation of the smallest prediction error $e_u=e_{u,N^*}$ (\ref{metric_3}) with a sampling time of $\Delta t = 0.05$ s ($20$ Hz) fits an exponential decay curve $e_u(T) \approx 1.88e^{-2.18T}$, shown in red in Fig.~\ref{fig_inv_exponential_decay}. This exponential improvement in precision is expected from Lemma~\ref{Lemma_prediction_error}, which predicts an exponential decay of the error in the estimation of the hidden states, dependent on $\|e^{A_4T}\|_2$ from (\ref{eq:exponeital_decay}), as shown in Fig.~\ref{fig_inv_exponential_decay}. Thus, the impact of hidden states on the prediction precision of data-enabled inverse operators can be reduced by using a sufficient time history of the desired output. \vspace{0.1in} \begin{remark}[Reducing hidden state dependence] In the following simulations, the time history $T$ is chosen to be sufficiently large, $T^*=3.2$ s, which results in a normalized error $e_u \approx 0.01\%$.
\label{remark_select_Tstart} \end{remark} \begin{figure}[!t] \centering \includegraphics[width=0.75\columnwidth]{Images/inv_exponential_decay.png} \caption{Inverse operator's precision, in terms of the prediction error $e_u$ (\ref{metric_3}), improves exponentially with the window length $T$ of the output history, for different sampling times $\Delta t = 0.05$ s (20 Hz, blue), $0.1$ s (10 Hz, cyan), and $0.2$ s (5 Hz, red). Similar results are seen over different numbers $N^*$ of neurons in the hidden layer: $5$ (triangle $\triangle$), $10$ (square $\square$), $20$ (diamond $\diamondsuit$), $40$ (pentagram $\medwhitestar$), and $80$ (circle $\fullmoon$). The fitted exponential decay (red line) is obtained with a sampling time of $\Delta t = 0.05$ s (20 Hz, blue).} \label{fig_inv_exponential_decay} \end{figure} \vspace{0.1in} \begin{table}[!t] \centering \caption{Inverse operator's precision improvement in terms of prediction error $e_{u,N}$ (\ref{metric_1}) and $\overline{e}_{u,N}$ (\ref{metric_2}) for varying output time history $T$ and number $N$ of neurons in the hidden layer, with sampling time $\Delta t = 0.05$ s.} \begin{tabular}{|c|c|c|c|c|c|} \hline \diagbox{T}{N} & 5 & 10 & 20 & 40 & 80 \\ \hline & \multicolumn{4}{c}{$e_{u,N} (\%)$ as in (\ref{metric_1})} & \\ \hline 0.1&1.78 &2.23 & 1.64 & 2.05 & 2.43\\ 0.2&0.79 &0.88 & 0.87 & 0.88 & 0.98\\ 0.4& 0.95& 0.85& 0.88 & 0.92 & 0.91\\ 0.8& 0.46&0.51 & 0.49 & 0.48 &0.52 \\ 1.6& 0.14&0.12 & 0.12 & 0.14 & 0.16\\ 3.2&0.05 & 0.01 & 0.01 & 0.01 & 0.05\\ \hline & \multicolumn{4}{c}{$\overline{e}_{u,N}(\%)$ as in (\ref{metric_2}) } & \\ \hline 0.1&3.17 & 3.72& 4.73 & 5.61 & 6.28\\ 0.2& 1.22& 1.59& 1.56 & 1.48 & 1.76 \\ 0.4& 1.17&1.10 & 1.33 & 1.75 & 1.69\\ 0.8&0.54 &0.65 &0.61 & 0.67 &1.01 \\ 1.6&0.20 & 0.16& 0.18 & 0.33& 0.44 \\ 3.2& 0.08&0.02 &0.02 & 0.02& 0.14 \\ \hline \end{tabular} \label{tab_excitation_sig_mean} \end{table} \vspace{-0.1in} \subsection{Need to include output time derivatives}
\vspace{-0.1in} From (\ref{eq:inv_u_err}) in Lemma~\ref{Lemma_prediction_error}, even if the hidden state error is reduced by having a sufficiently large time history $T$ (as shown in the previous subsection), current time derivatives of the output $\xi_d(t), y^{(r)}_d(t)$ are needed to achieve precision prediction with the inverse operator. Therefore, the impact of adding time-derivative information is investigated through the following two steps, for different sampling periods $\Delta t \in \left\{ 0.05~\text{s}, 0.1~\text{s}, 0.2~\text{s} \right\}$ and for different numbers $N$ of neurons in the hidden layer. \begin{enumerate}[label=(\roman*)] \item Incrementally including higher-order time derivatives of the output when learning the inverse operator $\mathbb{G}^{-1}_{d,l}$ that predicts the inverse input $\hat{u}_d$ similar to (\ref{eq:experiment_data_inverse}), where output time derivatives up to order $l$ ($0 \le l \le 4$) are included in the data-enabled operator learning, e.g., with $l=i \ge 0$, \begin{align} \hat{u}_d[m] & = \mathbb{G}^{-1}_{d,i}[y_d[m-m_T:1:m], \nonumber \\ & \qquad \quad y^{(i)}_d[m], y^{(i-1)}_d[m], \hdots y^{(0)}_d[m]], \label{inv_G_d_4} \end{align} where $\mathbb{G}^{-1}_{d,2} = \mathbb{G}^{-1}_{d}$ in (\ref{eq:experiment_data_inverse}). \item Adding the output's time derivatives $\dot{y}_d(t),\ddot{y}_d(t)$ to NARX-type inverse operators, where the inverse operator is learned using both input and output time history, i.e., to compare \begin{equation} \begin{split} \hat{u}_d[m] = & \text{NARX}[y_d[m-m_T:1:m], \\ & \quad u_d[m-m_T:1:m-1]] \end{split} \label{eq_narx} \end{equation} \begin{equation} \begin{split} \hat{u}_d[m] = & \text{NARX}^{*}[y_d[m-m_T:1:m],\dot{y}_d[m], \\ & \quad \ddot{y}_d[m], u_d[m-m_T:1:m-1]].
\end{split} \label{eq_narx_star} \end{equation} \end{enumerate} The corresponding prediction performance, in terms of the errors $e_u$ and $\bar{e}_u$ in (\ref{metric_3}), is tabulated in Table~\ref{tab_derivative_impact} for $T^*=3.2$~s and $\Delta t = 0.05$~s, and plotted in Fig.~\ref{fig_inv_map_noise_free} for $T^*=3.2$~s and different sampling times $\Delta t \in \{0.05~\text{s}, 0.1~\text{s}, 0.2~\text{s}\}$. \begin{table}[!ht] \centering \caption{Prediction error $e_u,\bar{e}_u$ (\ref{metric_3}) for inverse operators from (\ref{inv_G_d_4}) to (\ref{eq_narx_star}) with $\Delta t = 0.05$ s.} \begin{tabular}{|c|c|c|c|c|c|} \hline & $e_u(\%)$ & $\bar{e}_u (\%)$ & & $e_u(\%)$ & $\bar{e}_u(\%)$ \\ \hline & \multicolumn{4}{c}{Noise free training data} & \\ \hline $\mathbb{G}^{-1}_{d,0}$& 3.13& 9.82 & $\mathbb{G}^{-1}_{d,4}$ & 0.01 & 0.02\\ $\mathbb{G}^{-1}_{d,1}$& 0.74& 2.10 & NARX & 1.60 & 5.93\\ $\mathbb{G}^{-1}_{d,2} =\mathbb{G}^{-1}_{d}$& 0.01& 0.02 & $\text{NARX}^*$ & 0.01 & 0.02\\ $\mathbb{G}^{-1}_{d,3}$& 0.01& 0.02& & & \\ \hline & \multicolumn{4}{c}{Noisy training data} & \\ \hline $\mathbb{G}^{-1}_{d,0}$& 53.91 & 114.68 & $\mathbb{G}^{-1}_{d,4}$ & 0.41 & 0.78\\ $\mathbb{G}^{-1}_{d,1}$& 11.53& 37.82& NARX & 3.89 & 17.95\\ $\mathbb{G}^{-1}_{d,2} =\mathbb{G}^{-1}_{d}$ & 0.53 & 1.05 & $\text{NARX}^*$ & 0.21 &0.45 \\ $\mathbb{G}^{-1}_{d,3}$& 0.65& 1.32& & & \\ \hline \end{tabular} \label{tab_derivative_impact} \end{table} \begin{figure}[!ht] \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.9\columnwidth]{Images/inv_mapping_noise_free_with_narx_comparison.png}\\ \includegraphics[width=0.9\columnwidth]{Images/inv_mapping_noisy_with_narx_comparison.png} \end{tabular} \caption{ Inverse operator's precision in terms of prediction error $e_u, \overline{e}_u$ (\ref{metric_3}) improves for all cases with the addition of derivative information. (Top: noise free training data. Bottom: noisy training data).
Similar results are seen for different numbers $N^*$ of neurons in the hidden layer, with symbols as in Fig.~\ref{fig_inv_exponential_decay}, where the filled symbols correspond to $\overline{e}_u$ and the unfilled symbols correspond to ${e}_u$. Performance of the NARX-type operator with input and output history but without derivative information is also improved with the addition of derivative information in $\text{NARX}^*$, as in (\ref{eq_narx},\ref{eq_narx_star}).} \label{fig_inv_map_noise_free} \end{figure} {\underline{Impact of including derivatives}} The precision of the inverse operator depends on the inclusion of the output derivatives up to order $r$ (the relative degree). When the number of derivatives $l$ (included in the training and evaluation) is increased from $l=0$ to $l=4$, the precision of the inverse operator improves significantly once the required number ($l=2=r$) of time-derivative features is included in the training and evaluation data. In particular, the maximum error $\overline{e}_u$ in (\ref{metric_3}) reduces from $9.82\%$ to $0.02\%$ for the case with noise free training data and from $114.68\%$ to $1.05\%$ for the case with noisy training data, as seen in Table~\ref{tab_derivative_impact}. Therefore, there is substantial improvement in the inverse operator's precision (especially in the presence of noise) when time derivatives up to the required order of 2 are included. {\underline{Impact on NARX-type inverse operator}} Inclusion of time derivatives is also important for NARX-type inverse operators, where both input and output time history are used in the inverse operator. This can be seen by comparing NARX (\ref{eq_narx}) without time derivatives and $\text{NARX}^*$ (\ref{eq_narx_star}) with the derivatives in Table~\ref{tab_derivative_impact} and in Fig.~\ref{fig_inv_map_noise_free}. When the time derivatives ($l=2$) are included in the training and evaluation, the precision of the inverse operator improves significantly.
In particular, the maximum error $\overline{e}_u$ in (\ref{metric_3}) reduces from $5.93\%$ to $0.02\%$ for the case with noise free training data and from $17.95\%$ to $0.45\%$ for the case with noisy training data, as seen in Table~\ref{tab_derivative_impact}. Therefore, there is substantial improvement in the precision of the NARX-type inverse operator when the output time derivatives up to the required order of 2 are included. {\underline{Derivative information in output time history}} Conceptually, information about the derivatives up to order $r-1$ (one less than the relative degree $r$) is available in the time history of the output, and only the $r^{th}$ time derivative $y_d^{(r)}[m]$ is directly affected by the input $u[m]$. In particular, output derivatives can be related to the output time history using finite-difference techniques, especially in the noise free case, and hence direct computation of the derivatives might not appear to be critical if the time history of the output is used during training. Nevertheless, including computed or measured values (even with some noise) of the time derivative $\dot{y}[m]$ (which is not directly affected by the input $u[m]$) can still improve the precision of the inverse operator, as seen in Fig.~\ref{fig_inv_map_noise_free} and Table~\ref{tab_derivative_impact}. In particular, the maximum error $\overline{e}_u$ in (\ref{metric_3}) reduces from $9.82\%$ to $2.10\%$ for the case with noise free training data and from $114.68\%$ to $37.82\%$ for the case with noisy training data, as seen in Table~\ref{tab_derivative_impact}. Therefore, while the precision in the noise free case could be improved by a smaller sampling time $\Delta t$ without the inclusion of $\dot{y}$, for the noisy case, direct measurements of the output time derivatives can substantially improve the inverse operator training, and lead to better precision in its predictions.
Moreover, the precision of the inverse operator is further improved by including time derivatives up to the required order $r$ (the relative degree). \section{Conclusion} \vspace{-0.1in} This work showed that Koopman-type data-enabled inverse operators can have high precision if a sufficiently large time history of the output is included to reduce the impact of hidden internal states. Additionally, measurements of the instantaneous output time derivatives (up to the relative degree) are required during training to improve the data-enabled inverse operator precision. Our ongoing work is aimed at extending these results to Koopman-type data-enabled inverse operators for nonlinear nonminimum-phase systems. \vspace{-0.1in}
\section{Introduction} The inflationary paradigm \cite{SKS} proposes an era in the very early universe during which the energy density is dominated by vacuum energy. It explains why the universe is close to flat and the near isotropy of the cosmic microwave background radiation. In addition, it has a simple quantum mechanical mechanism for generating energy density perturbations with wavelengths that are well outside the horizon in the early universe. The energy density perturbations resulting from inflation have an almost scale invariant Harrison-Zeldovich power spectrum. The simplest inflation models consist of a single scalar field $\phi$ called the inflaton. The quantum fluctuations in the Goldstone mode $\pi$ associated with the breaking of time translation invariance by the inflaton~\cite{Cheung:2007st} source the energy density fluctuations. In the simplest of these single field inflationary models, the density perturbations are approximately Gaussian~\cite{Maldacena:2002vr}. Quasi-single field inflation \cite{Chen:2009zp} is a simple generalization of single field inflation that consists of a massive scalar field, the isocurvaton field $s$, that couples to the inflaton. This coupling can give rise to significant non-Gaussianities in the correlators of $\pi$. The Lagrange density in this model contains an unusual kinetic mixing of the form $\mu {\dot \pi} s$ that gives rise to a wealth of interesting phenomena. In this paper, we study the effects of primordial non-Gaussianities on large scale structure. One complication that is not present for the microwave background radiation is that galaxies are biased objects. They do not trace the mass distribution but rather arise at special points, for example where the fluctuations in the mass density exceed some threshold. 
It was realized in~\cite{AGW} and \cite{Dalal:2007cu} that the power spectrum for biased objects can deviate significantly from Harrison-Zeldovich on large scales if the primordial mass density perturbations are non-Gaussian. These effects have become known as scale-dependent bias and stochastic bias. In \cite{Baumann:2012bc} these enhancements for the power spectrum of biased objects were systematically explored within the context of quasi-single field inflation.\footnote{We refer to these effects as enhancements even though for some range of wave-vectors and model parameters they can interfere destructively with the usual part arising from Gaussian primordial density fluctuations.} Quantitative predictions for the power spectrum of galactic halos in quasi-single field inflation (and other models for non-Gaussian primordial fluctuations) were recently made in~\cite{Gleyzes:2016tdh}. Very recently, the scale-dependent bias introduced by higher spin fields~\cite{Arkani-Hamed:2015bza} coupled to the inflaton has been explored~\cite{MoradinezhadDizgah:2017szk}. In this paper we continue and extend the work of \cite{Baumann:2012bc} and compute the galactic halo power spectrum and bispectrum in quasi-single field inflation. The bispectrum for galaxies was computed for local non-Gaussianity in \cite{Tellarini:2015faa} and for equilateral non-Gaussianity in \cite{Mizuno:2015qma}. We make explicit numerical predictions by adopting the very simple model in which galaxies arise at points where the underlying energy density fluctuations (averaged over a volume) are above a threshold~\cite{Press:1973iz}.\footnote{Kaiser applied this model to explain the biasing of rich clusters of galaxies~\cite{Kaiser}.} Also, we identify the scaling of the $n$-point function of the halo overdensity in quasi-single field inflation within this threshold model.
The impact of the non-Gaussianities in quasi-single field inflation is largest when the kinetic mixing $\mu$ and the isocurvaton mass $m$ are small compared to the Hubble constant during inflation $H$. We derive new analytic methods to calculate the correlations of $\pi$ in this region of parameter space. These are applied to derive analytic expressions for the two-, three-, four-, five-, and six-point functions of $\pi$. We apply these results to derive explicit expressions for the galactic halo power spectrum and bispectrum. The effects in the power spectrum and the bispectrum of galaxies due to primordial non-Gaussianities can become pronounced at the scale $q \simeq 1/ (200 h^{-1}{\rm Mpc})$. In this work we neglect the time evolution of the galaxy distribution after galaxies form. Even though this is not a small effect, we do not expect that neglecting it will qualitatively impact our conclusions. Furthermore, the computations we perform of the higher correlations of $\pi$ will be useful for a more complete computation of the galaxy bispectrum. In section II we outline the quasi-single field inflation model. We discuss the power series expansion of the mode functions of the quantum fields $\pi$ and $s$ at small $|\tau|$, where $\tau$ is conformal time. For small $\mu/H$ and $m/H$, a method is developed to determine the power series coefficients needed to compute the two-, three-, four-, five- and six-point correlations of the curvature perturbation $\zeta$.\footnote{$\pi$ and $\zeta$ are linearly related.} In section III we compute the three-, four-, five- and six-point correlations of $\zeta$. The three- and four-point functions are computed for general wave-vectors, but the five- and six-point functions are only computed for the configurations of wave-vectors that are relevant to the long wavelength enhancements to the galactic halo bispectrum. Section IV introduces the bias expansion and the points above threshold model for the galactic halo overdensity. 
The results from Section III are used to compute the halo power spectrum and bispectrum. We also present the scaling of the $n$-point function of the halo overdensity in quasi-single field inflation. Concluding remarks are given in section V. \section{The model and the mode functions} We consider a quasi-single field inflation theory in which inflation is driven by a single scalar inflaton field $\phi$ and the inflaton is coupled to a single massive scalar isocurvaton field $s$. The classical background field of the inflaton, $\phi_0(t)$, is time-dependent but we will impose conditions so that to leading order in slow-roll parameters, the background value of $s$ is zero. We also impose a shift symmetry $\phi \rightarrow \phi + c$ and a $Z_{2}$ symmetry $\phi \rightarrow -\phi$ on the inflaton that is only broken by its potential. This implies that the isocurvaton field $s$ couples to derivatives of the inflaton. The lowest dimension operator coupling the inflaton to the isocurvaton is the dimension five operator, \begin{equation} {\cal L}_{{\rm dim} ~5}= \frac{1}{\Lambda}{g^{\mu \nu}\partial_{\mu}\phi \partial_{\nu} \phi} s. \end{equation} We choose the gauge in which the inflaton is only a function of time, $\phi(x) = \phi_0(t)$. We expand the potential for $s$ in a power series about $s=0$, $V(s)=V's +V'' s^2/2 + V'''s^3/3! +... $ and assume the tadpole in $s$ cancels, $(\dot \phi_0)^2/\Lambda -V'=0$. Since we work to leading order in slow-roll parameters, we can neglect ${\ddot{\phi}_0}$, making this cancellation possible. To obtain long wavelength enhancements to the correlations of biased objects, we need $m$, the mass of $s$ ($m^2=V''$), to be less than the Hubble constant during inflation, $H$. We assume there is some inflaton potential (likely non-analytic in $\phi$) that gives values of the power spectrum tilt $n_S$ and the tensor to scalar ratio $r$ consistent with observations. 
The Goldstone field $\pi(x)$, associated with time translational invariance breaking by the time dependence of $\phi_0$, gives rise to the curvature fluctuations. In a de-Sitter background, the Lagrangian describing $\pi(x)$ and $s(x)$ is \begin{align} \mathcal{L} = \mathcal{L}_{0} + \mathcal{L}_{{\rm int}} \end{align} where \begin{equation} \label{fft} \mathcal{L}_{0}=\frac{1}{2(H\tau)^{2}}\left(\left(\partial_{\tau}\pi\right)^{2} - \nabla\pi\cdot\nabla\pi + (\partial_{\tau}s)^{2} - \frac{m^{2}}{(H\tau)^{2}}s^{2}-\nabla s\cdot\nabla s-\frac{2\mu}{H\tau}s\partial_{\tau}\pi\right) \end{equation} and \begin{equation} \label{intlagrange} {\cal L}_{\rm int} = \frac{1}{(H\tau)^{4}}\left(\frac{(H\tau)^{2}}{\Lambda} \left((\partial_{\tau}\pi)^{2}- {\nabla { \pi}} \cdot {\nabla \pi} \right)s - \frac{V'''}{3!} s^3 - \frac{V^{(4)}}{4!} s^{4} \ldots \right) . \end{equation} In eq.~(\ref{fft}) we have introduced \begin{equation}\label{eq:mu} \mu=2{\dot { \phi}_0}/{{ \Lambda}} \end{equation} and conformal time $\tau= -e^{-Ht}/H$. We have rescaled $\pi$ by $\dot{\phi}_{0}$ (we take $\dot{\phi_0} >0$) to obtain a more standard normalization for the $\pi$ kinetic term. We have also included the measure factor $\sqrt{-g}$ in the Lagrangian so that the action is equal to $\int d^3x d\tau {\cal L}$. Note the unusual kinetic mixing term in (\ref{fft}) which is a result of the background inflaton field breaking Lorentz invariance. To compute correlation functions involving $\pi$ and $s$, we expand the quantum fields in terms of creation and annihilation operators. Since the fields $\pi$ and $s$ have kinetic mixing, they share a pair of creation and annihilation operators. 
Introducing $\eta=k \tau$ we write, \begin{equation}\label{eq:mode} \pi({\bf x},\tau)=\int {d^3 k \over (2 \pi)^3} \left( a^{(1)}({\bf k}) \pi_{k}^{(1)}(\eta) e^{i{\bf k}\cdot {\bf x}}+a^{(2)}({\bf k}) \pi_{k}^{(2)}(\eta)e^{i{\bf k} \cdot {\bf x}} +{\rm h.c.} \right) \end{equation} and \begin{equation}\label{eq:mode2} s({\bf x},\tau)=\int {d^3 k \over (2 \pi)^3} \left( a^{(1)}({\bf k}) {s}_{k}^{(1)}(\eta) e^{i{\bf k}\cdot {\bf x}}+a^{(2)}({\bf k}) {s}_{k}^{(2)}(\eta)e^{i{\bf k} \cdot {\bf x}} +{\rm h.c.} \right) \end{equation} By varying (\ref{fft}) we can obtain the equations of motion for the mode functions $\pi_{k}^{(i)}(\eta)$ and $s_{k}^{(i)}(\eta)$. These are \begin{equation}\label{eq:diff1} \pi^{(i)\prime \prime}_k - \frac{2\pi^{(i) \prime}_k}{\eta} + \pi^{(i)}_k - \frac{\mu}{H} \left( \frac{s^{(i)\prime}_k}{\eta} - \frac{3s^{(i)}_k}{\eta^2}\right) = 0 \ \end{equation} and \begin{equation}\label{eq:diff2} s^{(i)\prime \prime}_k-\frac{2s^{(i) \prime}_k}{\eta} + \left(1+\frac{m^2}{H^2\eta^2}\right)s^{(i)}_k + \frac{\mu}{H} \frac{\pi^{(i) \prime}_k}{\eta} = 0 \ , \end{equation} where a `` $'$ '' indicates an $\eta$ derivative. \subsection{Power Series Solution} As mentioned in the introduction it is difficult to solve equations (\ref{eq:diff1}) and (\ref{eq:diff2}) analytically for general $m$ and $\mu$. Fortunately, in the small $m/H$ and $\mu/H$ regime we do not need the mode functions' full time-dependence to determine the leading behavior of the correlation functions of $\pi$. Rather, we only need their small $-\eta$ behavior.\footnote{Conformal time $\eta$ satisfies $-\infty<\eta < 0$ with inflation ending at $\eta=0$.} To determine this, we obtain a power series solution to (\ref{eq:diff1}) and (\ref{eq:diff2}). 
To begin, we rescale the mode functions \begin{align} \label{rescaled mode functions} \pi^{(i)}_k (\eta) &= (H/k^{3/2}) \pi^{(i)} (\eta)\ \ \ \ \ \ \ \ s^{(i)}_k (\eta)= (H/k^{3/2}) s^{(i)} (\eta) \end{align} and then expand $\pi^{(i)}(\eta)$ and $s^{(i)}(\eta)$ as a power series in $-\eta$ \begin{align} \label{series solution} \pi^i(\eta)=\sum_{n=0}^{\infty}a^{(i)}_{r,n}(-\eta)^{n+r}\ \ \ \ \ \ \ \ s^{(i)}(\eta) = \sum_{n=0}^{\infty}b^{(i)}_{r,n}(-\eta)^{n+r}. \end{align} By plugging (\ref{series solution}) into (\ref{eq:diff1}) and (\ref{eq:diff2}), we derive relations among the coefficients $a^{(i)}_{r,n}$ and $b^{(i)}_{r,n}$ \begin{align} \label{coefficient equations} &\left[a^{(i)}_{r,0}r-{\mu \over H} b^{(i)}_{r,0}\right](r-3)(-\eta)^{r-2} + \left[a^{(i)}_{r,1}(r+1) - {\mu \over H}b^{(i)}_{r,1}\right](r-2)(-\eta)^{r-1}\cr &\ \ \ \ + \sum_{n=0}^{\infty}\left[\left[a^{(i)}_{r,n+2}(n+r+2)-{\mu \over H} b^{(i)}_{r,n+2}\right](n+r-1) + a^{(i)}_{r,n}\right](-\eta)^{n+r} = 0\cr &\left[\left[b^{(i)}_{r,0}(r-3)+{\mu \over H} a^{(i)}_{r,0}\right]r + b^{(i)}_{r,0}{m^{2}\over H^2}\right](-\eta)^{r-2} + \left[\left[b^{(i)}_{r,1}(r-2) + {\mu \over H} a^{(i)}_{r,1}\right](r+1) + b^{(i)}_{r,1}{m^{2}\over H^2}\right](-\eta)^{r-1}\cr &\ \ \ \ + \sum_{n=0}^{\infty}\left[\left[b^{(i)}_{r,n+2}(n+r-1)+{\mu \over H} a^{(i)}_{r,n+2}\right](n+r+2) + b^{(i)}_{r,n+2}{m^{2}\over H^2} + b^{(i)}_{r,n}\right](-\eta)^{n+r} = 0. \end{align} Since (\ref{coefficient equations}) is true for all $\eta<0$, the coefficient multiplying each power of $-\eta$ vanishes. The constraints due to the coefficients multiplying $(-\eta)^{n+r}$ provide recursion relations relating the $n+2$ coefficients to the $n$ ones. The constraints due to the coefficients multiplying $(-\eta)^{r-2}$ are \begin{align} \label{zero equations} (a^{(i)}_{r,0}r - {\mu \over H} b^{(i)}_{r,0})(r-3) = 0,\ \ \ \ \ \left[b^{(i)}_{r,0}(r-3) + {\mu \over H} a^{(i)}_{r,0}\right]r + b^{(i)}_{r,0}{m^{2} \over H^2} = 0. 
\end{align} Equation (\ref{zero equations}) implies the only possible values of $r$ are \begin{align} \label{r values} r = 0, 3,\ \alpha_{-},\ \alpha_{+} \end{align} where \begin{align} \label{alpha pm} \alpha_{\pm} = 3/2 \pm \sqrt{9/4-\left(\mu/H\right)^{2} - \left(m/H\right)^{2}}. \end{align} Note that $\alpha_{-}$ and $\alpha_{+}$ approach 0 and 3 when $m$ and $\mu$ approach zero, so small $\mu/H$ and $m/H$ imply small $\alpha_{-}$. Considering odd $n$ instead of even $n$ results in exactly the same solution, so we take $a_{r,1}^{(i)}=b_{r,1}^{(i)}=0$ to eliminate this redundant solution. There are then four branches of the series solution (\ref{series solution}). The leading power of each branch is $(-\eta)^{r}$ and the successive terms go like $(-\eta)^{r + 2 k}$ where $k$ is a positive integer. The series solutions (\ref{series solution}) are a linear combination of each branch. The small $-\eta$ behavior of $\pi^{(i)}$ and $s^{(i)}$ is then \begin{align} \label{small mode function behavior} \pi^{(i)}(\eta) &= a^{(i)}_{0}+ a^{(i)}_{-}(-\eta)^{\alpha_{-}} + a^{(i)}_{0,2}(-\eta)^{2}+a^{(i)}_{-,2}(-\eta)^{\alpha_{-}+2}+ a^{(i)}_{+}(-\eta)^{\alpha_{+}} + a^{(i)}_{3}(-\eta)^{3} + \dots\cr s^{(i)}(\eta) &= b^{(i)}_{-}(-\eta)^{\alpha_{-}}+b^{(i)}_{0,2}(-\eta)^{2} + b^{(i)}_{-,2}(-\eta)^{\alpha_{-}+2} + b^{(i)}_{+}(-\eta)^{\alpha_{+}}+b^{(i)}_{3}(-\eta)^{3} + \dots \end{align} Note that we have used the notation $a_{\pm,n}^{(i)}\equiv a_{\alpha_\pm,n}^{(i)}$ and $b_{\pm,n}^{(i)}\equiv b_{\alpha_\pm,n}^{(i)}$, and we have also written the $n=0$ coefficients as $a_{r}^{(i)}$. Moreover, $b_0^{(i)}=0$ due to eq.~(\ref{zero equations}). As $-\eta \rightarrow 0$, $s^{(i)}(\eta)\rightarrow 0$ while $\pi^{(i)}(\eta) \rightarrow a^{(i)}_{0}$. However, for $\alpha_{-} \ll 1$ the $(-\eta)^{\alpha_{-}}$ term will remain significant even for $-\eta \ll 1$, which means $\pi$ can undergo superhorizon evolution.
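The allowed exponents in (\ref{r values}) follow from requiring that the $2\times 2$ indicial system in (\ref{zero equations}) have a nontrivial solution, which can be checked numerically (a quick consistency sketch in units with $H=1$):

```python
import math

# Leading-order (indicial) system from the power series: a pure power
# (-eta)^r solves the mode equations at lowest order only if the 2x2
# coefficient matrix is singular.  Its determinant factors as
# r*(r-3)*[r*(r-3) + mu^2 + m^2], giving r = 0, 3, alpha_-, alpha_+.

def indicial_det(r, mu, m):
    return r * (r - 3) * (r * (r - 3) + mu**2 + m**2)

def alphas(mu, m):
    disc = math.sqrt(9 / 4 - mu**2 - m**2)
    return 3 / 2 - disc, 3 / 2 + disc

mu, m = 0.3, 0.3
a_minus, a_plus = alphas(mu, m)
for r in (0.0, 3.0, a_minus, a_plus):
    assert abs(indicial_det(r, mu, m)) < 1e-12

# alpha_- + alpha_+ = 3 and alpha_- * alpha_+ = mu^2 + m^2; for small
# mu and m the small root is approximately (mu^2 + m^2)/3.
print(a_minus, (mu**2 + m**2) / 3)
```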
We can estimate the value of $\eta$ at which $\pi$ stops evolving using $\alpha_{-} \simeq \left(\mu^{2} + m^{2}\right)/(3H^{2})$ which is valid for small $\mu$ and $m$. The $\pi$ modes then stop evolving at $-\eta \sim e^{-3H^2/\left(\mu^{2} + m^{2}\right)}$. In this paper we only consider values of $m$ and $\mu$ such that the modes of interest stop evolving before the end of inflation. Then one does not need to consider the details of reheating to make predictions for the curvature perturbations. Equation (\ref{zero equations}) can also be used to relate the $a^{(i)}$ and $b^{(i)}$ coefficients multiplying the leading $(-\eta)^{r}$ term of each branch \begin{align} \label{btoas} b^{(i)}_{0} = 0,\ b^{(i)}_{-} = \frac{H a^{(i)}_{-} \alpha_{-}}{\mu},\ b^{(i)}_{+} = \frac{H a^{(i)}_{+} \alpha_{+}}{\mu}, b^{(i)}_{3} = \frac{-3 H\mu}{m^{2}}a^{(i)}_{3}. \end{align} A full solution to the mode equations is unnecessary. We only need certain combinations of the power series coefficients to derive the leading (for small $m$ and $\mu$) behavior of the correlation functions of $\pi$ and $s$. For example, the combinations $\sum\limits_{i}|a^{(i)}_{0}|^{2}$, $\sum\limits_{i}a^{(i)}_{0}b^{(i)*}_{-}$ and $\sum\limits_{i}|b^{(i)}_{-}|^{2}$ determine the two point functions $\left<\pi \pi\right>$, $\left<\pi s\right>$ and $\left<s s\right>$ at late times. \subsection{Power Series Coefficients} \label{commutation relations section and other stuff} In this section, we outline the derivation of the combinations of power series coefficients that are needed to compute the correlation functions of $\pi$ when $m/H$ and $\mu/H$ are small. We begin with the combination $\sum\limits_{i}|b^{(i)}_{-}|^{2}$, which can be obtained by matching to an effective field theory that reproduces the correct two point function of $s$ in the small $\eta$ limit. 
It turns out that once we know $\sum\limits_{i}|b^{(i)}_{-}|^{2}$ we can determine $\sum\limits_{i}|a^{(i)}_{0}|^{2}$ and $\sum\limits_{i}a^{(i)}_{0}b^{(i)*}_{-}$ from the full theory. In the small $-\eta$ limit we can neglect the second term appearing in (\ref{fft}). Then: \begin{align} \label{EFT2} \mathcal{L}_0^{{\rm EFT}} = \frac{1}{2(H\tau)^{2}}\left(\left({\partial_{\tau} \pi}\right)^{2}+ (\partial_{\tau}s)^{2} - \frac{m^{2}}{(H\tau)^{2}}s^{2}-\nabla s\cdot\nabla s-\frac{2\mu}{H\tau}s{\partial_{\tau}\pi}\right) \end{align} The $\pi$ equation of motion gives \begin{align} \label{small tau pi} \partial_{\tau}\pi = \frac{\mu}{H}\frac{s(\tau)}{\tau} \end{align} where we have dropped a term proportional to $\tau^2$ in (\ref{small tau pi}). The solution of eq.~(\ref{small tau pi}) is \begin{align} \label{small tau pi in terms of s} \pi(\tau) = c_{1} + \int\limits_{-\infty}^{\tau}\frac{\mu}{H}\frac{s(\tau^{\prime})}{\tau^{\prime}}d\tau^{\prime} \end{align} where $c_{1}$ is a constant operator. As mentioned earlier, since (for small $\eta$) $s^{(i)}_{k}(\eta) \simeq b^{(i)}_{-}(-\eta)^{\alpha_{-}}$ and $\alpha_{-}$ is small, the mode functions $s^{(i)}_{k}$ remain nonzero even after the mode wave-vector has exited the horizon (i.e., when $|\eta|<1)$. Due to the factor of $1/\tau$ in the integral in (\ref{small tau pi in terms of s}), the $\pi$ mode functions will undergo superhorizon growth and can become quite large if $m/H$ and $\mu/H$ are small. We use eq.~(\ref{small tau pi in terms of s}) to express the field $\pi$ in terms of $s$. Integrating out $\pi$ using its equation of motion yields an effective Lagrangian for $s$: \begin{align} \label{EFT} \mathcal{L}_0^{{\rm EFT}} = \frac{1}{2(H\tau)^{2}}\left((\partial_{\tau}s)^{2} - \frac{m^{2}+\mu^2}{(H\tau)^{2}}s^{2}-\nabla s\cdot\nabla s \right). 
\end{align} Since in this effective theory there is only one field $s$, it can be written in terms of a single mode function $s_k$ that satisfies the differential equation, \begin{align} \label{s mode equation} s_k''(\eta) - \frac{2}{\eta}s_k'(\eta) + s_k(\eta) + \left(\frac{\mu^{2}}{H^{2}} + \frac{m^{2}}{H^{2}}\right)\frac{s_k(\eta)}{\eta^{2}} = 0. \end{align} The solution to (\ref{s mode equation}) that satisfies the asymptotic Bunch-Davies vacuum condition and is consistent with the canonical commutation relations is \begin{align} \label{s mode solution} s_{k}(\eta) = H\sqrt{\frac{\pi}{4k^{3}}}(-\eta)^{3/2}H^{(1)}_{\nu}(-\eta) \end{align} where $\nu = \sqrt{9/4 - (\mu/H)^{2} - (m/H)^{2}}$ and $H_\nu^{(1)}$ is a Hankel function of the first kind. The small $-\eta$ limit of (\ref{s mode solution}) is \begin{align} \label{small s mode function} s_{k}(\eta) = H(-\eta)^{\alpha_{-}}\frac{i}{k^{3/2}}\frac{1}{\sqrt{2}}. \end{align} Using (\ref{small s mode function}), we can determine the small $-\eta$ limit of the two-point function of the Fourier transform of $s$. Denoting this Fourier transform by $s_{\bf k}$, we obtain \begin{align} \label{2pt:ss} \left<s_{\bf k}s_{\bf k'}\right>(\eta) = (2\pi)^{3}\delta^{3}\left({\bf k} + {\bf k}'\right)\frac{H^{2}}{2k^{3}}(-\eta)^{2\alpha_{-}}. \end{align} By matching the full theory prediction for $\left< s s \right>$ to (\ref{2pt:ss}) we find \begin{align} \label{b minus squared} \sum\limits_{i}\left|b_{-}^{(i)}\right|^{2} = \frac{1}{2}. \end{align} Equation (\ref{small tau pi in terms of s}) can be used to determine the leading small $-\eta$ behavior of the $\pi$ mode functions in the full theory. It gives \begin{align} \label{pi 0 in terms of s} \pi^{(i)}(0) = c^{(i)}_{1} + \int\limits_{-\infty}^{0}\frac{\mu}{H}\frac{s^{(i)}(\eta^{\prime})}{\eta^{\prime}}d\eta^{\prime}.
\end{align} From equation (\ref{small mode function behavior}) we see that the integrand in (\ref{pi 0 in terms of s}) goes like $(-\eta)^{-1 + \alpha_{-}}$ in the IR region of the integral, i.e. $-\eta < 1$. For small $m/H$ and $\mu/H$, $\alpha_{-}$ is very small and the integral will receive a large contribution from the IR. On the other hand, the contribution from the UV is small because the mode functions become oscillatory with smaller amplitude when $-\eta > 1$. This means the integral is fixed by the integrand's IR behavior so that\footnote{We will use these same arguments when we evaluate the time integrals involved in the calculation of higher point correlators.} \begin{align} \label{eft pi zero} \pi^{(i)}(0) \simeq c^{(i)}_{1} - \frac{\mu b_{-}^{(i)}}{H}\int\limits_{-1}^{0}(-\eta)^{-1 + \alpha_{-}}d\eta = c^{(i)}_{1} - \frac{\mu b_{-}^{(i)}}{H}\frac{1}{\alpha_{-}} = c^{(i)}_{1} - \frac{3\mu Hb_{-}^{(i)}}{\mu^{2} + m^{2}}. \end{align} In (\ref{eft pi zero}) we have used $\alpha_{-}^{-1} \simeq 3H^{2}/\left(\mu^{2} + m^{2}\right)$. The corrections to (\ref{eft pi zero}) are suppressed by powers of $\alpha_{-}$ and are unimportant when $m/H$ and $\mu/H$ are small. The integral is insensitive to the exact value of the UV cutoff because $\alpha_{-}$ is small. We can now compute the two-point function of the Fourier transform of $\pi$, which can be written as \begin{align} \label{2pt:pipi} \left<\pi_{\bf k}(0)\pi_{\bf k'}(0)\right> \simeq (2\pi)^{3}\delta\left({\bf k} + {\bf k}'\right)\frac{H^{2}}{k^{3}} C_{2}(\mu, m). \end{align} We determine $C_{2}(\mu, m)$ by taking the magnitude squared of (\ref{eft pi zero}): \begin{align} \label{C2 full exp} C_{2}(\mu, m) \simeq \sum\limits_{i}\left[\left|c^{(i)}_{1}\right|^{2} + \frac{9 \mu^{2} H^{2}}{\left(\mu^{2} + m^{2}\right)^{2}}\left|b^{(i)}_{-}\right|^{2} - \frac{6\mu H}{\mu^{2} + m^{2}}{\rm Re}\left(c_{1}^{(i)}b_{-}^{(i)*}\right)\right].
\end{align} In writing (\ref{C2 full exp}), we have only kept the terms that are most important for $m/H$ and $\mu/H$ small. Now $\left< \pi \pi \right>$ is invariant under $s \rightarrow -s$.\footnote{If we treat $\mu$ as a perturbation then all of the corrections to $\left< \pi \pi \right>$ involve even powers of the $s$ field.} This implies the last term in the brackets of (\ref{C2 full exp}) has to vanish. We can determine the first term by noting that the constant $c_{1}^{(i)}$ is $\mu$ independent. This can be seen from the fact that it is a boundary condition fixed by the UV, thereby independent of the mixing factor $\mu$. We can then fix the first term in (\ref{C2 full exp}) by demanding that $C_{2}(0,m) = 1/2$. Finally, using (\ref{b minus squared}) we find that \begin{align} \label{leading C2} C_{2}(\mu, m) \simeq \frac{1}{2} + \frac{9\mu^{2}H^{2}}{2\left(\mu^{2} + m^{2}\right)^{2}}. \end{align} Equation (\ref{leading C2}) gives the leading behavior of $C_{2}(\mu, m)$ in the limit of small $m/H$ and $\mu/H$. We can determine the accuracy of (\ref{leading C2}) by extending the numerical techniques developed in \cite{Assassi:2013gxa} and \cite{An:2017hlx} to the region of small $m/H$ and $\mu/H$ and computing the power spectrum numerically. This is done in appendix \ref{numerical checks appendix}. We now compute the leading $m$ and $\mu$ dependence of the curvature perturbation two-point function. The curvature perturbation is related to the Goldstone field by \begin{align} \zeta_{\bf k} = -\frac{H}{\dot{\phi_0}}\pi_{\bf k}. \end{align} The curvature perturbation two-point is then \begin{align} \label{power spectrum} \left<\zeta_{{\bf k}_{1}}\zeta_{{\bf k}_{2}}\right> = \left(\frac{H}{\dot{\phi}_{0}}\right)^{2}\left<\pi_{{\bf k}_{1}}\pi_{{\bf k}_{2}}\right> &= (2\pi)^{3}\delta({\bf k}_{1} + {\bf k}_{2}){\cal P}_{\zeta}(k)\cr &= (2\pi)^{3}\delta({\bf k}_{1} + {\bf k}_{2})\left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{2}\frac{1}{k^{3}} C_{2}(\mu,m). 
\end{align} Using (\ref{power spectrum}) we can express $\dot{\phi}_{0}$ in terms of $\mu$, $m$, and the measured value of the dimensionless power spectrum $\Delta_{\zeta}$ \cite{Ade:2015xua}: \begin{align} \Delta_{\zeta}^2 = 2.12 \times 10^{-9} = \frac{k^{3}}{2\pi^{2}}{\cal P}_{\zeta}(k) = \left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{2}\frac{1}{2\pi^{2}}C_{2}(\mu,m). \end{align} This implies \begin{align} \label{phi not dot} \frac{\dot{\phi}_{0}}{H^{2}} = \sqrt{\frac{C_{2}(\mu,m)}{2\pi^{2}\Delta_{\zeta}^2}}. \end{align} We can determine the combination $\sum\limits_{i}a_{0}^{(i)}b_{-}^{(i)*}$ by multiplying both sides of (\ref{eft pi zero}) by $b_{-}^{(i)*}$ and summing over $i$. This gives \begin{align} \label{first a0 b- eq} \sum\limits_{i}a_{0}^{(i)}b_{-}^{(i)*} = \sum\limits_{i}c_{1}^{(i)}b^{(i)*}_{-} - \frac{3\mu H}{2\left(\mu^{2} + m^{2}\right)}. \end{align} We have already shown that $\sum\limits_{i}{\rm Re}\left(c_{1}^{(i)}b_{-}^{(i)*}\right) = 0$, which implies \begin{align} \label{IR constraints 1} \sum\limits_{i}{\rm Re}\left(a_{0}^{(i)}b^{(i)*}_{-}\right) = -\frac{3\mu H}{2\left(\mu^{2} + m^{2}\right)}. \end{align} The remaining combinations of power series coefficients needed to compute the higher order correlation functions of $\pi$ are fixed using the canonical commutation relations of $s$ and $\pi$. Consider the equal time relation $\left[s({\bf x},\tau), \pi({\bf y},\tau)\right] = 0$. By inserting (\ref{eq:mode}) and (\ref{eq:mode2}) into this relation, we find \begin{align} \left[\pi({\bf x},\tau), s({\bf y},\tau)\right] = \int \frac{d^{3}{\bf k}}{(2\pi)^{3}}e^{i{\bf k}\cdot\left({\bf x} - \bf {y}\right)}\sum\limits_{i}\left[\pi^{(i)}_{k}(\eta)s_{k}^{(i)*}(\eta) - {\rm c.c.} \right] = 0. \end{align} The mode functions must then satisfy \begin{align} \label{comm:s phi} \sum\limits_{i}{\rm Im}\left[\pi^{(i)}_{k}(\eta)s_{k}^{(i)*}(\eta)\right] = 0 \end{align} for all $\eta$. 
Plugging the leading IR behavior of the mode functions (\ref{small mode function behavior}) into (\ref{comm:s phi}) and demanding that it hold at orders $(-\eta)^{\alpha_-}$, $(-\eta)^{\alpha_+}$, $(-\eta)^2$, and $(-\eta)^3$, respectively, yields the following constraints \begin{align} \label{IRconstraints: phi s} \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{-}\right] &= \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{+}\right] =\sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{0,2}\right]\cr &= \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{3} + a^{(i)}_{+}b^{(i)*}_{-} + a^{(i)}_{-}b^{(i)*}_{+}\right] = 0. \end{align} Given the fact that the recursion relations (\ref{coefficient equations}) and eq. (\ref{btoas}) are real, eqs. (\ref{IRconstraints: phi s}) and (\ref{btoas}) further imply that: \begin{align} \label{more zero comms} \sum_i{\rm Im}\left[a_0^{(i)}b_{-,2}^{(i)*}\right]=\sum_i {\rm Im}\left[b_-^{(i)}b_{0,2}^{(i)*}\right]=0 \end{align} Moreover, the recursion relations (\ref{coefficient equations}) being real along with the fact that $\sum_i{\rm Im}\left[|b_-^{(i)}|^2\right]=0$ imply that \begin{align} \label{bb zero comm} \sum_i {\rm Im}\left[b_-^{(i)}b_{-,2}^{(i)*}\right]=0 \end{align} Furthermore, using the commutation relation $\left[\pi({\bf x},\tau), \Pi_{\pi}({\bf y},\tau)\right] = i\delta^{3}({\bf x} - {\bf y})$ gives: \begin{align} \label{phi Pi phi} &\sum\limits_{i}{\rm Im}\left[3a^{(i)}_{0}a^{(i)*}_{3} + \alpha_{+}a^{(i)}_{-}a^{(i)*}_{+} + \alpha_{-}a^{(i)}_{+}a^{(i)*}_{-}\right] = -\frac{1}{2}\cr &\sum_i {\rm Im}\left[a_-^{(i)}a_3^{(i)*}\right]=0 \end{align} Again using the fact that relations (\ref{btoas}) are real, we can convert the second equation in (\ref{phi Pi phi}) to: \begin{align} \label{bb3 zero comm} \sum_i{\rm Im}\left[b_-^{(i)}b_3^{(i)*}\right]=0 \end{align} Using (\ref{btoas}), we can combine the final equation of (\ref{IRconstraints: phi s}) with the first equation of (\ref{phi Pi phi}) to find \begin{align} \label{IR
constraints 2} \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{3}\right] &= \frac{\mu H}{2\left(\mu^{2} + m^{2}\right)}\cr \sum\limits_{i}{\rm Im}\left[b_{-}^{(i)}b^{(i)*}_{+}\right] &= \frac{-1}{2\left(\alpha_{+} - \alpha_{-}\right)} \simeq -\frac{1}{6}. \end{align} The equalities in eq. (\ref{IR constraints 2}) hold for all $m$ and $\mu$ such that $m^{2} + \mu^{2} \leq 9H^{2}/4$, i.e. for $\alpha_{-}$ and $\alpha_{+}$ real. Equations (\ref{b minus squared}), (\ref{leading C2}), (\ref{IR constraints 1}), (\ref{more zero comms}), (\ref{bb zero comm}), (\ref{bb3 zero comm}), and (\ref{IR constraints 2}) comprise the full set of relations among power series coefficients we need to compute the leading $m$ and $\mu$ dependence of the correlation functions of $\pi$. We will also need the fact that $n>0$ coefficients $a_{r,n}^{(i)}$ and $b_{r,n}^{(i)}$ are not enhanced by powers of $\alpha_-^{-1}$ compared to $a_r^{(i)}$ and $b_{r}^{(i)}$ coefficients for small $\alpha_-$, a fact which is simple to see from the recursion relations (\ref{coefficient equations}). \iffalse The remaining combinations of power series coefficients needed to compute the higher order correlation functions of $\pi$ are fixed using the canonical commutation relations of $s$ and $\pi$. Consider the equal time relation $\left[s({\bf x},\tau), \pi({\bf y},\tau)\right] = 0$. By inserting (\ref{eq:mode}) and (\ref{eq:mode2}) into this relation, we find \begin{align} \left[\pi({\bf x},\tau), s({\bf y},\tau)\right] = \int \frac{d^{3}{\bf k}}{(2\pi)^{3}}e^{i{\bf k}\cdot\left({\bf x} - \bf {y}\right)}\sum\limits_{i}\left[\pi^{(i)}_{k}(\eta)s_{k}^{(i)*}(\eta) - {\rm c.c.} \right] = 0. \end{align} The mode functions must then satisfy \begin{align} \label{comm:s phi} \sum\limits_{i}{\rm Im}\left[\pi^{(i)}_{k}(\eta)s_{k}^{(i)*}(\eta)\right] = 0 \end{align} for all $\eta$. 
Plugging the leading IR behavior of the mode functions (\ref{small mode function behavior}) into (\ref{comm:s phi}) and demanding it hold order by order in $\eta$ yields the following constraints \begin{align} \label{IRconstraints: phi s} &\sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{0,2}\right] = \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{+}\right] = \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{-}\right] =\sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{-,2} + a^{(i)}_{0,2}b^{(i)*}_{-} + a^{(i)}_{-}b^{(i)*}_{0,2}\right]\cr &= \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{3} + a^{(i)}_{+}b^{(i)*}_{-} + a^{(i)}_{-}b^{(i)*}_{+}\right] = 0. \end{align} We can obtain similar constraints using $\left[\pi({\bf x},\tau),\Pi_{s}( {\bf y},\tau)\right] = 0$ and $\left[\Pi_{\pi}({\bf x},\tau), s({\bf y},\tau)\right] = 0$, where $\Pi_{\pi}$ and $\Pi_{s}$ are the conjugate momenta of the $\pi$ and $s$ fields: \begin{align} \Pi_{\pi}({\bf x},\tau) &\equiv \frac{\partial \mathcal{L}}{\partial \phi'} = \frac{1}{(H\tau)^{2}}\left[\partial_{\tau}\phi({\bf x},\tau) - \frac{\mu}{H\tau}s({\bf x},\tau) \right]\cr \Pi_{s}({\bf x},\tau) &\equiv \frac{\partial \mathcal{L}}{\partial s'} = \frac{1}{(H\tau)^{2}}\partial_{\tau}s({\bf x},\tau). \end{align} Performing the same steps as before, we find \begin{align} \label{IRconstraints: pi Pi s} \sum\limits_{i}{\rm Im}\left[\left(\alpha_{-} + 2\right)a^{(i)}_{0}b^{(i)*}_{-,2} + \alpha_{-}a^{(i)}_{0,2}b^{(i)*}_{-} + 2 a^{(i)}_{-}b^{(i)*}_{0,2}\right] = \sum\limits_{i}{\rm Im}\left[3a^{(i)}_{0}b^{(i)*}_{3} + \alpha_{-}a^{(i)}_{+}b^{(i)*}_{-} + \alpha_{+}a^{(i)}_{-}b^{(i)*}_{+}\right] = 0 \end{align} and \begin{align} \label{IRconstraints: Pi pi s} \sum\limits_{i}{\rm Im}\left[2a^{(i)}_{0,2}b^{(i)*}_{-} + \alpha_{-}a^{(i)}_{-}b^{(i)*}_{0,2}\right] = \sum\limits_{i}{\rm Im}\left[\alpha_{+}a^{(i)}_{+}b^{(i)*}_{-} + \alpha_{-}a^{(i)}_{-}b^{(i)*}_{+}\right] = 0. 
\end{align} The three constraints in equations ($\ref{IRconstraints: phi s}$), ($\ref{IRconstraints: pi Pi s}$) and ($\ref{IRconstraints: Pi pi s}$) relating ${\rm Im}\left[b^{(i)}_{-,2}a^{(i)*}_{0}\right]$, ${\rm Im}\left[b^{(i)}_{-}a^{(i) *}_{0,2}\right]$ and ${\rm Im}\left[b^{(i)}_{0,2}a^{(i)*}_{-}\right]$ are linearly independent and imply each term vanishes: \begin{align} \label{pseries constraints} {\rm Im}\left[b^{(i)}_{-,2}a^{(i)*}_{0}\right] = {\rm Im}\left[b^{(i)}_{-}a^{(i)*}_{0,2}\right] = {\rm Im}\left[b^{(i)}_{0,2}a^{(i)*}_{-}\right] = 0. \end{align} On the other hand, the constraints relating the combinations ${\rm Im}\left[b^{(i)}_{3}a^{(i)*}_{0}\right]$, ${\rm Im}\left[b^{(i)}_{-}a^{(i)*}_{+}\right]$ and ${\rm Im}\left[b^{(i)}_{+}a^{(i)*}_{-}\right]$ are linearly dependent, which means each combination can be nonzero. In fact, we can derive an exact formula for ${\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{3}\right]$ using the commutation relation $\left[\pi({\bf x},\tau), \Pi_{\pi}({\bf y},\tau)\right] = i\delta^{3}({\bf x} - {\bf y})$, which gives \begin{align} \label{phi Pi phi} \sum\limits_{i}{\rm Im}\left[3a^{(i)}_{0}a^{(i)*}_{3} + \alpha_{+}a^{(i)}_{-}a^{(i)*}_{+} + \alpha_{-}a^{(i)}_{+}a^{(i)*}_{-}\right] = -\frac{1}{2} \end{align} Using (\ref{btoas}), we can combine (\ref{IRconstraints: pi Pi s}) with (\ref{IRconstraints: Pi pi s}) to find \begin{align} \label{IR constraints 2} \sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{3}\right] &= \frac{\mu H}{2\left(\mu^{2} + m^{2}\right)}\cr \sum\limits_{i}{\rm Im}\left[b_{-}^{(i)}b^{(i)*}_{+}\right] &= \frac{-1}{2\left(\alpha_{+} - \alpha_{-}\right)} \simeq -\frac{1}{6}. \end{align} which is true for all $m$ and $\mu$ such that $m^{2} + \mu^{2} \leq 9H^{2}/4$, i.e. for $\alpha_{-}$ and $\alpha_{+}$ real. The last set of relations of power series coefficients can be obtained from $\left[s({\bf x},\tau),\Pi_{s}({\bf y},\tau)\right]=i\delta^{3}({\bf x} - {\bf y})$. 
This identity implies \begin{align} \label{b coeff constrants comm} {\rm Im}\left[b^{(i)}_{-}b^{(i)*}_{0,2}\right] = {\rm Im}\left[b^{(i)}_{-}b^{(i)*}_{3}\right] = {\rm Im}\left[b^{(i)}_{-}b^{(i)*}_{-,2}\right] = 0. \end{align} Equations (\ref{IR constraints 1}), (\ref{IRconstraints: phi s}), (\ref{pseries constraints}), (\ref{IR constraints 2}), and (\ref{b coeff constrants comm}) comprise the full set of relations among power series coefficients we need to compute the leading $m$ and $\mu$ dependence of the correlation functions of $\pi$. \fi \section{Primordial Non-Gaussianities} \label{Primordial Non-Gaussianities label} In this section we compute the leading $m$ and $\mu$ behavior of the connected three- and four-point functions of the curvature perturbation $\zeta$ for arbitrary external wave-vectors. We also compute the connected five- and six-point functions in certain kinematic limits. We will use these results to calculate the two- and three-point functions of biased objects. We perform the computation of these correlation functions using the in-in formalism \cite{Weinberg:2005vy}. We will mostly use the commutator form of the in-in correlator of an operator $\mathcal{O}(0)$: \begin{align} \label{in in} \langle \mathcal{O}(0) \rangle = \sum_{N=0}^{\infty}i^{N}\int_{-\infty}^{0}d\tau_{N}\int_{-\infty}^{\tau_{N}}d\tau_{N-1}\dots \int_{-\infty}^{\tau_{2}}d\tau_{1}\langle \lbrack H_{int}^{I}(\tau_{1}), \lbrack H_{int}^{I}(\tau_{2}),\dots\lbrack H_{int}^{I}(\tau_{N}),\mathcal{O}^{I}(0)\rbrack \dots \rbrack \rangle_{I} \end{align} where $I$ denotes a state or operator evolving in the interaction picture and $H_{int}$ denotes the interaction Hamiltonian\footnote{We restrict our attention to renormalizable terms in the potential for $s$.} \begin{align} \label{Hint} H_{int}(\tau) = \frac{1}{(H \tau)^4}\int d^{3}{\bf x} \left[\frac{1}{\Lambda}s(x) g^{\mu\nu}\partial_{\mu}\pi(x) \partial_{\nu} \pi(x) + \frac{V'''}{3!}s(x)^{3} + \frac{V^{(4)}}{4!}s(x)^{4}\right].
\end{align} For simplicity, we assume $V^{(4)}$ is much smaller than $\left(V'''/H\right)^{2}$ and can be neglected. We have also explored the importance of the $s\partial \pi \partial \pi$ interaction in comparison with the $s^3$ interaction for the primordial curvature bispectrum. For the range of parameters that we are using in this paper, we find numerically that the ratio of these contributions is $O(10^{-3})/f_{NL}$. We suspect that this interaction is subdominant for the other primordial correlation functions as well, and neglect this interaction henceforth. All relevant interactions are then mediated by the $V'''$ term. We assume $|V'''|/H<1$ so that perturbation theory is valid. \subsection{Three-Point Function}\label{three point function section} The three-point function of $\zeta$ can be written \begin{align} \left<\zeta_{{\bf k}_{1}}\zeta_{{\bf k}_{2}}\zeta_{{\bf k}_{3}}\right> \equiv B_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3})(2\pi)^{3}\delta^{3}\left({\bf k}_{1} + {\bf k}_{2} + {\bf k}_{3}\right). \end{align} The leading contribution to the bispectrum $B_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3})$ is obtained by inserting a single factor of the $V'''$ interaction into (\ref{in in}). This yields \begin{align} \label{3 point no normalization} B_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3}) &=-2V'''\left(\frac{H}{\dot{\phi}_{0}}\right)^{3}{\rm Im}\int\limits_{-\infty}^{0}\frac{d\tau}{(H\tau)^{4}}\prod_{l=1}^{3}\left[\pi^{(1)}_{k_{l}}(0)s_{k_{l}}^{(1)*}(k_{l} \tau) + \pi^{(2)}_{k_{l}}(0)s_{k_{l}}^{(2)*}(k_{l} \tau)\right]. 
\end{align} Equation (\ref{3 point no normalization}), written in terms of the rescaled mode functions (\ref{rescaled mode functions}), becomes \begin{align} \label{3 point} B_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3}) = -2\left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{3}\left(\frac{V'''}{H}\right)\left(\prod_{i}^{3}\frac{1}{k_{i}^{3}}\right){\rm Im}\int\limits_{-\infty}^{0}\frac{d\tau}{\tau^{4}}\prod_{l=1}^{3}\sum\limits_{i}\pi^{(i)}(0)s^{(i)*}(k_{l} \tau). \end{align} Let us now focus on the evaluation of the integral in (\ref{3 point}), which can be written: \begin{align} \label{3 point master integral} k_{ UV}^{3}{\rm Im}\int\limits_{-\infty}^{0}\frac{d\eta}{\eta^{4}}\prod_{l =1}^{3}\sum\limits_{i}\pi^{(i)}(0)s^{(i)*}\left(\frac{k_{l}}{k_{UV}}\eta\right) \end{align} where we define $k_{UV} = {\rm max}(k_{l})$ and $\eta=k_{UV}\tau$. In the small $\mu$ and $m$ regime, (\ref{3 point master integral}) receives most of its support from the IR region of the integral (when the arguments of the mode functions are less than 1 in magnitude) due to the superhorizon growth mentioned in the discussion following (\ref{small tau pi in terms of s}). The contribution from the UV region is subdominant. Our choice of $k_{UV}$ implies the leading $m$ and $\mu$ contribution to the integral comes from the region $-1 \le \eta \le 0$, and (\ref{3 point master integral}) becomes: \begin{align} \label{IR 3 point} &k_{UV}^{3}{\rm Im} \int\limits_{-1}^{0}\frac{d\eta}{\eta^{4}}\prod_{l=1}^{3}\sum\limits_{i}\left[\left(a^{(i)}_{0}b^{(i)*}_{-}\right)\left(-\frac{k_{l}}{k_{UV}}\eta\right)^{\alpha_{-}} + \left(a^{(i)}_{0}b^{(i)*}_{0,2}\right)\left(-\frac{k_{l}}{k_{UV}}\eta\right)^{2}\right.\cr &\left. 
+ \left(a^{(i)}_{0}b^{(i)*}_{-,2}\right)\left(-\frac{k_{l}}{k_{UV}}\eta\right)^{\alpha_{-}+2} +\left(a^{(i)}_{0}b^{(i)*}_{+}\right)\left(-\frac{k_{l}}{k_{UV}}\eta\right)^{\alpha_{+}} + \left(a^{(i)}_{0}b^{(i)*}_{3}\right)\left(-\frac{k_{l}}{k_{UV}}\eta\right)^{3} + O(\eta^4) \right].\cr \end{align} Note the integral is potentially IR divergent because of the factor of $1/\eta^{4}$. However, eqs. (\ref{IRconstraints: phi s}) and (\ref{more zero comms}) imply the coefficients multiplying the IR divergent terms are zero, and that the leading $\mu$ and $m$ behavior of (\ref{IR 3 point}) is \begin{align} \label{C3 equil IR approx} \left(\sum\limits_{i} {\rm Re}\left[a^{(i)}_{0}b_{-}^{(i)*}\right]\right)^{2} & \left(\sum\limits_{i}{\rm Im}\left[a^{(i)}_{0}b^{(i)*}_{3}\right]\right)\left[k^{3}_{1}\left(\frac{k_{2}k_{3}}{k_{UV}^{2}}\right)^{\alpha_{-}} + {\rm cyc.\ perm}\right]\int\limits_{-1}^{0}d\eta (-\eta)^{-1 + 2\alpha_{-}}\cr &= \frac{27}{16}\frac{\mu^{3}H^5}{\left(\mu^{2} + m^{2}\right)^{4}}\left[k^{3}_{1}\left(\frac{k_{2}k_{3}}{k_{UV}^{2}}\right)^{\alpha_{-}} + {\rm cyc.\ perm} \right]. \end{align} As long as $\alpha_{-}$ is small, the answer does not depend on the precise choice of $k_{UV}$; we only need to choose it to be of the same order as the hardest wave-vector entering the vertex.\footnote{The ratios of external wave-vectors to $k_{UV}$ raised to the power $\alpha_{-}$ in equation (\ref{C3 equil IR approx}) can be interpreted as the re-summation of leading logs in the $\alpha_{-}$ expansion.} Equivalently, the answer is insensitive to the precise choice of the lower bound of the $\eta$ integral.
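As a quick sketch of the algebra behind the $27/16$ coefficient: it combines ${\rm Im}[\sum_i a^{(i)}_{0}b^{(i)*}_{3}] = \mu H/2(\mu^{2}+m^{2})$ from (\ref{IR constraints 2}), the elementary integral $\int_{-1}^{0}d\eta\,(-\eta)^{-1+2\alpha_{-}} = 1/2\alpha_{-}$, the leading small-mass root $\alpha_{-}\simeq(\mu^{2}+m^{2})/3H^{2}$, and the leading value ${\rm Re}[\sum_i a^{(i)}_{0}b^{(i)*}_{-}]\simeq 3\mu H/2(\mu^{2}+m^{2})$; the last value is taken here as an input implied by the surrounding relations rather than rederived. A minimal numerical check:

```python
def coefficient_check(mu, m, H=1.0):
    """Combine the pieces entering (C3 equil IR approx) and compare with 27/16."""
    alpha_minus = (mu**2 + m**2)/(3*H**2)      # leading small-(mu, m) behavior of alpha_-
    re_a0bm = 1.5*mu*H/(mu**2 + m**2)          # assumed leading Re[sum_i a0 b-*]
    im_a0b3 = mu*H/(2*(mu**2 + m**2))          # Im[sum_i a0 b3*], eq. (IR constraints 2)
    eta_integral = 1.0/(2*alpha_minus)         # int_{-1}^0 d(eta) (-eta)^(-1+2 alpha_-)
    lhs = re_a0bm**2 * im_a0b3 * eta_integral
    rhs = (27.0/16.0)*mu**3*H**5/(mu**2 + m**2)**4
    return lhs, rhs
```

The two expressions agree identically in $\mu$ and $m$, up to floating-point rounding.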
Plugging (\ref{C3 equil IR approx}) into (\ref{3 point}), we find that the leading $m$ and $\mu$ behavior of the $O(V''')$ contribution to the bispectrum is \begin{align} \label{3 point leading behavior} B_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3}) &= -\left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{3}\left(\frac{V'''}{H}\right)\frac{1}{k^{3}_{1}k^{3}_{2}k^{3}_{3}} \frac{\left(3\mu/2\right)^{3}H^5}{\left(\mu^{2} + m^{2}\right)^{4}}\cr &\ \ \ \ \ \ \ \times\left[k^{3}_{1}\left(\frac{k_{2}k_{3}}{k_{UV}^{2}}\right)^{\alpha_{-}} + k^{3}_{2}\left(\frac{k_{1}k_{3}}{k_{UV}^{2}}\right)^{\alpha_{-}} + k^{3}_{3}\left(\frac{k_{1}k_{2}}{k_{UV}^{2}}\right)^{\alpha_{-}}\right] \end{align} where $k_{UV} = {\rm max}(k_{i})$. Equation (\ref{3 point leading behavior}) was computed numerically in \cite{Chen:2009zp} and is valid for any external wave-vector configuration. Note that when the wave-vectors $k_1$, $k_2$ and $k_3$ are the same order of magnitude, the terms raised to the power $\alpha_{-}$ can be set to unity. Then the bispectrum has the same form as local non-Gaussianity, {\it i.e.}, $B_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3}) \propto \left[P_{\zeta}(k_1)P_{\zeta}(k_2)+P_{\zeta}(k_1)P_{\zeta}(k_3)+P_{\zeta}(k_2)P_{\zeta}(k_3)\right]$. \begin{figure} \includegraphics[width=2in]{2ptbiasdiagrams.png} \caption{Diagrams that contribute to three- and four-point correlations of $\zeta$ in the squeezed and collapsed limits respectively. These diagrams contribute to the galactic halo power spectrum. Dashed lines represent $\pi$, while solid lines represent $s$.} \label{fig:feynman 2 pt} \end{figure} We now study (\ref{3 point leading behavior}) in a couple of interesting kinematic limits.
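Since we will repeatedly take kinematic limits of (\ref{3 point leading behavior}), it is useful to note that its wave-vector dependence is easy to encode directly. A minimal numerical sketch (the dimensionful prefactor $-(H^{2}/\dot{\phi}_{0})^{3}(V'''/H)(3\mu/2)^{3}H^{5}/(\mu^{2}+m^{2})^{4}$ is stripped off):

```python
def bispectrum_shape(k1, k2, k3, alpha_minus):
    """Wave-vector dependence of the leading O(V''') bispectrum,
    with the mu- and m-dependent prefactor removed."""
    k_uv = max(k1, k2, k3)
    num = (k1**3*(k2*k3/k_uv**2)**alpha_minus
           + k2**3*(k1*k3/k_uv**2)**alpha_minus
           + k3**3*(k1*k2/k_uv**2)**alpha_minus)
    return num/(k1*k2*k3)**3

# equilateral (k1 = k2 = k3 = k): shape -> 3/k^6
# squeezed (k1 = k2 = k >> k3 = q): shape -> 2/(k^(3+a) q^(3-a))
```

The two limiting behaviors noted in the comments are exactly the equilateral and squeezed forms quoted in the text below.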
First, consider (\ref{3 point leading behavior}) in the equilateral limit $k_{i} \equiv k$: \begin{align} \label{3 point equil B} B^{\rm equil}_{\zeta}(k) = -\left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^3\left(\frac{V'''}{H}\right)\frac{1}{k^{6}}\frac{3\left(3\mu/2\right)^{3}H^5}{\left(\mu^{2} + m^{2}\right)^{4}}. \end{align} We can use (\ref{3 point equil B}) to relate $V'''$ to the model's prediction for $f_{NL}$. We estimate $f_{NL}$ using \begin{align} \label{fequil} f_{NL} = \frac{5}{18} \times \frac{B^{{\rm equil}}_{\zeta}(k)}{\mathcal{P}_{\zeta}(k)^{2}}. \end{align} Substituting (\ref{power spectrum}), (\ref{phi not dot}) and (\ref{3 point equil B}) into (\ref{fequil}) gives \begin{align} \label{fnl bound on V'''} \frac{V'''}{H} = -\frac{6}{5}f_{NL}\sqrt{2\pi^{2}\Delta^{2}_{\zeta}}C_{2}(\mu,m)^{\frac{3}{2}}\frac{(\mu^2+m^2)^4}{(3\mu/2)^3H^5}. \end{align} The current Planck $95\%$ C.L. constraint for local non-Gaussianity is $f_{NL}=2.7\pm 11.6$. For $f_{NL}=10$ and $\mu/H=m/H=0.3$, we find that $|V'''|/H\simeq 10^{-3}$. The two kinematic configurations we will be most interested in when we compute galactic halo correlators are when all three external wave-vectors are soft (Fig. \ref{fig:feynman 3 pt}$c$), and when one leg is soft while the other two are hard, the so-called squeezed limit (Fig. \ref{fig:feynman 2 pt}$a$). In what follows, we will denote hard wave-vectors by $k$ and soft wave-vectors by $q$. First we consider the squeezed limit. We choose ${\bf k}_{2} = -{\bf k}_{1} - {\bf q}$ and $k_{1} = k_{2} \equiv k >> k_{3} \equiv q$. The full $O(V''')$ contribution (\ref{3 point}) to the bispectrum in this limit can be written \begin{align} \label{B squeezed} B^{\rm sq}_{\zeta}(k,q) =-\left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^3\left(\frac{V'''}{H}\right)\frac{2\left(3\mu/2\right)^{3}H^5}{\left(\mu^{2} + m^{2}\right)^{4}}\frac{1}{k^{3+\alpha_{-}}q^{3-\alpha_{-}}}.
\end{align} The wave-vector dependence of equation (\ref{B squeezed}) was first determined in \cite{Chen:2009zp,Sefusatti:2012ye}. Finally, the bispectrum in the limit where all three external wave-vectors are soft can be obtained simply by making the replacement $k_{i} \rightarrow q_{i}$ in (\ref{3 point leading behavior}). \subsection{Four-Point Function} \label{Four Point Function label} The four-point function of $\zeta$ can be written \begin{align} \label{4 point def} \left<\zeta_{{\bf k}_{1}}\zeta_{{\bf k}_{2}}\zeta_{{\bf k}_{3}}\zeta_{{\bf k}_{4}}\right> \equiv N^{(4)}_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3},{\bf k}_{4})(2\pi)^{3}\delta^{3}\left(\sum_{i}^{4}{\bf k}_{i}\right). \end{align} We can derive the leading contribution to $N_{\zeta}^{(4)}$ by inserting two factors of the $V'''$ interaction into (\ref{in in}). It is convenient to define \begin{align} A(x)\equiv \sum_{i}\pi^{(i)}(0)s^{(i)*}(x)\ \ \ \ \ \ \ B(x)\equiv \sum_{i}b_-^{(i)}s^{(i)*}(x). \end{align} By expanding the commutators and performing all possible contractions, we find: \begin{align} \label{4 point full} &N^{(4)}_{\zeta}({\bf k}_{1},{\bf k}_{2},{\bf k}_{3},{\bf k}_{4}) = 4 \left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{4}\left(\frac{V'''}{H}\right)^{2}\left(\prod_{i}^{4}\frac{1}{k_{i}^{3}}\right)\frac{1}{k_{12}^{3}}\int\limits_{-\infty}^{0}\frac{d\tau}{\tau^{4}}\int\limits_{-\infty}^{\tau}\frac{d\tau'}{\tau'^{4}}\cr &\ \ \ \ \ \ \ \ \times{\rm Im}\left[A(k_1 \tau)A(k_2 \tau) \right]{\rm Im}\left[A(k_3 \tau')A(k_4 \tau')\sum\limits_{i}s^{(i)}(k_{12}\tau)s^{(i)*}(k_{12}\tau')\right]\cr &\ \ \ \ \ \ \ \ \ \ \ \ + (k_{1} \leftrightarrow k_{3}, k_2 \leftrightarrow k_4) + {\rm cyc. \ perms}({\bf k}_{2},{\bf k}_{3},{\bf k}_{4}) \end{align} where $k_{12} =\left |{\bf k}_{1} + {\bf k}_{2}\right|$. Unlike the calculation of the three-point function, the four-point one involves nested time integrals. 
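Once the mode functions are replaced by their IR power series, these nested conformal-time integrals reduce (after substituting $x=-\eta$, $y=-\eta'$) to elementary iterated power-law integrals, for example $\int_0^1 dx\, x^{2\alpha-1}\int_{rx}^1 dy\, y^{\alpha-1} = \frac{1}{2\alpha^2}\left(1-\frac{2}{3}r^{\alpha}\right)$ for $0<r\le 1$, which is the origin of the $1-\frac{2}{3}(k_{UV_{34}}/k_{UV_{12}})^{\alpha_-}$ factors that appear below. A numerical sanity check of the two forms that occur (the values of $\alpha$ and $r$ are illustrative):

```python
from scipy.integrate import dblquad

a, r = 0.2, 0.4   # illustrative alpha_- and cutoff ratio k_UV34/k_UV12

# I1 = int_0^1 dx x^(2a-1) int_{r x}^1 dy y^(2a-1)
I1, _ = dblquad(lambda y, x: x**(2*a - 1)*y**(2*a - 1), 0, 1,
                lambda x: r*x, lambda x: 1)
# I2 = int_0^1 dx x^(2a-1) int_{r x}^1 dy y^(a-1)
I2, _ = dblquad(lambda y, x: x**(2*a - 1)*y**(a - 1), 0, 1,
                lambda x: r*x, lambda x: 1)

closed1 = (1 - r**(2*a)/2)/(4*a**2)   # (1/4a^2)(1 - r^(2a)/2)
closed2 = (1 - 2*r**a/3)/(2*a**2)     # (1/2a^2)(1 - (2/3) r^a)
```

Both closed forms hold for any $\alpha>0$; the small-$\alpha_-$ limit is what enters the leading behavior quoted below.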
Again, the four-point integral is dominated by the IR for $\alpha_{-} << 1$ and the integrand reduces to polynomials in $\tau$ and $\tau'$. Like before, we make the change of variable $\eta\equiv k_{UV_{12}}\tau$ and $\eta'\equiv k_{UV_{34}}\tau'$, where $k_{UV_{ij}}\equiv \text{max}\{k_i,k_j,|{\bf k}_i+{\bf k}_j|\}$ and cut off the integrals at $\eta_{UV}=-1$ and $\eta_{UV}'=-1$ (recall that the result is not sensitive to this cutoff value as long as $\alpha_{-}$ is small). The relationships among the power series coefficients deduced in section \ref{commutation relations section and other stuff} imply the integral converges in the IR. Without loss of generality, assume that $k_1$ is the largest external wave-vector (this implies that $k_{UV_{12}}\ge k_{UV_{34}}$). Using the identities relating the power series coefficients derived in section \ref{commutation relations section and other stuff}, the time integral in (\ref{4 point full}) becomes: \begin{align} \label{first term 4 pt final} &\int\limits_{-\infty}^{0}\frac{d\tau}{\tau^{4}}\int\limits_{-\infty}^{\tau}\frac{d\tau'}{\tau'^{4}}{\rm Im}\left[A(k_1 \tau)A(k_2 \tau)\right]{\rm Im}\left[A(k_3\tau')A(k_4 \tau')\sum\limits_{i}s^{(i)}(k_{12}\tau)s^{(i)*}(k_{12}\tau')\right] + \left(k_{1} \leftrightarrow k_{3}, k_2 \leftrightarrow k_4\right)\cr &= \frac{9}{32}\frac{\mu^4H^4}{\left(\mu^{2} + m^{2}\right)^4}\left[\left(\frac{k_{I}^2}{k_{UV_{12}}^2k_{UV_{34}}^2}\right)^{\alpha_{-}}\left[k_{1}^{3}k_{2}^{\alpha_{-}} + k_{2}^{3}k_{1}^{\alpha_{-}}\right]\left[k_{3}^{3}k_{4}^{\alpha_{-}} + k_{4}^{3}k_{3}^{\alpha_{-}}\right]\right.\cr &\ \ \ \ \times \left[\int\limits_{-1}^{0}d\eta(-\eta)^{-1 + 2\alpha_{-}}\int\limits_{-1}^{\frac{k_{UV_{34}}}{k_{UV_{12}}}\eta}d\eta' (-\eta')^{-1 + 2\alpha_{-}} + \int\limits_{-\frac{k_{UV_{34}}}{k_{UV_{12}}}}^{0}d\eta(-\eta)^{-1 + 2\alpha_{-}}\int\limits_{-1}^{\frac{k_{UV_{12}}}{k_{UV_{34}}}\eta}d\eta' (-\eta')^{-1 + 2\alpha_{-}}\right]\cr & 
+\left(\frac{k_I^3}{{k_{UV_{12}}}^{2\alpha_-}{k_{UV_{34}}}^{\alpha_-}}\right)\left[k_{1}^{3}k_{2}^{\alpha_{-}} + k_{2}^{3}k_{1}^{\alpha_{-}}\right]\left[k_{3}^{\alpha_-}k_{4}^{\alpha_{-}} \right]\int\limits_{-1}^{0}d\eta(-\eta)^{-1 + 2\alpha_{-}}\int\limits_{-1}^{\frac{k_{UV_{34}}}{k_{UV_{12}}}\eta}d\eta' (-\eta')^{-1 + \alpha_{-}} \cr &\left. +\left(\frac{k_I^3}{{k_{UV_{12}}}^{\alpha_-}{k_{UV_{34}}}^{2\alpha_-}}\right)\left[k_{1}^{\alpha_-}k_{2}^{\alpha_{-}}\right]\left[k_{3}^{3}k_{4}^{\alpha_-}+k_{3}^{\alpha_-}k_{4}^{3}\right] \int\limits_{-\frac{k_{UV_{34}}}{k_{UV_{12}}}}^{0}d\eta(-\eta)^{-1 + 2\alpha_{-}}\int\limits_{-1}^{\frac{k_{UV_{12}}}{k_{UV_{34}}}\eta}d\eta' (-\eta')^{-1 + \alpha_{-}}\right]. \end{align} Notice that not all of the lower bounds of the $\eta$ integrals equal $-1$; some are cut off at $-\frac{k_{UV_{34}}}{k_{UV_{12}}}$. This is to ensure that the upper bound of the $\eta'$ integral is greater than $-1$. Evaluating the time integrals, we find the four-point function for general external wave-vectors is \begin{align} \label{small mu m 4 pt} &N_\zeta^{(4)}({\bf k}_1,{\bf k}_2,{\bf k}_3,{\bf k}_4)=\left(\frac{H^2}{\dot \phi_0}\right)^4\left(\frac{V'''}{H}\right)^2\left(\prod_{i=1}^4\frac{1}{k_i^3}\right)\frac{1}{k_{12}^3}\frac{(3\mu/2)^4H^8}{2(\mu^2+m^2)^6}\cr &\ \ \ \times\left[\left(k_1^3 k_2^{\alpha_-}+k_1^{\alpha_-}k_2^3\right)\left(k_3^3 k_4^{\alpha_-}+k_3^{\alpha_-}k_4^3\right)\left(\frac{k_{12}}{k_{UV_{12}}k_{UV_{34}}}\right)^{2\alpha_-}\right.\cr &\ \ \ \ \ \ \ \ \ \ \ +2\left(1-\frac{2}{3}\left(\frac{k_{UV_{34}}}{k_{UV_{12}}}\right)^{\alpha_-}\right)(k_1^3 k_2^{\alpha_-}+k_1^{\alpha_-}k_2^3)(k_3 k_4)^{\alpha_-}\frac{k_{12}^3}{k_{UV_{12}}^{2\alpha_-}k_{UV_{34}}^{\alpha_-}}\cr &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.+\frac{2}{3}\left(k_1 k_2\right)^{\alpha_-}\left(k_3^3k_4^{\alpha_-}+k_3^{\alpha_-}k_4^3\right)\frac{k_{12}^3}{k_{UV_{12}}^{3\alpha_-}}\right]+\text{cyc.
perm}({\bf k}_2,{\bf k}_3,{\bf k}_4). \end{align} \begin{figure} \includegraphics[width=5in]{3ptbiasdiagrams.png} \caption{Diagrams that contribute to the three-, four-, five-, and six-point correlations of $\zeta$ in the kinematic regimes that contribute to the enhanced part of the galactic halo bispectrum. Dashed lines represent $\pi$, while solid lines represent $s$.} \label{fig:feynman 3 pt} \end{figure} We now focus on kinematic limits of (\ref{small mu m 4 pt}) that are most important in the calculation of the two- and three-point functions of galactic dark matter halos. The enhancements discovered in~\cite{AGW} and~\cite{Dalal:2007cu} respectively occur when the magnitude of a sum of wave-vectors in the correlation function of $\zeta$ is small or when the magnitude of an external wave-vector is small. For the four-point correlation, the first of these is referred to as the collapsed limit. Suppose that $q$ denotes small wave-vectors, and $k$ denotes large wave-vectors. In these computations (as well as in later computations of the five- and six-point functions of $\zeta$), we assume that $(k_i/k_j)^{\alpha_-}\simeq 1$ and $(q_i/q_j)^{\alpha_-}\simeq 1$. This approximation is justified in our application to galactic halos since we will want to consider $k$'s roughly on the order of the inverse of the galactic halo radius, and since the $q$'s will be taken to be within an order of magnitude from each other (i.e. between about $(50~{\rm Mpc}/h)^{-1}$ and $(1000~ {\rm Mpc}/h)^{-1}$). However, we do not take $(q/k)^{\alpha_-}$ to be approximately $1$ since $q$ and $k$ may differ by several orders of magnitude. We first specialize to the collapsed limit of (\ref{small mu m 4 pt}), which occurs when two pairs of legs have nearly equal and opposite wave-vectors. Let ${\bf k}_{2} = -{\bf k}_{1}+{\bf q}$ and ${\bf k}_{4} = -{\bf k}_{3}-{\bf q}$ where $q<<k_1,\ k_3$.
Then the most important permutation of (\ref{small mu m 4 pt}) in this collapsed limit is when ${\bf k}_{1}$ and ${\bf k}_{2}$ are attached to one vertex, and ${\bf k}_{3}$ and ${\bf k}_{4}$ are attached to the other. The wave-vector of the internal line becomes very small (Fig. \ref{fig:feynman 2 pt}$b$) and eq.~(\ref{small mu m 4 pt}) becomes \begin{align} \label{analytic 4 point comp} N^{(4),\text{ coll}}_{\zeta}({\bf k}_{1},-{\bf k}_{1}+{\bf q},{\bf k}_{3},-{\bf k}_{3}-{\bf q}) = \left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{4}\left(\frac{V'''}{H}\right)^{2}\frac{1}{q^{3-2\alpha_{-}}}\frac{1}{(k_{1}k_{3})^{3+\alpha_{-}}}\frac{2(3\mu/2)^4H^8}{(\mu^2+m^2)^6}. \end{align} The four-point function in the collapsed limit was previously computed in~\cite{Assassi:2012zq}. The other interesting kinematic limit of (\ref{small mu m 4 pt}) is when one pair of legs has nearly equal and opposite wave-vectors and the wave-vectors of the other two legs are soft. We find for the sum of Figs. \ref{fig:feynman 3 pt}$d$ and \ref{fig:feynman 3 pt}$e$: \begin{align} &N^{(4)}_{\zeta}({\bf k}_1,-{\bf k}_1+{\bf q}_{1},{\bf q}_{2},{\bf q}_{3}) =\left(\frac{H^{2}}{\dot{\phi}_{0}}\right)^{4}\left(\frac{V'''}{H}\right)^{2}\frac{(3\mu/2)^{4}H^8}{\left(\mu^{2} + m^{2}\right)^{6}}\frac{1}{k_1^3}\left(\frac{q}{k_{1}} \right)^{\alpha_{-}}\cr &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\left[\frac{1}{q_1^3 q_2^3}+\frac{1}{q_1^3 q_3^3}+2\left(1+\frac{1}{2}\left(\frac{q}{k_1}\right)^{\alpha_-}\right)\frac{1}{q_2^3 q_3^3}\right]. \end{align} \subsection{Five- and Six-Point Functions} Given the techniques we have developed so far, it is possible to compute the five- and six-point functions of $\zeta$ for general external wave-vectors. However, our primary purpose in studying these objects is to compute their most important contributions to the three-point function of galactic dark matter halos in the limit of large halo separation.
We therefore focus only on the kinematic limits of the five- and six-point functions that give rise to the largest long wavelength enhanced terms. Even in these limits, the calculation is too long to present here. In this section we just quote results and relegate an outline of the derivation to Appendix \ref{appendix 5 and 6}. The strongest long wavelength enhanced behavior of the five-point function is achieved when one leg is soft and the other four come in pairs of nearly equal and opposite wave-vectors. Panels $f$ and $g$ of Fig. \ref{fig:feynman 3 pt} illustrate this kinematic setup. The contribution of these graphs to the five-point function is: \begin{align} N_\zeta^{(5)}&({\bf k}_1,{\bf q}_1-{\bf k}_1,{\bf k}_2,{\bf q}_2-{\bf k}_2,{\bf q}_3)=-\left(\frac{H^2}{\dot \phi_0}\right)^5\left(\frac{V'''}{H}\right)^3\frac{(3\mu/2)^5H^{11}}{(\mu^2+m^2)^8}\frac{1}{k_1^3 k_2^3}\left(\frac{q^2}{k_1 k_2}\right)^{\alpha_-}\cr &\ \ \ \ \ \ \ \times\left[\frac{1}{q_1^3 q_2^3} +\left(2-\frac{1}{6}\left(\frac{q}{k_2}\right)^{\alpha_-}\right)\frac{1}{q_2^3 q_3^3}+\left(2-\frac{1}{6}\left(\frac{q}{k_1}\right)^{\alpha_-}\right)\frac{1}{q_1^3 q_3^3}\right] \end{align} where we have defined $q=\max\{q_i\}$. The most important long wavelength contributions to the six-point function occur when all six legs come in pairs of nearly equal and opposite wave-vectors. The most important diagrams are displayed in panels $h$ and $i$ of Fig.
\ref{fig:feynman 3 pt} and the sum of their contributions is \begin{align} &N_\zeta^{(6)}({\bf k}_1,{\bf q}_1-{\bf k}_1,{\bf k}_2,{\bf q}_2-{\bf k}_2,{\bf k}_3,{\bf q}_3-{\bf k}_3)=\left(\frac{H^2}{\dot \phi_0}\right)^6\left(\frac{V'''}{H}\right)^4\frac{1}{k_1^3 k_2^3 k_3^3}\frac{2(3\mu/2)^6H^{14}}{(\mu^2+m^2)^{10}}\cr &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left(1+\frac{1}{2}\left(\frac{q^3}{k_1 k_2 k_3}\right)^{\alpha_-/3}\right)\left(\frac{q_1 q_2 q_3}{k_1 k_2 k_3}\right)^{\alpha_-}\left[\frac{1}{q_2^3q_3^3}+\frac{1}{q_1^3q_3^3}+\frac{1}{q_1^3q_2^3}\right]. \end{align} \section{Correlation Functions of Biased Objects} In this section we review the computation of the galactic halo power spectrum, and compute the bispectrum in the limit of large halo separation. At large enough separation, the primordial non-Gaussian contributions to the power spectrum and bispectrum are larger than the Gaussian ones. This leads to interesting observable long wavelength effects. The long wavelength scaling of the power spectrum was already discussed in~\cite{Baumann:2012bc}. Here we compute the long wavelength enhanced contributions and present results for the bispectrum as well. We start by assuming halos form instantaneously, at the same time $t_{\rm coll}$, and at points where the matter overdensity $\delta({\bf x})$ averaged over a spherical region with comoving radius $R$ exceeds a threshold $\delta_c$. We choose the smoothing radius $R$ to be of order the characteristic length scale of the region of space that collapses to form a halo.\footnote{We set $R = 2.8~ {\rm Mpc}.$} The smoothed matter overdensity is related to the matter overdensity by \begin{align} \delta_{R}({\bf x},a) =\int d^{3}{\bf y}\ W_{R}(|{\bf x} - {\bf y}|)\delta({\bf y},a).
\end{align} Here $W_{R}(|{\bf x} - {\bf y}|)=\Theta_{H}(R-|{\bf x} - {\bf y}|)$ is the top-hat window function.\footnote{$\Theta_{H}$ is the Heaviside step function.} The Fourier transform of the window function is: \begin{align} \label{window function} W_{R}(k) = \frac{3\left(\sin kR - kR\cos kR \right)}{(kR)^{3}}. \end{align} Assuming $\delta({\bf x},a)$ undergoes linear growth before the collapse time, we can express the density perturbations at the time of collapse in terms of the linearly evolved density perturbations today, $\delta_{R}({\bf x}, a_{\rm coll}) = \delta_{R}({\bf x})D(a_{\rm coll})$, where today $a=1$ and the growth factor $D(1)=1$. We will ignore the evolution of halos after collapse, and so the number density of halos today, up to an irrelevant dimensionful normalization constant, is given by: \begin{align} \label{halo formation model} n_{h}({\bf x}) = \Theta_{H}(\delta_{R}({\bf x}, a_{\rm coll}) - \delta_{c}(a_{\rm coll})) = \Theta_{H}(\delta_{R}({\bf x}) - \delta_{c}) \end{align} where $\delta_{c} \equiv \delta_{c}(a_{\rm coll})/D(a_{\rm coll})$. We use $\delta_c=4.215$, which assumes that $\delta_c(a_{\rm coll})=1.686$ with $z_{\rm coll}=1.5$~\cite{Press:1973iz}. The halo overdensity $\delta_{h}({\bf x})$ at a point ${\bf x}$ today is defined by \begin{align} \label{pertubation average density} \delta_{h}({\bf x}) = \frac{n_{h}({\bf x}) - \left<n_{h}\right>}{\left<n_{h}\right>} \end{align} where $\left<n_{h}\right>$ is the average halo density. We are interested in the two- and three-point functions of $\delta_{h}({\bf x})$. These can be computed using (\ref{halo formation model}) and the path integral techniques discussed in \cite{Politzer:1984nu}. A more general approach that we adopt here is to write $\delta_h$ as\footnote{The ellipses denote higher order terms in the bias expansion. They are not needed to the order we work in $(qR)$ and $(V'''/H)$. However, it is important to remember that they are defined with subtractions.
For example, the next order term is $b_{3} (\delta_{R}^3({\bf x})-3\langle \delta_{R}^2 \rangle \delta_R({\bf x}))$.}$^{,}$\footnote{A completely general approach is possible; for a review, see \cite{Desjacques:2016bnm}.} \begin{equation} \label{bias expansion} \delta_{h}({\bf x}) = b_{1} \delta_{R} ({\bf x}) + b_{2} (\delta_{R}^2({\bf x})-\langle \delta_{R}^2 \rangle ) + \dots \end{equation} The constants $b_{1}$ and $b_{2}$ are bias coefficients. They can be computed using a specific model of halo formation such as (\ref{halo formation model}) that expresses the halo overdensity in terms of $\delta_R$ or determined from data. The two-point function of the halo overdensity is then: \begin{eqnarray} \label{2 point bias expan} \left<\delta_{h}({\bf x})\delta_{h}({\bf y})\right> &=& b_{1}^{2}\left<\delta_{R}({\bf x})\delta_{R}({\bf y})\right> \\ \nonumber &&+b_{1}b_{2} \left[\left< (\delta_{R}^{2}({\bf x})-\left<\delta_R^2\right>)\delta_{R}({\bf y})\right> + \left<\delta_{R}({\bf x})(\delta_{R}^{2}({\bf y})-\left<\delta_R^2\right>)\right>\right] \\ \nonumber &&+b_{2}^{2}\left<(\delta_{R}^{2}({\bf x})-\left<\delta_R^2\right>)(\delta_{R}^{2}({\bf y})-\left<\delta_R^2\right>)\right> + \dots ~. \end{eqnarray} Note \begin{equation} \left<\delta_{R}^{2}({\bf x})\delta_{R}^{2}({\bf y})\right>=\left<\delta_{R}^2\right>^{2} + 2\left<\delta_{R}({\bf x})\delta_{R}({\bf y})\right>^2+\left<\delta_{R}^{2}({\bf x})\delta_{R}^{2}({\bf y})\right>_c. \end{equation} We can neglect the second term because it is very small at large halo separations compared to the $b_1^2$ term in (\ref{2 point bias expan}).
All factors of $\left<\delta_{R}^2\right>$ cancel and we find \begin{equation} \label{2 point bias expan1} \left<\delta_{h}({\bf x})\delta_{h}({\bf y})\right> \simeq b_{1}^{2}\left<\delta_{R}({\bf x})\delta_{R}({\bf y})\right> + b_{1}b_{2} (\left< \delta_{R}^{2}({\bf x})\delta_{R}({\bf y})\right> + \left<\delta_{R}({\bf x})\delta_{R}^{2}({\bf y})\right>) +b_{2}^{2}\left<\delta_{R}^{2}({\bf x})\delta_{R}^{2}({\bf y})\right>_c + \dots ~. \end{equation} The term proportional to $b_1^2$ comes from the Gaussian two-point function of $\zeta$ and the remaining terms arise from the connected three- and four-point functions of $\zeta$ that we computed earlier. Similarly, we can express the three-point function of $\delta_{h}$ as: \begin{eqnarray} \label{biashalo3} &&\left<\delta_{h}({\bf x})\delta_{h}({\bf y})\delta_{h}({\bf z})\right> =b_{1}^{3}\left<\delta_{R}({\bf x})\delta_{R}({\bf y})\delta_{R}({\bf z})\right>_c +b_{2}^{3}\left<\delta_{R}^{2}({\bf x})\delta_{R}^{2}({\bf y}) \delta_{R}^{2}({\bf z})\right>_c \nonumber \\ &&\ \ \ \ \ \ +\left[2b_{1}^{2}b_{2}\left<\delta_{R}({\bf x})\delta_{R}({\bf y})\right>\left<\delta_{R}({\bf x})\delta_{R}({\bf z})\right>+b_{1}^{2}b_{2}\left<\delta_{R}^{2}({\bf x})\delta_{R}({\bf y})\delta_{R}({\bf z})\right>_c \right. \nonumber \\ &&\left.\ \ \ \ \ \ \ \ \ \ \ \ \ \ + b_{1}b_{2}^{2}\left<\delta_{R}^{2}({\bf x})\delta_{R}^{2}({\bf y})\delta_{R}({\bf z})\right>_c + {\rm cyc. \ perm}({\bf x},{\bf y},{\bf z})\right] +\ldots~ \end{eqnarray} The first term proportional to $b_1^2 b_2$ is the three-point halo correlation when the underlying curvature perturbations are Gaussian, which was first calculated in~\cite{Politzer:1984nu}. The remaining terms arise from the non-Gaussian correlations of the primordial fluctuations. In the next section we present a power counting argument showing that for widely separated points $|{\bf x}-{\bf y}|>> R$ and $|V'''|/H <1$, the higher order terms in the bias expansion are negligible in the threshold model.
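The Gaussian (disconnected) part of the $b_2^2$ correlator follows from Wick's theorem: for zero-mean jointly Gaussian fields, $\left<\delta_{R}^{2}({\bf x})\delta_{R}^{2}({\bf y})\right>_{\rm Gauss}=\left<\delta_{R}^{2}\right>^{2}+2\left<\delta_{R}({\bf x})\delta_{R}({\bf y})\right>^{2}$. A quick Monte Carlo check on a correlated Gaussian pair (the variance and correlation values are illustrative stand-ins for $\sigma_R^2$ and the smoothed two-point function):

```python
import numpy as np

rng = np.random.default_rng(0)

# correlated zero-mean Gaussian pair standing in for (delta_R(x), delta_R(y))
sigma, rho, n = 1.0, 0.3, 2_000_000
cov = sigma**2*np.array([[1.0, rho], [rho, 1.0]])
dx, dy = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

lhs = np.mean(dx**2*dy**2)                 # sample estimate of <dx^2 dy^2>
rhs = sigma**4 + 2*(rho*sigma**2)**2       # Wick: <d^2>^2 + 2 <dx dy>^2
```

The sample average agrees with the Wick result at the percent level for this sample size.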
Only $b_1$ and $b_2$ are needed to compute the halo overdensity power spectrum and bispectrum evaluated at wave-vectors $q<<1/R$. Using, for example, path integral methods, it is straightforward to derive expressions for $\left<n_h\right>$ and the bias coefficients $b_1$ and $b_2$ in the threshold model mentioned above. They can be expressed in terms of $\delta_c$ and \begin{equation} \sigma_R^2=\langle \delta_R({\bf x})\delta_R({\bf x})\rangle \end{equation} as \begin{equation} \left< n_h \right>={1 \over 2}{\rm erfc}\left( \delta_c \over \sqrt{2} \sigma_R \right) \end{equation} and \begin{equation} \label{bias points above threshold} b_1={e^{-\delta_c^2/(2\sigma_R^2)} \over \sqrt{2 \pi} \sigma_R \left<n_h\right>}~~~~b_2={e^{-\delta_c^2/(2\sigma_R^2)} \delta_c\over 2 \sqrt{2 \pi} \sigma_R^3\left<n_h\right>}. \end{equation} The Fourier transformed smoothed matter overdensity $\delta_{R}({\bf k})$ is related to the curvature perturbation through \begin{align} \label{delta to xi main text} \delta_{R}({\bf k}) = \frac{2k^2}{5\Omega_m H_0^2}T(k)W_R(k)\zeta_{{\bf k}} \end{align} where $T(k)$ is the transfer function, $\Omega_{m}$ is the ratio of the matter density to the critical density today, and $H_{0}$ is the Hubble constant evaluated today \cite{Dodelson:2003ft}. 
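The closed forms for $\left<n_h\right>$, $b_1$ and $b_2$ above can be verified numerically: in the threshold model, $b_1 = \left<n_h\right>^{-1}\,d\left<n_h\right>/d\varepsilon|_{0}$ and $b_2 = \tfrac{1}{2}\left<n_h\right>^{-1}\,d^{2}\left<n_h\right>/d\varepsilon^{2}|_{0}$, where $\varepsilon$ is a long-wavelength shift of the smoothed density. A sketch (the value $\sigma_R = 1$ is illustrative; $\delta_c = 4.215$ is the value used in the text):

```python
import numpy as np
from scipy.special import erfc

def mean_nh(eps, delta_c=4.215, sigma_R=1.0):
    # <Theta_H(delta_R + eps - delta_c)> for Gaussian delta_R with variance sigma_R^2
    return 0.5*erfc((delta_c - eps)/(np.sqrt(2.0)*sigma_R))

delta_c, sigma_R = 4.215, 1.0
nbar = mean_nh(0.0)

# closed-form bias coefficients, eq. (bias points above threshold)
gauss = np.exp(-delta_c**2/(2*sigma_R**2))
b1 = gauss/(np.sqrt(2*np.pi)*sigma_R*nbar)
b2 = gauss*delta_c/(2*np.sqrt(2*np.pi)*sigma_R**3*nbar)

# finite-difference responses of <n_h> to the long-wavelength shift eps
h = 1e-4
b1_fd = (mean_nh(h) - mean_nh(-h))/(2*h)/nbar
b2_fd = (mean_nh(h) - 2*nbar + mean_nh(-h))/h**2/(2*nbar)
```

The finite-difference responses reproduce the closed forms to high accuracy.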
When performing integrals against $T(k)$ we use the BBKS approximation to the transfer function \cite{Bardeen:1985tr}: \begin{align} \label{transfer function} T\left(k=\left(\Omega_{m}h^{2}{\rm Mpc^{-1}}\right)u\right) = \frac{{\rm ln}\left[1 + 2.34 u\right]}{(2.34 u)}\left[1 + 3.89u + (16.1 u)^{2} + (5.46 u)^{3} + (6.71 u)^{4} \right]^{-1/4} \end{align} We can then write $\sigma_R^2$ as \begin{align} \label{sigma squared app} \sigma_{R}^{2} = \left(\frac{H^{2}}{\dot{\phi}_{0}}\frac{2}{5}\frac{1}{\Omega_{m}H_{0}^{2}R^2}\right)^{2}C_{2}(\mu,m){\cal J} \end{align} where \begin{align} {\cal J} = \frac{1}{2\pi^{2}}\int\limits_{0}^{\infty}dx x^{3}T(x/R)^{2}W(x)^2 \end{align} and $W(x)\equiv W_R(x/R)$ is independent of $R$. The Fourier transform of the halo two-point function gives the halo power spectrum \begin{align} P_{hh}(q) = \int d^{3}{\bf x}\left<\delta_{h}({\bf x})\delta_{h}({\bf 0})\right>e^{-i{\bf q}\cdot {\bf x}}. \end{align} Fourier transforming (\ref{2 point bias expan1}) and plugging in (\ref{delta to xi main text}) to express the correlation functions of $\delta_{R}({\bf k})$ in terms of those of $\zeta_{\bf k}$, we find for $q<<1/R$: \begin{align} \label{halo power bias expan} P_{hh}({\bf q}) &= b_{1}^{2}\alpha_{R}(q)^{2}P_{\zeta}({\bf q}) + 2b_{1}b_{2}\alpha_{R}(q)\int\frac{d^{3}k}{(2\pi)^{3}}\alpha_{R}(k)^2B_{\zeta}({\bf q},{\bf k},-{\bf k}-{\bf q})\cr &\ \ \ \ \ \ \ \ + b_{2}^{2}\int\frac{d^{3}k_{1}}{(2\pi)^{3}}\frac{d^{3}k_{2}}{(2\pi)^{3}}\alpha_{R}(k_{1})^{2}\alpha_{R}(k_{2})^{2}N_{\zeta}^{(4)}({\bf k}_{1},{\bf q}-{\bf k}_{1},{\bf k}_{2},-{\bf k}_{2}-{\bf q}). \end{align} To condense the expression we have defined \begin{align} \alpha_{R}(k) =\frac{2k^{2}}{5\Omega_{m}H_{0}^{2}}T(k)W_{R}(k). \end{align} The wave-vectors integrated over in the integrals of (\ref{halo power bias expan}) are of order $1/R$.
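The BBKS fit used above is straightforward to evaluate numerically. The following sketch (written with the standard BBKS coefficients) checks the expected limiting behavior: $T\to 1$ as $u\to 0$, monotone decrease, and strong suppression at large $u$:

```python
import math

def bbks_transfer(u):
    """BBKS fitting formula; u = k / (Omega_m h^2 Mpc^-1), normalised so T(0) = 1."""
    if u == 0.0:
        return 1.0
    prefac = math.log(1.0 + 2.34 * u) / (2.34 * u)
    bracket = 1.0 + 3.89 * u + (16.1 * u) ** 2 + (5.46 * u) ** 3 + (6.71 * u) ** 4
    return prefac * bracket ** -0.25
```

At large $u$ the fit falls off like $\ln(u)/u^3$, which is why the wave-vector integrals against $T(x/R)^2$ converge.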
Since we are interested in $q << 1/R$, the curvature bispectrum and trispectrum appearing in (\ref{halo power bias expan}) are in their squeezed and collapsed configurations. Equations (\ref{B squeezed}) and (\ref{analytic 4 point comp}) imply that the strongest small-$q$ scalings of the primordial squeezed bispectrum and collapsed trispectrum are $1/q^{3-\alpha_{-}}$ and $1/q^{3-2\alpha_{-}}$, respectively. Note that the bispectrum's contribution to the halo power spectrum is suppressed by a factor of $\alpha_R(q)\propto q^2$, so that term goes like $1/q^{1-\alpha_-}$. \begin{figure} \includegraphics[width=2in]{2ptbiasschematics.png} \caption{A diagrammatic representation of terms contributing to the galactic halo power spectrum. Cf. Fig. {\ref{fig:feynman 2 pt}}.} \label{fig:schematic 2 pt} \end{figure} \begin{figure} \includegraphics[width=5in]{3ptbiasschematics.png} \caption{A diagrammatic representation of terms contributing to the galactic halo bispectrum. Cf. Fig. {\ref{fig:feynman 3 pt}}.} \label{fig:schematic 3 pt} \end{figure} An intuitive picture of the non-Gaussian contributions to (\ref{halo power bias expan}) is given by Fig. \ref{fig:schematic 2 pt}. The shaded circles represent the halo overdensity, while the lines they are attached to are $\zeta$ legs. In these graphs, the external $\zeta$ legs are each multiplied by $\alpha_R$. If one $\zeta$ leg is attached to a shaded circle, it carries a soft wave-vector and a factor of $b_{1}$. If two legs are attached to a shaded circle, they carry equal and opposite wave-vectors with magnitude approximately $1/R$. In this case, the shaded circle also contains a factor of $b_{2}$ and a wave-vector integral.
The halo power spectrum is then\footnote{In writing (\ref{galactic power spectrum}) we have used $\int\limits_{0}^\infty dx x^{3-n\alpha_{-}}T(x/R)^{2}W(x)^{2} \simeq \int\limits_{0}^\infty dx x^{3}T(x/R)^{2}W(x)^{2}$, where $n$ is an O(1) integer.} \begin{align}\label{galactic power spectrum} P_{hh}(q) &= P_{hh}^{G}(q)\left[1+\gamma(\mu,m) \left(2\frac{\beta(\mu,m)}{(qR)^{2-\alpha_-}T(q)}+\frac{\beta(\mu,m)^2}{(qR)^{4-2\alpha_-}T(q)^2}\right)\right] \end{align} where \begin{align} &P_{hh}^G(q)=b_1^2 P_{mm}(q)\cr &P_{mm}(q)=R^3C_2(\mu,m)\left(\frac{H^2}{\dot \phi_0}\right)^2\left(\frac{2}{5 \Omega_m H_0^2 R^2}\right)^2(qR) T(q)^2\cr &\gamma(\mu,m)=\frac{9\mu^2H^2}{(\mu^2+m^2)^2+9\mu^2H^2}\cr &\beta(\mu,m)=\frac{6}{5}\frac{b_2}{b_1}\frac{H^2}{\dot \phi_0}\frac{2}{5\Omega_m H_0^2 R^2}{\cal J}f_{NL}\sqrt{2\pi^{2}\Delta^{2}_{\zeta}}C_{2}(\mu,m)^{\frac{3}{2}}\frac{(\mu^2+m^2)^2}{(3\mu/2)^2H^2}. \end{align} $P_{mm}$ denotes the ``matter-matter'' power spectrum, \textit{i.e.}, the Fourier transform of $\langle \delta_R({\bf x})\delta_R({\bf y})\rangle$. Since $0<\gamma(\mu,m)<1$, it is simple to show that $P_{hh}(q)$ is positive definite, as it must be. Note that for $f_{NL}<0$, this would not be true at very small wave-vectors without the contribution due to the four-point function of $\zeta$. The scale at which non-Gaussianities begin to dominate is $(qR)^2\sim \beta(\mu,m)\propto f_{NL}$ (up to $(qR)^{\alpha_-}$ terms). Current measurements of the galactic power spectrum have not seen significant deviations from Gaussian initial conditions at wave-vectors around $q \sim h/(100 \ {\rm Mpc})$~\cite{Beutler}. In the threshold model, we find that $\beta\propto R^2$, indicating that the scale at which non-Gaussianities begin to dominate is independent of the model parameter $R$. On the other hand, we can also compute the matter-halo cross correlation power spectrum $P_{hm}(q)$, which corresponds to the two-point function $\langle \delta_h({\bf x})\delta_R({\bf y})\rangle$.
The ``$h$" in $P_{hm}$ stands for halo, and the ``$m$" for matter. The result is \begin{align} P_{hm}(q)=b(q)P_{mm}(q) \end{align} where \begin{align} b(q)\equiv b_1+b_1\gamma(\mu,m)\beta(\mu,m)\frac{1}{(qR)^{2-\alpha_-}T(q)}. \end{align} This implies a scale-dependent bias:\footnote{Recall that we have neglected the time evolution of the distribution of galaxies after they have formed.} \begin{align} \Delta b(q)=b_1\gamma(\mu,m)\beta(\mu,m)\frac{1}{(qR)^{2-\alpha_-}T(q)}. \end{align} Note that $P_{hh}$ can be written in this notation as: \begin{align} P_{hh}(q)=\left(b(q)^2+b_1^2\beta(\mu,m)^2\gamma(\mu,m)\left(1-\gamma(\mu,m)\right)\frac{1}{(qR)^{4-2\alpha_-}T(q)^2}\right)P_{mm}(q). \end{align} In this form, the second term in the brackets is due to stochastic bias. Note that this term is proportional to $1-\gamma(\mu,m)$, which approaches $0$ in the limit that $\mu\gtrsim m$ as $\mu/H$ and $m/H$ go to zero. This suppression is evident in Fig. \ref{fig:alpha2pt}. If the stochastic bias were zero, then the purple curves' minimum value would be $0$. Since they all reach a minimum value less than around $0.1$, this indicates that the stochastic bias is small in the $\mu\sim m$ regime. However, for $\mu<<m$ the stochastic bias can become large, see Fig. \ref{fig:mum2pt}. As we will show toward the end of this section, for $\mu$ several orders of magnitude smaller than $m$, other contributions to the power spectrum that we have neglected become important. In figures \ref{fig:alpha2pt} and \ref{fig:mum2pt}, we plot the ratio of the galactic halo power spectrum in quasi-single field inflation divided by the Gaussian contribution $P_{hh}^G$. Notice that for reasonable model parameters, $P_{hh}(q)$ begins to differ from $P_{hh}^G(q)$ at $q\sim 0.005 h/{\rm Mpc}$. The difference becomes very large for values of $q$ significantly less than this. Figures \ref{fig:alpha2pt} and \ref{fig:mum2pt} use $f_{NL}=\pm 10$, and various values for $\alpha_-$ and $\mu$. 
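The two forms given above for $P_{hh}$ are algebraically equivalent, and the positivity claim can be made explicit: writing $y=\beta/\big((qR)^{2-\alpha_-}T(q)\big)$, the bracket in (\ref{galactic power spectrum}) equals $(1-\gamma)+\gamma(1+y)^2\geq 0$, using $0<\gamma<1$. A short numerical sketch over an illustrative parameter grid (the grid values are ours, not from the text):

```python
def gamma_qsfi(mu, m, H=1.0):
    """gamma(mu, m) from the power spectrum; lies in (0, 1) for mu != 0."""
    return 9.0 * mu**2 * H**2 / ((mu**2 + m**2) ** 2 + 9.0 * mu**2 * H**2)

def phh_two_forms(b1, gamma, y):
    """y = beta / ((qR)^{2-alpha_-} T(q)); returns (direct form, b(q)^2 + stochastic)."""
    direct = b1**2 * (1.0 + gamma * (2.0 * y + y**2))
    b_q = b1 * (1.0 + gamma * y)                       # scale-dependent bias b(q)
    stochastic = b1**2 * y**2 * gamma * (1.0 - gamma)  # stochastic-bias contribution
    return direct, b_q**2 + stochastic

checks = []
for mu in (0.1, 0.5, 1.0):
    for m in (0.1, 0.5, 1.0):
        g = gamma_qsfi(mu, m)
        for y in (-5.0, -1.0, -0.1, 0.1, 1.0, 5.0):   # y < 0 mimics f_NL < 0
            direct, split = phh_two_forms(2.0, g, y)
            checks.append((0.0 < g < 1.0, abs(direct - split) < 1e-9, direct >= 0.0))
```

The check that `direct >= 0.0` even for negative $y$ is the numerical counterpart of the remark that, for $f_{NL}<0$, positivity of $P_{hh}$ relies on the four-point contribution.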
\begin{figure*}[tp] \centering \includegraphics[width=5in]{alpha2pt} \caption{We plot the ratio of the galactic halo power spectrum in quasi-single field inflation to the halo power spectrum in which there are no primordial non-Gaussianities for a range of $\alpha_{-}$: $\alpha_-=0.025$ (lightest), $0.050$, $0.075$, $0.100$, $0.125$, $0.150$ (darkest). We plot for $\mu=m$ and $f_{NL}=10$ (green) and $f_{NL}=-10$ (purple).} \label{fig:alpha2pt} \end{figure*} \begin{figure*}[tp] \centering \includegraphics[width=5in]{mum2pt} \caption{We plot the ratio of the galactic halo power spectrum in quasi-single field inflation to the halo power spectrum in which there are no primordial non-Gaussianities for a range of $\mu$. We plot for $\ln(\mu/H)=-1$ (darkest), $-2$, $-3$, $-4$ (lightest), with $\alpha_-=0.05$ and $f_{NL}=10$ (pink) and $f_{NL}=-10$ (blue).} \label{fig:mum2pt} \end{figure*} Let us now study the halo three-point function given in equation (\ref{biashalo3}). The non-Gaussian contributions are depicted in Figure \ref{fig:schematic 3 pt}. Fourier transforming equation (\ref{biashalo3}), we find that the bispectrum of the halo overdensity is \begin{align} &B_{hhh}({\bf q}_1,{\bf q}_2,{\bf q}_3)=b_1^3 \alpha_R(q)^3 B_\zeta({\bf q}_1,{\bf q}_2,{\bf q}_3)\cr &+\left[2b_1^2b_2\alpha_R(q_2)^2\alpha_R(q_3)^2P_\zeta(q_2)P_\zeta(q_3)+b_1^2b_2\alpha_R(q_2)\alpha_R(q_3)\int \frac{d^3 k}{(2\pi)^3}\alpha_R(k)^2N_\zeta^{(4)}({\bf k},{\bf q}_1-{\bf k},{\bf q}_2, {\bf q}_3)\right.\cr &\left.+b_1 b_2^2 \alpha_R(q_3)\int\frac{d^3k_1}{(2\pi)^3}\frac{d^3 k_2}{(2\pi)^3}\alpha_R(k_1)^2\alpha_R(k_2)^2 N_\zeta^{(5)}({\bf k}_1,{\bf q}_1-{\bf k}_1,{\bf k}_2,{\bf q}_2-{\bf k}_2,{\bf q}_3)+\text{cyc. 
perm}({\bf q}_1,{\bf q}_2,{\bf q}_3)\right] \cr &+b_2^3 \int \frac{d^3k_1}{(2\pi)^3}\frac{d^3 k_2}{(2\pi)^3}\frac{d^3 k_3}{(2\pi)^3}\alpha_R(k_1)^2\alpha_R(k_2)^2\alpha_R(k_3)^2N_\zeta^{(6)}({\bf k}_1,{\bf q}_1-{\bf k}_1,{\bf k}_2,{\bf q}_2-{\bf k}_2,{\bf k}_3,{\bf q}_3-{\bf k}_3).\cr \end{align} Similarly to the calculation of the two-point function, we can simplify the wave-vector integrals to express the bispectrum as \begin{align} \label{full bispectrum eqt} &B_{hhh}({\bf q}_1,{\bf q}_2,{\bf q}_3)=2b_1^2b_2R^6\left(\frac{H^2}{\dot \phi_0}\right)^4\left(\frac{2}{5\Omega_m H_0^2 R^2}\right)^4C_2^2\Bigg[T(q_1)^2T(q_2)^2q_1 q_2 R^2\cr &+\omega(\mu,m)\Bigg(\beta(\mu,m)\frac{q_1^2}{q_2q_3}T(q_1)T(q_2)T(q_3)\cr & +\beta(\mu,m)^2T(q_2)T(q_3)(qR)^{\alpha_{-}}\bigg[\frac{q_2^2}{R^2q_1^3 q_3}+\frac{q_3^2}{R^2q_1^3 q_2} + 2\big(1+\frac{1}{2}(q R)^{\alpha_-}\big)\frac{1}{R^2q_2 q_3}\bigg]\cr &+\beta(\mu,m)^3 T(q_3)(qR)^{2\alpha_{-}}\bigg[\frac{q_3^2}{q_1^3 q_2^3 R^4}+ 2\big(1-\frac{1}{12}(q R)^{\alpha_-}\big)\frac{1}{R^4 q_2^3 q_3}+2\big(1-\frac{1}{12}(q R)^{\alpha_-}\big)\frac{1}{R^4 q_1^3 q_3}\bigg]\cr &+\beta(\mu,m)^4(qR)^{3\alpha_{-}}\left(2+\left(q R\right)^{\alpha_-}\right)\frac{1}{R^6 q_1^3 q_2^3}\Bigg)\Bigg]+\text{ cyc. perm}(q_1,q_2,q_3), \end{align} where $q \equiv {\rm max}(q_{i})$, and \begin{align} \omega(\mu,m)=\frac{b_1^2}{4b_2^2}\frac{1}{{\cal J}C_2}\left(\frac{\dot \phi_0}{H^2}\right)^2\left(\frac{5\Omega_m H_0^2R^2}{2}\right)^2\gamma(\mu,m). \end{align} Again, the scale at which the non-Gaussian contributions begin to dominate is $(qR)^2\sim\beta(\mu,m)$, which means the galactic power spectrum and bispectrum both begin to deviate from their Gaussian contributions at roughly the same scale. Since it is easier to measure the halo two-point function than the halo three-point function, it is more likely that we will see these non-Gaussian effects in the halo two-point function before we see them in the three-point function.
The equilateral configuration of the galactic halo bispectrum is plotted in Figs. \ref{fig:alpha3pt} and \ref{fig:mum3pt} for various values of $\alpha_-$ and $\mu$. Note that we have scaled the bispectrum by its value when $V'''=0$, \begin{align} B_{hhh}^G({\bf q}_1,{\bf q}_2,{\bf q}_3)=2b_1^2b_2R^6\left(\frac{H^2}{\dot \phi_0}\right)^4\left(\frac{2}{5\Omega_m H_0^2 R^2}\right)^4C_2^2T(q_1)^2T(q_2)^2q_1 q_2 R^2 + {\rm ~cyc.~ perm}(q_1,q_2,q_3). \end{align} In the equilateral configuration with $f_{NL}<0$, this scaled bispectrum never falls significantly below unity. Note also that it rises more rapidly than the scaled power spectrum shown in Figs. \ref{fig:alpha2pt} and \ref{fig:mum2pt} as $q$ becomes small. Equation (\ref{full bispectrum eqt}) expresses the bispectrum in terms of the magnitude of the wave-vectors ${\bf q}_{1}, {\bf q}_{2}$ and ${\bf q}_{3}$. It could also be expressed in terms of $q_{1}$ and $ q_{2}$ and the angle between them. This angular dependence is usually displayed as a multipole expansion. \begin{figure*}[tp] \centering \includegraphics[width=5in]{alpha3pt} \caption{We plot the ratio of the galactic halo bispectrum in quasi-single field inflation to the galactic halo bispectrum with no primordial non-Gaussianities ($B_{hhh}^G$) in the equilateral configuration for a range of $\alpha_{-}$: $\alpha_{-} = 0.025 $ (lightest), $0.050$, $0.075$, $0.100$, $0.125$, $0.150$ (darkest). We plot for $\mu=m$ and $f_{NL}=10$ (green) and $f_{NL}=-10$ (purple).} \label{fig:alpha3pt} \end{figure*} \begin{figure*}[tp] \centering \includegraphics[width=5in]{mum3pt} \caption{We plot the ratio of the galactic halo bispectrum in quasi-single field inflation to the galactic halo bispectrum with no primordial non-Gaussianities ($B_{hhh}^G$) in the equilateral configuration for a range of $\mu$: $\ln(\mu/H)=-1$ (darkest), $-2$, $-3$, $-4$ (lightest). 
We plot for $\alpha_-=0.05$ and $f_{NL}=10$ (pink) and $f_{NL}=-10$ (blue).} \label{fig:mum3pt} \end{figure*} Currently, there are measurements of the galaxy bispectrum at wave-vectors as small as about $h/(20 ~{\rm Mpc})$~\cite{Gil Martin}. There is no evidence in these data for the type of effects we have found. We have ignored the evolution of the galactic halo distribution after halo collapse. These effects are $O(1)$. However, we do not expect that including them greatly shifts the scale at which non-Gaussianities or their rapid growth become important. One can include these effects either by numerical simulation or analytic methods~\cite{Fry:1983cj,Mirbabayi:2014zca,Angulo:2015eqa}. Evolution during this period is expected to decrease the influence of bias, drawing the galactic distribution closer to the dark matter distribution. Some of these effects cancel out in the ratios we have plotted. We have chosen to plot the power spectrum and bispectrum scaled by $P_{hh}^G$ and $B_{hhh}^G$ since these ratios are less sensitive to the value of $R$ than the power spectrum and bispectrum alone. \begin{figure*}[tp] \centering \includegraphics[width=2.5in]{extradiagram.png} \caption{The above diagram can contribute significantly to the galactic halo power spectrum if $|V'''|/H$ is not very small. However, it can be ignored as long as $|V'''|/H<<1$. In the context of eq. (\ref{n point power counting}), this is a $p=1$, $j=0$ term.} \label{fig:extradiagram} \end{figure*} It is possible to use the methods developed here to consider even higher correlations of the halo overdensity. The dependence of galactic halo $n$-point correlations on the parameters $V'''$, $q$, and $R$ in quasi-single field inflation with the halo number density modeled by eq.
(\ref{halo formation model}) is given by \begin{align}\label{n point power counting} \langle\delta_h^n\rangle\sim R^{3(n-1)}(qR)^{n-1}\left[1+\sum_{i=n-2}^{2n-2}\left(\frac{V'''}{H(qR)^2}\right)^i \ \sum_{j=0}^{n-1}\sum_{p=0}^\infty(qR)^{3j}\left(\frac{V'''}{H}\right)^p\right] \end{align} where for simplicity, factors of $(qR)^{\alpha_-}$ have been set to unity. In our analysis of the power spectrum ($n=2$) and the bispectrum ($n=3$), we have included only the $j=p=0$ terms in the sums. Recall that the validity of our calculations relies on several assumptions. First of all, we have assumed that $\alpha_-=(\mu^2+m^2)/(3H^2)<<1$. However, we must also have $\alpha_-\gtrsim 1/60$ or else superhorizon evolution would have persisted to the end of inflation. Finally, we assumed $qR<<1$ and $|V'''|/H<<1$. Note that for fixed $|f_{NL}|=10$ and $\alpha_-=0.05$, we have $|V'''|/H> 1$ for $\mu<0.005$. Therefore, our results do not apply at very small $\mu/m$. For $|V'''|/H$ not small, we would need to include additional contributions, \textit{e.g.}, the diagram shown in Fig.~\ref{fig:extradiagram}. \section{Conclusions} The $1/q^3$ dependence of the de Sitter propagator for massless scalar fields implies that if the primordial curvature fluctuations are non-Gaussian, they have the potential to give rise to enhancements in the correlations of biased objects at small wave-vectors~\cite{AGW,Dalal:2007cu}. This effect cannot be produced by nonlinear gravitational evolution without primordial non-Gaussianities. The main goal of this paper was to explore these enhancements within quasi-single field inflation. We developed a method to analytically compute the correlation functions of the curvature perturbation $\zeta$ in quasi-single field inflation in the limit of small $m/H$ and $\mu/H$.
We computed the three- and four-point functions of $\zeta$ for arbitrary external wave-vectors and computed the five- and six-point functions in the kinematic limits that give the strongest long wavelength enhanced contributions to the three-point function of the galactic halo overdensity $\delta_{h}$. We applied these results to the computation of the two- and three-point correlations of $\delta_{h}$ ({\it i.e.}, the power spectrum and bispectrum). For model parameters consistent with the constraints on $f_{NL}$, we found that non-Gaussian contributions to these correlation functions are larger than the Gaussian ones at scales around $ h/(200 {\rm Mpc})$. Even larger scales will be probed in upcoming large scale surveys such as SPHEREx. Prospects for future improvements in measurements of the galactic power spectrum and bispectrum are reviewed in~\cite{Alvarez:2014vva}. After making a number of approximations, we obtained analytic expressions for the power spectrum and bispectrum\footnote{Since galactic halos are biased objects, even if the primordial fluctuations are Gaussian a halo bispectrum is not zero.} of $\delta_{h}$ that are valid at small wave-vectors. We studied the dependence of the stochastic bias on the parameters $\mu$ and $m$, and found that it could be small or significant depending on the values of $\mu$ and $m$. The departure from the predictions of Gaussian primordial perturbations in both the equilateral configuration of the bispectrum and the power spectrum begin at wave-vectors around $h/(200 {\rm Mpc})$ (when $|f_{NL}|$ is near its upper bound). However, for the bispectrum the deviation grows much more rapidly as the wave-vectors decrease than in the power spectrum. Unfortunately, it is more difficult to measure the three-point correlation than the two-point correlation of $\delta_{h}$. If these enhancements exist, it is more likely we will first see them in the power spectrum than in the bispectrum. 
Finally, we identified the scaling of the $n$-point function of $\delta_{h}$. The calculations (at small wave-vectors) of the galactic power spectrum and bispectrum presented in this paper can be improved and made more model independent. We hope to address this in future work. \section*{Acknowledgements} We would like to thank Olivier Dor\'e, Roland de-Putter, Daniel Green and Mikhail Solon for useful discussions. This work was supported by the DOE Grant DE-SC0011632. We are also grateful for the support provided by the Walter Burke Institute for Theoretical Physics.
\section{Introduction} In this paper we deal with three types of order convergence, introduce an appropriate topology and relate these concepts. Moreover, we study the according four types of order continuity of maps and obtain properties of the corresponding sets of order continuous maps. We investigate these concepts in partially ordered sets, in partially ordered abelian groups as well as in partially ordered vector spaces, where we intend to give the results as general as possible. The first concept of order convergence which we will deal with ($o_1$-convergence) is motivated by the usual order convergence in vector lattices, see e.g.\ \cite[Chapter 1, Section 4]{Positiveoperators_old} or \cite[Definition 1.1]{Abra}. For bounded nets, a definition of $o_1$-convergence can also be found in \cite[Chapter 1; Definition 5.1.]{Peressini67}. In partially ordered vector spaces, $o_1$-convergence is considered e.g.\ in \cite{IaB} and \cite[Definition 1.7.]{Imh}. The second and the third concept of order convergence ($o_2$-convergence, $o_3$-convergence) are given in vector lattices in \cite[p.\ 288]{Abra} and \cite[Definition 1.2]{Abra}, respectively. After introducing these concepts in partially ordered sets, we will show that $o_3$-convergence coincides with the convergence given in \cite[Definition 1]{Wolk} in partially ordered sets, and with the convergence introduced in \cite[Definition II.6.3.]{Vulikh67} in lattices. Our definition is inspired by \cite[Definition 1.8.]{Imh}, where the concept is considered in partially ordered vector spaces. Operators in vector lattices that are continuous with respect to $o_1$-convergence are frequently studied, see e.g.\ \cite[Definition 4.1]{Positiveoperators_old}, \cite[Definition 1.3.8]{Meyer91}. Operators on vector lattices that preserve $o_2$-convergence or $o_3$-convergence are considered in \cite{Abra}. 
Our aim is to introduce a concept of topology in partially ordered sets such that $o_1$-, $o_2$- and $o_3$-continuity, respectively, coincide with the topological continuity under mild conditions. Therefore we introduce an order topology $\tau_o$, which generalises the concept of order topology in partially ordered vector spaces given in \cite{Imh}. Note that $\tau_o$ is a special case of a $\sigma$-compatible topology on partially ordered sets considered in \cite{Floyd1955}. We will show that $\tau_o$ coincides with the topology defined in lattices in \cite[Definition II.7.1]{Vulikh67} as well as in \cite{Dobbertin84}. Note that another concept of topology, the so-called order bound topology, is introduced in partially ordered vector spaces in \cite[p.\ 20]{Namioka57}, see also \cite[Def.\ 2.66]{CAD}. In \cite[Theorem 5.2]{Namioka57} it is shown that each regular operator between partially ordered vector spaces is continuous with respect to the order bound topology. As there clearly exist examples of regular operators that are not $o_1$-continuous, the concept of order bound topology is not suitable for our purpose. The results in this paper are organised as follows. In Section 2 we introduce and characterise net catching sets and define $\tau_o$ in partially ordered sets. The three concepts of order convergence are defined in Section 3 in partially ordered sets. We link the concepts to the ones in the literature, show that the three concepts differ, investigate their relations and show that they imply $\tau_o$-convergence. We prove that closedness with respect to $\tau_o$ is characterised by means of order convergence. Further properties of order convergence concepts such as monotonicity and a Sandwich theorem will be established. In Section 4 we investigate maps that are continuous with respect to the order convergences and $\tau_o$-convergence, respectively, and relate these concepts. 
We show that $o_3$-convergence in a lattice can be characterised by $o_2$-convergence in a Dedekind complete cover. In Section 5 we characterise the concepts of order convergence and net catching sets in partially ordered abelian groups. Section 6 contains the Riesz-Kantorovich theorem in the setting of partially ordered abelian groups. In Section 7, we give sufficient conditions on the domain and the codomain of an order bounded map between partially ordered abelian groups that guarantee the equivalence of the four concepts of continuity. Under the same conditions, we show a generalisation of Ogasawara's theorem that can be found in \cite[Theorem 4.4]{Positiveoperators_old}, i.e.\ we prove that the set of all order bounded additive continuous maps is an order closed ideal in the lattice-ordered abelian group of all order bounded additive maps. In Section 8, we show that the scalar multiplication in partially ordered vector spaces is linked appropriately to the $o_i$-convergences if and only if the space is Archimedean and directed. Examples are given which show that the order convergences differ in this setting. In Section 9 we show that the results of Section 7 are also valid for linear operators on partially ordered vector spaces. Next we fix our notation. As usual, on a non-empty set $P$ a binary relation $\leq$ is called a \emph{partial order} if it is reflexive, transitive and anti-symmetric. The set $P$ is then called a \emph{partially ordered set}. For $x,y\in P$ we write $x<y$ if $x\leq y$ and $x\neq y$. For $U,V\subseteq P$ we denote $U\leq V$ if for every $u\in U$ and $v\in V$ we have $u\leq v$. If $V=\{v\}$ for $v\in P$, we abbreviate $U\leq \{v\}$ by $U\leq v$ (and similarly $v\leq U$). For $x \in P$ and $M \subseteq P$ define $M_{\geq x}:=\{m \in M;\, m \geq x\}$ and $M_{\leq x}:=\{m \in M;\, m \leq x\}$. A set $M\subseteq P$ is called \emph{majorising} in $P$ if for every $x\in P$ the set $M_{\geq x}$ is non-empty.
For $x,y\in P$ the \emph{order interval} is given by $[x,y]:=\{z\in P; \, x\leq z\leq y\}$. $P$ is called \emph{directed (upward)} if for every $x,y\in P$ the set $P_{\geq x}\cap P_{\geq y}$ is non-empty. \emph{Directed downward} is defined analogously. A set $M\subseteq P$ is called \emph{full} if for every $x,y\in M $ one has $[x,y]\subseteq M$. For a subset of $P$, the notions \emph{bounded above}, \emph{bounded below}, \emph{order bounded}, \emph{upper (or lower) bound} and \emph{infimum (or supremum)} are defined as usual. For a net $(x_\alpha)_{\alpha \in A}$ in $P$ we denote $x_\alpha \downarrow$ if $x_\alpha \leq x_\beta$ whenever $\alpha\geq \beta$. For $x\in P$ we write $x_\alpha \downarrow x$ if $x_\alpha \downarrow$ and $\inf \{x_\alpha; \, \alpha \in A\}=x$. Similarly we define $x_\alpha \uparrow$ and $x_\alpha \uparrow x$. $P$ is said to have the \emph{Riesz interpolation property} if for every non-empty finite sets $U,V\subseteq P$ with $U\leq V$ there is $x\in P$ such that $U\leq x\leq V$. We call $P$ a \emph{lattice} if for every non-empty finite subset of $P$ the infimum and the supremum exist in $P$. A lattice $P$ is called \emph{Dedekind complete} if every non-empty set which is bounded above has a supremum, and every non-empty set which is bounded below has an infimum. We say that a lattice $P$ satisfies the \emph{infinite distributive laws} if for every $x\in P$ and $M\subseteq P$ the following equations hold \begin{eqnarray} x\wedge \left(\bigvee M\right)&=&\bigvee (x\wedge M),\nonumber\\ x\vee \left(\bigwedge M\right)&=&\bigwedge (x\vee M)\nonumber \end{eqnarray} (where in the first equation it is meant that if the supremum of the left-hand side of the equation exists, then also the one on the right-hand side, and both are equal). 
If $P$ is a lattice which satisfies the infinite distributive laws, then for $M,N\subseteq P$ the formulas \begin{eqnarray}\label{equ:distr_law}\left(\bigvee M\right)\wedge \left(\bigvee N\right)&=& \bigvee(M\wedge N)\\ \left(\bigwedge M\right)\vee \left(\bigwedge N\right)&=& \bigwedge(M\vee N)\nonumber \end{eqnarray} are satisfied, see \cite[Chapter II.4]{Vulikh67}. The following statement is straightforward. \begin{lemma} \label{lem:majosetsandsuppe} Let $P$ be a partially ordered set and $A \subseteq B \subseteq P$ such that $A$ is majorising in $B$. If the supremum of $B$ exists, then the supremum of $A$ exists and satisfies $\sup A = \sup B$. \end{lemma} We call $M\subseteq P$ \emph{order dense} in $P$ if for every $x\in P$ one has \[\sup M_{\leq x}=x=\inf M_{\geq x}.\] Clearly, every order dense subset of $P$ is majorising. The next statement is shown for partially ordered vector spaces in \cite[Stelling 1.2.7]{Waaij}, for sake of completeness we give a shorter proof here. \begin{proposition} \label{pro:orderdensitytransitive} Let $M \subseteq N\subseteq P$. If $M$ is order dense in $N$ and $N$ is order dense in $P$, then $M$ is order dense in $P$. \end{proposition} \begin{proof} Let $p \in P$. Clearly, $p$ is a lower bound of $M_{\geq p}$. To show that $p$ is the greatest lower bound of $M_{\geq p}$, let $z\in P$ be another lower bound of $M_{\geq p}$. To obtain $p\geq z$, it is sufficient to show that $N_{\geq p}\subseteq N_{\geq z}$, since then the order density of $N$ in $P$ implies $p=\inf N_{\geq p}\geq \inf N_{\geq z}=z$. Let $n\in N_{\geq p}$. Then $M_{\geq n}\subseteq M_{\geq p}$, hence $z$ is a lower bound of $M_{\geq n}$. As $M$ is order dense in $N$, we obtain $n=\inf M_{\geq n}\geq z$. Therefore $N_{\geq p}\subseteq N_{\geq z}$. We have shown $p=\inf M_{\geq p}$. A similar argument gives $p=\sup M_{\leq p}$. \end{proof} Let $P$ and $Q$ be partially ordered sets and $f\colon P\to Q$ a map. 
$f$ is called \emph{monotone} if for every $x,y\in P$ with $x\leq y$ one has that $f(x)\leq f(y)$, and \emph{order reflecting} if for every $x,y\in P$ with $f(x)\leq f(y)$ one has that $x\leq y$. Note that every order reflecting map is injective. We call $f$ an \emph{order embedding} if $f$ is monotone and order reflecting. $f$ is called \emph{order bounded} if every order bounded set is mapped into an order bounded set. In the next statement, for sets $U\subseteq P$ and $V\subseteq Q$ we use the notation $f[U]$ for the image of $U$ under $f$, and $[V]f$ for the preimage of $V$. \begin{proposition} \label{pro:infimum} Let $f\colon P\to Q$ be an order embedding and $M\subseteq P$. \begin{itemize} \item[(i)] If the infimum of $f[M]$ exists in $Q$ and is an element of $f[P]$, then the infimum of $M$ exists in $P$ and equals the unique preimage of $\inf f[M]$, i.e.\ $[\{\inf f[M]\}]f=\{\inf M\}$. \item[(ii)] Assume that $f[P]$ is order dense in $Q$. Then the infimum of $M$ exists in $P$ if and only if the infimum of $f[M]$ exists in $Q$ and is an element of $f[P]$. \end{itemize} Analogous statements are valid for the supremum. \end{proposition} \begin{proof} For (i), assume that the infimum of $f[M]$ exists in $Q$ and is an element of $f[P]$. Since $f$ is injective, there is a unique $p\in P$ with $f(p)=\inf f[M]$. It is sufficient to show that $p=\inf M$. As $f$ is order reflecting, $p$ is a lower bound of $M$. For any other lower bound $l\in P$ of $M$ the monotony of $f$ implies $f(l)$ to be a lower bound of $f[M]$. Thus $f(l)\leq \inf f[M]=f(p)$. Since $f$ is order reflecting, we conclude $l\leq p$. This proves $p$ to be the greatest lower bound of $M$, i.e.\ $p = \inf M$. In order to prove (ii), assume that the infimum of $M$ exists in $P$. We show that $f(\inf M)$ is the infimum of $f[M]$. The monotony of $f$ implies $f(\inf M)$ to be a lower bound of $f[M]$. Let $l\in Q$ be a lower bound of $f[M]$. 
Since $f[P]$ is order dense in $Q$, we know that $l=\sup \{q \in f[P]; q \leq l\}$. In order to prove $l\leq f(\inf M)$ it is sufficient to show that $f(\inf M)$ is an upper bound of $\{q \in f[P]; q \leq l\}$. For $q \in f[P]$ there is $p \in P$ such that $f(p)=q$. If furthermore $q \leq l$, we conclude $f(p)=q\leq l \leq f[M]$. Since $f$ is order reflecting, $p$ is a lower bound of $M$. This implies $p \leq \inf M$, and the monotony of $f$ shows $q=f(p) \leq f(\inf M)$. We have therefore proven $f(\inf M)$ to be an upper bound of $\{q \in f[P]; q \leq l\}$. This implies $f(\inf M)$ to be the infimum of $f[M]$. The statements about the supremum are shown analogously. \end{proof} Let $G$ be a partially ordered abelian group, i.e.\ $(G,+,0)$ is an abelian group with a partial order such that for every $x,y,z\in G$ with $x\le y$ it follows $x+z\leq y+z$. Note that $G_+:=G_{\ge 0}$ is a monoid (with the induced operation from $G$). We call the elements of $G_+$ \emph{positive}. $G_+$ is called \emph{generating}\footnote{As usual, for $M,N\subseteq G$ we define $M-N:=\{m-n; \, (m,n)\in M \times N\}.$} if $G=G_+-G_+$. Note that $G$ is directed if and only if $G_+$ is generating. We say that $G$ is \emph{Archimedean} if for every $x,y\in G$ with $nx\le y$ for all $n\in \mathbb{N}$ one has that $x\le 0$. A directed full subgroup $I$ of $G$ is called an \emph{ideal}. A subgroup $H$ of $G$ is full if and only if $H\cap G_+$ is full. $G$ has the \emph{Riesz decomposition property} if for every $x,y\in G_+$ and $w\in [0,x+y]$ there are $u\in [0,x]$ and $v\in [0,y]$ such that $w=u+v$. Observe that $G$ has the Riesz decomposition property if and only if $G$ has the Riesz interpolation property, see e.g.\ \cite[Proposition 2.1]{Goodearl1986}. If $G$ is a lattice, then $G$ is called a \emph{lattice-ordered abelian group}. 
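In a lattice-ordered abelian group the Riesz decomposition can be realised explicitly: given $0\leq w\leq x+y$, put $u:=w\wedge x$ and $v:=w-u$; then $u\in[0,x]$ and $v\in[0,y]$. The following finite check in $\mathbb{Z}^2$ with componentwise order is an illustrative sketch, not part of the text:

```python
from itertools import product

def meet(a, b):
    """Componentwise minimum: the lattice meet in Z^2 with componentwise order."""
    return tuple(min(s, t) for s, t in zip(a, b))

def leq(a, b):
    """Componentwise order on Z^2."""
    return all(s <= t for s, t in zip(a, b))

def riesz_decompose(w, x):
    """For 0 <= w <= x + y, return u = w /\\ x and v = w - u."""
    u = meet(w, x)
    v = tuple(ws - us for ws, us in zip(w, u))
    return u, v

ok = True
rng = range(0, 3)
for x in product(rng, rng):
    for y in product(rng, rng):
        s = tuple(xs + ys for xs, ys in zip(x, y))
        # all w with 0 <= w <= x + y in the componentwise order
        for w in product(range(s[0] + 1), range(s[1] + 1)):
            u, v = riesz_decompose(w, x)
            ok = ok and leq((0, 0), u) and leq(u, x) and leq((0, 0), v) and leq(v, y)
```

Componentwise, $v_i=\max(0,w_i-x_i)\leq y_i$, which is why the check succeeds in every case.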
Note that every lattice-ordered abelian group satisfies the infinite distributive laws, see \cite[Proposition 1.7]{Goodearl1986}, and hence the equations \eqref{equ:distr_law}. For further standard notions in partially ordered abelian groups see \cite{Goodearl1986}. Let $G$, $H$ be partially ordered abelian groups. We call a group homomorphism $f\colon G\to H$ \emph{additive} and denote the set of all additive maps from $G$ to $H$ by $\operatorname{A}(G,H)$. As usual, on $\operatorname{A}(G,H)$ a group structure is introduced by means of $f+g\colon G\to H$, $x\mapsto f(x)+g(x)$, where the neutral element is $0\colon x\mapsto 0$. A translation invariant pre-order on $\operatorname{A}(G,H)$ is defined by $f\leq g$ whenever for every $x\in G_+$ we have $f(x)\leq g(x)$. If $G$ is directed, then $\leq$ is a partial order on $\operatorname{A}(G,H)$. Note that an element in $\operatorname{A}(G,H)$ is positive if and only if it is monotone. We denote the set of all monotone maps in $\operatorname{A}(G,H)$ by $\operatorname{A}_{+}(G,H)$. An element of the set $\operatorname{A}_{\operatorname{r}}(G,H):=\operatorname{A}_{+}(G,H)-\operatorname{A}_{+}(G,H)$ is called a \emph{regular} map. Finally, we denote the set of all order bounded maps in $\operatorname{A}(G,H)$ by $\operatorname{A}_{\operatorname{b}}(G,H)$. Clearly, $\operatorname{A}_{\operatorname{r}}(G,H)\subseteq \operatorname{A}_{\operatorname{b}}(G,H)$. If $G$ is directed, then $\operatorname{A}(G,H)$, $\operatorname{A}_{\operatorname{b}}(G,H)$ and $\operatorname{A}_{\operatorname{r}}(G,H)$ are partially ordered abelian groups. On a real vector space $X$, we consider a partial order $\leq$ on $X$ such that $X$ is a partially ordered abelian group under addition, and for every $\lambda \in \mathbb{R}_+$ and $x\in X_+$ one has that $\lambda x\in X_+$. Then $X$ is called a \emph{partially ordered vector space}. Note that $X$ is Archimedean if and only if $\frac{1}{n}x\downarrow 0$ for every $x \in X_+$. 
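To illustrate the Archimedean property and its characterisation via $\frac{1}{n}x\downarrow 0$, consider the following standard non-Archimedean example (included here as a sketch).

```latex
% Example (standard): X = \mathbb{R}^2 with the lexicographic order,
%   (a,b) \leq (c,d)  :\iff  a < c, or (a = c and b \leq d),
% is a directed partially ordered vector space that is not Archimedean:
\begin{align*}
  n(0,1) = (0,n) \leq (1,0) \ \text{for all } n \in \mathbb{N},
  \qquad\text{yet}\qquad (0,1) \not\leq (0,0).
\end{align*}
% Equivalently, \tfrac{1}{n}(1,0) = (\tfrac{1}{n},0) decreases, but its set
% of lower bounds \{(a,b);\, a<0\} \cup \{(0,b);\, b \in \mathbb{R}\} has no
% greatest element, so \tfrac{1}{n}(1,0) \downarrow 0 fails.
```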
If a partially ordered vector space $X$ is a lattice, we call $X$ a \emph{vector lattice}. For standard notations in the case that $X$ is a vector lattice see \cite{Positiveoperators_old}. If $X$ is an Archimedean directed partially ordered vector space, then there is an essentially unique Dedekind complete vector lattice $X^\delta$ and a linear order embedding $J\colon X \to X^\delta$ such that $J[X]$ is order dense in $X^\delta$. As usual, $X^\delta$ is called the \emph{Dedekind completion} of $X$. For partially ordered vector spaces $X$ and $Y$, $\operatorname{L}(X,Y)$ denotes the space of all linear operators. We set $\operatorname{L}_+(X,Y)=\operatorname{A}_+(X,Y)\cap \operatorname{L}(X,Y)$, $\operatorname{L}_{\operatorname{r}}(X,Y)=\operatorname{A}_{\operatorname{r}}(X,Y)\cap \operatorname{L}(X,Y)$ and $\operatorname{L}_{\operatorname{b}}(X,Y)=\operatorname{A}_{\operatorname{b}}(X,Y)\cap \operatorname{L}(X,Y)$. If $X$ is directed, $\operatorname{L}(X,Y)$, $\operatorname{L}_{\operatorname{b}}(X,Y)$ and $\operatorname{L}_{\operatorname{r}}(X,Y)$ are partially ordered vector spaces. \section{Order topology in partially ordered sets} In this section, let $P$ be a partially ordered set. We will introduce the order topology $\tau_o$ on $P$ using net catching sets, which we define next. \begin{definition} A subset $U\subseteq P$ is called a \emph{net catching set} for $x\in P$ if for all nets $(\hat{x}_\alpha)_{\alpha\in A}$ and $(\check{x}_\alpha)_{\alpha\in A}$ in $P$ with $\hat{x}_\alpha \uparrow x$ and $\check{x}_\alpha\downarrow x$ there is $\alpha \in A$ such that $[\hat{x}_\alpha,\check{x}_\alpha]\subseteq U$. \end{definition} \begin{proposition}\label{pro:net_catching_sets_pos} Let $U\subseteq P$ and $x\in P$. The following statements are equivalent. \begin{itemize} \item[(i)] $U$ is a net catching set for $x$. 
\item[(ii)] For all nets $(\hat{x}_\alpha)_{\alpha\in A}$ and $(\check{x}_\beta)_{\beta\in B}$ in $P$ with $\hat{x}_\alpha \uparrow x$ and $\check{x}_\beta\downarrow x$ there are $\alpha \in A$ and $\beta \in B$ such that $[\hat{x}_\alpha,\check{x}_\beta]\subseteq U$. \item[(iii)] For all subsets $\hat{M}\subseteq P$ being directed upward and $\check{M}\subseteq P$ being directed downward with $\sup \hat{M}=x=\inf \check{M}$ there are $\hat{m}\in \hat{M}$ and $\check{m}\in \check{M}$ such that $[\hat{m},\check{m}]\subseteq U$. \end{itemize} \end{proposition} \begin{proof} It is clear that (ii)$\Rightarrow$(i). In order to show (i)$\Rightarrow$(iii), let $\hat{M}$ and $\check{M}$ be as in (iii). We endow $\check{M}$ with the reversed order and define $A:=\hat{M}\times \check{M}$ with the component-wise order on $A$. For $\alpha=(\hat{m},\check{m})\in A$ let $\hat{x}_\alpha := \hat{m}$ and $\check{x}_\alpha:=\check{m}$. This defines nets $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{x}_\alpha)_{\alpha \in A}$ with $\hat{x}_\alpha \uparrow x$ and $\check{x}_\alpha \downarrow x$. Thus (i) shows the existence of $(\hat{m},\check{m})= \alpha\in A$ such that $[\hat{m},\check{m}]= [\hat{x}_\alpha,\check{x}_\alpha] \subseteq U$. It remains to show (iii)$\Rightarrow$(ii). Let $(\hat{x}_\alpha)_{\alpha\in A}$ and $(\check{x}_\beta)_{\beta\in B}$ be as in (ii). Define $\hat{M}:=\{\hat{x}_\alpha;\alpha \in A\}$ and $\check{M}:=\{\check{x}_\beta;\beta \in B\}$ and observe that $\hat{M}$ is directed upward and $\check{M}$ is directed downward with $\sup \hat{M}=x=\inf \check{M}$. From (iii) we conclude the existence of $\hat{m}\in \hat{M}$ and $\check{m}\in \check{M}$ such that $[\hat{m},\check{m}]\subseteq U$. There are $\alpha \in A$ and $\beta \in B$ such that $\hat{m}=\hat{x}_\alpha$ and $\check{m}=\check{x}_\beta$, which implies $[\hat{x}_\alpha,\check{x}_\beta]=[\hat{m},\check{m}]\subseteq U$. 
\end{proof} \begin{definition} A subset $O$ of $P$ is called \emph{order open} if $O$ is a net catching set for every $x\in O$. A subset $C$ of $P$ is called \emph{order closed} if $P\setminus C$ is order open. Define \begin{align*} \tau_o(P):=\{O\subseteq P;\, O \mbox{ is order open}\}. \end{align*} \end{definition} The following is straightforward. \begin{proposition} $\tau_o(P)$ is a topology on $P$. \end{proposition} The topology $\tau_o(P)$ (or, shortly, $\tau_o$) is referred to as the \emph{order topology} on $P$. As usual, for a net $(x_\alpha)$ in $P$ converging to $x\in P$ with respect to the topology $\tau_o$ we write $x_\alpha \xrightarrow{\tau_o} x$. \begin{remark} Our definition of the order topology is a straightforward generalisation of the topology given in \cite{Dobbertin84} on complete lattices. For this, compare \cite[Proposition 1]{Dobbertin84} with \ref{pro:net_catching_sets_pos} (iii). On the other hand, note that a net catching set is a generalisation of a concept in partially ordered vector spaces introduced in \cite[Definition 3.3]{Imh}. By \cite[Theorem 4.2]{Imh}, the order topology coincides with the topology studied in \cite{Imh}. \end{remark} \section{Order convergence in partially ordered sets} In this section, let $P$ be a partially ordered set. We will introduce three types of order convergence and relate them to $\tau_o$-convergence. \begin{definition} \label{def:orderconvergences} Let $x \in P$ and let $(x_\alpha)_{\alpha \in A}$ be a net in $P$. We define \begin{itemize} \item[(i)] $x_\alpha \xrightarrow{o_1} x$, if there are nets $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{x}_\alpha)_{\alpha \in A}$ in $P$ such that $\check{x}_\alpha \downarrow x$, $\hat{x}_\alpha \uparrow x$ and $\hat{x}_\alpha \leq x_\alpha \leq \check{x}_\alpha$ for every $\alpha \in A$. 
\item[(ii)] $x_\alpha \xrightarrow{o_2} x$, if there are nets $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{x}_\alpha)_{\alpha \in A}$ in $P$ and $\alpha_0 \in A$ such that $\check{x}_\alpha \downarrow x$, $\hat{x}_\alpha \uparrow x$ and $\hat{x}_\alpha \leq x_\alpha \leq \check{x}_\alpha$ for every $\alpha \in A_{\geq \alpha_0}$. \item[(iii)] $x_\alpha \xrightarrow{o_3} x$, if there are nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\gamma)_{\gamma \in C}$ in $P$ and a map $\eta\colon B \times C \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\gamma \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\gamma$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. \end{itemize} \end{definition} \begin{remark} \label{rem:linktoliteratureoiconv} Note that the $o_1$-convergence is inspired by the classical order convergence in vector lattices, see e.g.\ \cite{Positiveoperators_old}. The concepts of $o_2$-convergence and $o_3$-convergence are adopted from \cite{Abra}, where these convergences are considered in vector lattices. In Proposition \ref{pro:char_o_i_poag} below the precise link will be given. The $o_3$-convergence in partially ordered vector spaces is defined in \cite[Section 1.4]{Wulich2017}. Note furthermore that the order convergence concepts studied in \cite[II.6.3]{Vulikh67} for lattices and in \cite[Definition 1]{Wolk} for partially ordered sets are equivalent to the $o_3$-convergence. This will be established in Proposition \ref{pro:char_o3conv} below. \end{remark} To establish the link to the order convergence concepts given in \cite{Wulich2017} and \cite{Wolk}, we need the following notion. \begin{definition} Let $M$ be a set. A net $(x_\alpha)_{\alpha \in A}$ in $M$ is called a \emph{direction} if for every $\alpha \in A$ there is $\beta \in A$ such that $\alpha < \beta$. \end{definition} The next lemma gives a link between directions and nets.
\begin{lemma} \label{lem:directions} Let $M$ be a set and let $(x_\alpha)_{\alpha \in A}$ be a net in $M$. If $A \times \mathbb{N}$ is ordered componentwise, $(x_\alpha)_{(\alpha,n)\in A\times \mathbb{N} }$ is a direction and a subnet of $(x_\alpha)_{\alpha \in A}$. \end{lemma} \begin{proof} Clearly $(x_\alpha)_{(\alpha,n)\in A\times \mathbb{N} }$ is a direction. The map $\phi \colon A\times \mathbb{N} \to A$, $(\alpha,n)\mapsto \alpha$ is monotone and $\phi[A\times \mathbb{N}]$ is majorising in $A$. Since $x_\alpha=x_{\phi(\alpha,n)}$ for every $(\alpha,n)\in A \times \mathbb{N}$, the net $(x_\alpha)_{(\alpha,n)\in A\times \mathbb{N} }$ is a subnet of $(x_\alpha)_{\alpha \in A}$. \end{proof} In the subsequent proposition, the statement in (iii) is the convergence given in \cite[Definition II.6.3]{Vulikh67}, and the concept in (iv) is the convergence considered in \cite[Definition 1]{Wolk}. \begin{proposition}\label{pro:char_o3conv} Let $x \in P$ and let $(x_\alpha)_{\alpha \in A}$ be a net in $P$. Then the following statements are equivalent. \begin{itemize} \item[(i)] $x_\alpha \xrightarrow{o_3} x$, \item[(ii)] there are nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\beta)_{\beta \in B}$ in $P$ and a map $\eta\colon B \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\beta \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\beta$ for every $\beta\in B$ and $\alpha \in A_{\geq \eta(\beta)}$, \item[(iii)] there are directions $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\gamma)_{\gamma \in C}$ in $P$ and a map $\eta\colon B \times C \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\gamma \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\gamma$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. 
\item[(iv)] there are sets $\hat{M},\check{M}\subseteq P$ and $\kappa \colon \hat{M}\times \check{M}\rightarrow A$ such that $\hat{M}$ is directed upward, $\check{M}$ is directed downward, $\sup \hat{M}=x=\inf \check{M}$ and for every $\hat{m} \in \hat{M}$, $\check{m}\in \check{M}$ and $\alpha \in A_{\geq \kappa(\hat{m},\check{m})}$ we have $\hat{m}\leq x_\alpha \leq \check{m}$. \end{itemize} \begin{proof} It is clear that (ii) implies (i) and that (iii) implies (i). To show that (i) implies (ii), we assume that there are nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\gamma)_{\gamma \in C}$ in $P$ and a map $\eta\colon B \times C \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\gamma \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\gamma$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. For $(\beta,\gamma)\in B\times C$ we define $\hat{y}_{(\beta,\gamma)}:=\hat{x}_\beta$ and $\check{y}_{(\beta,\gamma)}:=\check{x}_\gamma$. Observe that $(\hat{y}_\delta)_{\delta\in B\times C}$ is a subnet of $(\hat{x}_\beta)_{\beta \in B}$ and, similarly, $(\check{y}_\delta)_{\delta\in B\times C}$ is a subnet of $(\check{x}_\gamma)_{\gamma \in C}$. Thus $\hat{y}_\delta\uparrow x$ and $\check{y}_\delta\downarrow x$. Furthermore, for $(\beta,\gamma)\in B\times C$ and $\alpha\in A_{\geq \eta(\beta,\gamma)}$ we have $\hat{y}_{(\beta,\gamma)}=\hat{x}_\beta\leq x_\alpha\leq \check{x}_\gamma=\check{y}_{(\beta,\gamma)}$. We next show that (i) implies (iii). Let $(\hat{x}_\beta)_{\beta\in B}$, $(\check{x}_\gamma)_{\gamma \in C}$ and $\eta\colon B \times C \to A$ be as in Definition \ref{def:orderconvergences}. 
According to Lemma \ref{lem:directions} we consider the directions $(\hat{x}_\beta)_{(\beta,n)\in B\times \mathbb{N}}$, $(\check{x}_\gamma)_{(\gamma,m) \in C\times \mathbb{N}}$ and define $\tilde{\eta}\colon(B\times \mathbb{N})\times (C\times \mathbb{N})\to A$, $((\beta,n),(\gamma,m))\mapsto\eta(\beta,\gamma)$ to obtain (iii). To show that (i) implies (iv), set $\hat{M}:=\{\hat{x}_\beta; \beta\in B\}$ and $\check{M}:=\{\check{x}_\gamma; \gamma\in C\}$ and observe that $\hat{M}$ is directed upward, $\check{M}$ is directed downward and $\sup \hat{M}=x=\inf \check{M}$ is satisfied. To construct $\kappa$, note that for $(\hat{m},\check{m})\in\hat{M}\times\check{M}$ there is $(\beta,\gamma)\in B\times C$ such that $\hat{m}=\hat{x}_\beta$ and $\check{m}=\check{x}_\gamma$. Hence we can define $\kappa(\hat{m},\check{m}):=\eta(\beta,\gamma)$ and obtain for $\alpha \in A_{\geq \kappa(\hat{m},\check{m})}=A_{\geq \eta(\beta,\gamma)}$ that $\hat{m}=\hat{x}_\beta\leq x_\alpha \leq \check{x}_\gamma=\check{m}$. Finally, we establish that (iv) implies (i). Define $B:=\hat{M}$ and $C:=\check{M}$, where $C$ is endowed with the reversed order of $P$. For $\beta\in B$ and $\gamma\in C$ set $\hat{x}_\beta:=\beta$ and $\check{x}_\gamma:=\gamma$, and define $\eta:=\kappa$; these definitions yield the desired properties. \end{proof} \end{proposition} The following proposition gives the general relationships between the different concepts of order convergence. The further discussion below will show that all the concepts differ. \begin{proposition}\label{pro:basic_convergences} Let $x \in P$ and let $(x_\alpha)_{\alpha \in A}$ be a net in $P$. Then \begin{itemize} \item[(i)] $x_\alpha \xrightarrow{o_1} x$ implies $x_\alpha \xrightarrow{o_2} x$, \item[(ii)] $x_\alpha \xrightarrow{o_2} x$ implies $x_\alpha \xrightarrow{o_3} x$, and \item[(iii)] $x_\alpha \xrightarrow{o_3} x$ implies $x_\alpha \xrightarrow{\tau_o} x$.
\end{itemize} \end{proposition} \begin{proof} As (i) and (ii) are straightforward, it remains to show (iii). For this, let $O\in \tau_o$ be a neighbourhood of $x$. The convergence $x_\alpha \xrightarrow{o_3} x$ means that there are nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\gamma)_{\gamma \in C}$ in $P$ and a map $\eta\colon B \times C \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\gamma \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\gamma$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. Since $O$ is a net catching set for $x$, Proposition \ref{pro:net_catching_sets_pos} shows the existence of $\beta\in B$ and $\gamma\in C$ such that $[\hat{x}_\beta,\check{x}_\gamma]\subseteq O$. Hence for $\alpha\in A_{\geq \eta(\beta,\gamma)}$ we have $x_\alpha\in [\hat{x}_\beta,\check{x}_\gamma]\subseteq O$. \end{proof} \begin{remark}\label{rem:decreasingnet} (a) Observe that every net $(x_\alpha)_{\alpha\in A}$ with $x_\alpha\downarrow x\in P$ satisfies $x_\alpha\xrightarrow{o_1}x$, and due to Proposition \ref{pro:basic_convergences} also $x_\alpha\xrightarrow{\tau_o}x$.\\ (b) Let $M \subseteq P$, let $(x_\alpha)_{\alpha \in A}$ be a net in $M$ and let $i \in \{1,2,3\}$. Note that if $x_\alpha \xrightarrow{o_i}x\in M$ in $M$, then also $x_\alpha \xrightarrow{o_i}x$ in $P$. An analogue is valid for $\tau_o$-convergence. Note furthermore that the converse statements are not true, in general. This is shown in Example \ref{exa:extensionprop} below, where $M$ is even an order dense subspace of a vector lattice $P$. \end{remark} \begin{remark}\label{rem:o_1_and_o_2} Let $(x_\alpha)_{\alpha\in A}$ be a net in $P$ and $x\in P$. We have $x_\alpha\xrightarrow{o_2}x$ if and only if there is $\alpha\in A$ such that the net $(x_\beta)_{\beta\in A_{\geq \alpha}}$ satisfies $x_\beta\xrightarrow{o_1}x$. \end{remark} In general, $o_2$-convergence does not imply $o_1$-convergence.
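For concreteness, the failure can be exhibited already in $P=\mathbb{R}$ with $x=0$; the following sketch is the special case of the construction used in the proposition below.

```latex
% Sketch (special case P = \mathbb{R}, x = 0 of the proposition below).
Let $A:=[0,\infty)$ carry the partial order $\preceq$ that coincides with
the usual order on $(0,\infty)$ and has $0$ as its greatest element; then
$A$ is directed upward. Set $x_\alpha:=\alpha$ for $\alpha\in A$. Since
$A_{\succeq 0}=\{0\}$, the constant nets
$\hat{x}_\alpha:=\check{x}_\alpha:=0$ witness
$x_\alpha\xrightarrow{o_2}0$. However, $x_\alpha\xrightarrow{o_1}0$ fails:
a net $(\check{x}_\alpha)_{\alpha\in A}$ with $\check{x}_\alpha\downarrow 0$
and $\alpha=x_\alpha\leq\check{x}_\alpha$ for \emph{every}
$\alpha\in[0,\infty)$ would have to dominate arbitrarily large reals while
decreasing, which is impossible.
```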
\begin{proposition} \label{pro:o2noto1} Let $x\in P$ have the property that for every $p\in P_{\geq x}$ there is a $q\in P$ such that $p<q$. Then there is a net $(x_\alpha)_{\alpha\in A}$ in $P$ such that $x_\alpha\xrightarrow{o_2}x$, but not $x_\alpha\xrightarrow{o_1}x$. \end{proposition} \begin{proof} Let $x\in P$ have the above property. Consider $A:=P_{\geq x}$ and define a partial order $\preceq$ on $A$ as follows: on $A\setminus \{x\}$ we take the order induced from $P$, and for every $y\in A$ we set $y\preceq x$. Observe that $A$ is directed upward. Set $x_\alpha:=\alpha$ for every $\alpha\in A$. First we show $x_\alpha\xrightarrow{o_2}x$. We define $\alpha_0:=x$ and $\hat{x}_{\alpha}:=\check{x}_{\alpha}:=x$ for every $\alpha\in A$ and obtain $\hat{x}_\alpha \leq x_\alpha \leq \check{x}_\alpha$ for every $\alpha\in A_{\succeq \alpha_0}=\{x\}$. It remains to show that $x_\alpha\xrightarrow{o_1}x$ does not hold. Assuming the contrary, there is a net $(\check{x}_{\alpha})_{\alpha\in A}$ with $\check{x}_\alpha\downarrow x$ and $x_\alpha\leq\check{x}_\alpha$ for every $\alpha\in A$. By the assumption on $x$, there is $\alpha \in A$ such that $\alpha>x$, and, since $\check{x}_{\alpha}\in A$, there is $\beta\in A$ such that $\beta> \check{x}_{\alpha}$. Observe that $\beta\geq \check{x}_\alpha\geq x_\alpha=\alpha>x$, hence $\beta\succeq \alpha$ and thus $\beta>\check{x}_\alpha\geq \check{x}_\beta\geq x_\beta=\beta$, which is a contradiction. \end{proof}
Thus there are nets $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{x}_\alpha)_{\alpha\in A}$ and $\alpha_0 \in A$ such that $\hat{x}_\alpha \uparrow p$, $\check{x}_\alpha \downarrow p$ and $\hat{x}_\alpha \leq x_\alpha \leq \check{x}_\alpha$ for all $\alpha \in A_{\geq \alpha_0}$. Since $P$ is directed upward and $\{x_\alpha;\, \alpha \in A\}$ is bounded, there is an upper bound $\check{p}$ of $\{x_\alpha ;\, \alpha \in A\}\cup \{\check{x}_{\alpha_0}\}$. For $\alpha \in A$ define $\check{y}_\alpha:=\check{x}_\alpha$ if $\alpha \geq \alpha_0$ and $\check{y}_\alpha:=\check{p}$ otherwise. This defines a net $(\check{y}_\alpha)_{\alpha \in A}$ with $\check{y}_\alpha \downarrow p$ and $x_\alpha \leq \check{y}_\alpha$ for every $\alpha \in A$. Similarly we can define a net $(\hat{y}_\alpha)_{\alpha \in A}$ to obtain $x_\alpha \xrightarrow{o_1}p$. (b) The statement in (a) shows that the definition of order convergence given in \cite[Chapter 1, Section 5]{Peressini67} for nets with bounded domain coincides with the concepts of $o_1$-convergence and $o_2$-convergence. (c) If $x\in P$ is such that $P_{\geq x}$ is directed upward and $P_{\leq x}$ is directed downward, then the following are equivalent: \begin{itemize} \item[(i)] For every net $(x_\alpha)_{\alpha\in A}$ in $P$ with $x_\alpha\xrightarrow{o_2} x$ we have that $x_\alpha\xrightarrow{o_1} x$. \item[(ii)] $P_{\geq x}$ is bounded above and $P_{\leq x}$ is bounded below. \end{itemize} Indeed, to show (i)$\Rightarrow$(ii), we assume, to the contrary, that (ii) is not valid. Suppose w.l.o.g.\ that $P_{\geq x}$ is not bounded from above, thus for every $p\in P_{\geq x}$ there is $r\in P_{\geq x}$ such that $r\not\leq p$. Since $P_{\geq x}$ is directed upward, there is $q\in P_{\geq x}$ such that $p,r\leq q$. As $p<q$, the assumption of Proposition \ref{pro:o2noto1} is satisfied, i.e.\ (i) is not true. We establish (ii)$\Rightarrow$(i). 
Let $(x_\alpha)_{\alpha\in A}$ be a net in $P$ such that $x_\alpha\xrightarrow{o_2} x$, i.e.\ there are nets $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{x}_\alpha)_{\alpha \in A}$ in $P$ and $\alpha_0 \in A$ such that $\check{x}_\alpha \downarrow x$, $\hat{x}_\alpha \uparrow x$ and $\hat{x}_\alpha \leq x_\alpha \leq \check{x}_\alpha$ for every $\alpha \in A_{\geq \alpha_0}$. By (ii) there is an upper bound $u\in P$ for $P_{\geq x}$ and a lower bound $l\in P$ for $P_{\leq x}$. For $\alpha\in A$, set $\check{y}_\alpha:=\check{x}_\alpha$ whenever $\alpha\geq\alpha_0$, and $\check{y}_\alpha:=u$ otherwise. Similarly, set $\hat{y}_\alpha:=\hat{x}_\alpha$ whenever $\alpha\geq\alpha_0$, and $\hat{y}_\alpha:=l$ otherwise. Observe that $\check{y}_\alpha \downarrow x$, $\hat{y}_\alpha \uparrow x$ and $\hat{y}_\alpha \leq x_\alpha \leq \check{y}_\alpha$ for every $\alpha \in A$. Thus $x_\alpha\xrightarrow{o_1} x$. \end{remark} \begin{remark} Due to Remark \ref{rem:o1o2}(c), in every partially ordered vector space the concepts of $o_1$-convergence and $o_2$-convergence differ. Furthermore, an example of Fremlin in \cite[Example 1.4]{Abra} shows that $o_3$-convergence does not imply $o_2$-convergence. For this, use Proposition \ref{pro:char_o_i_poag} below. A sequence which is $\tau_o$-convergent, but not $o_3$-convergent, can be found in Example \ref{exa:ordertopconvergentnetnoto3convergent} below. The last two examples are given in the setting of vector lattices. Note that there are examples where $o_2$-convergence, $o_3$-convergence and $\tau_o$-convergence coincide, see Example \ref{exa:opensubsetsofR} below. \end{remark} \begin{proposition} \label{pro:o2_o3_convergenceDedekind_complete_lattice} Let $P$ be a Dedekind complete lattice, let $(x_\alpha)_{\alpha \in A}$ be a net in $P$ and $x \in P$. Then $x_\alpha \xrightarrow{o_2}x$ if and only if $x_\alpha \xrightarrow{o_3}x$. 
\end{proposition} \begin{proof} Due to Proposition \ref{pro:basic_convergences} it is sufficient to show that $x_\alpha \xrightarrow{o_3}x$ implies $x_\alpha \xrightarrow{o_2}x$. Assume that there are nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\gamma)_{\gamma \in C}$ in $P$ and a map $\eta\colon B \times C \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\gamma \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\gamma$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. Fix $(\beta_0,\gamma_0)\in B \times C$. Set $\alpha_0:=\eta(\beta_0,\gamma_0)$. By Remark \ref{rem:o_1_and_o_2} it is sufficient to prove that $(x_\alpha)_{\alpha \in A_{\geq \alpha_0}}$ is $o_1$-convergent to $x$. For $\alpha \in A$ define $M_\alpha:=\{x_{\kappa}; \, \kappa \in A_{\geq \alpha}\}\cup \{x\}$. Note that for $(\beta,\gamma) \in B \times C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$ we have $\hat{x}_\beta \leq M_\alpha\leq \check{x}_{\gamma}$. As $P$ is a Dedekind complete lattice, $\hat{y}_\alpha:=\inf M_\alpha$ and $\check{y}_\alpha:=\sup M_\alpha$ exist for $\alpha \in A_{\geq \alpha_0}$. Furthermore $\hat{y}_\alpha \leq \{x_\alpha,x\} \leq \check{y}_\alpha$ for all $\alpha \in A_{\geq \alpha_0}$, $\hat{y}_\alpha \uparrow$ and $\check{y}_\alpha \downarrow$. Let $\hat{y}_\alpha\leq z$ for all $\alpha \in A_{\geq \alpha_0}$. For $\beta \in B$ there is $\alpha \in A_{\geq \alpha_0}$ such that $\eta(\beta,\gamma_0)\leq \alpha$. Hence $\hat{x}_\beta \leq \inf M_{\eta(\beta,\gamma_0)}\leq \inf M_\alpha= \hat{y}_\alpha\leq z$ and we obtain $x=\sup\{\hat{x}_\beta;\, \beta \in B\}\leq z$. This shows $\hat{y}_\alpha \uparrow x$. Analogously we get $\check{y}_\alpha \downarrow x$. \end{proof} If we introduce the order topology $\tau_o$ on the partially ordered set of real numbers $\mathbb{R}$, we obtain the standard topology on $\mathbb{R}$. 
\begin{example} \label{exa:opensubsetsofR} Let $M \subseteq \mathbb{R}$ be an open set with respect to the standard topology $\tau$ and equip $M$ with the standard order of $\mathbb{R}$. We show that $\tau_o(M)$ is the restriction $\tau(M)$ of $\tau$ to $M$ and that $o_2$- and $o_3$-convergence in $M$ coincide with the convergence with respect to $\tau(M)$. Note that from Remark \ref{rem:o1o2} (c) it follows that $o_1$-convergence and $o_2$-convergence in $M$ do not coincide. We first show that convergence with respect to $\tau(M)$ implies $o_2$-convergence. Indeed, let $(x_\alpha)_{\alpha\in A}$ be a net in $M$ such that $x_\alpha \xrightarrow{\tau(M)}x \in M$. Since $M$ is open, there is $r>0$ such that the open ball $B_r(x)\subseteq \mathbb{R}$ with center $x$ and radius $r$ is contained in $M$. Hence there is $\alpha_0\in A$ such that for every $\alpha \in A_{\geq \alpha_0}$ we have $x_\alpha \in B_r(x)$. We therefore assume w.l.o.g.\ that $(x_\alpha)_{\alpha\in A}$ is a net in $B_r(x)$. Since $B_r(x)$ is a Dedekind complete lattice, by Proposition \ref{pro:o2_o3_convergenceDedekind_complete_lattice} it is sufficient to show that $x_\alpha \xrightarrow{o_3}x$. For $\beta \in B:=(0,r)$ let $\hat{x}_\beta:= x-\beta$ and $\check{x}_\beta:=x+ \beta$. If we equip $B$ with the reversed order of $\mathbb{R}$, we obtain nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\beta)_{\beta \in B}$ in $B_r(x)$ with $\hat{x}_\beta \uparrow x$ and $\check{x}_\beta \downarrow x$. For every $\beta \in B$ there is $\alpha_\beta \in A$ such that for every $\alpha \in A_{\geq \alpha_\beta}$ we have $|x_\alpha-x|\leq \beta$, i.e.\ $\hat{x}_\beta\leq x_\alpha \leq \check{x}_\beta$. We set $\eta \colon B \to A$, $\beta \mapsto \alpha_\beta$, and obtain $x_\alpha \xrightarrow{o_3}x$. We have now shown that convergence with respect to $\tau(M)$ implies $o_2$-convergence in $M$. 
Note that $o_2$-convergence implies $o_3$-convergence and that $o_3$-convergence implies convergence with respect to $\tau_o(M)$ in $M$ by Proposition \ref{pro:basic_convergences}. It therefore remains to establish that convergence with respect to $\tau_o(M)$ implies convergence with respect to $\tau(M)$. To show that $\tau(M) \subseteq \tau_o(M)$, let $O \in \tau(M)$ and $x \in O$. Since $M$ is open in $\mathbb{R}$ with respect to $\tau$ and $O \in \tau(M)$, we conclude $O \in \tau$. Thus there is $r>0$ such that $B_{2r}(x)\subseteq O$. To show that $O$ is a net catching set for $x$ let $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{x}_\alpha)_{\alpha \in A}$ be nets in $M$ such that $\hat{x}_\alpha \uparrow x$ and $\check{x}_\alpha \downarrow x$. Thus there is $\alpha \in A$ such that $[\hat{x}_\alpha,\check{x}_\alpha]\subseteq [x-r,x+r]\subseteq B_{2r}(x)\subseteq O$. This proves $O \in \tau_o(M)$. \end{example} Order closed sets can be characterised by means of $o_i$-convergence. \begin{theorem} \label{thm:orderclosed} Let $i\in \{1,2,3\}$ and $C\subseteq P$. The following statements are equivalent: \begin{itemize} \item[(i)] $C$ is order closed. \item[(ii)] For every net $(x_{\alpha})_{\alpha\in A}$ in $C$ with $x_\alpha\xrightarrow{o_i} x\in P$ it follows that $x\in C$. \end{itemize} \end{theorem} \begin{proof} In this proof, a set $C$ that satisfies (ii) is called $o_i$-closed. Observe that from Proposition \ref{pro:basic_convergences} it follows that order closed sets are always $o_3$-closed, $o_3$-closed sets are $o_2$-closed and that $o_2 $-closed sets are $o_1$-closed. It remains to show that $o_1$-closed sets are order closed. By contradiction, assume that $C\subseteq P$ is not order closed. Thus $P\setminus C$ is not order open, i.e.\ there is $x\in P\setminus C$ such that $P\setminus C$ is not a net catching set for $x$. 
This implies the existence of nets $(\hat{x}_\alpha)_{\alpha\in A}$ and $(\check{x}_\alpha)_{\alpha\in A}$ in $P$ with $\hat{x}_\alpha \uparrow x$ and $\check{x}_\alpha\downarrow x$ such that for every $\alpha \in A$ we have that $[\hat{x}_\alpha,\check{x}_\alpha]\not\subseteq P\setminus C$. Hence, for every $\alpha\in A$ there is $x_\alpha\in[\hat{x}_\alpha,\check{x}_\alpha]\cap C$. Note that $(x_\alpha)_{\alpha\in A}$ is a net in $C$ with $x_\alpha\xrightarrow{o_1}x\in P\setminus C$, hence $C$ is not $o_1$-closed. \end{proof} \begin{corollary} \label{cor:orderdenseimpliesordertopologicdense} Let $M\subseteq P$ be a lattice with the induced order from $P$. If $M$ is order dense in $P$, then $M$ is dense in $P$ with respect to $\tau_o(P)$. \end{corollary} \begin{proof} Let $p \in P$. Let $A:=M_{\geq p}$ be equipped with the reversed order of $M$. Since $M$ is a lattice, we know $A$ to be directed. Setting $x_\alpha:=\alpha$ for $\alpha \in A$, we obtain a net $(x_\alpha)_{\alpha \in A}$ in $M$ with $x_\alpha \downarrow$. Since $M$ is order dense in $P$, we know furthermore $\inf\{x_\alpha;\, \alpha \in A\}=\inf A=\inf M_{\geq p}=p$, hence $x_\alpha \downarrow p$. Thus $x_\alpha \xrightarrow{o_1}p$ and Theorem \ref{thm:orderclosed} shows that $p$ is contained in the closure of $M$ with respect to $\tau_o(P)$. \end{proof} For $o_i$-limits, we obtain the following monotonicity property. \begin{proposition}\label{pro:monotony} Let $i\in\{1,2,3\}$ and $(x_\alpha)_{\alpha\in A}$ and $(y_\beta)_{\beta\in B}$ be nets in $P$ such that $x_\alpha\xrightarrow{o_i} x\in P$ and $y_\beta\xrightarrow{o_i} y\in P$. If for every $\alpha_0\in A$ and $\beta_0\in B$ there are $\alpha\in A_{\geq\alpha_0}$ and $\beta\in B_{\geq\beta_0}$ such that $x_\alpha\leq y_\beta$, then $x\leq y$. \end{proposition} \begin{proof} By Proposition \ref{pro:basic_convergences} it is sufficient to show the statement for $i=3$. 
In this case, there are nets $(\hat{x}_\gamma)_{\gamma \in C}$, $(\check{x}_\delta)_{\delta \in D}$, $(\hat{y}_\varepsilon)_{\varepsilon \in E}$, $(\check{y}_\varphi)_{\varphi \in F}$ in $P$ and maps $\eta_x\colon C \times D \rightarrow A$, $\eta_y\colon E \times F \rightarrow B$ such that $\hat{x}_\gamma \uparrow x$, $\check{x}_\delta \downarrow x$, $\hat{y}_\varepsilon \uparrow y$, $\check{y}_\varphi \downarrow y$, $\hat{x}_\gamma \leq x_\alpha \leq \check{x}_\delta$, $\hat{y}_\varepsilon \leq y_\beta \leq \check{y}_\varphi$ for every $\gamma \in C$, $\delta\in D$, $\varepsilon\in E$, $\varphi\in F$, $\alpha \in A_{\geq \eta_x(\gamma,\delta)}$ and $\beta \in B_{\geq \eta_y(\varepsilon,\varphi)}$. For every $\gamma\in C$ and $\varphi\in F$ we have that $\hat{x}_\gamma\le \check{y}_\varphi$. Indeed, let $\delta\in D$, $\varepsilon\in E$ and note that by assumption there are $\alpha\in A_{\geq\eta_x(\gamma,\delta)}$ and $\beta\in B_{\geq\eta_y(\varepsilon,\varphi)}$ such that $\hat{x}_\gamma\leq x_\alpha\leq y_\beta\le \check{y}_\varphi$. From $\hat{x}_\gamma \uparrow x$ and $\check{y}_\varphi \downarrow y$ we conclude that $x\leq y$. \end{proof} \begin{remark} \label{rem:unique_order_limits} Note that Proposition \ref{pro:monotony} immediately implies the uniqueness of the $o_i$-limits. \end{remark} The combination of Theorem \ref{thm:orderclosed} with Proposition \ref{pro:monotony} yields the following statement. \begin{corollary}\label{cor:upperboundset_orderclosed} For every $p\in P$ the sets $P_{\leq p}$ and $P_{\geq p}$ are order closed. \end{corollary} \begin{remark}\label{rem:Floyd_sigma_comp} Corollary \ref{cor:upperboundset_orderclosed} implies that for every $p\in P$ the set $\{p\}$ is order closed, thus $P$ with the order topology is $\operatorname{T_1}$. Note that the order topology is not Hausdorff, in general. 
Indeed, a combination of Proposition \ref{pro:basic_convergences} and Remark \ref{rem:decreasingnet} yields that the order topology is always $\sigma$-compatible in the sense of \cite{Floyd1955}. Thus, \cite[Theorem 1]{Floyd1955} presents an example of a complete Boolean algebra on which the order topology is not Hausdorff. \end{remark} The following statement is a generalisation of the sandwich theorem for sequences given in \cite[Chapter II, \S 6,c)]{Vulikh67}. \begin{proposition} \label{pro:sandwichtheorem} \begin{itemize} \item[(i)] Let $(x_\alpha)_{\alpha \in A}$, $(y_\alpha)_{\alpha \in A}$ and $(z_\alpha)_{\alpha \in A}$ be nets in $P$ such that $x_\alpha \xrightarrow{o_1} p\in P$ and $z_\alpha \xrightarrow{o_1}p$. If for every $\alpha \in A$ one has $x_\alpha \leq y_\alpha \leq z_\alpha$, then $y_\alpha \xrightarrow{o_1}p$. \item[(ii)] Let $(x_\alpha)_{\alpha \in A}$, $(y_\alpha)_{\alpha \in A}$ and $(z_\alpha)_{\alpha \in A}$ be nets in $P$ such that $x_\alpha \xrightarrow{o_2} p\in P$ and $z_\alpha \xrightarrow{o_2}p$. If there is $\alpha_0 \in A$ such that for each $\alpha \in A_{\geq \alpha_0}$ we have $x_\alpha \leq y_\alpha \leq z_\alpha$, then $y_\alpha \xrightarrow{o_2}p$. \item[(iii)] Let $(x_\alpha)_{\alpha \in A}$, $(y_\beta)_{\beta \in B}$ and $(z_\gamma)_{\gamma \in C}$ be nets in $P$ such that $x_\alpha \xrightarrow{o_3} p\in P$ and $z_\gamma \xrightarrow{o_3}p$. If for $(\alpha_0,\gamma_0)\in A \times C$ there is $\beta_0 \in B$ such that for all $\beta \in B_{\geq \beta_0}$ there is $(\alpha,\gamma)\in A_{\geq \alpha_0}\times C_{\geq \gamma_0}$ with $x_\alpha \leq y_\beta \leq z_\gamma$, then $y_\beta \xrightarrow{o_3}p$. \end{itemize} \end{proposition} \begin{proof} To show (i), let $x_\alpha \xrightarrow{o_1} p\in P$ and $z_\alpha \xrightarrow{o_1}p$. 
Thus there are nets $(\hat{x}_\alpha)_{\alpha \in A}$ and $(\check{z}_\alpha)_{\alpha \in A}$ in $P$ such that $\hat{x}_\alpha \uparrow p$, $\check{z}_\alpha \downarrow p$ and $\hat{x}_\alpha \leq x_\alpha \leq y_\alpha \leq z_\alpha \leq \check{z}_\alpha$ for every $\alpha \in A$, hence we obtain $y_\alpha \xrightarrow{o_1}p$. The proof of (ii) is similar. To show (iii), assume $x_\alpha \xrightarrow{o_3} p\in P$ and $z_\gamma \xrightarrow{o_3}p$. Hence there are nets $(\hat{x}_\delta)_{\delta \in D}$, $(\check{x}_\kappa)_{\kappa \in K}$, $(\hat{z}_\lambda)_{\lambda \in L}$ and $(\check{z}_\epsilon)_{\epsilon \in E}$ in $P$ and maps $\eta_x \colon D \times K\rightarrow A$ and $\eta_z \colon L \times E \rightarrow C$ such that $\hat{x}_\delta \uparrow p$, $\check{x}_\kappa \downarrow p$, $\hat{z}_\lambda \uparrow p$, $\check{z}_\epsilon \downarrow p$, $\hat{x}_\delta\leq x_\alpha \leq \check{x}_\kappa$ for all $(\delta,\kappa)\in D\times K$ and $\alpha \in A_{\geq \eta_x(\delta,\kappa)}$, and $\hat{z}_\lambda \leq z_\gamma \leq \check{z}_\epsilon$ for all $(\lambda,\epsilon)\in L\times E$ and $\gamma \in C_{\geq \eta_z(\lambda,\epsilon)}$. Fix $\kappa \in K$ and $\lambda \in L$. By assumption, for $(\delta,\epsilon)\in D \times E$ there is $\beta_{(\delta,\epsilon)}\in B$ such that for all $\beta \in B_{\geq \beta_{(\delta,\epsilon)}}$ there exists $(\alpha,\gamma)\in A_{\geq \eta_x(\delta,\kappa)} \times C_{\geq \eta_z(\lambda,\epsilon)}$ with $x_\alpha\leq y_\beta \leq z_\gamma$, hence also $\hat{x}_\delta\leq x_\alpha \leq y_\beta \leq z_\gamma\leq \check{z}_\epsilon$. Thus $\eta_y \colon D \times E \rightarrow B$ with $\eta_y(\delta,\epsilon):=\beta_{(\delta,\epsilon)}$ defines a map such that $\hat{x}_\delta \leq y_\beta \leq \check{z}_\epsilon$ holds for every $(\delta,\epsilon)\in D \times E$ and $\beta \in B_{\geq \eta_y(\delta,\epsilon)}$. This proves $y_\beta \xrightarrow{o_3}p$.
\end{proof} If all three nets have the same index set, we can simplify (iii) to the statements given in the following corollary. \begin{corollary} \label{cor:sandwichtheorem} Let $(x_\alpha)_{\alpha \in A}$, $(y_\alpha)_{\alpha \in A}$ and $(z_\alpha)_{\alpha \in A}$ be nets in $P$ such that $x_\alpha \xrightarrow{o_3} p\in P$ and $z_\alpha \xrightarrow{o_3}p$. \begin{itemize} \item[(i)] If there is $\delta \in A$ such that for each $\alpha \in A_{\geq \delta}$ we have $x_\alpha \leq y_\alpha \leq z_\alpha$, then $y_\alpha \xrightarrow{o_3}p$. \item[(ii)] If for every $\delta\in A$ there is $\alpha_{\delta} \in A$ such that for every $\alpha \in A_{\geq \alpha_{\delta}}$ we have $x_{\delta} \leq y_\alpha \leq z_{\delta}$, then $y_\alpha \xrightarrow{o_3}p$. \end{itemize} \end{corollary} \begin{proof} For $(\alpha_0,\gamma_0)\in A\times A$ there is $\beta_0 \in A$ with $\beta_0\geq \delta$, $\beta_0\geq \alpha_0$ and $\beta_0\geq \gamma_0$. For $\beta \in A_{\geq \beta_0}$ the inequality $x_\beta \leq y_\beta \leq z_\beta$ is valid. If we set $\alpha:=\beta$ and $\gamma:=\beta$, we obtain $(\alpha,\gamma)\in A_{\geq \alpha_0}\times A_{\geq \gamma_0}$ with $x_\alpha= x_\beta \leq y_\beta \leq z_\beta =z_\gamma$. Hence Proposition \ref{pro:sandwichtheorem}(iii) implies the statement (i). For $(\alpha_0,\gamma_0)\in A\times A$ there is $\beta_0 \in A$ with $\beta_0\geq \alpha_0$ and $\beta_0\geq \gamma_0$. Now the assumption implies the existence of $\alpha_{\beta_0}\in A$ with $x_{\beta_0}\leq y_\beta \leq z_{\beta_0}$ for every $\beta \in A_{\geq \alpha_{\beta_0}}$. For $\beta \in A_{\geq \alpha_{\beta_0}}$ we set $\alpha:=\beta_0$ and $\gamma:=\beta_0$ to get $(\alpha,\gamma)\in A_{\geq \alpha_0}\times A_{\geq \gamma_0}$ with $x_\alpha= x_{\beta_0} \leq y_\beta \leq z_{\beta_0}= z_\gamma$. Hence Proposition \ref{pro:sandwichtheorem}(iii) implies the statement (ii) as well.
\end{proof} In distributive lattices the lattice operations are compatible with the order convergences. \begin{proposition} \label{pro:inf_o_i} Let $P$ be a distributive lattice and let $(x_\alpha)_{\alpha\in A}$ and $(y_\beta)_{\beta\in B}$ be nets in $P$. Let $A\times B$ be ordered component-wise and let $i\in\{1,2,3\}$. If $x_\alpha\xrightarrow{o_i} x\in P$ and $y_\beta\xrightarrow{o_i} y\in P$, then the net $(x_\alpha\wedge y_\beta)_{(\alpha,\beta)\in A\times B}$ satisfies $x_\alpha\wedge y_\beta\xrightarrow{o_i} x\wedge y$. An analogous statement is valid for the supremum. \end{proposition} \begin{proof} We show the result for $i=1$; the cases $i=2$ and $i=3$ are similar. Let $(\hat{x}_\alpha)_{\alpha\in A}$, $(\check{x}_\alpha)_{\alpha\in A}$, $(\hat{y}_\beta)_{\beta\in B}$ and $(\check{y}_\beta)_{\beta\in B}$ be nets in $P$ such that $\hat{x}_\alpha\uparrow x$, $\check{x}_\alpha\downarrow x$, $\hat{y}_\beta\uparrow y$, $\check{y}_\beta\downarrow y$, $\hat{x}_\alpha\leq x_\alpha\leq \check{x}_\alpha$ for every $\alpha\in A$, and $\hat{y}_\beta\leq y_\beta\leq \check{y}_\beta$ for every $\beta\in B$. We get immediately that $\hat{x}_\alpha\wedge\hat{y}_\beta\leq x_\alpha\wedge y_\beta\leq \check{x}_\alpha\wedge \check{y}_\beta$ for every $(\alpha,\beta)\in A\times B$ and that the net $\left(\check{x}_\alpha\wedge \check{y}_\beta\right)_{(\alpha,\beta)\in A\times B}$ satisfies $\check{x}_\alpha\wedge \check{y}_\beta\downarrow x\wedge y$. Furthermore, \eqref{equ:distr_law} with $M=\{\hat{x}_\alpha;\, \alpha\in A\}$ and $N=\{\hat{y}_\beta;\,\beta\in B\}$ implies $\hat{x}_\alpha\wedge \hat{y}_\beta\uparrow x\wedge y$. \end{proof} \begin{remark}\label{rem:subnet_o_i} Let $(x_\alpha)_{\alpha\in A}$ be a net in $P$ and let $(y_\beta)_{\beta\in B}$ be a subnet of $(x_\alpha)_{\alpha\in A}$. Let $x\in P$ and fix $i\in\{1,2,3\}$. If $x_\alpha \xrightarrow{o_i} x$, then $y_\beta\xrightarrow{o_i} x$. This will be useful in combination with the following statement. 
Let $Q$ be a partially ordered set. For a net $(x_\alpha)_{\alpha\in A}$ in $P$ and $(y_\alpha)_{\alpha\in A}$ in $Q$ and a map $f\colon P\times Q\to Q$ the net $(f(x_\alpha,y_\alpha))_{\alpha\in A}$ is a subnet of $(f(x_\alpha,y_\beta))_{(\alpha,\beta)\in A\times A}$. In particular, if $(x_\alpha)_{\alpha \in A}$ and $(y_\alpha )_{\alpha \in A}$ are nets in a distributive lattice $P$ with $x_\alpha \xrightarrow{o_i} x\in P$ and $y_\alpha \xrightarrow{o_i} y\in P$, then Proposition \ref{pro:inf_o_i} shows that the net $(x_\alpha \wedge y_\alpha)_{\alpha \in A}$ satisfies $x_\alpha \wedge y_\alpha \xrightarrow{o_i} x\wedge y$. This technique will also be applied to the addition of nets in partially ordered abelian groups and the multiplication of a scalar net and a net in a partially ordered vector space in the subsequent discussion. \end{remark} \section{Continuous maps on partially ordered sets} In this section, $P$ and $Q$ are partially ordered sets. For $o_1$-, $o_2$-, $o_3$- and $\tau_o$-convergence, we will introduce the corresponding concepts of continuity. It will be shown that for monotone maps these concepts are equivalent. \begin{definition} \label{def:ordercontinuity} A map $f\colon P\to Q$ is called \begin{itemize} \item[(i)] \emph{$o_i$-continuous in $x\in P$}, if for every net $(x_\alpha)_{\alpha\in A}$ with $x_\alpha\xrightarrow{o_i}x$ we have that $f(x_\alpha)\xrightarrow{o_i}f(x)$ (where $i\in \{1,2,3\}$). \item[(ii)] \emph{order continuous in $x\in P$}, if it is continuous in $x$ with respect to the order topologies $\tau_o(P)$ and $\tau_o(Q)$, respectively. \end{itemize} $f$ is called \emph{$o_i$-continuous} (\emph{order continuous}, respectively) if it is $o_i$-continuous (order continuous, respectively) in $x$ for every $x\in P$. \end{definition} \begin{theorem}\label{thm:ordercontinuous} Let $i\in \{1,2,3\}$. Every $o_i$-continuous map $f\colon P\to Q$ is order continuous. 
\end{theorem} \begin{proof} We show that for every order closed set $C\subseteq Q$ the preimage $[C]f$ is order closed in $P$. Indeed, let $C\subseteq Q$ be order closed. By Theorem \ref{thm:orderclosed} it suffices to show that for every net $(x_\alpha)_{\alpha\in A}$ in $[C]f$ with $x_\alpha\xrightarrow{o_i}x\in P$ we have that $x\in [C]f$. Since $f$ is $o_i$-continuous, we obtain $f(x_\alpha)\xrightarrow{o_i}f(x)$. Since $(f(x_\alpha))_{\alpha\in A}$ is a net in $C$ and $C$ is order closed, Theorem \ref{thm:orderclosed} implies that $f(x)\in C$, hence $x\in [C]f$. \end{proof} To show that all concepts introduced in Definition \ref{def:ordercontinuity} coincide for monotone maps, we need the following lemma. \begin{lemma}\label{lem:topologicalconv_inf} Let $(x_\alpha)_{\alpha\in A}$ be a net in $P$ with $x_\alpha\xrightarrow{\tau_o}x\in P$. \begin{itemize} \item[(i)] If $\inf\{x_\alpha;\alpha\in A\}$ exists, then $\inf\{x_\alpha;\alpha\in A\}\leq x$. \item[(ii)] If for every $\alpha\in A$ we have $x_\alpha\in P_{\geq x}$, then $\inf\{x_\alpha;\alpha\in A\}$ exists and satisfies $\inf\{x_\alpha;\alpha\in A\}=x$. \end{itemize} \end{lemma} \begin{proof} Note that for both statements it is sufficient to show that for every lower bound $p$ of $\{x_\alpha;\alpha\in A\}$ we have $p\leq x$. Let $p$ be a lower bound of $\{x_\alpha;\alpha\in A\}$, i.e.\ for every $\alpha\in A$ we have $x_\alpha\in P_{\geq p}$. Since $x_\alpha\xrightarrow{\tau_o}x$ and $P_{\geq p}$ is order closed by Corollary \ref{cor:upperboundset_orderclosed}, we conclude $x\in P_{\geq p}$, i.e.\ $p\leq x$. \end{proof} \begin{theorem} \label{thm:monotone_ordercont} Let $f\colon P\to Q$ be a monotone map and $i\in\{1,2,3\}$. Then the following statements are equivalent: \begin{itemize} \item[(i)] $f$ is $o_i$-continuous. \item[(ii)] $f$ is order continuous. 
\item[(iii)] For every net $(x_\alpha)_{\alpha\in A}$ in $P$ and $x\in P$ the following implications are valid: \begin{itemize} \item[(a)] If $x_\alpha\downarrow x$, then $\inf\{f(x_\alpha);\alpha\in A\}$ exists and satisfies $\inf\{f(x_\alpha);\alpha\in A\}=f(x)$. \item[(b)] If $x_\alpha\uparrow x$, then $\sup\{f(x_\alpha);\alpha\in A\}$ exists and satisfies $\sup\{f(x_\alpha);\alpha\in A\}=f(x)$. \end{itemize} \end{itemize} \end{theorem} \begin{proof} The implication (i)$\Rightarrow$(ii) is contained in Theorem \ref{thm:ordercontinuous}. We show (ii)$\Rightarrow$(iii). Let $(x_\alpha)_{\alpha\in A}$ be a net in $P$ such that $x_\alpha\downarrow x\in P$. Due to Remark \ref{rem:decreasingnet} and Proposition \ref{pro:basic_convergences} this implies $x_\alpha\xrightarrow{\tau_o} x$. Since $f$ is order continuous, we obtain $f(x_\alpha)\xrightarrow{\tau_o} f(x)$. Furthermore, the monotonicity of $f$ yields for every $\alpha\in A$ that $f(x_\alpha)\in Q_{\geq f(x)}$. Thus Lemma \ref{lem:topologicalconv_inf}(ii) implies that $\inf\{f(x_\alpha);\alpha\in A\}$ exists and satisfies $\inf\{f(x_\alpha);\alpha\in A\}= f(x)$. The second statement in (iii) is shown analogously. It remains to show (iii)$\Rightarrow$(i). We prove this implication for $i=3$; the argument for $i\in\{1,2\}$ is similar. Let $(x_\alpha)_{\alpha\in A}$ be a net such that $x_\alpha\xrightarrow{o_3}x\in P$, i.e.\ there are nets $(\hat{x}_\beta)_{\beta \in B}$ and $(\check{x}_\gamma)_{\gamma \in C}$ in $P$ and a map $\eta\colon B \times C \rightarrow A$ such that $\hat{x}_\beta \uparrow x$, $\check{x}_\gamma \downarrow x$ and $\hat{x}_\beta \leq x_\alpha \leq \check{x}_\gamma$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. The monotonicity of $f$ and condition (iii) imply that $f(\hat{x}_\beta) \uparrow f(x)$ and $f(\check{x}_\gamma) \downarrow f(x)$.
Furthermore, the monotonicity of $f$ yields $f(\hat{x}_\beta) \leq f(x_\alpha) \leq f(\check{x}_\gamma)$ for every $\beta\in B$, $\gamma \in C$ and $\alpha \in A_{\geq \eta(\beta,\gamma)}$. Thus $f(x_\alpha)\xrightarrow{o_3}f(x)$. \end{proof} Combining Theorem \ref{thm:monotone_ordercont} and Proposition \ref{pro:infimum}, we obtain the following statement. \begin{corollary} \label{coro:orderembeddingswithroderdenseimagesarecontinuous} Every order embedding $f\colon P\to Q$ for which $f[P]$ is order dense in $Q$ is order continuous (and, hence, $o_i$-continuous, where $i\in\{1,2,3\}$). \end{corollary} \begin{remark}\label{rem:RestrictionandExtensionproperty} Assume that $M\subseteq P$ is order dense in $P$. Then the embedding $f\colon M\to P$ is order continuous by Corollary \ref{coro:orderembeddingswithroderdenseimagesarecontinuous}, therefore the induced topology of $\tau_o(P)$ on $M$ satisfies \begin{equation} \label{equ:restrictionproperty} \{O\cap M; O\in\tau_o(P)\}\subseteq \tau_o(M). \end{equation} Thus for every order closed set $N\subseteq P$ we obtain that $N \cap M$ is order closed in $M$. By means of Theorem \ref{thm:orderclosed} this generalises \cite[Proposition 5.1(iii)]{IaB}. Example \ref{exa:extensionprop} below shows that the converse inclusion in \eqref{equ:restrictionproperty} is not valid, in general. \end{remark} The next statement follows from Proposition \ref{pro:o2_o3_convergenceDedekind_complete_lattice}. \begin{proposition} Let $f \colon P \to Q$ be a map. \begin{itemize} \item[(i)] If $P$ is a Dedekind complete lattice and $f$ is $o_2$-continuous, then $f$ is also $o_3$-continuous. \item[(ii)] If $Q$ is a Dedekind complete lattice and $f$ is $o_3$-continuous, then $f$ is also $o_2$-continuous. \end{itemize} \end{proposition} \begin{remark} (i) Note that by Remark \ref{rem:o_1_and_o_2} every $o_1$-continuous map is $o_2$-continuous.
The converse implication is not true, in general (see Example \ref{exa:ordercontnotob} below), but it is open whether it holds in partially ordered abelian groups. (ii) In \cite[Example 1.8]{Abra} it is shown that $o_3$-continuity of maps between vector lattices does not imply $o_2$-continuity, in general. In Corollary \ref{cor:orderboundedando2contiso3cont} below we present a setting where $o_2$-continuity implies $o_3$-continuity. It is an open question whether this implication is valid in more general situations. Moreover, it is not clear under which conditions the converse implications in Theorem \ref{thm:ordercontinuous} are true. (iii) In Theorem \ref{the:ogasawara_spaces_are_equal} we will present a situation where all concepts introduced in Definition \ref{def:ordercontinuity} coincide. \end{remark} In \cite[Proposition 1.5]{Abra} it is shown that the $o_3$-convergence in a vector lattice $X$ is equivalent to the $o_2$-convergence in the Dedekind completion $X^{\delta}$ of $X$. To show that a generalisation\footnote{To link our notions with the one in \cite{Abra}, use Proposition \ref{pro:char_o_i_poag} below.} to lattices holds, we need the following technical statement. \begin{lemma}\label{lem:index_net} Let $P$ be a lattice, $Q$ a partially ordered set and $f\colon P \rightarrow Q$ an order embedding such that $f[P]$ is order dense in $Q$. Let $(\check{y}_\alpha)_{\alpha\in A}$ be a net in $Q$ such that $\check{y}_\alpha \downarrow f(x)$ for some $x \in P$. If \begin{align*} B:=\{v \in P;\,\exists \alpha \in A\colon f(v) \geq \check{y}_\alpha\} \end{align*} is equipped with the reversed order of $P$, then $B$ is directed and $\inf B=x$. Thus $\check{x}_\beta:=\beta$ for all $\beta \in B$ defines a net in $P$ with $\check{x}_\beta\downarrow x$. \end{lemma} \begin{proof} For $v_1,v_2\in B$ there are $\alpha_1,\alpha_2\in A$ such that $f(v_1) \geq \check{y}_{\alpha_1}$ and $f(v_2)\geq \check{y}_{\alpha_2}$.
Since $A$ is directed there is $\alpha \in A$ with $\alpha \geq \alpha_1$ and $\alpha \geq \alpha_2$. Since $(\check{y}_\alpha)_{\alpha\in A}$ is decreasing, we get $f(v_1)\geq\check{y}_{\alpha}$ and $f(v_2)\geq \check{y}_{\alpha}$. By Proposition \ref{pro:infimum} we conclude $f(v_1 \wedge v_2)=f(v_1)\wedge f(v_2)\geq \check{y}_\alpha$. Thus $v_1\wedge v_2\in B$, and we have shown $B$ to be directed.\\ It remains to show that $\inf B=x$. For $v \in B$ we have $f(v)\geq \check{y}_\alpha \geq f(x)$ for some $\alpha \in A$. Since $f$ is order reflecting we know $x$ to be a lower bound of $B$. In order to show that $x$ is the greatest lower bound of $B$, let $z\in P$ be another lower bound. The monotonicity of $f$ implies that $f(z)$ is a lower bound of $f[B]$, and for $\alpha \in A$ we have \begin{align*} f[B]\supseteq f[\{v \in P;\, f(v)\geq \check{y}_\alpha\}]=\{y \in f[P];\, y \geq \check{y}_\alpha\}. \end{align*} Since $f[P]$ is order dense in $Q$ we conclude $f(z)\leq \inf\{y \in f[P];\, y \geq \check{y}_\alpha\}=\check{y}_\alpha$. Thus $\check{y}_\alpha \downarrow f(x)$ yields $f(z)\leq f(x)$. Since $f$ is order reflecting we conclude $z\leq x$. This proves $x$ to be the greatest lower bound of $B$. \end{proof} \begin{proposition} \label{pro:Dedekindcompletionando3o2} Let $Q$ be a partially ordered set and $f\colon P \to Q$ an order embedding such that $f[P]$ is order dense in $Q$. Let $(x_\alpha)_{\alpha \in A}$ be a net in $P$ and $x\in P$. \begin{itemize} \item[(i)] If $Q$ is a Dedekind complete lattice, then $x_\alpha \xrightarrow{o_3}x$ implies $f(x_\alpha) \xrightarrow{o_2} f(x)$. \item[(ii)] If $P$ is a lattice, then $f(x_\alpha) \xrightarrow{o_2} f(x)$ implies $x_\alpha \xrightarrow{o_3}x$. \end{itemize} \end{proposition} \begin{proof} To show (i), let $x_\alpha \xrightarrow{o_3} x$. Corollary \ref{coro:orderembeddingswithroderdenseimagesarecontinuous} implies $f(x_\alpha) \xrightarrow{o_3}f(x)$. Thus Proposition \ref{pro:o2_o3_convergenceDedekind_complete_lattice} yields $f(x_\alpha)\xrightarrow{o_2}f(x)$.
To prove (ii), let $f(x_\alpha) \xrightarrow{o_2} f(x)$. Hence there are nets $(\hat{y}_\alpha)_{\alpha \in A}$ and $(\check{y}_\alpha)_{\alpha \in A}$ with $\hat{y}_\alpha \uparrow f(x)$, $\check{y}_\alpha \downarrow f(x)$ and $\hat{y}_\alpha \leq f(x_\alpha)\leq \check{y}_\alpha$ for all $\alpha \in A$. Let $(\check{x}_\beta)_{\beta \in B}$ be defined as in Lemma \ref{lem:index_net} and note that $\check{x}_\beta \downarrow x$. By the definition of $B$, for $\beta \in B$ there is $\alpha_\beta \in A$ such that $f(x_\alpha) \leq \check{y}_\alpha \leq \check{y}_{\alpha_\beta}\leq f(\beta)=f(\check{x}_\beta)$ for all $\alpha \in A_{\geq \alpha_\beta}$. Since $f$ is order reflecting we obtain $x_\alpha \leq \check{x}_\beta$. An analogous construction shows the existence of a net $(\hat{x}_\gamma)_{\gamma \in C}$ with $\hat{x}_\gamma \uparrow x$ and such that for $\gamma \in C$ there exists $\alpha_\gamma\in A$ with $\hat{x}_\gamma \leq x_\alpha$ for all $\alpha \in A_{\geq \alpha_\gamma}$. For $(\beta,\gamma)\in B \times C$ let $\alpha_{(\beta,\gamma)} \in A$ be such that $\alpha_{(\beta,\gamma)}\geq \alpha_\beta$ and $\alpha_{(\beta,\gamma)}\geq \alpha_\gamma$. Thus $\eta\colon B \times C \rightarrow A$, $(\beta,\gamma)\mapsto\alpha_{(\beta,\gamma)}$ yields a map as in the definition of the $o_3$-convergence. \end{proof} Proposition \ref{pro:Dedekindcompletionando3o2} in combination with Remark \ref{rem:o1o2}(a) yields the following. \begin{corollary} \label{cor:Dedekindcompletionando3o1_orderbddnets} Let $Q$ be a partially ordered set that is directed upward and downward, and $f\colon P \to Q$ an order embedding such that $f[P]$ is order dense in $Q$. Let $(x_\alpha)_{\alpha \in A}$ be a net in $P$ such that $\{f(x_\alpha);\, \alpha \in A\}$ is bounded, and let $x\in P$. \begin{itemize} \item[(i)] If $Q$ is a Dedekind complete lattice, then $x_\alpha \xrightarrow{o_3}x$ implies $f(x_\alpha) \xrightarrow{o_1} f(x)$. 
\item[(ii)] If $P$ is a lattice, then $f(x_\alpha) \xrightarrow{o_1} f(x)$ implies $x_\alpha \xrightarrow{o_3}x$. \end{itemize} \end{corollary} \begin{remark} Note that the implications in Proposition \ref{pro:Dedekindcompletionando3o2}(ii) and in Corollary \ref{cor:Dedekindcompletionando3o1_orderbddnets}(ii) are not valid, in general. In Example \ref{exa:extensionprop} below a partially ordered vector space $P=X$ and a vector lattice $Q=Y$ are provided which lead to a counterexample, where $f\colon P\to Q$ is the inclusion map. \end{remark} One can characterise $o_3$-convergence in lattices by means of $o_3$-convergence in a cover. \begin{proposition} \label{pro:o3conv_inlatticeandDedeindcmpletion} Let $P$ be a lattice, let $Q$ be a partially ordered set and let $f\colon P \to Q$ be an order embedding such that $f[P]$ is order dense in $Q$. Let $(x_\alpha)_{\alpha \in A}$ be a net in $P$ and $x\in P$. Then $x_\alpha \xrightarrow{o_3}x$ if and only if $f(x_\alpha)\xrightarrow{o_3} f(x)$. \end{proposition} \begin{proof} If $x_\alpha \xrightarrow{o_3}x$, then $f(x_\alpha) \xrightarrow{o_3}f(x)$ in $f[P]$, hence also in $Q$. To show the converse implication, let $Q^\mu$ be the Dedekind-MacNeille completion\footnote{If $Q$ is a partially ordered set, then there is a complete lattice $Q^\mu$ and an order embedding $J\colon Q \to Q^\mu$ such that $J[Q]$ is order dense in $Q^\mu$. The set $Q^\mu$ is called \emph{Dedekind-MacNeille completion} of $Q$.} and $J\colon Q\to Q^\mu$ the canonical embedding. If $f(x_\alpha) \xrightarrow{o_3}f(x)$ in $Q$, then Proposition \ref{pro:Dedekindcompletionando3o2}(i) shows $J(f(x_\alpha))\xrightarrow{o_2} J(f(x))$. Since $J\circ f[P]$ is order dense in $J[Q]$ and $J[Q]$ is order dense in $Q^\mu$, by Proposition \ref{pro:orderdensitytransitive} we conclude $J\circ f[P]$ to be order dense in $Q^\mu$. Note furthermore that $J\circ f\colon P \to Q^\mu$ is an order embedding. 
Hence Proposition \ref{pro:Dedekindcompletionando3o2}(ii) shows $x_\alpha \xrightarrow{o_3}x$. \end{proof} \begin{remark} In \cite[Example 1.4]{Abra} an example of a vector lattice $X$ and a net $(x_\alpha)_{\alpha \in A}$ with $\{x_\alpha;\, \alpha \in A\}$ bounded is given that $o_3$-convergences, but does not $o_2$-converge. Hence by Proposition \ref{pro:basic_convergences} the net $(x_\alpha)_{\alpha \in A}$ does not $o_1$-converge. Since $(x_\alpha)_{\alpha \in A}$ is $o_3$-convergent in $X$ and $\{x_\alpha;\, \alpha \in A\}$ is bounded, Corollary \ref{cor:Dedekindcompletionando3o1_orderbddnets} implies $(x_\alpha)_{\alpha \in A}$ to be $o_1$-convergent in $X^\delta$, and hence $o_2$-convergent in $X^\delta$. Thus an analogue of Proposition \ref{pro:o3conv_inlatticeandDedeindcmpletion} for $o_1$-convergence and $o_2$-convergence is not valid. In Proposition \ref{pro:o3conv_inlatticeandDedeindcmpletion} the statement is not valid for arbitrary partially ordered sets $P$. Indeed, in Example \ref{exa:extensionprop} below we will present a partially ordered vector space $P=X$, a vector lattice $Q=Y$, and a net $(x_\alpha)_{\alpha \in A}$ in $P$ such that for the canonical embedding $f\colon P \to Q$ we have that $f(x_\alpha) \xrightarrow{o_3}f(x)$, but $(x_\alpha)_{\alpha \in A}$ does not $o_3$-converge. \end{remark} Next we discuss the link between $o_1$-continuity and order boundedness. The proof of the subsequent proposition is adopted from \cite[Proposition 149]{Mali2017}. \begin{proposition}\label{pro:o1ob} Every $o_1$-continuous map $f\colon P\to Q$ is order bounded. \end{proposition} \begin{proof} Let $A:=[v,w]$ be an order interval in $P$ and consider the net $(x_{\alpha})_{\alpha\in A}$ with $x_\alpha:=\alpha$. Note that $x_\alpha\uparrow w$, therefore $x_\alpha\xrightarrow{o_1}w$. 
Thus $f(x_\alpha)\xrightarrow{o_1}f(w)$, hence there are nets $(\hat{y}_\alpha)_{\alpha\in A}$ and $(\check{y}_\alpha)_{\alpha\in A}$ such that $\hat{y}_\alpha\uparrow f(w)$, $\check{y}_\alpha\downarrow f(w)$ and $\hat{y}_\alpha\leq f(x_\alpha)\leq \check{y}_\alpha$ for every $\alpha\in A$. Consequently $f\left[[v,w]\right]\subseteq [\hat{y}_v, \check{y}_v]$. \end{proof} The subsequent simple example shows that $o_2$-, $o_3$-, and order continuity do not imply order boundedness, in general. \begin{example} \label{exa:ordercontnotob} Consider the partially ordered set $P:=\mathbb{R}\setminus\{0\}$ with the standard order and the map $f\colon P\to P$, $x\mapsto \frac{1}{x^2}$. Clearly, $f$ is not order bounded and, hence, not $o_1$-continuous due to Proposition \ref{pro:o1ob}. Since $f$ is continuous with respect to the standard topology of $P$, Example \ref{exa:opensubsetsofR} yields that $f$ is $o_2$-continuous, $o_3$-continuous and order continuous. \end{example} \section{Order convergence and order topology in partially ordered abelian groups} Let $G$ be a partially ordered abelian group. In this section, we characterise net catching sets as well as the three concepts of order convergence in partially ordered abelian groups. \begin{proposition}\label{pro:char_netcatchingsetImhoff} Let $U\subseteq G$ and $x\in U$. \begin{itemize} \item[(i)] $U$ is a net catching set for $0$ if and only if for every net $(x_\alpha)_{\alpha\in A}$ in $G$ with $x_\alpha\downarrow 0$ there is $\alpha\in A$ such that $[-x_\alpha,x_\alpha]\subseteq U$. \item[(ii)] $U$ is a net catching set for $x$ if and only if $U-x$ is a net catching set for $0$. \end{itemize} \end{proposition} \begin{proof} (i) Let $U$ be a net catching set for $0$. If $(x_\alpha)_{\alpha\in A}$ is a net in $G$ with $x_\alpha\downarrow 0$, then $-x_\alpha\uparrow 0$, hence there is $\alpha\in A$ such that $[-x_\alpha,x_\alpha]\subseteq U$. For the converse implication, we have to show that $U$ is a net catching set for $0$.
Let $(\hat{x}_\alpha)_{\alpha\in A}$ and $(\check{x}_\alpha)_{\alpha\in A}$ be nets in $G$ with $\hat{x}_\alpha \uparrow 0$ and $\check{x}_\alpha\downarrow 0$. Thus $(\check{x}_\alpha-\hat{x}_\alpha)\downarrow 0$. By the assumption there is $\alpha \in A$ such that $[\hat{x}_\alpha,\check{x}_\alpha]\subseteq[-(\check{x}_\alpha-\hat{x}_\alpha), \check{x}_\alpha-\hat{x}_\alpha]\subseteq U$. The result in (ii) follows from the fact that $x_\alpha\downarrow x$ if and only if $x_\alpha-x\downarrow 0$ (and the similar statement for increasing nets). \end{proof} \begin{remark} In the case of partially ordered vector spaces, the concept of O-neighbourhood is introduced in \cite[Definition 3.3]{Imh}. Proposition \ref{pro:char_netcatchingsetImhoff} shows that O-neighbourhoods are exactly the net catching sets. \end{remark} \begin{remark}\label{rem:G+-G+orderopen} \begin{itemize} \item[(a)] The set $G_+$ is order closed, due to Corollary \ref{cor:upperboundset_orderclosed}. \item[(b)] The set $G_+-G_+$ is order closed. Indeed, by Theorem \ref{thm:orderclosed} it is sufficient to show that $G_+-G_+$ is closed under $o_1$-convergence. Let $(x_\alpha)_{\alpha \in A}$ be a net in $G_+-G_+$ such that $x_\alpha \xrightarrow{o_1}x\in G$. Then there are nets $(\hat{x}_\alpha)_{\alpha\in A}$ and $(\check{x}_\alpha)_{\alpha \in A}$ such that $\hat{x}_\alpha \uparrow x$, $\check{x}_\alpha \downarrow x$ and $\hat{x}_\alpha\leq x_\alpha \leq \check{x}_\alpha$ for every $\alpha \in A$. Thus for every $\alpha \in A$ we obtain $x\in G_+ + \hat{x}_\alpha \subseteq G_+ +(x_\alpha-G_+) \subseteq G_+ + ((G_+-G_+) -G_+)=G_+-G_+$. \item[(c)] The set $G_+-G_+$ is order open. Indeed, by Proposition \ref{pro:char_netcatchingsetImhoff}(ii) it is sufficient to show that $G_+-G_+$ is a net catching set for $0$. If $(x_\alpha)_{\alpha\in A}$ is a net in $G$ with $x_\alpha\downarrow 0$, then for every $\alpha \in A$ we have $[-x_\alpha,x_\alpha]\subseteq x_\alpha-G_+ \subseteq G_+-G_+$.
\end{itemize} \end{remark} Note that for nets $(x_\alpha)_{\alpha\in A}$ and $(y_\beta)_{\beta \in B}$ in $G$ with $x_\alpha\downarrow x\in G$ and $y_\beta\downarrow y\in G$ the net $(x_\alpha+y_\beta)_{(\alpha,\beta)\in A\times B}$ satisfies $x_\alpha+y_\beta\downarrow x+y$, where $A\times B$ is ordered component-wise. This yields the following statement. \begin{proposition} \label{pro:plus_o_i} Let $G$ be a partially ordered abelian group and let $(x_\alpha)_{\alpha\in A}$ and $(y_\beta)_{\beta\in B}$ be nets in $G$. Let $A\times B$ be ordered component-wise and let $i\in\{1,2,3\}$. If $x_\alpha\xrightarrow{o_i} x\in G$ and $y_\beta\xrightarrow{o_i} y\in G$, then the net $(x_\alpha+ y_\beta)_{(\alpha,\beta)\in A\times B}$ satisfies $x_\alpha+ y_\beta\xrightarrow{o_i} x+ y$. \end{proposition} \begin{remark}\label{rem:t_o_not_linear} Due to Remark \ref{rem:Floyd_sigma_comp} the order topology is $\operatorname{T}_1 $ and $\sigma$-compatible, hence the assumptions in \cite[Theorem 3]{Floyd1955} are satisfied. Since the map $G \to G\colon g\mapsto -g$ is order continuous for every partially ordered abelian group $G$, by \cite[Corollary]{Floyd1955} there is a Dedekind complete vector lattice $X$ endowed with the order topology with the property that the addition $X\times X\to X$, $(x,y)\mapsto x+y$ is not continuous, where $X\times X$ is equipped with the product topology. As the order bound topology introduced in \cite[p.\ 20]{Namioka57} is always a linear topology, this shows that $\tau_o$ does not coincide with the order bound topology of $X$. \end{remark} The order convergences in vector lattices investigated in \cite{Abra} are special cases of the $o_i$-convergences, as the next proposition shows. \begin{proposition} \label{pro:char_o_i_poag} Let $(x_\alpha)_{\alpha \in A}$ be a net in $G$. 
Then \begin{itemize} \item[(i)] $x_\alpha \xrightarrow{o_1} 0$ if and only if there is a net $(\check{x}_\alpha)_{\alpha \in A}$ in $G$ such that $\check{x}_\alpha \downarrow 0$ and $\pm x_\alpha \leq \check{x}_\alpha$ for every $\alpha \in A$, \item[(ii)] $x_\alpha \xrightarrow{o_2} 0$ if and only if there is a net $(\check{x}_\alpha)_{\alpha \in A}$ in $G$ and $\alpha_0 \in A$ such that $\check{x}_\alpha \downarrow 0$ and $\pm x_\alpha \leq \check{x}_\alpha$ for every $\alpha \in A_{\geq \alpha_0}$, \item[(iii)] $x_\alpha \xrightarrow{o_3} 0$ if and only if there is a net $(\check{x}_\beta)_{\beta \in B}$ and a map $\eta\colon B \rightarrow A$ such that $\check{x}_\beta \downarrow 0$ and $\pm x_\alpha \leq \check{x}_\beta$ for every $\beta\in B$ and $\alpha \in A_{\geq \eta(\beta)}$, \item[(iv)] for every $i\in \{1,2,3\}$ and $x\in G$ we have that $x_\alpha\xrightarrow{o_i}x$ if and only if $x_\alpha-x\xrightarrow{o_i}0$. \end{itemize} \end{proposition} \begin{proof} We show (iii); the proofs of (i) and (ii) are similar. Let $x_\alpha \xrightarrow{o_3} 0$. Then Proposition \ref{pro:char_o3conv} yields the existence of nets $(\hat{y}_\beta)_{\beta\in B}$ and $(\check{y}_\beta)_{\beta\in B}$ and a map $\eta\colon B\to A$ such that $\hat{y}_\beta\uparrow 0$, $\check{y}_\beta\downarrow 0$ and $\hat{y}_\beta\leq x_\alpha\leq \check{y}_\beta$ for every $\beta\in B$ and $\alpha\in A_{\geq\eta(\beta)}$. For $\beta\in B$ define $\check{x}_\beta:=\check{y}_\beta-\hat{y}_\beta$. Observe that $\check{y}_\beta-\hat{y}_\beta\downarrow 0$. Furthermore, $-\check{x}_\beta\leq \hat{y}_\beta \leq x_\alpha\leq \check{y}_\beta\leq \check{x}_\beta$ holds for all $\beta \in B$ and $\alpha \in A_{\geq \eta(\beta)}$. The converse implication in (iii) is straightforward. The statement in (iv) is a direct consequence of Proposition \ref{pro:plus_o_i}. \end{proof} Order closed subgroups of lattice-ordered abelian groups are characterised as follows.
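Before turning to order closed subgroups, the characterisation in Proposition \ref{pro:char_o_i_poag}(i) can be illustrated by an elementary example; the sequence in $G=\mathbb{R}$ below is chosen purely for illustration and is not needed in the subsequent development.

```latex
\begin{example}
Let $G=\mathbb{R}$ with the standard order and let
$x_n:=\frac{(-1)^n}{n}$ for $n\in\mathbb{N}$. The sequence
$\check{x}_n:=\frac{1}{n}$ satisfies $\check{x}_n\downarrow 0$ and
$\pm x_n\leq \check{x}_n$ for every $n\in\mathbb{N}$, hence
Proposition \ref{pro:char_o_i_poag}(i) yields $x_n\xrightarrow{o_1}0$.
Note that $(x_n)_{n\in\mathbb{N}}$ is neither increasing nor
decreasing, so the convergence is witnessed by the dominating
sequence $(\check{x}_n)_{n\in\mathbb{N}}$ rather than by monotone
behaviour of the sequence itself.
\end{example}
```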
\begin{proposition}\label{pro:McapG+oclosed} Let $M$ be a subgroup of a lattice-ordered abelian group $G$ such that $M$ is closed under the lattice operations of $G$ (i.e.\ for every $x,y\in M$ the element $x\vee y\in G$ belongs to $M$). Then $M$ is order closed if and only if $M\cap G_+$ is order closed. \end{proposition} \begin{proof} Let $M$ be order closed. Since $G_+$ is order closed, we obtain that $M\cap G_+$ is order closed. For the converse implication, we use Theorem \ref{thm:orderclosed}. Let $(x_\alpha)_{\alpha\in A}$ be a net in $M$ with $x_\alpha\xrightarrow{o_1}x\in G$. By Proposition \ref{pro:inf_o_i} we obtain $x_\alpha^+\xrightarrow{o_1}x^+$ and $x_\alpha^-\xrightarrow{o_1}x^-$. Since $x_\alpha^+, x_\alpha^-\in M\cap G_+$, we conclude $x=x^+-x^-\in M$. \end{proof} \section{The Riesz-Kantorovich formulas for group homomorphisms} In this section we study conditions on partially ordered abelian groups $G$ and $H$ under which the set $\operatorname{A}_{\operatorname{b}}(G,H)$ of all order bounded additive maps turns out to be a lattice-ordered abelian group. The arguments are straightforward adaptations of the classical Riesz-Kantorovich theorem, see \cite{Riesz30} and \cite{Kan1940}. We include the proofs here for the sake of completeness. \begin{proposition}\label{pro:Kantorovich} Let $G$ and $H$ be partially ordered abelian groups such that $G$ is directed. Let $f\colon G_+\to H$ be a semigroup homomorphism. Then there exists a unique additive map $g\colon G\to H$ such that $f=g$ on $G_+$. Moreover, if $f[G_+]\subseteq H_+$, then $g$ is monotone. \end{proposition} \begin{proof} First observe that for $u,v,x,y\in G_+$ with $v-u=y-x$ we have that $f(v)-f(u)=f(y)-f(x)$. Indeed, from $v+x=u+y$ it follows that $f(v)+f(x)=f(v+x)=f(u+y)=f(u)+f(y)$. For $x\in G$ there are $u,v\in G_+$ such that $x=u-v$. Define $g(x):=f(u)-f(v)$ and note that the definition is independent of the choice of $u$ and $v$. The map $g$ is additive.
Indeed, let $x,y\in G$ be such that $x=v-u$ and $y=z-w$ with $u,v,w,z\in G_+$. Since $f(v)+f(z)+f(u+w)=f(v+z)+f(u)+f(w)$, we have \begin{align*}g(x+y)&=g(v-u+z-w)=f(v+z)-f(u+w)\\&=f(v)-f(u)+f(z)-f(w)=g(v-u)+g(z-w)\\&=g(x)+g(y).\end{align*} Moreover, $g$ is unique, since any two additive maps that coincide with $f$ on $G_+$ agree on $G=G_+-G_+$. Finally, if $f[G_+]\subseteq H_+$ and $x,y\in G$ satisfy $x\leq y$, then $g(y)-g(x)=g(y-x)=f(y-x)\geq 0$, hence $g$ is monotone. \end{proof} The next proposition contains the crucial conditions under which the partially ordered abelian group $\operatorname{A}_{\operatorname{b}}(G,H)$ is a lattice. \begin{proposition}\label{pro:RK} Let $G$ be a directed partially ordered abelian group with the Riesz decomposition property and let $H$ be a Dedekind complete lattice-ordered abelian group. For $f\in \operatorname{A}_{\operatorname{b}}(G,H)$ and $x\in G_+$ define \[g(x):=\sup\{f(u); \, u\in [0,x]\}.\] Then there exists a unique additive map $h\in \operatorname{A}_{+}(G,H)$ such that $h=g$ on $G_+$. Moreover, the supremum of $f$ and $0$ exists in $\operatorname{A}_{\operatorname{b}}(G,H)$ and equals $h$. \end{proposition} \begin{proof} As $f$ is order bounded and $H$ is Dedekind complete, $g\colon G_+\to H_+$ is well-defined. To show that $g$ is a semigroup homomorphism, let $x,y\in G_+$. For $u\in [0,x]$ and $v\in[0,y]$ we have $u+v\in [0,x+y]$ and $f(u+v)=f(u)+f(v)$, hence $g(x+y)\geq f(u)+f(v)$. Then, by taking the supremum over all $u$, we have $g(x+y)\geq g(x) +f(v)$. Similarly, the supremum over $v$ yields $g(x+y)\geq g(x) +g(y)$. Next, for $w\in[0,x+y]$ the Riesz decomposition property of $G$ provides us with $u\in[0,x]$ and $v\in[0,y]$ such that $w=u+v$. Then $f(w)=f(u)+f(v)\leq g(x)+g(y)$. The supremum over $w$ results in $g(x+y)\leq g(x)+g(y)$. According to Proposition \ref{pro:Kantorovich}, there exists $h\in \operatorname{A}_{+}(G,H)$ such that $h=g$ on $G_+$. Now we show that $h$ is the supremum of $f$ and $0$. Indeed, for $x\in G_+$ we have $h(x)=g(x)\ge f(x)$, hence $h$ is an upper bound of $f$ and $0$. Let $q\in\operatorname{A}_{+}(G,H)$ be an upper bound of $f$.
Then for $x\in G_+$ and $u\in[0,x]$ we have $q(x)\ge q(u)\ge f(u)$, so that $q(x)\ge g(x)=h(x)$, thus $q\ge h$. Hence $h=f\vee 0$. \end{proof} In fact, Proposition \ref{pro:RK} yields the positive part $f^+:=h$ of $f$, hence $\operatorname{A}_{\operatorname{b}}(G,H)$ is a lattice. \begin{theorem}\label{the:RK_final} Let $G$ be a directed partially ordered abelian group with the Riesz decomposition property and let $H$ be a Dedekind complete lattice-ordered abelian group. Then $\operatorname{A}_{\operatorname{b}}(G,H)$ is a Dedekind complete lattice-ordered abelian group. \end{theorem} \begin{proof} It remains to show that $\operatorname{A}_{\operatorname{b}}(G,H)$ is Dedekind complete. Let $A$ be a non-empty subset of $\operatorname{A}_{\operatorname{b}}(G,H)$ that is bounded from above. Let $q$ be an upper bound of $A$. Denote by $B$ the set of all suprema of finite non-empty subsets of $A$. Note that $q$ is also an upper bound of $B$. For $x\in G_+$ define \begin{equation} \label{equ:InfRieszKantorovichB} g(x):=\sup\{f(x);\, f\in B\}. \end{equation} To show that $g$ is a semigroup homomorphism, let $x,y\in G_+$. For every $f\in B$ we have $f(x+y)=f(x)+f(y)\le g(x)+g(y)$, hence $g(x+y)\le g(x)+g(y)$. Conversely, for every $f,h\in B$ we have $f\vee h\in B$, hence $g(x+y)\ge (f\vee h)(x+y)=(f\vee h)(x)+(f\vee h)(y)\ge f(x)+h(y)$. By taking supremum first over $f$ and then over $h$ we obtain $g(x+y)\ge g(x)+g(y)$. We conclude that $g$ is a semigroup homomorphism. According to Proposition \ref{pro:Kantorovich} there exists a unique map $h\in \operatorname{A}(G,H)$ with $h=g$ on $G_+$. From the definition of $g$ it is clear that $h$ is an upper bound of $B$, and hence of $A$. As $A$ is non-empty, there is $f\in A$ such that $f\leq h$. Moreover, $h\leq q$, hence $h\in\operatorname{A}_{\operatorname{b}}(G,H)$. As $q$ is an arbitrary upper bound of $A$, it follows that $h$ is the supremum of $A$. 
\end{proof} \begin{remark}\label{rem:RK} Under the conditions of Theorem \ref{the:RK_final}, the lattice operations in $\operatorname{A}_{\operatorname{b}}(G,H)$ are given by the following formulas. For every $x\in G_+$ and $f,g\in\operatorname{A}_{\operatorname{b}}(G,H)$ we have \begin{eqnarray*} f^+(x)&=&\sup\{f(u); \, u\in [0,x]\},\\ f^-(x)&=&\sup\{-f(u); \, u\in [0,x]\},\\ |f|(x)&=&\sup\{|f(u)|; \, u\in [-x,x]\},\\ (f\vee g)(x)&=&\sup\{f(x-u)+g(u); \, u\in [0,x]\},\\ (f\wedge g)(x)&=&\inf\{f(x-u)+g(u); \, u\in [0,x]\}. \end{eqnarray*} These formulas are called the \emph{Riesz-Kantorovich formulas}. \end{remark} \begin{corollary}\label{cor:RK_pointwise_convergence} Under the conditions of Theorem \ref{the:RK_final} the following statements are valid. \begin{itemize} \item[(i)] If $A\subseteq \operatorname{A}_{\operatorname{b}}(G,H)$ is upward directed and bounded from above, then for every $x \in G_+$ we have \[(\sup A)(x)=\sup\{f(x); f \in A\}.\] A similar statement is valid for the infimum of a downward directed set that is bounded from below. \item[(ii)] For a net $(f_\alpha)_{\alpha\in A}$ in $\operatorname{A}_{\operatorname{b}}(G,H)$ we have $f_\alpha\downarrow 0$ if and only if for every $x\in G_+$ it holds $f_\alpha(x)\downarrow 0$. \item[(iii)] Let $i\in\{1,2,3\}$, $(f_\alpha)_{\alpha\in A}$ be a net in $\operatorname{A}_{\operatorname{b}}(G,H)$ and $f\in \operatorname{A}_{\operatorname{b}}(G,H)$ with $f_\alpha \xrightarrow{o_i} f$. Then for every $x\in G$ one has $f_\alpha(x)\xrightarrow{o_i} f(x)$. \end{itemize} \end{corollary} \begin{proof} To prove the statement in (i), let $B$ be as in the proof of Theorem \ref{the:RK_final}. Equation \eqref{equ:InfRieszKantorovichB} shows that for every $x \in G_+$ we have $(\sup A)(x)=\sup\{f(x); f \in B\}$. Since $A$ is a majorising subset of $B$, we obtain that $\{f(x); f \in A\}$ is a majorising subset of $\{f(x); f \in B\}$. 
Thus we conclude $(\sup A)(x)=\sup\{f(x); f \in A\}$ by Lemma \ref{lem:majosetsandsuppe}. \\ The statement (ii) follows from (i). To show (iii), let the net $(\check{f}_\alpha)_{\alpha\in A}$ in $\operatorname{A}_{\operatorname{b}}(G,H)$ be such that $\pm(f_\alpha-f)\le \check{f}_\alpha\downarrow 0$. By (ii), for $x\in G_+$ we get $\pm(f_\alpha(x)-f(x))\le \check{f}_\alpha(x)\downarrow 0$. As $G$ is directed, Proposition \ref{pro:plus_o_i} yields the statement for $x\in G$. \end{proof} \section{Properties of the set of order continuous homomorphisms of partially ordered abelian groups} In this section let $G$, $H$ be partially ordered abelian groups. We show that under the conditions of the Riesz-Kantorovich Theorem \ref{the:RK_final} for an order bounded additive map $f\colon G \to H$ the four concepts of continuity from Section 5 coincide. We furthermore show that under the same conditions the set of order continuous maps is an order closed ideal in the lattice-ordered abelian group $\operatorname{A}_{\operatorname{b}}(G,H)$ of all order bounded additive maps. For $i\in\{1,2,3\}$ we denote the set of all $o_i$-continuous maps in $\operatorname{A}_{\operatorname{b}}(G,H)$ by $\operatorname{A}^{o_i}_{\operatorname{b}}(G,H)$. The set of all order continuous maps in $\operatorname{A}_{\operatorname{b}}(G,H)$ is denoted by $\operatorname{A}^{\tau_o}_{\operatorname{b}}(G,H)$. Theorem \ref{thm:monotone_ordercont} then reads as \begin{equation} \label{equ:def_cone_ocont_maps} \operatorname{A}^{o_i}_{\operatorname{b}}(G,H)\cap \operatorname{A}_+(G,H)=\operatorname{A}^{\tau_o}_{\operatorname{b}}(G,H)\cap \operatorname{A}_+(G,H)=:\operatorname{A}_+^{\operatorname{oc}}(G,H).\end{equation} The set $\operatorname{A}_+^{\operatorname{oc}}(G,H)$ of positive order continuous additive maps is characterised as follows.
\begin{proposition}\label{pro:charAocplus} For every $f\in \operatorname{A}(G,H)$ we have $f\in \operatorname{A}_+^{\operatorname{oc}}(G,H)$ if and only if for every net $(x_\alpha)_{\alpha\in A}$ with $x_\alpha\downarrow 0$ it holds $f(x_\alpha)\downarrow 0$. \end{proposition} \begin{proof} Let $f\in \operatorname{A}(G,H)$ be such that for every net $(x_\alpha)_{\alpha\in A}$ with $x_\alpha\downarrow 0$ it holds $f(x_\alpha)\downarrow 0$. First we show that $f$ is monotone. Indeed, let $x\in G_+$, then for the net $(x_{\alpha})_{\alpha\in\{-x,0\}}$ with $x_{\alpha}=-\alpha$ we have $x_\alpha\downarrow 0$ and hence $f(x_\alpha)\downarrow 0$, which implies $f(x)=f(x_{-x})\ge 0$. To show that $f$ is order continuous, note that the assumption implies that for every net $(x_\alpha)_{\alpha\in A}$ with $x_\alpha\uparrow 0$ we have $f(x_\alpha)\uparrow 0$. Then Theorem \ref{thm:monotone_ordercont} yields the order continuity of $f$, due to the translation invariance of infimum and supremum. The converse implication follows directly from Theorem \ref{thm:monotone_ordercont}. \end{proof} As a consequence of Proposition \ref{pro:charAocplus} we obtain the following statement. \begin{proposition} \label{pro:Aoc+oclosed} Under the conditions of Theorem \ref{the:RK_final}, the set $\operatorname{A}_+^{\operatorname{oc}}(G,H)$ is order closed in $\operatorname{A}_{\operatorname{b}}(G,H)$. \end{proposition} \begin{proof} We use Theorem \ref{thm:orderclosed}. Let $(f_\alpha)_{\alpha\in A}$ be a net in $\operatorname{A}_+^{\operatorname{oc}}(G,H)$ such that $f_\alpha\xrightarrow{o_1}f\in \operatorname{A}_{\operatorname{b}}(G,H)$. By Remark \ref{rem:G+-G+orderopen} (a), the set $\operatorname{A}_{+}(G,H)$ is order closed, hence $f$ is monotone. By Proposition \ref{pro:char_o_i_poag} there is a net $(\check{f}_\alpha)_{\alpha\in A}$ such that $\check{f}_\alpha\downarrow 0$ and $\pm (f_\alpha-f)\le \check{f}_\alpha$ for every $\alpha\in A$. 
In order to apply Proposition \ref{pro:charAocplus}, let $(x_\beta)_{\beta\in B}$ be a net in $G$ such that $x_\beta\downarrow 0$. Since $f$ is monotone, $f(x_\beta)\downarrow$ and $0$ is a lower bound of $\{f(x_\beta);\, \beta\in B\}$. Let $z$ be a lower bound of $\{f(x_\beta);\, \beta\in B\}$. Let $\beta\in B$. We will show that for every $\alpha\in A$ we have that $z\leq \check{f}_\alpha(x_\beta)$. Indeed, for $\gamma\in B_{\geq \beta}$ we calculate \[z\leq f(x_\gamma)\leq (f_\alpha+\check{f}_\alpha)(x_\gamma)\leq f_\alpha(x_\gamma)+\check{f}_\alpha(x_\beta),\] and from $f_\alpha \in\operatorname{A}_+^{\operatorname{oc}}(G,H)$ we conclude $\inf\{f_\alpha(x_\gamma);\, \gamma\in B_{\geq \beta} \}=0$. Hence $z\le \check{f}_\alpha(x_\beta)$. Thus, Corollary \ref{cor:RK_pointwise_convergence} establishes $z\leq\inf\{\check{f}_\alpha(x_\beta);\, \alpha\in A\}=0$. \end{proof} In order to establish $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ as an ideal in $\operatorname{A}_{\operatorname{b}}(G,H)$, we first show the following. \begin{proposition}\label{pro:full_subgroup} The set $\operatorname{A}^{o_i}_{\operatorname{b}}(G,H)$ is a full subgroup of $\operatorname{A}(G,H)$. \end{proposition} \begin{proof} Due to Proposition \ref{pro:plus_o_i} the set $\operatorname{A}^{o_i}_{\operatorname{b}}(G,H)$ is a subgroup of $\operatorname{A}(G,H)$. To prove that $\operatorname{A}^{o_i}_{\operatorname{b}}(G,H)$ is full, it suffices to show that $\operatorname{A}^{\operatorname{oc}}_+(G,H)$ is full. Let $f,h\in \operatorname{A}^{\operatorname{oc}}_+(G,H)$ and let $g\in \operatorname{A}(G,H)$ be such that $f\leq g\leq h$. For a net $(x_\alpha)_{\alpha\in A}$ with $x_\alpha\downarrow 0$ we have $h(x_\alpha)\downarrow 0$. Since $0\leq f(x_\alpha)\leq g(x_\alpha)\le h(x_\alpha)$ (for every $\alpha\in A$) we conclude $g(x_\alpha)\downarrow 0$.
\end{proof} To show that under the conditions of the Riesz-Kantorovich Theorem \ref{the:RK_final} the sets $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ and $\operatorname{A}_{\operatorname{b}}^{o_i}(G,H)$ coincide for $i\in \{1,2,3\}$, we need three technical statements. \begin{lemma} \label{lem:Ogasawara_RDP} Let $G$ be a partially ordered abelian group that satisfies the Riesz decomposition property. Let $x,y,z \in G$ be such that $\{x,y\} \subseteq [0,z]$. Then there is $w \in G$ with \begin{itemize} \item[(i)] $\pm w \leq x$, \item[(ii)] $\pm w \leq y$ and \item[(iii)] $y-w \leq z - x$. \end{itemize} \end{lemma} \begin{proof} Let $A:=\{-x,-y,x+y-z\}$ and $B:=\{x,y\}$. Since $A\leq B$, the Riesz decomposition property implies the existence of $w \in G$ with $A \leq w \leq B$. It is straightforward that $w$ satisfies (i), (ii) and (iii). \end{proof} \begin{lemma} \label{lem:Ogasawara_existencefancynet} Let $G$ be a partially ordered abelian group with the Riesz decomposition property and let $H$ be a Dedekind complete lattice-ordered abelian group. Let $f \in \operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ and $(y_\alpha)_{\alpha \in A}$ be a net in $G$ such that $y_\alpha \downarrow 0$. For $\beta \in A$ and $y \in [0,y_\beta]$ there is a net $(w_\alpha)_{\alpha \in A_{\geq \beta}}$ in $G$ such that \begin{itemize} \item[(i)] $0 \leq y-w_\alpha \leq y_\beta -y_\alpha$ for every $\alpha \in A_{\geq \beta}$, \item[(ii)] $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}$ exists and satisfies $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}\leq 0$. \end{itemize} \end{lemma} \begin{proof} Let $\beta \in A$ and let $y \in [0,y_\beta]$. For $\alpha \in A_{\geq \beta}$ we have $0 \leq y_\alpha \leq y_\beta$. So $\{y_\alpha,y\} \subseteq [0,y_\beta]$. By Lemma \ref{lem:Ogasawara_RDP} there is $w_\alpha \in G$ such that $\pm w_\alpha \leq y_\alpha$, $\pm w_\alpha \leq y$ and $y-w_\alpha \leq y_\beta - y_\alpha$ for every $\alpha \in A_{\geq \beta}$. 
Thus the net $(w_\alpha)_{\alpha \in A_{\geq \beta}}$ satisfies (i).\\ Next we will show that $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}$ exists. Note that $\{w_\alpha; \alpha\in A_{\geq \beta}\} \subseteq [-y,y]$. Since $f$ is order bounded, we know that $\{f(w_\alpha); \alpha\in A_{\geq \beta}\}$ is order bounded in $H$. Thus the Dedekind completeness of $H$ implies the existence of $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}$. \\ It is left to prove that $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}\leq 0$. Note that the net $(y_\alpha)_{\alpha \in A}$ satisfies $y_\alpha \downarrow 0$ and that $\pm w_\alpha \leq y_\alpha$ for every $\alpha \in A_{\geq \beta}$. Thus for the net $(w_\alpha)_{\alpha \in A_{\geq \beta}}$ we have $ w_\alpha\xrightarrow{o_1} 0$, and Proposition \ref{pro:basic_convergences} implies $w_\alpha \xrightarrow{\tau_o} 0$. Since $f$ is order continuous, it follows that $f(w_\alpha)\xrightarrow{\tau_o}0$. Hence Lemma \ref{lem:topologicalconv_inf} implies $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}\leq 0$. \end{proof} Due to Theorem \ref{the:RK_final}, the conditions in the subsequent Proposition \ref{pro:Ogasawara_f+ordercontinuous} and Theorem \ref{the:ogasawara_spaces_are_equal} yield \[\operatorname{A}_{\operatorname{b}}(G,H)=\operatorname{A}_{\operatorname{r}}(G,H).\] The operator $f^+$ is the positive part of $f$ in the Dedekind complete lattice-ordered abelian group $\operatorname{A}_{\operatorname{b}}(G,H)$, and $f^-$ is the negative part. \begin{proposition} \label{pro:Ogasawara_f+ordercontinuous} Let $G$ be a directed partially ordered abelian group with the Riesz decomposition property and let $H$ be a Dedekind complete lattice-ordered abelian group. If $f \in \operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$, then $f^+, f^- \in \operatorname{A}_+^{\operatorname{oc}}(G,H)$. \end{proposition} \begin{proof} Let $f \in \operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$. 
We will use Proposition \ref{pro:charAocplus} to show $f^+ \in \operatorname{A}_+^{\operatorname{oc}}(G,H)$. Let $(y_\alpha)_{\alpha \in A}$ be a net in $G$ such that $y_\alpha \downarrow 0$. From the monotonicity of $f^+$ it follows that $f^+(y_\alpha)\downarrow $ and that $f^+(y_\alpha)\geq 0$ for every $\alpha \in A$. To show that $\inf \{f^+(y_\alpha); \alpha \in A\}=0$, let $z$ be a lower bound of $\{f^+(y_\alpha); \alpha \in A\}$. Fix $\beta \in A$ and $y \in [0,y_\beta]$. By Lemma \ref{lem:Ogasawara_existencefancynet} there is a net $(w_\alpha)_{\alpha \in A_{\geq \beta}}$ in $G$ such that $0 \leq y-w_\alpha \leq y_\beta -y_\alpha$ for every $\alpha \in A_{\geq \beta}$ and such that $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}$ exists and satisfies $\inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\}\leq 0$. For $\alpha \in A_{\geq \beta}$ we can use $0 \leq y-w_\alpha \leq y_\beta -y_\alpha$ to see \begin{align*} f(y) - f(w_\alpha) = f(y-w_\alpha)\leq f^+(y- w_\alpha)\leq f^+(y_\beta-y_\alpha)= f^+(y_\beta)-f^+(y_\alpha). \end{align*} Therefore we have shown that \begin{align*} z \leq f^+ (y_\alpha) \leq f^+ (y_\beta) - f(y) + f(w_\alpha) \end{align*} for every $\alpha \in A_{\geq \beta}$. Thus \begin{align*} z \leq f^+ (y_\beta) - f(y) + \inf\{f(w_\alpha); \alpha \in A_{\geq \beta}\} \leq f^+ (y_\beta) - f(y) + 0. \end{align*} The infimum over $y$ yields \begin{align*} z &\leq f^+ (y_\beta) + \inf\{-f(y); y \in [0,y_\beta]\}= f^+ (y_\beta) - \sup\{f(y); y \in [0,y_\beta]\}\\&= f^+ (y_\beta) - f^+ (y_\beta)=0. \end{align*} We conclude $\inf \{f^+(y_\alpha); \alpha \in A\}=0$, hence $f^+ \in \operatorname{A}_+^{\operatorname{oc}}(G,H)$. Since for $f \in \operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ we have that $-f \in \operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$, we obtain $f^-=(-f)^+ \in \operatorname{A}_+^{\operatorname{oc}}(G,H)$. \end{proof} Now we are in a position to present the main results of the present paper in the subsequent two theorems.
\begin{theorem}\label{the:ogasawara_spaces_are_equal} Let $G$ be a directed partially ordered abelian group that satisfies the Riesz decomposition property and let $H$ be a Dedekind complete lattice-ordered abelian group. Then \begin{align*} \operatorname{A}_{\operatorname{b}}^{o_1}(G,H)&=\operatorname{A}_{\operatorname{b}}^{o_2}(G,H)=\operatorname{A}_{\operatorname{b}}^{o_3}(G,H)=\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)\\&=\operatorname{A}_{+}^{\operatorname{oc}}(G,H)-\operatorname{A}_{+}^{\operatorname{oc}}(G,H).\end{align*} \end{theorem} \begin{proof} Let $i\in\{1,2,3\}$. By Theorem \ref{thm:ordercontinuous} we have \[\operatorname{A}_{\operatorname{b}}^{o_i}(G,H)\subseteq\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H).\] Proposition \ref{pro:Ogasawara_f+ordercontinuous} implies that \[\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)\subseteq\operatorname{A}_{+}^{\operatorname{oc}}(G,H)-\operatorname{A}_{+}^{\operatorname{oc}}(G,H).\] By Proposition \ref{pro:full_subgroup} the set $\operatorname{A}_{\operatorname{b}}^{o_i}(G,H)$ is a subgroup of $\operatorname{A}_{\operatorname{b}}(G,H)$, hence \[\operatorname{A}_{+}^{\operatorname{oc}}(G,H)-\operatorname{A}_{+}^{\operatorname{oc}}(G,H)\subseteq\operatorname{A}_{\operatorname{b}}^{o_i}(G,H).\] \end{proof} \begin{theorem} \label{the:ogasawara_part2} Let $G$ be a directed partially ordered abelian group that satisfies the Riesz decomposition property and let $H$ be a Dedekind complete lattice-ordered abelian group. Then $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ is an order closed ideal in $\operatorname{A}_{\operatorname{b}}(G,H)$. \end{theorem} \begin{proof} From Proposition \ref{pro:full_subgroup} and Theorem \ref{the:ogasawara_spaces_are_equal} it follows that $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ is a full subgroup of $\operatorname{A}_{\operatorname{b}}(G,H)$. 
Proposition \ref{pro:Ogasawara_f+ordercontinuous} implies that $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ is closed under the lattice operations in $\operatorname{A}_{\operatorname{b}}(G,H)$. In particular, $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ is directed, i.e.\ it is an ideal. Combining Theorem \ref{the:ogasawara_spaces_are_equal}, Proposition \ref{pro:Aoc+oclosed} and Proposition \ref{pro:McapG+oclosed}, we conclude that $\operatorname{A}_{\operatorname{b}}^{\tau_o}(G,H)$ is order closed. \end{proof} Theorem \ref{the:ogasawara_part2} is a generalisation of a theorem by Ogasawara \cite{Ogasawara1944} (see also \cite[Theorem 4.4]{Positiveoperators_old}) for $o_1$-continuous operators on vector lattices. The following slight generalisation of \cite[Proposition 1.6]{Abra} is obtained due to Theorem \ref{the:ogasawara_spaces_are_equal}. \begin{corollary} \label{cor:orderboundedando2contiso3cont} Let $G$ be a directed partially ordered abelian group that satisfies the Riesz decomposition property and let $H$ be an Archimedean lattice-ordered abelian group. Then $\operatorname{A}_{\operatorname{b}}^{o_2}(G,H)\subseteq \operatorname{A}_{\operatorname{b}}^{o_3}(G,H)$. \end{corollary} \begin{proof} Let $f \in \operatorname{A}_{\operatorname{b}}^{o_2}(G,H)$ and $(x_\alpha)_{\alpha \in A}$ a net in $G$ such that $x_\alpha \xrightarrow{o_3}x\in G$. Furthermore let $(H^\gamma,J)$ be the group Dedekind completion\footnote{A slight adaptation of arguments given in \cite[Theorem IV.11.1]{Vulikh67} yields that for every Archimedean lattice-ordered abelian group $G$ there is a Dedekind complete lattice-ordered abelian group $G^\gamma$ and an additive order embedding $J\colon G\to G^\gamma$ such that $J[G]$ is order dense in $G^\gamma$. We say that $(G^\gamma,J)$ is the \emph{group Dedekind completion} of $G$.} of $H$. 
Due to Corollary \ref{coro:orderembeddingswithroderdenseimagesarecontinuous} the map $J$ is $o_2$-continuous, hence so is $J\circ f\colon G \to H^\gamma$. Since $J\circ f$ is order bounded, Theorem \ref{the:ogasawara_spaces_are_equal} yields $J\circ f\in \operatorname{A}_{\operatorname{b}}^{o_2}(G,H^\gamma)= \operatorname{A}_{\operatorname{b}}^{o_3}(G,H^\gamma)$. Thus $J(f(x_\alpha))\xrightarrow{o_3}J(f(x))$ in $H^\gamma$. Now Proposition \ref{pro:o2_o3_convergenceDedekind_complete_lattice} yields $J(f(x_\alpha))\xrightarrow{o_2}J(f(x))$ in $H^\gamma$. Thus Proposition \ref{pro:Dedekindcompletionando3o2}(ii) shows that $f(x_\alpha) \xrightarrow{o_3}f(x)$. \end{proof} \section{Order convergence and order topology in partially ordered vector spaces} In this section let $X$ be a partially ordered vector space. We will show that for $i\in \{1,2,3\}$ the scalar multiplication is jointly continuous with respect to $o_i$-convergence on $X$ and $\mathbb{R}$, respectively, if and only if $X$ is Archimedean and directed. Examples are presented in which the order convergence concepts differ. \begin{lemma} \label{lem:characterisationArchimedeananddirectedwithorderconvergence} Let $i\in\{1,2,3\}$. Then the following statements are equivalent. \begin{itemize} \item[(i)] For every $x \in X$ the sequence $(\frac{1}{n}x)_{n \in \mathbb{N}}$ satisfies $\frac{1}{n}x\xrightarrow{o_i}0$. \item[(ii)] For every $x \in X$ the sequence $(\frac{1}{n}x)_{n \in \mathbb{N}}$ satisfies $\frac{1}{n}x\xrightarrow{\tau_o}0$. \item[(iii)] $X$ is Archimedean and directed. \end{itemize} \end{lemma} \begin{proof} The implication (i)$\Rightarrow$(ii) follows from Proposition \ref{pro:basic_convergences}. To show (ii)$\Rightarrow$(iii), we first establish that $X_+$ is generating in $X$.
Let $x\in X$, then for the sequence $(\frac{1}{n}x)_{n \in \mathbb{N}}$ we have $\frac{1}{n}x\xrightarrow{\tau_o}0$, hence Remark \ref{rem:G+-G+orderopen} (c) shows the existence of $n\in\mathbb{N}$ with $\frac{1}{n}x\in X_+-X_+$. Since $X_+-X_+$ is a vector space, we obtain $x\in X_+-X_+$. To show that $X$ is Archimedean, let $x\in X_+$. By (ii), we have $\frac{1}{n}x\xrightarrow{\tau_o}0$. Since $(\frac{1}{n}x)\downarrow$, Lemma \ref{lem:topologicalconv_inf} proves $(\frac{1}{n}x)\downarrow 0$. Next we show (iii)$\Rightarrow$(i). Let $x\in X$. By the directedness of $X$, we have $x_1,x_2 \in X_+ $ with $x=x_1-x_2$. Since $X$ is Archimedean, we get $\frac{1}{n}x_j \downarrow 0$ for $j \in \{1,2\}$. Thus $\frac{1}{n}x_j \xrightarrow{o_i} 0$ by Remark \ref{rem:decreasingnet} and Proposition \ref{pro:basic_convergences}. Hence Proposition \ref{pro:plus_o_i} implies $\frac{1}{n}x=\frac{1}{n}x_1-\frac{1}{n}x_2\xrightarrow{o_i}0$. \end{proof} \begin{proposition} \label{pro:characterisation_o_icontinuousscalarmultiplication} Let $i \in \{1,2,3\}$. Then the following statements are equivalent. \begin{itemize} \item[(i)] $X$ is Archimedean and directed. \item[(ii)] For every net $(\lambda_\alpha)_{\alpha\in A}$ in $\mathbb{R}$ with $\lambda_\alpha\xrightarrow{o_i}\lambda\in \mathbb{R}$ and every net $(x_\beta)_{\beta\in B}$ in $X$ with $x_\beta\xrightarrow{o_i}x\in X$ the net $(\lambda_\alpha x_\beta)_{(\alpha,\beta) \in A\times B}$ satisfies $\lambda_\alpha x_\beta\xrightarrow{o_i}\lambda x$ (where $A\times B$ is ordered component-wise). \item[(iii)] For every net $(\lambda_\alpha)_{\alpha\in A}$ in $\mathbb{R}$ with $\lambda_\alpha\xrightarrow{o_i}\lambda\in \mathbb{R}$ and every net $(x_\alpha)_{\alpha\in A}$ in $X$ with $x_\alpha\xrightarrow{o_i}x\in X$ the net $(\lambda_\alpha x_\alpha)_{\alpha \in A}$ satisfies $\lambda_\alpha x_\alpha\xrightarrow{o_i}\lambda x$. 
\end{itemize} \end{proposition} \begin{proof} To show (i)$\Rightarrow$(ii), let $(\lambda_\alpha)_{\alpha\in A}$ be a net in $\mathbb{R}$ with $\lambda_\alpha\xrightarrow{o_1}\lambda\in \mathbb{R}$ and let $(x_\beta)_{\beta\in B}$ be a net in $X$ with $x_\beta\xrightarrow{o_1}x\in X$. According to Proposition \ref{pro:char_o_i_poag}, there is a net $(\check{\lambda}_\alpha)_{\alpha\in A}$ in $\mathbb{R}$ with $\check{\lambda}_\alpha\downarrow 0$ and $\pm (\lambda_\alpha-\lambda)\leq \check{\lambda}_\alpha$ for every $\alpha\in A$, and a net $(\check{x}_\beta)_{\beta\in B}$ in $X$ with $\check{x}_\beta\downarrow 0$ and $\pm (x_\beta-x)\leq \check{x}_\beta$ for every $\beta\in B$. Since $X$ is directed, there is $\check{x}\in X$ with $\pm x\leq \check{x}$. The net $(\check{\lambda}_\alpha \check{x})_{(\alpha,\beta)\in A\times B}$ is a subnet of $(\check{\lambda}_\alpha \check{x})_{\alpha\in A}$, hence $X$ being Archimedean implies that $\check{\lambda}_\alpha \check{x}\downarrow 0$. A straightforward argument shows that the net $(\check{\lambda}_\alpha \check{x}_\beta +\check{\lambda}_\alpha \check{x}+|\lambda|\check{x}_\beta)_{(\alpha,\beta)\in A\times B}$ satisfies $\check{\lambda}_\alpha \check{x}_\beta +\check{\lambda}_\alpha \check{x}+|\lambda|\check{x}_\beta\downarrow 0$. For $(\alpha,\beta)\in A\times B$ we have $\pm \lambda_\alpha\leq \check{\lambda}_\alpha \mp \lambda\leq \check{\lambda}_\alpha+|\lambda|$ and hence \begin{align*} \pm(\lambda_\alpha x_\beta-\lambda x)= \pm \lambda_\alpha (x_\beta- x)\pm (\lambda_\alpha-\lambda)x\leq (\check{\lambda}_\alpha+|\lambda|)\check{x}_\beta +\check{\lambda}_\alpha \check{x}, \end{align*} such that the net $(\lambda_\alpha x_\beta)_{(\alpha,\beta) \in A\times B}$ satisfies $\lambda_\alpha x_\beta\xrightarrow{o_1}\lambda x$. The arguments for $o_2$-convergence and $o_3$-convergence are similar. \\ The implication (ii)$\Rightarrow$(iii) follows from Remark \ref{rem:subnet_o_i}. 
\\ By Lemma \ref{lem:characterisationArchimedeananddirectedwithorderconvergence} we obtain (iii)$\Rightarrow$(i). \end{proof} Next we present an example of a vector lattice in which $\tau_o$-convergence and $o_3$-convergence do not coincide. \begin{example} \label{exa:ordertopconvergentnetnoto3convergent} Let $X$ be the vector lattice of all real, Lebesgue-measurable, almost everywhere finite functions on $[0,1]$. As usual, we identify almost everywhere equal functions and order $X$ component-wise almost everywhere. Let $(f_n)_{n \in \mathbb{N}}$ be the sequence of characteristic functions of the intervals \[\textstyle [0,1],[0,\frac{1}{2}],[\frac{1}{2},1],[0,\frac{1}{4}],[\frac{1}{4},\frac{2}{4}],[\frac{2}{4},\frac{3}{4}],[\frac{3}{4},1],[0,\frac{1}{8}],\ldots \] The sequence $(f_n)_{n \in \mathbb{N}}$ does not $o_3$-converge to $0$. Indeed, assume $f_n \xrightarrow{o_3}0$. By Proposition \ref{pro:char_o_i_poag} there is a net $(\check{f}_\alpha)_{\alpha \in A}$ in $X$ with $\check{f}_\alpha \downarrow 0$ and a map $\eta\colon A \rightarrow \mathbb{N}$ such that $\pm f_n \leq \check{f}_\alpha$ for all $\alpha \in A$ and $n \in \mathbb{N}_{\geq \eta(\alpha)}$. To obtain a contradiction note that $1 =\sup \{f_n; \, n \in \mathbb{N}_{\geq \eta(\alpha)}\}\leq \check{f}_\alpha$ for all $\alpha \in A$. We show that $f_n \xrightarrow{\tau_o}0$. Let $V\subseteq X$ be order open such that $0\in V$. For $t\in[0,1]$ and $\varepsilon\in \mathbb{R}_{>0}$ let $g^{(t)}_\varepsilon$ be the characteristic function of the interval $[0,1]\cap\left[t-\varepsilon, t+\varepsilon\right]$. Note that for every $t\in[0,1]$ the sequence $\left(g^{(t)}_{\frac{1}{n}}\right)_{n\in\mathbb{N}}$ satisfies $g^{(t)}_{\frac{1}{n}}\downarrow_n 0$. As $V$ is a net catching set for $0$, for every $t\in[0,1]$ there is $\varepsilon(t)\in \mathbb{R}_{>0}$ such that $\left[-g^{(t)}_{\varepsilon(t)},g^{(t)}_{\varepsilon(t)}\right]\subseteq V$. 
Since $[0,1]$ is compact, there is a finite set $I\subset[0,1]$ such that $\{(t-\varepsilon(t),t+\varepsilon(t));\, t\in I\}$ is an open cover of $[0,1]$. Let $\delta$ be a Lebesgue number of this cover. There is $n_0\in \mathbb{N}$ such that for every $n \in \mathbb{N}_{\geq n_0}$ the support of $f_n$ has diameter less than $\delta$. Therefore for every $n \in \mathbb{N}_{\geq n_0}$ there is $t \in I$ such that $f_n\in \left[-g^{(t)}_{\varepsilon(t)},g^{(t)}_{\varepsilon(t)}\right]\subseteq V$. This proves that $f_n \xrightarrow{\tau_o}0$. \end{example} As a continuation of Remark \ref{rem:RestrictionandExtensionproperty}, in the subsequent example we present a vector lattice $Y$ with order topology $\tau_o(Y)$ and an order dense subspace $X$ such that the induced topology differs from the order topology $\tau_o(X)$. In the spirit of \cite{IaB} this means, in particular, that the Extension property (E) is not satisfied for order closed sets. \begin{example}\label{exa:extensionprop} In \cite[Example 5.2]{IaB} the vector lattice \[Y=\left\{y=(y_i)_{i\in \mathbb{Z}} \in l^\infty;\, \lim_{i \rightarrow \infty} y_i \text{ exists}\right\}\] and its order dense subspace \[X=\left\{x=(x_i)_{i \in \mathbb{Z}}\in Y;\, \sum_{k=1}^\infty \frac{x_{-k}}{2^k}=\lim_{i \rightarrow \infty} x_i\right\}\] are considered. Moreover, it is shown that the sequence of unit vectors $(e^{(n)})_{n \in \mathbb{N}}$ is $o_1$-convergent to $0$ in $Y$, but is not $o_1$-convergent in $X$. Here for $n,k \in \mathbb{Z}$ we set $e^{(n)}_k:=1$ for $n=k$ and $e^{(n)}_k:=0$ otherwise. Let $M:=\{e^{(n)};\, n \in \mathbb{N}\}$. By Theorem \ref{thm:orderclosed}, $M$ is not order closed in $Y$. We will show in (A) that $M$ is order closed in $X$ and in (B) that there is no order closed $N\subseteq Y$ such that $N\cap X=M$. Moreover, in (C) we prove that the sequence $(e^{(n)})_{n \in \mathbb{N}}$ is not convergent with respect to $\tau_o(X)$, and hence not $o_3$-convergent and not $o_2$-convergent.
(A) To show that $M$ is order closed in $X$, we use Theorem \ref{thm:orderclosed}. Let $(n_\alpha)_{\alpha \in A}$ be a net in $\mathbb{N}$ such that $e^{(n_\alpha)}\xrightarrow{o_1}x\in X$. Hence there is a net $(\check{e}^{\alpha})_{\alpha \in A}$ in $X$ such that $\check{e}^\alpha \downarrow 0$ and $\pm\left(e^{(n_\alpha)}-x\right)\leq \check{e}^\alpha$ for all $\alpha \in A$. We show in the steps (A1) and (A2) that $(n_\alpha)_{\alpha \in A}$ has exactly one accumulation point $l$, which implies $x=e^{(l)}\in M$. (A1) The net $(n_\alpha)_{\alpha \in A}$ has an accumulation point. Indeed, assume the contrary. Let $k \in \mathbb{Z}$. Since no element of $\{0,\ldots,k\}$ is an accumulation point of $(n_\alpha)_{\alpha \in A}$, there is $\alpha_k \in A$ such that for every $\alpha \in A_{\geq \alpha_k}$ we have $n_\alpha > k$. Hence $e^{(n_\alpha)}_k=0$ for every $\alpha \in A_{\geq \alpha_k}$, and $|x_k|=\left|e^{(n_\alpha)}_k-x_k\right|\leq \check{e}_k^\alpha \downarrow 0$ implies $x_k=0$. This shows $x=0$. We show that $\lim_{k \rightarrow \infty}\check{e}^\alpha_k \geq 1$ for every $\alpha \in A$. Indeed, assume that there is $\alpha \in A$ with $\lim_{k \rightarrow \infty}\check{e}^\alpha_k < 1$. Then there is $K \in \mathbb{N}$ such that for every $k \in \mathbb{N}_{\geq K}$ we have $\check{e}_k^\alpha<1$. Since $(n_\beta)_{\beta \in A}$ has no accumulation points, there is $\beta\in A_{\geq \alpha}$ such that $n_\beta \geq K$, and we obtain the contradiction $1>\check{e}^\alpha_{n_\beta}\geq \check{e}^{\beta}_{n_\beta}\geq e^{(n_\beta)}_{n_\beta}=1$. We do not have $\check{e}_k^\alpha \downarrow_\alpha 0$ for every $k \in \mathbb{Z}\setminus \mathbb{N}$, since otherwise monotone convergence would imply $1\leq \lim_{k \rightarrow \infty}\check{e}^\alpha_k= \sum_{k=1}^\infty \frac{\check{e}_{-k}^\alpha}{2^k} \downarrow_\alpha 0$. Hence there is $k \in \mathbb{Z}\setminus \mathbb{N}$ and $\delta>0$ with $\check{e}_k^\alpha\geq \delta$ for every $\alpha \in A$.
Put $w:=\delta e^{(k)}-2 \delta e^{(k-1)} $ and observe the contradiction $w\leq \check{e}^\alpha \downarrow_\alpha 0$. This shows that $(n_\alpha)_{\alpha \in A}$ has accumulation points. (A2) The net $(n_\alpha)_{\alpha \in A}$ has at most one accumulation point. Indeed, let $l,k\in \mathbb{N}$ be accumulation points of this net. As $\check{e}_l^{\alpha}\downarrow 0$, we obtain that for every $\epsilon>0$ there is an $\alpha_0 \in A$ such that for every $\alpha \in A_{\geq \alpha_0}$ we have $\left|e^{(n_\alpha)}_l-x_l\right|\leq \check{e}_l^\alpha \leq \check{e}_l^{\alpha_0} \leq \epsilon$. Since $l$ is an accumulation point of $(n_\alpha)_{\alpha \in A}$, there is $\alpha \in A_{\geq \alpha_0}$ such that $l=n_\alpha$. Thus $|1-x_l|=\left|e^{(l)}_l-x_l\right|=\left|e^{(n_\alpha)}_l-x_l\right|\leq \epsilon$, consequently $x_l=1$. Since $k$ is an accumulation point of $(n_\alpha)_{\alpha \in A}$, there is $\beta \in A_{\geq \alpha_0}$ such that $k=n_\beta$. Hence $\left|e^{(k)}_l-1\right|=\left|e^{(n_\beta)}_l-x_l\right|\leq \epsilon$ and we have shown $e^{(k)}_l=1$, i.e.\ $k=l$. (B) To show that there is no order closed set $N\subseteq Y$ such that $N\cap X=M$, assume the contrary. As $e^{(n)}\xrightarrow{o_1}0$ we obtain $0\in N$. Hence $0\in N\cap X=M$, which is a contradiction. (C) Assume that $e^{(n)}\xrightarrow{\tau_o(X)}x \in X$. Since $M$ is order closed in $X$, there is $l \in \mathbb{N}$ such that $x =e^{(l)}$. Let $O:=\{z \in X;\, z_l\in (0,2)\}$ and observe that $e^{(l)}\in O\in \tau_o(X)$. Thus $e^{(n)}\xrightarrow{\tau_o(X)}e^{(l)}$ implies the existence of $N\in \mathbb{N}$ such that for every $n \in \mathbb{N}_{\geq N}$ we have $e^{(n)}\in O$, a contradiction. \end{example} \section{Properties of the set of order continuous linear operators in partially ordered vector spaces} In this section, let $X$ and $Y$ be partially ordered vector spaces. In this setting, we provide statements similar to those in Section 7.
The following is a slight generalisation of \cite[Theorem 2.1]{Abra}. Note that for $i=1$ the result is contained in Proposition \ref{pro:o1ob}. \begin{proposition} \label{pro:o_icontinuousisorderbdd} Let $X$ be Archimedean, $G$ be a partially ordered abelian group and $i \in \{1,2,3\}$. Every $o_i$-continuous and additive map $f\colon X \rightarrow G$ is order bounded. \end{proposition} \begin{proof} Note that it is sufficient to show that $f[[0,v]]$ is order bounded in $G$ for every $v \in X_+$. Let $A:=\mathbb{N}\times [0,v]$ be ordered lexicographically and define $x_{(n,w)}:=\frac{1}{n}w$, $\hat{x}_{(n,w)}:=-\frac{1}{n}v$ and $\check{x}_{(n,w)}:=\frac{1}{n}v$ for $(n,w)\in A$. Note that $\hat{x}_\alpha \uparrow 0$ and $\check{x}_\alpha \downarrow 0$ and that $\hat{x}_\alpha \leq x_\alpha \leq \check{x}_\alpha$ for all $\alpha \in A$. Thus $x_\alpha \xrightarrow{o_1}0$. Since $f$ is $o_i$-continuous, by Proposition \ref{pro:basic_convergences} we obtain $f(x_\alpha)\xrightarrow{o_3}0$. Therefore by Proposition \ref{pro:char_o_i_poag}(iii) there is a net $(y_\beta)_{\beta \in B}$ and a map $\eta \colon B \rightarrow A$ such that $y_\beta \downarrow 0$ and $\pm f(x_\alpha)\leq y_\beta$ for every $\beta \in B$ and $\alpha \in A_{\geq \eta(\beta)}$. Fix $\beta\in B$. Since $\eta(\beta)\in A$ there is $(m,u)\in A$ such that $\eta(\beta)=(m,u)$. Now let $w \in [0,v]$ and observe that $(m+1,w)\geq (m,u)=\eta(\beta)$. Thus $\pm f(w)=\pm(m+1)f\left(\frac{1}{m+1}w\right)=\pm(m+1)f\left(x_{(m+1,w)}\right)\leq (m+1) y_\beta$. Hence $f[[0,v]]\subseteq [-(m+1)y_\beta,(m+1)y_\beta]$. \end{proof} \begin{remark} It is an open question whether Proposition \ref{pro:o_icontinuousisorderbdd} is valid if $X$ is an Archimedean partially ordered abelian group.
\end{remark} We denote $\operatorname{L}^{o_i}_{\operatorname{b}}(X,Y)= \operatorname{A}^{o_i}_{\operatorname{b}}(X,Y)\cap \operatorname{L}(X,Y)$, $\operatorname{L}^{\tau_o}_{\operatorname{b}}(X,Y)= \operatorname{A}^{\tau_o}_{\operatorname{b}}(X,Y)\cap \operatorname{L}(X,Y)$ and $\operatorname{L}_+^{\operatorname{oc}}(X,Y)= \operatorname{A}_+^{\operatorname{oc}}(X,Y)\cap \operatorname{L}(X,Y)$. The proof of the following statement is similar to the one in \cite[Lemma 1.26]{CAD}. \begin{proposition} \label{pro:additive_mon_implies_homogen} If $X$ is directed and $Y$ is Archimedean, then every additive monotone map is homogeneous, i.e.\ $\operatorname{A}_+(X,Y)=\operatorname{L}_+(X,Y)$. \end{proposition} An analogue for $o_i$-continuous maps is given next. \begin{proposition} \label{pro:additive_o_i_cont_implies_homogen} Let $X, Y$ be directed and Archimedean and let $i\in\{1,2,3\}$. Then every additive $o_i$-continuous map from $X$ to $Y$ is homogeneous, hence $\operatorname{A}^{o_i}_{\operatorname{b}}(X,Y)=\operatorname{L}^{o_i}_{\operatorname{b}}(X,Y)$. Furthermore, $\operatorname{A}_+^{\operatorname{oc}}(X,Y)=\operatorname{L}_+^{\operatorname{oc}}(X,Y)$. \end{proposition} \begin{proof} Let $T\in \operatorname{A}^{o_i}_{\operatorname{b}}(X,Y)$. Observe that every additive map is $\mathbb{Q}$-homo\-geneous. Let $\lambda\in\mathbb{R}$ and $x\in X$. There is a sequence $(\lambda_n)_{n\in\mathbb{N}}$ in $\mathbb{Q}$ that $o_i$-converges to $\lambda$ (with respect to $\mathbb{R}$, cf.\ Example \ref{exa:opensubsetsofR}). By Proposition \ref{pro:characterisation_o_icontinuousscalarmultiplication} we get $\lambda_n x\xrightarrow{o_i} \lambda x$ and $\lambda_n T(x)\xrightarrow{o_i} \lambda T(x)$. Since $T$ is $o_i$-continuous, we obtain $T(\lambda_n x)\xrightarrow{o_i} T(\lambda x)$. As $T$ is $\mathbb{Q}$-homogeneous, we get for every $n\in\mathbb{N}$ that $T(\lambda_n x)=\lambda_n T(x)$.
Due to Remark \ref{rem:unique_order_limits} order limits are unique, hence we conclude $T(\lambda x)=\lambda T(x)$. \end{proof} Under the conditions of Proposition \ref{pro:additive_mon_implies_homogen}, we obtain \[\operatorname{A}_+(X,Y)-\operatorname{A}_+(X,Y)= \operatorname{L}_+(X,Y)-\operatorname{L}_+(X,Y)\subseteq \operatorname{L}_{\operatorname{b}}(X,Y)\subseteq \operatorname{A}_{\operatorname{b}}(X,Y).\] Hence, if $\operatorname{A}_{\operatorname{b}}(X,Y)$ is directed, then $\operatorname{A}_{\operatorname{b}}(X,Y)=\operatorname{L}_{\operatorname{b}}(X,Y)$. Therefore, Theorem \ref{the:RK_final} yields the following statement. \begin{theorem} Let $X$ be a directed partially ordered vector space with the Riesz decomposition property, and let $Y$ be a Dedekind complete vector lattice. Then every additive order bounded map is homogeneous, i.e.\ $\operatorname{A}_{\operatorname{b}}(X,Y)=\operatorname{L}_{\operatorname{b}}(X,Y)$. \end{theorem} We reformulate the Theorems \ref{the:ogasawara_spaces_are_equal} and \ref{the:ogasawara_part2} and obtain a generalisation of the Ogasawara theorem. \begin{theorem}\label{the:finalOga} Let $X$ be a directed partially ordered vector space with the Riesz decomposition property, and let $Y$ be a Dedekind complete vector lattice. Then \begin{align*} \operatorname{L}_{\operatorname{b}}^{o_1}(X,Y)&=\operatorname{L}_{\operatorname{b}}^{o_2}(X,Y)=\operatorname{L}_{\operatorname{b}}^{o_3}(X,Y)=\operatorname{L}_{\operatorname{b}}^{\tau_o}(X,Y)\\&=\operatorname{L}_{+}^{\operatorname{oc}}(X,Y)-\operatorname{L}_{+}^{\operatorname{oc}}(X,Y).\end{align*} Moreover, $\operatorname{L}_{\operatorname{b}}^{\tau_o}(X,Y)$ is an order closed ideal in $\operatorname{L}_{\operatorname{b}}(X,Y)$. 
\end{theorem} If $X$ is, in addition, Archimedean, then by Proposition \ref{pro:o_icontinuousisorderbdd} and Theorem \ref{the:finalOga} a linear operator $T\colon X \to Y$ is $o_i$-continuous if and only if $T\in \operatorname{L}_{+}^{\operatorname{oc}}(X,Y)-\operatorname{L}_{+}^{\operatorname{oc}}(X,Y)$. It is an open question whether one obtains similar results to the ones in Theorem \ref{the:finalOga} under weaker assumptions. In particular, if $Y$ is an Archimedean vector lattice, but not Dedekind complete, then the set of all regular linear operators is an Archimedean directed partially ordered vector space, and the notion of an ideal is at hand, see \cite{IaB}. One can ask whether the set of order continuous (or $o_i$-continuous) regular linear operators is an order closed ideal in the space of regular operators.
\section{Introduction} The smoothing problem often refers to the scenario where one has an unobserved Markov chain (or signal) in discrete or continuous time and one is interested in inferring the hidden process on the basis of observations, which depend upon the hidden chain. The case we consider is where the hidden process follows a \gls{sde} and the observations are regularly recorded at discrete times; given the signal at a time $t$ the observation is assumed to be conditionally independent of all other random variables. The filtering problem is to infer some functional of the hidden state at time $t$ given all the observations up to time $t$, and the smoothing problem is to infer some functional of potentially all the states at the discrete observation times, again given all the observations. It is often of interest to do this recursively in time. This modelling context is relevant for many real applications in econometrics, finance and engineering; see e.g.\ \cite{Cappe2005} and the references therein. The smoothing problem is notoriously challenging. Supposing one has access to the exact transition of the \gls{sde}, then unless the observation density is Gaussian and depends linearly on the hidden state and the transition density is also Gaussian depending linearly on the previous state, the filter and smoother are not analytically tractable (unless the state-space of the position of the diffusion at any given time is finite and of small cardinality); see \cite{Crisan2008}. However, it is seldom the case that even the transition density (or some unbiased approximation of it, e.g.\ \cite{Fearnhead2008} and the references therein) is available; its unavailability is assumed throughout the article. Thus typically, one time-discretizes the diffusion process and then one seeks to perform filtering and smoothing from the time-discretized model. This latter task remains challenging, as it is still analytically intractable.
There is a vast literature on how to numerically approximate the filter/smoother (e.g.\ \cite{Crisan2011}), of which perhaps the most popular method is the particle filter. This is a method whose cost grows linearly with the time parameter and which generates $N$ samples in parallel. These samples are put through sampling and resampling operations. It is well-known that when estimating the filter, the error is uniform in time. For the smoother, the error often grows due to the so-called path degeneracy problem and indeed, there are many smoothing problems for which it is not appropriate; see \cite{Kantas2015} for some review and discussion. In the context of the problem in this article, when only considering the filter, ignoring the time parameter and under assumptions, to obtain a \gls{mse} of $\mathcal{O}(\epsilon^2)$ for some $\epsilon>0$ the cost of the \gls{pf} is $\mathcal{O}(\epsilon^{-3})$. The \gls{mse} takes into account the exact filter (i.e.~the one with no time discretization). \Gls{mlmc} methods \cite{Giles2008, Giles2015, Giles2009, Giles2014, Heinrich2001} are of interest in continuum systems which have to be discretized in one dimension, just as in this article (extensions to discretization in multiple dimensions have been proposed and studied in \cite{Crisan2017,Haji2016}). We explain the idea informally as follows: let the time parameter be fixed and denote by $p^L_t$ the filter associated to a (say Euler) discretization level $h_L>0$, set $X_t\in\mathbb{R}^d$, $d\geq 1$ and for $\varphi:\mathbb{R}^d\rightarrow\mathbb{R}$ bounded denote by $p^L_t(\varphi)$ the expectation of $\varphi$ \gls{wrt} the filter. Then the \gls{mlmc} method is based upon the following approach.
Consider $0<h_L<h_{L-1}<\cdots<h_0<+\infty$ a sequence of discretizations, where $h_L$ is the most accurate (finest) discretization and $h_0$ the least (coarsest); the \gls{ml} identity is \eqns{ p^L_t(\varphi) = \sum_{l=0}^L ( p^l_t - p^{l-1}_t)(\varphi) } where $p^{-1}_t$ is an arbitrary measure satisfying $p^{-1}_t(\varphi) = 0$ for every $\varphi$. The idea is then to draw $N_0$ independent samples from $p^0_t$ and then, independently for each $1\leq l \leq L$, sample $N_l$ coupled pairs from the pair $(p^l_t,p^{l-1}_t)$. The \gls{mlmc} estimator is then \eqns{ \frac{1}{N_0}\sum_{i=1}^{N_0} \varphi(X^0_{t,i}) + \sum_{l=1}^L\frac{1}{N_l}\sum_{i=1}^{N_l}[\varphi(X^l_{t,i})-\varphi(X^{l-}_{t,i})] } where $\{X^0_{t,i}\}_{i=1}^{N_0}$ are i.i.d.\ $p^0_t$ and $\{(X^l_{t,i},X^{l-}_{t,i})\}_{i=1}^{N_l}$ are i.i.d.\ from a coupling of $(p^l_t,p^{l-1}_t)$. To obtain a \gls{mse} of $\mathcal{O}(\epsilon^2)$ one sets $L$ such that the squared bias is $\mathcal{O}(\epsilon^2)$ (the bias is known in the context of interest). If one has $\Var(\varphi(X^l_{t,1})-\varphi(X^{l-}_{t,1})) = \mathcal{O}(h_l^{\beta})$ for some $\beta>0$ then one can try to minimize (\gls{wrt} $N_1,\dots,N_L$) the cost $\sum_{l=1}^LN_lh_l^{-\zeta}$ ($\zeta=1$ for an Euler discretization) subject to the variance $1/N_0 + \sum_{l=1}^L h_l^{\beta}/N_l$ being $\mathcal{O}(\epsilon^2)$. \cite{Giles2008} finds a solution to this problem. In \cite{Gregory2016,Jasra2015,Jasra2018} it is shown how to utilize the \gls{pf} to leverage the potential decrease in cost to obtain a given \gls{mse}. This has been termed the MLPF. The idea is to use couplings in the Euler dynamics and the resampling operation of a \gls{pf}. This has been later refined in \cite{Sen2018}.
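The multilevel mechanics described above can be sketched concretely in the observation-free setting of \cite{Giles2008}, for a scalar diffusion and Euler couplings in which the coarse path is driven by the sums of the fine Brownian increments. This is an illustrative sketch only; the function names, the drift/diffusion arguments and the sample allocations are our own assumptions, not notation from the article.

```python
import numpy as np

def euler_path(x0, a, b, T, l, rng):
    """Single Euler path with step h_l = 2^-l, returning X_T."""
    h = 2.0 ** (-l)
    x = x0
    for _ in range(int(T / h)):
        x = x + h * a(x) + b(x) * rng.normal(0.0, np.sqrt(h))
    return x

def euler_coupled_pair(x0, a, b, T, l, rng):
    """Coupled (fine, coarse) Euler pair: the coarse path (step 2 h_l) reuses
    the summed fine Brownian increments, which makes the level difference
    phi(X^l_T) - phi(X^{l-1}_T) have small variance."""
    h = 2.0 ** (-l)
    xf, xc = x0, x0
    for _ in range(int(T / h) // 2):
        dw1 = rng.normal(0.0, np.sqrt(h))
        dw2 = rng.normal(0.0, np.sqrt(h))
        xf = xf + h * a(xf) + b(xf) * dw1           # two fine steps
        xf = xf + h * a(xf) + b(xf) * dw2
        xc = xc + 2 * h * a(xc) + b(xc) * (dw1 + dw2)  # one coarse step
    return xf, xc

def mlmc_estimate(phi, x0, a, b, T, L, N, rng):
    """Telescoping-sum estimator of E[phi(X_T)] with N[l] samples on level l."""
    est = np.mean([phi(euler_path(x0, a, b, T, 0, rng)) for _ in range(N[0])])
    for l in range(1, L + 1):
        pairs = [euler_coupled_pair(x0, a, b, T, l, rng) for _ in range(N[l])]
        est += np.mean([phi(xf) - phi(xc) for xf, xc in pairs])
    return est
```

In practice the per-level sample sizes $N_l$ would be chosen by the cost/variance optimization of \cite{Giles2008} rather than fixed by hand as here.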
To our knowledge, the only theoretical analysis of the \gls{mlpf}, given in \cite{Jasra2015}, shows that to obtain a \gls{mse} of $\mathcal{O}(\epsilon^2)$ the cost of the \gls{mlpf} is $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$, for some specific (constant diffusion coefficient) models and under particular assumptions. This is known to be worse than the rates obtained in \cite{Giles2008} in the case where there are no observations. Here and throughout, the time parameter is omitted from the discussion on cost and error, despite the fact that these are important considerations in general. The main idea in this article is to adopt an alternative method to the \gls{pf}. The approach is to use transport methods \cite{Spantini2017}. Transport maps have been used for Bayesian inference \cite{ElMoselhy2012, Heng2015} and more specifically for parameter estimation in \cite{Parno2016} based on a related multi-scale idea. The basic idea is to obtain a map such that the image of samples from an easy-to-sample distribution through this map has exactly the distribution which one desires. In \cite{Spantini2017} it is shown how to develop numerical approximations of maps, associated exactly to the distributions of interest in this article. These approximations often induce i.i.d.~Monte Carlo approximations of expectations of interest, albeit with a numerical error associated to the approximation of the transport map. As mentioned in \cite{JasLaw2017}, it is simple to induce coupled pairs using the method of \cite{Spantini2017} and this is exactly what is done in this paper. The potential advantages of this method relative to the \gls{mlpf} are then as follows: \begin{enumerate}[label=(\roman*)] \item \label{it:coupledResampling} The \gls{ml} rate lost by coupled resampling can be regained in the context of filtering.
\item \label{it:smoother} The method can be used for approximating the expectation of some functionals \gls{wrt} the smoother, whereas the approach in \cite{Jasra2015,Jasra2018} is typically not useful for smoothing at large time-lags. \end{enumerate} In this article we establish that \ref{it:coupledResampling} can hold in an ideal special case, where the model is linear and Gaussian and the transport map is exact. This result is reinforced by numerical examples which show that the result seems to hold more generally. The significance of \ref{it:coupledResampling} is that to obtain a \gls{mse} of $\mathcal{O}(\epsilon^2)$ the cost is $\mathcal{O}(\epsilon^{-2})$; this is better than the \gls{mlpf}. Point \ref{it:smoother} relates to the afore-mentioned path degeneracy effect, which can mean \glspl{pf} (and hence the \gls{mlpf}) are not so useful in the context of large lag smoothing. The structure of the article is as follows: \Cref{sec:multiLevelSDE} introduces the model and transport methodology. \Cref{sec:MLMC} presents the multilevel approach and the MLPF as well as the mechanisms underlying the computation of transport maps for a given level of discretization. The efficiency of the proposed approach is shown numerically on increasingly challenging scenarios in \cref{sec:numericalStudy}. \section[Methodology for SDE smoothing]{Methodology for \gls{sde} smoothing} \label{sec:multiLevelSDE} In this section, the considered notations and assumptions for the smoothing of \glspl{sde} are presented, together with a brief overview of the transport methodology. \subsection[The SDE model]{The \gls{sde} model} Throughout the article, all random variables will be assumed to be on the same complete probability space $(\Omega,\Sigma,\mathbb{P})$ and will be denoted by upper-case letters, while their realisations will be in lower case. 
We consider a diffusion process $\bm{X} = \{X_t\}_{t\in[0,T]}$ on the space $\mathbb{R}^d$ of the form \eqnl{eq:diffusion}{ \mathrm{d} X_t = a(X_t) \mathrm{d} t + b(X_t)\mathrm{d} W_t ,\qquad t \in [0,T], } where $T$ is the final time, $\{W_t\}_{t \in [0,T]}$ is the Brownian motion on $\mathbb{R}^d$, $a(\cdot)$ is in the set $\mathcal{C}^2(\mathbb{R}^d,\mathbb{R}^d)$ of twice continuously differentiable mappings from $\mathbb{R}^d$ to itself and $b(\cdot)$ is in $\mathcal{C}^2(\mathbb{R}^d, \mathbb{M}_d(\mathbb{R}))$ with $\mathbb{M}_d(\mathbb{R})$ the space of square matrices of size $d$. The mapping $b$ is assumed to be such that $b(x)b(x)^{\tr}$ is positive definite for all $x \in \mathbb{R}^d$, with $\cdot^{\tr}$ denoting the transposition. Moreover, the drift and diffusion coefficients are assumed to be globally Lipschitz, i.e.\ there exists $c > 0$ such that \eqns{ |a(x) - a(x')| + |b(x) - b(x')| \leq c|x-x'| } for all $x,x' \in \mathbb{R}^d$. The initial distribution of the process $\bm{X}$, i.e.\ the distribution of $X_0$, is denoted $p_0$ (and might be equal to $\delta_{x_0}$ for some initial condition $x_0 \in \mathbb{R}^d$). It is assumed that the $m$\textsuperscript{th}-order moment of $X_0$ defined as $\mathbb{E}(|X_0|^m)$ is finite for any $m \geq 1$. Probability density functions will be considered with respect to the Lebesgue measure on $\mathbb{R}^d$ and both probability measures and their corresponding density functions will be referred to by the same notation. The distribution of $X_k$, $k \in \{1,\dots,T\}$, given a realisation $x_{k-1}$ of the state $X_{k-1}$ is denoted $Q(x_{k-1},\cdot)$. In addition to the fact that the expression of the Markov transition $Q$ is unavailable in general, it is not usually possible to devise an unbiased estimator for it or even to sample from it. 
In the case where $d=1$, one can obtain ``skeletons'' of exact paths using the algorithm of \cite{Beskos2005,Beskos2006}; however, the extension of this approach to \glspl{sde} of higher dimensions might not be possible \cite{AitSahalia2008}. The diffusion process $\bm{X}$ is assumed to be observed in $\mathbb{R}^{d'}$, $d' \in \mathbb{N}$, at all the integer-valued times so that the final time $T$ is also assumed to be an integer. These assumptions are made for the sake of notational simplicity and can be easily removed. For all $k \in \{0,\dots,T\}$, the observation $Y_k$ is a random variable that is conditionally independent of the state $X_t$ at times $t \neq k$ given $X_k$. The observation process can be expressed in general as \eqnl{eq:obsEquation}{ Y_k = g_k(X_k, V_k) } where $g_k$ is a deterministic observation function and where $\{V_k\}_{k=0}^T$ is a collection of independent random variables. It is assumed without any real loss of generality that both $g_k$ and the distribution of $V_k$ do not depend on the time index $k$; the corresponding likelihood for a realisation $y_k$ of $Y_k$ is denoted $\ell(X_k, y_k)$. \subsection[Smoothing for SDEs]{Smoothing for \glspl{sde}} Throughout the article, joint states in $\mathbb{R}^{d(n+1)}$ for some $n \in \mathbb{N}_0$ will be denoted either by $x_{k:k+n} \doteq (x_k,x_{k+1},\dots,x_{k+n})$ with $k \in \mathbb{N}$ or by $x_S$, with $S = \{s_0,s_1,\dots,s_n\}$ a finite subset of $[0,T]$ such that $s_i < s_j$ for all $0 \leq i < j \leq n$, defined as $x_S \doteq (x_{s_0},x_{s_1}, \dots,x_{s_n})$.
The smoothing distribution associated with the \gls{sde} \cref{eq:diffusion} is defined formally as the joint law of the diffusion process $\bm{X}$ at all the integer times given realisations $y_0,\dots,y_T$ of the observation process \cref{eq:obsEquation}, and can be expressed for any $x_{0:T} \in \mathbb{R}^{d(T+1)}$ as \eqns{ \bm{p}(x_{0:T}) = \dfrac{\ell(x_0, y_0) p_0(x_0) \prod_{k=1}^T \big[Q(x_{k-1}, x_k) \ell(x_k, y_k) \big] }{ \int \ell(x'_0, y_0) p_0(x'_0) \prod_{k=1}^T \big[Q(x'_{k-1}, x'_k) \ell(x'_k, y_k) \big] \mathrm{d} x'_{0:T} }. } The dependence of the smoothing distribution on the realisations $y_0,\dots,y_T$ of the observation process is omitted for the sake of notational simplicity. This is justified by the fact that these observations will be fixed in the remainder of the article so that the smoothing distribution $\bm{p}$ and its approximations will always be conditioned on the same given observations. The expression of $\bm{p}$ is a direct consequence of Bayes' theorem applied to the prior $p_0(x_0) \prod_{k=1}^T Q(x_{k-1}, x_k)$ describing the law of the unobserved (hidden) diffusion process together with the joint likelihood $\prod_{k=0}^T \ell(x_k, y_k)$ whose expression results from the conditional independence of the observations. Using the same principle of implicit conditioning as with the smoothing distribution, the filtering distribution $p_k$ at time $k$ is defined as the law of $X_k$ given the realisations $y_0,\dots,y_k$ and is expressed recursively as \eqns{ p_k(x_k) = \dfrac{\ell(x_k, y_k) \int Q(x_{k-1},x_k) p_{k-1}(x_{k-1}) \mathrm{d} x_{k-1}}{\int \ell(x'_k, y_k) Q(x'_{k-1},x'_k) p_{k-1}(x'_{k-1}) \mathrm{d} x'_k \mathrm{d} x'_{k-1}} } for any $x_k \in \mathbb{R}^d$ and any $k \in \{1,\dots,T\}$. The marginal distribution of $X_k$ induced by the smoothing distribution $\bm{p}$ corresponds to the filtering distribution $p_k$ when $k = T$ only. 
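The likelihood $\ell$ entering the smoothing and filtering expressions above is left abstract in the text. As a minimal hedged sketch, the following implements it for the common additive-Gaussian special case $Y_k = g(X_k) + V_k$ with $V_k \sim \mathcal{N}(0, \sigma^2)$; the function names and the Gaussian noise choice are our assumptions, not the paper's.

```python
import numpy as np

def simulate_observation(x, g, obs_std, rng):
    """One draw of Y_k = g(X_k) + V_k with V_k ~ N(0, obs_std^2), an
    additive-Gaussian special case of the general observation equation."""
    return g(x) + obs_std * rng.normal()

def log_likelihood(x, y, g, obs_std):
    """log ell(x, y) for the additive-Gaussian observation model above."""
    r = y - g(x)
    return -0.5 * (r / obs_std) ** 2 - 0.5 * np.log(2.0 * np.pi * obs_std ** 2)
```

Any observation model of the form \cref{eq:obsEquation} with a tractable density could be substituted here; only `log_likelihood` is needed by the inference recursions.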
The objective in this article can now be formally expressed as follows: to compute the expectation $\bm{p}(\varphi) \doteq \int \varphi(x_{0:T}) \bm{p}(x_{0:T}) \mathrm{d} x_{0:T}$ of some bounded measurable function $\varphi$ on $\mathbb{R}^{d(T+1)}$. Although the above formulation casts the considered problem into the standard Bayesian inference framework, the Markov transition $Q$ is unavailable in general, so that expressing analytically the distributions $\bm{p}$ and $p_k$ is not usually possible. The first step toward our objective is then to apply a time-discretization to the \gls{sde} \cref{eq:diffusion}, which, for the sake of simplicity, is illustrated with Euler's method for some discretization level $l \in \mathbb{N}_0$: \eqnl{eq:EulerGen}{ X_{t+h_l} = X_t + h_l a(X_t) + \sqrt{h_l} b(X_t) U_t, } for some time-step $h_l = 2^{-l}$ and for all $t \in \mathcal{T}_l \setminus \{T\}$ where $\mathcal{T}_l \doteq \{0,h_l,\dots,T\}$, with $\{U_t\}_{t \in \mathcal{T}_l \setminus \{T\}}$ a collection of independent Gaussian random variables with density $\phi(\cdot\,; 0,\bm{I}_d)$ where $\bm{I}_d$ is the identity matrix of size $d$. The choice of time step $h_l = 2^{-l}$ is made for the sake of convenience and is not necessary. The only requirement for both the \gls{mlpf} and the multilevel transport is that the ratio $h_{l-1}/h_l$ has to be an integer. The number of time steps from a given observation time up to and including the next observation time, that is in the interval $(k,k+1]$ for some $k \in \{0,\dots,T-1\}$, is $M_l = 2^l$. 
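The Euler scheme \cref{eq:EulerGen} and the grid $\mathcal{T}_l$ can be sketched directly: one step advances the state by the drift and a scaled Gaussian increment, and $M_l = 2^l$ consecutive steps of size $h_l = 2^{-l}$ cover one inter-observation interval. This is an illustrative sketch with our own function names, assuming `a` returns a length-$d$ vector and `b` a $d \times d$ matrix.

```python
import numpy as np

def euler_kernel_step(x, a, b, h, rng):
    """One Euler step in d dimensions:
    x + h a(x) + sqrt(h) b(x) U with U ~ N(0, I_d), cf. (eq:EulerGen)."""
    u = rng.standard_normal(x.shape[0])
    return x + h * a(x) + np.sqrt(h) * b(x) @ u

def advance_one_interval(x, a, b, l, rng):
    """Advance the discretized chain across one observation interval,
    i.e. M_l = 2^l Euler steps of size h_l = 2^-l."""
    h = 2.0 ** (-l)
    for _ in range(2 ** l):
        x = euler_kernel_step(x, a, b, h, rng)
    return x
```

With a non-dyadic ratio $h_{l-1}/h_l$, only the step counts would change; the requirement in the text is merely that this ratio be an integer.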
The numerical scheme \cref{eq:EulerGen} yields a Markov transition $K^l$ between two successive discretization times defined as \eqns{ K^l(x, \cdot) = \phi\big(\cdot\, ; x + h_l a(x), h_l b(x)b(x)^{\tr}\big) } for any $x \in \mathbb{R}^d$, which enables the approximation of $Q$ by another Markov kernel $Q^l$ defined as \eqns{ Q^l(x,\cdot) = \underbrace{K^l \dots K^l}_{M_l\text{ times}}(x,\cdot), } where $KK'(x,\cdot) = \int K(x,x') K'(x',\cdot) \mathrm{d} x'$ for any transition kernels $K$, $K'$. The smoothing distribution $\bm{p}^l$ induced by \cref{eq:EulerGen}, which approximates $\bm{p}$, is expressed on $\mathbb{R}^{d(M_lT+1)}$ instead of $\mathbb{R}^{d(T+1)}$ and is characterised by \eqns{ \bm{p}^l(x_{\mathcal{T}_l}) \propto p_0(x_0) \prod_{t \in \mathcal{T}_l \setminus \{T\}} K^l\big(x_t, x_{t+h_l}\big) \prod_{k=0}^T \ell(x_k, y_k) } for any $x_{\mathcal{T}_l} \in \mathbb{R}^{d(M_l T+1)}$. Marginalising \gls{wrt} all $x_t$ such that $t \notin \mathbb{N}_0$ gives a distribution on $\mathbb{R}^{d(T+1)}$ which depends on the same time steps as $\bm{p}$. It is understood that the error in the approximation of $Q$ and $\bm{p}$ by $Q^l$ and $\bm{p}^l$ decreases as $l$ increases and tends to $0$ as $l$ tends to infinity. The measure $\bm{p}^l(\varphi)$ of the function $\varphi$ is understood as the measure of the canonical extension $\bar\varphi$ of $\varphi$ from $\mathbb{R}^{d(T+1)}$ to $\mathbb{R}^{d(M_lT+1)}$ defined as \eqns{ \bar\varphi(x_t) = \begin{dcases*} \varphi(x_t) & if $t \in \mathbb{N}_0$ \\ 1 & otherwise.
\end{dcases*} } The extension $\bar\varphi$ of the function $\varphi$ can indeed be seen as canonical since it holds that \eqnsa{ \bm{p}^l(\bar\varphi) & \propto \int \bar\varphi(x_{\mathcal{T}_l}) p_0(x_0) \prod_{t \in \mathcal{T}_l \setminus \{T\}} K^l\big(x_t, x_{t+h_l}\big) \prod_{k=0}^T \ell(x_k, y_k) \mathrm{d} x_{\mathcal{T}_l} \\ & = \int \varphi(x_{0:T}) \ell(x_0, y_0) p_0(x_0) \prod_{k=1}^T \big[ Q^l(x_{k-1}, x_k) \ell(x_k, y_k) \big] \mathrm{d} x_{0:T}, } as expected. Henceforth, $\bm{p}^l(\varphi)$ will be used as a shorthand notation for $\bm{p}^l(\bar\varphi)$ when there is no ambiguity. At this stage, standard Bayesian inference methods can be easily applied. For instance, if $a$ and $b$ are linear and constant functions respectively and if the observation equation \eqref{eq:obsEquation} takes the form \eqns{ Y_k = g_k(X_k) + V_k } with $g_k$ a linear map and with $V_k$ normally distributed, then the Kalman methodology can be used to determine the filtering and smoothing distributions. When this is not the case, the \gls{pf} methodology can be used instead, the approach presented in \cite{Doucet2011} being one of the most popular versions. The latter applies sampling and resampling mechanisms to determine the filtering distribution with an error that is uniform in time. It is however less efficient for smoothing problems \cite{Kantas2015}, mostly because of the path degeneracy induced by the use of repeated resampling procedures. The proposed second step toward the efficient computation of $\bm{p}(\varphi)$ is to use a method that enables i.i.d.\ samples to be drawn directly from the smoothing distribution $\bm{p}^l$, hence avoiding path degeneracy. This has been made possible by transport methods \cite{Villani2008, Spantini2017} which are presented in the next section.
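For reference, the sampling and resampling mechanisms of the \gls{pf} methodology referred to above can be sketched in their simplest bootstrap form. This is a generic illustration (multinomial resampling at every observation time) with our own names, not the specific algorithm of \cite{Doucet2011}; `propagate` stands for sampling the (discretized) transition $Q^l$ and `log_lik` for $\log \ell$.

```python
import numpy as np

def bootstrap_pf(particles, propagate, log_lik, ys, rng):
    """Minimal bootstrap particle filter: at each observation time, propagate
    the particles through the transition, weight them by the likelihood and
    resample multinomially. Returns particles approximating the filtering
    distribution at the last observation time."""
    particles = np.asarray(particles, dtype=float)
    for y in ys:
        particles = np.array([propagate(x, rng) for x in particles])
        logw = np.array([log_lik(x, y) for x in particles])
        w = np.exp(logw - logw.max())   # stabilised weights
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]
    return particles
```

The repeated resampling in this loop is precisely what causes the path degeneracy discussed in the text when one stores and smooths whole particle trajectories.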
\subsection{Transport methodology} \label{sec:transport} The general principle of transport methods, when applied to the considered problem, is to compute a deterministic coupling between the \emph{base} probability distribution $\bm{\eta}^l$ of a convenient i.i.d.\ process on $\mathbb{R}^d$ and the \emph{target} distribution $\bm{p}^l$, that is to compute a mapping $\bm{G}^l$ from $\mathbb{R}^{d(M_l T+1)}$ to itself that pushes forward $\bm{\eta}^l$ to $\bm{p}^l$, i.e.\ such that \eqns{ \bm{p}^l(\bm{x}^l) = \bm{G}^l_{\pf} \bm{\eta}^l (\bm{x}^l) \doteq \bm{\eta}^l\big((\bm{G}^l)^{-1}(\bm{x}^l)\big) \big|\det \nabla (\bm{G}^l)^{-1}(\bm{x}^l) \big|, } where $\nabla (\bm{G}^l)^{-1}(\bm{x}^l)$ is the gradient of the inverse transport map $(\bm{G}^l)^{-1}$ evaluated at~$\bm{x}^l \in \mathbb{R}^{d(M_l T+1)}$. In this setting, the distribution $\bm{\eta}^l$ is also assumed to be on $\mathbb{R}^{d(M_l T+1)}$. The method introduced in \cite{Spantini2017} makes use of the specific structure of $\bm{p}^l$, which is induced by the Markov property of the underlying diffusion process~$\bm{X}$, to divide the problem into a sequence of low-dimensional couplings. Each of these deterministic couplings, say $M^l_t$ for some $t \in \mathcal{T}_l \setminus \{T\}$, is a mapping from $\mathbb{R}^d \times \mathbb{R}^d$ to itself which is assumed to take the form \eqns{ M^l_t : (x_t,x_{t+h_l}) \mapsto \big(M^{l,1}_t(x_t,x_{t+h_l}), M^{l,2}_t(x_{t+h_l})\big)^{\tr}, } for some $M^{l,1}_t : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ and $M^{l,2}_t : \mathbb{R}^d \to \mathbb{R}^d$. 
Under additional assumptions on $M^{l,1}_t$ and $M^{l,2}_t$ (see \cref{eq:assumptionM1M2} below), the mapping $M^l_t$ can be characterised by \eqns{ (M^l_t)_{\pf} \bm{\eta}^l_{t,t+h_l} = \bm{\pi}_{t,t+h_l}, } where the probability distribution $\bm{\eta}^l_{t,t+h_l}$ on $\mathbb{R}^d \times \mathbb{R}^d$ is the marginal of $\bm{\eta}^l$ at discretization steps $(t,t+h_l)$ and where $\bm{\pi}_{t,t+h_l}$ is related to the marginal law of $(X_t,X_{t+h_l})$ and is characterised when $t > 0$ by \eqns{ \bm{\pi}_{t,t+h_l}(x_t,x_{t+h_l}) \propto \begin{cases*} \eta^l_t(x_t) K^l\big(M^{l,2}_{t-h_l}(x_t),x_{t+h_l}\big) \ell(x_{t+h_l}, y_{t+h_l}) & if $t+h_l \in \mathbb{N}$ \\ \eta^l_t(x_t) K^l\big(M^{l,2}_{t-h_l}(x_t),x_{t+h_l}\big) & otherwise, \end{cases*} } where $\eta^l_t$ is the marginal of $\bm{\eta}^l$ on $\mathbb{R}^d$ at discretization time $t$, and by \eqns{ \bm{\pi}_{0,h_l}(x_0,x_{h_l}) \propto \begin{cases*} p_0(x_0) K^0(x_0,x_1) \ell(x_0, y_0) \ell(x_1, y_1) & if $l = 0$ \\ p_0(x_0) K^l(x_0,x_{h_l}) \ell(x_0, y_0) & otherwise. \end{cases*} } \begin{remark} The expression of $\bm{\pi}_{t,t+h_l}$ at level $0$ is the one corresponding to the standard state space model presented in \cite{Spantini2017}, that is \eqnsa{ \bm{\pi}_{t,t+1}(x_t,x_{t+1}) & \propto \eta_t(x_t) K\big(M^2_{t-1}(x_t),x_{t+1}\big) \ell(x_{t+1}, y_{t+1}), \qquad t > 0 \\ \bm{\pi}_{0,1}(x_0,x_1) & \propto p_0(x_0) K(x_0,x_1) \ell(x_0, y_0) \ell(x_1, y_1), } where the superscripts $0$ indicating the level have been omitted. \end{remark} The distribution $\bm{\eta}^l$ is a design variable which is chosen to be the normal distribution $\mathcal{N}(0,\bm{I}_{d(M_lT+1)})$ for the sake of convenience (so that $\bm{\eta}^l_{t,t+h_l} = \phi(\cdot\,; 0,\bm{I}_{2d})$ and $\eta^l \doteq \eta^l_t = \phi(\cdot\,; 0,\bm{I}_d)$ do not depend on $t$). 
The two components of the mapping $M^l_t$ are instrumental for the proposed approach since they make it possible to transport samples from a convenient distribution to samples from the filtering or smoothing distributions. The filtering case is straightforward since it holds \cite[Theorem~7.1]{Spantini2017} that $M^{l,2}_t$ pushes forward $\eta^l_{t+h_l}$ to the filtering distribution $p^l_{t+h_l}$. To obtain samples from the smoothing distribution, it is necessary to first embed $M^l_t$ into the identity function on $\mathbb{R}^{d(M_lT+1)}$, which results in a function $G^l_t$ defined as \eqns{ G^l_t : (x_0,x_{h_l},\dots,x_T) \mapsto \big(x_0,\dots,x_{t-h_l}, M^{l,1}_t(x_t,x_{t+h_l}) , M^{l,2}_t(x_{t+h_l}) , x_{t+2h_l},\dots,x_T \big)^{\tr}. } It is also demonstrated in \cite[Theorem~7.1]{Spantini2017} that the desired mapping $\bm{G}^l$, that is the one that pushes forward $\bm{\eta}^l$ to the smoothing distribution $\bm{p}^l$, is defined by the composition \eqnl{eq:compositionOfMaps}{ \bm{G}^l = G^l_0 \circ G^l_{h_l} \circ \dots \circ G^l_{T-h_l}. } \begin{remark} It would be possible to deduce a collection $\{\tilde{G}^{l-1}_t\}_t$ of transport maps at level $l-1$ by approximating pairwise compositions of maps at level $l$ as \eqns{ \tilde{G}^{l-1}_t \approx G^l_t \circ G^l_{t+h_l} } for any $t \in \mathcal{T}_{l-1} \setminus \{T\}$. However, it is less clear in this case which distribution is approximated by this new collection of transport maps. \end{remark} Although the transport maps $M^l_t$ have been identified, their computation is not straightforward.
Assuming that the mappings $M^{l,1}_t$ and $M^{l,2}_t$ are of the form \eqnl{eq:assumptionM1M2}{ M^{l,1}_t(x_{1:d},x'_{1:d}) = \begin{bmatrix} M^{l,1,1}_t(x_{1:d},x'_{1:d}) \\ \vdots \\ M^{l,1,d}_t(x_d,x'_{1:d}) \end{bmatrix} \qquad\mbox{ and }\qquad M^{l,2}_t(x_{1:d}) = \begin{bmatrix} M^{l,2,1}_t(x_{1:d}) \\ \vdots \\ M^{l,2,d}_t(x_d) \end{bmatrix}, } for any $x_{1:d},x'_{1:d} \in \mathbb{R}^d$, i.e.\ loosely speaking, that $M^{l,1}_t$ and $M^{l,2}_t$ are upper triangular, it follows that $M^l_t$ is a $\sigma$-generalised Knothe-Rosenblatt (KR) rearrangement with $\sigma = (2d,2d-1,\dots,1)$, that is, informally, a map whose $i$\textsuperscript{th} component depends only on the variables $x_{2d},\dots,x_i$ and which pushes forward the $i$\textsuperscript{th} conditional of the base distribution to the corresponding conditional of the target distribution (see \cite[Definition~A.3]{Spantini2017} for more details). In order to find $M^l_t$, we first have to solve the following optimisation problem: \eqnl{eq:opt_prob}{ M^{l,*} = \argmin_{M} - \mathbb{E} \bigg( \log \bm{\pi}_{t,t+h_l}(S_{\sigma}(M(\bm{Z}))) + \sum_{i=1}^{2d} \log \partial_i M^i(\bm{Z}) - \log \bm{\eta}^l_{t,t+h_l}(S_{\sigma}(\bm{Z})) \bigg) } subject to $M$ being a monotone increasing lower triangular mapping, where the expectation is \gls{wrt} $\bm{Z} \sim \bm{\eta}^l_{t,t+h_l}$ and where $S_{\sigma}$ is the linear map corresponding to the transposition matrix induced by $\sigma$. It follows that $M^l_t = S_{\sigma} \circ M^{l,*} \circ S_{\sigma}$ since it holds that $S_{\sigma}^{-1} = S_{\sigma}$ for the considered permutation $\sigma$. The above optimisation problem can be solved in different ways, e.g.\ by Gauss quadrature or by having recourse to Monte Carlo techniques \cite{Robert2004, Davis2007}. 
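As a one-dimensional illustration of solving \cref{eq:opt_prob} by Monte Carlo, the sketch below fits a monotone affine map pushing the standard normal base forward to a Gaussian target. The target parameters, the affine parametrisation and the use of \texttt{scipy} are illustrative assumptions only; the maps used in practice are triangular with the structure \cref{eq:assumptionM1M2}.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z = rng.standard_normal(20_000)          # samples from the base eta = N(0, 1)
mu_t, sig_t = 2.0, 0.5                   # hypothetical 1-d Gaussian target

def objective(theta):
    """Sample average of -log pi(M(z)) - log M'(z) for the monotone map
    M(z) = a + exp(c) * z; the base-entropy term of the objective is a
    constant in M and is therefore dropped."""
    a, c = theta
    x = a + np.exp(c) * z
    neg_log_pi = 0.5 * ((x - mu_t) / sig_t) ** 2   # up to an additive constant
    return neg_log_pi.mean() - c                   # -log M'(z) = -c

res = minimize(objective, x0=np.zeros(2), method="Nelder-Mead")
a_hat, s_hat = res.x[0], float(np.exp(res.x[1]))
```

At the optimum the fitted map recovers, up to Monte Carlo error, the exact coupling $M(z) = \mu + \sigma z$ between the two Gaussians.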
The transport map $\bm{G}^l$ enables an approximation of $\bm{p}^l(\varphi)$ to be computed by drawing $N$ samples $\{\bm{z}_i\}_{i=1}^N$ from $\bm{\eta}^l$ and by computing the empirical average \eqns{ \tilde\bm{p}^l(\varphi) \doteq \dfrac{1}{N} \sum_{i=1}^N \varphi\big( \bm{G}^l(\bm{z}_i) \big) \approx \bm{p}^l(\varphi). } The \gls{mse} corresponding to the approximation of $\bm{p}(\varphi)$ by $\tilde\bm{p}^l(\varphi)$ can be expressed as the sum of a variance term and a bias term as follows \eqns{ \mathbb{E}\big( (\tilde\bm{p}^l - \bm{p})(\varphi)^2 \big) = \mathbb{E}\big( (\tilde\bm{p}^l - \bm{p}^l)(\varphi)^2 \big) + (\bm{p}^l - \bm{p})(\varphi)^2. } We propose to further enhance the estimation by having recourse to a multilevel strategy for which transport methods will appear to be particularly well suited. Although the method presented in this section applies in principle to state spaces of any dimension, it is important to note that the computational cost of the corresponding algorithm can be prohibitively high even for moderate dimensions. This issue can however be mitigated by identifying some specific dependence structure between the different dimensions and by applying the same principles as the ones applied here between time steps. \section{Multilevel Monte Carlo} \label{sec:MLMC} We now consider that the discretization \cref{eq:EulerGen} of the \gls{sde} \cref{eq:diffusion} is performed at different discretization levels $l \in \{0,\dots,L\}$ so that $0 < h_L < \dots < h_0 = 1$ for the considered value of $h_l$. This implies that the solution at the coarsest level $l=0$ is computationally efficient but possibly inaccurate whereas the solution at the finest level $L$ is more accurate but slower to compute. The principle of \gls{mlmc} is that the respective advantages of the coarsest and finest levels can be combined within a single estimation procedure by coupling the estimation of $\bm{p}(\varphi)$ for adjacent levels. 
More specifically, the first step is to notice that the smoothing distribution $\bm{p}^L$ corresponding to the discretization at level $L$ can be expressed via a telescopic sum involving the smoothing distributions $\bm{p}^l$ at the other levels $l < L$, that is \eqnl{eq:telescopicSum}{ \bm{p}^L(\varphi) = \sum_{l=0}^L ( \bm{p}^l - \bm{p}^{l-1})(\varphi) } where $\bm{p}^{-1}$ is an arbitrary measure satisfying $\bm{p}^{-1}(\varphi) = 0$, e.g.\ the null measure. \Cref{eq:telescopicSum} motivates the introduction of some i.i.d.\ random variables $\{\bm{X}^0_i\}_{i=1}^{N_0}$ in $\mathbb{R}^{d(T+1)}$ with law $\bm{p}^0$ and some i.i.d.\ random variables $\{\bm{X}^{l,l-1}_i\}_{i=1}^{N_l}$ in the space $\mathbb{R}^{d(M_lT+1)} \times \mathbb{R}^{d(M_{l-1}T+1)}$ expressed as $\bm{X}^{l,l-1}_i = (\bm{X}^l_i,\bm{X}^{l-}_i)$ and such that $\bm{X}^l_i$ and $\bm{X}^{l-}_i$ have marginal laws $\bm{p}^l$ and $\bm{p}^{l-1}$ respectively, for all $l \in \{1,\dots,L\}$. This enables an approximation of $\bm{p}^L(\varphi)$ as \eqnl{eq:telescopicSumApprox}{ \bm{p}^L(\varphi) \approx \tilde\bm{p}^L(\varphi) \doteq \dfrac{1}{N_0} \sum_{i=1}^{N_0} \varphi(\bm{X}^0_i) + \sum_{l=1}^L \dfrac{1}{N_l} \sum_{i=1}^{N_l} \big( \varphi(\bm{X}^l_i) - \varphi(\bm{X}^{l-}_i) \big). } This approximation of $\bm{p}^L$ is useful if the random variables $\bm{X}^0_{i_0}, \bm{X}^{1,0}_{i_1}, \dots, \bm{X}^{L,L-1}_{i_L}$ are independent of each other for all $i_0, i_1, \dots, i_L$ and if their respective components $\bm{X}^l_1$ and $\bm{X}^{l-}_1$ are as correlated as possible for all $l \in \{1,\dots,L\}$ (and hence for all random variables $\bm{X}^l_i$ and $\bm{X}^{l-}_i$ with $i \in \{1,\dots,N_l\}$ since they are i.i.d.). 
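For intuition, the identity \cref{eq:telescopicSum} can be exercised on the simpler problem of estimating $\mathbb{E}(X_T)$ for a linear \gls{sde} without observations; the sketch below couples adjacent levels by sharing the Gaussian increments, and all numerical values (drift, diffusion, sample sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, x0, T = -0.5, 1.0, 1.0, 1.0        # dX = a X dt + b dW, illustrative values

def euler(level, n):
    """n independent Euler paths at step h_l = 2**-level; returns X_T."""
    h = 2.0 ** (-level)
    x = np.full(n, x0)
    for _ in range(int(T / h)):
        x = x + h * a * x + np.sqrt(h) * b * rng.standard_normal(n)
    return x

def coupled_euler(level, n):
    """Coupled (fine, coarse) paths for level >= 1: the coarse path at step
    h_{l-1} = 2 h_l reuses the summed fine-level noise."""
    h = 2.0 ** (-level)
    m = int(T / h)
    u = rng.standard_normal((n, m))
    xf = np.full(n, x0)
    for t in range(m):
        xf = xf + h * a * xf + np.sqrt(h) * b * u[:, t]
    xc = np.full(n, x0)
    for t in range(0, m, 2):
        xc = xc + 2 * h * a * xc + np.sqrt(h) * b * (u[:, t] + u[:, t + 1])
    return xf, xc

L = 6
estimate = euler(0, 20_000).mean()                    # coarsest-level term
for level in range(1, L + 1):
    xf, xc = coupled_euler(level, max(1_000, 20_000 // 2 ** level))
    estimate += (xf - xc).mean()                      # correction (p^l - p^{l-1})(phi)
```

The estimate targets the level-$L$ expectation at a fraction of the cost of sampling at level $L$ only; for this linear \gls{sde} the exact value is $x_0 e^{aT}$.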
In order to determine the number of samples $N_l$ required at each level, we first express the \gls{mse} related to \cref{eq:telescopicSumApprox} as the sum of a variance term and a bias term as \eqnl{eq:mseMl}{ \mathbb{E}\big( (\tilde\bm{p}^L - \bm{p})(\varphi)^2 \big) = \sum_{l=0}^L \mathcal{V}_l + (\bm{p}^L - \bm{p})(\varphi)^2 } with \eqns{ \mathcal{V}_l = \begin{dcases*} \mathbb{E}\Bigg( \bigg[\dfrac{1}{N_0} \sum_{i=1}^{N_0} \varphi(\bm{X}^0_i) - \bm{p}^0(\varphi) \bigg]^2 \Bigg)& if $l=0$ \\ \mathbb{E}\Bigg( \bigg[\dfrac{1}{N_l} \sum_{i=1}^{N_l} \big( \varphi(\bm{X}^l_i) - \varphi(\bm{X}^{l-}_i) \big) - (\bm{p}^l - \bm{p}^{l-1})(\varphi) \bigg]^2 \Bigg)& otherwise. \end{dcases*} } Assuming that the bias is of order $\mathcal{O}(h_L^{\alpha})$ for some integer $\alpha > 0$, it follows that a bias proportional to $\epsilon$ requires \eqns{ L \propto -\dfrac{1}{\alpha}\log_2(\epsilon). } We also assume that the variance $\mathcal{V}_l$ at level $l > 0$ is of order $\mathcal{O}(h_l^{\beta})$ and that the cost $\mathcal{C}_l$ at level $l$ is of order $\mathcal{O}(h_l^{-\zeta})$ for some positive integers $\beta$ and $\zeta$. The number of samples $N_{l}$ at level $l > 1$ can then be determined by optimising the total cost $\mathcal{C} = \sum_l \mathcal{C}_l N_l$ for a given total variance $\mathcal{V} = \sum_l \mathcal{V}_l / N_l$. This leads to \eqnl{eq:Nl}{ N_l = N_1 2^{-(\beta + \zeta)(l-1)/2}, } so that, to obtain a \gls{mse} of order $\epsilon^2$, that is a bias of order $\epsilon$ and a total variance of order $\epsilon^2$, one must take $N_0 \propto \epsilon^{-2}$ and \eqns{ N_1 \propto \epsilon^{-2} \sum_{l = 1}^L 2^{(\zeta - \beta)l/2}. } Therefore, the number of samples and the cost for a \gls{mse} of order $\mathcal{O}(\epsilon^2)$ depends on the respective values of $\beta$ and $\zeta$. For instance, if $\beta > \zeta$, then both $N_1$ and $\mathcal{C}$ are of order $\mathcal{O}(\epsilon^{-2})$. 
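As a minimal sketch of this allocation, with all proportionality constants set to one (in practice they would be estimated from pilot runs):

```python
import math

def mlmc_allocation(eps, alpha=1, beta=2, zeta=1):
    """Level count and per-level sample sizes for a target MSE of order eps**2,
    with all proportionality constants set to one (an illustrative assumption)."""
    L = max(1, math.ceil(-math.log2(eps) / alpha))        # bias ~ 2**(-alpha L) ~ eps
    n0 = eps ** -2                                        # N_0 ~ eps**-2
    n1 = eps ** -2 * sum(2 ** ((zeta - beta) * l / 2) for l in range(1, L + 1))
    ns = [n0] + [n1 * 2 ** (-(beta + zeta) * (l - 1) / 2) for l in range(1, L + 1)]
    return L, [math.ceil(n) for n in ns]

L, ns = mlmc_allocation(0.01)
```

For Euler's scheme with $\beta = 2$ and $\zeta = 1$, the sample sizes decay geometrically by a factor $2^{3/2}$ per level, so that most of the work is done at the coarse levels.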
\subsection{Multilevel particle filter} It is assumed in this section that the interest lies in estimating the filtering distribution $p^L_k$ at time $k$ through the multilevel identity~\cref{eq:telescopicSum}. Since it is generally difficult to sample directly from a reasonable candidate for a coupling of $p^l_k$ and $p^{l-1}_k$, one solution is to adopt a \gls{pf} strategy within the \gls{ml} formulation. In order to obtain samples that are correlated between two adjacent levels, a special joint Markov transition $Q^{l,l-1}$ can be devised together with a resampling procedure that retains the correlation of the samples. This is the principle of the \gls{mlpf}, which is briefly discussed here. Assume that we have some collections of samples $\{x^l_{i,k-1}\}_{i=1}^{N_l}$ and $\{x^{l-}_{i,k-1}\}_{i=1}^{N_l}$ at time $k-1$ approximating $p^l_{k-1}$ and $p^{l-1}_{k-1}$ respectively. For all $i \in \{1,\dots,N_l\}$ and all $l \in \{1,\dots,L\}$, samples $x^l_{i,k}$ and $x^{l-}_{i,k}$ at time $k$ are produced through the Markov transition $Q^{l,l-1}((x^l_{i,k-1},x^{l-}_{i,k-1}),\cdot)$ as follows: \begin{enumerate}[label=(\roman*)] \item Simulate \cref{eq:EulerGen} starting from the initial condition $x_0 = x^l_{i,k-1}$ over $M_l$ time steps, denote by $x^l_{i,k}$ the obtained state of the process and by $\{u^l_t\}_{t\in\{0,h_l,\dots,1-h_l\}}$ the collection of realisations of the perturbation $U^l_t$ drawn during the procedure. \item Using the initial condition $x^{l-}_0 = x^{l-}_{i,k-1}$, define $x^{l-}_{i,k}$ as the result of the deterministic recursion \eqns{ x^{l-}_{t+h_{l-1}} = x^{l-}_t + h_{l-1} a(x^{l-}_t) + \sqrt{h_l} b(x^{l-}_t) ( u^l_{t} + u^l_{t+h_l} ), } for any $t \in \{0,h_{l-1},\dots,1-h_{l-1}\}$. This recursion is meaningful since $h_{l-1} = 2h_l$, so that $\sqrt{h_l}( u^l_{t} + u^l_{t+h_l} )$ corresponds to the noise in the step from $t$ to $t+h_{l-1}$ induced by $\{u^l_t\}_t$ and has the same distribution as the noise of a single step of size $h_{l-1}$.
\end{enumerate} This procedure yields $N_l$ pairs of correlated samples $\{(x^l_{i,k}, x^{l-}_{i,k})\}_{i=1}^{N_l}$ according to the predictive distribution at time $k$ given observations up to time $k-1$. The information provided by the observation $y_k$ is simply taken into account by attributing the respective weights $w^l_{i,k}$ and $w^{l-}_{i,k}$ to the samples $x^l_{i,k}$ and $x^{l-}_{i,k}$ in a similar fashion: \eqns{ w^l_{i,k} = \dfrac{\ell(x^l_{i,k}, y_k)}{ \sum_{j=1}^{N_l} \ell(x^l_{j,k}, y_k) } \qquad\mbox{ and }\qquad w^{l-}_{i,k} = \dfrac{\ell(x^{l-}_{i,k}, y_k)}{ \sum_{j=1}^{N_l} \ell(x^{l-}_{j,k}, y_k) }. } Following the weighting of the samples, the difference $(p^l_k - p^{l-1}_k)(\varphi)$ can be estimated via \eqns{ (p^l_k - p^{l-1}_k)(\varphi) \approx \sum_{i=1}^{N_l} \Big( w^l_{i,k}\varphi\big(x^l_{i,k}\big) - w^{l-}_{i,k}\varphi\big(x^{l-}_{i,k}\big) \Big). } Although this approximation would behave well in general, most of the sample weights would tend to $0$ if we were to apply the same procedure repeatedly in order to reach the next observation times, resulting in a rapid increase of the empirical variance. The usual way to address this problem in the standard \gls{pf} formulation is to perform resampling, that is to draw new samples from the old ones according, for instance, to the multinomial distribution induced by the weights. Applying the same approach to the \gls{mlpf} would result in the loss of the correlation between the samples at adjacent levels. A \emph{coupled} resampling is used instead as follows. For all $i \in \{1,\dots,N_l\}$ and all $l \in \{1,\dots,L\}$: \begin{enumerate}[label=(\roman*)] \item \label{it:coupledIndex} With probability $\rho^l_k = \sum_{i=1}^{N_l} \min\{w^l_{i,k}, w^{l-}_{i,k}\}$ draw the index $i^l$ according to the probability mass function (p.m.f.) $\hat{m}^l_k$ on $\{1,\dots,N_l\}$ characterised by \eqns{ \hat{m}^l_k(j) = \dfrac{1}{\rho^l_k} \min\{w^l_{j,k}, w^{l-}_{j,k}\} } and define $i^{l-} = i^l$. 
\item If \ref{it:coupledIndex} is not selected (with probability $1-\rho^l_k$), draw the indices $i^l$ and $i^{l-}$ independently according to the p.m.f.s $m^l_k$ and $m^{l-}_k$ on $\{1,\dots,N_l\}$ characterised by \eqns{ m^l_k(j) \propto w^l_{j,k} - \min\{w^l_{j,k}, w^{l-}_{j,k}\} \qquad\mbox{ and }\qquad m^{l-}_k(j) \propto w^{l-}_{j,k} - \min\{w^l_{j,k}, w^{l-}_{j,k}\}. } \item Define the new pair of samples $(\tilde{x}^l_{i,k},\tilde{x}^{l-}_{i,k})$ as $(x^l_{i^l,k},x^{l-}_{i^{l-},k})$. \end{enumerate} Although the \emph{coupled} resampling addresses the problem of reducing the empirical variance without completely losing the correlation between samples at adjacent levels, it nevertheless has a negative impact on the \gls{ml} rate. Indeed, as demonstrated in \cite{Jasra2015}, one needs $\beta > 2\zeta$ to obtain a cost of order $\mathcal{O}(\epsilon^{-2})$ for a \gls{mse} of order $\mathcal{O}(\epsilon^2)$. In the case where $\beta = 2\zeta$, e.g.\ for Euler's scheme ($\zeta = 1$) with $\beta = 2$, the cost is of order $\mathcal{O}(\epsilon^{-2} \log(\epsilon)^2)$. Also, even if the \gls{mlpf} can handle smoothing on a short time window, i.e.\ it can successfully approximate the distribution of $\{X_{t'}\}_{t' \in \{t-s,t-s+1,\dots,t\}}$ given $y_0,\dots,y_t$ for small values of $s \in \mathbb{N}$, the error in the approximation of the full smoothing distribution would increase in time because of the path degeneracy effect. Indeed, resampling tends to multiply the samples with higher weights so that, after a certain number of time steps, all samples will be descendants of the same earlier sample.
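The coupled resampling described above is a maximal coupling of the two weight vectors; a vectorised sketch follows, with illustrative weights, and without handling the degenerate case $\rho^l_k = 1$ of identical weight vectors.

```python
import numpy as np

def coupled_resample(w_fine, w_coarse, n, rng):
    """Draw n index pairs whose marginals follow w_fine and w_coarse while the
    indices coincide with the maximal probability rho = sum(min(w_f, w_c))."""
    w_min = np.minimum(w_fine, w_coarse)
    rho = w_min.sum()                      # assumed < 1, i.e. w_fine != w_coarse
    m_hat = w_min / rho
    m_f = (w_fine - w_min) / (1.0 - rho)   # residual p.m.f.s
    m_c = (w_coarse - w_min) / (1.0 - rho)
    k = len(w_fine)
    common = rng.random(n) < rho
    i_f = np.where(common,
                   rng.choice(k, size=n, p=m_hat),
                   rng.choice(k, size=n, p=m_f))
    i_c = np.where(common, i_f, rng.choice(k, size=n, p=m_c))
    return i_f, i_c, rho

rng = np.random.default_rng(2)
w_f = np.array([0.5, 0.3, 0.2])            # illustrative weight vectors
w_c = np.array([0.2, 0.3, 0.5])
i_f, i_c, rho = coupled_resample(w_f, w_c, 50_000, rng)
```

The construction preserves the marginals exactly since $\rho^l_k \hat{m}^l_k(j) + (1-\rho^l_k) m^l_k(j) = w^l_{j,k}$.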
\subsection{Multilevel transport} In order to avoid the path degeneracy inherent to any \gls{pf} approach and to regain the \gls{ml} rate lost through the coupled resampling of the \gls{mlpf}, we propose to compute samples from the distributions $\bm{p}^l$ via the transport maps $\bm{G}^l$ characterised by $\bm{p}^l = \bm{G}^l_{\pf} \bm{\eta}^l$ with $\bm{\eta}^l = \phi(\cdot\,; 0,\bm{I}_{d(M_lT+1)})$ for all $l \in \{0,\dots,L\}$. The specific procedure is described as follows. For all $i \in \{1,\dots,N_l\}$: \begin{enumerate}[label=(\roman*)] \item draw a sample $\bm{z}_i^l = (z_{i,0}^l,z_{i,1}^l,\dots,z_{i,M_lT}^l)$ from $\bm{\eta}^l$ \item map $\bm{z}_i^l$ through $\bm{G}^l$ to obtain a sample $\bm{x}_i^l = \bm{G}^l(\bm{z}_i^l)$ from $\bm{p}^l$ \item define a \emph{thinned} sample $\bm{z}_i^{l-} = (z_{i,0}^l,z_{i,2}^l,\dots,z_{i,M_lT}^l)$ \item map $\bm{z}_i^{l-}$ through $\bm{G}^{l-1}$ to obtain a sample $\bm{x}_i^{l-} = \bm{G}^{l-1}(\bm{z}_i^{l-})$ from $\bm{p}^{l-1}$ \end{enumerate} This simple procedure yields two collections $\{\bm{x}_i^l\}_i$ and $\{\bm{x}_i^{l-}\}_i$ of samples drawn from a joint distribution that obviously has marginals $\bm{p}^l$ and $\bm{p}^{l-1}$ and that correlates adjacent levels as desired. As a motivation for this coupling, note that it is optimal in terms of squared Wasserstein distance with the Euclidean metric in the case where $d=1$ and assuming that the transport maps can be computed exactly. The efficiency of the approach comes from the fact that the transport maps $\bm{G}^l$ have to be computed once only. Given the computation of the maps, it is relatively fast to obtain the samples. Although there is, strictly speaking, no path degeneracy in the considered approach, there might be some accumulation of error through time induced by the composition of transport maps defining $\bm{G}^l$ as in \cref{eq:compositionOfMaps}. 
This accumulation of error will however be seen to be milder than the one experienced by the \gls{pf} in \cref{sec:numericalStudy}. It is assumed that the procedure underlying the computation of the transport maps is deterministic, so that there are no undesired correlations between samples from $\bm{X}^{l,l-1}$ and $\bm{X}^{l',l'-1}$ when $l \neq l'$. Further neglecting the numerical error in the computed transport maps, it follows that the expression \cref{eq:mseMl} of the \gls{mse} holds for the considered approach. Before proceeding to a numerical study, the legitimacy of the proposed approach is verified for the linear-Gaussian case. Consider the \gls{sde} \cref{eq:diffusion} in dimension $d=1$ and with $p_0 = \delta_{x_0}$ (so that the observation at time $t=0$ has no impact). The corresponding filtering distribution at time $k \in \mathbb{N}$ and at level $l \in \{0,\dots,L\}$ simplifies to \eqns{ p^l_k(x_k) \propto \int \prod_{n=1}^k \big[ Q^l(x_{n-1},x_n) \ell(x_n, y_n) \big] \mathrm{d} x_{1:k-1} } for any $x_k \in \mathbb{R}^d$. Denote by $\hat{G}^l_k \doteq M^{l,2}_{k-h_l}$ the transport map from the base distribution $\eta^l = \phi(\cdot\,; 0,1)$ to $p^l_k$, i.e.\ such that $(\hat{G}^l_k)_{\pf} \eta^l = p^l_k$. If $F_{\eta^l}$ and $F_{l,k}$ denote the cumulative distribution functions (c.d.f.s) of $\eta^l$ and $p^l_k$ respectively, then it holds that $\hat{G}^l_k = F^{-1}_{l,k} \circ F_{\eta^l}$, where $F^{-1}$ is the generalised inverse \eqns{ F^{-1}(u) = \inf\{x \in \mathbb{R} : F(x) \geq u\}, \qquad \forall u \in [0,1].
} Considering i.i.d.\ random variables $Z_i \sim \eta^l$ for $i \in \{1,\dots,N_l\}$, the objective is to determine the order of \eqns{ \mathcal{V}_{l,k} = \Var\bigg( \dfrac{1}{N_l} \sum_{i=1}^{N_l} \Big( \varphi\big(\hat{G}^l_k(Z_i) \big) - \varphi\big( \hat{G}^{l-1}_k(Z_i) \big) \Big) \bigg) } \gls{wrt} $h_l$ for any function $\varphi$ that is at the intersection of the set $\mathcal{B}_b(\mathbb{R})$ of bounded measurable functions and of the set $\Lip(\mathbb{R})$ of Lipschitz functions. Since the $Z_i$'s are i.i.d.\ and by definition of $\hat{G}^l_k$, it holds that \eqnsa{ \mathcal{V}_{l,k} & = \dfrac{1}{N_l} \Var\Big( \varphi\big(\hat{G}^l_k(Z) \big) - \varphi\big( \hat{G}^{l-1}_k(Z) \big) \Big) \\ & = \dfrac{1}{N_l} \Var\Big( \varphi\big(F^{-1}_{l,k}(U) \big) - \varphi\big( F^{-1}_{l-1,k}(U) \big) \Big) \\ & \leq \dfrac{c}{N_l} \mathbb{E}\Big( \big[F^{-1}_{l,k}(U) - F^{-1}_{l-1,k}(U) \big]^2 \Big) } for some $c > 0$, with $Z \sim \eta^l$ and $U \sim \mathcal{U}([0,1])$, where the inequality comes from the fact that $\varphi \in \Lip(\mathbb{R})$. The linear case is addressed in the following theorem as a proof of concept. \begin{theorem} \label{res:orderVarLinearGaussian} Let $\bm{X}$ be a $1$-dimensional diffusion process with linear drift and constant diffusion coefficient observed at all integer times through a linear-Gaussian likelihood $\ell(x, \cdot) = \phi(\cdot\,; x,\tau^2)$ for some $\tau > 0$; then the variance $\mathcal{V}_{l,k}$ obtained at level $l$ for Euler's method with discretization $h_l = 2^{-l}$ and with the transport-based approach satisfies \eqns{ \mathcal{V}_{l,k} = \mathcal{O}(h_l^2) } for any $k \in \{1,\dots,T\}$.
\end{theorem} \begin{proof} The objective is to compute the order of \eqns{ F^{-1}_{l,k}(u) - F^{-1}_{l-1,k}(u) = \hat\mu_{l,k} - \hat\mu_{l-1,k} + \sqrt{2}\erf^{-1}(2u-1) (\hat\sigma_{l,k} - \hat\sigma_{l-1,k}) } \gls{wrt} $h_l$, where $\erf^{-1}$ is the inverse error function and where the updated mean $\hat\mu_{l,k}$ and standard deviation $\hat\sigma_{l,k}$ at level $l$ and at time $k$ can be found through the Kalman filter to be \eqnsa{ \hat\mu_{l,k} = \mu_{l,k} + \dfrac{\sigma^2_{l,k}(y_k - \mu_{l,k})}{\tau^2 + \sigma_{l,k}^2} \qquad\mbox{ and }\qquad \hat\sigma_{l,k}^2 = \dfrac{\tau^2\sigma_{l,k}^2}{\tau^2 + \sigma_{l,k}^2} } with $\mu_{l,k}$ and $\sigma_{l,k}$ the predicted mean and standard deviation expressed as \eqns{ \mu_{l,k} = (1 + h_l a)^{M_l}\hat\mu_{l,k-1} \qquad\mbox{ and }\qquad \sigma_{l,k}^2 = (1 + h_l a)^{2M_l}\hat\sigma^2_{l,k-1} + h_l b^2 \sum_{i=0}^{M_l-1}(1 + h_l a)^{2i}. } First, the predicted mean $\mu_{l,k}$ and standard deviation $\sigma_{l,k}$ have to be expanded to second order. The main term appearing in the expression of $\mu_{l,k}$ is \eqns{ (1 + h_l a)^{M_l} = \sum_{n=0}^{M_l} \dfrac{a^n}{n!} \prod_{i=0}^{n-1} \big[ h_l (M_l - i)\big] = \sum_{n=0}^{M_l} \dfrac{a^n}{n!} + \dfrac{h_l}{2} \sum_{n=2}^{M_l} \dfrac{a^n}{(n-2)!} + \mathcal{O}(h_l^2). } For the sake of compactness we define \eqns{ A_m = \sum_{n=0}^m \dfrac{a^n}{n!} \qquad\mbox{ and }\qquad B_m = \sum_{n=2}^m \dfrac{a^n}{(n-2)!}.
} Assuming that \eqnmla{eq:proof:assumedForm}{ \hat{\mu}_{l,k-1} & = c_{k-1} + r_{k-1,l} h_l + \mathcal{O}(h_l^2) \\ \hat{\sigma}_{l,k-1} & = c'_{k-1} + r'_{k-1,l} h_l + \mathcal{O}(h_l^2) } where $c_{k-1}$ and $c'_{k-1}$ do not depend on $l$, and where $r_{k-1,l}$ and $r'_{k-1,l}$ are of order $\mathcal{O}(1)$ \gls{wrt} $h_l$, it follows that \eqnsa{ \mu_{l,k} & = \hat\mu_{l,k-1} \Big( A_{M_l} + \dfrac{h_l}{2} B_{M_l} \Big) + \mathcal{O}(h_l^2) \\ & = c_{k-1} A_{M_l} + r_{k-1,l} h_l A_{M_l} + h_l\dfrac{c_{k-1}}{2} B_{M_l} + \mathcal{O}(h_l^2). } Recalling that $M_l = 2^l$ and noticing that \eqns{ A_{M_l} = e^a - \sum_{n \geq M_l + 1} \dfrac{a^n}{n!} = e^a + o(h_l) } with $o(h_l)$ referring to terms that are negligible in front of $h_l$, $\mu_{l,k}$ can be seen to be of the same form as $\hat\mu_{l,k-1}$, that is \eqns{ \mu_{l,k} = c_{k-1} e^a + r_{k-1,l} h_l e^a + h_l\dfrac{c_{k-1}}{2} B_{M_l} + \mathcal{O}(h_l^2). } The same type of expansion can be used for the first term in the variance $\sigma_{l,k}^2$ as follows: \eqns{ \sigma_{l,k}^2 = c'^2_{k-1} e^a + 2c'_{k-1} r'_{k-1,l} h_l e^a + h_l\dfrac{c'^2_{k-1}}{2} B_{2^{l+1}} + b^2 h_l \sum_{i=0}^{M_l-1}(1 + h_l a)^{2i} + \mathcal{O}(h_l^2). } The second term has however a slightly different form and must be studied on its own: \eqnl{eq:secondTermStdDev}{ h_l\sum_{i=0}^{M_l-1}(1 + h_l a)^{2i} = \sum_{n = 0}^{2M_l - 2} \Bigg( h_l^{n+1} a^n \sum_{i=\lceil n/2 \rceil}^{M_l - 1} \binom{2i}{n} \Bigg) } where it appears that \eqns{ h_l^{n+1} |a|^n\sum_{i=\lceil n/2 \rceil}^{M_l-1} \binom{2i}{n} \leq h_l^{n+1} |a|^n M_l \dfrac{(2M_l)^n}{n!} = \dfrac{(2|a|)^n}{n!} } since each of the at most $M_l$ summands satisfies $\binom{2i}{n} \leq (2M_l)^n/n!$ and since $h_l M_l = 1$, where the r.h.s.\ tends to $0$ faster than exponentially when $n \to \infty$.
It follows that \cref{eq:secondTermStdDev} is of the form $s + o(h_l)$ where $s$ does not depend on $l$, so that \eqns{ \sigma_{l,k}^2 = c'^2_{k-1} e^a + 2c'_{k-1} r'_{k-1,l} h_l e^a + h_l\dfrac{c'^2_{k-1}}{2} B_{2^{l+1}} + s b^2 + \mathcal{O}(h_l^2), } from which the expansion of the standard deviation $\sigma_{l,k}$ can be expressed as \eqns{ \sigma_{l,k} = \sqrt{C_l} + \dfrac{h_l}{2\sqrt{C_l}}\bigg( 2c'_{k-1} r'_{k-1,l} e^a + \dfrac{c'^2_{k-1}}{2} B_{2^{l+1}} \bigg) + \mathcal{O}(h_l^2) } where $C_l = e^a c'^2_{k-1} + s b^2$ is the term of order $\mathcal{O}(1)$ in $\sigma_{l,k}^2$. We conclude that \eqns{ \mu_{l,k} - \mu_{l-1,k} = h_l \big(r_{k-1,l} A_{2^l} - 2 r_{k-1,l-1} A_{2^{l-1}} \big) + h_l \dfrac{c_{k-1}}{2} \big(B_{2^l} - 2B_{2^{l-1}} \big) + \mathcal{O}(h_l^2) = \mathcal{O}(h_l). } Similarly, it holds that $\sigma_{l,k} - \sigma_{l-1,k} = \mathcal{O}(h_l)$. Proceeding to the updated terms, it holds that \eqnsa{ \sigma^2_{l,k}(y_k - \mu_{l,k}) & = (e^a c'^2_{k-1} + s b^2) (y_k - c_{k-1} e^a) + \mathcal{O}(h_l) \\ \tau^2 + \sigma^2_{l,k} & = \tau^2 + (e^a c'^2_{k-1} + s b^2) + \mathcal{O}(h_l), } so that \eqnsa{ \hat\mu_{l,k} & = \mu_{l,k} + \dfrac{\sigma^2_{l,k}(y_k - \mu_{l,k})}{\tau^2 + \sigma^2_{l,k}} = c_{k-1} e^a + \dfrac{(e^a c'^2_{k-1} + s b^2) (y_k - c_{k-1} e^a)}{\tau^2 + (e^a c'^2_{k-1} + s b^2)} + \mathcal{O}(h_l)\\ \hat\sigma^2_{l,k} & = \dfrac{\tau^2\sigma^2_{l,k}}{\tau^2 + \sigma^2_{l,k}} = \dfrac{\tau^2 (e^a c'^2_{k-1} + s b^2)}{\tau^2 + (e^a c'^2_{k-1} + s b^2)} + \mathcal{O}(h_l). } It then follows by induction that $\hat\mu_{l,k}$ and $\hat\sigma_{l,k}$ have the form assumed in \cref{eq:proof:assumedForm} for all $k \in \{0,\dots,T\}$, the result being obvious for $k=0$.
Combining the different results, it can easily be verified that \eqnsa{ \hat\mu_{l,k} - \hat\mu_{l-1,k} = \mathcal{O}(h_l) \qquad\mbox{ and }\qquad \hat\sigma_{l,k} - \hat\sigma_{l-1,k} = \mathcal{O}(h_l), } which yields $\mathcal{V}_{l,k} = \mathcal{O}(h_l^2)$ as desired. This concludes the proof of the \lcnamecref{res:orderVarLinearGaussian}. \end{proof} \section{Numerical study} \label{sec:numericalStudy} In this section, the effectiveness of the proposed method is shown in simulations for different \gls{sde} models. Numerical verifications of some of the considered assumptions are also provided. The scenarios considered for simulation are the same as for the \gls{mlpf} in \cite{Jasra2015}, so that results can be compared. \subsection{Linear Gaussian} The first simulation study is performed on the linear-Gaussian case with $a=-0.1$, $b=1$ and with a likelihood $\ell(x, \cdot) = \phi(\cdot\,; x,\tau^2)$ with $\tau = 0.25$, which corresponds to an observation process of the form \eqnl{eq:linearObservation}{ Y_k \given X_k \sim \mathcal{N}(X_k,\tau^2). } The initial distribution is $p_0 = \phi(\cdot\,; 0,\sigma)$ with $\sigma = 1$ and the final time is $T=4$. A realisation of the state and observation processes is shown in \cref{fig:fourLevels} together with the mean and some percentiles corresponding to samples drawn from the smoothing distribution. The involved transport maps\footnote{The solver used for the determination of the transport maps is the one provided at \url{http://transportmaps.mit.edu/docs/index.html}}, say $\mathsf{T}$ (written in sans serif to avoid a clash with the final time $T$), are assumed to be triangular maps whose $i$\textsuperscript{th} component $\mathsf{T}^{(i)}$ takes the form \eqns{ \mathsf{T}^{(i)}(x_1,\dots,x_i) = a_i(x_1,\dots,x_{i-1}) + \int_0^{x_i} b_i(x_1, \dots, x_{i-1},t)^2 \mathrm{d} t } where $a_i$ and $b_i$ are real-valued functions defined on $\mathbb{R}^{i-1}$ and $\mathbb{R}^i$ respectively.
For any $j \leq i-1$, it is assumed that the functions $x_j \mapsto a_i(x_1,\dots,x_{i-1})$ and $x_j \mapsto b_i(x_1,\dots,x_{i-1},t)$ are Hermite Probabilists' functions extended with constant and linear components whereas the function $t \mapsto b_i(x_1,\dots,x_{i-1},t)$ is assumed to be a Hermite Probabilists' function extended with a constant component only. Then, the functions $a_i$ and $b_i$, when expressed as functions from $\mathbb{R}^{i-1}$ and $\mathbb{R}^i$ respectively, take the form \begin{align*} a_i(x_1,\dots,x_{i-1}) & = \sum_{k = 1}^{2d(o_{\mathrm{m}}+1)} c_k \Phi_k(x_1,\dots,x_{i-1}) \\ b_i(x_1,\dots,x_{i-1},t) & = \sum_{k = 1}^{2do_{\mathrm{m}}} c'_k \Psi_k(x_1,\dots,x_{i-1},t) \end{align*} with $o_{\mathrm{m}}$ the map order, with $\{c_k\}_{k \geq 1}$ and $\{c'_k\}_{k \geq 1}$ some collections of real coefficients and with $\Phi_k$ and $\Psi_k$ basis functions based on the above mentioned Hermite Probabilists' functions. In the simulations, the case $o_{\mathrm{m}} = 4$ is considered. The integration in \cref{eq:opt_prob} is performed using a Gauss quadrature of order $10$ in each dimension. The optimisation relies on the Newton-CG algorithm (Newton algorithm using the conjugate-gradient method for each step) with a tolerance of $10^{-4}$. 
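The monotonicity of this parametrisation is immediate, since the partial derivative of the $i$\textsuperscript{th} component \gls{wrt} its last argument is the square $b_i^2$; a minimal Python sketch with arbitrary polynomial $a_i$ and $b_i$ in place of the Hermite expansions:

```python
import numpy as np

def map_component(x_prev, x_i, a_coef=(0.5, 0.0), b_coef=(1.0, 0.2), n_grid=201):
    """T(x_prev, x_i) = a(x_prev) + int_0^{x_i} b(x_prev, t)**2 dt with polynomial
    a and b (stand-ins for the Hermite expansions); monotone in x_i since the
    integrand is a square."""
    a_val = np.polyval(a_coef, x_prev)
    t = np.linspace(0.0, x_i, n_grid)
    g = np.polyval(b_coef, t + 0.1 * x_prev) ** 2            # b(x_prev, t)**2 >= 0
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))   # trapezoidal rule
    return a_val + integral

values = [map_component(0.3, xi) for xi in np.linspace(-2.0, 2.0, 9)]
```

Monotonicity in the last variable is what guarantees the invertibility required for the pushforward of the base distribution.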
\begin{figure} \centering \includegraphics[trim=70pt 70pt 100pt 90pt,clip,width=.8\textwidth]{fourLevels2.pdf} \caption{Mean and percentiles of samples generated according to the target distribution of the linear-Gaussian \gls{sde} at four consecutive levels (blue line: state of the process; red dots: observations; black line: samples mean; red areas: $1$-$99$, $5$-$95$ and $20$-$80$ percentiles).} \label{fig:fourLevels} \end{figure} \subsubsection*{\gls{mlmc} rates} The behaviour of the numerical scheme for different levels is displayed in \cref{fig:varDiff_cost}, where $\Var(\varphi(\bm{X}^l) - \varphi(\bm{X}^{l-1}))$ is considered with $\varphi(x_{0:T}) = x_T$ and where the cost is the computational time required to obtain one sample at a given level $l$. This result confirms the applicability of multilevel techniques by showing that $\mathcal{V}_l = \mathcal{O}(h_l^2)$ and $\mathcal{C}_l = \mathcal{O}(h_l^{-1})$, that is $\beta = 2$ and $\zeta = 1$. One important point is that the time spent to obtain samples at a high level is small when compared to the time required to compute the underlying transport map. For instance, it takes about $25\mathrm{s}$ to calculate the transport map at level $5$ while a sample is obtained in $0.00025\mathrm{s}$, so that $100{,}000$ samples can be drawn in the time spent to compute the map. It is therefore necessary to verify that the gain obtained with the multilevel approach is not compensated by the additional time spent computing more transport maps (one for each level).
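This trade-off can be made concrete with a small break-even computation: writing the total cost of each scheme as a set-up (map computation) time plus a per-sample cost, the sample count beyond which the multilevel approach wins solves a linear equation. The timings below are purely hypothetical placeholders of a plausible order of magnitude.

```python
def break_even_samples(setup_ml, setup_hl, cost_ml, cost_hl):
    """Sample count n solving setup_ml + n * cost_ml = setup_hl + n * cost_hl,
    i.e. the point beyond which the multilevel scheme is cheaper overall."""
    assert cost_hl > cost_ml, "multilevel must be cheaper per effective sample"
    return (setup_ml - setup_hl) / (cost_hl - cost_ml)

# hypothetical timings (seconds): multilevel computes more maps but samples cheaply
n_star = break_even_samples(setup_ml=10.0, setup_hl=6.0, cost_ml=1e-4, cost_hl=2.5e-4)
```

Beyond a few tens of thousands of samples, the extra map-computation time of the multilevel scheme is amortised.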
\begin{figure} \centering \subfloat[Linear-Gaussian]{% \label{fig:varDiff_cost}% \includegraphics[width=0.495\textwidth]{varDiff_cost_hl_orders.pdf}% }% \subfloat[Langevin]{% \label{fig:varDiff_cost_LD}% \includegraphics[width=0.495\textwidth]{varDiff_cost_hl_LD.pdf}% }\\ \subfloat[Non-linear diffusion]{% \label{fig:varDiff_cost_NLD}% \includegraphics[width=0.495\textwidth]{varDiff_cost_hl_NLD_L5.pdf}% }% \caption{Variance of $\varphi(\bm{X}^l) - \varphi(\bm{X}^{l-1})$ with $\varphi(x_{0:T}) = x_T$ and cost as a function of $h_l$ (Blue dashed line: poly.\ fit of order $2$; green dashed line: least-square fitting of the form $a/h_l$). The experimental (exp) costs for two map-approximation orders are indicated in the linear-Gaussian case together with their corresponding least-square fittings (fit).} \label{fig:varDiff_cost_all} \end{figure} \subsubsection*{Multilevel vs computation at the highest level} The objective with the multilevel approach is to reduce the computational cost to reach a given error when compared to computations at the highest level only. This aspect is verified in \cref{fig:MLvsHL_LG} where the multilevel approach appears to outperform the one based on samples at the highest level. The above-mentioned fact that calculation of the transport maps might be time-consuming is shown to be compensated by the efficiency of the multilevel approach within a reasonable time interval. This is in spite of the fact that the multilevel approach nearly doubles the number of maps to be computed. In particular, in the considered linear-Gaussian scenario, the average computational cost for the calculation of the maps in the multilevel and highest-level approaches is $10.76\mathrm{s}$ and $6.15\mathrm{s}$ respectively. More specifically, \cref{fig:MLvsHL} is obtained by first computing all the required transport maps and then by generating samples by batches of $1000$.
The multilevel estimate is obtained by sweeping the different levels sequentially until the predetermined number $N_l$ of samples has been computed at level $l$. The number $N_0$ of samples at level $0$ is fixed to $2^{13} \times 1000$ for all the considered \glspl{sde}, that is $2^{13}$ batches of $1000$ samples. The number of samples at level $1$ is determined by the ratio between the variances at levels $0$ and $1$, and the numbers of samples for the subsequent levels are computed through \cref{eq:Nl}. \begin{figure} \centering \subfloat[Linear-Gaussian $\varphi(x_{0:T}) = x_T$]{% \label{fig:MLvsHL_LG}% \includegraphics[width=0.495\textwidth]{Error_logy_LG_L4_MC50_new.pdf}% }% \subfloat[Langevin $\varphi(x_{0:T}) = x_T$]{% \label{fig:MLvsHL_LD}% \includegraphics[width=0.495\textwidth]{Error_logy_LD_L4_MC50_new.pdf}% }\\ \subfloat[Langevin $\varphi(x_{0:T}) = \sum_{t=0}^T e^{-\kappa(T-t)}x_t$]{% \label{fig:MLvsHL_LD_2}% \includegraphics[width=0.495\textwidth]{Error_logy_LD_L4_MC50_exp.pdf}% }% \subfloat[Non-linear diffusion $\varphi(x_{0:T}) = x_T$]{% \label{fig:MLvsHL_NLD}% \includegraphics[width=0.495\textwidth]{Error_logy_NLD_L4_MC50_new.pdf}% }% \caption{MSE vs.\ cost for the multilevel approach compared with computations at the highest level $L=4$ (semi-log scale, averaged over 50 Monte Carlo simulations). The first $200$ iterations are not displayed.} \label{fig:MLvsHL} \end{figure} \subsection[Langevin SDE]{Langevin \gls{sde}} We now consider a Langevin \gls{sde} of the form \eqns{ \mathrm{d} X_t = \dfrac{1}{2} \nabla \log \mathcal{S}_{\nu} (X_t) \mathrm{d} t + b\, \mathrm{d} W_t, \qquad t \in [0,T] } where $\mathcal{S}_{\nu}$ is the Student's t distribution with $\nu=10$ degrees of freedom and with $b = 1$. The observations are generated according to \eqnl{eq:obsExpVar}{ Y_k \given X_k \sim \mathcal{N}\big(0,\tau^2 \exp(X_k)\big) } with $\tau = 1$. The initial distribution is the same as in the previous example.
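The drift of this Langevin \gls{sde} is available in closed form, since $\nabla \log \mathcal{S}_{\nu}(x) = -(\nu+1)x/(\nu+x^2)$; a short sketch with a finite-difference check:

```python
import numpy as np

def langevin_drift(x, nu=10.0):
    """Drift 0.5 * d/dx log S_nu(x) of the Langevin SDE for a Student's t target."""
    return -0.5 * (nu + 1.0) * x / (nu + x * x)

def log_student(x, nu=10.0):
    """log S_nu(x) up to an additive constant."""
    return -0.5 * (nu + 1.0) * np.log(1.0 + x * x / nu)
```

The drift is linear near the origin, with slope $-(\nu+1)/(2\nu)$, and decays like $-(\nu+1)/(2x)$ in the tails, which reflects the heavy tails of the Student's t stationary law.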
A realisation of the considered Langevin \gls{sde} is shown in \cref{fig:fourLevels_LD} together with mean and percentiles of samples obtained using transport maps. It appears clearly from this figure that the observation process characterised by \cref{eq:obsExpVar} is less informative than the one modelled by \cref{eq:linearObservation}. \Cref{fig:varDiff_cost_LD} shows that the considered Langevin \gls{sde} also displays a variance of order $\mathcal{O}(h_l^2)$, although the actual values are much higher than in the linear-Gaussian case, which might be due to both the nature of the \gls{sde} and the quality of the approximation of the transport maps. A comparison of the computational efficiency of the multilevel approach is given in \cref{fig:MLvsHL_LD} where the proposed method is seen to outperform the approach based on computations at the highest level. The time needed to initialise the latter, i.e.\ the time to compute the transport map at level $L=4$ and to perform the first $200$ iterations, is however slightly less affected than with the multilevel approach. \Cref{fig:MLvsHL_LD_2} shows the performance of the proposed approach with a different functional, that is \eqns{ \varphi(x_{0:T}) = \sum_{t=0}^T \exp(-\kappa(T-t))x_t, } which gives the sum of the states at the observation times weighted by a forgetting factor~$\kappa$, with $\kappa = 2$ in the simulations. In this case, the tolerance of the optimisation is also adapted to the level as follows: the tolerance at level $l$ is $10^{-l-1}$. This helps retain the benefits of the multilevel approach in this more challenging smoothing problem.
\begin{figure} \centering \includegraphics[trim=70pt 70pt 100pt 90pt,clip,width=.8\textwidth]{fourLevels_LD2.pdf} \caption{Mean and percentiles of samples generated according to the target distribution of the Langevin \gls{sde} at four consecutive levels (blue line: state of the process; red dots: observations; black line: samples mean; red areas: $1$-$99$, $5$-$95$ and $20$-$80$ percentiles).} \label{fig:fourLevels_LD} \end{figure} \subsection{Nonlinear diffusion} We now consider a \gls{sde} with a nonlinear diffusion term: \eqns{ \mathrm{d} X_t = \theta (\mu - X_t) \mathrm{d} t + \dfrac{\varsigma}{\sqrt{1+X_t^2}} \mathrm{d} W_t, \qquad t \in [0,T] } with $\theta = 1$, $\mu = 1$ and $\varsigma = 1$ and with a time step of $0.5$ between observation times, so that the final time is $T=2$. The linear-Gaussian observation model \cref{eq:linearObservation} is considered with $\tau = 1$. The initial distribution is the same as in the previous examples. A realisation of the considered \gls{sde} is displayed in \cref{fig:fourLevels_NLD} together with mean and percentiles of samples obtained using transport maps. \Cref{fig:varDiff_cost_NLD} shows that the same rates as in the previous cases apply although the contribution of the quadratic term in the variance is smaller than before. It appears in \cref{fig:MLvsHL_NLD} that the time spent computing the transport maps has largely increased for both approaches when compared to the linear-Gaussian and Langevin \glspl{sde}. This might be due to the challenging nature of the problem, which induces a slower convergence of the involved optimisation methods. However, the proposed method still displays a significant gain in performance, although the first $200$ iterations just gave it enough time to compensate for the computational overhead caused by the calculation of the maps at all levels.
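The telescoping-sum structure that underlies all of the reported multilevel estimates can be sketched generically as follows. The interface of `sample_pair` is an illustrative assumption on our part; in the paper, coupled pairs are obtained by pushing common base samples through the transport maps at levels $l$ and $l-1$.

```python
import numpy as np

def mlmc_estimate(sample_pair, N, L, rng=None):
    """Telescoping multilevel estimator
        E[phi_L] = E[phi_0] + sum_{l=1}^{L} E[phi_l - phi_{l-1}],
    with each expectation replaced by an average of N[l] samples.
    `sample_pair(l, size, rng)` must return a pair of arrays of coupled
    samples (phi_l, phi_{l-1}); the coarse component is ignored at l = 0."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for l in range(L + 1):
        fine, coarse = sample_pair(l, N[l], rng)
        increment = fine if l == 0 else fine - coarse
        total += np.mean(increment)
    return total
```

The efficiency gain comes from the fact that, when the coupling is good, the variance of the increments decays with $l$, so only a few samples are needed at the expensive fine levels.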
\begin{figure} \centering \includegraphics[trim=70pt 70pt 100pt 90pt,clip,width=.8\textwidth]{fourLevels_NLD2.pdf} \caption{Mean and percentiles of samples generated according to the target distribution of the \gls{sde} with nonlinear diffusion at four consecutive levels (blue line: state of the process; red dots: observations; black line: samples mean; red areas: $1$-$99$, $5$-$95$ and $20$-$80$ percentiles).} \label{fig:fourLevels_NLD} \end{figure} \section{Conclusion} An algorithm for the determination of expectations with respect to laws of partially-observed \glspl{sde} has been proposed. The observations are received at discrete times and depend only on the state at the time they occurred, hence enabling a standard state space modelling to be used. The proposed method relies on three principles: \begin{enumerate*}[label=(\roman*)] \item the discretization of the considered \gls{sde}, for instance with Euler's method, \item the expression of the smoothing distribution at a given level as a telescopic sum involving coarser discretizations and \item the generation of pairs of samples correlated across adjacent levels via the application of different transport maps to samples from a common base distribution. \end{enumerate*} As opposed to \gls{mlpf}, the proposed approach retains the ``ideal'' \gls{mlmc} rates, since, in particular, it does not require resampling techniques to be used. In addition to a numerical verification of its performance, the proposed method was shown to have the desired behaviour in the linear-Gaussian case. Future works include the theoretical verification of the rates that are observed in practice for more diverse types of \glspl{sde}, as well as the study of the optimal parametrisation of the transport maps as a function of the discretization level. \subsection*{Acknowledgements} The authors would like to thank the Associate Editor as well as the referees for their detailed comments and suggestions for the manuscript. 
All authors were supported by Singapore Ministry of Education AcRF tier 1 grant R-155-000-182-114. AJ was also supported under KAUST CRG4 Award Ref:2584. AJ is affiliated with the Risk Management Institute, OR and analytics cluster and the Center for Quantitative Finance at NUS. \bibliographystyle{siamplain}
\section{Introduction} Identifying software defects through testing is a challenging problem. Over the years, a number of approaches have been developed to test software, including random mutation testing (black box fuzzing)~\cite{doupe2012enemy,woo2013scheduling}, abstract interpretation (of either source or machine code) ~\cite{cousot1977abstract,cadar2008klee,ma2011directed}, and property based testing ~\cite{arts2006testing,claessen2011quickcheck}.\\ Methods such as symbolic and concolic execution have increased the fidelity of analyses run over programs~\cite{schwartz2010all}. The development of Satisfiability Modulo Theories (SMT) solvers such as Z3, Boolector, and others have allowed for powerful programmatic analysis of reasoning about software ~\cite{de2008z3,brummayer2009boolector}. Separation logic has allowed for analyses to be applied to complicated data structures~\cite{reynolds2002separation,dongol2015program}. American Fuzzy Lop (AFL) is an advanced fuzzing framework that has been used to discover a number of novel software vulnerabilities (\url{https://github.com/mrash/afl-cve}). AFL uses random mutations of byte strings to identify unique code paths and discover defects in target programs. The inputs that successfully generated unique code paths are then documented as "seed files". We propose to use these native seed files as training data for deep generative models to create augmented seed files. Our proposed reinitialization methods are a scalable process that can improve the time to discovery of software defects. Other researchers have used machine learning to augment fuzzing frameworks including: ~\cite{godefroid2017learn}, ~\cite{wang2017skyfire}. To identify deeper bugs and code paths, Steelix~\cite{Li:2017:SPB:3106237.3106295} uses a program-state based binary fuzzing approach and Driller~\cite{stephens2016driller} demonstrates a hybrid approach using fuzzing and selective concolic execution. 
AFLFAST~\cite{Bohme:2016:CGF:2976749.2978428} extends AFL using Markov chain models. Deep Neural Networks (DNNs)~\cite{bengio2015deep} have had great success in the fields of Natural Language Processing (NLP)~\cite{jones2014learning,wu2016google}, Computer Vision~\cite{krizhevsky2012imagenet}, and the playing of bounded games such as Go~\cite{silver2016mastering} or video games like ATARI~\cite{mnih2013playing}. Can these DNNs help existing program analysis tools perform better? In our work we investigate that question. We augment AFL~\cite{zalewski2015american}, an advanced fuzzing framework, using Generative Adversarial Networks (GAN)~\cite{goodfellow2014generative} and Long Short Term Memory (LSTM)~\cite{sak2014long} to increase its rate of unique code path discovery. Our work quantifies the benefits that augmentation strategies such as generative models can provide, even when limited by small quantities of training data. By periodically perturbing the state of AFL as it explores the input space, we are able to improve its performance, as measured by unique code paths. Specifically, we test our approach on the software ecosystem surrounding the Ethereum~\cite{wood2014ethereum} project. Since Ethereum is a financial system, correctness of its code base is important for guaranteeing that transactions and calculations run without fault. We choose ethkey as an initial fuzzing target. Ethkey is a small C++ program provided as part of the {\tt cpp-ethereum} project used to load, parse, and perform maintenance on Ethereum wallets; importantly, it takes a simple input file, making it easy to test with AFL. \section{Experimental Design} First, we describe the basic functionality of AFL, highlighting the key features that connect with the proposed augmentation framework. Next, we describe the methodology used to create the LSTM and GAN generated seed files.
As a baseline, we also consider random generation of seed files from the training data used to construct the LSTM and GAN models. AFL has extensions to the GCC compiler which, in conjunction with Genetic Algorithms, it uses to create seed files. Each seed file documents the input that yielded a unique code path, the time of discovery, and is used as the basis for mutation (or fuzzing) to generate future seed files. Our augmentation strategy takes advantage of the fact that if an external tool places additional seed files in the AFL working directory, AFL will use those files as inputs in subsequent fuzzing runs. To produce the training data for our methods, we run AFL on a target program $P$ for a fixed amount of time. AFL generates an initial set of seed files $S=\{S_0,...,S_K\}$ for each unique execution trace $\tau$ taken through $P$. We use $S$ as training examples for the LSTM and GAN models, which are both trained using Keras~\cite{chollet2017keras}. Our LSTM is trained from the concatenation of the AFL-generated seed file corpus $S$ into a single file and generates new seed files with a maximum length of $40$ characters. The LSTM model has a $128$-wide initial layer, an internal dense layer, and a final softmax activation layer. To train the LSTM model, we use RMS propagation as our optimizer and a categorical cross-entropy loss function. The model takes in a seed sequence sampled from the training corpus and predicts the next character in the sequence. We additionally tune a separate temperature parameter to diversify the output seed files from the network. The generated seed files are noted as $S_L$. In our GAN architecture, two models are built: a generator G, which is pitted against a discriminator D. G is optimized to generate realistic output, and the discriminator D has the task of predicting if the data is real or fake. This training strategy is unsupervised and particularly expressive.
The generative model G is a fully connected 2-layer DNN with a ReLU non-linearity as the inner activation and a tanh output activation. It is trained with a binary cross-entropy loss function via stochastic gradient descent. The discriminative model D is a 3-layer DNN, but the first layer has 25\% dropout followed by two fully connected layers. It uses an Adam optimizer for stochastic gradient descent and the seed files resulting from the GAN process are noted as $S_G$. Additionally, given the native AFL seed files, $S$, we randomly draw bytes from this training set and produce new, random seed files $S_R$ of the same length as $S_G$. This serves as a baseline to determine if the added time and complexity of GAN and LSTM based seed generation are truly providing an advantage over a simple strategy of randomly perturbing AFL's state. {\bf Small Experiment:} The seed files ($S_R$, $S_G$, and $S_L$) alone are not an end goal. However, we are interested in characterizing their variability and other properties as they will provide a set of initial conditions when AFL is restarted. In a fuzzing run on a single CPU core, we produce $936$ unique code paths used to train initial GAN and LSTM models. Random seed generation is performed by drawing random bytes from /dev/urandom. For each method, we generate $200$ samples, reinitialize AFL on a single CPU with only the seed files of one method and run for an additional $72$ hours to measure the impact on code path discovery. Both LSTM and GAN models slightly outperform random sampling for AFL reinitialization. We summarize the resulting mean time to discover new code paths in Table~\ref{tab:three}. Each seed file produces a program trace when supplied as an input to ethkey. Code paths that have different lengths will differ in at least one basic block or branch instruction.
The unique code path length is fast to compute but only provides a lower bound on the number of unique code paths exercised by the testing framework, across fuzzing runs using AFL. Two code paths with the same length can result from distinct traces, thus detailed evaluation is needed to determine the true uniqueness of a code path from seed file execution. \begin{table}% \centering \begin{tabular}{ccccc} \hline Class C & $|C|$ & $L(C)$ & Sec/Path & Relative Rate \\ \hline Urandom & 1231 & 0.9017 & 214.478 & 1.00 \\ \hline LSTM & 1251 & 0.8984 & 197.130 & 1.08 \\ \hline GAN & 1240 & 0.8694 & 191.893 & 1.11 \\ \hline \\ \end{tabular} \caption{{\bf Initial Run:} $|C|$ is the number of seeds generated after reinitializing AFL. We observe that GAN or LSTM allows discovery of novel code paths at a quicker rate than restarting AFL using a random sampling of bytes. $L(C)$ is the number of unique code path lengths $l(c_i)$ associated to input files $c_i$ in the set $C$ of LSTM, GAN, or uniform Random reinitialization.} \label{tab:three}% \end{table}% \begin{table} \centering \begin{tabular}{cccccc } \hline Class $C$ & $|C|$ & $L(C)$ & \% Unique & $\mu(L(C))$ & $\sigma(L(C))$ \\ \hline AFL seed & 38384 & 31212 & 0.813 & 26.968M & 33.958M \\ \hline Rand seed & 19824 & 485 & 0.024 & 2.602M & 724.674K \\ \hline LSTM seed & 20000 & 1921 & 0.096 & 2.596M & 8.687K \\ \hline GAN seed & 20000 & 119 & 0.006 & 2.593M & 1.841K \\ \hline \\ \end{tabular} \caption{{\bf Synthetic Seed Files:} Random sampling, LSTM, and GAN can be used to produce synthetic seed files for AFL. We compute statistics on the synthetic files from the $3$ strategies. The synthetic files do not themselves exercise deep or varied code paths, but can be used to reinitialize AFL. } \label{tab:one}% \end{table}% {\bf Large Experiment:} To demonstrate the scalability of this augmentation strategy, we performed an extended run of AFL on $200$ CPU cores for $72$ hours.
Each core in the AFL run stopped finding seed files after the first 10 to 12 hours of fuzzing and accumulated a total of 39,185 seed files across 49 workers. All seed files produced within a given node are known to be unique, due to AFL's internal book keeping mechanism. However, seed files whose content are different across nodes, could in principle exercise the same code path. By measuring the length of each program trace (code path), we can compute a lower bound on the number of unique paths discovered by only counting paths that have a unique length. After removing identical seed files from across the nodes, and seed files that resulted in the same code path length, we estimate 802 of the initial files were duplicates from the independent worker nodes. Removing those duplicates resulted in a total of 38,384 unique files. We then trained GAN and LSTM networks on the total corpus of unique seed files and generated approximately 20,000 samples from each method, respectively, to use as synthetic seed files in order to reinitialize AFL. GAN took approximately 30 minutes to train and generate synthetic seed files, while LSTM took 14 hours to do so. In Table~\ref{tab:one} we summarize the mean and variance of the length of program traces (i.e., code paths) associated with the seed files from native AFL and from the synthetic generation methods for this larger experiment. The synthetic seed files, when provided as inputs to the program under test, do not cause deep paths to be explored, compared to AFL. So, we cannot simply {\em replace} AFL with a generative model. Instead, we seek to combine generative models with AFL to {\em boost} its performance. We see from this data that, in fact, the seed files generated by LSTM and GAN are not representative of the distribution $S$ in terms of the mean and variance of code paths generated. This reinforces the need to use $S_G$ and $S_L$ as an augmentation strategy rather than a direct replacement of AFL seeds. 
Next, we performed $24$ hours of fuzzing with GAN, LSTM, and a random reinitialization strategy using a random sampling of bytes from the initial seed files (i.e., performing no learning on the seed files). Table~\ref{tab:five} summarizes our results. All three strategies allowed for additional seed files to be generated. The GAN-based approach produced seed files 14.23\% quicker than the random approach and 60.72\% faster than using LSTM. We do lose 30 minutes of training time for GAN that could otherwise be used for fuzzing using the random sampling method; discounting by this amount of time reduces the code path rate to an 11.85\% improvement. However, we are most interested in unique code paths. GAN found the greatest number of seed files whose associated code paths had lengths not found in the initial fuzzing run, outperforming the random control approach by 6.16\%. The average code path length discovered by GAN was 13.84\% longer than the random control, so GAN is capable of exercising deeper paths in the program. The LSTM model underperformed both GAN and random sampling and took a substantially longer time (14 hours) to train. \begin{table}% \centering \begin{tabular}{cccccccc} \hline Class C & |C| & L(C) & Novel & L(C) Rate & Novel Rate & $\mu(L(C))$ & $\sigma(L(C))$ \\ \hline Rand & 780 & 778 & 682 & 1.000 & 1.000 & 25.373M & 3.339M \\ \hline LSTM & 555 & 555 & 481 & 0.713 & 0.7053 & 26.541M & 3.385M \\ \hline GAN & 891 & 837 & 724 & 1.0758 & 1.0616 & 28.885M & 3.456M \\ \hline \\ \end{tabular} \caption{{\bf Sustained Run}: With 38,000 training seed file samples, we compare the seed files generated after reinitialization from random, GAN, and LSTM generation methods. L(C) Rate is the speedup over the random strategy of discovery of code paths with unique length per second. Novel Rate is the speedup over random for unique lengths not found in the training set.
\\}\label{tab:five}% \end{table}% \section{Conclusions} In this work, we explored the utility of augmenting random mutation testing with deep neural models. Natively, AFL combines file mutation strategies from Genetic Algorithms with program instrumentation via the use of compiler plugins. We observed the most improvement in AFL's performance when we restarted a fuzzing run mid-course, using novel seed files built from a GAN model. Though the synthetic seed file statistics on average had similar path length, the GAN outperformed reinitialization from a random or LSTM strategy when restarting the fuzzing system. The LSTM model was deficient in both training time and code path discovery time. Both approaches used no manual analysis or information about file formats for the program under test. The GAN and random strategies both improve the performance of AFL, even though the internal state of the program is never directly exposed. Future work of interest includes experimentation on additional targets, including the DARPA Cyber Grand Challenge problems, open source OS network services, bytecode interpreters, and other system applications and programs where input data is easily generated. We also plan to explore exposing the internal state of the program under test in order to define a reward function for reinforcement learning. We envision this internal state could be exposed by: 1) the instrumentation AFL adds to programs via its GCC compiler plugins, 2) using Intel's PIN tool to output the length of each code path or summary information about a given trace, 3) recording program traces using a replay framework such as Mozilla's rr tool in order to collect additional descriptive statistics. \subsubsection*{Acknowledgements} The authors would like to thank Court Corley, Nathan Hodas, and Sam Winters for useful discussions. The research described in this paper is part of the Deep Science Initiative at Pacific Northwest National Laboratory.
It was conducted under the Laboratory Directed Research and Development Program at PNNL, a multi-program national laboratory operated by Battelle for the U.S. Department of Energy. \clearpage \medskip
\section{Introduction} Let $V\subset \mathbb R^d$ be a hypersurface which is endowed with a surface measure $d\sigma.$ In the Euclidean setting, the extension problem is to determine the exponents $1\le p, r\le \infty$ such that the following inequality holds: $$ \|(fd\sigma)^\vee\|_{L^{r}(\mathbb R^d)} \le C \|f\|_{L^{p}(V, d\sigma)},$$ where the constant $C>0$ is independent of functions $f\in L^p(V, d\sigma).$ By duality, this extension estimate is the same as the restriction estimate $$\|\widehat{g}\|_{L^{p'}(V, d\sigma)} \le C\|g\|_{L^{r'}(\mathbb R^d)}.$$ Here, $p'$ and $r'$ denote the H\"{o}lder conjugates of $p$ and $r$, respectively (i.e. $1/p + 1/p'=1$). Therefore, the extension problem is also called the restriction problem. In 1967, E.M. Stein \cite{St78} introduced the restriction problem. This problem has been completely solved for the parabola and the circle in two dimensions, and the cones in three and four dimensions (see \cite{Zy74, Ba85, Wo01}). However, it is still open in other cases although improved results have been obtained by harmonic analysts. We refer readers to \cite{Gu15, St93,Ta03, Ta04} for further information and recent developments on the restriction problem in the Euclidean setting.\\ In 2002, Mockenhaupt and Tao \cite{MT04} initially posed and studied the extension problem for various varieties in $d$-dimensional vector spaces over finite fields. In order to formulate a finite field analogue of the extension problem, the Euclidean space is replaced by a vector space over a finite field. We begin by reviewing the definition of the finite field extension problem. We denote by $\mathbb F_q$ a finite field with $q$ elements. Throughout this paper, we shall assume that $q$ is a power of an odd prime.
Let $\mathbb F_q^d$ be a $d$-dimensional vector space over the finite field $\mathbb F_q.$ We endow the vector space $\mathbb F_q^d$ with the counting measure $dm.$ We write $(\mathbb F_q^d, dm)$ to stress that the vector space $\mathbb F_q^d$ is endowed with the counting measure $dm.$ Since the vector space $\mathbb F_q^d$ is isomorphic to its dual space as an abstract group, we identify the space $\mathbb F_q^d$ with its dual space. However, its dual space, which will be denoted by $(\mathbb F_q^d, d\xi),$ is endowed with a normalized counting measure $d\xi.$ We always use the variable $m$ for an element of the vector space $(\mathbb F_q^d, dm)$. On the other hand, the variable $\xi$ will be an element of the dual space $(\mathbb F_q^d, d\xi).$ For example, we simply write $m\in \mathbb F_q^d$ and $\xi \in \mathbb F_q^d$ for $m\in (\mathbb F_q^d, dm)$ and $\xi \in (\mathbb F_q^d, d\xi)$, respectively. For a complex valued function $g: (\mathbb F_q^d, dm)\to \mathbb C$, the Fourier transform $\widehat{g}$ on $(\mathbb F_q^d, d\xi)$ is defined by $$ \widehat{g}(\xi)=\int_{\mathbb F_q^d} g(m) \chi(-m\cdot \xi)\,dm = \sum_{m\in \mathbb F_q^d} g(m)\chi(-m\cdot \xi)$$ where $\chi$ denotes a nontrivial additive character of $\mathbb F_q$ and the dot product is defined by $m\cdot \xi=m_1\xi_1 + \cdots + m_d \xi_d$ for $m=(m_1,\ldots,m_d),\, \xi=(\xi_1,\ldots, \xi_d)\in \mathbb F_q^d.$ For a complex valued function $f:(\mathbb F_q^d, d\xi) \to \mathbb C$, the inverse Fourier transform $f^\vee$ on $(\mathbb F_q^d, dm)$ is given by $$ f^\vee(m)=\int_{\mathbb F_q^d} f(\xi) \chi(\xi\cdot m) \,d\xi = \frac{1}{q^d} \sum_{\xi\in \mathbb F_q^d} f(\xi) \chi(\xi\cdot m).$$ Using the orthogonality relation of the nontrivial character $\chi$ of $\mathbb F_q$, we obtain the Plancherel theorem: $$\|\widehat{g}\|_{L^2(\mathbb F_q^d, d\xi)} = \|g\|_{L^2(\mathbb F_q^d, dm)} \quad \mbox{or}\quad \|f\|_{L^2(\mathbb F_q^d, d\xi)}=\|f^\vee\|_{L^2(\mathbb F_q^d, dm)}.$$ Namely, the Plancherel
theorem yields the following equation $$ \frac{1}{q^d} \sum_{\xi\in \mathbb F_q^d} |\widehat{g}(\xi)|^2 =\sum_{m\in \mathbb F_q^d} |g(m)|^2 \quad \mbox{or}\quad \frac{1}{q^d} \sum_{\xi\in \mathbb F_q^d} |f(\xi)|^2 =\sum_{m\in \mathbb F_q^d} |f^\vee(m)|^2. $$ Notice by the Plancherel theorem that if $G, F\subset \mathbb F_q^d$, then we have $$\frac{1}{q^d} \sum_{\xi\in \mathbb F_q^d} |\widehat{G}(\xi)|^2 =|G| \quad \mbox{and} \quad \sum_{m\in \mathbb F_q^d} |F^\vee(m)|^2 =\frac{|F|}{q^d},$$ where $|E|$ denotes the cardinality of a set $E\subset \mathbb F_q^d.$ Here, and throughout this paper, we shall identify the set $E\subset \mathbb F_q^d$ with the indicator function $1_E$ on the set $E.$ Namely, we shall write $\widehat{E}$ for $\widehat{1_E}$, which allows us to use a simple notation. Given functions $g_1, g_2: (\mathbb F_q^d, dm) \to \mathbb C,$ the convolution function $g_1\ast g_2$ on $(\mathbb F_q^d, dm)$ is defined by $$ g_1\ast g_2(n) = \int_{\mathbb F_q^d} g_1(n-m) g_2(m)\,dm = \sum_{m\in \mathbb F_q^d} g_1(n-m) g_2(m).$$ On the other hand, if $f_1, f_2: (\mathbb F_q^d, d\xi) \to \mathbb C,$ then the convolution function $f_1\ast f_2$ on $(\mathbb F_q^d, d\xi)$ is given by $$ f_1\ast f_2(\eta)=\int_{\mathbb F_q^d} f_1(\eta-\xi) f_2(\xi)\,d\xi = \frac{1}{q^d} \sum_{\xi\in \mathbb F_q^d} f_1(\eta-\xi) f_2(\xi).$$ Then it is not hard to see that $$ \widehat{g_1\ast g_2} = \widehat{g_1} \widehat{g_2} \quad \mbox{and}\quad (f_1\ast f_2)^\vee = f_1^\vee f_2^\vee.$$ Given an algebraic variety $V\subset (\mathbb F_q^d, d\xi)$, we endow $V$ with the normalized surface measure $d\sigma$ which is defined by the relation $$ \int_V f(\xi)\,d\sigma(\xi) =\frac{1}{|V|} \sum_{\xi \in V} f(\xi).$$ Notice that $d\sigma(\xi)=\frac{q^d}{|V|}\, 1_V(\xi)\, d\xi$ and we have $$ (fd\sigma)^\vee(m)=\int_V f(\xi) \chi(m\cdot \xi)\, d\sigma(\xi) =\frac{1}{|V|} \sum_{\xi\in V} f(\xi) \chi(m\cdot \xi).$$ For each $1\le p,r\le \infty$, we define $R^*_V(p\to r)$ as the smallest 
positive real number such that the following extension estimate holds: $$ \|(fd\sigma)^\vee\|_{L^{r}(\mathbb F_q^d, dm)} \le R^*_V(p\to r) \,\|f\|_{L^{p}(V, d\sigma)} \quad \mbox{for all functions}~~f:V \to \mathbb C.$$ By duality, $R^*_V(p\to r)$ is also the smallest positive constant such that the following restriction estimate holds: $$\|\widehat{g}\|_{L^{p'}(V, d\sigma)} \le R^*_V(p\to r) \,\|g\|_{L^{r'}(\mathbb F_q^d, dm)} \quad\mbox{for all functions}~~g:(\mathbb F_q^d, dm) \to \mathbb C.$$ The number $R^*_V(p\to r)$ may depend on $q$, the size of the underlying finite field $\mathbb F_q.$ The main question on the extension problem for $V\subset \mathbb F_q^d$ is to determine $1\le p, r\le \infty$ such that the number $R^*_V(p\to r)$ is independent of $q.$ Throughout this paper, we shall use $X\lesssim Y$ for $X, Y>0$ if there is a constant $C>0$ independent of $q=|\mathbb F_q|$ such that $ X\le C Y.$ We also write $Y\gtrsim X$ for $X\lesssim Y,$ and $X\sim Y$ means that $X\lesssim Y$ and $Y\lesssim X.$ In addition, we shall use $X\lessapprox Y$ if for every $\varepsilon>0$ there exists $C_{\varepsilon}>0$ such that $X\lesssim C_{\varepsilon} q^{\varepsilon} Y.$ This notation is handy for suppressing powers of $\log{q}.$ Using the notation $\lesssim$, the extension problem for $V$ is to determine $1\le p,r\le \infty$ such that $R^*_V(p\to r)\lesssim 1.$ \\ Since the finite field extension problem was addressed in 2002 by Mockenhaupt and Tao \cite{MT04}, it has been studied for several algebraic varieties such as paraboloids, spheres, and cones (see, for example, \cite{LL13, LL10, KS12, IK10, KS13}.) In particular, very interesting results have been recovered for paraboloids.
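As a purely illustrative numerical check (not part of any argument in this paper), the normalisations in the Plancherel theorem above can be verified directly for a small prime $q$ by taking the explicit character $\chi(x) = e^{2\pi i x/q}$; all function names below are our own.

```python
import itertools
import numpy as np

def fourier_transform(g, q, d):
    """hat{g}(xi) = sum_m g(m) chi(-m . xi), with chi(x) = exp(2 pi i x / q)
    and g given as a dict mapping tuples in F_q^d to complex values."""
    points = list(itertools.product(range(q), repeat=d))
    ghat = {}
    for xi in points:
        acc = 0j
        for m in points:
            dot = sum(mi * xii for mi, xii in zip(m, xi)) % q
            acc += g[m] * np.exp(-2j * np.pi * dot / q)
        ghat[xi] = acc
    return ghat

def plancherel_holds(g, q, d, tol=1e-8):
    """Check q^{-d} sum_xi |hat{g}(xi)|^2 == sum_m |g(m)|^2,
    i.e. the L^2 identity with normalized counting measure on the dual side."""
    ghat = fourier_transform(g, q, d)
    lhs = sum(abs(v) ** 2 for v in ghat.values()) / q ** d
    rhs = sum(abs(v) ** 2 for v in g.values())
    return abs(lhs - rhs) <= tol * max(1.0, rhs)
```

For instance, one can take $g$ to be the indicator of the parabola $\{(\xi_1, \xi_1^2): \xi_1 \in \mathbb F_q\}$ in $\mathbb F_q^2$ and confirm the identity numerically.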
From now on, we restrict ourselves to the study of the extension problem for the paraboloid $P\subset \mathbb (\mathbb F_q^d, d\xi)$ defined as \begin{equation}\label{defP} P= \{\xi\in \mathbb F_q^d: \xi_d=\xi_1^2+ \cdots +\xi_{d-1}^2\}.\end{equation} This paper is written to achieve two main goals. One is to address clarified conjectures on the extension problem for paraboloids. The other is to improve the previously known $L^2\to L^r$ extension estimates for paraboloids in higher dimensions.\\ In Section \ref{secII}, we shall introduce neat necessary conditions which we may conjecture as sufficient conditions for $R_P^*(p\to r)\lesssim 1.$ In particular, by Lemma \ref{GeN} in Section \ref{secII} it is natural to conjecture the following statement on the $L^2\to L^r$ extension problem for paraboloids. \begin{conjecture}\label{Conj1} Let $P\subset \mathbb F_q^d$ be the paraboloid defined as in \eqref{defP}. Then we have \begin{enumerate} \item If $d\ge 2$ is even, then $ R_P^*(2\to r)\lesssim 1 \iff \frac{2d+4}{d}\le r\le \infty$ \item If $d=4\ell-1$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then we have $$ R_P^*(2\to r)\lesssim 1 \iff \frac{2d+6}{d+1}\le r\le \infty$$ \item If $d=4\ell+1$ for $\ell \in \mathbb N$, then $ R_P^*(2\to r)\lesssim 1 \iff \frac{2d+2}{d-1}\le r\le \infty$ \item If $d\ge 3$ is odd, and $-1\in \mathbb F_q$ is a square number, then we have $$R_P^*(2\to r)\lesssim 1 \iff \frac{2d+2}{d-1}\le r\le \infty.$$ \end{enumerate} \end{conjecture} In the conclusions of Conjecture \ref{Conj1}, the statements for $``\Longrightarrow"$ direction follow immediately from Lemma \ref{GeN} in the following section. Hence, Conjecture \ref{Conj1} can be reduced to the following critical endpoint estimate, because $R^*_P(2\to r_1) \ge R^*_P(2\to r_2)$ for $1\le r_1\le r_2 \le \infty.$ \newpage \begin{conjecture}\label{Conj2} Let $P\subset \mathbb F_q^d$ be the paraboloid defined as in \eqref{defP}. 
Then we have \begin{enumerate} \item If $d\ge 2$ is even, then $ R_P^*\left(2\to\frac{2d+4}{d} \right)\lesssim 1$ \item If $d=4\ell-1$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then $ R_P^*\left(2\to \frac{2d+6}{d+1}\right)\lesssim 1$ \item If $d=4\ell+1$ for $\ell \in \mathbb N$, then $ R_P^*\left(2\to \frac{2d+2}{d-1}\right) \lesssim 1$ \item If $d\ge 3$ is odd, and $-1\in \mathbb F_q$ is a square number, then $R_P^*\left(2\to \frac{2d+2}{d-1}\right)\lesssim 1.$ \end{enumerate} \end{conjecture} \subsection{Statement of main results} By the Stein-Tomas argument, Mockenhaupt and Tao \cite{MT04} already showed that the statements $(3), (4)$ in Conjecture \ref{Conj2} are true. In fact, they proved that $R_P^*(2 \to (2d+2)/(d-1)) \lesssim 1 $ for all dimensions $d\ge 2$ without further assumptions.\\ The statements $(1), (2)$ in Conjecture \ref{Conj2} are very interesting in that the conjectured results are better than the Stein-Tomas inequality which is sharp in the Euclidean case. This is due to a number theoretic issue which we can exploit when we study harmonic analysis in finite fields. In dimension two, the statement $(1)$ in Conjecture \ref{Conj2} was already proved by Mockenhaupt and Tao \cite{MT04}, but it is open in higher even dimensions. For higher even dimensions $d\ge 4,$ Iosevich and Koh \cite{IK09} proved that $R^*_P(2\to 2d^2/(d^2-2d+2))\lessapprox 1$ which improves the Stein-Tomas inequality due to Mockenhaupt and Tao. This result was obtained by using a connection between $L^p\to L^4$ extension results and $L^2\to L^r$ extension estimates. In \cite{LL10}, A. Lewko and M. Lewko improved the result of Iosevich and Koh by recovering the endpoint. They adapted the bilinear approach to derive the improved result, $R^*_P(2\to 2d^2/(d^2-2d+2))\lesssim 1.$ In this paper, we shall obtain further improvement in higher even dimensions $d\ge 6.$ Our first main result is as follows.
\begin{theorem}\label{main1} Let $P\subset \mathbb F_q^d$ be the paraboloid defined as in \eqref{defP}. If the dimension $d\ge 6$ is even, then for each $\varepsilon >0$ we have $$ R_P^*\left( 2 \to \frac{6d+8}{3d-2} +\varepsilon\right) \lesssim 1.$$ \end{theorem} Notice that if $d\ge 6$, then $(6d+8)/(3d-2) <2d^2/(d^2-2d+2),$ which implies that Theorem \ref{main1} is better than the result $R_P^*(2\to 2d^2/(d^2-2d+2))\lesssim 1$ due to A. Lewko and M. Lewko.\\ The statement $(2)$ in Conjecture \ref{Conj2} has not been settled in any case. In the case when $d=3$ and $q$ is a prime with $q\equiv 3 \,(\mbox{mod}~4)$, Mockenhaupt and Tao \cite{MT04} deduced the following extension result: for every $\varepsilon >0$, \begin{equation} \label{Ta3D} R_P^*\left(2 \to \frac{18}{5}+\varepsilon\right)\lesssim 1.\end{equation} This was improved to $R_P^*(2 \to \frac{18}{5}) \lesssim 1$ by A. Lewko and M. Lewko \cite{LL10} (Bennett, Carbery, Garrigos, and Wright independently proved it in unpublished work). Recently, Lewko \cite{LL13} discovered a nice connection between the finite field extension problem and the finite field Szemer\'{e}di-Trotter incidence problem. Combining this connection with ingenious arguments, he obtained the currently best known result on extension problems for the 3-D paraboloid. More precisely, he proved that if the dimension $d$ is three and $-1\in \mathbb F_q$ is not a square, then there exists an $\varepsilon>0$ such that \begin{equation}\label{Lew3DG} R_P^*\left(2\to \frac{18}{5}-\varepsilon\right)\lesssim 1.\end{equation} Furthermore, assuming that $q$ is a prime and $-1\in \mathbb F_q$ is not a square, he gave the following explicit result for $d=3$: \begin{equation} \label{Lew3D} R_P^*\left(2\to \frac{18}{5}-\frac{1}{1035}+\varepsilon\right) \lesssim 1 \quad \mbox{for any}\quad \varepsilon>0.\end{equation} Although this result is still far from the conjectured result, $R^*_P(2\to 3)\lesssim 1,$ M.
Lewko provided novel ideas useful in developing the finite field extension problem, and we will also adapt many of his methods to deduce our improved results. In specific higher odd dimensions, Iosevich and Koh \cite{IK09} proved that $R_P^*(2\to \frac{2d^2}{d^2-2d+2}) \lessapprox 1$ under the assumptions of the statement $(2)$ in Conjecture \ref{Conj2}. This result is also better than the Stein-Tomas inequality. A. Lewko and M. Lewko \cite{LL10} obtained the endpoint estimate, so that the result by Iosevich and Koh was improved to \begin{equation} \label{LewR} R_P^*\left(2\to \frac{2d^2}{d^2-2d+2}\right) \lesssim 1.\end{equation} As our second result, we shall improve this result in the case when $d=4\ell-1 \ge 7$ for $\ell \in \mathbb N.$ More precisely, we have the following result. \begin{theorem}\label{main2} Let $P \subset \mathbb F_q^d$ be the paraboloid defined as in \eqref{defP}. If $d=4\ell+3$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then for every $\varepsilon >0$, we have $$ R_P^*\left(2\to \frac{6d+10}{3d-1} +\varepsilon \right)\lesssim 1.$$ \end{theorem} Notice that Theorem \ref{main2} is superior to the result \eqref{LewR} due to A. Lewko and M. Lewko. If one could obtain the exponent in Theorem \ref{main2} for $d=3$, we would have $R^*_P(2\to \frac{7}{2}+\varepsilon)\lesssim 1,$ which is much better than the best known result \eqref{Lew3D} due to M. Lewko. Unfortunately, our result does not cover the case of three dimensions, and it only improves the previously known results in specific higher odd dimensions.\\ This paper will be organized as follows. In Section 2, we deduce necessary conditions for the bound $R^*_P(p\to r)\lesssim 1$, from which we make a conjecture on extension problems for paraboloids. In Section 3, we collect several lemmas which are essential in proving our main results, Theorem \ref{main1} and Theorem \ref{main2}. In the final section, we give the complete proofs of our main theorems.
In addition, we shall provide a summary of progress on the finite field extension problems for paraboloids. \section{Conjecture on extension problems for paraboloids}\label{secII} In \cite{MT04}, Mockenhaupt and Tao observed that if $|V|\sim q^{d-1},$ then the necessary conditions for $R_V^*(p\to r)\lesssim 1$ are given by \begin{equation} \label{Necessary1} r\geq \frac{2d}{d-1} \quad \mbox{and} \quad r\geq \frac{pd}{ (p-1)(d-1)}. \end{equation} In particular, when the variety $V$ contains an affine subspace $\Omega$ with $|\Omega|=q^k$ for $0\le k\le d-1$, the above necessary conditions can be improved to the conditions \begin{equation}\label{Necessary2} r\geq \frac{2d}{d-1} \quad \mbox{and} \quad r\geq\frac{p(d-k)}{(p-1)(d-1-k)}.\end{equation} Now, let us observe the necessary conditions for the bound $R^*_P(p\to r)\lesssim 1,$ where the paraboloid $P\subset \mathbb F_q^d$ is defined as in \eqref{defP}. To find more precise necessary conditions for $R_P^*(p\to r)\lesssim 1,$ it is essential to know the size of subspaces lying in the paraboloid $P\subset \mathbb F_q^d.$ To this end, we need the following lemma which is a direct consequence of Lemma 2.1 in \cite{Vi12}.
\begin{lemma}\label{Vi} Let $S_0=\{(x_1,\ldots, x_{d-1})\in \mathbb F_q^{d-1}: x_1^2+\cdots+x_{d-1}^2=0\}$ be a variety in $\mathbb F_q^{d-1}$ with $d\ge 2.$ Denote by $\eta$ the quadratic character of $\mathbb F_q.$ If $W$ is a subspace of maximal dimension contained in $S_0$, then we have the following facts: \begin{enumerate} \item If $d-1$ is odd, then $|W|=q^{\frac{d-2}{2}}$ \item If $d-1$ is even and $(\eta(-1))^{\frac{d-1}{2}}=1$, then $|W|=q^{\frac{d-1}{2}}$ \item If $d-1$ is even and $(\eta(-1))^{\frac{d-1}{2}}=-1,$ then $|W|=q^{\frac{d-3}{2}}.$ \end{enumerate} \end{lemma} Observe from Lemma \ref{Vi} that $\Omega:=W \times \{0\} \subset \mathbb F_q^{d-1} \times \mathbb F_q$ is a subspace contained in the paraboloid $P \subset \mathbb F_q^d.$ Since $|\Omega|=|W|$, we have the following result from Lemma \ref{Vi}. \begin{corollary}\label{SubP} Let $P\subset \mathbb F_q^d$ be the paraboloid. Then the following statements hold: \begin{enumerate} \item If $d\ge 2$ is even, then the paraboloid $P$ contains a subspace $\Omega$ with $|\Omega|=q^{\frac{d-2}{2}}$ \item If $d=4\ell-1$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then the paraboloid $P$ contains a subspace $\Omega$ with $|\Omega|=q^{\frac{d-3}{2}}$ \item If $d=4\ell+1$ for $\ell \in \mathbb N$, then the paraboloid $P$ contains a subspace $\Omega$ with $|\Omega|= q^{\frac{d-1}{2}}$ \item If $d\ge 3$ is odd, and $-1\in \mathbb F_q$ is a square number, then the paraboloid $P$ contains a subspace $\Omega$ with $|\Omega|= q^{\frac{d-1}{2}}.$ \end{enumerate} \end{corollary} Applying Corollary \ref{SubP} to \eqref{Necessary2}, the necessary conditions for $R_P^*(p\to r)\lesssim 1$ are given as follows: \begin{lemma} \label{GeN} Let $P\subset \mathbb F_q^d$ be the paraboloid defined as in \eqref{defP}.
Assume that $R_P^*(p\to r)\lesssim 1$ for $1\le p,r\le \infty.$ Then the following statements are true: \begin{enumerate} \item If $d\ge 2$ is even, then $(1/p, 1/r)$ must be contained in the convex hull of points $$(1, 0), (0,0), \left(0, \frac{d-1}{2d}\right),\, \mbox{and}~~ P_1:=\left(\frac{d^2-d+2}{2d^2},~~ \frac{d-1}{2d}\right).$$ \item If $d=4\ell-1$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then $(1/p, 1/r)$ must be contained in the convex hull of points $$(1, 0), (0,0), \left(0, \frac{d-1}{2d}\right), \, \mbox{and}~~ P_2:=\left(\frac{d^2+3}{2d^2+2d},~~ \frac{d-1}{2d}\right).$$ \item If $d=4\ell+1$ for $\ell \in \mathbb N$, then $(1/p, 1/r)$ must be contained in the convex hull of points $(1, 0), (0,0), \left(0, \frac{d-1}{2d}\right),$ and $P_3:=\left(\frac{d-1}{2d},~~ \frac{d-1}{2d}\right).$ \item If $d\ge 3$ is odd, and $-1\in \mathbb F_q$ is a square number, then $(1/p, 1/r)$ must be contained in the convex hull of points $(1, 0), (0,0), \left(0, \frac{d-1}{2d}\right),$ and $\left(\frac{d-1}{2d},~~ \frac{d-1}{2d}\right).$ \end{enumerate} \end{lemma} We may conjecture that the necessary conditions for $R_P^*(p\to r)\lesssim 1$ in Lemma \ref{GeN} are in fact sufficient. For this reason, we could settle the extension problem for paraboloids if we could obtain the critical endpoints $P_1, P_2, P_3$ in the statement of Lemma \ref{GeN}. In conclusion, to solve the extension problem for paraboloids, it suffices to establish the following conjecture on critical endpoints.
\begin{conjecture}\label{simpleconj} The following statements hold: \begin{enumerate} \item If $d\ge 2$ is even, then $ R_P^*\left(\frac{2d^2}{d^2-d+2}\to \frac{2d}{d-1}\right)\lesssim 1$ \item If $d=4\ell-1$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then $ R_P^*\left(\frac{2d^2+2d}{d^2+3}\to \frac{2d}{d-1}\right)\lesssim 1$ \item If $d=4\ell+1$ for $\ell \in \mathbb N$, then $ R_P^*\left(\frac{2d}{d-1}\to \frac{2d}{d-1}\right) \lesssim 1$ \item If $d\ge 3$ is odd, and $-1\in \mathbb F_q$ is a square number, then $R_P^*\left(\frac{2d}{d-1}\to \frac{2d}{d-1}\right)\lesssim 1.$ \end{enumerate} \end{conjecture} \section{Preliminary lemmas} In this section, we collect several lemmas which shall be used to prove our main results. As we shall see, both Theorem \ref{main1} and Theorem \ref{main2} will be proved in terms of the restriction estimates (the dual extension estimates). Thus, we start with lemmas about the restriction operators associated with paraboloids. We shall write $R_P(p\to r)$ for $R^*_P(r'\to p')$ for $1\le p,r \le \infty.$ Namely, $R_P(p\to r)$ is the smallest positive real number such that the following restriction estimate holds: $$\|\widehat{g}\|_{L^{r}(P, d\sigma)} \le R_P(p\to r) \,\|g\|_{L^{p}(\mathbb F_q^d, dm)} \quad\mbox{for all functions}~~g:(\mathbb F_q^d, dm) \to \mathbb C.$$ The following definition was given in \cite{LL13}.
\begin{definition}\label{defregular} Let $G\subset \mathbb F_q^d.$ For each $a\in \mathbb F_q$, define a level set $$ G_a=\{(m_1,\ldots, m_{d-1}, m_d) \in G: m_d=a\}.$$ In addition, define $$ L_G=\{a\in \mathbb F_q: |G_a| \ge 1 \}.$$ We say that the set $G$ is a regular set if $$ \frac{|G_a|}{2}\le |G_{a'}| \le 2\,|G_a| \quad \mbox{for}~~ a, a'\in L_G.$$ Finally, the function $g:\mathbb F_q^d \to \mathbb C$ is called a regular function if the function $g$ is supported on a regular set $G$ and $\frac{1}{2}\le |g(m)|\le 1$ for $m\in G.$ \end{definition} Notice that if $G$ is a regular set, then $|G|\sim |G_a||L_G|$ for all $a\in L_G.$ By the dyadic pigeonhole principle, the following lemma was given by M. Lewko (see Lemma 14 in \cite{LL13}). \begin{lemma} \label{lem3.2} If the restriction estimate $$\|\widehat{g}\|_{L^{r}(P, d\sigma)} \le R_P(p\to r) \,\|g\|_{L^{p}(\mathbb F_q^d, dm)}$$ holds for all regular functions $g:(\mathbb F_q^d, dm)\to \mathbb C,$ then for each $\varepsilon >0$, $$R_P\left(p-\varepsilon\, \to r\right) \lesssim 1.$$ \end{lemma} Working with regular test functions, we lose the endpoint result, but our analysis is greatly simplified. When the size of the support $G$ of a regular function $g$ is large, we shall invoke the following restriction estimate.
\begin{lemma} \label{lem3.3} Let $g$ be a regular function on $(\mathbb F_q^d, dm)$ with $\mbox{supp}(g)=G.$ Then we have $$\|\widehat{g}\|_{L^2(P,d\sigma)} \le q^{\frac{1}{2}} |G|^{\frac{1}{2}}.$$ \end{lemma} \begin{proof} By the Plancherel theorem, we see that $$ \|{(fd\sigma)}^\vee\|_{L^2(\mathbb F_q^d, dm)} = q^{\frac{1}{2}} \|f\|_{L^2(P, d\sigma)} \quad \mbox{for all functions}~~f: P\to \mathbb C.$$ By duality, it is clear that $$ \|\widehat{g}\|_{L^2(P,d\sigma)} \le q^{\frac{1}{2}}\|g\|_{L^2( \mathbb F_q^d, dm)} \le q^{\frac{1}{2}} \|G\|_{L^2( \mathbb F_q^d, dm)} = q^{\frac{1}{2}} |G|^{\frac{1}{2}},$$ where the last inequality follows from the property of the regular function $g$ (namely, $\frac{1}{2}\le |g|\le 1$ on its support $G$.) \end{proof} The following result is well known in \cite{MT04} (see also \cite{IK09}). \begin{lemma}\label{explicit} Let $d\sigma$ be the normalized surface measure on the paraboloid $P \subset (\mathbb F_q^d, d\xi).$ For each $m=(\underline{m}, m_d) \in {\mathbb F}_q^{d-1}\times {\mathbb F}_q$, we have $$ (d\sigma)^{\vee}(m)= \left\{\begin{array}{ll} q^{-(d-1)} \chi \left( \frac{\|\underline{m}\| }{-4m_d}\right) \eta^{d-1}(m_d)\, G_1^{d-1} \quad &\mbox{if} \quad m_d \ne 0\\ 0 \quad &\mbox{if} \quad m_d =0,\, m \ne (0,\ldots,0)\\ 1 \quad &\mbox{if} \quad m=(0,\ldots,0).\end{array}\right.,$$ where $\|\underline{m}\|:=m_1^2+\cdots+ m_{d-1}^2$, $\eta$ denotes the quadratic character of $\mathbb F_q^*$, and $ G_1$ denotes the standard Gauss sum with $|G_1|=|\sum\limits_{s\ne 0} \eta(s) \chi(s)|=q^{\frac{1}{2}}.$ \end{lemma} When a regular function $g$ is supported on a small set $G$, the following result will be useful to deduce a good $L^2$ restriction estimate.
\begin{lemma}\label{lem3.5} If $g$ is a regular function on $(\mathbb F_q^d, dm)$ with $\mbox{supp}(g)=G,$ then we have $$\|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + q^{\frac{-d+1}{4}} |G|.$$ \end{lemma} \begin{proof} It follows that \begin{align*} \|\widehat{g}\|^2_{L^2(P, d\sigma)} &=\frac{1}{|P|} \sum_{\xi \in P} |\widehat{g}(\xi)|^2=\frac{1}{q^{d-1}} \sum_{\xi\in P} \sum_{m, m'\in G} \chi(\xi\cdot(m-m')) g(m) \overline{g(m')}\\ &=q \sum_{m,m'\in G} {P}^\vee(m-m')g(m) \overline{g(m')} \le q \sum_{m, m'\in G} |{P}^\vee(m-m')|\\ &=q \sum_{m\in G} |{P}^\vee(0,\ldots,0)| + q \sum_{m,m'\in G: m\ne m'} |{P}^\vee(m-m')|= \mbox{I} + \mbox{II}. \end{align*} Since ${P}^\vee(0,\ldots,0) =\frac{|P|}{q^d}=\frac{1}{q},$ we see that $\mbox{I}=|G|.$ To estimate $\mbox{II}$, we observe from Lemma \ref{explicit} that if $w\ne (0,\ldots,0),$ $$ |{P}^\vee(w)| = \left|\frac{1}{q} \,(d\sigma)^\vee(w)\right|\le q^{\frac{-d-1}{2}}.$$ Then it is clear that $\mbox{II}\le q^{\frac{-d+1}{2}} |G|^2.$ Putting all estimates together, we obtain the lemma. \end{proof} The improved $L^p\to L^2$ restriction estimates for paraboloids have been obtained by extending the idea of Carbery \cite{Ca92} to the finite field setting. For instance, Mockenhaupt and Tao \cite{MT04} observed that the restriction operator acting on a single vertical slice of $g$, say $g_a$ for $a\in \mathbb F_q,$ is closely related to the extension operator applied to a function $h$ on $P$, which can be identified with the slice function $g_a.$ In fact, they found a connection between the $L^p\to L^2$ restriction estimate and the $L^p\to L^4$ extension estimate obtained from the additive energy estimation. Recall that the additive energy $\Lambda(E)$ for $E\subset P$ is given by \begin{equation}\label{additive} \Lambda(E):= \sum_{x,y,z, w\in E: x+y=z+w} 1.\end{equation} As a consequence, they obtained the extension result \eqref{Ta3D} for the 3-D paraboloid.
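The identity behind this connection, which will be used again in Section 4, reads $\|(E\,d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)}^4=q^{-3d+4}\,\Lambda(E)$ for $E\subset P$. Purely as an illustration, it can be confirmed by brute force in a toy case; the script below and the parameter choices $q=5$, $d=2$ are ours and play no role in the arguments of this paper.

```python
import itertools, cmath

q, d = 5, 2  # our toy parameters: a small prime field and the parabola in F_q^2
P = [(t, (t * t) % q) for t in range(q)]  # P = {(t, t^2) : t in F_q}

def chi(t):
    # the canonical additive character chi(t) = e^{2 pi i t / q}
    return cmath.exp(2j * cmath.pi * (t % q) / q)

def energy(E):
    # additive energy: number of quadruples x + y = z + w with x, y, z, w in E
    return sum(1 for x, y, z, w in itertools.product(E, repeat=4)
               if all((x[i] + y[i] - z[i] - w[i]) % q == 0 for i in range(d)))

def ext(E, m):
    # (E dsigma)^vee(m) = |P|^{-1} sum_{xi in E} chi(m . xi)
    return sum(chi(sum(mi * xi for mi, xi in zip(m, x))) for x in E) / len(P)

Lam = energy(P)
# counting measure dm on the frequency side: ||F||_{L^4(dm)}^4 = sum_m |F(m)|^4
L4 = sum(abs(ext(P, m)) ** 4 for m in itertools.product(range(q), repeat=d))
print(Lam, abs(L4 - q ** (-3 * d + 4) * Lam) < 1e-9)
```

For the parabola itself and odd $q$, one gets $\Lambda(P)=2q^2-q$, since the sum and the sum of squares of a pair determine the pair up to order.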
Working with the restriction operator applied to regular test functions, M. Lewko \cite{LL13} was able to achieve further improved extension results for the 3-D paraboloid (see \eqref{Lew3DG} and \eqref{Lew3D}). He also employed the relation between the $L^p\to L^2$ restriction estimate and the $L^p\to L^4$ extension result for the 3-D paraboloid. In this paper, we extend his work to higher-dimensional cases. To estimate $\|\widehat{g}\|_{L^2(P, d\sigma)}$, we will invoke not only $L^p\to L^4$ extension results but also $L^2\to L^r$ extension results for paraboloids in higher dimensions. The following lemma can be obtained by a modification of the machinery of Mockenhaupt and Tao, which explains the relation between the $L^p\to L^2$ restriction estimate and the $L^p\to L^4$ extension result for paraboloids. \begin{lemma}\label{key1} Let $P\subset \mathbb F_q^d$ be the paraboloid. Then the following statements hold: \begin{enumerate} \item Let $g$ be a regular function with the support $G\subset (\mathbb F_q^d, dm).$ For each $a\in L_G,$ let $h_a$ be a function on the paraboloid $P\subset (\mathbb F_q^d, d\xi)$ such that $\frac{1}{2} \le |h_a(\xi)|\le 1$ on $\mbox{supp}(h_a)$ and $|\mbox{supp}(h_a)|= |G_a|.$ In addition, assume that there exists a positive number $U(|E|)$ depending on the size of a set $E\subset P$ such that $|E|\sim |\mbox{supp}(h_a)|$ for all $a\in L_G$ and \begin{equation}\label{assumption1} \max_{a\in L_G} \|(h_a d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)} \lesssim U(|E|).\end{equation} Then we have $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}} q^{\frac{d-1}{4}} \left(U(|E|)\right)^{\frac{1}{2}}.$$ \item If $d\ge 4$ is even, or if $d=4\ell+3$ for $\ell\in \mathbb N$ and $ -1\in \mathbb F_q$ is not a square number, then $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{d^2+d-1}{2d^2}}|L_G|^{\frac{1}{4}}$$ for all regular functions $g$ on $(\mathbb F_q^d, dm)$ with $\mbox{supp}(g)=G.$ \end{enumerate}
\end{lemma} \begin{proof} By duality, it follows that $$ \|\widehat{g}\|^2_{L^2(P, d\sigma)} = <g,\, (\widehat{g} d\sigma)^\vee> =<g,\, g\ast (d\sigma)^\vee>.$$ Using the Bochner-Riesz kernel $K$ which is defined by $K(m)= (d\sigma)^\vee(m) -\delta_0(m)$ for $m\in (\mathbb F_q^d, dm),$ where $\delta_0(m)=1$ if $m=(0, \ldots,0)$ and $0$ otherwise, we can write from H\"{o}lder's inequality that for $1\le r\le \infty,$ \begin{align} \label{L2ofg} \|\widehat{g}\|^2_{L^2(P, d\sigma)} &= <g, \, g\ast \delta_0> + <g,\,g\ast K> \\ \nonumber &\le\|g\|^2_{L^2(\mathbb F_q^d, dm)} + \|g\|_{L^{r'}(\mathbb F_q^d, dm)} \,\| g\ast K\|_{L^r(\mathbb F_q^d, dm)}\\ \nonumber &\le |G| + |G|^{\frac{1}{r'}} \, \| g\ast K\|_{L^r(\mathbb F_q^d, dm)},\end{align} where the last inequality follows from the property of a regular function $g$ with $\frac{1}{2}\le |g|\le 1$ on its support $G.$ To estimate $\| g\ast K\|_{L^r(\mathbb F_q^d, dm)},$ define $g_a$ for $a\in L_G$ as the restriction of $g$ to the hyperplane $\{m=(m_1, \ldots, m_d)\in \mathbb F_q^d: m_d=a\}.$ Notice that $\mbox{supp}(g_a)=G_a$ for $a\in L_G.$ It follows that \begin{equation} \label{mLG}\| g\ast K\|_{L^r(\mathbb F_q^d, dm)} \le \sum_{a\in L_G} \|g_a\ast K\|_{L^r(\mathbb F_q^d, dm)}.\end{equation} By the definition of $K$ and Lemma \ref{explicit}, we see that for each $a\in L_G,$ \begin{align*} \|g_a\ast K\|_{L^r(\mathbb F_q^d, dm)} &= \left(\sum_{m\in \mathbb F_q^d} \left| \sum_{n\in \mathbb F_q^d} g_a(n) K(m-n)\right|^r\right)^{\frac{1}{r}}\\ &= q^{\frac{-d+1}{2}} \left( \sum_{\underline{m} \in \mathbb F_q^{d-1}} \sum_{m_d\ne a} \left| \sum_{\underline{n}\in \mathbb F_q^{d-1}} g(\underline{n}, a) \,\chi\left(\frac{\|\underline{m}-\underline{n}\|}{-4(m_d-a)}\right)\right|^r\right)^{\frac{1}{r}}, \end{align*} where we define $\|\underline{m}-\underline{n}\|=(\underline{m}-\underline{n})\cdot (\underline{m}-\underline{n}).$ After changing variables by letting $s=-m_d+a,$ we change variables once more by putting $
t=\frac{1}{4s}$ and $\underline{u}=\frac{-\underline{m}}{2s}.$ Then it follows that \begin{align*}\|g_a\ast K\|_{L^r(\mathbb F_q^d, dm)}&=q^{\frac{-d+1}{2}}\left( \sum_{\underline{u} \in \mathbb F_q^{d-1}} \sum_{t\ne 0} \left|\chi\left(\frac{\underline{u}\cdot \underline{u} }{4t} \right) \sum_{\underline{n}\in \mathbb F_q^{d-1}} g(\underline{n}, a) \,\chi\left((\underline{u}\cdot \underline{n})+t \,(\underline{n}\cdot \underline{n}) \right)\right|^r \right)^{\frac{1}{r}}\\ &=q^{\frac{-d+1}{2}}\left( \sum_{\underline{u} \in \mathbb F_q^{d-1}} \sum_{t\ne 0} \left| \sum_{\underline{n}\in \mathbb F_q^{d-1}} g(\underline{n}, a) \,\chi\left((\underline{u}, t)\cdot (\underline{n}, \,\underline{n}\cdot \underline{n}) \right)\right|^r \right)^{\frac{1}{r}}.\end{align*} Now, for each $a\in L_G,$ define $h_a$ as a function on the paraboloid $P$ given by \begin{equation}\label{relation} h_a(\underline{n}, \,\underline{n}\cdot \underline{n}) = g_a(n) =g(\underline{n}, a) \quad \mbox{for}~~ n=(\underline{n}, n_d) \in \mathbb F_q^{d-1} \times \mathbb F_q.\end{equation} Then we see that for each $a\in L_G,$ $$\|g_a\ast K\|_{L^r(\mathbb F_q^d, dm)} \le q^{\frac{d-1}{2}} \| (h_a d\sigma)^\vee\|_{L^r(\mathbb F_q^d, dm)}.$$ Hence, combining this with \eqref{mLG}, the inequality \eqref{L2ofg} implies that \begin{equation}\label{L2formula} \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{1}{2r'}} q^{\frac{d-1}{4}} \left(\sum_{a\in L_G} \|(h_ad\sigma)^\vee \|_{L^r(\mathbb F_q^d, dm)}\right)^{\frac{1}{2}}. \end{equation} \subsection{Proof of the statement (1) in Lemma \ref{key1}} Since $g$ is a regular function supported on the regular set $G,$ it is clear from the definition of $h_a$ that $\frac{1}{2}\le |h_a(\xi)|\le 1$ on $\mbox{supp}(h_a)$ and $|\mbox{supp}(h_a)|=|\mbox{supp}(g_a)|=|G_a|$ for $a\in L_G.$ Thus, using the assumption \eqref{assumption1} with $r=4$, the inequality \eqref{L2formula} gives the desirable conclusion.
\subsection{Proof of the statement (2) in Lemma \ref{key1}} We shall appeal to the following $L^2\to L^r$ extension result obtained by A. Lewko and M. Lewko (see Theorem 2 in \cite{LL10}). \begin{lemma}\label{LLL} Let $P$ be the paraboloid in $(\mathbb F_q^d, d\xi).$ If $d\ge 4$ is even, or if $d=4\ell+3$ for $\ell\in \mathbb N$ and $ -1\in \mathbb F_q$ is not a square number, then we have $$ R^*_P\left(2\to \frac{2d^2}{d^2-2d+2}\right) \lesssim 1.$$ \end{lemma} Applying this lemma to the inequality \eqref{L2formula} with $r=\frac{2d^2}{d^2-2d+2},$ it follows that $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{d^2+2d-2}{4d^2}} q^{\frac{d-1}{4}} \left(\sum_{a\in L_G} \|h_a \|_{L^2(P, d\sigma)}\right)^{\frac{1}{2}}.$$ By the Cauchy-Schwarz inequality and the definition of $h_a$ given in \eqref{relation}, we conclude that \begin{align*} \|\widehat{g}\|_{L^2(P, d\sigma)} &\lesssim |G|^{\frac{1}{2}} + |G|^{\frac{d^2+2d-2}{4d^2}} q^{\frac{d-1}{4}} |L_G|^{\frac{1}{4}} \left( \sum_{a\in L_G} \|h_a \|^2_{L^2(P, d\sigma)}\right)^{\frac{1}{4}}\\ &=|G|^{\frac{1}{2}} + |G|^{\frac{d^2+2d-2}{4d^2}} q^{\frac{d-1}{4}} |L_G|^{\frac{1}{4}} \left( \sum_{a\in L_G} \frac{1}{q^{d-1}} \sum_{n\in P} |h_a(n)|^2 \right)^{\frac{1}{4}}\\ &= |G|^{\frac{1}{2}} + |G|^{\frac{d^2+2d-2}{4d^2}} |L_G|^{\frac{1}{4}} \left( \sum_{a\in L_G}\sum_{n\in \mathbb F_q^d} |g_a(n)|^2 \right)^{\frac{1}{4}}\\ &= |G|^{\frac{1}{2}} + |G|^{\frac{d^2+2d-2}{4d^2}} |L_G|^{\frac{1}{4}} \left(\sum_{n\in \mathbb F_q^d} |g(n)|^2 \right)^{\frac{1}{4}}\\ &\le |G|^{\frac{1}{2}} + |G|^{\frac{d^2+2d-2}{4d^2}} |L_G|^{\frac{1}{4}} |G|^{\frac{1}{4}} \lesssim |G|^{\frac{d^2+d-1}{2d^2}}|L_G|^{\frac{1}{4}}, \end{align*} where the last line follows because $ \frac{1}{2} \le |g(n)| \le 1$ on its support $G.$ \end{proof} \section{Proof of main theorems} First, let us describe the basic ideas behind the proofs of our main results.
We want to improve Lemma \ref{LLL}, which is the previously best known result on extension problems for paraboloids in higher dimensions. By duality, Lemma \ref{LLL} implies the following restriction estimate: \begin{equation}\label{oldresult} \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim \|g\|_{L^{\frac{2d^2}{d^2+2d-2}}(\mathbb F_q^d, dm)}.\end{equation} Now let us consider only regular functions $g$ with support $G.$ Since $\|g\|_{L^p(\mathbb F_q^d, dm)} \sim |G|^{\frac{1}{p}}$, when $|G|$ is much bigger than $q^{\frac{d^2}{2d-2}}$, Lemma \ref{lem3.3} already gives us a better result than \eqref{oldresult}. On the other hand, when $|G|$ is very small, Lemma \ref{lem3.5} yields very strong results. Therefore, our main task is to obtain a much better estimate than \eqref{oldresult} for every set $G$ with $q^{\frac{d^2}{2d-2}-\delta} \le |G| \le q^{\frac{d^2}{2d-2}+\varepsilon}$ for some $\delta,\,\varepsilon>0.$ This will be successfully done by applying Lemma \ref{key1}. In practice, we need to find a $U(|E|)$ in the conclusion of the first part of Lemma \ref{key1}. To do this, we shall invoke the following additive energy estimates due to Iosevich and Koh (see Lemma 7, Lemma 8, and Remark 4 in \cite{IK09}). \begin{lemma}\label{key} Let $P$ be the paraboloid in $({\mathbb F}_q^d , d\xi).$ Then the following statements hold: \begin{enumerate} \item If the dimension $d\ge 4$ is even and $E\subset P$, then we have $$\Lambda(E) \lesssim \min \{ |E|^3,~~ q^{-1}|E|^3+q^{\frac{d-2}{4}}|E|^{\frac{5}{2}} + q^{\frac{d-2}{2}}|E|^2 \}$$ \item If $d=4\ell+3$ for $\ell\in \mathbb N$, $ -1\in \mathbb F_q$ is not a square number, and $E\subset P$, then we have $$\Lambda(E) \lesssim \min \{ |E|^3,~~ q^{-1}|E|^3+q^{\frac{d-3}{4}}|E|^{\frac{5}{2}} + q^{\frac{d-2}{2}}|E|^2 \},$$ where $\Lambda(E)$ denotes the additive energy defined as in \eqref{additive}.
\end{enumerate} \end{lemma} As we shall see, we only need the upper bound of $\Lambda(E)$ for a restricted range of sizes of $E \subset P.$ Identifying the dominant term in each such range of $|E|$, the following result is a simple corollary of the lemma above. \begin{corollary}\label{cor1} For the paraboloid $P \subset (\mathbb F_q^d, d\xi),$ we have the following facts: \begin{enumerate} \item If the dimension $d\ge 4$ is even and $E$ is any subset of $P$ with $q^{\frac{d-2}{2}} \le |E|\le q^{\frac{d+2}{2}},$ then $$\Lambda(E) \lesssim q^{\frac{d-2}{4}}|E|^{\frac{5}{2}}$$ \item Suppose that $d=4\ell+3$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number. Then, for any subset $E$ of $P$ with $q^{\frac{d-2}{2}} \le |E|\le q^{\frac{d+1}{2}},$ we have $$\Lambda(E) \lesssim q^{\frac{d-3}{4}}|E|^{\frac{5}{2}} + q^{\frac{d-2}{2}}|E|^2.$$ \end{enumerate} \end{corollary} We can deduce the following result by applying Corollary \ref{cor1} to the first part of Lemma \ref{key1}. \begin{lemma} \label{lem4.3} Let $g$ be a regular function with its support $G\subset (\mathbb F_q^d, dm).$ Then the following statements are valid: \begin{enumerate} \item If the dimension $d\ge 4$ is even and $q^{\frac{d-2}{2}} \lesssim |G_a| \lesssim q^{\frac{d+2}{2}}$ for $a\in L_G,$ then we have $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}}\, |L_G|^{\frac{3}{16}} q^{\frac{-3d+6}{32}}$$ \item Assume that $d=4\ell+3$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number. Then if $q^{\frac{d-2}{2}} \lesssim |G_a| \lesssim q^{\frac{d+1}{2}}$ for $a\in L_G$, we have $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}} |L_G|^{\frac{3}{16}} q^{\frac{-3d+5}{32}}+|G|^{\frac{5}{8}} |L_G|^{\frac{1}{4}} q^{\frac{-d+2}{16}}.$$ \end{enumerate} \end{lemma} \begin{proof} For each $a\in L_G$, let $h_a$ be the function on $P$ given in the statement $(1)$ of Lemma \ref{key1}.
For each $a\in L_G,$ let $H_a=\mbox{supp}(h_a).$ Since $\frac{1}{2} \le |h_a|\le 1$ on its support $H_a,$ expanding the $L^4$ norm of $(h_ad\sigma)^\vee$ gives $$ \|(h_a d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)} \le \| (H_a d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)} = q^{\frac{-3d+4}{4}} (\Lambda(H_a))^{\frac{1}{4}}. $$ First, let us prove the first part of Lemma \ref{lem4.3}. Since $|G_a|=|H_a|$ for $a\in L_G,$ the first part of Corollary \ref{cor1} and the above inequality yield $$ \|(h_a d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)} \lesssim q^{\frac{-3d+4}{4}} \left(q^{\frac{d-2}{4}}|H_a|^{\frac{5}{2}}\right)^{\frac{1}{4}}=q^{\frac{-11d+14}{16}} |H_a|^{\frac{5}{8}}.$$ By the definition of a regular set $G,$ it is obvious that $|G_a|\sim |G_{a'}|$ for $a, a' \in L_G.$ Hence, $|H_a|\sim |H_{a'}|$ for $a, a' \in L_G.$ Thus, we can choose $E\subset P$ such that $|E|\sim |H_a|$ for all $a\in L_G.$ It follows that $$ \max_{a\in L_G} \|(h_a d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)} \lesssim q^{\frac{-11d+14}{16}} |E|^{\frac{5}{8}}:=U(|E|).$$ By applying the first part of Lemma \ref{key1} and observing that $|G|\sim |G_a||L_G|\sim |E||L_G|$ for all $a\in L_G,$ we conclude that \begin{align*}\|\widehat{g}\|_{L^2(P, d\sigma)} &\lesssim |G|^{\frac{1}{2}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}} q^{\frac{d-1}{4}} \left(q^{\frac{-11d+14}{16}} |E|^{\frac{5}{8}}\right)^{\frac{1}{2}}\\ &\sim |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}}\, |L_G|^{\frac{3}{16}} q^{\frac{-3d+6}{32}},\end{align*} which proves the first part of Lemma \ref{lem4.3}. \\ To prove the second part of Lemma \ref{lem4.3}, we use the same arguments as in the proof of the first part of Lemma \ref{lem4.3}.
In this case, we just utilize the second part of Corollary \ref{cor1} to see that \begin{align*} \max_{a\in L_G} \|(h_a d\sigma)^\vee\|_{L^4(\mathbb F_q^d, dm)} &\lesssim q^{\frac{-3d+4}{4}}\left( q^{\frac{d-3}{4}}|E|^{\frac{5}{2}} + q^{\frac{d-2}{2}}|E|^2\right)^{\frac{1}{4}}\\ &\sim q^{\frac{-3d+4}{4}}\left( q^{\frac{d-3}{16}}|E|^{\frac{5}{8}} + q^{\frac{d-2}{8}}|E|^{\frac{1}{2}}\right)\\ &=q^{\frac{-11d+13}{16}}|E|^{\frac{5}{8}} + q^{\frac{-5d+6}{8}}|E|^{\frac{1}{2}} :=U(|E|).\end{align*} As before, we appeal to the first part of Lemma \ref{key1} and use that $|G|\sim |G_a||L_G|\sim |E||L_G|$ for all $a\in L_G.$ Then the proof of the second part of Lemma \ref{lem4.3} is complete as follows: \begin{align*} \|\widehat{g}\|_{L^2(P, d\sigma)} &\lesssim |G|^{\frac{1}{2}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}} q^{\frac{d-1}{4}} \left(q^{\frac{-11d+13}{16}}|E|^{\frac{5}{8}} + q^{\frac{-5d+6}{8}}|E|^{\frac{1}{2}}\right)^{\frac{1}{2}}\\ &\sim |G|^{\frac{1}{2}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}} q^{\frac{d-1}{4}} \left(q^{\frac{-11d+13}{32}}|E|^{\frac{5}{16}} + q^{\frac{-5d+6}{16}}|E|^{\frac{1}{4}} \right)\\ &=|G|^{\frac{1}{2}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}} q^{\frac{d-1}{4}} q^{\frac{-11d+13}{32}}|E|^{\frac{5}{16}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}} q^{\frac{d-1}{4}}q^{\frac{-5d+6} {16}}|E|^{\frac{1}{4}}\\ &=|G|^{\frac{1}{2}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}}|E|^{\frac{5}{16}} q^{\frac{-3d+5}{32}} + |G|^{\frac{3}{8}}\, |L_G|^{\frac{1}{2}}|E|^{\frac{1}{4}}q^{\frac{-d+2}{16}}\\ &\sim |G|^{\frac{1}{2}} +|G|^{\frac{11}{16}}\, |L_G|^{\frac{3}{16}} q^{\frac{-3d+5}{32}} + |G|^{\frac{5}{8}}\, |L_G|^{\frac{1}{4}}q^{\frac{-d+2}{16}}. \end{align*} \end{proof} We are ready to complete the proof of our main theorems, Theorem \ref{main1} and Theorem \ref{main2}, which will be proved in the following subsections.
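Before turning to the proofs, the claimed numerology is easy to double-check with exact rational arithmetic. The following short script (the range of test dimensions is our own choice) confirms that the exponents of Theorem \ref{main1} and Theorem \ref{main2} are strictly smaller than the exponent $2d^2/(d^2-2d+2)$ of Lemma \ref{LLL}, and that the Theorem \ref{main2} exponent would read $7/2$ at $d=3$.

```python
from fractions import Fraction as F

def lewko_lewko(d):
    # exponent of Lemma \ref{LLL}: R^*_P(2 -> 2d^2/(d^2 - 2d + 2))
    return F(2 * d * d, d * d - 2 * d + 2)

# Theorem \ref{main1}: even dimensions d >= 6
main1_better = all(F(6 * d + 8, 3 * d - 2) < lewko_lewko(d)
                   for d in range(6, 42, 2))

# Theorem \ref{main2}: d = 4l + 3 with l >= 1, i.e. d = 7, 11, 15, ...
main2_better = all(F(6 * d + 10, 3 * d - 1) < lewko_lewko(d)
                   for d in range(7, 43, 4))

# at d = 3 the exponent of Theorem \ref{main2} would read 7/2 < 18/5
d3_exponent = F(6 * 3 + 10, 3 * 3 - 1)
print(main1_better, main2_better, d3_exponent)
```

In fact, cross-multiplying shows $2d^2(3d-2)-(6d+8)(d^2-2d+2)=4d-16>0$ for $d>4$, so the improvement in Theorem \ref{main1} holds for every even $d\ge 6$.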
\subsection{Proof of Theorem \ref{main1}} By duality and Lemma \ref{lem3.2}, it is enough to prove the following statement: \begin{theorem}\label{main1-1} If the dimension $d\ge 6$ is even, then we have $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim \|g\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)}$$ for every regular function $g$ supported on $G \subset (\mathbb F_q^d, dm).$ \end{theorem} \begin{proof} As mentioned at the beginning of this section, it is helpful to work with three kinds of regular functions $g$, classified according to the size of $G=\mbox{supp}(g)$ as follows: for some $\varepsilon, \delta>0,$ $$ (1)~~ 1\le |G| \le q^{\frac{d^2}{2d-2}-\delta} \quad (2)~~ q^{\frac{d^2}{2d-2}-\delta} \le |G| \le q^{\frac{d^2}{2d-2}+\varepsilon} \quad (3)~~ q^{\frac{d^2}{2d-2}+\varepsilon}\le |G| \le q^d.$$ Notice that Lemma \ref{lem3.3} yields a stronger restriction inequality as $|G|$ becomes larger. Thus, Lemma \ref{lem3.3} is useful for the case (3). Also observe that Lemma \ref{lem3.5} gives a better restriction inequality for smaller $G$, and so it is helpful for the case (1). Thus, choosing large $\varepsilon$ and $\delta$ will yield good results for both the case (1) and the case (3). However, whenever $\varepsilon$ and $\delta$ become larger, the restriction estimate will be worse for the case (2). Hence, to deduce desirable results for all cases, our main task is to select optimal values of $\varepsilon$ and $\delta.$ Now, let us see how to find the optimal $\varepsilon$ and $\delta.$ Let $\varepsilon, \delta>0$ be parameters which will be chosen later.
Let $g$ be a regular function with its support $G$ such that \begin{equation} \label{size1} q^{\frac{d^2}{2d-2}-\delta} \le |G| \le q^{\frac{d^2}{2d-2}+\varepsilon}.\end{equation} Let $|L_G|=q^\alpha$ for $0\le \alpha \le 1.$ Since $|G|\sim |G_a||L_G|=|G_a| q^\alpha$ for $a\in L_G,$ it must follow that for every $a\in L_G,$ $$ q^{\frac{d^2}{2d-2}-\delta-\alpha} \lesssim |G_a| \lesssim q^{\frac{d^2}{2d-2}+\varepsilon-\alpha}.$$ In order to use the first part of Lemma \ref{lem4.3}, we need to choose $\varepsilon, \delta>0$ such that $$q^{\frac{d-2}{2}}\le q^{\frac{d^2}{2d-2}-\delta-\alpha} \lesssim |G_a| \lesssim q^{\frac{d^2}{2d-2}+\varepsilon-\alpha} \le q^{\frac{d+2}{2}}.$$ Thus, if we select $\varepsilon, \delta>0$ satisfying that \begin{equation}\label{conditione-d} \delta+\alpha \le \frac{3d-2}{2d-2} \quad \mbox{and}\quad \varepsilon-\alpha\le \frac{d-2}{2d-2},\end{equation} then the first part of Lemma \ref{lem4.3} yields \begin{equation}\label{middle1} \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}}\, q^{\frac{-3d+12}{32}} \quad \mbox{for}~~ q^{\frac{d^2}{2d-2}-\delta} \le |G| \le q^{\frac{d^2}{2d-2}+\varepsilon},\end{equation} where we use the fact that $|L_G|\le q.$ Notice that this inequality gives worse restriction results whenever $|G|$ becomes larger.
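For the reader's convenience, the two thresholds in \eqref{conditione-d} come from the elementary identities $$\frac{d^2}{2d-2}-\frac{3d-2}{2d-2}=\frac{d^2-3d+2}{2d-2}=\frac{d-2}{2} \quad \mbox{and}\quad \frac{d^2}{2d-2}+\frac{d-2}{2d-2}=\frac{d^2+d-2}{2d-2}=\frac{d+2}{2},$$ which show that \eqref{conditione-d} is exactly the condition $q^{\frac{d-2}{2}}\le q^{\frac{d^2}{2d-2}-\delta-\alpha}$ together with $q^{\frac{d^2}{2d-2}+\varepsilon-\alpha}\le q^{\frac{d+2}{2}}.$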
Thus, comparing this inequality with Lemma \ref{lem3.3}, which gives a better restriction inequality for large $G,$ it is desirable to choose a possibly large $\varepsilon>0$ such that $$ |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}}\, q^{\frac{-3d+12}{32}} \lesssim |G|^{\frac{1}{2}} q^{\frac{1}{2}}\, \left(\mbox{namely,}~ |G|\lesssim q^{\frac{3d+4}{6}}\right) \quad \mbox{and}\quad |G| \le q^{\frac{d^2}{2d-2}+\varepsilon}.$$ For this reason, we take $\varepsilon =\frac{d-4}{6d-6}$ which is positive for even $d\ge 6.$ Then we can take $\delta=\frac{d}{2d-2}$ so that the inequality \eqref{conditione-d} holds for all $0\le \alpha \le 1.$ Now we start proving Theorem \ref{main1-1}.\\ \noindent {\bf (Case I)} Assume that $q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+4}{6}},$ which is the case in \eqref{size1} for $\varepsilon =\frac{d-4}{6d-6}$ and $\delta=\frac{d}{2d-2}.$ Then, by \eqref{middle1}, we see that $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}}\, q^{\frac{-3d+12}{32}} \quad \mbox{for}~~ q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+4}{6}}.$$ By the direct comparison, it follows that for all $ q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+4}{6}},$ $$ |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}}\, q^{\frac{-3d+12}{32}} \lesssim |G|^{\frac{3d+10}{6d+8}}=\|G\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)}.$$ Thus, the statement of Theorem \ref{main1-1} is valid for all regular functions $g$ on $(\mathbb F_q^d, dm)$ such that $q^{\frac{d}{2}} \le |\mbox{supp}(g)|=|G| \le q^{\frac{3d+4}{6}}.$\\ \noindent {\bf (Case II)} Assume that $1 \le |G| \le q^{\frac{d}{2}}.$ Applying Lemma \ref{lem3.5}, we obtain that $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + q^{\frac{-d+1}{4}} |G| \quad\mbox{for all} ~~1 \le |G| \le q^{\frac{d}{2}}.$$ In fact, this inequality gives a much stronger restriction estimate than Theorem \ref{main1-1} for $1 \le |G| \le q^{\frac{d}{2}}.$ By the direct
comparison, if $1\le |G| \le q^{\frac{d}{2}},$ then we have $$ |G|^{\frac{1}{2}} + q^{\frac{-d+1}{4}} |G| \lesssim |G|^{\frac{d+1}{2d}} = \|G\|_{L^{\frac{2d}{d+1}}(\mathbb F_q^d, dm)} \le \|G\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)}.$$ Hence, Theorem \ref{main1-1} is proved in this case.\\ \noindent {\bf (Case III)} Finally, assume that $ q^{\frac{3d+4}{6}}\le |G| \le q^d.$ In this case, by Lemma \ref{lem3.3} and the direct comparison, the statement of Theorem \ref{main1-1} holds: for all $q^{\frac{3d+4}{6}}\le |G| \le q^d,$ $$\|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} q^{\frac{1}{2}} \lesssim \|G\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+8}{3d+10}}(\mathbb F_q^d, dm)}.$$ This completes the proof. \end{proof} \subsection{Proof of Theorem \ref{main2}} Theorem \ref{main2} can be proved by following the same arguments as in the proof of Theorem \ref{main1}, but we will need additional work to deal with a regular set $G$ of intermediate size. The second part of Lemma \ref{key1} will play a crucial role in overcoming this problem. Now we start proving Theorem \ref{main2}. By duality and Lemma \ref{lem3.2}, it suffices to prove the following statement: \begin{theorem}\label{main2-2} If $d=4\ell+3$ for $\ell\in \mathbb N$, and $ -1\in \mathbb F_q$ is not a square number, then we have $$ \|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim \|g\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)}$$ for every regular function $g$ supported on $G \subset (\mathbb F_q^d, dm).$ \end{theorem} \begin{proof} As in the proof of Theorem \ref{main1-1}, let $g$ be a regular function supported on the set $G\subset (\mathbb F_q^d, dm)$ satisfying that \begin{equation} \label{size2} q^{\frac{d^2}{2d-2}-\delta} \le |G| \le q^{\frac{d^2}{2d-2}+\varepsilon}\end{equation} for some $\varepsilon, \delta>0$ which will be chosen later.
Let $|L_G|=q^\beta$ for $0\le \beta \le 1.$ Since $|G|\sim |G_a||L_G|=|G_a| q^\beta$ for $a\in L_G,$ it follows that for every $a\in L_G,$ $$ q^{\frac{d^2}{2d-2}-\delta-\beta} \lesssim |G_a| \lesssim q^{\frac{d^2}{2d-2}+\varepsilon-\beta}.$$ For such $\varepsilon, \delta>0$, assume that for every $a\in L_G,$ $$q^{\frac{d-2}{2}}\le q^{\frac{d^2}{2d-2}-\delta-\beta} \lesssim |G_a| \lesssim q^{\frac{d^2}{2d-2}+\varepsilon-\beta} \le q^{\frac{d+1}{2}}.$$ Namely, we assume that \begin{equation}\label{conditione-d1} \delta+\beta \le \frac{3d-2}{2d-2} \quad \mbox{and}\quad \frac{1}{2d-2}\le \beta -\varepsilon.\end{equation} Then using the second part of Lemma \ref{lem4.3}, we have \begin{align}\label{L2good} \nonumber \|\widehat{g}\|_{L^2(P, d\sigma)} &\lesssim |G|^{\frac{1}{2}} + |G|^{\frac{11}{16}} |L_G|^{\frac{3}{16}} q^{\frac{-3d+5}{32}}+|G|^{\frac{5}{8}} |L_G|^{\frac{1}{4}} q^{\frac{-d+2}{16}} \\ &\le |G|^{\frac{1}{2}} +|G|^{\frac{11}{16}} q^{\frac{-3d+11}{32}} + |G|^{\frac{5}{8}}q^{\frac{-d+6}{16}}, \end{align} where we used the fact that $|L_G|\le q.$ As before, by comparing this estimate with Lemma \ref{lem3.3}, we select $\varepsilon>0$ such that $|G|\le q^{\frac{3d+5}{6}}= q^{\frac{d^2}{2d-2}+\varepsilon}.$ Namely, we take $\varepsilon=\frac{2d-5}{6d-6}.$ With this $\varepsilon$, if $\frac{1}{3} \le \beta \le 1$ and we choose $ \delta=\frac{d}{2d-2},$ then all conditions in \eqref{conditione-d1} hold, since $\frac{1}{3}\le \beta \le 1.$ \begin{remark}\label{rem1} In conclusion, we have seen that if $g$ is a regular function with its support $G\subset (\mathbb F_q^d, dm)$ such that $q^{\frac{d^2}{2d-2}-\delta} \le |G| \le q^{\frac{d^2}{2d-2}+\varepsilon}$ and $ q^{\frac{1}{3}} \le |L_G|\le q$ for $\varepsilon=\frac{2d-5}{6d-6}$ and $\delta=\frac{d}{2d-2},$ then the inequality \eqref{L2good} holds. \end{remark} Now, we are ready to give the complete proof of Theorem \ref{main2-2}.
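The exponent arithmetic behind these choices of $\varepsilon$ and $\delta$ (and the corresponding choices in the proof of Theorem \ref{main1-1}) can be sanity-checked with exact rational arithmetic. The following Python snippet is an illustration only, not part of the argument; it verifies the identities $\frac{d^2}{2d-2}-\frac{d}{2d-2}=\frac{d}{2}$, $\frac{d^2}{2d-2}+\frac{d-4}{6d-6}=\frac{3d+4}{6}$, and $\frac{d^2}{2d-2}+\frac{2d-5}{6d-6}=\frac{3d+5}{6}$ used to delimit the three cases.

```python
from fractions import Fraction as F

def base(d):
    # the central exponent d^2/(2d-2) appearing in \eqref{size1} and \eqref{size2}
    return F(d * d, 2 * d - 2)

for d in (6, 8, 10, 12):  # even d >= 6 (Theorem main1-1)
    # upper end of case (2): q^{(3d+4)/6}; lower end: q^{d/2}
    assert base(d) + F(d - 4, 6 * d - 6) == F(3 * d + 4, 6)
    assert base(d) - F(d, 2 * d - 2) == F(d, 2)

for d in (7, 11, 15, 19):  # d = 4*l + 3 (Theorem main2-2)
    # upper end of case (2): q^{(3d+5)/6}; lower end: q^{d/2}
    assert base(d) + F(2 * d - 5, 6 * d - 6) == F(3 * d + 5, 6)
    assert base(d) - F(d, 2 * d - 2) == F(d, 2)
```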
\\ \noindent{(\bf Case 1)} Assume that $q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+5}{6}}$ which is the case in \eqref{size2} for $\varepsilon=\frac{2d-5}{6d-6}$ and $\delta=\frac{d}{2d-2}.$ In addition, assume that $q^{\frac{1}{3}} \le |L_G|\le q.$ Then, by Remark \ref{rem1} and the direct comparison, we see that if $q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+5}{6}}$ and $q^{\frac{1}{3}} \le |L_G|\le q,$ then for $d\ge 7,$ \begin{align*} \|\widehat{g}\|_{L^2(P, d\sigma)} &\lesssim |G|^{\frac{1}{2}} +|G|^{\frac{11}{16}} q^{\frac{-3d+11}{32}} + |G|^{\frac{5}{8}}q^{\frac{-d+6}{16}}\\ &\lesssim |G|^{\frac{3d+11}{6d+10}} =\|G\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)}. \end{align*} On the other hand, if $ 1\le |L_G|\le q^{\frac{1}{3}}$ and $ q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+5}{6}},$ then we see from the second part of Lemma \ref{key1} and the direct comparison that \begin{align*} \|\widehat{g}\|_{L^2(P, d\sigma)} &\lesssim |G|^{\frac{d^2+d-1}{2d^2}}|L_G|^{\frac{1}{4}} \le |G|^{\frac{d^2+d-1}{2d^2}} q^{\frac{1}{12}} \lesssim |G|^{\frac{3d^2+4d-3}{6d^2}}\\ &= \|G\|_{L^{\frac{6d^2}{3d^2+4d-3}}(\mathbb F_q^d, dm)} \le \|G\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)}.\end{align*} Thus, Theorem \ref{main2-2} holds for all $q^{\frac{d}{2}} \le |G| \le q^{\frac{3d+5}{6}}.$\\ \noindent{(\bf Case 2)} Assume that $1 \le |G| \le q^{\frac{d}{2}}.$ In this case, Theorem \ref{main2-2} can be proved by using Lemma \ref{lem3.5} and the direct comparison as follows: $$\|\widehat{g}\|_{L^2(P, d\sigma)} \lesssim |G|^{\frac{1}{2}} + q^{\frac{-d+1}{4}} |G|\lesssim |G|^{\frac{3d+11}{6d+10}}=\|G\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)}.$$ \noindent{(\bf Case 3)} Assume that $ q^{\frac{3d+5}{6}}\le |G| \le q^{d}.$ In this case, the statement of Theorem \ref{main2-2} holds by Lemma \ref{lem3.3} and the direct 
comparison as follows: $$\|\widehat{g}\|_{L^2(P,d\sigma)} \le q^{\frac{1}{2}} |G|^{\frac{1}{2}} \lesssim |G|^{\frac{3d+11}{6d+10}} =\|G\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)} \sim \|g\|_{L^{\frac{6d+10}{3d+11}}(\mathbb F_q^d, dm)}.$$ By Cases $1,2,$ and $3,$ the proof of Theorem \ref{main2-2} is complete. \end{proof} \begin{table}[ht] \caption{Progress on the finite field extension problem for paraboloids in lower dimensions} \begin{center} \begin{tabular}{|c|c|c|} \hline Dimension $d,$ & & \\ Field $\mathbb F_q$ & $R^*_P(p\to r)\lesssim 1$ & Authors \\ \hline $d=2,~$ general $q$ & $p=2,~r=4 ~\mbox{(S-T)}$ &Mockenhaupt and Tao \cite{MT04} ~(solution)\\ \hline $d=3,$ & $p=2,~r=4 ~\mbox{(S-T)}$ &Mockenhaupt and Tao \cite{MT04}~(sharp)\\ $-1$ a square& $p=2.25, ~r=3.6$ &M. Lewko \cite{Le14}~(sharp)\\ &$p=\frac{18-5\varepsilon}{8-5\varepsilon},~r=3.6-\varepsilon$ &M. Lewko \cite{Le14}~(sharp)\\ & for some $\varepsilon>0$ & \\ & $p=3,~r=3$& (conjectured)\\ \hline $d=3,$& $p=2,~r>3.6$ & Mockenhaupt and Tao \cite{MT04}\\ $-1$ not a square & $p>1.6,~r=4$ & Mockenhaupt and Tao \cite{MT04}\\ (prime $q$)& $p=2,~r=3.6$ & A. Lewko and M. Lewko \cite{LL10}\\ & $p=1.6,~r=4$ & A. Lewko and M. Lewko \cite{LL10}(sharp)\\ & $p=2,~r>3.6-\frac{1}{1035}$ & M. Lewko \cite{LL13} \\ & $p=2,~r=3 $ &(conjectured)\\ \hline $d=3,$& $p=2,~r=3.6-\varepsilon$ & M. 
Lewko \cite{LL13} \\ $-1$ not a square & for some $\varepsilon>0$ & \\ & $p=2,~r=3 $ &(conjectured)\\ \hline \end{tabular} \end{center} \label{tab:multicol} \end{table} \begin{table}[ht] \caption{Progress on the finite field extension problem for paraboloids in higher dimensions} \begin{center} \begin{tabular}{|c|c|c|} \hline Dimension $d,$ & & \\ Field $\mathbb F_q$ & $R^*_P(p\to r)\lesssim 1$ & Authors \\ \hline $d\ge 4$ even, & $p=2,~r=\frac{2d+2}{d-1}$~(S-T) & Mockenhaupt and Tao \cite{MT04}\\ general $q$ & $p=2,~r>\frac{2d^2}{d^2-2d+2}$ & Iosevich and Koh \cite{IK09}\\ & $p> \frac{4d}{3d-2},~ r=4$ & Iosevich and Koh \cite{IK09}\\ & $p=2,~r=\frac{2d^2}{d^2-2d+2}$ & A. Lewko and M. Lewko \cite{LL10}\\ & $p=\frac{4d}{3d-2},~ r=4$ & A. Lewko and M. Lewko \cite{LL10}~(sharp)\\ & $p=2,~r>\frac{6d+8}{3d-2}$ & Theorem 1.4\\ & $p= \frac{2d^2}{d^2-d+2},~r=\frac{2d}{d-1}$ & (conjectured) \\ & $p=2,~r=\frac{2d+4}{d}$ & (conjectured best $r$ for $p=2$)\\ \hline $d\ge 5$ odd, & $p=2,~r=\frac{2d+2}{d-1}$~(S-T) & Mockenhaupt and Tao \cite{MT04}~(sharp)\\ $-1$ a square & $p=\frac{2d+2}{d-1},~r=\frac{2d+2}{d-1}-\varepsilon_d$ & M. Lewko \cite{Le14}\\ & for some $\varepsilon_d>0$ & \\ & $p=\frac{2d}{d-1},~r=\frac{2d}{d-1}$& (conjectured)\\ \hline $d=4\ell+1$ for $\ell\in \mathbb N,$ & $p=2,~r=\frac{2d+2}{d-1}$~(S-T) & Mockenhaupt and Tao \cite{MT04}~(sharp)\\ $-1$ not a square & & \\ & $p=\frac{2d}{d-1},~r=\frac{2d}{d-1}$& (conjectured)\\ \hline $d=4\ell+3$ for $\ell\in \mathbb N,$ & $p=2,~r=\frac{2d+2}{d-1}$~(S-T) & Mockenhaupt and Tao \cite{MT04}\\ $-1$ not a square & $p=2,~r>\frac{2d^2}{d^2-2d+2}$ & Iosevich and Koh \cite{IK09}\\ & $p> \frac{4d}{3d-2},~ r=4$ & Iosevich and Koh \cite{IK09}\\ & $p=2,~r=\frac{2d^2}{d^2-2d+2}$ & A. Lewko and M. Lewko \cite{LL10}\\ & $p=\frac{4d}{3d-2},~ r=4$ & A. Lewko and M. 
Lewko \cite{LL10}\\ & $p=2,~r>\frac{6d+10}{3d-1}$ & Theorem 1.5\\ & $p=\frac{2d^2+2d }{d^2+3},~ r=\frac{2d}{d-1}$ & (conjectured)\\ & $p=2,~r=\frac{2d+6}{d+1}$ & (conjectured best $r$ for $p=2$)\\ \hline \end{tabular} \end{center} \label{tab:multicol2} \end{table} \newpage \bibliographystyle{amsplain}
\section{Main Results (Theorems \ref{thm:cmlealgorithm} and \ref{thm:ABS})} \begin{theorem}\label{thm:cmlealgorithm} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and $\delta, \epsilon \in[0,1]$. If $X$ has an $(f,g)$-balanced optimal CMLE solution, then there exists an algorithm which computes a mixture of $K$ spherical Gaussians $\theta = \{(w_k,\mu_k,\sigma^2_k)\}_{k=1}^K$, such that \[ Pr \left[ \mathcal{L}_X(\theta) \leq (1+\epsilon)OPT(X,K)\right] \geq 1-\delta \ . \] The runtime of the algorithm is bounded by \begin{align*} \abs{X}\cdot K \cdot \log(\Gamma)\cdot \log(g(K))\cdot 2^{\tilde{\cal O}\left( \frac{f(K)}{\epsilon\delta}\right)} \end{align*} where $\Gamma \leq 2\cdot\ln\left(32\pi \cdot OPT_{diam}(X,K)\right) + \ln(K) + 1$. \end{theorem} \begin{corollary} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and $\delta, \epsilon\in[0,1]$. If $X$ has an $f$-balanced optimal CMLE solution, then there exists an algorithm which computes a mixture of $K$ spherical Gaussians $\theta$, such that \[ Pr \left[ \mathcal{L}_X(\theta) \leq (1+\epsilon)OPT(X,K)\right] \geq 1-\delta \ . \] The runtime of the algorithm is bounded by \begin{align*} \abs{X}\cdot K \cdot \log(\Gamma)^2\cdot 2^{\tilde{\cal O}\left( \frac{f(K)}{\epsilon\delta}\right)} \end{align*} where $\Gamma \leq 2\cdot\ln\left(32\pi \cdot OPT_{diam}(X,K)\right) + \ln(K) + 1$. \end{corollary} \begin{theorem} \label{thm:ABS} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$, and $\delta,\epsilon > 0$. Let $\mathcal{C}=\dot\bigcup_{k=1}^K C_k$ be a well-defined solution for the CMLE problem. There is an algorithm that computes a mixture of $K$ spherical Gaussians $\theta$, such that \[ \Pr\left[ \mathcal{L}_{X}(\theta) \leq (1+\epsilon) \mathcal{L}_X(\mathcal{C}) \right] \geq 1-\delta\ . 
\] The running time of the algorithm is bounded by \[ \abs{X}\,d\,\log\left(\frac{1}{\delta}\right)\,2^{{\cal O}\left( \frac{K}{\epsilon}\cdot \log\left(\frac{K}{\epsilon^2}\right) \right)}\, \left(\log(\log(\Delta^2))+1\right)^K \left( \log(f(K))\right)^K \ ,\] where $\Delta^2 = \max_{x,y\in X} \{ \norm{x-y}^2\}$. \end{theorem} \section{Preliminaries} Given a set of observations, the objective of the CMLE problem is to find a Gaussian mixture model and a hard clustering with maximum complete-data likelihood. In this section, we will first describe and define this objective function. Then, we will present an alternating optimization scheme for this problem. However, the problem is not well-defined. Hence, we will restrict the problem to reasonable instances and solutions. \subsection{Complete-Data Log-Likelihood} Let $X\subset\mathbbm{R}^d$ be a finite set of observations. Given a spherical Gaussian distribution $\mathcal{N}_d(\mu,\sigma)$, the \emph{likelihood} that all $x\in X$ have been drawn according to $\mathcal{N}_d(\mu,\sigma)$ is given by \[ \prod_{x\in X} \mathcal{N}_d(x | \mu,\sigma)\ ,\] assuming that the observations have been drawn independently at random. \begin{definition} Given a finite set $X\subset \mathbbm{R}^d$ and a spherical Gaussian distribution with mean $\mu\in\mathbbm{R}^d$ and variance $\sigma^2 \in\mathbbm{R}$, let \[ \mathcal{L}_X(\mu,\sigma^2) \coloneqq-\ln \left( \prod_{x\in X} p(x|\mu,\sigma^2)\right) = \frac{\abs{X}d}{2}\ln(2\pi \sigma^2) + \frac{1}{2\sigma^2} \sum_{x\in X} \norm{x-\mu}^2 \ .\] We denote the minimal value by $OPT(X,1) = \min_{(\mu,\sigma^2)} \mathcal{L}_X(\mu,\sigma^2)$. \end{definition} Now consider a Gaussian mixture model (GMM) given by parameters $\theta = \{(w_k,\mu_k,\sigma^2_k)\}_{k=1}^K$. Drawing an observation $x_n$ according to a GMM corresponds to a two-step process: \begin{enumerate} \item Draw a component $z_n\OneTo{K}$ with probability $p(z_n=k|\theta)=w_k$.
\item Draw an observation $x_n \in X$ according to $\mathcal{N}_d(\mu_{z_n},\sigma_{z_n})$. \end{enumerate} Note that the assignment $z_n\OneTo{K}$ is a (latent) random variable in this two-step process. With the help of this random variable, we can compute the likelihood that observation $x_n$ has been generated by the $k$-th component of the GMM, i.e. \[ p(x_n,z_n=k|\theta) = p(z_n=k|\theta)\cdot p(x_n|z_n=k,\theta) = w_k\cdot \mathcal{N}_d(x_n|\mu_k,\sigma_k)\ .\] Since $x_n$ and $z_n$ completely describe the two-step process, the likelihood $p(x_n,z_n|\theta)$ is also called \emph{complete-data likelihood}, while $p(x_n|\theta) = \sum_{z_n=1}^K p(x_n,z_n|\theta)$ is referred to as \emph{(marginal) likelihood}. Assume we are given a set of observations $X=\{x_n\}_{n=1}^N$ and assignments $\{z_n\}_{n=1}^N$. Then, the likelihood that all observations have been drawn according to a GMM $\theta$ and that each $x_n$ has been generated by the $z_n$-th component is given by \begin{align} \prod_{n=1}^N p(x_n,z_n|\theta) = \prod_{n=1}^N w_{z_n}\cdot \mathcal{N}_d(x_n|\mu_{z_n},\sigma_{z_n}) \ ,\label{eq:prelim:complete-data-1} \end{align} assuming that the observations have been drawn independently at random. Note that the assignments $\{z_n\}_{n=1}^N$ define a partition $\mathcal{C}=\dot\cup_{k=1}^K C_k$ via $x_n\in C_k$ iff $z_n=k$. Hence, we can also rewrite Equation~\eqref{eq:prelim:complete-data-1} as \[ \prod_{k=1}^K \prod_{x_n\in C_k} p(x_n,z_n=k|\theta) = \prod_{k=1}^K \prod_{x_n\in C_k} w_k\cdot \mathcal{N}_d(x_n|\mu_k,\sigma_k)\ . \] By taking the negative logarithm of this expression, we obtain \begin{align*} &-\log\left(\prod_{k=1}^K \prod_{x_n\in C_k} p(x_n,z_n=k|\theta)\right) \\ &= -\sum_{k=1}^K \sum_{x_n\in C_k}\left( \ln(w_k) + \ln\left(\mathcal{N}_d(x_n|\mu_k,\sigma_k^2)\right)\right) \\ &= \sum_{k=1}^K \left( \mathcal{L}_{C_k}(\mu_k,\sigma^2_k) - \ln(w_k)\cdot \abs{C_k} \right) \ .
\end{align*} \begin{definition} Given a finite set $X\subset \mathbbm{R}^d$, a partition $\mathcal{C} = \{C_1,\ldots,C_K\}$ of $X$, and a mixture of spherical Gaussians with parameters $\theta = \{(w_k,\mu_k,\sigma^2_k)\}_{k=1}^K$, we call \[ \mathcal{L}_X(\theta, \mathcal{C}) \coloneqq \sum_{k=1}^K \left( \mathcal{L}_{C_k}(\mu_k,\sigma^2_k) - \ln(w_k)\cdot \abs{C_k} \right) \] the complete-data negative log-likelihood. \end{definition} Note that a solution maximizing the complete-data likelihood also minimizes the complete-data negative log-likelihood, and vice versa. Therefore, we define the \emph{complete-data maximum likelihood estimation} (CMLE) problem as follows. \begin{problem}[CMLE]\label{prob-cmle} Given a finite set $X\subset \mathbbm{R}^d$ and an integer $K\in \mathbbm{N}$, find a partition $\mathcal{C} = \{C_1,\ldots,C_K\}$ of $X$ and a mixture of spherical Gaussians with parameters $\theta = \{(w_k,\mu_k,\sigma^2_k)\}_{k=1}^K$ minimizing $\mathcal{L}_X(\theta, \mathcal{C})$. We denote the minimal value by $OPT(X,K)$. For a fixed model $\theta$, we let $\mathcal{L}_X(\theta) = \min_{\mathcal{C}}\mathcal{L}_X(\theta,\mathcal{C})$. Analogously, for a fixed clustering $\mathcal{C}$, we let $\mathcal{L}_X(\mathcal{C}) = \min_{\theta}\mathcal{L}_X(\theta,\mathcal{C})$.
\end{problem} \begin{definition} Given parameters $(w_k,\mu_k,\sigma_k^2)$ and a cluster $C_k\subseteq X$, we let \[ \mathcal{L}_{x}(w_k,\mu_k,\sigma_k^2) \coloneqq \frac{d}{2}\ln(2\pi \sigma_k^2) + \frac{1}{2\sigma_k^2} \norm{x-\mu_k}^2 - \ln(w_k) \ \] and \[ \mathcal{L}_{C_k}(w_k,\mu_k,\sigma_k^2) \coloneqq \sum_{x\in C_k} \mathcal{L}_{x}(w_k,\mu_k,\sigma_k^2) \ .\] \end{definition} \begin{remark} For all partitions $\mathcal{C} = \{C_1,\ldots,C_K\}$, we have \[ \mathcal{L}_X(\mathcal{C}) = \sum_{k=1}^K \left( OPT(C_k,1) - \ln\left(\frac{\abs{C_k}}{\abs{X}}\right)\cdot \abs{C_k} \right) \ .\] For all $\theta = \{(w_1,\mu_1,\sigma^2_1),\ldots,(w_K,\mu_K,\sigma^2_K)\}$, we have \[ \mathcal{L}_X(\theta) = \sum_{n=1}^N \min_{k\OneTo{K}} \{ \mathcal{L}_{x_n}(w_k,\mu_k,\sigma_k^2) \} \ . \] \end{remark} \subsection{Alternating Optimization Scheme (CEM algorithm)} An \emph{alternating optimization algorithm} for this problem is given by the following first-order optimality conditions. Fixing the partition $\mathcal{C}=\{C_k\}_{k=1}^K$, the optimal mixture of spherical Gaussians is given by $\theta = \{(w_k,\mu_k,\sigma^2_k)\}_{k=1}^K$ with \[ w_k = \frac{\abs{C_k}}{\abs{X}}\ ,\qquad \mu_k = \frac{1}{\abs{C_k}}\sum_{x_n\in C_k} x_n\ ,\qquad \sigma_k^2 = \frac{1}{d\abs{C_k}} \sum_{x_n\in C_k} \Vert x_n - \mu_k\Vert^2\ .\] Fixing the Gaussian mixture model $\theta = \{(w_k,\mu_k,\sigma^2_k)\}_{k=1}^K$, the optimal partition $\mathcal{C}=\{C_k\}_{k=1}^K$ is given by assigning each point to its most likely component, i.e. \[ x_n\in C_k \Leftrightarrow k=\argmax_{l\OneTo{K}} p(z_n=l|x_n,\theta)\ , \] where \[ p(z_n=k|x_n,\theta) = \frac{w_k\mathcal{N}(x_n|\mu_k,\sigma_k^2)}{\sum_{l=1}^K w_l\mathcal{N}(x_n|\mu_l,\sigma_l^2)}\ , \] which is the \emph{posterior probability} that $x_n$ has been generated by the $k$-th component of the given mixture. If we repeatedly apply these update formulas, the solution converges to a local extremum or a saddle point of the likelihood function.
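For concreteness, one round of this alternating scheme can be sketched in a few lines of Python (a simplified NumPy illustration of the update formulas above; all function and variable names are ours, and each cluster is assumed non-empty):

```python
import numpy as np

def m_step(X, labels, K):
    """Optimal parameters for a fixed partition (the first-order conditions):
    w_k = |C_k|/|X|, mu_k = mean of C_k, sigma_k^2 = cost(C_k)/(d*|C_k|).
    Assumes every cluster is non-empty (a well-defined partition)."""
    N, d = X.shape
    theta = []
    for k in range(K):
        C = X[labels == k]
        w = len(C) / N
        mu = C.mean(axis=0)
        sigma2 = ((C - mu) ** 2).sum() / (d * len(C))
        theta.append((w, mu, sigma2))
    return theta

def e_step(X, theta):
    """Hard assignment: minimize the per-point cost L_x(w_k, mu_k, sigma_k^2),
    which is equivalent to maximizing the posterior p(z = k | x, theta)."""
    d = X.shape[1]
    cost = np.array([
        [0.5 * d * np.log(2 * np.pi * s2)
         + ((x - mu) ** 2).sum() / (2 * s2) - np.log(w)
         for (w, mu, s2) in theta]
        for x in X])
    return cost.argmin(axis=1)
```

Iterating `m_step` and `e_step` until the labels stop changing yields the CEM procedure; each step can only decrease $\mathcal{L}_X(\theta,\mathcal{C})$, which is why the scheme converges.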
A proof of the correctness of these update formulas (which we omit here) uses the following lemma. \begin{lemma}\label{lem-wonderful} Let $X\subset \mathbbm{R}^d$ be a finite set. Define \[ \mu(X)=\frac 1{\abs{X}} \sum_{x\in X} x\ .\] Then, for all $y\in\mathbbm{R}^d$, \[ \sum_{x\in X}\norm{x-y}^2=\sum_{x\in X}\norm{x-\mu(X)}^2+\abs{X}\cdot \norm{y-\mu(X)}^2\ .\] In particular, $\mu(X)=\argmin_{y\in \mathbbm{R}^d} \sum_{x\in X}\norm{x-y}^2$. \end{lemma} Note that an optimal CMLE solution is not changed by this algorithm. Hence, an optimal CMLE solution is completely defined by a partition or a Gaussian mixture model. Similarly, if we refer to a partition or a Gaussian mixture as a CMLE solution, we assume that the missing parameters are as defined by the update formulas given above. \subsection{Well-Defined Instances} Unfortunately, the CMLE problem is not well defined in this form. For example, one could choose $C_1 = \{x\}$ and $\mu_1 = x$ for some $x\in X$. Then, as $\sigma_1 \rightarrow 0$, we get that $\mathcal{L}_K(X) \rightarrow -\infty$. Consequently, we impose the following restrictions on instances. \begin{definition}\label{def:well-defined} We call $X = \dot\bigcup_{k=1}^K C_k$ a \emph{well-defined partition} if \begin{enumerate} \item for all $k\OneTo{K}:\ \abs{C_k} \geq 2$.\label{rest:minpts2} \end{enumerate} We call $X$ itself a \emph{well-defined instance} if \begin{enumerate} \setcounter{enumi}{1} \item $\forall x,y\in X,x\neq y:\ \norm{x-y}^2 \geq \frac{4d}{\pi}$.\label{rest:dist} \end{enumerate} We denote $X = \dot\bigcup_{k=1}^K C_k$ as a \emph{well-defined solution} if $X$ is a well-defined instance and $\{C_k\}_{k=1}^K$ is a well-defined partition. \end{definition} In the following, we prove that, with these restrictions, the CMLE problem is well defined. That is, the minimum in Problem~\ref{prob-cmle} is well defined ($\mathcal{L}_K(X)> -\infty$).
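The divergence in the singleton example above can be reproduced numerically from the formula for $\mathcal{L}_X(\mu,\sigma^2)$ (a small illustration only; the weight term is omitted, and all names are ours):

```python
import math

def cluster_nll(n, d, sigma2, sq_dist_sum):
    # L_C(mu, sigma^2) = (n*d/2) * ln(2*pi*sigma^2) + sq_dist_sum / (2*sigma^2)
    return 0.5 * n * d * math.log(2 * math.pi * sigma2) + sq_dist_sum / (2 * sigma2)

# Singleton cluster C_1 = {x} with mu_1 = x: the squared-distance term is 0,
# so the objective tends to -infinity as sigma^2 -> 0.
vals = [cluster_nll(n=1, d=2, sigma2=10.0 ** -t, sq_dist_sum=0.0) for t in (1, 4, 8)]
assert vals[0] > vals[1] > vals[2]  # strictly decreasing, unbounded below
```

This is exactly the degeneracy that Restriction~\ref{rest:minpts2} (together with Restriction~\ref{rest:dist}) rules out.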
Moreover, we will see (Lemma~\ref{lem-lower-bound-variance}) that for the optimal solution we have $\sigma^2_k\ge \frac{1}{2\pi}$ or, equivalently, \begin{align}\label{eq1} 2\pi\sigma^2_k\ge 1 \quad \text{for $k\OneTo{K}$.} \end{align} First of all, note that the sum of squared distances between the points in $X$ and the mean $\mu(X)$ can be rewritten using pairwise distances (which are lower bounded in Restriction~\ref{rest:dist}). \begin{lemma}\label{lem-1mean-characterization} Let $X\subset \mathbbm{R}^d$ be a finite set and $\mu(X):=\frac{1}{\abs{X}} \sum_{x\in X} x$ its mean. Then, \begin{align*} \sum_{x\in X} \norm{x-\mu(X)}^2=\frac{1}{2\abs{X}}\sum_{x\in X}\sum_{y\in X}\norm{x-y}^2. \end{align*} \end{lemma} \begin{proof} \begin{align*} \sum_{x\in X}\sum_{y\in X}\norm{x-y}^2 & = \sum_{x\in X}\sum_{y\in X}\dpr{x-y}{x-y} \\ & = \sum_{x\in X}\sum_{y\in X}\left(\dpr{x}{x}+\dpr{y}{y}-2\dpr{x}{y}\right) \\ & = 2\abs{X}\sum_{x\in X}\dpr{x}{x} -2 \sum_{x\in X}\sum_{y\in X}\dpr{x}{y} \\ & = 2\abs{X}\sum_{x\in X}\dpr{x}{x}-2\abs{X}\sum_{x\in X}\dpr{x}{\mu(X)} \\ & = 2\abs{X}\sum_{x\in X}\dpr{x}{x-\mu(X)} \\ & = 2\abs{X}\sum_{x\in X}\dpr{x-\mu(X)}{x-\mu(X)} \tag{using $\abs{X}\sum_{x\in X}\dpr{\mu(X)}{x-\mu(X)}=0$}\\ & = 2\abs{X}\sum_{x\in X}\norm{x-\mu(X)}^2. \end{align*} \end{proof} Now using the restriction on the minimum pairwise distance between points (Restriction~\ref{rest:dist}) and on the minimum number of points in a cluster (Restriction~\ref{rest:minpts2}), we can lower bound the variance of each cluster. This directly yields Equation~\eqref{eq1} and our claim that the problem is well-defined under the restrictions given in Definition~\ref{def:well-defined}. \begin{lemma}\label{lem-lower-bound-variance} Let $Y$ be a subset of a set $X$ that satisfies Restriction~\ref{rest:dist} from Definition~\ref{def:well-defined} and that contains at least two different elements.
Then, \[\sigma(Y)^2=\frac{1}{\abs{Y}d}\sum_{y\in Y}\norm{y-\mu(Y)}^2 \geq \frac{1}{2\pi}\ .\] \end{lemma} \begin{proof} \begin{align*} \sigma(Y)^2 & = \frac{1}{\abs{Y}d}\sum_{y\in Y}\norm{y-\mu(Y)}^2 \\ & = \frac{1}{2\abs{Y}^2d}\sum_{x\in Y}\sum_{y\in Y} \norm{x-y}^2 \tag{using Lemma~\ref{lem-1mean-characterization}} \\ & \geq \frac{1}{2\abs{Y}^2d} \binom{\abs{Y}}{2} \min_{x,y\in Y,x\ne y} \norm{x-y}^2 \\ & \geq \frac{1}{8d} \min_{x,y\in Y,x\neq y} \norm{x-y}^2 \tag{using $\abs{Y}\geq 2$} \\ & \geq \frac{1}{2\pi} \tag{using Restriction~\ref{rest:dist}} \end{align*} \end{proof} Throughout the rest of this paper, we will restrict the search space of CMLE to well-defined solutions. In particular, we only consider the optimal solution among all well-defined solutions. \subsection{Well-Balanced Instances} A central idea behind the algorithms that we present in this paper is that we do not allow somewhat \emph{degenerate} instances. This means that we can find a function $f$ in the number of clusters that can be used to lower bound the number of points in a cluster, and a function $g$ that can be used to lower bound the costs $OPT(C_k,1)$ of optimal clusters $C_k$. \begin{definition}[well-balanced]\label{def:well-balanced} Let $f,g:\mathbbm{N} \rightarrow \mathbbm{R}$. We denote a partition $X = \dot\bigcup_{k=1}^K C_k$ as $f$-\emph{balanced} if for all $k\OneTo{K}$ \[ \abs{C_k} \geq \frac{\abs{X}}{f(K)}\ . \] Furthermore, we denote the partition as an $(f,g)$\emph{-balanced} CMLE solution if it is $f$-balanced and additionally for all $k\OneTo{K}$ \[ OPT(C_k,1) \geq \frac{1}{g(K)} \cdot \sum_{l=1}^K OPT(C_l,1)\ . \] \end{definition} \begin{definition} Given a finite set $X\subset \mathbbm{R}^d$ and $K\in\mathbbm{N}$, we let \[ OPT_{diam}(X,K) = \min_{\substack{\{C_1,\ldots,C_K\}, \\ \dot\cup_{k=1}^K C_k = X}} \ \max_{k\OneTo{K}}\ \max_{x,y\in C_k} \norm{x-y}\ .
\] \end{definition} \begin{lemma}[From $f$-balanced to $(f,g)$-balanced]\label{lem:relation-between-balance-defs} An $f$-balanced solution $X = \dot\bigcup_{k=1}^K C_k$ is also an $\left(f,\Gamma\cdot f\right)$-balanced CMLE solution, where $\Gamma \leq 2\cdot\ln\left(32\pi \cdot OPT_{diam}(X,K)\right) + \ln(K) + 1$. \end{lemma} \begin{proof} \begin{align*} OPT(C_k,1) &\geq \frac{\abs{C_k} d}{2} \geq \frac{1}{f(K)} \frac{\abs{X} d}{2} \tag{due to Lemma~\ref{cor:lower-bound-nll} and $f$-balancedness}\\ &\geq \frac{1}{f(K)\cdot \Gamma} \mathcal{L}_K(X) \tag{due to Lemma~\ref{lem:UpperBoundNLL}} \\ &\geq \frac{1}{f(K)\cdot \Gamma} \sum_{l=1}^K OPT(C_l,1)\ . \end{align*} \end{proof} \subsection{Applying the ABS Algorithm}\label{sec:proof2:abs} \begin{algorithm} \KwIn{ \\ $R\subset X \subset \mathbbm{R}^d\ $: set of remaining input points\\ $l\in\mathbbm{N}\ :$ number of means yet to be found\\ $\vec\mu = (\mu_1,\ldots,\mu_j)\ :$ tuple of $j\leq k-l$ candidate means\\ $(\tilde{\sigma}_{1}^2,\ldots,\tilde{\sigma}_{k}^2)\ :$ vector of $k$ variances\\ $(\tilde{w}_{1},\ldots,\tilde{w}_{k})\ :$ vector of $k$ weights\\ \\ \textbf{Notation:}\\ $\vec S\ :$ vector containing the elements of set $S$ in arbitrary order \\ $\vec x \circ \vec y:\ $ concatenation of vectors, i.e.
for $\vec x = (x_1,\ldots,x_n)$ and $\vec y =(y_1,\ldots,y_m)$,\newline \phantom{$\vec x \circ \vec y:\ \ $}$\vec x \circ \vec y = (x_1,\ldots,x_n,y_1,\ldots,y_m)$ } \KwOut{ $\theta = \{ (w_i,\mu_i,\sigma_i) \}$ containing at most $k$ tuples of mean and variance } \eIf{$l=0$}{\Return the candidate $\theta = \{ (\tilde{w}_i, \mu_i,\tilde{\sigma}_i^2) \}_i$, $(\mu_i)_i\in\mathcal{M}_k$, which has minimal cost $\mathcal{L}_X(\theta)$\;} { \eIf{$l\geq \abs{R}$} { \Return $\theta = \{ (\mu_i,\sigma_i) \}_i$ where $\vec\mu \circ \vec R = (\mu_i)_i$\; } { \emph{/* sampling phase */}\; sample a multiset $S$ of size $\frac{1}{\alpha \epsilon \delta}$ from $R$\; $T \leftarrow \left\{ \mu(S') | S'\subset S, \abs{S'} = \frac{1}{\epsilon\delta} \right\}$\; $\mathcal{M}_k \leftarrow \emptyset$\; \For{$t\in T$} { $\mathcal{M}_k \leftarrow \mathcal{M}_k\cup \textsc{Approx-Means}(R, l-1, \{ \vec\mu \circ (t) | \vec\mu \in \mathcal{M}_{k-l}\}, \Sigma)$\; } \emph{/* pruning phase */}\; $N \gets$ set of $\frac{\abs{R}}{2}$ points $x$ from $R$ with smallest minimum negative complete-data log-likelihood cost wrt. the weighted component given by $(\tilde{w}_i, \mu_i,\tilde{\sigma}_i^2)$ for $i\OneTo{j}$, i.e. \[ \min_{i\OneTo{j}} \left\{ \frac{d}{2}\ln(2\pi \tilde{\sigma}_i^2) + \frac{1}{2\tilde{\sigma}_i^2} \norm{x-\mu_i}^2 - \ln(\tilde{w}_i) \right\} \] $\mathcal{M}_k \gets \mathcal{M}_k \cup \textsc{Approx-Means}(R\setminus N, l, \mathcal{M}_{k-l}, \Sigma)$\; \Return the candidate $\theta = \{ (\tilde{w}_i, \mu_i,\tilde{\sigma}_i^2) \}_i$, $(\mu_i)_i\in\mathcal{M}_k$, which has minimal cost $\mathcal{L}_X(\theta)$ \; } } \caption{\textsc{Approx-Means}$(R, l,\mathcal{M}_{k-l},\Sigma)$}\label{alg-ABS} \end{algorithm} In the following, we analyze Algorithm~\ref{alg-ABS}. We show that the algorithm can be used to construct means such that, together with appropriate approximations of the weights and variances, we obtain a CMLE solution with costs close to the costs of the given CMLE solution.
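For illustration, the sampling phase of Algorithm~\ref{alg-ABS} can be sketched as follows (a simplified Python sketch; `m` and `s` stand in for the sample sizes $\frac{1}{\alpha\epsilon\delta}$ and $\frac{1}{\epsilon\delta}$, and all names are ours):

```python
import itertools
import numpy as np

def candidate_means(R, m, s, rng):
    """Draw a multiset S of m points from R (with replacement) and return the
    means of all size-s sub(multi)sets of S -- the candidate set T."""
    S = R[rng.integers(0, len(R), size=m)]
    return [S[list(c)].mean(axis=0) for c in itertools.combinations(range(m), s)]
```

If some cluster $C_i$ contributes at least an $\alpha$-fraction of $R$, then with constant probability enough of the $m$ samples fall into $C_i$, and one of the $\binom{m}{s}$ subset means approximates $\mu(C_i\cap R)$ in the sense of Lemma~\ref{lem-superset-sampling}.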
\begin{theorem}\label{thm:ucmle:abs} Let $\tilde{\sigma}_i^2 \in [\sigma_i^2, (\sigma_i^2)^{(1+\epsilon)}]$ and $\tilde{w}_i \geq \frac{1}{(1+\epsilon)}w_i$ for $i\OneTo{k}$. Algorithm~\ref{alg-ABS} started with $(X,k,\emptyset,(\tilde{\sigma}_1^2,\ldots,\tilde{\sigma}_k^2))$ computes a tuple $(\tilde{\mu}_1,\ldots,\tilde{\mu}_k)$ such that with probability at least $\left(\frac{1-\delta}{5}\right)^k$ \[ \mathcal{L}_{X}((\tilde{w}_i, \tilde{\mu}_i,\tilde{\sigma}_i^2)_{i\OneTo{k}}) \leq (1+\epsilon) \mathcal{L}(X) \ .\] The running time of the algorithm is bounded by $\abs{X}\,d\,2^{{\cal O}(k/\epsilon\cdot \log(k/\epsilon^2))}$. \end{theorem} Let $\dot{\bigcup}_{i=1}^k C_i$ be a partition of $X$ into optimal CMLE clusters. We introduce \[ C_{[i,j]} = \dot{\bigcup}_{t=i}^j C_t \] as a short notation for the disjoint union of clusters $i$ through $j$. We assume that the $C_i$ are numbered by the order in which their approximate means $\tilde{\mu}_i$ are found by the superset-sampling technique. Now, let $X=R_0 \supseteq R_1 \supseteq \dots \supseteq R_{k-1}$ be a sequence of input sets computed by the algorithm, such that \[ \abs{C_i \cap R_{i-1}} \geq \alpha \abs{R_{i-1}}. \] Without loss of generality, assume that each $R_i$ is the largest of these sets with this property. By using Lemma~\ref{lem-superset-sampling}, we obtain the following lemma. \begin{lemma}[By Superset-Sampling] With probability at least $((1-\delta)/5)^k$ we have \[ \norm{\tilde{\mu}_i - \mu(C_i\cap R_{i-1})}^2 \leq \frac{\epsilon}{\abs{C_i \cap R_{i-1}}}\sum_{x \in C_i\cap R_{i-1}} \norm{x - \mu(C_i\cap R_{i-1})}^2 \] for all $i\OneTo{k}$. \end{lemma} By $N_i \coloneqq R_{i-1} \setminus R_i$ we denote the set of points removed between two sampling phases. Using these definitions, we can see that \[ \dot{\bigcup}_{i=1}^k \left( C_i \cap R_{i-1}\right) \; \dot{\cup} \; \dot{\bigcup}_{i=1}^k \left( C_{[i+1,k]} \cap N_i \right) \] is a disjoint partition of $X$.
Each set $C_i \cap R_{i-1}$ on the left side contains the points that the mean $\tilde{\mu}_i$ has been sampled from. The sets $C_{[i+1,k]} \cap N_i$ on the right side contain points incorrectly assigned to $\{\tilde{\mu}_1, \dots, \tilde{\mu}_i\}$ during the pruning phases between the sampling of $\tilde{\mu}_i$ and $\tilde{\mu}_{i+1}$. Denote by $\tilde{\theta}_i$ the parameters of the first $i$ weighted Gaussians obtained by the algorithm, i.e. \[ \tilde{\theta}_i = ((\tilde{w}_1, \tilde{\mu}_1,\tilde{\sigma}_1^2),\ldots,(\tilde{w}_i, \tilde{\mu}_i,\tilde{\sigma}_i^2))\ .\] \begin{lemma}[cf. Claim~4.8 in \cite{Ackermann09}]\label{lem:ucmle:wrongly-assinged} \[ \mathcal{L}_{C_{[i+1,k]}\cap N_i} (\tilde{\theta}_i) \leq 8\alpha k \mathcal{L}_{C_{[1,i]}\cap R_{i-1}} (\tilde{\theta}_i) \] \end{lemma} \begin{proof} As in \cite[p.~70ff]{Ackermann09}, with ``$\mbox{cost}$'' replaced by ``$\mathcal{L}$''. \end{proof} Denote by $\mbox{cost}(P,C)$ the $k$-means cost of a point set $P$ wrt. a set of means $C$. \begin{lemma}[cf. Claim~4.9 in \cite{Ackermann09}]\label{lem:ucmle:kmeans-costs} For every $i\OneTo{k}$ we have \[ \mbox{cost}(C_i\cap R_{i-1},\tilde{\mu}_i) \leq (1+\epsilon) \mbox{cost}(C_i,\mu_i)\ .\] \end{lemma} \begin{proof} As in \cite[p.~70ff]{Ackermann09}, using that the optimal means in CMLE are the means of the optimal CMLE clusters. \end{proof} Given appropriate approximate variances, we can conclude that a similar bound holds wrt. the complete-data log-likelihood.
\begin{lemma} \label{lem:ucmle:correctly-assigned} Given $\tilde{\sigma}_i^2 \in [\sigma_i^2, (\sigma_i^2)^{(1+\epsilon)}]$ and $\tilde{w}_i = \frac{n_i}{\abs{X}}$ with $n_i\in[\abs{C_i},(1+\epsilon)\abs{C_i}]$, we have \[ \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{w}_i, \tilde{\mu}_i,\tilde{\sigma}_i^2) \leq (1+\epsilon) \mathcal{L}_{C_i}(w_i, \mu_i,\sigma_i^2)\ .\] \end{lemma} \begin{proof} \begin{align*} \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) &= \frac{ \abs{C_i\cap R_{i-1}}d }{2} \ln(2\pi\tilde{\sigma}_i^2) + \frac{1}{2\tilde{\sigma}_i^2} \mbox{cost}(C_i\cap R_{i-1},\tilde{\mu}_i) - \abs{C_i\cap R_{i-1}}\ln(\tilde{w}_i)\ . \end{align*} We have \begin{align*} \ln(2\pi\tilde{\sigma}_i^2) \leq \ln(2\pi(\sigma_i^2)^{(1+\epsilon)})\leq (1+\epsilon) \ln(2\pi\sigma_i^2) \ . \end{align*} Furthermore, using that $\abs{C_l} \leq n_l \leq (1+\epsilon) \abs{C_l}$ for all $l=1,\ldots,k$, we obtain $ \tilde{w}_i \geq \frac{\abs{C_i}}{\abs{X}}$. Hence, \begin{align*} -\ln(\tilde{w}_i) \cdot \abs{C_i \cap R_{i-1}} &\leq -\ln(\tilde{w}_i) \cdot \abs{C_i} \\ &\leq - \ln\left( \frac{\abs{C_i}}{\abs{X}}\right)\abs{C_i} \tag{by Equation~\eqref{eq:boundCMLEcost:n_k}}\\ &= - \ln \left(w_i\right) \cdot \abs{C_i} \end{align*} By Lemma~\ref{lem:ucmle:kmeans-costs} and $\tilde{\sigma}_i^2 \geq \sigma_i^2$, \begin{align*} \frac{1}{2\tilde{\sigma}_i^2} \mbox{cost}(C_i\cap R_{i-1},\tilde{\mu}_i) &\leq (1+\epsilon) \frac{1}{2\sigma_i^2} \mbox{cost}(C_i,\mu_i)\ . \end{align*} From this and by using that $\sigma_i^2 =\frac{1}{\abs{C_i}d}\mbox{cost}(C_i,\mu_i)$, we conclude \begin{align*} \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) &\leq (1+\epsilon) \frac{ \abs{C_i}d }{2} \ln(2\pi\sigma_i^2) +(1+\epsilon) \frac{1}{2\sigma_i^2} \mbox{cost}(C_i,\mu_i) - \ln(w_i)\abs{C_i}\\ &\leq (1+2\epsilon) \mathcal{N}_1(C_i) - \ln(w_i)\abs{C_i}\\ &\leq (1+ 2\epsilon) \mathcal{L}_{C_i}(\mu_i,\sigma_i^2)\ .
\end{align*} Running Algorithm~\ref{alg-ABS} with $\epsilon/3$ instead of $\epsilon$ yields the claim. \end{proof} Analogously to \cite{Ackermann09}, we can prove Theorem~\ref{thm:ucmle:abs} as follows. \begin{proof}[Proof of Theorem~\ref{thm:ucmle:abs}] Let $\tilde{\theta}_k = (\tilde{w}_i,\tilde{\mu}_i,\tilde{\sigma}_i^2)_{i\OneTo{k}}$. Then, \begin{align*} \mathcal{L}_{X}(\tilde{\theta}_k) & \leq \sum_{i=1}^k \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) + \sum_{i=1}^{k-1} \mathcal{L}_{C_{[i+1,k]}\cap N_i} (\tilde{\theta}_k)\\ & \leq \sum_{i=1}^k \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) + 8\alpha k \sum_{i=1}^{k-1} \mathcal{L}_{C_{[1,i]}\cap R_{i-1}} (\tilde{\theta}_k) \tag{due to Lemma~\ref{lem:ucmle:wrongly-assinged}}\\ & \leq \sum_{i=1}^k \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) + 8\alpha k \sum_{i=1}^{k-1} \sum_{t=1}^i \mathcal{L}_{C_t\cap R_{i-1}} (\tilde{\mu}_t,\tilde{\sigma}_t^2)\ . \end{align*} Since $R_i\subseteq R_{i-1}$, we have $C_t\cap R_{i-1}\subseteq C_t \cap R_{t-1}$. Hence, \begin{align*} \sum_{i=1}^{k-1} \sum_{t=1}^i \mathcal{L}_{C_t\cap R_{i-1}} (\tilde{\mu}_t,\tilde{\sigma}_t^2) \leq \sum_{i=1}^{k-1} \sum_{t=1}^i \mathcal{L}_{C_t\cap R_{t-1}} (\tilde{\mu}_t,\tilde{\sigma}_t^2) \\ \leq k \sum_{i=1}^{k-1} \mathcal{L}_{C_i\cap R_{i-1}} (\tilde{\mu}_i,\tilde{\sigma}_i^2) \ . \end{align*} Thus, \begin{align*} \mathcal{L}_{X}(\tilde{\theta}_k) & \leq \sum_{i=1}^k \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) + 8\alpha k^2 \sum_{i=1}^{k-1} \mathcal{L}_{C_i\cap R_{i-1}} (\tilde{\mu}_i,\tilde{\sigma}_i^2) \\ & \leq (1+8\alpha k^2) \sum_{i=1}^k \mathcal{L}_{C_i\cap R_{i-1}}(\tilde{\mu}_i,\tilde{\sigma}_i^2) \\ & \leq (1+8\alpha k^2)(1+\epsilon) \mathcal{L}(X) \tag{by Lemma~\ref{lem:ucmle:correctly-assigned}}\ . \end{align*} Finally, running the algorithm for $\epsilon := \epsilon/2$ and $\alpha=\Theta(\epsilon/k^2)$ yields the theorem.
\end{proof} \section{Proof of Theorem~\ref{thm:ABS}} In the following we present the proof of Theorem~\ref{thm:ABS}. \begin{itemize} \item In Section~\ref{sec:proof2:gridding} we show how to estimate the variances and the cluster sizes of a well-defined CMLE solution via gridding. The idea behind a grid search is simply to test all solutions lying on a grid in the search space. By choosing a grid that is dense enough, we ensure that there are solutions on the grid which are sufficiently close to the parameters that we search for. \item In Section~\ref{sec:proof2:abs}, we show how one can find good estimates of the means when given good estimates of the weights and covariances. To this end, we adapt the sample-and-prune technique presented in \cite{ABS}. \end{itemize} \subsection{Generate Candidates for Variances and Weights}\label{sec:proof2:gridding} \begin{lemma}\label{lem:ucmle:gridding} Let $X \subset \mathbbm{R}^d$, and $\{C_k\}_{k=1}^K$ be a well-defined CMLE solution for $X$, with corresponding variances $\{\sigma_k^2\}_{k=1}^K$. Then, there exists an algorithm which outputs a set of at most $\left(\frac{\log(\log(\Delta^2))+1}{\log(1+\epsilon)}\right)^K$ tuples of variances, which contains a tuple $(\tilde\sigma_k^2)_{k=1}^K$, such that \[ \forall k\OneTo{K}: \sigma_k^2 \leq \tilde\sigma_k^2 \leq (\sigma_k^2)^{(1+\epsilon)}\ , \] where $\Delta^2 = \max_{x,y\in X} \{ \norm{x-y}^2\}$. \end{lemma} \begin{proof} We know that optimal variances $\sigma_k^2$ of a well-defined solution are bounded from below by \[ \forall k\OneTo{K}: \frac{1}{2\pi} \leq \sigma_k^2 . \] Furthermore, we know that these are also bounded from above by \begin{align*} \forall k\OneTo{K}: \sigma_k^2 = \frac{1}{\abs{C_k}d}\sum_{x\in C_k} \norm{x-\mu(C_k)}^2 \leq \frac{1}{\abs{C_k}d}\sum_{x\in C_k} \Delta^2 \leq \Delta^2\ . 
\end{align*} Because $1/(2\pi) \leq \sigma_k^2 \leq \Delta^2$, there exists a value \[ k^* \in \{1,\dots, \log_{1+\epsilon}(-\log_{1/(2\pi)}(\Delta^2))\}\] such that \[ \left(1/(2\pi)\right)^{(1+\epsilon)^{k^*-1}} \leq \sigma_k^2 \leq \left(1/(2\pi)\right)^{(1+\epsilon)^{k^*}} . \] Thus, we obtain $\left\lceil\frac{\log(\log(\Delta^2))-\log(\log(2\pi))}{\log(1+\epsilon)}\right\rceil$ many values for each variance. The algorithm outputs all possible combinations of these values. \end{proof} The following result is the same as in Section~\ref{sec:gridding}. \begin{theorem} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and let $\mathcal{C}=\dot\bigcup_{k=1}^K C_k$ be an $f$-balanced partition. Then there exists an algorithm that outputs a set $S\subseteq \mathbbm{N}^K$, $\abs{S} = \left( \frac{\log(f(K))}{\log(1+\epsilon)} \right)^K $, that contains a tuple $(n_1, \dots, n_K)\in S$ such that \begin{align} \abs{C_k} \leq n_k \leq (1+\epsilon) \abs{C_k}. \end{align} for all $k\OneTo{K}$. \end{theorem} \section{Proof of Theorem~\ref{thm:cmlealgorithm}} In the following we prove Theorem~\ref{thm:cmlealgorithm}. \begin{itemize} \item In Section~\ref{sec:paramtocost} we show that, if the parameters of a CMLE solution are sufficiently close to those of an optimal CMLE solution, then its complete-data log-likelihood is close to that of the optimal CMLE solution. In Sections~\ref{sec:means} and \ref{sec:gridding} we then show how to obtain such parameter estimates. \item In Section~\ref{sec:means} we deal with the problem of estimating the means. We use the superset sampling technique introduced by \cite{inaba94} to compute a set of candidate means which contains a good candidate, i.e. a good estimate of the mean parameters of an optimal solution. \item In Section~\ref{sec:gridding} we use a grid search to obtain estimates of the weights and variances. The core idea is to simply test all solutions lying on a specific grid in the search space.
By choosing a grid that is dense enough, we ensure that there are solutions on the grid which are sufficiently close to the parameters that we search for. \end{itemize} \subsection{Estimate the Costs of Parameter Estimates}\label{sec:paramtocost} For an optimal $(f,g)$-balanced CMLE solution, we can estimate the parameters of the respective optimal Gaussian mixture model and the likelihood of the optimal clusters. We can show that the CMLE solution determined by these parameter estimates yields an approximation with respect to the complete-data log-likelihood. \begin{theorem}\label{thm:boundCMLEcost} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and $\epsilon > 0$. Assume $X$ has an $f$-balanced optimal CMLE solution $X=\dot\bigcup_{k=1}^K C_k$ and let $(\tilde{\mu}_1,\ldots,\tilde{\mu}_K)$ be such that for all $k\OneTo{K}$ \begin{align*} \norm{\tilde{\mu}_k-\mu(C_k)}^2 \leq \frac{\epsilon}{\abs{C_k}}\sum_{x\in C_k}\norm{x-\mu(C_k)}^2\ . \end{align*} Let $(n_1,\dots,n_K)$ be such that for all $k\OneTo{K}$ \begin{align} \abs{C_k} \leq n_k \leq (1+\epsilon)\abs{C_k}\ . \label{eq:boundCMLEcost:n_k} \end{align} and let $\vec{\tilde{\sigma}}=(\tilde{\sigma}_1^2,\dots,\tilde{\sigma}_K^2)\in\mathbbm{R}^K$ be such that for all $k\OneTo{K}$ it holds that \begin{align} \tilde{\sigma}_k^2 \geq \sigma_k^2 \label{eq:boundCMLEcost:tsigma-geq-sigma} \end{align} and \begin{align} \ln(\tilde{\sigma}_k^2) - \ln(\sigma_k^2) \leq \left( (1+\epsilon)^2 - 1 \right) \frac{2}{\abs{C_k} d}OPT(C_k,1)\ .\label{eq:boundCMLEcost:tsigma-sigma-diff} \end{align} Define $\tilde{\theta}=\{(\tilde{w}_k,\tilde{\mu}_k,\tilde{\sigma}_k^2)\}_{k=1,\ldots,K}$, where $\tilde{w}_k = \frac{n_k}{\sum_{l=1}^K n_l}$. Then, \[ \mathcal{L}_X(\tilde{\theta}) \leq (1+\epsilon)^{4} OPT(X,K). \] \end{theorem} \begin{proof} Using that $\abs{C_l} \leq n_l \leq (1+\epsilon) \abs{C_l}$ for all $l=1,\ldots,K$, we obtain $ \tilde{w}_k \geq \frac{1}{(1+\epsilon)}\cdot\frac{\abs{C_k}}{\abs{X}}$.
Hence, \begin{align*} -\ln(\tilde{w}_k) \cdot \abs{C_k} &\leq - \ln\left( \frac{1}{(1+\epsilon)}\cdot\frac{\abs{C_k}}{\abs{X}}\right)\abs{C_k} \tag{by Equation~\eqref{eq:boundCMLEcost:n_k}}\\ &\leq \ln(1+\epsilon)\abs{C_k} - \ln \left( \frac{\abs{C_k}}{\abs{X}} \right) \cdot \abs{C_k}\\ &\leq \epsilon \abs{C_k} - \ln \left( \frac{\abs{C_k}}{\abs{X}} \right) \cdot \abs{C_k} \tag{since $\ln(1+\epsilon)\leq \epsilon$}\\ &\leq \frac{2\epsilon}{d} OPT(C_k,1) - \ln \left( \frac{\abs{C_k}}{\abs{X}} \right) \cdot \abs{C_k} \tag{since $OPT(C_k,1) \geq \frac{\abs{C_k}\cdot d}{2}$} \end{align*} Furthermore, observe that \begin{align*} \mathcal{L}_{C_k}(\tilde{\mu}_k,\tilde{\sigma}_k) &= \frac{\abs{C_k}d}{2}\ln(2\pi\tilde{\sigma}_k^2) + \frac{1}{2\tilde{\sigma}_k^2}\sum_{x\in C_k}\norm{x-\tilde{\mu}_k}^2\\ &\overset{\eqref{eq:boundCMLEcost:tsigma-geq-sigma}}{\leq} \frac{\abs{C_k}d}{2}\ln(2\pi\tilde{\sigma}_k^2) + \frac{1}{2\sigma_k^2}\sum_{x\in C_k}\norm{x-\tilde{\mu}_k}^2 \\ &\leq \frac{\abs{C_k}d}{2}\ln(2\pi\tilde{\sigma}_k^2) + \frac{1}{2\sigma_k^2} (1+\epsilon) \sum_{x\in C_k}\norm{x-\mu_k}^2 \tag{By Lemma~\ref{lem-wonderful} and property of $\tilde{\mu}_k$}\\ &= \frac{\abs{C_k}d}{2}\ln(2\pi\tilde{\sigma}_k^2) + (1+\epsilon) \frac{\abs{C_k}d}{2} \tag{By def. 
of $\mu_k$}\\ &= \frac{\abs{C_k}d}{2} (\ln(2\pi)+\ln(\tilde{\sigma}_k^2) ) + (1+\epsilon) \frac{\abs{C_k}d}{2} \\ &\overset{\eqref{eq:boundCMLEcost:tsigma-sigma-diff}}{\leq} \frac{\abs{C_k}d}{2} \left( \ln(2\pi) + \left( (1+\epsilon)^{2} - 1 \right) \frac{2}{\abs{C_k} d}OPT(C_k,1) +\ln(\sigma_k^2) \right) + (1+\epsilon) \frac{\abs{C_k}d}{2} \\ &= \frac{\abs{C_k}d}{2}\ln(2\pi\sigma_k^2) + (1+\epsilon) \frac{\abs{C_k}d}{2} + \left( (1+\epsilon)^{2} - 1 \right) OPT(C_k,1) \\ &\leq (1+\epsilon) OPT(C_k,1) + \left( (1+\epsilon)^{2} - 1 \right) OPT(C_k,1) \\ &\leq \left( (1+\epsilon)^{2} + \epsilon \right) OPT(C_k,1) \\ &\leq (1+\epsilon)^{3} OPT(C_k,1) \end{align*} Overall, we have \begin{align*} \mathcal{L}_X(\tilde{\theta}) &= \sum_{k=1}^K \mathcal{L}_{C_k}(\tilde{\mu}_k,\tilde{\sigma}^2_k) - \ln(\tilde{w}_k)\cdot \abs{C_k} \\ &\leq \sum_{k=1}^K \left( (1+\epsilon)^3 OPT(C_k,1) + \frac{2\epsilon}{d} OPT(C_k,1) - \ln \left( \frac{\abs{C_k}}{\abs{X}} \right) \cdot \abs{C_k} \right) \\ &= \sum_{k=1}^K \left( \left((1+\epsilon)^3+\frac{2\epsilon}{d}\right) OPT(C_k,1) - \ln \left( \frac{\abs{C_k}}{\abs{X}} \right) \cdot \abs{C_k} \right) \\ &\leq \left((1+\epsilon)^3+\frac{2\epsilon}{d}\right) \sum_{k=1}^K \left( OPT(C_k,1) - \ln \left( \frac{\abs{C_k}}{\abs{X}} \right) \cdot \abs{C_k} \right)\\ &= \left((1+\epsilon)^3+\frac{2\epsilon}{d}\right) OPT(X,K) \\ &\leq (1+\epsilon)^4 OPT(X,K) \end{align*} \end{proof} \subsection{Generate Candidate Cluster Sizes and Variances by Using Grids}\label{sec:gridding} So far, we have formulated an algorithm that gives us good means. In the following, we will use the gridding technique to determine a set of candidates for the cluster sizes and variances. First of all, we generate a set of cluster sizes that contains good approximations of the cluster sizes of any $f$-balanced solution. Then, we approximate the negative log-likelihood of optimal CMLE clusters, i.e. $\sum_{k=1}^K OPT(C_k,1)$ where the $C_k$ are the optimal CMLE clusters.
Then, we present how to construct a candidate set of variances that contains good estimates of the variances of any $(f,g)$-balanced optimal CMLE solution. \subsubsection{Grid Search for Cluster Sizes}\label{subsec:clustersizes} \begin{theorem}\label{thm:clustersizes} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and let $X=\dot\bigcup_{k=1}^K C_k$ be an $f$-balanced partition. Then there exists an algorithm that outputs a set $S\subseteq \mathbbm{N}^K$, $\abs{S} = \left( \frac{\log(f(K))}{\log(1+\epsilon)} \right)^K $, that contains a tuple $(n_1, \dots, n_K)\in S$ such that \begin{align} \abs{C_k} \leq n_k \leq (1+\epsilon) \abs{C_k}. \end{align} for all $k\OneTo{K}$. \end{theorem} \begin{proof} Since we assume an $f$-balanced solution, we know that for all $k\OneTo{K}$ \[\frac{\abs{X}}{f(K)} \leq \abs{C_k} \leq \abs{X}.\] Thus, there exists a value $i^* \in \{ 1, \dots, \lceil\log_{1+\epsilon}(f(K))\rceil\}$ such that \[(1+\epsilon)^{i^*-1}\frac{\abs{X}}{f(K)} \leq \abs{C_k} \leq (1+\epsilon)^{i^*}\frac{\abs{X}}{f(K)}.\] Thus, we obtain $\lceil\log_{1+\epsilon}(f(K))\rceil$ many values for each cluster size $n_k$. The algorithm outputs all possible combinations of these values. \end{proof} \subsubsection{Bounds on the Log-Likelihood of optimal CMLE clusters}\label{subsec:nll} Lemma~\ref{lem-lower-bound-variance} provides us with a lower bound on the negative log-likelihood of a cluster. \begin{corollary}[Lower Bound on the Optimal Log-Likelihood]\label{cor:lower-bound-nll} Let $X=\dot\bigcup_{k=1}^K C_k$ be an optimal CMLE solution. Then, $OPT(C_k,1) \geq \frac{\abs{C_k}d}{2}$. \end{corollary} The next step is to find an upper bound on the optimal complete-data log-likelihood value. We use Gonzalez' algorithm to compute a value that gives us a tighter bound than just the maximum spread (over the dimensions of the vectors in the data set).
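As an illustration, the farthest-first traversal and the resulting radius can be sketched in Python. This is a minimal sketch under our own assumptions: the deterministic choice of the first center and the function names are ours, and the analysis only requires some run of the traversal.

```python
import math

import numpy as np


def gonzalez_radius(X: np.ndarray, K: int) -> float:
    """Farthest-first traversal: greedily pick K centers from X, then return
    the maximum distance s of any point to its nearest chosen center."""
    d = np.linalg.norm(X - X[0], axis=1)            # start from the first point
    for _ in range(K - 1):
        i = int(np.argmax(d))                       # farthest point is the next center
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    return float(d.max())                           # s = max_z min_k ||z - p_k||


def gamma_bound(X: np.ndarray, K: int) -> float:
    """Gamma = ln(2*pi*s^2) + 1 + ln(K), so that OPT(X, K) <= |X|*d/2 * Gamma."""
    s = gonzalez_radius(X, K)
    return math.log(2 * math.pi * s ** 2) + 1 + math.log(K)
```

Each of the $K$ rounds costs ${\cal O}(d\cdot\abs{X})$ time, matching the stated ${\cal O}(K\cdot d\cdot\abs{X})$ bound.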
\begin{lemma}[Upper Bound on the Optimal Complete-Data Log-Likelihood]\label{lem:UpperBoundNLL} Let $X\subset\mathbbm{R}^d$ and $K\in \mathbbm{N}$. A value $\Gamma$ can be computed in time ${\cal O}(K\cdot d\cdot\abs{X})$ such that the complete-data log-likelihood of an optimal CMLE solution can be bounded by \[OPT(X,K)\leq \frac{\abs{X}d}{2}\cdot\Gamma\] and $\Gamma = \ln(2\pi s^2) + 1 + \ln(K)$ for some $s\leq 4\cdot OPT_{diam}(X)$. \end{lemma} \begin{proof} Run Gonzalez' algorithm. The output is a set of $K$ points $p_1,\ldots,p_K \in X$. Compute the point $z$ with maximum distance to its closest point in $\{p_1,\ldots,p_K\}$ and set $s := \min_{k=1,\ldots,K} \norm{z-p_k}$. Consider the solution where the $p_k$ are the centers. Partition the points into point sets $\mathcal{C} = \{C_1,\ldots,C_K\}$, with $\norm{x-p_k} = \min_{i=1,\ldots,K} \norm{x-p_i}$ for all $x \in C_k$. Notice that the distance between any point and its center is at most $s$. Thus, when computing the optimal variance in each cluster, it is at most $s^2$. Then, for $\theta=\left\{\left(\frac{1}{K},p_k,\sigma(C_k,p_k)\right)\right\}_{k=1}^K$ we have \begin{align*} OPT(X,K) \le \mathcal{L}_X(\theta,\mathcal{C}) &= \sum_{k=1}^K \frac{\abs{C_k}d}{2} \ln(2\pi \sigma(C_k,p_k)^2)+ \frac{\abs{C_k} d}{2} - \ln(w_k)\cdot \abs{C_k} \\ &\le \left(\sum_{k=1}^K \frac{\abs{C_k}d}{2} \ln(2\pi s^2)+ \frac{\abs{C_k} d}{2} \right)- \ln\left(\frac{1}{K}\right)\cdot \abs{X} \\ &= \frac{\abs{X}d}{2} \ln(2\pi s^2) + \frac{\abs{X} d}{2} + \ln(K)\cdot \abs{X}\\ &\leq \frac{\abs{X}d}{2} \left( \ln(2\pi s^2) + 1 + \ln(K)\right) \end{align*} \end{proof} Given these two bounds, we can find a constant factor approximation of the sum of the negative log-likelihoods of optimal CMLE clusters, i.e. $\sum_{k=1}^K OPT(C_k,1)$, using a grid search. \begin{lemma}[Estimating the Optimal Log-Likelihood]\label{lem:OPTestEst} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$, and $\epsilon > 0$.
Let $X=\dot\bigcup_{k=1}^K C_k$ be an optimal CMLE solution. Then, there exists a set of $\log(3\Gamma/d)/\log(1+\epsilon)$ many values which contains a value $\mathcal{N}_{est}$ with \[ \frac{1}{1+\epsilon} \mathcal{N}_{est} \leq \sum_{k=1}^K OPT(C_k,1) \leq \mathcal{N}_{est}\ .\] \end{lemma} \begin{proof} Combining Corollary~\ref{cor:lower-bound-nll} and Lemma~\ref{lem:UpperBoundNLL}, we know that \[\frac{\abs{X}d}{2} \leq \sum_{k=1}^K OPT(C_k,1) \leq OPT(X,K) \leq \frac{\abs{X}d}{2} \Gamma . \] Thus, there exists a value $i^* \in \{ 1, \dots, \lceil \log_{1+\epsilon}(\Gamma)\rceil\}$ such that \[(1+\epsilon)^{i^*-1}\frac{\abs{X}d}{2} \leq \sum_{k=1}^K OPT(C_k,1)\leq (1+\epsilon)^{i^*}\frac{\abs{X}d}{2}.\] The algorithm outputs all $\lceil \log_{1+\epsilon}(\Gamma)\rceil$ values. \end{proof} Given this approximation of the sum of the negative log-likelihoods, we will be able to find an approximation of the negative log-likelihood of a single cluster, as we will see in the next section. \subsubsection{Grid Search for Variances}\label{subsec:variancegridding} Given the approximations of the size of the clusters and their negative log-likelihood, we are now able to find estimates of the variances. \begin{theorem}\label{thm:variancegridding} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and $\epsilon > 0$. Assume $X$ has an $(f,g)$-balanced CMLE solution $X=\dot\bigcup_{k=1}^K C_k$.
Let additionally $\mathcal{N}_{est}\in \mathbbm{R}$, with \begin{align} \frac{1}{1+\epsilon}\mathcal{N}_{est}\leq \sum_{k=1}^K OPT(C_k,1) \leq \mathcal{N}_{est},\label{prop:estimatedNLLValue} \end{align} and $(n_1,\dots,n_K)$, such that for all $k\OneTo{K}$ \begin{align} \abs{C_k} \leq n_k \leq (1+\epsilon)\abs{C_k}.\label{prop:estimatedSizes} \end{align} Then there exists an algorithm that computes a set of size $K \cdot \frac{\log(g(K))}{\log(1+\epsilon)}$, that contains a tuple $(\tilde{\sigma}_1^2,\dots,\tilde{\sigma}_K^2)$, such that for all $k\OneTo{K}$ it holds \begin{align} \tilde{\sigma}_k^2 \geq \sigma_k^2 \end{align} and \begin{align} \ln(\tilde{\sigma}_k^2) - \ln(\sigma_k^2) \leq \left( (1+\epsilon)^2 - 1 \right) \frac{2}{\abs{C_k} d}OPT(C_k,1)\ . \end{align} \end{theorem} \begin{proof} Observe that \begin{align*} \frac{1}{g(K)(1+\epsilon)}\mathcal{N}_{est} \leq \frac{1}{g(K)}\sum_{k=1}^K OPT(C_k,1) \overset{\text{Def.~}\ref{def:well-balanced}}{\leq} OPT(C_k,1) \leq \sum_{k=1}^K OPT(C_k,1) \leq \mathcal{N}_{est}. \end{align*} Thus, there exists a value $j^* \in \left\{ \lceil -\log_{1+\epsilon}(g(K))\rceil, \dots, 0\right\}$ which satisfies \begin{align*} (1+\epsilon)^{j^*-1}\mathcal{N}_{est} \leq OPT(C_k,1) \leq (1+\epsilon)^{j^*}\mathcal{N}_{est}\ . \end{align*} Denote the upper bound by $\hat{\mathcal{N}} \coloneqq (1+\epsilon)^{j^*} \mathcal{N}_{est}$ and set $\tilde{\sigma}^2_k \coloneqq \exp\left( \frac{2(1+\epsilon)}{{n}_k d}\hat{\mathcal{N}}-\ln(2\pi)-1 \right)$. 
Notice that \begin{align*} OPT(C_k,1) = \mathcal{L}_{C_k}(\mu_k,\sigma_k^2) = \frac{\abs{C_k}d}{2}\left( \ln(2\pi\sigma_k^2) + 1\right) \\ \Leftrightarrow \ln(\sigma_k^2) = \frac{2}{\abs{C_k}d}OPT(C_k,1)-\ln(2\pi)-1 \end{align*} Thus, \begin{align*} \ln(\tilde{\sigma}_k^2 ) = \frac{2(1+\epsilon)}{{n}_k d}\hat{\mathcal{N}}-\ln(2\pi)-1 \geq \frac{2}{\abs{C_k} d}OPT(C_k,1)-\ln(2\pi)-1 = \ln(\sigma_k^2) \end{align*} and \begin{align*} \ln(\tilde{\sigma}_k^2) - \ln(\sigma_k^2) & = \frac{2(1+\epsilon)}{{n}_k d}\hat{\mathcal{N}} - \frac{2}{\abs{C_k} d}OPT(C_k,1)\\ &\leq \frac{2(1+\epsilon)^2}{ \abs{C_k} d}OPT(C_k,1) - \frac{2}{\abs{C_k} d}OPT(C_k,1)\\ &= \left( (1+\epsilon)^2 - 1 \right) \frac{2}{\abs{C_k} d}OPT(C_k,1)\label{Eq:DiffSigmas} \end{align*} \end{proof} \subsection{Generate Candidate Means by Sampling}\label{sec:means} We reuse the following well-known lemma on superset sampling. \begin{lemma}[superset-sampling]\label{lem-superset-sampling} Let $X\subset \mathbbm{R}^d$ be a finite set, $\alpha<1$ and $X'\subset X$ with $\abs{X'}\ge \alpha \abs{X}$. Let $S\subseteq X$ be a uniform sample multiset of size at least $\frac{2}{\alpha\epsilon\delta}$. Then with probability at least $\frac{1-\delta}{5}$ there is a subset $S'\subseteq S$ with $\abs{S'}=\frac{1}{\epsilon\delta}$ such that \begin{align*} \norm{\mu(S')-\mu(X')}^2 \leq \frac{\epsilon}{\abs{X'}}\sum_{x\in X'}\norm{x-\mu(X')}^2. \end{align*} \end{lemma} If we plug our notion of $f$-balanced solutions into this lemma, then we obtain an algorithm that samples good approximate means.
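A single round of this sampling-based candidate generation can be sketched in Python as follows. This is a hedged sketch, not the paper's implementation: the function name and its parameterization are our assumptions, and we use the multiset size $2/(\alpha\epsilon\delta)$ required by the superset-sampling lemma.

```python
import itertools
import math

import numpy as np


def approx_means_candidates(X, K, eps, delta, f_K, rng):
    """Superset sampling per cluster: draw a uniform multiset S of size
    ceil(2/(alpha*eps*delta)) with alpha = 1/f(K), and record the mean of
    every subset of S of size ceil(1/(eps*delta)).  The candidate K-tuples
    of means are the Cartesian product of the per-cluster candidate lists."""
    alpha = 1.0 / f_K
    m = math.ceil(2 / (alpha * eps * delta))        # |S|, as in the lemma
    r = math.ceil(1 / (eps * delta))                # |S'|
    per_cluster = []
    for _ in range(K):
        S = X[rng.integers(len(X), size=m)]         # uniform multiset from X
        T = [S[list(idx)].mean(axis=0)
             for idx in itertools.combinations(range(m), r)]
        per_cluster.append(T)
    return itertools.product(*per_cluster)          # lazy set of K-tuples
```

Per cluster, the candidate list has $\binom{m}{r}$ entries, and repeating the whole routine independently boosts the success probability.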
\begin{theorem}[sampling means]\label{thm:samplingalgo} For a finite set $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$ and $\epsilon, \delta > 0$, if $X = \dot\bigcup_{k=1}^K C_k$ is an $f$-balanced partition, then there is an algorithm that computes a set of $\log(1/\delta) \cdot 2^{\frac{K}{\epsilon\delta}\cdot \log\left(\frac{f(K)}{\epsilon\delta}\right)}$ $K$-tuples of points from $\mathbbm{R}^d$, such that with probability $1-\delta$ for one of these tuples it holds that for all $k\OneTo{K}$ \[\norm{\mu_k-\mu(C_k)}^2 \leq \frac{\epsilon}{\abs{C_k}}\sum_{x\in C_k}\norm{x-\mu(C_k)}^2\ .\] The runtime of the algorithm is bounded by $\log(1/\delta)\cdot K \cdot \left(\abs{X} + 2^{\frac{K}{\epsilon\delta}\cdot \log\left(\frac{f(K)}{\epsilon\delta}\right)}\right)$. \end{theorem} \begin{proof} Consider the following algorithm, which computes a candidate set of tuples of means. \begin{algorithm}[H] \KwIn{ $X\subset \mathbbm{R}^d$ : input points \\ $K\in\mathbbm{N}$ : number of clusters } \KwOut{ set of candidate tuples of means } $P \leftarrow \{()\}$\; \For{$k=1,\ldots,K$} { sample a multiset $S$ of size $\frac{2}{\alpha \epsilon \delta}$ from $X$\; $T \leftarrow \left\{ \mu(S') | S'\subset S, \abs{S'} = \lceil \frac{1}{\epsilon\delta}\rceil \right\}$\; $P \leftarrow P\times T$\; } \Return $P$\; \caption{\textsc{Approx-Means}$(X,K)$}\label{alg-ABSa} \end{algorithm} Using Lemma~\ref{lem-superset-sampling} with $\alpha = \frac{1}{f(K)}$, we know that the output of a single run of \textsc{Approx-Means} contains a tuple with the desired property with probability $\left(\frac{1-\delta}{5}\right)^K$.
We know that \[\abs{T} \leq \left(\frac{1}{\alpha\epsilon\delta}\right)^{\frac{1}{\epsilon\delta}},\] thus \[\abs{P} = \abs{T}^K \leq 2^{\frac{K}{\epsilon\delta}\cdot \log\left(\frac{f(K)}{\epsilon\delta}\right)}.\] The runtime is bounded by \[ K\cdot \abs{X} + \sum_{k=1}^K \abs{T}^k \leq K \left(\abs{X} + 2^{\frac{K}{\epsilon\delta}\cdot \log\left(\frac{f(K)}{\epsilon\delta}\right)}\right).\] By executing \textsc{Approx-Means} $\log(1/\delta)$ times we receive the desired success probability. \end{proof} \subsection{Uniform Weights} In this section we consider a restricted version of the CMLE problem where we are only interested in Gaussian mixture models with fixed uniform weights, i.e. parameters $\theta=\{(w_k,\mu_k,\Sigma_k)\}_{k\OneTo{K}}$ where $w_k=1/K$ for all $k\OneTo{K}$. We denote this problem by \emph{Uniform Complete-Data Maximum Likelihood Estimation} (UCMLE). \begin{problem}[UCMLE]\label{prob-ucmle} Given a finite set $X\subset \mathbbm{R}^d$ and an integer $K\in \mathbbm{N}$, find a partition $\mathcal{C} = \{C_1,\ldots,C_K\}$ of $X$ into $K$ disjoint subsets and $K$ spherical Gaussians with parameters $\theta = \{(\mu_k,\sigma^2_k)\}_{k=1}^K$ minimizing \begin{align*} \mathcal{L}^{unif}_X(\theta, \mathcal{C}) &=\sum_{k=1}^K \mathcal{L}_{C_k}(\mu_k,\sigma^2_k) \\ &= \sum_{k=1}^K \frac{\abs{C_k}d}{2}\ln(2\pi \sigma_k^2) + \frac{1}{2\sigma_k^2} \left( \sum_{x\in C_k} \norm{x-\mu_k}^2\right) \ . \end{align*} We denote the minimal value by $OPT_{unif}(X,K)$. \end{problem} \begin{corollary} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$, and $\delta,\epsilon > 0$. Let $X=\dot\bigcup_{k=1}^K C_k$ be a well-defined solution for the UCMLE problem. There is an algorithm that computes $K$ spherical Gaussians $\theta = \{(\tilde{\mu}_k,\tilde{\sigma}^2_k)\}_{k=1}^K$ such that with probability at least $1-\delta$ \[ \mathcal{L}^{unif}_{X}((\tilde{\mu}_i,\tilde{\sigma}_i^2)_{i\OneTo{K}}) \leq (1+\epsilon) OPT_{unif}(X,K)\ . 
\] The running time of the algorithm is bounded by \[ \abs{X}\,d\,\log(1/\delta)\,2^{{\cal O}(K/\epsilon\cdot \log(K/\epsilon^2))}\, \left(\log(\log(\Delta^2))+1\right)^K \ ,\] where $\Delta^2 = \max_{x,y\in X} \{ \norm{x-y}^2\}$. \end{corollary} \begin{proof} Use a grid search to obtain candidates for the variances, then apply the ABS algorithm. \end{proof} \section{Special Cases} \subsection{Weighted $K$-Means (Identical Covariances)} In this section we consider a restricted version of the CMLE problem in which all components of the Gaussian mixture model share the same fixed spherical covariance matrix, i.e. parameters $\theta=\{(w_k,\mu_k,\Sigma_k)\}_{k\OneTo{K}}$ where $\Sigma_k= \frac{1}{2\beta}I_d$ for all $k\OneTo{K}$. We call this problem the \emph{Weighted $K$-Means} (WKM) problem. \begin{problem}[WKM]\label{prob-wkm} Given a finite set $X\subset \mathbbm{R}^d$ and an integer $K\in \mathbbm{N}$, find a partition $\mathcal{C} = \{C_1,\ldots,C_K\}$ of $X$ into $K$ disjoint subsets and $K$ weighted means $\theta = \{(w_k, \mu_k)\}_{k=1}^K$, where $\mu_k\in \mathbbm{R}^d$, $w_k\in\mathbbm{R}$, and $\sum_{k=1}^K w_k=1$, minimizing \begin{align*} \mathcal{L}^{wm}_X(\theta, \mathcal{C}) &=\sum_{k=1}^K \beta \left( \sum_{x\in C_k} \norm{x-\mu_k}^2\right) - \ln(w_k)\cdot\abs{C_k} \ . \end{align*} We denote the minimal value by $OPT_{wm}(X,K)$. \end{problem} \begin{corollary} Let $X\subset \mathbbm{R}^d$, $K\in \mathbbm{N}$, and $\delta,\epsilon > 0$. Let $X=\dot\bigcup_{k=1}^K C_k$ be a well-defined solution for the WKM problem. There is an algorithm that computes $K$ weighted means $\theta = \{(\tilde{w}_k, \tilde{\mu}_k)\}_{k=1}^K$ such that with probability at least $1-\delta$ \[ \mathcal{L}^{wm}_{X}((\tilde{w}_i,\tilde{\mu}_i)_{i\OneTo{K}}) \leq (1+\epsilon) OPT_{wm}(X,K)\ .
\] The running time of the algorithm is bounded by \[ \abs{X}\,d\,2^{{\cal O}(K/\epsilon\cdot \log(K/\epsilon^2))} \cdot \left( \log(f(K))\right)^K \ .\] \end{corollary} \begin{proof} Use a grid search to obtain candidates for the weights, then apply the ABS algorithm. \end{proof}
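The grid-search primitive invoked in these corollaries, and used above for cluster sizes, weights and variances alike, can be sketched in Python. The function names are our assumptions, and only the cluster-size grid for an $f$-balanced partition is shown; the grids for the other parameters are built analogously.

```python
import itertools
import math


def size_grid(n, f_K, eps):
    """Geometric grid of candidate cluster sizes for an f-balanced partition:
    the values (1+eps)^i * n/f(K) for i = 0, ..., ceil(log_{1+eps} f(K)).
    For every true size c in [n/f(K), n] some grid value g satisfies
    c <= g <= (1+eps)*c."""
    steps = math.ceil(math.log(f_K) / math.log(1 + eps))
    return [(1 + eps) ** i * n / f_K for i in range(steps + 1)]


def size_candidates(n, K, f_K, eps):
    """All K-tuples over the grid; one of them approximates (|C_1|,...,|C_K|)
    coordinate-wise up to a factor of (1+eps)."""
    return itertools.product(size_grid(n, f_K, eps), repeat=K)
```

The candidate set has $({\lceil\log_{1+\epsilon}(f(K))\rceil+1})^K$ tuples, matching the $\left(\log(f(K))/\log(1+\epsilon)\right)^K$ bound up to rounding.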
\section{Introduction} \label{prvni-cast} In this article, we deal with symmetric regular and normal parabolic geometries on smooth connected manifolds. Consider a regular and normal parabolic geometry $(\ba \to M, \om)$ of type $(G,P)$. A \emph{symmetry} at a point $x\in M$ is an automorphism $\phi_x$ of the parabolic geometry such that $\phi_x(x)=x$ and the restriction of $T_x\phi_x$ to the bracket-generating distribution $T^{-1}M$ is $-\id$. The parabolic geometry is \emph{symmetric} if there is a symmetry at each $x \in M$. There are several known constructions of examples of symmetric parabolic geometries. In particular, there is a simple condition proved in \cite{GZ-Lie} that is necessary and sufficient for the existence of symmetric parabolic geometries. \begin{lemma*} Let $G$ be a semisimple Lie group and $P$ a parabolic subgroup of $G$. Let $G_0\ltimes \exp(\fp_+)$ be the reductive Levi decomposition of $P$ corresponding to the grading $\fg_i$ of $\fg$, where $\fg_0$ is the Lie algebra of $G_0$ and $\fp_+=\fg_1\oplus \dots \oplus\fg_k$. If the parabolic geometry $(\ba \to M, \om)$ of type $(G,P)$ is symmetric, then there is $s\in G_0$ acting as $-\id$ on $\fg_{-1}$. Moreover, if the type $(G,P)$ is effective, then the element $s$ is the unique element of $G_0$ acting as $-\id$ on $\fg_{-1}$. Conversely, if there is $s\in G_0$ acting as $-\id$ on $\fg_{-1}$, then the flat model $(G \to G/P, \om_G)$ is symmetric. In particular, there are infinitely many symmetries at the origin $eP$, given by left multiplications by elements of the form $$s\exp(-\Ad_s(Y))\exp (Y)$$ for $Y \in \fp_+$, and symmetries at an arbitrary point $gP$ are then given by the conjugates $gs\exp(-\Ad_s(Y))\exp (Y) g^{-1}$. In fact, we obtain a symmetric flat homogeneous parabolic geometry.
\end{lemma*} It is proved in \cite[Proposition 1.29]{HG-DGA} that for each semisimple Lie algebra $\fg$ and each parabolic subalgebra $\fp$ of $\fg$, there always exist a Lie group $G$ and a closed subgroup $P$ of $G$ such that the flat model $(G \to G/P, \om_G)$ is symmetric. In fact, a general construction of flat and non--flat homogeneous symmetric parabolic geometries on homogeneous fiber bundles over symmetric spaces is described in \cite[Theorem 2.7]{HG-DGA}. There are also examples of flat non--homogeneous symmetric parabolic geometries obtained from the flat model, which are not related to symmetric spaces. It is shown in \cite{ja-springer, ja-CEJM, GZ-Srni15} that if we remove two distinguished points $u,v$ from the flat model $(G \to G/P, \om_G)$ of parabolic geometries of projective, projective contact and conformal type, then the restrictions of the flat models $(G \to G/P, \om_G)$ to $M:=G/P \setminus \{u,v\}$ are still symmetric parabolic geometries. In all these cases, the manifold $M$ decomposes into several orbits with respect to the action of the automorphism group (which consists exactly of the elements of $G$ that preserve the subset $\{ u,v\} \subset G/P$), and on each of these orbits, the symmetries either preserve $u$ and $v$ or swap them. Further, there are constructions of homogeneous symmetric parabolic geometries other than the construction in \cite{HG-DGA}. In particular, a construction of non--flat homogeneous symmetric parabolic geometries on a (semidirect) product of a flat model of a different (non--effective) type of parabolic geometry and a homogeneous space of a nilpotent Lie group is presented in \cite{GZ-Lie}.
However, we will show in this article that we can combine the constructions from \cite{GZ-Lie} and \cite{ja-springer, ja-CEJM, GZ-Srni15} and prove that there are types of parabolic geometries for which the question can be answered positively. In the first section, we show how to combine the above constructions to get new examples of non--flat non--homogeneous symmetric parabolic geometries. We discuss several necessary and sufficient conditions under which the construction is applicable. As our main result, we show in the Theorem \ref{main} that there are two series of non--flat non--homogeneous symmetric parabolic geometries provided by our construction. We describe these parabolic geometries in detail. In the second section, we give a proof of the main Theorem \ref{main}. The proof consists of several technical lemmas and we explain the technicalities in detail. \section{Non--flat non--homogeneous symmetric parabolic geometries} Let us firstly give the statement that explains how to combine the two constructions of symmetric parabolic geometries mentioned in the Introduction. \begin{prop*} \label{prop} Let $G$ be semisimple Lie group and $P$ parabolic subgroup of $G$. Let $G_0\ltimes \exp(\fp_+)$ be the reductive Levi decomposition of $P$ corresponding to grading $\fg_i$ of $\fg$, where $\fg_0$ is the Lie algebra of $G_0$ and $\fp_+=\fg_1\oplus \dots \oplus\fg_k$. Suppose there is a non--flat $K$--homogeneous parabolic geometry $(\ba\to M,\om)$ of type $(G,P)$ satisfying the following conditions: \begin{enumerate} \item $K$ is an algebraic Lie subgroup of the automorphism group of the parabolic geometry $(\ba\to M,\om)$ acting transitively on $M$ and we denote by $H$ the stabilizer of a point $x\in M$. 
\item There is $u\in \ba$ covering $x$ and a reductive Levi decomposition $K=\exp(\fn)\rtimes \bar G$ such that, if we define the subgroups $$\exp(\fn_0):=\{\exp(X)\in \exp(\fn): \exp(X)(u)\in uG_0\},$$ $$\bar G_0:=\{\bar g\in \bar G: \bar g(u)\in uG_0\},$$ and $$\exp(\bar \fp_+):=\{\bar g\in \bar G: \bar g(u)\in u\exp(\fp_+)\},$$ then $H$ is the semidirect product of $\exp(\fn_0)$ and the parabolic subgroup $\bar P$ of $\bar G$ with reductive Levi decomposition $\bar P:=\bar G_0\ltimes \exp(\bar \fp_+)$. \item There is $\bar s\in \bar G_0$ such that $\bar s(u)=us$ for $s\in G_0$ acting as $-\id$ on $\fg_{-1}.$ \item There is a submanifold $\bar M$ of $\bar G/\bar P$ such that the flat model $(\bar G\to \bar G/\bar P,\omega_{\bar G})$ restricts to a non--homogeneous symmetric parabolic geometry of type $(\bar G,\bar P)$ on $\bar M$. \end{enumerate} Then the parabolic geometry $$(\ba|_{\exp(\fn)/\exp(\fn_0) \times \bar M}\to \exp(\fn)/\exp(\fn_0)\times \bar M,\om|_{\exp(\fn)/\exp(\fn_0)\times \bar M})$$ is a non--flat non--homogeneous symmetric parabolic geometry of type $(G,P)$. \end{prop*} \begin{proof} It follows from the assumptions (2) and (3) that the flat model $(\bar G\to \bar G/\bar P,\omega_{\bar G})$ is symmetric. Moreover, it follows from \cite[Sections 3 and 4]{GZ-Lie} that $(\ba\to M,\om)$ is a symmetric parabolic geometry and that the set of symmetries at $x$ contains a subset isomorphic to $s\exp(\bar \fp_+)$. Therefore the condition (4) implies that the parabolic geometry $(\ba|_{\exp(\fn)/\exp(\fn_0)\times \bar M}\to \exp(\fn)/\exp(\fn_0)\times \bar M,\om|_{\exp(\fn)/\exp(\fn_0)\times \bar M})$ is symmetric, because the set of symmetries at the points $(\exp(X)\exp(\fn_0),\bar x)\in \exp(\fn)/\exp(\fn_0)\times \bar M$ clearly contains the set of symmetries of $(\bar G|_{\bar M} \to \bar M, \om_{\bar G}|_{\bar M})$ at the points $\bar x$. \end{proof} Let us now discuss when the conditions (1)--(4) of Proposition \ref{prop} can be satisfied.
Firstly, the condition (1) imposes only topological restrictions on $M$, $G$ and $P$, which are not very restrictive. There is a construction in \cite[Section 3]{GZ-DGA} that transforms a non--flat $K$--homogeneous parabolic geometry $(\ba\to M,\om)$ into a parabolic geometry satisfying in addition the condition (1), after a sufficient algebraic completion of $G$ and $P$ and a covering of $M$. On the other hand, the conditions (2) and (3) are highly restrictive for non--flat geometries. We know from \cite{GZ-Lie} that not all types of parabolic geometries can admit a symmetry at a point with non--trivial curvature, and there are even fewer types of parabolic geometries that admit more than one symmetry at one point, i.e., non--trivial $\exp(\bar \fp_+)$, see the tables in \cite{GZ-Lie}. Moreover, non--flat homogeneous symmetric parabolic geometries do not satisfy the condition (2), in general. However, in \cite[Section 6 (second construction)]{GZ-Lie}, there is a construction of a parabolic geometry $(\ba\to M,\om)$ of type $(G,P)$ satisfying the conditions (1),(2),(3) under the following assumptions. \begin{lemma*}\label{cor} Suppose the type $(G,P)$ of parabolic geometries satisfies the following conditions: \begin{itemize} \item There is $s\in G_0$ acting as $-\id$ on $\fg_{-1}$ and acting as $\id$ on some component of the harmonic curvature of parabolic geometries of type $(G,P)$. \item The lowest weight $\mu$ in the component of the harmonic curvature on which $s\in G_0$ acts as $\id$ is preserved by the Cartan involution of the complexification of $\fg$. \end{itemize} Then there is a non--flat $K$--homogeneous parabolic geometry $(\ba\to M,\om)$ of type $(G,P)$ satisfying the conditions (1),(2),(3) of Proposition \ref{prop} for $K$ being the automorphism group of $(\ba\to M,\om)$ and $\mu$ being its curvature.
\end{lemma*} Motivated by the construction of the non--homogeneous flat examples, we study whether these geometries satisfy the condition (4) of Proposition \ref{prop} when we remove two points from the flat model $(\bar G\to \bar G/\bar P,\bar \om)$. We know that removing two points in the case $dim(\bar G/\bar P)=1$ leads to a homogeneous parabolic geometry. Therefore we need to consider the cases when $dim(\bar G/\bar P)>1$. If we look at the tables in \cite{GZ-Lie}, we find that there are only two series of possible types $(G,P)$ (up to covering) satisfying the conditions of Lemma \ref{cor} and admitting $dim(\bar G/\bar P)$ greater than one. Let us point out that we need to choose the projectivizations of the groups in order to satisfy the condition (3) for $n$ odd. \begin{enumerate} \item[(A)] Consider $G=PGl(n+1,\mathbb{R})$ and $P$ the stabilizer of the flag $e_1\subset e_1\wedge e_2 \subset e_1\wedge \dots \wedge e_l$ in $\mathbb{R}^{n+1}$ for $n\geq 2l-1$, $l>3$, where $e_1,\dots, e_{n+1}$ is the standard basis of $\R^{n+1}$. Then the group $K$ of the non--flat $K$--homogeneous parabolic geometry $(\ba\to M,\om)$ from Lemma \ref{cor} is (as a set) represented by the matrices from $PGl(n+1,\mathbb{R})$ of the form $$\begin{pmatrix} L'_{1,1} & L'_{1,2} & 0 & 0 & \dots & 0 & 0 \cr L'_{2,1} & L'_{2,2} & 0 & 0& \dots &0 & 0 \cr N_{3,1} & N_{3,2} & R_3 &0 & \dots & 0 & 0 \cr N_{4,1} &N_{4,2} & Z_{4,3} & L_{1,1} & \dots &L_{1,n-3} &0 \cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \cr N_{n,1} & N_{n,2} & Z_{n,3} & L_{n-3,1}& \dots & L_{n-3,n-3} & 0 \cr N_{n+1,1} & N_{n+1,2} & N_{n+1,3} & N_{n+1,4}& \dots & N_{n+1,n} & R_{n+1}\end{pmatrix},$$ where all the entries are real numbers such that the equalities $$(det(L')R_3det(L)R_{n+1})^2=1,\ \ \ det(L')R_3^{-3}R_{n+1}=1$$ hold for the submatrices $L$, $L'$ formed from the elements $L_{i,j}$, $L_{i,j}'$.
This means that $\bar G\cong L'\times L$ is the reductive Levi subgroup of $K$, and the unipotent radical corresponds to the entries $N$ and $Z$. The product of two elements $\exp(X_1)\bar g_1$, $\exp(X_2)\bar g_2\in \exp(\fn)\ltimes \bar G$ is $\exp(C(X_1,\Ad_{\bar g_1}(X_2)))\bar g_1\bar g_2\in \exp(\fn)\ltimes \bar G$, where $C(-,-)$ denotes the Baker--Campbell--Hausdorff formula for the nilpotent Lie algebra $\fn$. The difference between the Lie bracket in $\fn$ and the Lie bracket in $\frak{sl}(n+1,\mathbb{R})$ of the matrices representing the elements of $\fn$ is precisely the lowest weight of the harmonic curvature of the parabolic geometries of type $(G,P)$, which takes entries in the $N_{3,1}$ and $N_{3,2}$ slots and has values in the $N_{n+1,3}$ slot. The subgroup $\exp(\fn_0)$ corresponds to the $Z$ entries, and the parabolic subgroup $\bar P$ is the product of the stabilizer $Q'$ of $e_1$ in $L'$ and the stabilizer $Q$ of $e_4\wedge \dots \wedge e_l$ in $L$. Thus $\bar G/\bar P$ is the product of $L'/Q'\cong \mathbb{R}P^1$ and the Grassmannian $L/Q$ of $(l-3)$--planes in $\mathbb{R}^{n-3}$. Finally, the element $\bar s$ is the diagonal matrix with $(1,-1,1, \dots, 1, -1,\dots, -1,1)$ on the diagonal, with exactly $l$ appearances of $1$. \item[(C)] Consider $G=PSp(2n,\mathbb{R})$ and $P$ the stabilizer of the flag of isotropic subspaces $e_1\subset e_1\wedge e_2 \subset e_1\wedge \dots \wedge e_n$ in $\mathbb{R}^{2n}$ for $n>4$, where $e_1,\dots, e_{n}$ and $f_1,\dots,f_n$ are bases of two maximally isotropic subspaces in $\R^{2n}$ satisfying $\Omega(e_i,f_j)=\delta_j^i$ for the natural symplectic form $\Omega$ preserved by $PSp(2n,\mathbb{R})$. Then the group $K$ of the non--flat $K$--homogeneous parabolic geometry $(\ba\to M,\om)$ from Lemma \ref{cor} is (as a set) represented by the matrices in $PSp(2n,\mathbb{R})$ with block structure $\begin{pmatrix} A & B \cr C & * \end{pmatrix}$ w.r.t.
to the bases $e_1,\dots, e_{n}$ and $f_1,\dots,f_n$, where $$A:=\begin{pmatrix} L'_{1,1} & L'_{1,2} & 0 & 0 & \dots & 0 \cr L'_{2,1} & L'_{2,2} & 0 & 0& \dots &0 \cr N_{3,1} & N_{3,2} & R_3 &0 & \dots & 0 \cr N_{4,1} &N_{4,2} & Z_{4,3} & L_{1,1} & \dots &L_{1,n-3} \cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \cr N_{n,1} & N_{n,2} & Z_{n,3} & L_{n-3,1}& \dots & L_{n-3,n-3} \cr \end{pmatrix},$$ $$B:=\begin{pmatrix} 0 & 0 & 0 & 0& \dots & 0\cr 0 & 0 & 0 & 0& \dots & 0 \cr 0 &0 & 0 & 0& \dots & 0 \cr 0 & 0 & 0 & L_{1,n-2}& \dots & L_{1,2n-6} \cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \cr 0 & 0 & 0 & L_{n-3,n-2}& \dots & L_{n-3,2n-6} \cr \end{pmatrix},$$ $$C:=\begin{pmatrix} N_{n+1,1} & N_{n+2,1} & N_{n+3,1} & N_{n+4,1}& \dots & N_{2n,1} \cr * & N_{n+2,2} & N_{n+3,2} & N_{n+4,2}& \dots & N_{2n,2} \cr * &* & N_{n+3,3} & N_{n+4,3}& \dots & N_{2n,3}\cr * & * & * & L_{n-2,1}& \dots & L_{n-2,n-3} \cr \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \cr * & * & * & L_{2n-6,1}& \dots & L_{2n-6,2n-6} \cr \end{pmatrix},$$ where the $*$ entries are uniquely determined by the structure of $Sp(2n,\mathbb{R})$, the matrix $L$ formed by the elements $L_{i,j}$ is contained in $CSp(2n-6,\mathbb{R})$, and all remaining entries are real numbers such that the equality $$det(L')R_3^{-4}=1$$ holds for the submatrix $L'$ formed from the elements $L_{i,j}'$. This means that $\bar G\cong L'\times L$ is the reductive Levi subgroup of $K$, and the unipotent radical corresponds to the entries $N$ and $Z$. The product of two elements $\exp(X_1)\bar g_1$, $\exp(X_2)\bar g_2\in \exp(\fn)\ltimes \bar G$ is $\exp(C(X_1,\Ad_{\bar g_1}X_2))\bar g_1\bar g_2\in \exp(\fn)\ltimes \bar G$, where $C(-,-)$ denotes the Baker--Campbell--Hausdorff formula for the nilpotent Lie algebra $\fn$.
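The group law above can be checked numerically in a small case. The following Python sketch (our own illustration, not part of the construction) verifies that for strictly lower-triangular $4\times 4$ matrices, where all brackets of degree $\geq 4$ vanish, the truncated Baker--Campbell--Hausdorff series $C(X,Y)=X+Y+\tfrac12[X,Y]+\tfrac1{12}[X,[X,Y]]-\tfrac1{12}[Y,[X,Y]]$ reproduces the logarithm of the group product $\exp(X)\exp(Y)$ in the unipotent group:

```python
import numpy as np

def expm_nilpotent(N, order=4):
    """Matrix exponential of a nilpotent matrix via its finite power series."""
    E = np.eye(N.shape[0])
    term = np.eye(N.shape[0])
    for k in range(1, order):
        term = term @ N / k      # term = N^k / k!
        E = E + term
    return E

def logm_unipotent(U, order=4):
    """Matrix logarithm of a unipotent matrix via the finite Mercator series."""
    N = U - np.eye(U.shape[0])
    L = np.zeros_like(N)
    term = np.eye(U.shape[0])
    for k in range(1, order):
        term = term @ N          # term = N^k
        L = L + ((-1) ** (k + 1)) * term / k
    return L

def bracket(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(0)
# two random strictly lower-triangular (hence nilpotent) 4x4 matrices
X = np.tril(rng.standard_normal((4, 4)), k=-1)
Y = np.tril(rng.standard_normal((4, 4)), k=-1)

# group product in the unipotent group exp(n), pulled back to the Lie algebra
C_group = logm_unipotent(expm_nilpotent(X) @ expm_nilpotent(Y))

# BCH series; all terms of degree >= 4 vanish for 4x4 strictly lower-triangular matrices
C_bch = (X + Y + bracket(X, Y) / 2
         + bracket(X, bracket(X, Y)) / 12
         - bracket(Y, bracket(X, Y)) / 12)

assert np.allclose(C_group, C_bch)
```

For the nilpotent radical $\fn$ appearing in the series (A) and (C) the same finite truncation applies, which is what makes the multiplication rule $\exp(C(X_1,\Ad_{\bar g_1}X_2))\bar g_1\bar g_2$ explicit.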
The difference between the Lie bracket in $\fn$ and the Lie bracket in $\frak{sp}(2n,\mathbb{R})$ of the matrices representing the elements of $\fn$ is precisely the lowest weight of the harmonic curvature of the parabolic geometries of type $(G,P)$, which takes entries in the $N_{3,1}$ and $N_{3,2}$ slots and has values in the $N_{n+3,3}$ slot. The subgroup $\exp(\fn_0)$ corresponds to the $Z$ entries, and the parabolic subgroup $\bar P$ is the product of the stabilizer $Q'$ of $e_1$ in $L'$ and the stabilizer $Q$ of $e_4\wedge \dots \wedge e_n$ in $L$. Thus $\bar G/\bar P$ is the product of $L'/Q'\cong \mathbb{R}P^1$ and the Grassmannian $L/Q$ of maximally isotropic (w.r.t. $\Omega$) $(n-3)$--planes in $\mathbb{R}^{2n-6}$. Finally, the element $\bar s$ is the diagonal matrix with $(1,-1,1, \dots, 1)$ on the first $n$ entries of the diagonal. \end{enumerate} Since $\bar G/\bar P\cong \mathbb{R}P^1\times L/Q$ is the product of two flat models of parabolic geometries in both of the above cases (A) and (C), we remove two points from the flat model $(L \to L/Q, \om_{L})$ and consider $\bar M:= \mathbb{R}P^1\times (L/Q -\{l_1Q,l_2Q\})$ for some $l_1Q,l_2Q \in L/Q$. Then the flat parabolic geometry $(\bar G\to \bar G/\bar P,\om_{\bar G})$ restricts to a parabolic geometry over $\bar M$ of the same type $(\bar G, \bar P)$. Its automorphism group is the direct product of $L'$ and those elements of $L$ that preserve the set $\{l_1Q,l_2Q\}$. Thus it decomposes into two components according to whether its elements preserve $l_1Q$ and $l_2Q$ or swap them. This property also restricts the possible symmetries on $\bar M$, and there is a natural question whether at least some symmetries on $\bar G/\bar P$ survive the restriction to $\bar M$. We have the following crucial statement. \begin{thm*} \label{main} Let $(L,Q)$ be one of the types of parabolic geometries from the above series (A) or (C).
Then $\bar M:= \mathbb{R}P^1\times (L/Q -\{l_1Q,l_2Q\})$ satisfies the condition (4) of Proposition \ref{prop} if and only if there is $q\in Q$ such that $q l_1^{-1}l_2Q=e_4\wedge \dots \wedge e_{l-1}\wedge e_{l+1}$ in the case (A) or $q l_1^{-1}l_2Q=e_4\wedge \dots \wedge e_{n-1}\wedge f_{n}$ in the case (C). \end{thm*} The proof of Theorem \ref{main} is fairly technical and we give it in the next section. In fact, the proof of Theorem \ref{main} is equivalent to the proof of the following statement. \begin{cor*} \label{main2} Let $(L,Q)$ be one of the types of parabolic geometries from the above series (A) or (C). Then the flat model $(L\to L/Q,\om_L)$ restricts to a symmetric parabolic geometry on $L/Q -\{l_1Q,l_2Q\}$ if and only if there is $q\in Q$ such that $q l_1^{-1}l_2Q=e_4\wedge \dots \wedge e_{l-1}\wedge e_{l+1}$ in the case (A) or $q l_1^{-1}l_2Q=e_4\wedge \dots \wedge e_{n-1}\wedge f_{n}$ in the case (C). \end{cor*} Let us give a geometric interpretation of the condition in Theorem \ref{main} and interpret the condition for the existence of a preserving symmetry from Lemma \ref{l5}. \begin{cor*} Let $(L,Q)$ be one of the types of parabolic geometries from the above series (A) or (C). Then $\bar M:= \mathbb{R}P^1\times (L/Q -\{l_1Q,l_2Q\})$ satisfies the condition (4) of Proposition \ref{prop} if and only if the subspaces $W_1$ and $W_2$ corresponding to $l_1Q$ and $l_2Q$ have intersection of dimension $dim(W_1)-1=dim(W_2)-1.$ There is a symmetry preserving the subspaces $W_1$ and $W_2$ at the point of $L/Q -\{l_1Q,l_2Q\}$ corresponding to a subspace $W$ if and only if the intersection $W\cap (W_1+W_2)$ is contained in $W_1$ or $W_2$.
\end{cor*} The automorphism group of the parabolic geometry $(\ba|_{\exp(\fn)/\exp(\fn_0) \times \bar M}\to \exp(\fn)/\exp(\fn_0)\times \bar M,\om|_{\exp(\fn)/\exp(\fn_0)\times \bar M})$ in the case (A) for $\bar M:= \mathbb{R}P^1\times (L/Q -\{e_4\wedge \dots \wedge e_{l-1}\wedge e_{l},e_4\wedge \dots \wedge e_{l-1}\wedge e_{l+1}\})$ has two components. The identity component consists of the (semidirect) product of $L'$, $\exp(\fn)$ and the following matrices in $L$: $$\begin{pmatrix} L_{1,1} & \dots &L_{1,l} & L_{1,l+1} & L_{1,l+2}& \dots & L_{1,n-3} \cr \vdots & \ddots & \vdots & \vdots& \vdots & \ddots & \vdots \cr L_{l-1,1}& \dots & L_{l-1,l} & L_{l-1,l+1} & L_{l-1,l+2}& \dots & L_{l-1,n-3}\cr 0& \dots & {\pmb L_{l,l}} &0 &L_{l,l+2}& \dots & L_{l,n-3} \cr 0& \dots &0&{\pmb L_{l+1,l+1}}& L_{l+1,l+2}& \dots & L_{l+1,n-3} \cr \vdots & \ddots & \vdots & \vdots& \vdots & \ddots & \vdots \cr 0& \dots & 0& 0& L_{n-3,l+2} &\dots & L_{n-3,n-3} \cr \end{pmatrix}.$$ The other component consists of the (semidirect) product of $L'$, $\exp(\fn)$ and the following matrices in $L$: $$\begin{pmatrix} L_{1,1} & \dots &L_{1,l} & L_{1,l+1} & L_{1,l+2}& \dots & L_{1,n-3} \cr \vdots & \ddots & \vdots & \vdots& \vdots & \ddots & \vdots \cr L_{l-1,1}& \dots & L_{l-1,l} & L_{l-1,l+1} & L_{l-1,l+2}& \dots & L_{l-1,n-3}\cr 0& \dots &0&{\pmb L_{l+1,l+1}} &L_{l+1,l+2}& \dots & L_{l+1,n-3} \cr 0& \dots &{\pmb L_{l,l}}&0& L_{l,l+2}& \dots & L_{l,n-3} \cr \vdots & \ddots & \vdots & \vdots& \vdots & \ddots & \vdots \cr 0& \dots & 0& 0& L_{n-3,l+2} &\dots & L_{n-3,n-3} \cr \end{pmatrix}.$$ The automorphism group of the parabolic geometry $(\ba|_{\exp(\fn)/\exp(\fn_0) \times \bar M}\to \exp(\fn)/\exp(\fn_0)\times \bar M,\om|_{\exp(\fn)/\exp(\fn_0)\times \bar M})$ in the case (C) for $\bar M:= \mathbb{R}P^1\times (L/Q -\{e_4\wedge \dots \wedge e_{n-1}\wedge e_{n},e_4\wedge \dots \wedge e_{n-1}\wedge f_{n}\})$ has two components.
The identity component consists of the (semidirect) product of $L'$, $\exp(\fn)$ and the following matrices in $L$: $$\begin{pmatrix} L_{1,1} & \dots & L_{1,n} & L_{1,n+1} & \dots & L_{1,2n-1} & L_{1,2n} \cr \vdots & \ddots & \vdots &\vdots & \ddots & \vdots & \vdots \cr L_{n-1,1} & \dots & L_{n-1,n} &L_{n-1,n+1} & \dots & L_{n-1,2n-1} & L_{n-1,2n} \cr 0 & \dots & {\pmb L_{n,n}} & L_{n,n+1} & \dots &L_{n,2n-1} &0 \cr 0 & \dots & 0 & L_{n+1,n+1}& \dots & L_{n-2,n-3} & 0 \cr \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \cr 0 & \dots & 0 & L_{2n-1,n+1}& \dots & L_{2n-1,2n-1} & 0 \cr 0 & \dots & 0 & L_{2n,n+1}& \dots & L_{2n,2n-1} & {\pmb L_{2n,2n}} \cr \end{pmatrix}.$$ The other component consists of the product of $L'$, $\exp(\fn)$ and the following matrices in $L$: $$\begin{pmatrix} L_{1,1} & \dots & L_{1,n} & L_{1,n+1} & \dots & L_{1,2n-1} & L_{1,2n} \cr \vdots & \ddots & \vdots &\vdots & \ddots & \vdots & \vdots \cr L_{n-1,1} & \dots & L_{n-1,n} &L_{n-1,n+1} & \dots & L_{n-1,2n-1} & L_{n-1,2n} \cr 0 & \dots & 0 & L_{2n,n+1} & \dots &L_{2n,2n-1} &{\pmb L_{2n,2n}} \cr 0 & \dots & 0 & L_{n+1,n+1}& \dots & L_{n-2,n-3} & 0 \cr \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \cr 0 & \dots& 0 & L_{2n-1,n+1}& \dots & L_{2n-1,2n-1} & 0 \cr 0 & \dots & {\pmb L_{n,n}} & L_{n,n+1}& \dots & L_{n,2n-1} & 0 \cr \end{pmatrix}.$$ Therefore, there is the following characterization of the orbits of the automorphism group in $\exp(\fn)/\exp(\fn_0)\times \bar M$.
\begin{prop*} In the case (A) or (C), the points $(\exp(X_1)\exp(\fn_0),l_1'Q',W_3)$ and $(\exp(X_2)\exp(\fn_0),l_2'Q',W_4)$ for Grassmannians $W_3,W_4$ in $L/Q-\{W_1,W_2\}$ lie in the same orbit of the automorphism group of the parabolic geometry $(\ba|_{\exp(\fn)/\exp(\fn_0) \times \bar M}\to \exp(\fn)/\exp(\fn_0)\times \bar M,\om|_{\exp(\fn)/\exp(\fn_0)\times \bar M})$ if and only if \begin{align*} dim(W_3\cap W_2\cap W_1)&=dim(W_4\cap W_2\cap W_1),\\ dim(W_3\cap (W_2+ W_1))&=dim(W_4\cap (W_2+W_1)),\\ dim(W_3\cap (W_2\sqcup W_1))&=dim(W_4\cap (W_2\sqcup W_1)), \end{align*} where $W_2 \sqcup W_1$ is the union of $W_2$ and $W_1$ as algebraic sets. \end{prop*} \section{The proof of Theorem \ref{main}} In order to prove Theorem \ref{main}, it suffices to prove Corollary \ref{main2}. So we assume that $(L,Q)$ is one of the types of parabolic geometries from the above series (A) or (C). Since $L$ acts by automorphisms of the flat model $(L\to L/Q,\om_L)$ from the left, the restriction of $(L\to L/Q,\om_L)$ to $L/Q-\{l_1Q, l_2Q\}$ is isomorphic to the restriction of $(L\to L/Q,\om_L)$ to $L/Q-\{ll_1Q, ll_2Q\}$ for all $l\in L$. Therefore we can choose $l=ql_1^{-1}$ for some $q\in Q$ and work with the parabolic geometry on $L/Q-\{eQ,q l_1^{-1}l_2Q\}$. Consequently, the non--isomorphic restrictions of $(L\to L/Q,\om_L)$ to $L/Q-\{l_1Q, l_2Q\}$ are parametrized by the double coset space $Q\backslash L/Q$. We will find a suitable representative $v\in L$ of the classes in $Q\backslash L/Q$ and investigate the symmetries on the restrictions of $(L\to L/Q,\om_L)$ to $L/Q -\{eQ, vQ\}$. The elements of the Lie algebra $\fl$ of $L$ that are diagonal in the bases $e_1,\dots,e_{n+1}$ or $e_1,\dots,f_{n}$, respectively, form a Cartan subalgebra of $\fl$. The Lie algebra $\fq$ of $Q$ is a standard parabolic subalgebra of $\fl$ for this Cartan subalgebra, and we denote by $\fl_{-1}\oplus \fl_0\oplus \fl_1$ the corresponding $|1|$--grading of $\fl$.
Then the subgroups $W(\fl)$, $W(\fl_0)$ generated by the elements of $L$, $L_0$ which permute the elements of the bases $e_1,\dots,e_{n+1}$ or $e_1,\dots,f_{n}$, respectively, induce the Weyl groups of $\fl$, $\fl_0$. Let us recall that there are representatives of the classes of $W(\fl)/W(\fl_0)$ encoded by the Hasse diagram $\mathcal{W}^{\fq}$ of the parabolic subalgebra $\fq$, which define the decomposition of $L/Q$ into Schubert cells, see \cite[Section 3.2.19]{parabook}. \begin{lemma*} Each element of $L/Q$ can be uniquely written as $\exp(Z)wQ$ for $w\in \mathcal{W}^{\fq}$ and $Z\in \Ad_w^{-1}(\fl_{-1})\cap \fb_+$, where $\fb_+\subset \fq$ is the sum of all positive root spaces in $\fl$. The dimension of $\Ad_w^{-1}(\fl_{-1})\cap \fb_+$ is equal to the length of $w$ in $\mathcal{W}^{\fq}$. \end{lemma*} Consequently, the double coset space $Q\backslash L/Q$ is finite and is in bijective correspondence with the double coset space $W(\fl_0)\backslash W(\fl)/W(\fl_0)$. Therefore we can represent the classes of $Q\backslash L/Q$ by the shortest elements in the Hasse diagram $\mathcal{W}^\fq$ from the corresponding classes in $W(\fl_0)\backslash W(\fl)/W(\fl_0)$. We start the investigation of the symmetries of the restriction of $(L\to L/Q,\om_L)$ to $L/Q -\{eQ, vQ\}$ with the smallest cells $\exp(Z)wQ$ for $w\in \mathcal{W}^{\fq}$ of length $1$. In the case $(A)$, there is a unique $w$ of length $1$, namely the simple reflection over the $(l-3)^{rd}$ simple root, which corresponds to swapping $e_{l}$ and $e_{l+1}$, and $Z$ is contained in the root space of the $(l-3)^{rd}$ simple root of $\fl$. In the case $(C)$, there is a unique $w$ of length $1$, namely the simple reflection over the $(n-3)^{rd}$ simple root, which corresponds to swapping $e_{n}$ and $f_{n}$, and $Z$ is contained in the root space of the $(n-3)^{rd}$ simple root of $\fl$.
\begin{lemma*} \label{l1} Let $(L,Q)$ be the type of parabolic geometry from (A) or (C), let $w$ be the unique element of $\mathcal{W}^{\fq}$ of length $1$, and let $v$ be the shortest element in $\mathcal{W}^{\fq}$ representing a class in $Q\backslash L/Q$. If the length of $v$ is greater than $1$, then there is no symmetry at the points $\exp(Z)wQ$, $Z\neq 0$, of $L/Q -\{eQ, vQ\}$. \end{lemma*} \begin{proof} There is a symmetry at the point $\exp(Z)wQ$ preserving the points $eQ$ and $vQ$ if and only if there is $Y\in \fl_1$ such that $$\exp(Z)ws\exp(Y)(\exp(Z)w)^{-1}\in Q$$ and simultaneously $$v^{-1}\exp(Z)ws\exp(Y)(\exp(Z)w)^{-1}v\in Q$$ hold. Since $N_{L}(Q)=Q$, $v^{-1}wsw^{-1}v\in Q$ and $\exp(\Ad_{wsw^{-1}}^{-1}(Z))=\exp(-Z)$ hold for both types (A) and (C), these two conditions are equivalent to the conditions $$\Ad_{\exp(Z)w}(Y)\in \fq$$ and simultaneously $$\exp(\Ad_v^{-1}\Ad_{\exp(-Z)w}(Y))\exp(-2\Ad_v^{-1}(Z))\in Q.$$ It follows from the structure of $\mathcal{W}^{\fq}$ that $\Ad_v^{-1}(Z)$ is a non--zero element of $\fl_{-1}$, while the condition $\Ad_{\exp(Z)w}(Y)\in \fq$ implies that $\Ad_{\exp(-Z)w}(Y)$ has trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root, respectively. Therefore, there is a symmetry at $\exp(Z)wQ$ preserving the points $eQ$ and $v Q$ only if $Z= 0$. There is a symmetry at $\exp(Z)wQ$ swapping the points $eQ$ and $v Q$ if and only if there is $Y\in \fl_1$ such that the condition $$\exp(Z)ws\exp(Y)(\exp(Z)w)^{-1}v\in Q$$ holds. This condition is equivalent to the condition $$\exp(\Ad_{\exp(Z)w}(Y))v=\exp(\Ad_w(Y)+[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]])v\in Q.$$ Since the right multiplication by elements of $W(\fl)$ acts by swapping columns in the matrix $\exp(\Ad_w(Y)+[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]])$, the entries on the diagonal of $\exp(\Ad_w(Y)+[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]])$ are decisive for the existence of the symmetry.
But all the diagonal entries are equal to $1$ except those at the $(l-2)^{nd}$ and $(l-3)^{rd}$ positions in the case (A) and at the $(n-3)^{rd}$ and $2(n-3)^{rd}$ positions in the case (C), which both depend on $[Z,\Ad_w(Y)]$. Therefore, if the length of $v$ is greater than one, there is no swapping symmetry at $\exp(Z)wQ$. \end{proof} Therefore it remains to investigate the symmetries of the restriction of the flat model to $L/Q -\{eQ, vQ\}$ for the unique element $v$ of $\mathcal{W}^{\fq}$ of length $1$. In this case, we can again use the decomposition of $L/Q$ into Schubert cells to show when there is a symmetry preserving the points $eQ$ and $vQ$. \begin{lemma*}\label{l5} Let $(L,Q)$ be the type of parabolic geometry from (A) or (C) and let $v$ be the unique element of $\mathcal{W}^{\fq}$ of length $1$. Then there is a symmetry at a point $\exp(Z)wQ$ of $L/Q -\{eQ, vQ\}$ preserving the points $eQ$ and $vQ$ if and only if $Z$ has trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$, respectively. \end{lemma*} \begin{proof} Since the conditions $v^{-1}wsw^{-1}v\in Q$ and $\exp(\Ad_{wsw^{-1}}^{-1}(Z))=\exp(-Z)$ from the proof of Lemma \ref{l1} are satisfied for generic $v$ and $w$ and $Z\in \Ad_w^{-1}(\fl_{-1})\cap \fb_+$, the symmetry $\exp(Z)ws(\exp(Z)w)^{-1}$ at the point $\exp(Z)wQ$ of $L/Q -\{eQ, vQ\}$ preserves the points $eQ$ and $vQ$ if and only if $Z$ has trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$. It remains to show that there are no preserving symmetries at the other points of $L/Q -\{eQ, vQ\}$. Thus it suffices to show that if the symmetry $$\exp(Z)ws\exp(Y)(\exp(Z)w)^{-1}$$ at the point $\exp(Z)wQ$ preserves the points $eQ$ and $vQ$, then $Z$ has trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$, respectively.
Let us assume that the conditions $$\Ad_{\exp(Z)w}(Y)\in \fq$$ and simultaneously $$\exp(\Ad_v^{-1}\Ad_{\exp(-Z)w}(Y))\exp(-2\Ad_v^{-1}(Z))\in Q$$ hold. If $Z$ has a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$, then $\Ad_{\exp(-Z)w}(Y)$ has a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$, too. But $$\Ad_{\exp(-Z)w}(Y)=\Ad_w(Y)-[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]]\in \fq$$ follows from the condition $\Ad_{\exp(Z)w}(Y)\in \fq$, and thus $\Ad_w(Y)\in \fq$ has a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$. This contradicts the condition $Z\in \Ad_w^{-1}(\fl_{-1})\cap \fb_+$ for $Z$ with a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$: indeed, $w=v\circ w'$ holds for some $w'\in \mathcal{W}^\fq$, and if the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$ is in the image of $\Ad_w(\fl_1)$, then the dimension of $\Ad_{w'}^{-1}(\fl_{-1})\cap \fb_+$ is $dim(\Ad_w^{-1}(\fl_{-1})\cap \fb_+)+1$, which is a contradiction. \end{proof} Therefore it remains to show that there is a symmetry swapping the points $eQ$ and $vQ$ at the points $\exp(Z)wQ\in L/Q$ such that $Z$ has a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$. We show this as a part of the following lemma, which summarizes the previous statements. \begin{lemma*} Let $(L,Q)$ be the type of parabolic geometry from (A) or (C) and let $v$ be the unique element of $\mathcal{W}^{\fq}$ of length $1$. Then there is a symmetry either preserving or swapping $eQ$ and $vQ$ at each $\exp(Z)wQ\in L/Q -\{eQ, vQ\}$. \end{lemma*} \begin{proof} Suppose $\exp(Z)wQ\in L/Q -\{eQ, vQ\}$ is such that $Z$ has a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$.
Since the conditions $v^{-1}wsw^{-1}v\in Q$ and $\exp(\Ad_{wsw^{-1}}^{-1}(Z))=\exp(-Z)$ from the proof of Lemma \ref{l1} are satisfied for generic $v$ and $w$ and $Z\in \Ad_w^{-1}(\fl_{-1})\cap \fb_+$, the symmetry $\exp(Z)ws\exp(Y)(\exp(Z)w)^{-1}$ at the point $\exp(Z)wQ$ of $L/Q -\{eQ, vQ\}$ swaps the points $eQ$ and $vQ$ if and only if $$\exp(\Ad_w(Y)+[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]])v\in Q.$$ However, $v$ swaps the $(l-2)^{nd}$ and $(l-3)^{rd}$ columns in the case (A) and the $(n-3)^{rd}$ and $2(n-3)^{rd}$ columns in the case (C). Therefore there is a symmetry at $\exp(Z)wQ$ if there is a $0$ at the $(l-3)^{rd}$ or $2(n-3)^{rd}$ position on the diagonal of the matrix $\exp(\Ad_w(Y)+[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]])$ and the component of $\Ad_w(Y)$ in $\fl_{-1}$ is contained in the root space of minus the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$. Since $Z\in \Ad_w^{-1}(\fl_{-1})\cap \fb_+$ has a non--trivial component in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$ and there is a duality between the positive and negative roots, there is $Y\in \fl_1$ such that $\Ad_w(Y)$ is contained in the root space of minus the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$. If $Y$ is anti--proportional to the component of $Z$ in the root space of the $(l-3)^{rd}$ or $(n-3)^{rd}$ simple root of $\fl$, then there is a $0$ at the $(l-3)^{rd}$ or $2(n-3)^{rd}$ position on the diagonal of the matrix $\exp(\Ad_w(Y)+[Z,\Ad_w(Y)]+1/2[Z,[Z,\Ad_w(Y)]])$, and the symmetry $\exp(Z)ws\exp(Y)(\exp(Z)w)^{-1}$ at the point $\exp(Z)wQ$ of $L/Q -\{eQ, vQ\}$ swaps the points $eQ$ and $vQ$. \end{proof}
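To make the intersection--dimension criterion from the geometric interpretation of Theorem \ref{main} concrete, the following Python sketch (our own numerical illustration, not part of the proof; the sample parameters $n=9$, $l=5$ in case (A), i.e., $2$--planes in $\mathbb{R}^6$, and the variable names are ours) checks that the distinguished pair of planes meets in codimension one, while a generic pair meets trivially:

```python
import numpy as np

def intersection_dim(B1, B2):
    """dim(W1 ∩ W2) for subspaces given as column spans, via
    dim(W1 ∩ W2) = dim W1 + dim W2 - dim(W1 + W2)."""
    return (np.linalg.matrix_rank(B1) + np.linalg.matrix_rank(B2)
            - np.linalg.matrix_rank(np.hstack([B1, B2])))

I = np.eye(6)  # ambient R^{n-3} for case (A) with n = 9, l = 5: 2-planes in R^6
W1 = I[:, [0, 1]]            # span(e_4, e_5) in the paper's labelling
W2 = I[:, [0, 2]]            # span(e_4, e_6): the distinguished representative
W2_generic = I[:, [2, 3]]    # a generic 2-plane meeting W1 trivially

# criterion of the geometric Corollary: dim(W1 ∩ W2) = dim(W1) - 1
assert intersection_dim(W1, W2) == 1          # condition (4) satisfied
assert intersection_dim(W1, W2_generic) == 0  # condition (4) fails
```

By Lemma \ref{l1}, the generic configuration corresponds to a representative $v$ of length greater than $1$, which is why no symmetric restriction exists in that case.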
\section{Introduction} \label{sec:1} Planetary nebulae (PNe) are formed in the final evolutionary stages of stars with initial masses $\leq$8--10\,$M_{\sun}$. As these stars evolve along the asymptotic giant branch (AGB), they experience successive episodes of heavy mass loss through a slow ($v_{\infty}$ $\sim$10 km\,s$^{-1}$) wind. Once the stellar envelope is stripped off, the hot stellar core is exposed, leading to a 1000--4000 km\,s$^{-1}$ fast stellar wind \citep{csp85,gue13}. This fast wind sweeps up the slow AGB wind, which is further photoionized by the central star (CSPN), to form a PN \citep{kwok83,fbr90}. In this interacting stellar winds (ISW) model, an adiabatically-shocked hot bubble with temperatures as high as 10$^7$--10$^8$~K forms in the inner region of the PN, but this hot gas is too tenuous ($\sim$10$^{-3}$~cm$^{-3}$) to be detected. Nevertheless, extended X-ray emission has now been detected inside the inner cavities of nearly 30 PNe with plasma temperatures of 1--3$\times$10$^6$~K and electron densities of 1--10~cm$^{-3}$ \citep[e.g.,][]{kast00,kast12,chu01,gue00,gue02,gue05,free14}. The detection of X-ray-emitting hot gas in PN interiors strongly supports the ISW model, but the discrepancy between the observed and predicted physical conditions and X-ray luminosities has led to the suggestion that some mechanism is reducing the temperature of the hot bubble and raising its density. Thermal conduction \citep[][and references therein]{stef08,sok94} and/or hydrodynamical instabilities \citep[e.g.,][]{ta14} in the wind-wind interaction zone can inject material into the hot bubble, creating a {\it mixing layer} of gas with intermediate temperatures ($\sim$10$^5$~K) between the hot bubble and the optical nebular shell. As thermal conduction governs the amount of material injected into the hot bubble, turning it on or off in the models causes differences in the spatial extent and physical properties of the mixing layer. 
By gaining insights into the mixing layers in PNe, the effects of thermal conduction and turbulent mixing on the interior hot gas can be quantitatively assessed. This in turn helps us to refine the models to produce more realistic predictions, which can then be compared with the available sample of PNe with diffuse X-ray emission detected \citep{free14}. There is, however, very little observational information about the mixing layers. X-ray observations of NGC\,6543 (a.k.a.\ the Cat's Eye Nebula) reveal a physical structure qualitatively consistent with the ISW models \citep{chu01}. The \emph{Chandra} image of NGC\,6543 (Figure~\ref{fig1}) shows simple limb-brightened diffuse X-ray emission confined within the bright inner shell and two blisters at the tips of its major axis, in sharp contrast to its complex optical morphology \citep{bal04}, implying density enhancement near the inner nebular rim and evaporation of nebular material into hot interior. Indeed, the observed X-ray temperature \citep[1.7$\times$10$^{6}$~K;][]{chu01} is much lower than that expected for a stellar wind of $v_{\infty}$ $\sim$1400~km\,s$^{-1}$ \citep{pri07}. Therefore, NGC\,6543 provides a case study of mixing layers in PNe. UV lines of highly ionized species produced by thermal collisions in the mixing layer can be used as probes. The most common species are C~{\sc iv}, N~{\sc v}, and O~{\sc vi}, whose fractional abundances peak at $\sim$1$\times$10$^5$, 2$\times$10$^5$ and 3$\times$10$^5$~K, respectively \citep{sv82}. \emph{FUSE} detections of the O~{\sc vi} $\lambda\lambda$1032,1038 doublet from the mixing layers have been reported in several PNe \citep{ip02,ruiz13}, including NGC\,6543 \citep{gru04}, but no spatial information could be drawn due to the limited angular resolution of \emph{FUSE}. This can only be achieved by the unique capabilities of the \emph{Hubble Space Telescope} (\emph{HST}). 
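The mismatch between the measured X-ray temperature and that expected from the fast wind can be illustrated with a simple estimate. The following Python snippet (a back-of-the-envelope sketch, not from the papers cited; it assumes a fully adiabatic strong shock and a mean mass per particle $\mu \approx 0.6$, both our own assumptions) evaluates the post-shock temperature $T = 3\mu m_{\rm H} v_\infty^2/(16 k)$ for $v_\infty = 1400$ km\,s$^{-1}$:

```python
# Adiabatic strong-shock temperature for the fast wind of NGC 6543.
# T = 3 mu m_H v^2 / (16 k) for a gamma = 5/3 gas; mu ~ 0.6 is assumed.
M_H = 1.6726e-24   # hydrogen mass [g]
K_B = 1.3807e-16   # Boltzmann constant [erg/K]
MU = 0.6           # assumed mean mass per particle (fully ionized gas)

def shock_temperature(v_kms, mu=MU):
    """Post-shock temperature [K] of an adiabatic strong shock at speed v_kms [km/s]."""
    v = v_kms * 1.0e5  # km/s -> cm/s
    return 3.0 * mu * M_H * v**2 / (16.0 * K_B)

T_wind = shock_temperature(1400.0)   # expected hot-bubble temperature
T_obs = 1.7e6                        # observed X-ray temperature (Chu et al. 2001)
print(f"expected: {T_wind:.1e} K, observed: {T_obs:.1e} K")
```

The expected temperature comes out above $10^7$~K, more than an order of magnitude hotter than observed, which is the discrepancy that thermal conduction and turbulent mixing are invoked to resolve.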
\begin{table*}[!t] \begin{center} \caption{\emph{HST} STIS Observing Log} \label{table1} \begin{tabular}{lcllcccc} \hline \hline \multicolumn{1}{c}{Slit Position} & \multicolumn{1}{c}{Date} & \multicolumn{5}{c}{\underline{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Instrumental Configuration~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}} & \multicolumn{1}{c}{$t_{\rm exp}$} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{l}{Detector} & \multicolumn{1}{c}{Grating} & \multicolumn{1}{c}{$\lambda_c$} & \multicolumn{1}{c}{Spectral Range} & \multicolumn{1}{c}{Dispersion} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{({\AA})} & \multicolumn{1}{c}{({\AA})} & \multicolumn{1}{c}{({\AA}~pixel$^{-1}$)} & \multicolumn{1}{c}{(s)} \\ \hline NGC\,6543-MINOR & 2012 Jul. 3 & STIS/FUV-MAMA & G140M & 1222 & 1190--1253 & 0.053 & 4570 \\ & & STIS/FUV-MAMA & G140M & 1550 & 1518--1581 & 0.053 & 1723 \\ & & STIS/CCD & G430L & 4300 & 2652--5950 & 2.746 & 2$\times$127 \\ & & STIS/CCD & G750M & 6581 & 6248--6913 & 0.554 & 2$\times$275 \\ NGC\,6543-MAJOR & 2012 Oct. 21 & STIS/FUV-MAMA & G140M & 1425 & 1190--1253 & 0.053 & 4570 \\ & & STIS/FUV-MAMA & G140M & 1550 & 1518--1581 & 0.053 & 1723 \\ & & STIS/CCD & G430L & 4300 & 2652--5950 & 2.746 & 2$\times$127 \\ & & STIS/CCD & G750M & 6581 & 6248--6913 & 0.554 & 2$\times$275 \\ \hline \end{tabular} \end{center} \end{table*} In this paper, we present \emph{HST} STIS UV and optical spectroscopy of NGC\,6543. In conjunction with the \emph{Chandra} X-ray images, these new spectra are used to successfully determine the location and spatial extent of mixing layer in a PN for the first time. We describe the observations in Section~\ref{sec:2}, and present results and discussion in Section~\ref{sec:3}. The main conclusions are summarized in Section~\ref{sec:4}. 
\begin{figure}[!t] \begin{center} \includegraphics[width=8.4cm,angle=0]{fig1.pdf} \caption{ \emph{HST} (red, purple) and \emph{Chandra} (blue) color-composite image of NGC\,6543. Image adopted from \emph{Chandra} X-ray Center (http://chandra.harvard.edu/photo/2008/catseye/). The positions of the \emph{HST} STIS 52\arcsec$\times$0\farcs2 long slit are marked with white lines. At both PA = 16$\degr$ and 122$\degr$, the slit center is offset by 0\farcs6 from the CSPN. } \label{fig1} \end{center} \end{figure} \section{Observations and Data Analysis} \label{sec:2} \emph{HST} STIS UV and optical spectroscopic observations of NGC\,6543 (PI: M.A.\ Guerrero, GO prop.~ID 12509, Cycle~19) were carried out on 2012 July 3 and 2012 October 21. The observations aimed at detecting and tracing the spatial extent of the interface layer and comparing it with those of the nebular shell and hot bubble. The 52\arcsec$\times$0\farcs2 long slit was placed at a position angle (PA) of 16$\degr$ and 122$\degr$ along the major and minor axes of the inner nebular shell (Figure~\ref{fig1}), respectively. The G140M grating and STIS/FUV-MAMA detector were used to acquire spectra of the N~{\sc v} $\lambda\lambda$1239,1243 and C~{\sc iv} $\lambda\lambda$1548,1551 emission lines. The observations were performed in ACCUM mode. Meanwhile, the G430L and G750M gratings and STIS/CCD detector were used to obtain information on the [O~{\sc iii}], H$\alpha$, and [N~{\sc ii}] lines from the optical nebular shell. The STIS/CCD observations were split into 2 exposures to allow cosmic-ray removal. A summary of the STIS configurations and exposures is given in Table~\ref{table1}. The spectra were reduced and calibrated with the \emph{HST} STIS pipeline. 
\begin{figure}[!t] \begin{center} \includegraphics[width=8.7cm,angle=0]{fig2.pdf} \caption{ Spatial emission profiles of the H$\alpha$, [O~{\sc iii}] $\lambda$5007, and [N~{\sc ii}] $\lambda$6583 emission lines from the optical nebular shell (top panel), and the N~{\sc v} $\lambda$1239 UV resonance line from the interface layer and the \emph{Chandra} X-ray emission from the hot bubble (bottom panel) along the minor axis of NGC\,6543 at PA 122\degr (Figure~\ref{fig1}). The offset is relative to the CSPN. Grey shaded areas mark the positions of the interface layer, as indicated by the N~{\sc v} line. } \label{fig2} \end{center} \end{figure} In order to avoid the bright stellar light, the center of the long slit was offset 0\farcs6 from the CSPN at each slit PA. Despite this offset, noticeable scattered stellar continuum spilt into the innermost regions of the nebular shell, reducing the detection sensitivity for the faint nebular emission within a region of radius $\simeq$1\farcs5 around the CSPN. The 2D STIS spectra were used to extract spatial profiles of emission along the minor axis of the photoionized innermost nebular shell for the H$\alpha$, [O~{\sc iii}] $\lambda$5007, and [N~{\sc ii}] $\lambda$6583 lines (Figure~\ref{fig2}, top) and the collisionally-excited N~{\sc v} $\lambda$1239 UV line (Figure~\ref{fig2}, bottom). The stellar light and nebular continuum were subtracted by carefully selecting spectral regions blue- and red-wards of the target emission lines. Despite this effort, the steep stellar P-Cygni profile of the N~{\sc v} line made it impossible to obtain a clean spatial profile of nebular emission in the innermost 3\arcsec\ region around the CSPN. This inner section of the spatial profile is discarded in our analysis. The spatial profile of X-ray emission, as derived from the \emph{Chandra} observations, is also shown in the bottom panel of Figure~\ref{fig2}. 
The spatial profiles along the major axis of the inner nebular shell of NGC\,6543 (not shown here) reveal similar structures, although they are complicated by projection effects of emission from the blister features. \begin{figure}[!t] \begin{center} \includegraphics[width=8.4cm,angle=0]{fig3.pdf} \caption{ Top panel: Nebular N~{\sc v} $\lambda$1239, C~{\sc iii} $\lambda$1247, and C~{\sc iv} $\lambda$1551 emission lines extracted at the mixing layer (Figure~\ref{fig2}) along the minor axis of NGC\,6543. Wavelengths have been converted to the LSR velocity. The radial velocity of NGC\,6543 \citep[$-$47.5~km\,s$^{-1}$;][]{ms92} is marked by the vertical dotted line. Spectra are binned according to the dispersion (0.053\,{\AA}\,pixel$^{-1}$) of the STIS G140M grating. Fluxes of the C~{\sc iii} and C~{\sc iv} lines are scaled by 0.13 and 0.05, respectively, to allow better comparison with the N~{\sc v} line. The inset shows the 1D spectrum of the mixing layer in 1229--1249\,{\AA}, and the red continuous curve is the P-Cygni profile of the CSPN. Bottom panel: STIS E140H spectrum of NGC\,6543's central star corrected for the LSR velocity showing the absorption features of C~{\sc iv} $\lambda$1551 (black), Si~{\sc iii} $\lambda$1206 (red) and Si~{\sc ii} $\lambda$1527 (blue). Absorptions due to the PN shell and the interstellar gas are marked and their velocities indicated. } \label{fig3} \end{center} \end{figure} We extracted 1D spectra from the 2D STIS UV data along the minor axis. The apertures for spectral extraction were targeted at the location of the interface layer marked by the grey shaded regions in the bottom panel of Figure~\ref{fig2} that have been selected according to the spatial emission profile of the N~{\sc v} line. The spectra extracted at the east and west positions of the interface layer were then combined and corrected for the underlying scattered stellar continuum. 
The profiles of the N~{\sc v} $\lambda$1239, C~{\sc iii} $\lambda$1247, and C~{\sc iv} $\lambda$1551 emission lines are presented in Figure~\ref{fig3}-top. Wavelengths have been corrected for the instrumental and orbital shifts. We then converted wavelengths to velocities by correcting for the local-standard-of-rest (LSR) velocity of the solar system ($v_{\rm LSR}$ = 16~km\,s$^{-1}$) towards the direction of NGC\,6543 ($l$=96\fdg5, $b$=30\fdg0). Archival \emph{HST} STIS echelle spectra of the CSPN of NGC\,6543 (PI: R.E.\ Williams, GO prop.~ID 9736, Cycle~12) were used to complement our data analysis. The stellar spectra were obtained with the E140H grating, which provided a resolution of 114,000\footnote{The STIS Instrument Handbook, URL http://www.stsci.edu/hst/stis/documents/handbooks} ($\sim$3 km\,s$^{-1}$). Three separate settings of STIS/FUV-MAMA were used to cover a wavelength region of 1140--1690 \AA. The 0\farcs2$\times$0\farcs09 slit was placed on the CSPN. A detailed description of the observations is given in \citet{wil08}. \section{Results and Discussion} \label{sec:3} \subsection{Emission Line Profiles} \label{sec:3:a} Figure~\ref{fig3}-top shows the N~{\sc v}, C~{\sc iii} and C~{\sc iv} emission lines detected in our STIS G140M spectrum of NGC\,6543. The N~{\sc v} $\lambda$1239 and C~{\sc iii} $\lambda$1247 lines peak at the radial velocity of NGC\,6543 \citep[$-$47.5 km\,s$^{-1}$;][]{ms92} and both have a full-width at half-maximum (FWHM) $\sim$0.33\,{\AA}. This line width is expected for an extended source filling the STIS slit width (0\farcs2) and actually matches that of the geocoronal Ly$\alpha$ emission line. This spectral resolution is insufficient to resolve the thermal width of N~{\sc v} $\lambda$1239 produced by the 2$\times$10$^{5}$~K gas in the interface layer, which is estimated to have a FWHM $\sim$0.11\,{\AA}. 
Meanwhile, both lines of the C~{\sc iv} $\lambda\lambda$1548,1551 doublet have a FWHM$\sim$0.31\,{\AA}, slightly narrower than N~{\sc v} and C~{\sc iii} (Figure~\ref{fig3}, top). Moreover, its observed line center is redshifted by 0.078\,{\AA} ($\sim$15 km\,s$^{-1}$), compared to the N~{\sc v} and C~{\sc iii} lines. Inspection of our 2D STIS spectrum reveals a narrow absorption bluewards of the C~{\sc iv} emission. A close look at the archival \emph{HST} STIS E140H spectrum of NGC\,6543's central star helps to clarify this issue. The spectra of the Si~{\sc ii} $\lambda$1527, Si~{\sc iii} $\lambda$1206, and C~{\sc iv} $\lambda$1551 lines have absorption features at $-$3, $-$40 and $-$69 km\,s$^{-1}$ (Figure~\ref{fig3}, bottom). The component at $-$3 km\,s$^{-1}$ is saturated in Si~{\sc ii} and Si~{\sc iii} but weak in C~{\sc iv}, whereas the much weaker absorption at $-$40 km\,s$^{-1}$ is only present in Si~{\sc ii} and Si~{\sc iii}. These two absorption features are generally consistent in radial velocity and relative strength with the H~{\sc i} 21~cm emission towards the direction of NGC\,6543 detected in the Leiden/Argentine/Bonn Galactic H~{\sc i} Survey and Effelsberg-Bonn H~{\sc i} Survey \citep{kal05,win16}. Given the similar ionization potentials of H$^{0}$ and Si$^{+}$, these two absorption components can be attributed to neutral or low-excitation ionized interstellar gas along the direction of NGC\,6543. On the other hand, the absorption at $-$69 km\,s$^{-1}$ is most likely produced by the approaching side of the PN shell, as this velocity generally agrees with NGC\,6543's systemic velocity ($v_{\rm LSR}$ = $-$47.5~km\,s$^{-1}$) plus its expansion velocity ($\sim$16 km\,s$^{-1}$ at the inner shell and 28 km\,s$^{-1}$ at the outer shell; \citealt{ms92}). 
Furthermore, this component is weak and unsaturated in Si~{\sc ii}, saturated in Si~{\sc iii}, and heavily saturated in C~{\sc iv}, implying a higher excitation than that of the interstellar gas probed by the H~{\sc i} 21~cm surveys. Our identification of these absorption features generally agrees with the interpretation of \emph{IUE} observations \citep{pwa84}. The blueward absorption reduces the widths of the C~{\sc iv} emission lines and shifts their line centers towards the red. This explains the different emission line profiles of C~{\sc iv} with respect to those of N~{\sc v} and C~{\sc iii} seen in Figure~\ref{fig3}-top. \subsection{Spatial Distribution of Line Emission} \label{sec:3:b} The spatial profiles along the minor axis of NGC\,6543 (Figure~\ref{fig2}) reveal the location of mixing-layer gas originating from very different processes. The brightest emission peaks in H$\alpha$ and [O~{\sc iii}] at $\simeq$3\farcs9 from the CSPN mark the location of the $\sim$10$^4$~K swept-up inner nebular shell. The spatial profiles of the C~{\sc iii} and C~{\sc iv} lines are generally consistent with those of the H$\alpha$ and [O~{\sc iii}] lines associated with the inner shell. On the other hand, the profile of the X-ray emission from the $\gtrsim$10$^6$~K hot gas shows an eastern peak at $\simeq$2\farcs0 and a shoulder of declining emission towards the west. This irregular profile is due to the low count rate, but suggests that the X-ray-emitting gas is confined within a region with radius $\leq$3\arcsec. These new profiles confirm the interpretation of \citet{chu01} that the X-ray-emitting gas is confined within the cool nebular shell. Interestingly, the useful section of the spatial profile of the N~{\sc v} emission peaks at intermediate positions (grey shades in Figure~\ref{fig2}), $\simeq$3\arcsec, between the optical lines from the optical nebular shell and the X-ray emission from the hot bubble. 
The N~{\sc v} ion cannot be produced by photoionization because the effective temperature of the CSPN of NGC\,6543 is only $\simeq$50,000~K; thus it must be produced by thermal collisions at temperatures of $\sim$10$^5$~K, as expected in the mixing layer. The observed spatial profile of the N~{\sc v} $\lambda$1239 emission line can be used to estimate the radius and thickness of the mixing layer. Assuming a constant-emissivity cylindrical shell with radius $R$ and thickness ${\Delta}R$, we derived an outer radius of $\sim$3\farcs7 and a thickness of 27\% of this radius (i.e., 1\farcs0). At a distance of 1.0$\pm$0.3~kpc \citep{reed99}, this implies a thickness of 1.5$\times$10$^{16}$~cm. The H$\alpha$ profile can be fit similarly to derive an outer radius of 4\farcs6 and a thickness of 0\farcs9, making its inner edge coincident with the outer edge of the mixing layer. The estimated thickness of the mixing layer in NGC\,6543 can be compared with our numerical results \citep{ta14}. Our post-AGB model with 0.633~$M_{\odot}$ predicts a mixing layer with a thickness of 1.8$\times10^{16}$~cm by the time the hot bubble reaches a similar average radius as that of NGC\,6543 ($R_{\rm bubble}$ $\lesssim\,0.04$~pc). This is very similar to the measured thickness. However, this thickness only covers 10 cells in our current models, and thus the mixing layer is not sufficiently sampled. New high-resolution simulations are needed to make accurate predictions on the evolution and physical properties (spatial extent, density, and temperature) of the mixing layers in PNe. \subsection{Mixing Layer Electron Density and Pressure} \label{sec:3:c} We estimated the density of the mixing layer by assuming a simple geometry of the N~{\sc v} $\lambda$1239-emitting region: a cylindrical shell with an outer radius of 3\farcs7 and an inner radius of 2\farcs7. 
The intensity of the N~{\sc v} $\lambda$1239 line of the interface layer, as measured from the extracted spectrum, is 6.3$\times$10$^{-14}$ erg~cm$^{-2}$~s$^{-1}$. This line intensity can be expressed as: \begin{equation} I = n_{\rm e} n_{\rm N^{4+}} h\nu \frac{8.629 \times 10^{-6}}{\sqrt{T_{\rm e}}} \frac{\Omega(1,2; T_{\rm e})}{g_{1}} {\rm e}^{-\chi/kT_{\rm e}} \frac{V}{4{\pi}d^{2}}. \label{eq1} \end{equation} Here $n_{\rm e}$ and $n_{\rm N^{4+}}$ are number densities of the electron and the N$^{4+}$ ion, respectively; $h\nu$ is the photon energy (in ergs) of the N~{\sc v} $\lambda$1239 line; $\Omega$(1,2; $T_{\rm e}$) is the Maxwellian-averaged collision strength of the N$^{4+}$ 2s\,$^{2}$S$_{1/2}$ -- 2p\,$^{2}$P$^{\rm o}_{3/2}$ transition, which is derived using the collision strength of N$^{4+}$ 2s\,$^{2}$S -- 2p\,$^{2}$P$^{\rm o}$ calculated by \citet{cm83} and Equation~(3.21) in \citet{of06}; $g_{1}$ is the statistical weight of the lower level (for N$^{4+}$, $g_{1}$ = 2); $\chi$ is the excitation energy of the upper level (in the case of N~{\sc v} $\lambda$1239, $\chi$ = 10.008~eV, which corresponds to 1.603$\times$10$^{-11}$ ergs); $d$ is the distance to NGC\,6543; $V$ is the emitting volume in cm$^{3}$. 
Here $V$ can be derived from the STIS slit width (0\farcs2, corresponding to 3.0$\times$10$^{15}$~cm) and the cylindrical shell of the UV-emitting mixing layer (as assumed at the beginning of this section) using the equation \begin{equation} V = 2\pi\Delta{x} \left( \int_a^{R_1}\sqrt{R_1^2 - r^2}~\mathrm{d}r - \int_{a}^{R_2}\sqrt{R_{2}^{2} - r^{2}}~\mathrm{d}r \right) \label{eq2} \end{equation} where $R_{1}$ and $R_{2}$ are the outer and inner radius of the cylindrical shell, respectively (5.5$\times$10$^{16}$~cm and 4.0$\times$10$^{16}$~cm at 1~kpc), and $\Delta{x}$ is the thickness covered by the STIS long slit (3.0$\times$10$^{15}$~cm); $a$ is the lower limit of the integration, which corresponds to 3.7$\times$10$^{16}$~cm from the CSPN according to the region selected for spectral extraction. The likely inhomogeneity of the 2$\times$10$^5$~K, N~{\sc v} $\lambda$1239-emitting gas due to hydrodynamical instabilities can be accounted for by adding a filling factor, $\epsilon$, to Equation~\ref{eq1}. In the interface layer, the N$^{4+}$/H$^{+}$ ionic abundance ratio was assumed to be close to the nebular nitrogen abundance (N/H), which is 2.30$\times$10$^{-4}$ \citep{ber03}. This nebular abundance is consistent with the stellar wind abundance \citep[2.29$\pm$0.53$\times$10$^{-4}$;][]{geor08}. Combining Equations~\ref{eq1} and \ref{eq2} and by introducing a filling factor $\epsilon$, we deduced an expression of the electron density as a function of temperature, $n_{\rm e}$ = 6.1\,$\epsilon^{-1/2}$\,$T_{\rm e}^{1/4}$\,$\exp$(5.808$\times$10$^{4}$/$T_{\rm e}$). This function shows that when the temperature varies in the range 1$\times$10$^5$--3$\times$10$^5$~K, the electron density is always close to $\sim$180\,$\epsilon^{-1/2}$~cm$^{-3}$ for the interface layer. 
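As a sanity check on Equation~\ref{eq2} and the density relation above, the following Python sketch (not part of the original analysis; it simply re-evaluates the quantities quoted in the text for $d=1$~kpc and $\epsilon=1$) computes the emitting volume by direct quadrature and confirms that $n_{\rm e}$ stays close to $\sim$180~cm$^{-3}$ across 1--3$\times$10$^5$~K:

```python
import math

# Geometry from the text, in units of 1e16 cm (d = 1 kpc assumed):
R1, R2 = 5.5, 4.0   # outer/inner radius of the cylindrical shell
a = 3.7             # lower integration limit of Eq. (2)
dx = 0.3            # thickness covered by the 0.2 arcsec STIS slit (3.0e15 cm)

def seg(R, lo, n=200_000):
    """Midpoint quadrature of int_lo^R sqrt(R^2 - r^2) dr from Eq. (2)."""
    h = (R - lo) / n
    return sum(math.sqrt(max(R * R - (lo + (i + 0.5) * h) ** 2, 0.0)) * h
               for i in range(n))

V = 2 * math.pi * dx * (seg(R1, a) - seg(R2, a)) * 1e48   # emitting volume, cm^3

def n_e(T, eps=1.0):
    """Electron density (cm^-3) from the relation derived in the text."""
    return 6.1 * eps ** -0.5 * T ** 0.25 * math.exp(5.808e4 / T)

print(V)
print([round(n_e(T)) for T in (1e5, 2e5, 3e5)])   # all close to ~180 cm^-3
```

With these numbers the quadrature gives $V \approx 9\times10^{48}$~cm$^{3}$, and the density varies by less than $\sim$10\% over the quoted temperature range.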
This density and the adopted temperature of 2$\times$10$^5$~K imply a thermal pressure of $\sim$2$\times$10$^{-8}$\,$\epsilon^{-1/2}$ dyn~cm$^{-2}$ in the mixing layer, which agrees with the pressure of the hot bubble and the ionized swept-up shell \citep{gru04}, probably implying the filling factor $\epsilon\sim$1. \section{Conclusions} \label{sec:4} We present high-spatial resolution \emph{HST} STIS UV and optical spectroscopy of the Cat's Eye Nebula (NGC\,6543). Our STIS observations enabled the first view of the spatial distribution of the mixing-layer gas. This mixing layer, probed by the collisionally-ionized N~{\sc v} UV emission line, is located exactly between the optical nebular rim and the X-ray-emitting hot bubble as previously detected by \emph{Chandra}. We estimate a thickness of 1.5$\times10^{16}$~cm for the mixing layer, which is consistent with predictions of our 2D radiation-hydrodynamic simulations of the hot bubbles in PNe \citep{ta14}. The estimated electron density and thermal pressure of this layer are found to be $\sim$180\,$\epsilon^{-1/2}$~cm$^{-3}$ and $\sim$2$\times$10$^{-8}$\,$\epsilon^{-1/2}$~dyn\,cm$^{-2}$, respectively, assuming a cylindrical shell of the 2$\times10^{5}$~K, UV-emitting gas. This thermal pressure agrees with that in the hot bubble and ionized nebular rim of NGC\,6543, suggesting hydrodynamical equilibrium. New higher-resolution radiation-hydrodynamic numerical simulations will be carried out to investigate the evolution and properties of the mixing layer in young PNe (Toal\'{a} \& Arthur, in preparation). It is worth mentioning that the physical configuration of the mixing layer in PNe is also expected to occur within wind-blown bubbles that exhibit diffuse, soft X-ray emission such as the Orion Nebula, Wolf-Rayet bubbles, and superbubbles \citep[e.g.,][]{Gudel2008,Jaskot2011,ruiz13,Toala2012}. 
Future UV observations towards other PNe and wind-blown bubbles, such as the Wolf-Rayet bubble NGC\,6888, will help us understand and unveil the physics of the mixing layer and its relation to the existence of the diffuse X-ray-emitting gas in hot bubbles. \acknowledgments Support for the \emph{Hubble Space Telescope} Cycle 20 General Observer Program 12509 was provided by NASA through grant HST-GO-12509.01-A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS\,5-26555. X.F., M.A.G., and J.A.T.\ are partially funded by grant AYA~2011-29754-C03-02 of the Spanish MEC (Ministerio de Econom\'\i a y Competitividad) cofunded with FEDER funds. We thank Dr. Yong Zhang at the University of Hong Kong for discussion. We also thank the anonymous referee, whose comments helped to enhance this paper. This paper utilizes the image from the \emph{Chandra} X-ray Center (http://chandra.harvard.edu/), which was operated for NASA by the Smithsonian Astrophysical Observatory and developed with funding from NASA under contract NAS8-03060. This research has made use of NASA's Astrophysics Data System (http://adsabs.harvard.edu) and the Mikulski Archive for Space Telescope (MAST, https://archive.stsci.edu/). \\ {\it Facilities:} \facility{\emph{HST} (STIS)}.
\section{Introduction} \label{sec:Introduction} DNS is an integral part of the Internet infrastructure. Unfortunately, it does not offer privacy, i.\,e., the so-called resolvers (recursive nameservers) can see all queries sent to them in the clear. Resolvers can learn about users' habits and interests, which may infringe their privacy if the resolver is not run by a trusted party, but by a third party such as Google, whose resolver 8.8.8.8 serves more than 130 billion queries per day on average \cite{googledns}. The discussions about limiting tracking via cookies spurred by the ``Do not track'' initiative may result in DNS queries becoming the next target for tracking and profiling purposes \cite{Conrad12-dnssecurity}. According to \cite{HBF:2013}, behavior-based tracking based on DNS queries may be feasible. Integrating mechanisms for confidentiality into DNS is difficult because of the need for compatibility with existing infrastructure. Fundamental changes to the protocol are implemented very slowly, as previous attempts have shown: Although the initial DNSSEC security extensions were proposed in 1999 \cite{rfc2535}, the majority of users still cannot profit from their benefits today. Unfortunately, DNSSEC does not address privacy issues due to an explicit design decision \cite{rfc4033}. Currently, there is no indication that facilities for privacy-preserving resolution will be integrated into the DNS architecture in the short term. Previous research efforts have focused on interim solutions, i.\,e., add-ons and tools that enable users who care for privacy to protect themselves against profiling and tracking efforts. The objective consists in designing and evaluating suitable privacy enhancing techniques in such a way that users do not have to rely on or trust the existing DNS infrastructure. The ``range query'' scheme by Zhao et al. \cite{Zhao:2007a} is one of those efforts. 
The basic idea consists in \emph{query obfuscation}, i.\,e., sending a set of dummy queries (hence the term ``range'') with random hostnames along with the actual DNS query to the resolver. So far the security of range query schemes has only been analyzed within a simplistic theoretical model that considers the obtainable security for \emph{singular queries}. In this paper we study the security offered by range queries for a more complex real-world application, namely \emph{web surfing}, which is one of the use cases Zhao et al. envision in \cite{Zhao:2007a}. In contrast to singular queries, downloading websites typically entails a number of inter-related DNS queries. Our results indicate that the range query scheme offers less protection than expected in this scenario, because dependencies between consecutive queries are neglected. The main contribution of this paper is to \textbf{demonstrate that random set range queries offer considerably less protection than expected in the web surfing use case}. We demonstrate that a curious resolver (the adversary) can launch a semantic intersection attack to disclose the actually retrieved website with high probability. We also show how the effectiveness of the attack can be reduced, and we identify a number of challenges that have to be addressed before range query schemes are suitable for practice. The paper is structured as follows. In Sects.~2 and 3 we review existing work and fundamentals. Having described our dataset in Sect.~4, we continue with theoretical and empirical analyses in Sects.~5 and 6. We study countermeasures in Sect.~7 and discuss our results in Sect.~8. We conclude in Sect.~9. \section{Related Work} \label{sec:relatedWork} The basic DNS range query scheme was introduced by Zhao et al. in \cite{Zhao:2007a}; there is also an improved version \cite{Zhao:2007b} inspired by private information retrieval \cite{Chor:1995}. 
Although the authors suggest their schemes especially for web surfing applications, they fail to demonstrate their practicability using empirical results. Castillo-Perez and Garcia-Alfaro propose a variation of the original range query scheme \cite{Zhao:2007a} using multiple DNS resolvers in parallel \cite{Castillo-Perez:2008,Castillo-Perez:2009}. They evaluate its performance for ENUM and ONS, two protocols that store data within the DNS infrastructure. Finally, Lu and Tsudik propose PPDNS \cite{Lu:2010}, a privacy-preserving resolution service that relies on CoDoNs \cite{RamasubramanianS04-codons}, a next-generation DNS system based on distributed hashtables and a peer-to-peer infrastructure, which has not been widely adopted so far. The aforementioned publications study the security of range queries for singular queries issued independently from each other. In contrast, \cite{FederrathFHP11-dnsmixes} observes that consecutively issued queries that are dependent on each other have implications for security. They describe a timing attack that allows an adversary to determine the actually desired website and show that consecutive queries have to be serialized in order to prevent the attack. \section{Fundamentals} \label{sec:fundamentals} \subsection{Random Set DNS Range Query Scheme} \label{sec:dnsrq} In this paper we focus on the basic ``random set'' DNS range query scheme as introduced in \cite{Zhao:2007a}. Zhao et al. stipulate that each client is equipped with a large database of valid domain names \textbf{(dummy database)}. Each time the client wants to issue a DNS query to a resolver, it randomly draws (without replacement) $N-1$ \emph{dummy names} from the database, and sends $N$ queries to the resolver in total. When all replies have been received from the resolver, the replies for the dummy queries are discarded and the desired reply is presented to the application that issued the query. Zhao et al. 
claim that this strategy leaves the adversary with a chance of $\frac{1}{N}$ to guess the desired domain name. The value of $N$ is a security parameter, which is supposed to be chosen according to the user's privacy expectations and performance needs. \subsection{Query Patterns} \label{sec:patterns} The semantic intersection attack exploits the fact that typical websites embed content from multiple servers, causing clients to issue a burst of queries for various domain names in a deterministic fashion, whenever they visit the site. For example, visiting \emph{google.com} will also trigger a DNS request for \emph{ssl.gstatic.com}, as the site includes some resources from that domain. We call the set of domain names that can be observed upon visiting a site its \emph{query pattern} $p$, i.\,e., $p(\mathrm{google.com}) = \{\mathrm{google.com},\mathrm{ssl.gstatic.com}\}$. In Sect.~\ref{sec:dataset}, we will show that many popular websites do have query patterns that can be used for this attack. Using range queries, each individual query from a pattern $p$ is hidden in a set of $N-1$ randomly chosen queries, leading to $|p|$ sets, each containing $N$ queries, being sent to the resolver in order to retrieve all the domain names required to visit the corresponding website. We refer to $N$ as the \emph{block size} of the range query scheme and to each individual range query as a \emph{block}. Note that the client uses standard DNS queries to deliver the range query, because it uses a conventional DNS resolver, i.\,e., a single range query with a block size of $N$ causes $N$ individual DNS queries. \subsection{The Semantic Intersection Attack} \label{sec:attack} An adversary, who is in possession of a database that contains the query patterns for a set of websites he is interested in \textbf{(pattern database)}, can check whether one of these patterns can be matched to consecutive query blocks received by the client. 
As all the dummy names are drawn independently from each other from the dummy database, it is quite unlikely that the client will draw the pattern of a different website by chance. Therefore, the adversary can be optimistic that he will only find a single pattern in the set of consecutive range queries he receives from the client, i.\,e., the pattern of the actually desired website. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{pictures/modes} \caption{Distinguishability of blocks for the resolver} \label{fig:attack:1} \end{figure} From the viewpoint of the adversary there are two different scenarios, depending on how well the adversary can distinguish consecutive blocks (cf. Fig.~\ref{fig:attack:1}). The adversary may either be able to identify all the queries that belong to the first block, but be unable to determine which of the remaining queries belongs to which of the remaining blocks (\textbf{1BD}, 1st block distinguishable), or be able to distinguish all individual blocks, i.\,e., be able to determine for all queries to which block they belong (\textbf{ABD}, all blocks distinguishable). The difference between the 1BD and the ABD scenario becomes evident by considering the following example. When a user visits the site \emph{\url{http://www.rapecrisis.org.uk}}, her browser will issue a query for \emph{www.rapecrisis.org.uk}. Moreover, it will issue two additional queries, for \emph{twitter.com} and \emph{www.rapecrisislondon.org}, once the HTML page has been parsed. For illustrative purposes we assume that range queries with $N=3$ are used. In the \textbf{ABD scenario} the adversary might, for instance, observe a first block of queries for (\emph{cnn.com}, \emph{www.rapecrisis.org.uk}, \emph{img.feedpress.it}), then a second block for (\emph{github.com}, \emph{twitter.com}, \emph{s.ebay.de}), and finally a third block for (\emph{www.rapecrisislondon.org}, \emph{ytimg.com}, \emph{conn.skype.com}). 
In contrast, in the \textbf{1BD scenario} the adversary might observe a first block with (\emph{cnn.com}, \emph{www.rapecrisis.org.uk}, \emph{img.feedpress.it}) and a second block with (\emph{github.com}, \emph{twitter.com}, \emph{www.rapecrisislondon.org}, \emph{s.ebay.de}, \emph{ytimg.com}, \emph{conn.skype.com}). The first block is distinguishable in both scenarios, because the web browser has to resolve the \emph{primary domain name} in order to learn the IP address of the main web server. This IP address is received within the replies that belong to the first block of queries. After the browser has downloaded the HTML file from the main web server, it will issue queries for the \emph{secondary domain names} in order to retrieve all embedded content hosted on other web servers. Given a \emph{pattern database DB} that contains primary and secondary domain names of websites, the adversary proceeds as follows in order to carry out the intersection attack in the \textbf{ABD scenario}: \begin{enumerate} \item From \emph{DB} the adversary selects all patterns, whose primary domain name is contained in the first block, obtaining the set of candidates $C$. \item The adversary selects all patterns with length $|p|$, which is the number of observed blocks, from $C$ to obtain $C_{|p|}$. \item For each pattern $q$ in $C_{|p|}$ the adversary performs a \emph{block-wise set intersection}: $q$ is a \emph{matching pattern}, if all of its domain names are dispersed among the blocks in a plausible fashion, i.\,e., iff \begin{enumerate} \item each block contains at least 1 element from $q$, and \item each element of $q$ is contained in at least 1 block, and \item $q$ can be completely assembled by drawing one element from each block. 
\end{enumerate} \end{enumerate} In the \textbf{1BD scenario} the adversary has to use a different approach, because there are only two blocks observable: \begin{enumerate} \item From the pattern database the adversary selects all patterns, whose primary domain name is contained in the first block, thus obtaining the set of candidate patterns $C$. \item For each pattern $q$ in $C$ the adversary performs a \emph{block-wise set intersection}: $q$ is a \emph{matching pattern}, if all of its secondary domain names are contained within the second block. \end{enumerate} Note that due to caching, the adversary cannot reliably determine $|p|$ in the 1BD scenario. Due to variations in the lookup time of different domain names, the stub resolver on the client may already receive replies (and cache the results) for some domain names before all range queries have been submitted to the resolver. However, if the range query client happens to draw one of the cached domain names as a dummy, the stub resolver will not send another query, but answer it immediately from its cache. As a result, some queries will not reach the adversary and the effective size of consecutive blocks will vary. Therefore, the adversary cannot easily determine $|p|$ in the 1BD scenario in order to filter the set $C$. For now, we neglect the fact that caching may also affect the desired queries (cf. Sect.~\ref{sec:discussion} for a discussion of this issue). In the remainder of the paper we focus on the \textbf{1BD scenario}, which we deem to be more realistic than the ABD scenario. Contemporary web browsers issue the queries for the secondary queries in parallel. Thus, when the range query client constructs range queries for each of the desired domain names, the individual queries of all the blocks will be interleaved, causing uncertainty about the composition of the individual blocks. 
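For concreteness, the 1BD matching procedure can be sketched in a few lines of Python. The pattern and dummy databases below are toy examples built from the illustration above (a real client and adversary would use large databases), and the client side simply pads each query with $N-1$ random dummies as described in Sect.~\ref{sec:dnsrq}:

```python
import random

# Toy pattern database (primary domain -> secondary domains), for illustration.
DB = {
    "www.rapecrisis.org.uk": {"twitter.com", "www.rapecrisislondon.org"},
    "google.com": {"ssl.gstatic.com"},
    "cnn.com": {"img.feedpress.it", "ytimg.com"},
}
# Toy dummy database; a real client would hold a large list of valid names.
DUMMIES = ["github.com", "s.ebay.de", "conn.skype.com", "img.feedpress.it",
           "ytimg.com", "cnn.com", "example.org", "example.net"]

def client_blocks(primary, N=3):
    """Client side: one range-query block per pattern element, each padded
    with N-1 dummy names drawn at random from the dummy database."""
    return [{name, *random.sample(DUMMIES, N - 1)}
            for name in (primary, *DB[primary])]

def match_1bd(blocks):
    """Adversary side (1BD): the first block is distinguishable, the
    remaining blocks are only observed as their union."""
    first, rest = blocks[0], set().union(*blocks[1:])
    return [p for p in DB if p in first and DB[p] <= rest]

print(match_1bd(client_blocks("www.rapecrisis.org.uk")))
# always contains the visited site; occasionally a random pattern as well
```

With $N=3$ and such a small dummy database, ambiguous matches do occur from time to time; this is exactly the effect quantified in Sect.~\ref{sec:analysis}.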
On the other hand, the ABD scenario is relevant for range query schemes that submit all queries contained in a block in a single message. We will consider the effect of this approach in Sect.~\ref{sec:evaluation:ex3}. \section{Dataset} \label{sec:dataset} In order to evaluate the feasibility of the semantic intersection attack, we performed probabilistic analyses and implemented a simulator that applies the attack to the patterns of actual websites. For this purpose we obtained the query patterns of the top $100{,}000$ websites of the ``Alexa Toplist'' (\url{http://www.alexa.com}) with the headless Webkit-based browser PhantomJS (\url{http://phantomjs.org}).\footnote{\label{fnote:github}The source code of our crawler and simulator as well as all experimental data is available at \url{https://github.com/Semantic-IA}} As PhantomJS was not able to reach and retrieve all of the websites contained in the Toplist at the time of the data collection (May 2013), the cleaned dataset contains $|P|=92{,}880$ patterns with $|Q|=216{,}925$ unique queries. The average pattern length (\textit{mean value}) is $13.02$ with a standard deviation of $14.28$. The distribution of pattern lengths as displayed in Fig.~\ref{fig:patternlengths} shows that, while patterns of length 1 are frequent, patterns of higher lengths make up the majority of the dataset. The longest pattern consists of $315$ queries.
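The quoted length statistics follow directly from the crawled patterns. A minimal sketch (our own code; `patterns` is assumed to map each website to the list of domain names it queries):

```python
from statistics import mean, pstdev

def pattern_length_stats(patterns):
    """Mean, (population) standard deviation and maximum of the pattern lengths."""
    lengths = [len(queries) for queries in patterns.values()]
    return mean(lengths), pstdev(lengths), max(lengths)
```

Whether the reported standard deviation of $14.28$ is a population or sample value is not stated in the text; the sketch uses the population form.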
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{data-lengths/histogram} \hfill \includegraphics[width=0.48\textwidth]{data-lengths/cdf} \caption{Histogram and cumulative distribution of pattern lengths} \label{fig:patternlengths} \end{figure} \section{Probabilistic Analysis} \label{sec:analysis} Before we carry out any practical evaluation using our simulator, we want to get an expectation of the likelihood of \textbf{ambiguous results}, which occur if the client happens to draw all the domain names of another website from the dummy database while the range queries needed for the desired website are assembled. If the client draws all domain names of a different pattern by chance and distributes the individual names among the blocks in a plausible fashion, the adversary will observe two patterns: the pattern of the actually desired website as well as the \textbf{random pattern}. \subsection{Modeling the Probability of Ambiguous Results} \label{sec:themodel} In the 1BD scenario an ambiguous result occurs if the primary domain name of a random pattern (the domain name of the corresponding website) is selected as a dummy in the first block, and all remaining elements of the pattern are contained in the union of the remaining blocks.\footnote{In the 1BD scenario the query distribution between the remaining blocks is irrelevant, as long as all needed queries occur at least once in the union of the blocks.} The probability for an ambiguous result can be modeled as a series of hypergeometric distributions. 
A hypergeometric distribution $h(k|N;M;n)$ describes the probability of drawing $k$ elements with a specific property when drawing $n$ elements out of a group of $N$ elements, of which $M$ have the desired property: \begin{equation} \label{eq:analysis:1} h(k|N;M;n) := \frac{{M \choose k}{N-M \choose n-k}}{{N \choose n}} \end{equation} First, we need to obtain the probability of drawing the first element of a pattern of the correct length $n$ into the first block of queries. As the variables of the hypergeometric distribution overlap with those we use to describe the properties of a range query, we substitute them for their equivalents in our range query notation. $N$ is equal to $|Q|$, the number of names in the dummy database. $M$ equals the number of patterns of the correct length, which we will write as $|P_n|$. In our case, the parameter $n$ of the hypergeometric distribution corresponds to $N-1$, as we will draw $N-1$ dummy names into the first block. By substituting these values into Eq.~\ref{eq:analysis:1}, we obtain the probability $p(n, k)$ of drawing exactly $k$ beginnings of patterns of length $n$: \begin{equation} \label{eq:analysis:2} p(n, k) := \frac{{|P_n| \choose k}{|Q|-|P_n| \choose (N-1)-k}}{{|Q| \choose N-1}} \end{equation} In addition, we need to determine the probability of drawing the remaining $k*(n-1)$ queries into the second block, which contains the remaining $(n-1)*(N-1)$ randomly drawn dummy names in the 1BD scenario. To complete our $k$ patterns, we need to draw $k*(n-1)$ specific dummy names. The probability of success is described by the function $q(n,k)$, which is given in Eq.~\ref{eq:analysis:3}.
\begin{equation} \label{eq:analysis:3} q(n, k) := \frac{{n-1 \choose n-1}^k {|Q|-(n-1)*k \choose (n-1)*(N-1)-(n-1)*k}}{{|Q| \choose (n-1)*(N-1)}} = \frac{{|Q|-(n-1)*k \choose (n-1)*(N-1)-(n-1)*k}}{{|Q| \choose (n-1)*(N-1)}} \end{equation} The two probabilities $p(n,k)$ and $q(n,k)$ can now be combined to obtain the probability of drawing $k$ complete patterns of the correct length $n$: \begin{equation} \label{eq:analysis:4} P(n, k) := p(n,k)*q(n,k) \end{equation} In this context, the expected value of $P(n,k)$ for different values of $n$ is of interest, as it describes the average number of patterns we expect to see. The expected value, in general, is defined as: \begin{equation} \label{eq:analysis:5} E(X) := \sum\limits_{i \in I}(x_i p_i) \end{equation} In our case, $x_i$ is $k$, as it describes the number of patterns, and $p_i$ equals $P(n,k)$ as the probability of drawing $k$ patterns, i.\,e., the expected value is \begin{equation} \label{eq:analysis:6} E(n) := 1 + \sum\limits_{k=1}^{N-1} (P(n,k)*k) \end{equation} We are adding 1 to the result, as the original pattern will always be present. Equation~\ref{eq:analysis:6} will only calculate the expected value for patterns of a specific length. However, as the adversary does not know the length of the pattern with certainty in the 1BD scenario, we have to consider patterns of any length. For that, we have to use a modified variant of Eq.~\ref{eq:analysis:3}: \begin{equation} \label{eq:analysis:7} q(n, k, M) := \frac{{|Q|-(n-1)*k \choose (M-1)*(N-1)-(n-1)*k}}{{|Q| \choose (M-1)*(N-1)}} \end{equation} In Eq.~\ref{eq:analysis:7}, $n$ is the length of the random pattern, while $M$ is the length of the original pattern.
Accordingly, we modify Eq.~\ref{eq:analysis:4} and Eq.~\ref{eq:analysis:6}: \begin{equation} \label{eq:analysis:8} P(n, k, M) := p(n,k)*q(n,k,M) \end{equation} \begin{equation} \label{eq:analysis:9} E(M) := 1 + \sum\limits_{n=1}^M\sum\limits_{k=1}^{N-1} (P(n,k,M)*k) \end{equation} Finally, to determine the expected mean value of the number of detected patterns given a specific block size $N$, we calculate \begin{equation} \label{eq:analysis:10} F(N) = \frac{1}{|P|}*\sum\limits_{M=1}^{L}(E(M)*|P_M|) \end{equation} where $L$ is the length of the longest pattern, and $|P_M|$ the number of patterns having length $M$. \subsection{Analytical Result} \label{sec:analyticalresult} \begin{table}[t] \centering \caption{Expected avg. number of detected patterns $F(N)$ for varying block sizes $N$} \ \begin{tabular*}{0.4\textwidth}{@{\extracolsep{\fill}}rrrr} \toprule $N$ & $10$ & $50$ & $100$ \\ \midrule $F(N)$ & $1.35$ & $2.93$ & $4.83$\\ \bottomrule \end{tabular*} \label{tab:analysis:1} \end{table} The results (cf. Table~\ref{tab:analysis:1}) indicate that an adversary will, on average, detect only very few random patterns. As expected, the privacy expectation for singular queries ($\frac{1}{N}$) does not apply to the web surfing scenario. Note that for reasons of conciseness we have provided a slightly simplified model, which disregards overlaps between patterns. Actually, the adversary must expect to find a slightly \emph{higher} number of patterns, because a domain name that is contained within multiple patterns only has to be drawn once to be detected as part of all patterns. Nevertheless, the analysis is instructive and provides us with a baseline for the empirical evaluations that we will describe in the following. 
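The model of Eqs.~\ref{eq:analysis:2} and \ref{eq:analysis:7}--\ref{eq:analysis:10} can be evaluated directly with exact binomial coefficients. The following is our own sketch, not the authors' simulator; `P_by_len` maps a pattern length $n$ to the count $|P_n|$, and the code assumes the dummy database is larger than the total number of dummies drawn:

```python
from math import comb

def p(n, k, Q, P_by_len, N):
    # Eq. (2): exactly k primary names of length-n patterns among the
    # N-1 dummies of the first block (hypergeometric probability)
    Pn = P_by_len.get(n, 0)
    return comb(Pn, k) * comb(Q - Pn, (N - 1) - k) / comb(Q, N - 1)

def q(n, k, M, Q, N):
    # Eq. (7): all k*(n-1) secondary names fall among the (M-1)*(N-1)
    # dummies drawn into the later blocks
    draws, needed = (M - 1) * (N - 1), (n - 1) * k
    if needed > draws:
        return 0.0
    return comb(Q - needed, draws - needed) / comb(Q, draws)

def E(M, Q, P_by_len, N):
    # Eq. (9): expected number of matching patterns, i.e. the original
    # pattern plus random patterns of any length n <= M
    return 1 + sum(p(n, k, Q, P_by_len, N) * q(n, k, M, Q, N) * k
                   for n in range(1, M + 1) for k in range(1, N))

def F(N, Q, P_by_len):
    # Eq. (10): average over the length distribution of the pattern database
    total = sum(P_by_len.values())
    return sum(E(M, Q, P_by_len, N) * PM for M, PM in P_by_len.items()) / total
```

Running `F` with the empirical length distribution of the dataset and $Q=216{,}925$ should reproduce values of the kind shown in Table~\ref{tab:analysis:1}, up to the simplification regarding overlapping patterns noted above.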
\section{Evaluation} \label{sec:evaluation} In order to evaluate the effectiveness of the semantic intersection attack in a realistic scenario, we developed a simulator that enables us to efficiently test different attack strategies and various assumptions about the knowledge of the adversary. In the following we present results for the 1BD scenario. \paragraph{Methodology} Given a dataset the simulator will generate range queries for all the patterns from the dataset and perform the semantic intersection attack. We are interested in the influence of two factors on the effectiveness of the attack, namely the \emph{block size} $N$, and the \emph{size of the dummy database} $|Q|$ that contains the dummy names. If the range query scheme were to be used in practice, these two factors could be easily influenced by the user. Thus, it is worthwhile to analyze their effect on the attainable privacy. In the following, we will use the metric of \textbf{$k$-identifiability}, which is derived from the well-known metric $k$-anonymity \cite{Sweene02-kanonymity}: A set of consecutively observed range queries is said to be $k$-identifiable, if the adversary finds \emph{exactly} $k$ matching patterns of websites in his pattern database. For conciseness we will show the cumulative distribution of the fraction of $k$-identifiable patterns, i.\,e., the fraction of patterns that are $k$-identifiable or less than $k$-identifiable. \subsection{Results of Experiment 1: Variation of Block Size} \label{sec:evaluation:ex1} For the purpose of this analysis, we consider three different block sizes: $N=10$, $N=50$, and $N=100$. \cite{FederrathFHP11-dnsmixes} has shown that the median latency exceeds 1200\,ms for a block size of $N=100$, rendering larger block sizes impractical. Based on the result of Sect.
\ref{sec:analysis}, we expect to receive some, but not many ambiguous results, i.\,e., instances where the whole pattern of a different website appears in a set of consecutively observed range queries by chance. Intuitively, the larger the block size, the more random patterns will occur. Accordingly, we expect the effectiveness of the attack to degrade with increasing block sizes. \afterpage{% \clearpage\clearpage \begin{table}[t] \centering \caption{Results for varying block sizes $N$ given the whole dummy database} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}rrrrrr} \toprule $N$ & $S$ & 1-identifiable & $\leq 5$-identifiable & median(k) & max(k) \\ \midrule $10$ & $216{,}925$ & $62\,\%$ & $99\,\%$ & $1$ & $6$ \\ $50$ & $216{,}925$ & $8\,\%$ & $88\,\%$ & $3$ & $14$ \\ $100$ & $216{,}925$ & $1\,\%$ & $43\,\%$ & $6$ & $18$ \\ \bottomrule \end{tabular*} \label{tab:evaluation:ex1:1} \end{table} \begin{figure}[h] \centering \includegraphics{pictures/M2-VarN} \caption{Distribution of $k$-identifiability for varying block sizes $N$ (whole database)} \label{fig:evaluation:ex1:1} \end{figure} } As can be seen in Table~\ref{tab:evaluation:ex1:1} and Fig.~\ref{fig:evaluation:ex1:1}, the smallest block size provides little privacy, with $62\,\%$ of patterns being 1-identifiable. Consequently, the median of the observed $k$-identifiability values is $1$. $99\,\%$ of patterns are 5-identifiable or better. No pattern is more than 6-identifiable. For a larger block size of $N=50$, only $8\,\%$ of patterns are 1-identifiable, but the cumulative distribution quickly approaches $100\,\%$. All patterns are 14-identifiable or less, and the median of all observed $k$-identifiability values is $3$, i.\,e., for $50\,\%$ of the websites the adversary can narrow down the actually desired site to a set of 3 or fewer sites, which is far smaller than the anonymity set of size $50$ suggested by the baseline probability of $\frac{1}{50}$ for finding the desired domain name in the first block.
As expected, $N=100$ provides the most privacy: only $0.8\,\%$ of patterns are 1-identifiable, but still $43\,\%$ of patterns are at most 5-identifiable. Generally, we can observe diminishing returns when the block size is increased. While the increase from $N=10$ to $50$ reduces the fraction of 1-identifiable patterns by $54$ percentage points, adding another 50 queries per block only decreases the fraction by $7.2$ percentage points. The same is true for the maximum $k$-identifiability, which increases by eight and four, respectively. Overall, the results indicate that range queries provide far less privacy than suggested by Zhao et al. in the web surfing scenario. \paragraph{1BD-improved} We also considered an improved attack algorithm that guesses the length of the desired patterns based on the total number of observed queries in the second block, resulting in a range of possible pattern lengths. This allows the adversary to reject all patterns that do not fall into this range. As a result, $80\,\%$ ($N=100$) and $94\,\%$ ($N=10$) of all patterns are 1-identifiable. Due to space constraints, we are unable to adequately cover the calculations to estimate the length in this paper, but we have released an implementation including the relevant documentation in the source code repository (see \Cref{fnote:github}).
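The summary statistics used throughout this section (fraction of 1-identifiable patterns, cumulative fractions, median and maximum $k$) can be tabulated from the per-pattern $k$ values that a simulation run produces. A short sketch (our own code; names are ours):

```python
from collections import Counter
from statistics import median

def k_identifiability_stats(ks):
    """Aggregate per-pattern k-identifiability values into the
    quantities reported in the result tables."""
    counts = Counter(ks)
    total = len(ks)
    return {
        "frac_1_identifiable": counts.get(1, 0) / total,
        "frac_le_5_identifiable": sum(counts[k] for k in range(1, 6)) / total,
        "median_k": median(ks),
        "max_k": max(ks),
    }
```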
\subsection{Results of Experiment 2: Variation of Dummy Database} \afterpage{% \clearpage\clearpage \begin{table}[t] \centering \caption{Results for varying dummy database sizes $S$ given the block size $N=50$} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}rrrrrr} \toprule $N$ & $S$ & 1-identifiable & $\leq 5$-identifiable & median(k) & max(k) \\ \midrule $50$ & $2{,}000$ & $19\,\%$ & $92\,\%$ & $3$ & $14$ \\ $50$ & $20{,}000$ & $16\,\%$ & $95\,\%$ & $3$ & $11$ \\ $50$ & $200{,}000$ & $9\,\%$ & $88\,\%$ & $3$ & $13$ \\ \bottomrule \end{tabular*} \label{tab:evaluation:ex2:1} \end{table} \begin{figure}[h] \centering \includegraphics{pictures/M2-VarS} \caption{Distribution of $k$-identifiability for varying dummy database sizes $S$; $N=50$} \label{fig:evaluation:ex2:1} \end{figure} } Generating and maintaining a dummy database is a non-trivial task for the client, which gets harder the larger the database is supposed to be. Accordingly, the influence of the size of the dummy database is of interest. We assume that the client's dummy database is always a subset of the pattern database of the adversary, because, in general, the adversary will have access to more resources than the client, and collecting patterns scales very well. We compare the effectiveness of three different database sizes ($S=2{,}000$, $20{,}000$ and $200{,}000$). The domain names are chosen by drawing patterns from the full pattern database (without replacement) and adding all domain names of each pattern to the dummy database. This process continues until exactly $S$ unique domain names have been found. We select full patterns to increase the chance that the client randomly chooses a full pattern when drawing dummies. We used a fixed block size of $N=50$ for this experiment. Fig.~\ref{fig:evaluation:ex2:1} shows that the differences are quite small overall.
Thus, the biggest effect of varying the database is the change in the percentage of 1-identifiable patterns: The percentage of 1-identifiable patterns drops by $3$ percentage points when the dummy database size is increased from $S=2{,}000$ to $S=20{,}000$, and by another $7$ percentage points on the second increase to $S=200{,}000$. The observed changes have a much smaller effect than the variation of the block size; however, regardless of these results, a larger database is always desirable to prevent other attacks, such as the enumeration of the client's database. \subsection{Effect of Pattern Length on Site Identifiability} Now that we know the effect of varying the block size, the composition of the different $k$-identifiabilities is of interest. With this information, we can determine \textbf{whether websites with longer or shorter patterns are at greater risk} of being identified. Intuitively, shorter patterns should generally have lower $k$-identifiabilities, as comparatively few dummies are drawn to obfuscate them, decreasing the chance of drawing a whole pattern. Conversely, longer patterns should generally achieve higher $k$-identifiabilities, as they use a higher number of dummy domain names. We will now test this hypothesis by analyzing the composition of the different $k$-identifiabilities, using the results of our simulation with a block size of $N=50$ and the full dummy database ($S=216{,}925$).
\begin{table}[t] \centering \caption{Number of patterns $n_k$, mean length $\overline{|p|}$ and standard deviation $\mathrm{SD}$ aggregated by resulting $k$-identifiability ($N=50$, $S=216{,}925$)} \label{tab:evaluation:reasons:1} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}rrrrrrrrrrr} \toprule $k$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $\geq10$ \\ \midrule $n_k$ & $7{,}693$ & $18{,}790$ & $23{,}184$ & $19{,}784$ & $12{,}497$ & $6{,}532$ & $2{,}875$ & $1{,}077$ & $336$ & $121$ \\ $\overline{|p|}$ & $10.59$ & $11.43$ & $12.52$ & $13.54$ & $14.43$ & $15.45$ & $16.22$ & $17.65$ & $17.09$ & $19.47$ \\ $\mathrm{SD}$ & $12.16$ & $13.24$ & $13.65$ & $14.55$ & $15.02$ & $16.14$ & $16.65$ & $17.71$ & $15.35$ & $19.68$ \\ \bottomrule \end{tabular*} \end{table} As can be seen in Table~\ref{tab:evaluation:reasons:1}, the mean pattern length rises almost linearly with increasing $k$-identifiability, which supports our hypothesis. The standard deviation exhibits a similar behavior, albeit with a slightly lower and less uniform growth rate. We could reproduce this result for other block and database sizes. The correlation is more distinct for larger block sizes. Smaller block sizes do not show this behavior as clearly, as the range of $k$-identifiabilities is too small to show any distinct trend. \subsection{Results of Experiment 3: ABD Scenario} \label{sec:evaluation:ex3} So far, we concentrated on the 1BD scenario (cf. Sect.~\ref{sec:attack}). We will now consider the ABD scenario by repeating the experiment from Sect.~\ref{sec:evaluation:ex1}, simulating an adversary that can distinguish individual blocks: In the ABD scenario the adversary is able to 1-identify between $87\,\%$ ($N=100$) and $97\,\%$ ($N=10$) of all domain names, vastly improving on the results of 1BD ($1\,\%$ and $62\,\%$, respectively). 
The increased accuracy is due to two effects: Firstly, in the ABD scenario the adversary can derive $|p|$, the length of the obfuscated pattern, and filter the set of candidate patterns accordingly (cf. Sect.~\ref{sec:attack}). Secondly, the probability that another matching pattern is drawn from the dummy database by chance is much smaller when it has to meet the three ABD conditions. The contribution of these two effects to the overall effectiveness obtained for ABD can be analyzed by reviewing the results obtained for the baseline (1BD) in comparison to 1BD-improved (cf. Sect.~\ref{sec:evaluation:ex1}) and ABD: The results for 1BD-improved, which filters candidate patterns using a vague estimation of $|p|$, already show a significant increase: For $N=50$ the fraction of 1-identifiable sites is $83\,\%$ for 1BD-improved, while it is only $8\,\%$ for 1BD. On the other hand, the fraction of 1-identifiable websites obtained for ABD, where matching patterns have to meet the additional conditions and the exact value of $|p|$ is known, rises only by another 6 percentage points (reaching $89\,\%$) compared to 1BD-improved. While this sort of analysis cannot conclusively prove that the effect of filtering by length is larger than the effect of filtering via the ABD conditions, we note that the additional benefit of these conditions is comparatively small when the adversary can estimate the length of the obfuscated pattern. This result indicates that range query schemes that are supposed to provide privacy in a web surfing scenario have to be devised and implemented in a way that the adversary cannot infer the length of the obfuscated query pattern. \section{Countermeasures} \label{sec:countermeasures} Having shown the weaknesses of the range query scheme against a pattern-based attack strategy, we will now discuss possible countermeasures. First, we will discuss and evaluate a pattern-based dummy selection strategy.
Afterwards, we will consider other strategies that could be used to hinder the adversary. \subsection{Pattern-Based Dummy Selection Strategy} \label{sec:countermeasures:improved-dummy} In the original dummy selection strategy, the client sampled the dummies independently and randomly from his dummy database. In contrast, the client will now draw \emph{whole patterns} from his database. When querying the resolver for a desired pattern, the client will draw $N-1$ random patterns of the same length and use them as dummies. If not enough patterns of the correct length are available, the client will combine two shorter patterns to obtain a concatenated pattern with the correct length. Intuitively, this approach ensures that the adversary will always detect $N$ patterns. The results of our evaluation, shown in Table~\ref{tab:countermeasures:improved-dummy:1}, confirm this conjecture. All patterns are exactly $N$-identifiable. \begin{table}[t] \centering \caption{Statistics for varying block sizes $N$ using the pattern-based dummy construction strategy} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}rrrrrr} \toprule $N$ & $S$ & 1-identifiable & $\leq 5$-identifiable & median(k) & max(k) \\ \midrule $10$ & $216{,}925$ & $0\,\%$ & $0\,\%$ & $10$ & $10$ \\ $50$ & $216{,}925$ & $0\,\%$ & $0\,\%$ & $50$ & $50$ \\ $100$ & $216{,}925$ & $0\,\%$ & $0\,\%$ & $100$ & $100$ \\ \bottomrule \end{tabular*} \label{tab:countermeasures:improved-dummy:1} \end{table} However, in real-world usage scenarios, the length of the pattern the client is about to query cannot be known in advance. As the dummies for the first element of the pattern have to be chosen before the query can be sent, the client has no way to be sure of the pattern length of the desired website, as these values may change over time when a website changes. This leads to uncertainty about the correct length of the dummy patterns. 
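Ignoring this uncertainty for the moment, the basic selection strategy can be sketched as follows (our own illustration; `patterns_by_len` maps a length to the list of available dummy patterns, and $N$ is the block size):

```python
import random

def draw_dummy_patterns(desired_len, patterns_by_len, N):
    """Draw N-1 whole dummy patterns of length desired_len; if the
    database runs out, concatenate two shorter patterns instead."""
    pool = list(patterns_by_len.get(desired_len, []))
    dummies = []
    while len(dummies) < N - 1:
        if pool:
            # draw a whole pattern of the correct length, without replacement
            dummies.append(pool.pop(random.randrange(len(pool))))
            continue
        for l in range(1, desired_len):       # combine two shorter patterns
            first = patterns_by_len.get(l, [])
            rest = patterns_by_len.get(desired_len - l, [])
            if first and rest:
                dummies.append(tuple(random.choice(first)) + tuple(random.choice(rest)))
                break
        else:
            raise ValueError("no dummy patterns of a suitable length available")
    return dummies
```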
A wrong choice of pattern length may be used by the adversary to identify the original pattern. Future research could study more sophisticated dummy selection strategies, drawing from experience gained in the field of obfuscated web search \cite{BalsaTD12}. \subsection{Other Countermeasures} As described in the previous section, the pattern-based dummy selection strategy is subject to practical limitations. We will briefly cover other countermeasures that may be used to improve the privacy of clients. This list is not exhaustive. The first option is to use a variable value for $N$ that changes on each block. This will raise the difficulty of determining the length of the original pattern, as long as the adversary cannot distinguish individual blocks. This change would render 1BD-improved useless, as it depends on a fixed number of chosen dummies per block (although similar optimizations could be found that would still improve on the performance of the trivial algorithm). However, this would not impact the performance of the ABD algorithm, as it does not rely on uniform block sizes. Another improvement that may make the pattern-based strategy more feasible would be to round up the length of the target pattern to the next multiple of a number $x > 1$. The additional queries (``padding'') could be chosen randomly, or by drawing patterns of the correct length. Finally, other privacy-enhancing techniques, such as mixes and onion routing \cite{chaum81-mix,dingledine04tor}, can be employed to counter monitoring and tracking efforts. However, these general-purpose solutions are not specifically designed for privacy-preserving DNS resolution and may introduce significant delays into the resolution process. \section{Discussion} \label{sec:discussion} We designed our experimental setup to stick as closely as possible to reality. However, for reasons of conciseness and clarity we have neglected some effects.
In the following we will discuss whether they affect the validity of our conclusions. Firstly, the results are implicitly biased due to a closed-world assumption, i.\,e., our results have been obtained on a dataset of limited size. However, as the Toplist of Alexa contains a large variety of websites we are confident that the results are valid for a large fraction of sites in general. Moreover, we have only evaluated the effectiveness of the attack for the \emph{home pages}; the evaluation of the attack on individual sub-pages is left for future work. Secondly, while we considered the effects of caching of dummy queries in the 1BD scenario, we disregarded caching of the desired queries: The client may still have (parts of) a pattern in his local cache, resulting in incomplete patterns being sent to the resolver. However, the adversary may adapt to caching by remembering the TTL of all responses he sent to a client and matching the patterns against the union of the received domain names and the cached entries. Moreover, an adversary who wants to determine all websites a user visits needs the patterns of all websites on the Internet. Such a database would be non-trivial to generate and maintain. However, a \emph{reactive} adversary may visit any domain name he receives a query for and store the pattern for that domain name in his pattern database, making a slightly delayed identification possible. Finally, we disregarded changing patterns as well as DNS prefetching techniques, which cause longer and more volatile patterns. However, a determined adversary will have no problems in addressing these issues.
Moreover, we proposed and evaluated an improved range query scheme using query patterns to disguise the original pattern. We encourage researchers to consider the effects of semantic interdependencies between queries when designing new schemes for query privacy, as the rising pervasiveness of social networking buttons, advertising and analytics makes singular queries less and less common for web surfing. \medskip
\section{Introduction} Gravitational tidal interactions between short-period planets and their host stars can play an important role in the evolution of the orbit and internal rotations of both bodies. As probably the clearest example, dissipation of planetary tidal flows is thought to explain why the shortest-period hot Jupiters have preferentially circular orbits, unlike the population of Jovian planets with orbital periods longer than ten days, which have a wide range of eccentricities (e.g.~\citealt{Rasio1996,WF2015}). Over the last decade, much work has been devoted to understanding the mechanisms of tidal dissipation in fluid bodies, but many uncertainties remain \citep{Gio2004,Wu2005b,IvanovPap2007,GoodmanLackner2009,PapIv2010,FBBO2014,Ogilvie2014}. One of the major uncertainties in the theory of tides is the importance of nonlinear fluid effects. These may be particularly important for the tides in the shortest-period hot Jupiters because of their large amplitudes (e.g. WASP-19 b has a dimensionless tidal amplitude $A\sim 0.05$ using Eq.~\ref{deftidalamp} below, which can no longer be treated as a small parameter), so that linear theory (e.g.~\citealt{Wu2005b,Gio2004,IvanovPap2007,PapIv2010,Ogilvie2013}) may no longer accurately describe the tidal response. Nonlinear fluid effects are likely to play a crucial role whenever tidal forcing excites small-scale waves (typically restored by buoyancy and/or rotation), since nonlinearities become important for much smaller amplitudes for these waves than for large-scale tidal flows, resulting in wave breaking \citep{Barker2010,Barker2011} or subtler parametric instabilities \citep{BO2011,Weinberg2012}, as well as localised angular momentum deposition (e.g.~\citealt{FBBO2014}). In addition, nonlinear tidal effects can drive instabilities of the large-scale non-wavelike tidal flows, which would not be predicted by a linear tidal theory.
In this work, and in the companion paper, we study one such instability: the elliptical instability, which occurs in fluids with elliptical streamlines \citep{Kerswell2002}, such as in tidally deformed planets or stars. Previous work using a local computational model \citep{BL2013,BL2014} has demonstrated that this instability could be important for tidal dissipation inside planets with the shortest orbital periods (which have the largest dimensionless tidal amplitudes) -- in particular, it may explain why hot Jupiters with orbital periods shorter than about 2 days have preferentially circular orbits. The elliptical instability has also been studied in impressive laboratory experiments \citep{Lacaze2004,LeBars2007,LeBars2010}, as well as in global numerical simulations in a rigid ellipsoidal container \citep{Cebron2010,Cebron2013}. However, simulations of the instability in a global model with a realistic free surface that are appropriate for this problem have not yet been undertaken (though see \citealt{Ou2004,Ou2007} for a different application). This is the primary aim of this paper. Studying global effects is required in order to determine how they could modify the outcome of the instability. If global effects could modify the dissipative properties of the flow, this might elevate the importance of this instability for astrophysics, by allowing it to be important for longer orbital periods. I adopt the simplest global model in which to self-consistently study nonlinear tidal effects in planets (or stars): a rotating and tidally deformed homogeneous ellipsoidal fluid body. More realistic models should be considered in future investigations, but this model has enormous theoretical advantages over more complicated ones due to its tractability, and there is an existing body of work that we can apply to aid our understanding of its properties \citep{L1989a,L1989b,ST1992,LL1996,LL1996a}. 
In addition, it is the cleanest configuration in which to study the elliptical instability in isolation. This is because the lowest order (quadrupolar) tidal potential does not directly excite global inertial modes in a homogeneous incompressible body (at least for aligned spin and orbit; \citealt{GoodmanLackner2009,Ogilvie2009,PapIv2010}), so enhanced tidal dissipation (over viscous damping of the global tidal flow) can only occur via this instability. In a companion paper \citep{Barker2015a}, we studied the global modes and instabilities of such a planet. In this work I present the results of global numerical simulations, using a spectral element method, to study the nonlinear outcome of the elliptical instability. My aim is to understand its nonlinear evolution and to determine its astrophysical relevance for tidal dissipation. I briefly describe the model in \S 2 -- though see \cite{Barker2015a} for further details -- before describing the code used and various code tests in \S 3 and 4. The main results are presented in \S 5 and 6, and a discussion (where these results are applied to the tidal evolution of extrasolar planets and close binary stars) and conclusion is presented in \S 7 and 8. \section{Simplified model} In \cite{Barker2015a}, we constructed a simple model in which we can study nonlinear tides in a gaseous planet or star, which I will briefly outline here. I focus on the spin--orbit synchronisation (and spin--orbit alignment) problem for an aligned (or purely anti-aligned) circular orbit for simplicity. This is because in this case there exists a natural frame in which the equilibrium shape of the ellipsoid is fixed (in the absence of instabilities), which is advantageous computationally. I use Cartesian coordinates $(x,y,z)$ centred on the planet of mass $m_p$ and unperturbed radius $R_p$ such that the $z$-axis is aligned with its spin axis, and the angular velocity of the fluid is $\Omega\geq 0$. 
The star has mass $m_\star$, about which the planet orbits with angular velocity $\boldsymbol{n}=n\boldsymbol{e}_z$. I allow the sign of $n$ to be positive or negative so that both purely aligned (prograde) and anti-aligned (retrograde) orbits can be studied. In the frame that rotates at the rate $\boldsymbol{n}$ (the ``bulge frame"), the governing equations are \begin{eqnarray} \label{eqs1} &&\left(\partial_{t} + \boldsymbol{u}\cdot \nabla\right)\boldsymbol{u} + 2\boldsymbol{n}\times \boldsymbol{u} = -\nabla \Pi + \nu \nabla^{2}\boldsymbol{u}, \\ && \nabla \cdot \boldsymbol{u} = 0, \\ && \Pi = p+\Phi-\frac{1}{2}|\boldsymbol{n}\times\boldsymbol{x}|^{2}+\Psi, \label{eqs3} \end{eqnarray} where $p$ is a pressure, $\nu$ is the kinematic viscosity, $\Phi$ is a fixed gravitational potential and $\Psi$ is an imposed tidal potential. The fluid has uniform density ($\rho\equiv1$) and is incompressible. Viscosity can be thought of as crudely representing the effects of turbulent convection on dissipating large-scale tidal flows (e.g.~\citealt{Zahn1966,Goldreich1977,Penev2007,Penev2009,OgilvieLesur2012}), though I take $\nu$ to be uniform and independent of frequency in this work, both for simplicity, and because the four-decade-old controversy over the frequency dependence of $\nu$ has not been resolved. Equilibrium is maintained by fluid pressure, central gravity and centrifugal and tidal forces. I adopt a fixed gravitational potential for the planet \begin{eqnarray} \label{fixedpot} \Phi(\boldsymbol{x})=\frac{1}{2}\omega_{d}^{2}r^2, \end{eqnarray} where $r^2=x^2+y^2+z^2$, and the dynamical frequency is $\omega_d= \sqrt{\frac{Gm_p}{R_p^3}}$. Using a fixed potential might be thought to represent the gravity of a centrally condensed body, though in this case the body is strictly uniform.
I neglect the self-gravity of the fluid for computational convenience, but its inclusion would be unlikely to significantly change the results (the elliptical instability excites inertial waves, which only weakly perturb the gravitational potential). The tidal potential (to lowest order) is \begin{eqnarray} \Psi = \frac{A\omega_d^2}{2} \left( r^2 - 3 (\hat{\boldsymbol{a}_\star}\cdot \boldsymbol{x})^2\right), \end{eqnarray} where $\hat{\boldsymbol{a}_\star}=(1,0,0)$ defines the direction to the star, which is stationary in the bulge frame (because the orbit is circular). I define \begin{eqnarray} \label{deftidalamp} A&=& \frac{m_{\star}}{m_{p}}\left(\frac{R_p}{a_\star}\right)^3, \end{eqnarray} which is a measure of the dimensionless tidal amplitude, with $a_\star$ being the distance to the star, and adopt units such that the tidally unperturbed non-rotating planet would have radius $R_p=1$ and set $\omega_d=1$. The planet has volume $V$ and its surface is the triaxial ellipsoid \begin{eqnarray} \label{ellipsoid} \frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1, \end{eqnarray} where $a,b$ and $c$ are the semi-axes of the ellipsoid, which are stationary in the bulge ($\boldsymbol{n}$) frame (in the absence of viscosity and instabilities). The basic laminar tidal flow is \begin{eqnarray} \label{basicflow} \boldsymbol{U}_{0}(\boldsymbol{x})=\gamma\left(-\frac{a}{b}y,\frac{b}{a}x,0\right), \end{eqnarray} where $\gamma=\Omega-n$, which is an exact inviscid solution that is steady (in the absence of instabilities) and results from the non-synchronous rotation of the fluid ($\Omega\ne n$) in its ellipsoidal volume. When $A\ne 0$ (and $\gamma\ne 0$), the planet is similar to a Roche-Riemann ellipsoid \citep{C1987}.
The stability of a similar configuration (a Riemann S-type ellipsoid) has been studied in the absence of a tidal deformation by \cite{LL1996,LL1996a}, and most recently we have studied the stability of our Roche-Riemann-like ellipsoid (including the tidal deformation) in the companion paper \citep{Barker2015a}. The shape of the planet can be determined from the three input parameters ($A,\Omega,n$) by \citep{Barker2015a} \begin{eqnarray} \label{shape} \epsilon &=& \frac{3A}{2(1-\gamma^2-n^2)-A}, \\ c^2 &=& \frac{2\left[ (2A+\gamma^2+n^2-1)(A-\gamma^2-n^2+1)+f\right]}{(A+1)(A+2(\gamma^2+n^2-1))}, \end{eqnarray} with \begin{eqnarray} f=2\gamma n \sqrt{(1-2A-\gamma^2-n^2)(1+A-\gamma^2-n^2)}, \end{eqnarray} and where $\epsilon$ is a measure of the tidal deformation, defined by $a=\sqrt{1+\epsilon}$ and $b=\sqrt{1-\epsilon}$. Note that $\epsilon\approx \frac{3A}{2}$ for small $\gamma$, $n$ and $A$. Note that $\boldsymbol{U}_{0}$ is not an exact solution in the presence of viscosity, because it does not satisfy the stress-free boundary condition at the free surface. Viscosity leads to a weak tidal torque, even in the absence of instability, which slowly synchronises the spin of the body with its orbit, and drives additional weak internal flows. The mean viscous dissipation rate for the flow given by Eq.~\ref{basicflow} is \begin{eqnarray} D_{\mathrm{lam}}= \frac{2\nu}{V}\int_{V} e_{ij}e_{ij}\; \mathrm{d}V=\nu \gamma^2 \left(\frac{a}{b}-\frac{b}{a}\right)^2, \label{disspred} \end{eqnarray} where $e_{ij}=\frac{1}{2}\left(\partial_i u_{j} + \partial_j u_{i}\right)$ (taking $u_i=U_{0,i}$). In the presence of viscosity, $D_{\mathrm{lam}}$ does not vanish unless $\gamma=0$ (or $a=b$). In the absence of perturbations and viscosity, our planet will remain in equilibrium for all time, because $\boldsymbol{U}_{0}$ is an exact inviscid solution. 
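For concreteness, the shape relations and the laminar dissipation rate above are straightforward to evaluate numerically. The following Python sketch (my own illustrative helper functions, not part of any simulation code) implements them in units with $R_p=\omega_d=1$:

```python
import numpy as np

def ellipsoid_shape(A, Omega, n):
    """Equilibrium semi-axes (a, b, c) and tidal deformation eps from the
    input parameters (A, Omega, n), in units with R_p = omega_d = 1,
    following the shape relations quoted above."""
    gamma = Omega - n
    s = gamma**2 + n**2
    eps = 3.0 * A / (2.0 * (1.0 - s) - A)
    f = 2.0 * gamma * n * np.sqrt((1.0 - 2.0 * A - s) * (1.0 + A - s))
    c2 = 2.0 * ((2.0 * A + s - 1.0) * (A - s + 1.0) + f) \
        / ((A + 1.0) * (A + 2.0 * (s - 1.0)))
    return np.sqrt(1.0 + eps), np.sqrt(1.0 - eps), np.sqrt(c2), eps

def laminar_dissipation(nu, gamma, a, b):
    """Mean viscous dissipation rate of the laminar tidal flow U_0,
    D_lam = nu * gamma^2 * (a/b - b/a)^2."""
    return nu * gamma**2 * (a / b - b / a)**2

a, b, c, eps = ellipsoid_shape(0.05, 0.2, 0.01)
# gives eps ~ 0.08 and (a, b, c) ~ (1.04, 0.96, 0.94), the shape of the
# prograde example simulation discussed later (and eps -> 3A/2 as A -> 0)
```

For $A=0.05$, $\Omega=0.2$, $n=0.01$ this reproduces the equilibrium figure of the example simulation of \S~\ref{Nonlinear}, and $D_\mathrm{lam}$ vanishes when $a=b$, as it must for the laminar tidal flow.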
However, this flow has elliptical streamlines, so an infinitesimal perturbation will drive the elliptical instability for certain choices of $A,\Omega$ and $n$. This instability will also be excited in the presence of viscosity if its growth rate is sufficiently large (in practice, this requires us to consider tidal amplitudes that are slightly larger than those for most of the observed hot Jupiters in our simulations). The planetary orbit is fixed in this model, which is a reasonable approximation when studying the synchronisation (and spin-orbit alignment) of the planet. This is because short-period planets typically have much less angular momentum contained in their spin compared with their orbits. I focus on the synchronisation problem in this work because it is the simplest computationally, as there exists a frame in which the geometry of the tidal flow is stationary, in the absence of instabilities. While I do not directly study the circularisation problem, the small-scale linear instability has similar properties in this case \citep{KerswellMalkus1998}, and I believe its nonlinear evolution is likely to share many properties with the synchronisation problem that is explicitly studied in this paper. \subsection{Elliptical instability} The elliptical instability is a fluid instability of elliptical streamlines, which can be driven if $\gamma\ne 0$ and $\epsilon\ne 0$ and draws upon the kinetic energy of the tidal flow. We have studied the global properties of this instability in detail in \cite{Barker2015a}, but to aid the reader who may not be familiar with its properties, I briefly introduce those which are relevant to understand the main results of \S~\ref{Nonlinear}. 
The instability results from the interaction of the elliptical deformation (which has frequency $2\gamma$ in the fluid frame, which rotates at the rate $\Omega$ about the $z$-axis) with a pair of inertial waves (which have frequencies in the fluid frame $|\omega_{i}|\leq 2|\Omega|$ for $i=1,2$). Instability is possible if the wave frequencies add up so that they are in resonance with the deformation. For instability to occur, we also require the two waves to have harmonic orders $\ell_1=\ell_2$ and azimuthal wavenumbers $m_1\pm m_2=2$, since the tidal deformation has $m=2$ \citep{Kerswell1993,LL1996}. The growth rate is typically \begin{eqnarray} \sigma \sim \epsilon \gamma, \end{eqnarray} but exhibits an additional dependence on $n$ \citep{Craik1989}. The fastest growing modes typically have $|\omega_1|\approx |\omega_2|\approx \gamma$, and a necessary condition for instability (in the case of uniform rotation and $\epsilon\ll 1$) is that \begin{eqnarray} -\Omega \leq n \leq 3 \Omega, \end{eqnarray} since inertial waves cannot be excited outside of this frequency range \citep{Kerswell2002}. However, when $\epsilon$ is no longer tiny, instability is possible for modes that are not exactly resonant because each resonance has a finite width ($O(\epsilon\gamma)$). Thus, a given pair of waves can be unstable if $\omega_1\pm\omega_2 \approx 2\gamma \left(1+O(\epsilon)\right)$ -- in addition, for large enough $\epsilon$, instabilities involving three or more waves are possible in principle. More detailed results relating to the instability are presented in \cite{Barker2015a}. \section{Numerical method and setup} I solve Eqs.~\ref{eqs1}--\ref{eqs3} in their weak variational form with the efficiently parallelised spectral element code Nek5000 \citep{nek5000}. 
Spectral element methods combine the geometric flexibility of finite element methods with the accuracy of spectral methods, which makes them particularly suitable for studying tidal flows in realistic ellipsoidal geometries. I have previously used this code to study tidally forced inertial waves in spherical shells \citep{FBBO2014}. The computational domain is decomposed into $E$ non-overlapping hexahedral elements, and within each element, the velocity and pressure are represented as tensor-product Lagrange polynomials of orders $N$ and $N-2$, where the points are the Gauss-Lobatto-Legendre and Gauss-Legendre points, respectively (e.g.~\citealt{DevilleFischerMund2002}). The convergence is algebraic with increasing number of elements $E$ and exponential with increasing polynomial order $N$. The total number of grid points for each velocity component is $E N^3$. Temporal discretisation in Nek5000 is accomplished by a third order method based on a semi-implicit formulation, in which the nonlinear and Coriolis terms are treated explicitly, and the remaining linear terms are treated implicitly. Solutions are de-aliased following the $3/2$ rule i.e.~$3N/2$ grid points in each dimension are used for the non-linear terms, whereas only $N$ are used for the linear terms. I use an adaptive time-step based on the CFL condition with an appropriate safety factor, and I have ensured that the mesh and fluid motion are accurately computed by checking that the results do not depend on the time-step size for several cases. The computational mesh has $E=1280$ elements, which results from the merger of a Cartesian mesh close to the origin and a pair of concentric spherical shells close to the external boundary. 
The resulting mesh describes a full sphere centred on the origin, but we can simply deform the mesh into one that describes a full ellipsoid by applying the transformation $(x,y,z)\rightarrow (ax,by,cz)$ for each grid point in the original mesh, where $a,b,c$ are chosen so that the initial state is an inviscid equilibrium for a given set of parameters ($A,\Omega,n$). This method works well for all $a,b,c$ considered in this work. I typically adopt $N=8$ to $14$ for the simulations presented in this work (corresponding to approximately $\sim 90^3$ to $150^3$ grid points). An example deformed mesh is shown in Fig.~\ref{0} for $N=8$. \begin{figure} \begin{center} \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.4\textwidth]{0a} } \subfigure{\includegraphics[trim=0cm 0cm 0cm 1cm, clip=true,width=0.4\textwidth]{0b} } \end{center} \caption{An example spectral element mesh for a highly deformed ellipsoid with $\epsilon=0.3$ and $c=0.8$ for illustration. This has $E=1280$ elements and $N=8$ grid points inside each element, which is the lowest resolution considered in this work. Top: external view. Bottom: slice through the mesh at $y=0$.} \label{0} \end{figure} Nek5000 solves for the mesh motion using an Arbitrary Lagrangian Eulerian (ALE) method, which allows us to treat the boundary condition at the surface of the planet as a free surface. Unfortunately, the extra computational work per time-step, together with the restriction on the time-step to ensure that the mesh motion is accurately captured, makes the code significantly more computationally demanding than a fixed grid version. This is the first time that such a boundary condition has been studied in nonlinear simulations of tidal flows in planets or stars. Later in this work, I will present the results of comparison simulations that have a rigid boundary (with the same initial container shape) on which an impenetrable but stress-free condition is applied to the fluid velocity. 
These simulations have also been performed using Nek5000 with otherwise the same setup as the free surface simulations. \subsection{Testing surface gravity modes in Nek5000} \label{tests} In previous work, I have thoroughly tested Nek5000 on several problems with a fixed boundary (e.g.~\citealt{FBBO2014,BDL2014}). Here I outline several tests of its free surface capabilities, which were undertaken so that I could be comfortable in its application to the main problem of this paper. The simplest test is to compare the frequencies of surface gravity modes in a non-rotating spherical planet in a fixed gravitational potential (Eq.~\ref{fixedpot}), which are\footnote{Note that the $\ell=1$ mode would be a trivial mode with zero frequency if I had solved Poisson's equation for $\Phi$ instead of adopting a fixed potential. These frequencies match onto those of a self-gravitating body in the limit of large $\ell$.} (e.g.~\citealt{Barker2015a}) \begin{eqnarray} \omega_\ell = \pm \sqrt{\ell} \omega_d. \end{eqnarray} To test these, I initialise a simulation with no flow at $t=0$, but with a mesh displacement $\boldsymbol{x} \rightarrow \boldsymbol{x} + \boldsymbol{\xi},$ where \begin{eqnarray} \boldsymbol{\xi} = A_{\xi} f(r) r^{\ell-1}\mathrm{Re}\left[\tilde{Y}_{\ell}^{\ell}(\theta,\phi)\right] \boldsymbol{x} \end{eqnarray} with $f(r)=r^{12}$, and $\tilde{Y}_{\ell}^{\ell}$ is a spherical harmonic but with constants set to unity. (Note that this strictly has non-vanishing divergence -- alternatively, the exact eigenfunction for a surface gravity mode could have been used.) This is a radial displacement that is proportional to a sectoral harmonic (with $\ell=m$), and $f(r)$ guarantees that the perturbation is strongest at the surface and vanishes near $r=0$. I choose $A_{\xi}=0.005$ (so that we start in the linear regime) and run simulations for $\ell\in[1,6]$, using a resolution of $N=8$ and fixed time-step $\mathrm{dt}=0.003$ ($\nu=10^{-8}$ i.e.~viscosity is negligible). 
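The mode frequencies are identified below from a Lomb-Scargle periodogram of the RMS velocity. As a standalone check of that step, SciPy's Lomb-Scargle implementation recovers $\omega=\sqrt{3}\,\omega_d$ from a fabricated $\ell=3$ test signal (this is purely an illustration with synthetic data, not simulation output):

```python
import numpy as np
from scipy.signal import lombscargle

# synthetic stand-in for an oscillating l = 3 mode signal, whose frequency
# should be recovered as omega = sqrt(3)*omega_d (omega_d = 1 in code units)
ell = 3
t = np.linspace(0.0, 200.0, 4000)
y = np.cos(np.sqrt(ell) * t)
freqs = np.linspace(0.1, 4.0, 2000)          # angular frequencies to scan
pgram = lombscargle(t, y - y.mean(), freqs)
omega_peak = freqs[np.argmax(pgram)]         # close to sqrt(3) ~ 1.73
```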
The mean kinetic energy for these simulations, \begin{eqnarray} K=\frac{1}{V}\int_V\frac{1}{2}|\boldsymbol{u}|^2\,\mathrm{d}V, \end{eqnarray} is plotted in the top panel of Fig.~\ref{1} for each $\ell\in[1,6]$, where $V$ is the fluid volume. This shows that oscillations with larger $\ell$ have larger frequencies, and that the waves remain negligibly damped for several periods (as expected for the chosen viscosity). In the bottom panel of Fig.~\ref{1}, I show the results of computing the Lomb-Scargle periodogram \citep{Press1992} of the RMS velocity $\sqrt{2K}$. The predicted frequencies are represented by the dashed vertical lines, and the excellent agreement with the dominant frequency in the simulations indicates that the code accurately captures these modes. This provides a significant test of the free surface capabilities of Nek5000. \begin{figure} \begin{center} \subfigure{\includegraphics[trim=4.5cm 0cm 8.3cm 0cm, clip=true,width=0.35\textwidth]{1a} } \subfigure{\includegraphics[trim=4.5cm 0cm 8.3cm 0cm, clip=true,width=0.35\textwidth]{1b} } \end{center} \caption{Test of the free surface capabilities of Nek5000 for initial surface deformations with $\ell\in[1,6]$ of a non-rotating sphere designed to excite surface gravity modes. Top: mean kinetic energy vs time. Bottom: Lomb-Scargle periodogram of $\sqrt{2K}$, showing the theoretical prediction $\omega_\ell=\sqrt{\ell}\omega_d$ as dashed vertical lines. This shows excellent agreement with the theoretical predictions.
(The difference in significance of the peaks in the bottom panel is due to the different run-times considered).} \label{1} \end{figure} Several additional tests of the code were also carried out, including comparing the frequencies of surface gravity modes of a rotating ``Maclaurin-like spheroid" against theoretical predictions \citep{Braviner2014,Barker2015a}, and the evolution of the shape and simplest internal flows (linear in Cartesian coordinates) of an ellipsoid for various initial conditions (which have been tested against numerical integration of the ODEs derived within the formalism of \citealt{ST1992} and listed in Appendix B of \citealt{Barker2015a}). The agreement was excellent in all cases, so I omit these additional tests for brevity. Several further tests of the code are outlined in \S \ref{Nonlinear} in my discussion of simulation results. I can therefore be confident in applying Nek5000 to study the nonlinear evolution of the elliptical instability. \subsection{Nonlinear simulations} I initialise each simulation with the flow given by Eq.~\ref{basicflow}, to which I add small-amplitude ($\sim 10^{-5}$) random noise to each component of the velocity field at each grid point to allow instability to develop. All simulations are listed in Table \ref{table2} in Appendix~\ref{AppendixTable}, for reference. The initial shape and internal flow are an inviscid equilibrium, but they evolve weakly due to viscosity, and also when instability develops. The fact that the initial flow remains an equilibrium (prior to instability) in these simulations, apart from weak viscous evolution, indicates that the code correctly captures the basic equilibrium configuration. In addition, the initial mean viscous dissipation rate in each simulation accurately matches the prediction given by Eq.~\ref{disspred}, which I list in Table \ref{table2} and will explain in more detail in \S~\ref{Nonlinear}.
I study the instability as a function of ($A,\Omega,n$) as well as the kinematic viscosity $\nu$, which are treated as independent parameters. I will first present a comparison of the growth rate of the elliptical instability with theoretical predictions \citep{Barker2015a} in \S \ref{comparison}, where I also confirm the presence of a violent elliptical instability for retrograde spins outside the range in which it is usually thought to operate. I then illustrate the main results from the nonlinear simulations with several aligned (prograde) examples in \S\ref{Nonlinear}, and with several examples in which the planet has an initially anti-aligned (retrograde) spin in \S \ref{retrogradecases}. A comparison of free surface and rigid boundary simulations is presented in Appendix \ref{rigidsims}. \section{Comparison with theoretical predictions} \label{comparison} I begin by comparing the growth rates of the elliptical instability in the early stages of the global simulations with the predictions of the stability analysis of \cite{Barker2015a}. This set of simulations has $\Omega=0.2$ and $\nu=10^{-4}$ for various $n$, with either $A=0.1$ or $A=0.15$. The RMS vertical velocity is \begin{eqnarray} \langle u_z\rangle=\sqrt{\frac{1}{V}\int_V u_z^2 \, \mathrm{d}V}, \end{eqnarray} which primarily quantifies the (vertical) energy in the inertial waves driven by the elliptical instability. In the linear growth phase, I fit a straight line to $\ln \langle u_z\rangle$ as a function of $t$, of the form $\ln \langle u_z\rangle = \sigma t + \mathrm{const}$, to determine $\sigma$ (given that the instability excites waves with nonzero frequencies, a time interval that covers several wave periods must be chosen). The numerical results for the growth rate (normalised by $\epsilon$) as a function of $n$ are plotted in Fig.~\ref{8} for both $A=0.1$ (top panel) and $A=0.15$ (bottom panel) as black stars.
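This fitting procedure amounts to linear least squares on the logarithm of the signal. A minimal sketch with fabricated data (the window $[t_1,t_2]$ plays the role of the linear growth phase; the function is illustrative, not from any analysis pipeline):

```python
import numpy as np

def fit_growth_rate(t, uz_rms, t1, t2):
    """Fit ln<u_z> = sigma*t + const over the window [t1, t2], which
    should span several wave periods of the growing oscillatory signal."""
    sel = (t >= t1) & (t <= t2)
    sigma, _ = np.polyfit(t[sel], np.log(uz_rms[sel]), 1)
    return sigma

# fabricated growing oscillatory signal standing in for <u_z>(t)
t = np.linspace(0.0, 500.0, 5001)
uz = 1e-5 * np.exp(0.02 * t) * (1.0 + 0.3 * np.sin(0.4 * t))
sigma_fit = fit_growth_rate(t, uz, 50.0, 450.0)   # recovers ~0.02
```

Provided the window covers many wave periods, the oscillatory factor averages out of the fitted slope.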
I plot the maximum growth rate based on the energetic upper bound \citep{LL1996a} as the black dashed lines. I also plot the maximum growth rate for all global modes up to a given harmonic degree $\ell$ from the inviscid stability analysis of \cite{Barker2015a} (based on the formalism of \citealt{LL1996}), as the shaded coloured regions\footnote{The predictions for the case with $\ell_{\mathrm{max}}=8$ are computed by assuming a rigid boundary, since the linear properties of the elliptical instability are found to be very similar in this case (and the $U$-basis functions were difficult to compute accurately for $\ell>5$; \citealt{Barker2015a}).}. The simulation results agree reasonably well with the inviscid theoretical predictions, and exhibit the same trends. This agreement is promising, particularly given that the simulations were initialised with random noise (rather than a ``clean" initialisation using the eigenfunction of the fastest growing mode), and they also include non-negligible viscosity. Similar agreement between simulations and theoretical predictions has been found for the related problem of elliptical instability driven by latitudinal libration \citep{Vant2015}. \begin{figure} \begin{center} \subfigure{\includegraphics[trim=7cm 0cm 8cm 0cm, clip=true,width=0.35\textwidth]{2a} } \subfigure{\includegraphics[trim=7cm 0cm 8cm 0cm, clip=true,width=0.35\textwidth]{2b} } \end{center} \caption{Growth rates normalised by $\epsilon$ as a function of $n$ for the elliptical instability for a planet rotating at the rate $\Omega=0.2$ for $A=0.1$ (top) and $A=0.15$ (bottom) computed using global numerical simulations (DNS; black stars). I also plot predictions for the maximum growth rate from the inviscid global stability analysis \citep{Barker2015a} for modes with harmonic degrees up to a given $\ell$ (coloured shaded regions), as well as the upper bound from energetic considerations (\citealt{LL1996a}; region under the black dashed lines). 
All simulations were initialised with random noise and included viscosity with $\nu=10^{-4}$. The simulations agree reasonably well with the theoretical predictions. In addition, I confirm the presence of a violent elliptical instability for retrograde spins when $n\lesssim -\Omega=-0.2$ for these values of $A$.} \label{8} \end{figure} The growth rate when the spin-over mode is excited ($\ell\leq2$) is represented by the red shaded region for $n<0$, and the regions that represent the excitation of other global inertial modes are illustrated by shading using different colours. The ellipsoidal shape is no longer defined if $n\lesssim -0.5$. This shows that more of the parameter space is unstable to modes with smaller spatial scales (larger $\ell)$. The fastest growth rates occur for retrograde (anti-aligned) spins, with the growth rate for prograde (aligned) cases being much smaller in general. This result is also found in a local plane-wave analysis of the elliptical instability \citep{Craik1989,Barker2015a}. Note that the growth rate for global modes with $\ell\leq 8$ is always smaller (by an $O(1)$ factor) than the energetic upper bound. In Fig.~\ref{8}, I also confirm the prediction\footnote{These simulations were in fact performed before the stability analysis, so this is more of an explanation than a prediction.} of a violent elliptical instability for retrograde spins even when $n\lesssim -\Omega$, when the usual elliptical instability of inertial modes is not normally thought to operate. This occurs only if the tidal amplitude is large enough to allow instability even when a pair of modes is not exactly in resonance. The nonlinear evolution of a simulation in this regime is presented in \S~\ref{violent}. Now that I have confirmed the presence of elliptical instability in global simulations with growth rates that are consistent with theoretical predictions, I turn to discuss the nonlinear evolution of the elliptical instability. 
I present several illustrative example simulations in \S~\ref{Nonlinear} and \ref{retrogradecases}. \section{Nonlinear simulations with a prograde spin} \label{Nonlinear} \subsection{An illustrative example: zonal flows as a saturation mechanism} \label{example} I now describe the results of an example simulation, in which the planet initially rotates in a prograde sense with $\Omega=0.2$ and orbital angular frequency $n=0.01$, with $A=0.05$ and $\nu=3 \times 10^{-5}$ (computed using a resolution of $N=10$). This configuration has an initial shape with $\epsilon=0.08$, $a=1.040$, $b=0.959$, $c=0.941$. Since the initial tidal flow (Eq.~\ref{basicflow}) does not satisfy the stress-free condition at the surface, there is a weak viscously driven circulation in the fluid interior (leading to RMS vertical velocities $\langle u_z\rangle \sim 10^{-5}$), and viscous dissipation that leads to a gradual synchronisation of the spin and orbit. The predicted mean viscous dissipation rate at $t=0$ (Eq.~\ref{disspred}) is well matched by the simulation result ($D=2.2\times 10^{-6}$) shown in the top right panel of Fig.~\ref{2} by the agreement of the black (simulated) and red (predicted) lines during this stage. The time evolution of various mean quantities is plotted in Fig.~\ref{2}. The initial elliptical instability grows to large amplitudes by $t\sim 1500$, by which time it enhances the dissipation, producing rapid partial synchronisation of the spin and orbit from $\gamma\approx0.19$ to $\gamma\approx 0.17$. After this initial burst, the turbulence temporarily dies away only to recur in a cyclic manner. Each burst of instability corresponds with a period of enhanced dissipation, which is shown by the second panel of Fig.~\ref{2}, where I plot \begin{eqnarray} D=\frac{2\nu}{V}\int_V e_{ij} e_{ij} \, \mathrm{d}V. 
\end{eqnarray} In the third panel, I plot \begin{eqnarray} \langle \gamma(t) \rangle=\frac{1}{V}\int_V \frac{u_\phi}{R} \,\mathrm{d}V, \end{eqnarray} the evolving mean asynchronism of the flow. During the burst phases, the spin rapidly undergoes a partial synchronisation with the orbit, which is repeated during subsequent bursts. The tidal synchronisation does not occur smoothly, instead occurring in an erratic manner, dominated by these short-lived bursts. Outside of these turbulent bursts, dissipation appears to be due to viscous dissipation of the differential rotation in the flow, and not purely laminar viscous dissipation of the global tidal flow, i.e.~Eq.~\ref{disspred} (shown as the red dashed line, where I have replaced $\gamma\rightarrow \langle \gamma(t) \rangle$). \begin{figure} \begin{center} \subfigure{\includegraphics[trim=6cm 0cm 8cm 0cm, clip=true,width=0.23\textwidth]{3aNEW1} } \subfigure{\includegraphics[trim=6cm 0cm 8cm 0cm, clip=true,width=0.23\textwidth]{3bNEW} } \subfigure{\includegraphics[trim=6cm 0cm 8cm 0cm, clip=true,width=0.23\textwidth]{3cNEW} } \subfigure{\includegraphics[trim=5.5cm 0cm 8cm 0cm, clip=true,width=0.23\textwidth]{3ddNEW} } \end{center} \caption{Evolution of various flow quantities with time for a simulation with $\Omega=0.2,n=0.01,A=0.05$ and $\nu=3\times 10^{-5}$. Top left: comparison of RMS $u_z$ with the energy in the differential rotation, $E_\mathrm{dr}$. Top right: viscous dissipation rate (black line) and laminar viscous dissipation rate prediction (red line). Bottom left: mean asynchronism of the flow $\langle\gamma\rangle$. Bottom right: Cartesian components of the angular momentum of the fluid in the inertial frame. This figure illustrates that the elliptical instability can lead to enhanced tidal dissipation.
It also shows the importance of differential rotation (zonal flows) as a saturation mechanism for the elliptical instability.} \label{2} \end{figure} \begin{figure} \begin{center} \subfigure{\includegraphics[trim=6cm 0cm 5cm 0cm, clip=true,width=0.23\textwidth]{4a1} } \subfigure{\includegraphics[trim=6cm 0cm 5cm 0cm, clip=true,width=0.23\textwidth]{4a2} } \subfigure{\includegraphics[trim=4cm 0cm 7cm 0cm, clip=true,width=0.23\textwidth]{4b1} } \subfigure{\includegraphics[trim=6cm 0cm 7cm 0cm, clip=true,width=0.23\textwidth]{3eNEW} } \end{center} \caption{Illustration of zonal flows produced in a simulation with $\Omega=0.2,n=0.01,A=0.05$ and $\nu=3\times 10^{-5}$. Top left: Vertically-averaged azimuthal velocity $\langle u_\phi^{\prime} \rangle_z$ on the $xy$-plane during the initial burst phase. Top right: same during the second burst phase. Bottom left: comparison of the mean zonal flow $\langle u_\phi^{\prime} \rangle_{\phi,z}$ as a function of cylindrical radius $R$ during the first and second burst phases. Bottom right: phase plane plot of $\langle u_z\rangle$ vs $u_\mathrm{dr}$, with the colour representing time (from blue to green to red). This behaviour is reminiscent of predator-prey dynamics (with waves acting as the prey and zonal flows as the predators).} \label{3} \end{figure} In the bottom right panel of Fig.~\ref{2}, I plot the components of the angular momentum in the inertial frame ($\boldsymbol{L}_0$), which is related to that in the bulge frame ($\boldsymbol{L}_n$) by $\boldsymbol{L}_0=\boldsymbol{L}_n+I \boldsymbol{n}$, where $I=\frac{8}{15}\pi a b c$ is the moment of inertia of a rigid ellipsoid (for rotations about $z$), and \begin{eqnarray} \boldsymbol{L}_n=\int_V \boldsymbol{x}\times \boldsymbol{u} \,\mathrm{d}V. \end{eqnarray} This panel shows that the spin remains aligned with the orbit during the tidal synchronisation process, as we might expect (by the absence of appreciable growth in $L_x$ or $L_y$). 
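As a consistency check on the moment of inertia quoted above: since $a^2=1+\epsilon$ and $b^2=1-\epsilon$ imply $a^2+b^2=2$, the standard result $I=\frac{1}{5}M(a^2+b^2)$ for a homogeneous ellipsoid (with $M=\frac{4}{3}\pi abc$ for $\rho=1$) reduces to $\frac{8}{15}\pi abc$. This can be verified numerically (an illustrative sketch, not from the analysis pipeline):

```python
import numpy as np

eps, c = 0.08, 0.941                    # example shape from this simulation
a, b = np.sqrt(1.0 + eps), np.sqrt(1.0 - eps)

# closed form: (1/5)*M*(a^2 + b^2) with M = (4/3)*pi*a*b*c and a^2 + b^2 = 2
I_exact = (4.0 * np.pi * a * b * c / 3.0) * (a**2 + b**2) / 5.0
assert np.isclose(I_exact, 8.0 * np.pi * a * b * c / 15.0)

# Monte Carlo estimate of I = int (x^2 + y^2) dV over the ellipsoid (rho = 1)
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(1_000_000, 3)) * np.array([a, b, c])
inside = (pts[:, 0] / a)**2 + (pts[:, 1] / b)**2 + (pts[:, 2] / c)**2 < 1.0
I_mc = 8.0 * a * b * c * np.mean(
    np.where(inside, pts[:, 0]**2 + pts[:, 1]**2, 0.0))
```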
In order to analyse the cyclic behaviour, I define the differential rotation as follows: firstly, the perturbed velocity is \begin{eqnarray} \boldsymbol{u}^{\prime}=\boldsymbol{u}-\boldsymbol{U}(x,y,t), \end{eqnarray} where $\boldsymbol{U}(x,y,t)=\langle\gamma (t)\rangle(-\frac{ya}{b},\frac{xb}{a},0)$. For simplicity, I use the initial geometry when computing the differential rotation, but for all cases considered the results below differ negligibly if I instead use the instantaneous shape of the ellipsoid. I interpolate results from the non-uniformly spaced grid points in Nek5000 to a uniform Cartesian grid containing the entire body (consisting of $50^3$ points), and the vertically-averaged azimuthal perturbation in the $xy$-plane (using cylindrical polar coordinates) is defined by (motivated by similar calculations in \citealt{Favier2015}) \begin{eqnarray} \langle u^{\prime}_\phi (R,\phi,t) \rangle_z &=&\frac{1}{N_z} \sum_z u^{\prime}_\phi(R,\phi,z,t), \end{eqnarray} where $N_z$ is the number of points in $z$ for a given $R$ and $\phi$. The energy in the differential rotation is \begin{eqnarray} E_{\mathrm{dr}}(t)&=&\frac{1}{V}\int_V \frac{1}{2}\left[u^{\prime}_\phi (R,\phi,z,t)\right]^2 \, \mathrm{d}V. \end{eqnarray} In the top left panel of Fig.~\ref{2}, $E_{\mathrm{dr}}$ and $\langle u_z\rangle$ are plotted as a function of time, where $\langle u_z\rangle$ is a measure of the (square root of the) energy contained in the waves\footnote{I have, somewhat unusually, compared $E_\mathrm{dr}$ with the RMS $u_z$ in this case, because the former is typically much larger than $\frac{1}{2}\langle u_z\rangle^2$ by a factor of approximately $10^{3}$. In addition, this comparison highlights more clearly the time delay between these two quantities than if I had plotted energies on a log-scale.} (in addition to the weak viscously-driven flows). $E_{\mathrm{dr}}$ begins to grow shortly after the initial instability has set in. 
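These diagnostics are straightforward to reimplement on a uniform grid. The following NumPy sketch (an illustrative reconstruction of the post-processing step, not the actual analysis script) computes $\langle u^{\prime}_\phi\rangle_z$ and $E_{\mathrm{dr}}$, and is verified here with a solid-body offset flow in a sphere, for which both quantities are known analytically:

```python
import numpy as np

def zonal_flow_diagnostics(x, y, z, ux, uy, mask):
    """Vertically averaged azimuthal perturbation <u'_phi>_z and the
    volume-averaged energy E_dr, given perturbed velocity components
    (ux, uy) on a uniform Cartesian grid; mask is True inside the fluid."""
    X, Y, _ = np.meshgrid(x, y, z, indexing="ij")
    R = np.sqrt(X**2 + Y**2)
    uphi = (-Y * ux + X * uy) / np.where(R > 0.0, R, 1.0)
    counts = mask.sum(axis=2)                   # fluid points in each column
    uphi_zavg = np.where(mask, uphi, 0.0).sum(axis=2) / np.maximum(counts, 1)
    E_dr = 0.5 * np.mean(uphi[mask]**2)         # (1/V) int (1/2) u'_phi^2 dV
    return uphi_zavg, E_dr

# check with a solid-body offset u' = dOm*(-y, x, 0) in a unit sphere,
# for which u'_phi = dOm*R and E_dr -> dOm^2/5 analytically
n = 64
x = y = z = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
mask = X**2 + Y**2 + Z**2 < 1.0
dOm = 0.1
uphi_zavg, E_dr = zonal_flow_diagnostics(x, y, z, -dOm * Y, dOm * X, mask)
```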
Once the energy in the differential rotation has increased to a sufficient level, the energy in the waves subsequently decays. Only when the differential rotation has decayed sufficiently due to viscosity\footnote{The viscous timescale is $\sim L^2/\nu\sim 5000$ if $L\sim 0.4$, which is somewhat larger than the observed decay, but matches it to within an $O(1)$ factor.} can the instability grow once more. This leads to cyclic behaviour\footnote{The cyclic behaviour shown in Fig.~\ref{2} is somewhat similar to predator-prey dynamics, in which the waves can be thought of as ``rabbits" and the zonal flows as ``foxes" (e.g.~\citealt{Murray2002}). However, in this case, the energy source feeding the instability dies out as the spin synchronises with the orbit, so we do not observe strictly periodic behaviour because the ``food source runs out" (in addition to more complicated nonlinearities).} during which the instability grows, transfers energy into differential rotation, which then inhibits further growth until the differential rotation is sufficiently damped by viscosity. Differential rotation, in the form of zonal flows, therefore plays an important role in the saturation of the instability. I further illustrate this cyclic behaviour in the bottom right panel of Fig.~\ref{3}, where I plot the ``phase plane" $\langle u_z\rangle$ against $u_\mathrm{dr}=\sqrt{2 E_\mathrm{dr}}$ to show the appearance of cyclic behaviour with evolving cycle amplitudes (the colour denotes time, which increases from blue to red). Analogous cyclic behaviour occurs in local hydrodynamical simulations of the elliptical instability, where columnar vortices play the role of zonal flows \citep{BL2013}. 
It is interesting to note that cyclic behaviour has also been observed in the nonlinear evolution of the ``r-mode" instability in neutron stars, where inertial modes are instead driven by gravitational radiation, even in integrations of three-mode couplings that neglect zonal flows \citep{Brink2005,Bondarescu2009}. \begin{figure} \begin{center} \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.39\textwidth]{3fNEW} } \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.39\textwidth]{3gNEW} } \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.39\textwidth]{3hNEW} } \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.39\textwidth]{3iNEW} } \end{center} \caption{Illustration of the flow in the simulation with $\Omega=0.2,n=0.01,A=0.05$ and $\nu=3\times 10^{-5}$. Top and top middle: instantaneous vertical velocity and $|\boldsymbol{u}|$ during initial linear growth phase at $t=1700$. Bottom middle and bottom: same during initial ``turbulent" phase at $t=1999.7$.} \label{3a} \end{figure} The vertical fluid velocity and $|\boldsymbol{u}|$ in the $xz$-plane are shown in Fig.~\ref{3a} during the first linear growth phase at $t=1700$ (top two panels), and during the subsequent ``turbulent" burst phase at $t=1999.7$ (bottom two panels). I also illustrate the vertically and temporally averaged azimuthal velocity in the $xy$-plane, during the first two burst phases (for the differential rotation) in the top two panels in Fig.~\ref{3}. In the bottom left panel of Fig.~\ref{3}, I have plotted the azimuthal velocity as a function of cylindrical radius at both of these times, illustrating the radial structure of the zonal flows. The zonal flow reaches velocity amplitudes up to approximately $4\%$ of the fluid rotation during this phase. The bottom left panel of Fig.~\ref{2} also shows that there are transient periods of tidal desynchronisation (though the net evolution is towards synchronism).
This occurs between each turbulent burst, and is caused by viscous and nonlinear damping of the zonal flows, whose net angular momentum is transferred back to the mean rotation of the fluid. Ultimately, the energy source driving the zonal flows comes from the mean asynchronism of the flow, so these transient phases of desynchronisation are not sustained. Nevertheless, this points out the possibility of periods of tidal desynchronisation, even in a system that lacks an additional energy source (cf.~\citealt{OgilvieLesur2012}). The generation of zonal flows occurs in many rotating fluids (e.g.~\citealt{FBBO2014}), but this is the first example in which it has been observed in global simulations with a free surface. In this example, the flow driven by the elliptical instability was only weakly turbulent because of the relatively large viscosity in relation to the tidal amplitude considered. In the next subsection, I briefly examine two further simulations with a prograde spin, but in which the instability is driven more strongly. In these cases, zonal flows continue to play a role, but the turbulence is less bursty. \subsection{Two further prograde examples} \begin{figure} \begin{center} \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{6a} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{6bNEW} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{6c} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{6dddNEW} } \end{center} \caption{Evolution of various flow quantities with time for a simulation with $\Omega=0.3,n=0.05,A=0.15$ and $\nu=10^{-4}$. Top left: comparison of RMS $u_z$ with the energy in the differential rotation, $E_\mathrm{dr}$. Top right: viscous dissipation rate (black line) and laminar viscous dissipation rate prediction (red line). Bottom left: mean asynchronism of the flow $\langle\gamma\rangle$. 
Bottom right: Cartesian components of the angular momentum of the fluid in the inertial frame.} \label{4} \end{figure} \begin{figure} \begin{center} \subfigure{\includegraphics[trim=5cm 0cm 7cm 2cm, clip=true,width=0.23\textwidth]{7a2} } \subfigure{\includegraphics[trim=4cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{7b1} } \end{center} \caption{Illustration of zonal flows produced in a simulation with $\Omega=0.3,n=0.05,A=0.15$ and $\nu=10^{-4}$. Left: Vertically-averaged azimuthal velocity $\langle u_\phi^{\prime} \rangle_z$ on the $xy$-plane during the strongest (second) burst phase. Right: comparison of the mean zonal flow $\langle u_\phi^{\prime} \rangle_{\phi,z}$ as a function of cylindrical radius $R$ during the first and second burst phases.} \label{5} \end{figure} A further example with a prograde spin is illustrated in Figs.~\ref{4} and \ref{5}. In this case, $\Omega=0.3$, $n=0.05$, with $A=0.15$ and $\nu=10^{-4}$ (using a resolution of $N=10$). As shown in the top left panel of Fig.~\ref{4}, the initial elliptical instability (shown by the blue dashed line that plots the RMS $u_z$) has grown by $t\sim 500$, leading to the generation of differential rotation in the form of zonal flows (shown by the solid black line, and plotted as a function of $R$ in the right panel of Fig.~\ref{5} as the solid black line). This simulation also exhibits bursty behaviour, though it is much less regular than the example discussed in \S~\ref{example}. A strong zonal flow is produced during the second burst phase at $t\sim 1500$, which is plotted as a function of $R$ in the right panel of Fig.~\ref{5} (blue dashed line, where the black solid line shows the zonal flow in the first burst phase), which subsequently leads to a reduction in wave activity ($\langle u_z\rangle$). I have plotted the mean azimuthal velocity on the $xy$-plane during this phase in the left panel of Fig.~\ref{5}.
As this zonal flow is subsequently damped, this produces a period of rapid tidal desynchronisation, in which $\langle \gamma(t) \rangle$ increases sharply until the zonal flow has damped and transferred its angular momentum to the mean rotation (shown in the bottom left panel of Fig.~\ref{4}). Following this there are two further burst phases before the instability has succeeded in partially synchronising the planet with its orbit, such that $\langle \gamma(t) \rangle \lesssim 0.07$, after which the instability is no longer excited strongly enough to overcome viscous damping. The enhanced viscous dissipation during the bursts of instability (over the laminar prediction shown as the red dashed line) is shown in the top right panel of Fig.~\ref{4}. As with the previous example, the instability preserves the rotation axis of the flow, as indicated by the Cartesian components of the mean angular momentum in the inertial frame in the bottom right panel of Fig.~\ref{4}. The time-evolution of the various flow quantities for a different example which exhibits qualitatively similar behaviour is plotted in Fig.~\ref{10}. This example has $\Omega=0.2$, $A=0.15$ and $\nu=10^{-4}$, but with $n=0$ -- i.e.~there is no orbital motion. This is unphysical, but the properties of the flow are worth presenting in this case because the flow is more turbulent. The top left panel again shows similar behaviour to the two previous examples, plotting $E_z=\frac{1}{2}\langle u_z\rangle^2$ (instead of $\langle u_z\rangle$, which was used for the previous two examples) and $E_\mathrm{dr}$ as a function of time on a log-scale. However, this simulation is less regular in its bursty behaviour. When the zonal flows are strong, the energy in the waves is reduced, until the zonal flows are sufficiently damped -- either by their own shear instabilities, nonlinear energy transfers or by viscosity. This behaviour persists until the planet is mostly synchronised with its orbit, as shown in the bottom panel.
After $t\gtrsim 6000$, the elliptical instability is much weaker, leading to mostly laminar evolution, as the wave excitation is balanced by viscous dissipation, at a rate that is an $O(1)$ factor larger than the viscous dissipation of the bulk tidal flow, as is shown in the top right panel of Fig.~\ref{10}. \begin{figure} \begin{center} \subfigure{\includegraphics[trim=6cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{10aNEW} } \subfigure{\includegraphics[trim=6cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{10bNEW} } \subfigure{\includegraphics[trim=6cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{10c} } \end{center} \caption{Evolution of various flow quantities with time for a simulation with $\Omega=0.2,n=0,A=0.15$ and $\nu=10^{-4}$. Top left: comparison of $E_z$ with the energy in the differential rotation, $E_\mathrm{dr}$. Top right: viscous dissipation rate (black line) and laminar viscous dissipation rate prediction (red line). Bottom: mean asynchronism of the flow $\langle\gamma\rangle$. Given that there is no reference orbit (as $n=0$), I have not plotted the angular momentum components for this example.} \label{10} \end{figure} \subsection{Summary} In the examples presented so far, the initial flow becomes unstable and global inertial modes are excited by the elliptical instability. These grow and produce zonal flows, which inhibit further growth of the instability, either by changing the flow (by inducing differential rotation) in such a way that resonance with a particular pair of global modes is no longer possible, or by perturbing the phases of the growing modes in such a way that they cannot be coherently driven \citep{BL2013}. In addition, the gradual evolution of the bulk rotation of the fluid means that different resonances can be excited as the system evolves. 
Global modes are again driven when the zonal flows are sufficiently damped by viscosity, or by their own shear instabilities or nonlinear energy transfers, and this leads to ``bursty" cyclic behaviour, somewhat reminiscent of predator-prey dynamics in some cases. Whenever it is excited, the elliptical instability produces enhanced tidal dissipation that leads to partial synchronisation of the spin of the planet with its orbit. So far, I have only discussed cases in which the initial spin was prograde/aligned with the orbit. In \S~\ref{retrogradecases}, I present the results of simulations in which the planet has a retrograde/anti-aligned spin. This allows the action of the elliptical instability on the spin-orbit angle to be analysed. In Appendix~\ref{rigidsims}, I compare these results with those obtained in simulations that adopt a rigid container. \section{Nonlinear simulations with retrograde spin: spin-orbit alignment driven by the elliptical instability} \label{retrogradecases} Until now, I have only discussed examples with a prograde (relative to the orbit; or with $n=0$) planetary spin that is initially aligned with its orbit. In these simulations, the elliptical instability led to gradual spin-synchronisation, with the planetary spin remaining prograde and approximately aligned with the orbit. In this section, I present simulations of the elliptical instability in planets with an initially purely retrograde (anti-aligned) spin, with a particular emphasis on studying tidal spin-orbit alignment driven by the elliptical instability. In Fig.~\ref{8} (see also \citealt{Barker2015a}), I have demonstrated that the strongest instabilities indeed occur for retrograde spins, i.e.~when $n \leq 0$, and that instability is also possible when $\frac{n}{\Omega}\lesssim -1$, if $A$ is sufficiently large, where the elliptical instability is usually thought to be absent.
The spin-orbit evolution can be analysed by considering the spin-orbit angle $\psi$, defined by \begin{eqnarray} \cos \psi = \hat{\boldsymbol{n}} \cdot \hat{\boldsymbol{L}}_0 =\frac{\mathrm{sgn}(n)L_{0,z}}{\sqrt{L_{0,x}^2+L_{0,y}^2+L_{0,z}^2}}, \end{eqnarray} where $\boldsymbol{L}_0$ is the fluid angular momentum in the inertial frame and $\hat{\boldsymbol{n}}$ is the orbit's unit normal vector. Starting from a purely anti-aligned spin, I expect the spin to align (and synchronise) with the orbit so that this angle will evolve from $\cos\psi=-1$ to $\cos\psi=1$. In this paper, I assume $\boldsymbol{n}$ is constant, i.e.~the orbit is fixed in space. This is only appropriate in the limit of very large orbital angular momentum, which is a reasonable approximation when considering the spin evolution of a hot Jupiter (in reality, $\boldsymbol{n}$ will also evolve due to tides in the star). In this limit, we expect the planet's spin to align with its orbit on a timescale that is comparable with (but not necessarily identical to) the spin-synchronisation timescale. \subsection{Violent instability when $\frac{n}{\Omega}\lesssim -1$} \label{violent} In Fig.~\ref{8}, I have demonstrated that elliptical instability can occur when $\frac{n}{\Omega} \lesssim -1$ if $A$ is sufficiently large -- as predicted by the global (and local) stability analysis \citep{Barker2015a} -- which is outside the frequency range in which it is usually thought to operate. Here I analyse the nonlinear outcome of a simulation with $\Omega=0.2$, $n=-0.4$, $A=0.1$ and $\nu=10^{-4}$, whose initial growth rate has been plotted in the left panel of Fig.~\ref{8}. In Fig.~\ref{5b}, I plot the temporal evolution of various flow quantities during the nonlinear evolution of this simulation, and in Fig.~\ref{3b}, I plot the vertical velocity and $|\boldsymbol{u}|$ for the growing mode at $t=114.73$, and $|\boldsymbol{u}|$ during the initial turbulent phase at $t=158.21$. 
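The spin-orbit angle diagnostic defined above is a one-line computation, sketched here for the case of an orbit normal along $z$ (the function name is mine, and $n\neq 0$ is assumed):

```python
import numpy as np

# Sketch of the spin-orbit angle diagnostic: cos(psi) = n_hat . L0_hat,
# with the orbit normal along z, so n_hat = (0, 0, sgn(n)); n != 0 is
# assumed. L0 is the fluid angular momentum in the inertial frame.
def cos_spin_orbit_angle(L0, n):
    L0 = np.asarray(L0, dtype=float)
    return np.sign(n) * L0[2] / np.linalg.norm(L0)
```

This returns $+1$ for an aligned spin, $-1$ for an anti-aligned one, and changes sign abruptly when $L_{0,z}$ passes through zero.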
\begin{figure} \begin{center} \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{8aaNEW} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{8eNEW} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{8bNEW} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{8ddNEW} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{8dNEW} } \end{center} \caption{Evolution of various flow quantities with time for an initially anti-aligned simulation with $\Omega=0.2,n=-0.4,A=0.1$ and $\nu=10^{-4}$, for which the elliptical instability is not usually thought to be excited. Top left: comparison of $E_z$ with the energy in the differential rotation, $E_\mathrm{dr}$. Top right: viscous dissipation rate (black line) and laminar viscous dissipation rate prediction (red dashed line). Middle left: mean asynchronism of the flow $\langle\gamma\rangle$. Middle right: Cartesian components of the angular momentum in the inertial frame. Bottom: cosine of the spin-orbit angle $\psi$, which exhibits rapid alignment at $t\sim 3200$, due to viscous dissipation of the laminar tidal flow, which preserves the rotation axis of the flow.} \label{5b} \end{figure} \begin{figure} \begin{center} \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.35\textwidth]{8fNEWa} } \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.35\textwidth]{8gNEW} } \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.35\textwidth]{8hNEW} } \end{center} \caption{Illustration of the flow on the $xz$-plane at two times in the simulation with $\Omega=0.2,n=-0.4,A=0.1$, and $\nu=10^{-4}$. Top and Middle: $u_z$ and $|\boldsymbol{u}|$ at $t=114.73$, respectively, both during the linear growth phase. Bottom: $|\boldsymbol{u}|$ during initial turbulent phase at $t=158.21$. 
This instability is a global version of the ``stack of pancakes" instability (involving primarily horizontal epicyclic motions) that arises in a local analysis of the elliptical instability (\citealt{Barker2015a}; see also \citealt{Craik1989,LL1996a}).} \label{3b} \end{figure} The initial instability is violent, causing a rapid partial-synchronisation of the planetary spin at $t\sim 100$. Zonal flows (quantified by $E_\mathrm{dr}$) are produced as the instability saturates. A second burst of instability occurs at $t\sim 1000$, after which there is no further burst of instability and waves are no longer efficiently driven (i.e.~$\langle u_z\rangle$ is small), but a zonal flow persists. During the subsequent evolution, the spin of the planet very gradually synchronises with its orbit due to viscous dissipation of the zonal flows and the basic tidal flow. Viscous torques preserve the rotation axis of the flow, so that there is a sudden transition in $\cos\psi$ (i.e.~the rotation axis flips by $180^\circ$) when $L_{0z}$ passes through zero\footnote{Note that this occurs shortly before $\Omega$ passes through zero itself, i.e. before $\langle \gamma\rangle =\Omega-n=0.4$, because I have defined $\cos\psi$ using the angular momentum unit vector, and this differs from the angular velocity unit vector because the body is highly non-symmetric. I would observe the same behaviour if the spin-orbit angle was defined using the angular velocity components but at a slightly later time.}. This behaviour would be expected based on the simplest models of tidal dissipation, such as the constant lag-time model -- if $\psi=180^{\circ}$ initially, and the orbit is fixed \citep{Hut1981,Eggleton1998,Barker2009}, then $\psi$ should not evolve, except when $L_{0z}$ passes through zero. 
\subsection{Spin-orbit alignment when the ``spin-over" mode is excited} \label{spinover} In this section I collect together two examples in which the spin is initially anti-aligned with the orbit, where a secondary elliptical instability excites the ``spin-over" mode. The spin-over mode in a sphere is the only inertial mode with harmonic degree $\ell=2$ (with azimuthal wavenumber $m=\pm 1$), and represents a rigid tilt of the planet's rotation axis. This mode is excited by the elliptical instability for a narrow range of $\Omega$ and $n<0$ (see Fig.~\ref{8} and \citealt{Barker2015a}), but is not excited when $n>0$ because the phase velocity of the mode must match that of the orbit \citep{Kerswell1994}. It occurs when the polar axis ($c$) becomes the middle axis ($b<c<a$), and is related to the ``middle moment of inertia instability" of a rigid body. Previous laboratory experiments and numerical simulations have emphasised the importance of this mode \citep{Lacaze2004,LeBars2007,LeBars2010,Cebron2010,Cebron2013}. The first example has $\Omega=0.1$, $n=-0.01$, $A=0.15$ and $\nu=3\times10^{-5}$, for which the temporal evolution of various flow quantities is plotted in Fig.~\ref{15a}. In this case, the initial elliptical instability saturates by $t\sim 500$, by which time it has produced partial synchronisation of the spin and orbit. The spin remains anti-aligned with the orbit during this phase, so that this initial instability preserves the rotation axis of the flow. However, a secondary elliptical instability subsequently grows after $t\sim 1500$, which corresponds with the excitation of the spin-over mode (I have confirmed that this occurs when the container shape satisfies $b\lesssim c\lesssim a$). This tilts the planet's spin axis away from the $z$-direction, so that the non-dissipative tidal torque acting on the equatorial bulge causes the spin axis to precess.
This is illustrated by oscillations in the $x$ and $y$ components of the angular momentum, plotted in the middle right panel in the figure. I also plot $|\boldsymbol{u}|$ for the spin-over mode at $t=11697.8$ in Fig.~\ref{15aa}. The precessional motion is gradually damped by its own instabilities (e.g.~\citealt{Kerswell1993,LorenzaniTilgner2003,Cebron2010a,Lin2015}), and by viscosity, causing gradual spin-orbit alignment. The timescale for this process appears initially to be faster than the laminar viscous timescale for $t\lesssim 9000$ (i.e.~$D>D_\mathrm{lam}$), presumably due to instabilities of the precessional flow or to the generation and damping of non-negligible differential rotation in the fluid, whose presence is indicated by the solid black line in the top left panel of Fig.~\ref{15a}. By comparing the left middle and bottom panels, the timescale for spin-orbit alignment is similar to that in which the planet's spin synchronises, but there is an initial phase in which the spin evolves towards anti-alignment. This transient spin-orbit evolution towards anti-alignment was discussed by \cite{Lai2012}, but the eventual evolution, which includes the ``equilibrium tide" viscous torque, is towards alignment (see also \citealt{RogersLin2013} and \citealt{LiWinn2015}). \begin{figure} \begin{center} \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{20ffNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{20bbNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{20ccNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{20dddNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{20eeNEW} } \end{center} \caption{Evolution of various flow quantities with time for an initially anti-aligned simulation with $\Omega=0.1,n=-0.01,A=0.15$ and $\nu=3\times10^{-5}$. 
Top left: comparison of $E_z$ with the energy in the differential rotation, $E_\mathrm{dr}$. Top right: viscous dissipation rate (black line) and laminar viscous dissipation rate prediction (red dashed line). Middle left: mean asynchronism of the flow $\langle\gamma\rangle$. Middle right: Cartesian components of the angular momentum in the inertial frame. Bottom: cosine of the spin-orbit angle $\psi$. The spin-over mode is excited when $t\sim 2200$, leading to precessional motion that is gradually damped.} \label{15a} \end{figure} \begin{figure} \begin{center} \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.35\textwidth]{20fNEW} } \end{center} \caption{Illustration of $|\boldsymbol{u}|$ on the $xz$-plane at $t=11697.8$ in the simulation with $\Omega=0.1,n=-0.01,A=0.15$, and $\nu=3\times10^{-5}$. During this phase, the rotation axis of the fluid precesses about $z$ due to the tidal torque.} \label{15aa} \end{figure} A second example in which the spin-over mode is excited is plotted in Fig.~\ref{15b}. This simulation has $\Omega=0.2$, $n=-0.01$, $A=0.1$ and $\nu=10^{-4}$. The initial elliptical instability again preserves the anti-alignment of the spin and orbit until $t\sim 1500$, when the spin-over mode is subsequently excited (at this time I observe $b\approx c\lesssim a$). This tilts the spin axis of the fluid, which precesses about the $z$-axis due to the (non-dissipative) tidal torque. This precessional motion is gradually damped on a similar timescale to that of the spin-synchronisation, due to a combination of laminar viscous dissipation (which explains the damping from $t\approx5000$ until $t\approx10000$), in combination with additional instabilities of this precessional flow. I plot $|\boldsymbol{u}|$ for the spin-over mode at $t=4174.5$ in Fig.~\ref{15bb}. In this simulation, the damping of the precessional flow drives evolution of the spin-orbit angle towards anti-alignment. 
This evolution could have been predicted \citep{Lai2012} -- however, if the simulation was run for longer, we would expect it to evolve towards alignment as a result of the laminar viscous tidal torque. \begin{figure} \begin{center} \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{9gNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{9bNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{9cNEW} } \subfigure{\includegraphics[trim=5cm 0cm 7cm 1cm, clip=true,width=0.23\textwidth]{9ddNEW} } \subfigure{\includegraphics[trim=5cm 0cm 8cm 1cm, clip=true,width=0.23\textwidth]{9eNEW} } \end{center} \caption{Evolution of various flow quantities with time for an initially anti-aligned simulation with $\Omega=0.2,n=-0.01,A=0.1$ and $\nu=10^{-4}$. Top left: comparison of $E_z$ with the energy in the differential rotation, $E_\mathrm{dr}$. Top right: viscous dissipation rate (black line) and laminar viscous dissipation rate prediction (red dashed line). Middle left: mean asynchronism of the flow $\langle\gamma\rangle$. Middle right: Cartesian components of the angular momentum in the inertial frame. Bottom: cosine of the spin-orbit angle $\psi$. The spin-over mode is excited when $t\sim 1500$, leading to precessional motion that is gradually damped.} \label{15b} \end{figure} \begin{figure} \begin{center} \subfigure{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,width=0.35\textwidth]{9fNEW} } \end{center} \caption{Illustration of $|\boldsymbol{u}|$ on the $xz$-plane at $t=4174.5$ in the simulation with $\Omega=0.2,n=-0.01,A=0.1$, and $\nu=10^{-4}$. 
This illustrates the flow after the spin-over mode has been excited, when the rotation axis of the fluid precesses about $z$ due to the tidal torque.} \label{15bb} \end{figure} In both simulations, the spin-over mode is excited when $b\lesssim c\lesssim a$, corresponding with an instability when the axis of rotation is the middle axis, as expected \citep{Kerswell1994}. Similar spin-orbit alignment is observed in cases with a rigid outer boundary (see Fig.~\ref{15c}). The spin-over mode is not observed in cases with an aligned spin and orbit, as predicted by the global stability analysis \citep{Barker2015a}, which I have confirmed in the simulations performed for this work (discussed in \S \ref{Nonlinear}). The excitation of the spin-over mode, and the gradual damping of the resulting precessional motion, is not captured in the simplest models of tidal dissipation, such as the constant time-lag model \citep{Hut1981,Eggleton1998,ML2002,Barker2009}. However, it can be captured by considering the different components of the tidal response to be damped at different rates (e.g.~\citealt{Lai2012,Ogilvie2014}). In particular, the behaviour observed here can be reproduced if the $\ell=2, |m|=1, \omega=-\Omega$ component of the tidal response is damped at a rate that is enhanced by an $O(1)$ factor over that of the other components. However, I do not find evidence that this component could be damped \textit{much} more efficiently than the basic tidal flow (which would be required in order to explain tidal spin-orbit alignment for stars hosting hot Jupiters without planetary inspiral, for example). Whether the spin-over mode would be excited in reality by the elliptical instability inside a planet is unclear. Hot Jupiters are likely to begin their lives rapidly rotating, to be subsequently spun-down by tides. Basing my intuition on Eqs.~\ref{shape}, I expect the body to be oblate ($c<b$), where this mode would not be excited if the spin is aligned or anti-aligned with its orbit.
However, this mode could potentially be excited if the planet is slowly rotating (retrogradely with respect to the orbit), but it is then moved to a very short-period ($P\ll 1$ d) orbit so that it has $b<c$, where this instability could operate. In addition, this instability could be excited when the spin and orbit are already significantly misaligned. For a different application, it does not seem likely that this mode could be excited by the elliptical instability inside a solar-type star hosting a short-period planet on a retrograde orbit (cf.~\citealt{Cebron2013}). This is because the tidal deformation is likely to be much too weak to allow $b<c$ for realistic stellar rotation rates. Nevertheless, further work to study the excitation of this mode and to understand the damping of precessional motions in planets and stars more generally would certainly be worthwhile (e.g.~\citealt{PapPringle1982}). \section{Discussion} \label{discussionresults} The global nonlinear evolution of the elliptical instability shares many properties with its local Cartesian counterpart \citep{BL2013,BL2014}. The instability in both cases leads to ``bursty", cyclic behaviour associated with the formation of coherent structures in the flow, e.g., we can compare the top panel of Fig.~\ref{2} with the same panel in Fig.~4 of \cite{BL2013}. In the local model, this cyclic behaviour was related to the formation of vertically-aligned columnar vortices, which subsequently inhibited energy injection into the flow by the elliptical instability until these vortices had been sufficiently damped by viscosity. In this work I have found zonal flows to play an analogous role to the columnar vortices obtained in the local model. (A difference to note in the setup is that the local model treated the tidal flow as a fixed background flow, whereas here I allow it to evolve self-consistently.)
\begin{figure} \begin{center} \subfigure{\includegraphics[trim=5cm 0cm 5cm 0cm, clip=true,width=0.46\textwidth]{DvsepsGlobLoc} } \subfigure{\includegraphics[trim=5cm 0cm 5cm 0cm, clip=true,width=0.46\textwidth]{uzvsepsGlobLoc} } \end{center} \caption{Collection of results plotting mean dissipation and mean RMS vertical velocity (normalised by the appropriate power of $\gamma$) against $\epsilon$ for the global simulations with a free surface (black crosses), with RMS errors identified by the error bars. Also added are the results from the local model. The magnitudes of these quantities are consistent with the scalings from the local model. The maximum dissipation is somewhat larger than the mean dissipation, but it is only attained for a short duration. This demonstrates that the RMS turbulent velocity is consistent with being $O(\epsilon)$ and the turbulent dissipation with being $O(\epsilon^3)$ in the limit of small $\epsilon$ (see \S~\ref{astrophysical} for an alternative possible scaling).} \label{dissplot} \end{figure} In \cite{BL2013}, we provided crude arguments to estimate the turbulent dissipation resulting from the elliptical instability. We assumed that an unstable mode (with a velocity amplitude $u$ and a wavelength $\lambda$) grows until secondary shear instabilities (with growth rates scaling with $u/\lambda$) become strong enough to prevent further amplitude growth (when the primary elliptical instability growth rate $\sigma \sim u/\lambda$), leading to a saturated velocity amplitude that can be written \begin{eqnarray} \label{umlt} u\sim u_z\equiv C \lambda \sigma, \end{eqnarray} where $C$ is assumed to be independent of $\epsilon$ and $\nu$. If this is the case, we have a turbulent cascade with an ``outer scale" given by $\lambda$, and turbulent dissipation $D\sim u^3/\lambda\sim \sigma^3\lambda^2$. 
Given that $\sigma \sim \epsilon\gamma$, we can write \begin{eqnarray} \label{dmlt} D \equiv \chi \epsilon^3 \gamma^3 \lambda^2, \end{eqnarray} where $\chi$ is an efficiency factor (which is assumed to be independent of $\epsilon$ and $\nu$, though may vary to some extent with $\Omega$ and $n$). For simplicity, I will also consider $\lambda=R_p$, the size of the equivalent spherical planet, so that the wavelength of the dominant scale is taken into account with $\chi$. In \S~\ref{astrophysical} we discuss the possibility that $\lambda$ may in fact depend on $\epsilon\gamma$ when $\epsilon\gamma\ll1$. In the local model (with weak magnetic fields; \citealt{BL2014}), we provided evidence to support Eqs.~\ref{umlt} \& \ref{dmlt}, with $\chi\sim 0.01$ and $C\sim 0.2$ (which appeared to be approximately independent of viscosity, where $\lambda$ was taken to be the size of the box), at least over the limited range of $\epsilon$ (and $\nu$) that we could study numerically. To conclusively confirm or refute the scalings of Eqs.~\ref{umlt} \& \ref{dmlt} would require global simulations over a much wider range of $\epsilon$ than I have considered here. This is not possible with current computational resources. Instead, I will compare the global results with the scalings obtained from the local model. While I have not observed sustained turbulence in all of the global simulations (unlike the local model with magnetic fields), it is nevertheless worthwhile to verify whether the results are consistent with these scalings for the computed parameter range, given that my ultimate aim is to determine the astrophysical importance of the instability. (However, it should be noted that the global simulations have not been demonstrated to exhibit mean dissipation rates that are independent of the viscosity -- unlike the local simulations, at least as far as we can probe this numerically.) 
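These scalings can be collected into a small estimate function (a sketch; the function name is mine, the defaults $\chi=0.01$ and $C=0.2$ are the fiducial local-model values quoted above, and $\lambda$ is measured in units of $R_p$):

```python
# Sketch of the saturation scalings: u ~ C * lambda * sigma and
# D ~ chi * eps^3 * gamma^3 * lambda^2, with sigma ~ eps * gamma.
# The defaults chi = 0.01 and C = 0.2 are the fiducial local-model
# values; lambda (lam) is in units of R_p. Function name is illustrative.
def saturation_estimates(eps, gamma, lam=1.0, chi=0.01, C=0.2):
    sigma = eps * gamma                      # linear growth rate
    u = C * lam * sigma                      # saturated velocity amplitude
    D = chi * eps**3 * gamma**3 * lam**2     # turbulent dissipation rate
    return u, D
```

For example, $\epsilon=0.05$ and $\gamma=0.2$ give $u=2\times10^{-3}$ and $D=10^{-8}$ in these units, illustrating the steep $\epsilon^3$ dependence of the dissipation.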
I compute a time average of $D$ and (RMS) $\langle u_z\rangle $ over each simulation and plot these quantities in Fig.~\ref{dissplot}, after normalising by the appropriate power of $\gamma$. Several caveats should be kept in mind: I do not remove the viscous decay of the basic flow (which can be substantial in some cases, as listed at $t=0$ in Table \ref{table2}), and I also include the whole simulation (including the linear growth phase; except for the laminar phases at late times in cases where the turbulence does not persist). Nevertheless, the results presented here are representative of those obtained in the whole simulation. In the top panel I also plot the maximum value of $D$, to illustrate an absolute upper bound on the dissipation obtained in the simulations (however, these values are only attained for a very short time interval). The top two panels of Fig.~\ref{dissplot} illustrate that the mean dissipation and mean RMS vertical velocity obtained in the global model are roughly consistent with the scalings that describe the local results for a given $\epsilon$. While there is significant scatter (due to differences in $\Omega,n,\nu$), these results are broadly consistent (to within an $O(1)$ factor) if $\chi\sim 0.01-0.1$ and $C\sim 0.1$ over this range of $\epsilon$ (the peak dissipation is somewhat stronger, but is attained only for a short duration). This demonstrates that the dissipation and turbulent velocities are quantitatively similar to those obtained from the local model over the observed range of $\epsilon$. Note that $\chi$ is controlled by which mode is driven unstable, which depends on $n$ and $\Omega$, in addition to $\epsilon$, which may explain some of the scatter. Fig.~\ref{dissplot} suggests that we can consider $\chi \lesssim 0.1$ to provide an upper limit for the mean dissipation resulting from the elliptical instability.
In the local model, magnetic stresses were found to significantly modify the hydrodynamical evolution by preventing the maintenance of large-scale coherent vortices, thereby allowing sustained (as opposed to cyclic) turbulence \citep{BL2014}. In addition, the elliptical instability was found to drive a small-scale dynamo. Recent global simulations have shown that the elliptical instability could act as a ``system-scale" dynamo \citep{Cebron2014}. However, it remains to be seen how magnetic fields would modify the global evolution of the elliptical instability discussed in this work. \subsection{Astrophysical implications} \label{astrophysical} I have simulated the elliptical instability for a range of values of $A,n$ and $\Omega$. Which values are realistic? The shortest-period observed hot Jupiters, such as WASP-19 b \citep{Hebb2010} or WASP-121 b \citep{WASP121} have $A\sim 0.05$ and $|n|\sim 0.2$, which is at the lower end of the tidal amplitudes, and within the considered range of $|n|$, that I have simulated. Unfortunately, we do not have constraints on the rotation rates (or axes) of these planets, therefore $\Omega$ (and the sign of $n$) is not determined. Planets in wider orbits typically have smaller $A$, which I have not directly studied, requiring us to resort to scaling laws such as Eq.~\ref{dmlt} to apply the results to these planets. We can estimate the role of the elliptical instability for tidal circularisation and synchronisation by assuming Eq.~\ref{dmlt} to be valid for both circularisation and synchronisation -- for the synchronisation problem, this is consistent with the local model and compatible with the global simulation results, and I expect it to remain valid for the circularisation problem because the corresponding linear instability has similar properties \citep{KerswellMalkus1998}. 
I define the circularisation period $P_e$ to be the maximum orbital period for which an initially eccentric planetary orbit can be circularised within $1$ Gyr. The synchronisation period $P_\Omega$ is defined similarly. Using the equations listed in Appendix \ref{timescales} (based on Eqs.~24 and 25 of \citealt{BL2014}), I obtain \begin{eqnarray} P_{e}\approx 2.8 \;\mathrm{d} \left(\frac{\chi}{0.1}\right)^{\frac{3}{25}}\left(\frac{m_p}{1 M_J}\right)^{\frac{2}{25}}\left(\frac{m_\star}{1 M_\odot}\right)^{-\frac{2}{25}}\left(\frac{P_{\mathrm{dyn}}}{3.6 \mathrm{hr}}\right)^{\frac{22}{25}}, \end{eqnarray} where $P_\mathrm{dyn}=2\pi/\omega_d$. Similarly, I obtain \begin{eqnarray} \label{estimate2} P_{\Omega}\approx 14.7 \;\mathrm{d} \left(\frac{\chi}{0.1}\right)^{\frac{1}{6}}\left(\frac{P_{\mathrm{dyn}}}{3.6 \mathrm{hr}}\right) \left(\frac{1 \mathrm{d}}{P_{\mathrm{rot}}}\right)^{\frac{1}{6}}, \end{eqnarray} if the tidal period is $P_{\mathrm{rot}}/2$ (which would be appropriate if $\Omega\gg n$ -- note that we obtained a different estimate in \citealt{BL2014}, where we took the tidal period to be $P$). I have assumed the star to be solar-like and the planet to have Jupiter's mass, radius and radius of gyration for these estimates. The simulations (Fig.~\ref{dissplot}) suggest these to provide an upper limit on $P_e$ and $P_\Omega$ (note also that the estimates in \citealt{BL2014} assumed $\chi=10^{-2}$). These estimates are not strongly sensitive to $\chi$ (as long as $D$ scales approximately as $\epsilon^{3}$), so the scatter in Fig.~\ref{dissplot} is unlikely to change these predictions significantly. However, the radius of the planet does significantly affect these quantities, since $P_e\propto R_p^{\frac{33}{25}}$ and $P_\Omega \propto R_p^{\frac{3}{2}}$, so much greater dissipation would be expected early in the life of the system, when the planet had a larger radius, or alternatively if the planet can remain inflated. 
If I instead take $R_p=1.5 R_J$ for a significantly inflated (or very young) hot Jupiter, then $P_e=4.8$ d and $P_\Omega=27$ d. Note also that the timescale for spin-orbit alignment will be comparable with the spin synchronisation timescale (because the angular momentum in the orbit is typically much less than that in the planetary spin), so the spins of hot Jupiters should therefore be aligned with their orbits if their orbital periods are shorter than approximately $P_\Omega$. I conclude that the circular orbits of hot Jupiters inside about 3 days may be explained by the elliptical instability. In addition, I predict the spin synchronisation (and spin-orbit alignment) of these planets with their orbits out to about 10-15 days. These estimates may be revised somewhat if the planet can remain inflated, or if consideration of the coupled orbital and thermal evolution of these planets can modify this picture. However, it appears necessary to invoke other mechanisms to explain tidal circularisation for longer orbital periods e.g. (linear) excitation and (linear or nonlinear) dissipation of inertial waves in a planet with a core \citep{Gio2004,GoodmanLackner2009,Ogilvie2013,FBBO2014}, or dissipation in the core itself \citep{Remus2012,Storch2014}. If we turn to the related problem of explaining the observed circularisation and synchronisation of close binary stars \citep{Mazeh2008}, we find $P_e\approx 3.8$ d and $P_\Omega\approx 8.5$ d -- we have considered both stars to have the Sun's current mass and radius and $P_\mathrm{rot}=$ 10 d. The elliptical instability is therefore unlikely to be the primary explanation of the observed circularisation and synchronisation of close solar-type binary stars, but it could play an important role at short orbital periods. It was pointed out by the referee that there is an alternative possible scaling for the dissipation rate when $\epsilon\gamma \ll 1$ to the one given by Eq.~\ref{dmlt} with $\lambda=R_p$. 
One reason is that the elliptical instability occurs in frequency bands of width $O(\epsilon\gamma)$ around exact resonance. Since there are only a finite number of global modes with $\lambda\sim R_p$, the probability that a planetary-scale mode would be excited becomes very small as $\epsilon\gamma\rightarrow 0$. Resonances will always be found on small-enough scales, since the number of modes with a given maximum wavelength $\lambda$ scales as $\lambda^{-3}$. This suggests that the ``outer-scale" $\lambda$ may scale as $(\epsilon\gamma)^{\frac{1}{3}}$. If we otherwise follow the above arguments, we would obtain $D\propto (\epsilon\gamma)^{\frac{11}{3}}$, i.e., $\chi\propto \left(\epsilon\gamma\right)^{\frac{2}{3}}$. Over the range of $\epsilon$ that I have simulated, such a scaling is also consistent with the local model data (with fixed $\gamma$) plotted in Fig.~\ref{dissplot} if $D\approx 0.1\epsilon^{\frac{11}{3}}$ and $ u_z\approx 0.4\epsilon^{\frac{4}{3}}$ (this is not plotted on Fig.~\ref{dissplot} because not all global simulations have the same $\gamma$). This would predict weaker dissipation (than Eq.~\ref{dmlt} with $\lambda= R_p$) when $\epsilon\gamma\ll 1$, and therefore somewhat smaller values of $P_e$ and $P_\Omega$ -- in particular I estimate that these scalings would predict $P_e\approx 2$ d and $P_{\Omega}\approx 8$ d for hot Jupiters, instead of $3$ d and $15$ d, respectively. This alternative scaling would also suggest that tidal evolution driven by the elliptical instability would become somewhat less efficient than predicted by Eq.~\ref{dmlt} with $\lambda=R_p$ as we approach exact synchronism or circularity. The currently available data do not allow us to distinguish between $D\propto \epsilon^3$ and $D\propto \epsilon^\frac{11}{3}$ (at fixed $\gamma$). As a result, the values of $P_e$ and $P_\Omega$ quoted above should probably be regarded as upper limits.
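The radius dependence of these estimates can be checked numerically. The following sketch (my own, using only the exponents and fiducial values stated in the text) verifies that $P_e\propto R_p^{\frac{33}{25}}$ and $P_\Omega\propto R_p^{\frac{3}{2}}$, anchored at the Jupiter-radius values, reproduce the inflated-planet estimates quoted above:

```python
# Hedged check of the radius scalings quoted in the text:
# P_e ∝ R_p^(33/25) and P_Omega ∝ R_p^(3/2), anchored at the fiducial
# values P_e = 2.8 d and P_Omega = 14.7 d for R_p = 1 R_J.
Pe_fid, POm_fid = 2.8, 14.7    # days, for a Jupiter-radius planet
Rp = 1.5                       # inflated planet radius in units of R_J

Pe  = Pe_fid  * Rp**(33 / 25)  # circularisation period
POm = POm_fid * Rp**(3 / 2)    # synchronisation period
print(round(Pe, 1), round(POm))  # recovers ~4.8 d and ~27 d
```

Both numbers agree with the inflated-planet values $P_e=4.8$ d and $P_\Omega=27$ d stated for $R_p=1.5 R_J$.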
A final point to remember is that these simulations have been forced to adopt values of the kinematic viscosity $\nu$ that were very much larger, by at least 10 orders of magnitude, than the values expected in a giant planet interior \citep{Jupiter2007}. It is hoped that these global simulations capture the dominant ``outer scales" of elliptical-instability driven turbulence, and that the mean flow quantities (such as the mean dissipation rates) are not strongly dependent on resolving much smaller scales. But whether this is true is difficult to test numerically. Further simulations that probe more deeply into the $\nu\ll 1$ regime (either using a rigid boundary or a free surface) would be worthwhile. \section{Conclusions} I have presented results from the first global simulations of the elliptical instability in a rotating and tidally deformed gaseous planet (or star) with a free surface. My primary motivation was to study tides inside the shortest-period hot Jupiters. The tides in these planets have large enough amplitudes that consideration of nonlinear tidal effects is likely to be essential. In particular, the large-scale tidal flow in these planets is probably subject to the elliptical instability, which could play an important role in circularising, synchronising and aligning the spins of the shortest-period hot Jupiters. The simulations were designed to study the nonlinear evolution of the elliptical instability, to determine its outcome and astrophysical relevance. I have adopted an intentionally simplified model consisting of a rotating, homogeneous, and viscous fluid planet subjected to tidal gravity. In a companion paper \citep{Barker2015a}, the global modes and instabilities of such a planet were studied. In the simulations, I have observed the elliptical instability to produce turbulence in the planetary interior, but this is bursty, and leads to temporally-variable dissipation and synchronisation of the spin and orbit.
Angular momentum is deposited non-uniformly throughout the planetary interior, and this leads to the development of differential rotation in the form of zonal flows. These zonal flows play an important role in the saturation of the elliptical instability, leading to bursty evolution that is reminiscent of predator-prey\footnote{The zonal flows can be thought of as the ``foxes" and the instability-driven inertial waves can be thought of as the ``rabbits".} dynamics in some cases. These zonal flows, and their interaction with the elliptical instability, may be responsible for the collapses observed in previous laboratory experiments and numerical simulations (e.g.~\citealt{Malkus1989,LeBarsReview2015}). In addition, we have previously observed similar bursty behaviour in the local model of \cite{BL2013}, but where columnar vortices played the role of zonal flows. These results highlight the ubiquity of zonal flows in tidally forced rotating planets, demonstrating that these are generated even when realistic boundary conditions are adopted. I have demonstrated that a violent elliptical instability is observed when $\frac{n}{\Omega}\lesssim -1$, as predicted in the companion paper \citep{Barker2015a}, which is outside the frequency range in which it is usually thought to operate. This occurs for retrograde spins if the tidal amplitude is sufficiently large, so that inertial waves can be excited when not exactly in resonance. This could occur during the early stages in the life of hot Jupiters, if the planet is kicked into a very short-period orbit but possesses a retrograde spin. I have also simulated the instability in a planet in which the surface is modelled as a rigid (but stress-free) boundary rather than a free surface. I have found qualitative and broad quantitative agreement for both the linear properties of the instability \citep{Barker2015a}, as well as its nonlinear evolution (Appendix \ref{rigidsims}). 
This is promising, because numerical simulations with a rigid (but stress-free) boundary are much less expensive computationally (e.g.~\citealt{Cebron2013}). In simulations with an initially anti-aligned spin and orbit, the elliptical instability is observed to drive spin-orbit alignment. In all cases, the timescale for spin-orbit alignment is found to be similar (but not identical) to that of the spin synchronisation. In some cases, the ``spin-over" mode is excited (effectively a rigid tilting of the spin axis of the planet), which precesses due to the (non-dissipative) tidal torque. This precessional motion is gradually damped, leading to spin-orbit alignment that is not fully captured using the simplest models of tidal dissipation, such as the constant time-lag model \citep{Hut1981,Eggleton1998,ML2002,Barker2009}, where all components of the tide damp at the same rate \citep{Lai2012,Ogilvie2014}. Further work is required to study in more detail the damping of this precessional flow, and in particular to determine whether or not the alignment could occur before spin synchronisation (or planetary inspiral), which may have relevance to the spin-orbit alignment of hot Jupiter host stars (e.g.~\citealt{Albrecht2012}). I have quantified the tidal dissipation resulting from the elliptical instability, and I suggest that it could explain the circular orbits of the shortest-period hot Jupiters inside about 3 days. However, it seems necessary to invoke other mechanisms to explain tidal circularisation for longer orbital periods (e.g.~linear excitation of inertial waves in a planet with a core, or dissipation in the core itself). I also predict the spin synchronisation and spin-orbit alignment of hot Jupiters with orbital periods shorter than about 10 (or perhaps 15) days as a result of this mechanism.
Future work is required to adopt more realistic interior models, including the presence of an inner core, as well as realistic density and entropy profiles, in addition to the possible presence of magnetic fields. It would also be worthwhile to probe more deeply into the regime of small viscosities, allowing even smaller values of $\epsilon$ to be simulated -- perhaps over a sufficient range to allow us to distinguish between the two possible scaling laws for the dissipation that are consistent with the data in \S~\ref{astrophysical}. Finally, spin-orbit alignment should be studied for more general (rather than initially anti-aligned) configurations, also taking into account the evolution of the orbit. \section*{Acknowledgements} I would like to thank Paul Fischer for the help he provided with Nek5000 during the early stages of this project, and for providing the spherical mesh that was used in this work. I would like to thank Harry Braviner, Benjamin Favier, Yufeng Lin and Gordon Ogilvie for discussions at various stages in the project, Jeremy Goodman for a useful and thought-provoking referee report, and Pavel Ivanov for helpful comments. This work was supported by the Leverhulme Trust and Isaac Newton Trust through the award of an Early Career Fellowship, but the early stages were supported by STFC through grants ST/J001570/1 and ST/L000636/1. Some of the simulations reported here used the DiRAC Complexity system, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment is funded by BIS National E-Infrastructure capital grant ST/K000373/1 and STFC DiRAC Operations grant ST/K0003259/1. DiRAC is part of the National E-Infrastructure.
\section{Conclusions} We have analyzed the convergence properties of the Method of Reflections for both Poisson and Stokes equations with Dirichlet boundary conditions. For typical particle configurations extending to the whole space, convergence does not hold. However, a modified method has been obtained that ensures convergence for particle configurations with bounded capacity density. Using this method, we have proven classical homogenization results in unbounded domains for regular particle configurations and sources $f \in H^{-1}(\mathbb{R}^3)$. For the Poisson equation it was proven in \cite{NV1}, \cite{NV} that this result can be extended to sources $f\in L^{\infty}\left( \mathbb{R}^{n}\right) $. The proof in \cite{NV1}, \cite{NV} relies heavily on the derivation of the so-called screening estimate, which states that the fundamental solution for the Laplace equation in a perforated domain with Dirichlet boundary conditions decays exponentially. In \cite{NV1}, \cite{NV}, this decay was proven by means of the Maximum Principle. As we have seen in Lemma \ref{lem:decayQuasiHarmonic} (cf. also Remark \ref{rem:exponentialDecay}), it is possible to derive such exponential screening estimates without using Maximum Principles, relying instead on Poincaré estimates for perforated domains (cf. Corollary \ref{cor:PoincarePerforated} and Lemma \ref{lem:poincareAnnulus}). Therefore, the results can be expected to extend to more general elliptic operators. For the Stokes equations, however, we do not have such a strong decay estimate (cf. Lemma \ref{lem:SpolynomialDecay}). In fact, the solutions of the Brinkman equations \eqref{S2E9} with compactly supported sources $g \in L^\infty(\mathbb{R}^3)$ decay only cubically in the distance to the support of $g$ (cf. \cite{AHF}). Therefore, the solutions with sources $f \in L^\infty(\mathbb{R}^3)$ cannot be expected to be bounded.
The boundary conditions used in \cite{Luke} are not the homogeneous Dirichlet conditions but the set of natural boundary conditions for sedimenting particles (or an analogous set of Neumann-Dirichlet boundary conditions in the case of the Poisson equation). It is worth pointing out that the screening effects, which have been discussed above, can be expected to be rather different for the set of boundary conditions in \cite{Luke} and for the Dirichlet boundary conditions that we considered. Using again the electrostatic analogy, the Dirichlet boundary conditions we consider in this paper are those corresponding to grounded conducting particles, while those in \cite{Luke} correspond to isolated conducting particles. Hence, the Dirichlet boundary conditions result in the onset of induced charges at the particles which are proportional to the capacity of the particles. On the contrary, the type of boundary conditions used in \cite{Luke} results in the onset of induced dipoles at the particles, instead of charges. The potentials produced by dipoles decay faster than the ones produced by charges, and as a consequence screening effects and collective particle interactions might be expected to be less relevant. Understanding this type of dipole-induced screening effects is an interesting issue, which deserves further investigation. \section{Homogenization} \label{sec:Homogenization} In this section, we will consider particles of equal radii $r$ with centers on the lattice $ \Gamma := (d \mathbb{Z})^3$. Then, Condition \ref{cond:Capacity} is satisfied with $\mu_0 = r d^{-3}$. For the homogenization, it is convenient to include the factor $4 \pi$ in the capacity density, which we define as \[ \mu := 4 \pi r d^{-3}. \] We are interested in the limiting behavior of Problem \eqref{S1E1} for $r,d \to 0$ and fixed $\mu$. Thus, throughout this section, we will consider $\mu$ as a fixed quantity.
Since for fixed $\mu$ Condition \ref{cond:particlesNotToClose} will be satisfied if $r$ is sufficiently small, we will always assume that $r$ is chosen in such a way. In the following, we will use $\Gamma$ as the index set for the particles (i.e., we index them by their space position). Moreover, since for fixed $\mu$, the particle configuration does only depend on $r$, we will write an index $r$ to indicate this dependence, e.g., we write \[ K_r = \bigcup_{x \in \Gamma} \overline{B_x}. \] \subsection{A Poincaré Inequality for Perforated Domains} An important feature of this regular particle distribution is that Problem \eqref{S1E1} admits a unique solution in $H^1(\mathbb{R}^3)$ for sources $f \in H^{-1}(\mathbb{R}^3)$, instead of solutions only in $\dot{H}^1(\mathbb{R}^3)$ for sources in $\dot{H}^{-1}(\mathbb{R}^3)$. This is due to the existence of a Poincaré inequality in the space $H^1_0(\mathbb{R}^3 \backslash K)$. We first notice the following local Poincaré inequality. \begin{lemma} \label{lem:poincare} Assume $ z \in \mathbb{R}^3$, $ R> \rho > 0 $ and $ u \in H^1(B_R(z))$ such that $ u = 0 $ in $B_\rho(z)$. Then, the following Poincaré inequality holds: \[ \|u\|_{L^2(B_R(z))}^2 \leq \frac{R^3}{\rho} \|\nabla u\|^2_{L^2(B_R(z))}. \] \end{lemma} \begin{proof} It suffices to prove the estimate for $ z= 0 $ and for smooth functions. Let $ \varphi \in C^1(B_R(0)) $ such that $ \varphi \equiv 0 $ in $B_\rho(0)$. Then, denoting the unit sphere in $ \mathbb{R}^3$ by $ S^2$ we have for every $ x \in S^2 $ and every $ t \in (\rho,R)$ \[ |\varphi(tx)| \leq \int_\rho^R |\nabla \varphi(sx)| \, d s. \] Thus, \begin{align} \int_{B_R(0)} |\varphi|^2 \, d y & \leq \int_{S^2} \int_\rho^R t^2 \left( \int_\rho^R |\nabla \varphi(sx)| \, d s \right)^2 \, d t \, d x \\ & \leq \frac{1}{3} (R^3 - \rho^3) \int_\rho^R \frac{1}{s^2} \, d s \int_{S^2} \int_\rho^R s^2 |\nabla \varphi(sx)|^2 \, d s \, d x\\ & \leq \frac{R^3}{\rho} \int_{B_R(z)} |\nabla \varphi|^2 \, d y. 
\qedhere \end{align} \end{proof} \begin{corollary} \label{cor:PoincarePerforated} All $ u \in H^1_0(\mathbb{R}^3 \backslash K_r)$ satisfy \[ \|u\|^2_{L^2(\mathbb{R}^3)} \leq C \mu^{-1} \| \nabla u \|^2_{L^2(\mathbb{R}^3)} \] for a universal constant $C$. \end{corollary} \begin{corollary} \label{cor:Existence} For all $f\in H^{-1}(\mathbb{R}^3)$, there exists a unique weak solution $u \in H^1(\mathbb{R}^3)$ to the problem \begin{equation} \label{eq:poissonPerforated} \begin{aligned} -\Delta u &= f \quad \text{in} ~ \mathbb{R}^3 \backslash K_r, \\ u &= 0 \quad \text{in} ~ K_r, \end{aligned} \end{equation} which satisfies \[ \| u \|^2_{H^1(\mathbb{R}^3)} \leq (1 + C \mu^{-1}) \| f\|^2_{H^{-1}(\mathbb{R}^3)}. \] \end{corollary} \subsection{The Main Idea of the proof} In order to explain how we are going to prove the homogenization result, we need the following definition. \begin{definition} \label{def:monopole} For a particle with radius $r$ at position $ x \in \Gamma_r $, we define the operator $T_{x}$ from $\dot{H}^1$ to $\dot{H}^{-1}(\mathbb{R}^3)$ by means of \[ Q_x = G_0 T_x. \] Moreover, we define $ M_{x} \colon \dot{H}^{1}(\mathbb{R}^3) \to \dot{H}^{-1}(\mathbb{R}^3)$ to be the uniform charge density approximation of $T_{x}$, \begin{align} (M_{x} u)(y) = \frac{(u)_{x,r}}{r} \mathcal{H}^2 |_{\partial B_r(x)}. \end{align} Furthermore, we define $\tilde{Q}_x = G_0 M_{x}$ to be the induced approximation for $Q_x$. The uniform charge density approximations of the operators $ A_\beta^{(r)} $ from Definition \ref{def:A_beta} are defined by \begin{equation} \begin{aligned} M_\beta^{(r)} &:= \sum_{x_1} e^{-\beta_1|x_1|} \tilde{Q}_{x_1} \sum_{x_2 \neq x_1} e^{-\beta_2|x_2|} \tilde{Q}_{x_2} \cdots \!\! \sum_{x_n \neq x_{n-1}} \!\! e^{-\beta_n|x_n|} \tilde{Q}_{x_n}. \end{aligned} \end{equation} \end{definition} \begin{remark} Note that both $T_x$ and $M_x$ implicitly depend on $r$.
\end{remark} \begin{remark} \label{lem:charT} For $u \in H^1(\mathbb{R}^3)$, $ T_{x} u $ is supported in $ \overline{B_x} $. Since $T_{x} = G_0^{-1} Q_{x}$, and $Q_{x} $ is the orthogonal projection to $H_0^1(\mathbb{R}^3 \backslash \overline{B_x})^\perp$, this follows directly from the characterization \eqref{eq:characterizationOfOrthogonal}. \end{remark} To understand the meaning of the operator $T_{x}$, we take any potential $u \in \dot{H}^1(\mathbb{R}^3)$ and denote by $f:=G_0^{-1} u$ the source corresponding to $u$. Moreover, we denote $g= T_{x} u$. Then, adding $g$ to $f$ gives a source $f+g$, which corresponds to a potential $v := G_0(f+g)$ that solves \begin{align} -\Delta v &= f \quad \text{in} ~ \mathbb{R}^3 \backslash \overline{B_x}, \\ v &= 0 \quad \text{in} ~ \overline{B_x}. \end{align} We can also draw the following analogy to electrostatics. In this context, $T_{x} G_0 f$ gives the charge density that is induced by $f$ in $B_x$ if $B_x$ represents a grounded conductor (surrounded by vacuum). With this definition, the original series obtained by the Method of Reflections \eqref{eq:ProjectionSeries} becomes \begin{equation} \label{eq:ScatteringSeries} G_0 - \sum_{x_1} G_0 T_{x_1} G_0 + \sum_{x_1} \sum_{ x_2 \neq x_1} G_0 T_{x_1} G_0 T_{x_2} G_0 - \dots. \end{equation} This is how the series appears in \cite{Kirp}, where $T_{x}$ is called a scattering operator. In that paper, the Method of Reflections is interpreted as a scattering process. Viewing $G_0$ as some kind of propagator, \eqref{eq:ScatteringSeries} inherits the interpretation of the potential due to a source which propagates according to $G_0$ and is scattered at the particles by $T_{x}$. We want to give a heuristic explanation for the homogenization result Theorem \ref{HomogLambZero}. To do so, let us pretend for the moment that the series \eqref{eq:ScatteringSeries} exists, and that all the operators are well defined on $H^1(\mathbb{R}^3)$ (instead of $\dot{H}^1(\mathbb{R}^3)$).
Moreover, let us assume that we already know that in the limit $r \to 0$, we can replace the operator $T_x$ by $M_x$ from Definition \ref{def:monopole}. Using the definition of $M_x$ and recalling the fixed value of the capacity density $\mu = 4 \pi r d^{-3}$, the series $ \sum M_x u$ can be interpreted as a Riemann sum for $\mu u$, leading to \[ \sum_{x} T_x u \approx \sum_{x} M_x u \rightharpoonup \mu J u \quad \text{in} ~ H^{-1}(\mathbb{R}^3), \] as $r \to 0$, where $J$ is the inclusion from $H^1(\mathbb{R}^3)$ to $H^{-1}(\mathbb{R}^3)$. Therefore, the first order term in the series \eqref{eq:ScatteringSeries} converges to $(-\mu G_0 J) G_0 f$. It seems plausible that the higher order terms converge weakly to $(-\mu G_0 J)^k G_0 f$. Thus, the weak limit of the sequence of solutions is formally given by \[ \sum_{k=0}^\infty (-\mu G_0 J)^k G_0 = (1+ \mu G_0 J)^{-1} G_0 = (-\Delta + \mu J)^{-1}, \] which is the desired result. Since the series \eqref{eq:ScatteringSeries} is in reality divergent, we use the modified version \begin{equation} \label{eq:seriesForHomogenization} (1 - \gamma L_r)^n G_0 f, \end{equation} which we already know to converge to the solution of \eqref{S1E1}. We want to expand \eqref{eq:seriesForHomogenization} in powers of $L$ and then to take the weak limit in each of the resulting terms separately. However, one has to take into account that the weak limit is not interchangeable with taking powers. Therefore, it turns out that it is convenient to use Lemma \ref{lem:powersOfL} in order to write $(L_r)^n$ as a sum of terms such that no particle appears back to back with itself. Somewhat surprisingly, the exponential cutoff in the definition of the operator $L$ does not cause much trouble when computing the weak limit. The only difference to the heuristic reasoning above is that some additional combinatorial identities are needed.
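The need for a damped iteration in place of the raw series \eqref{eq:ScatteringSeries} can already be seen in a scalar caricature (my own illustration, not part of the proof). On a single Fourier mode with $|k|^2 = \kappa$, $G_0$ acts as multiplication by $1/\kappa$, so the formal Neumann series $\sum_n (-\mu/\kappa)^n \kappa^{-1}$ is geometric: it sums to the resolvent $1/(\kappa+\mu)$ only when $\mu < \kappa$, and diverges otherwise:

```python
# Scalar caricature of the scattering series: for one Fourier mode with
# |k|^2 = kappa, G_0 is multiplication by 1/kappa.  The Neumann series
# sum_n (-mu/kappa)^n / kappa is geometric with ratio -mu/kappa, so it
# converges to the resolvent 1/(kappa + mu) only when mu < kappa.
def partial_sum(mu, kappa, N):
    return sum((-mu / kappa) ** n / kappa for n in range(N + 1))

mu = 1.0
good = partial_sum(mu, kappa=4.0, N=200)   # mu < kappa: converges
exact = 1.0 / (4.0 + mu)                   # resolvent value 1/(kappa + mu)
bad = partial_sum(mu, kappa=0.5, N=50)     # mu > kappa: partial sums blow up

print(abs(good - exact) < 1e-12, abs(bad) > 1e3)
```

This is only a pointwise-in-$k$ analogy: low-frequency modes ($\kappa < \mu$) are exactly where the raw series fails, which is why the resummed expression \eqref{eq:seriesForHomogenization} is used instead.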
\subsection{Weak Limits of Powers of $L$} Since the inclusion map from $\dot{H}^1(\mathbb{R}^3)$ to $\dot{H}^{-1}(\mathbb{R}^3)$ is not well defined, we need the following replacement. \begin{definition} \label{def:X} We define $X$ to be the following subspace of $\dot{H}^1$. \begin{align} X &:= \{ u \in \dot{H}^1(\mathbb{R}^3) \colon u = -\Delta v \text{ for some } v \in \dot{H}^1(\mathbb{R}^3) \}. \end{align} Moreover, we define $J \colon X \to \dot{H}^{-1}(\mathbb{R}^3)$ by means of \[ \langle Ju,w \rangle = (\nabla v, \nabla w)_{L^2(\mathbb{R}^3)} \qquad \text{for all} \quad w \in \dot{H}^1, \] where $ v \in \dot{H}^1(\mathbb{R}^3) $ is the solution to $ -\Delta v = u $. \end{definition} \begin{remark} Note that $J$ can be viewed as the inclusion map, since $\langle Ju,w \rangle = \int_{\mathbb{R}^3} u w \, d x$, whenever the latter is well defined. \end{remark} \begin{lemma} \label{lem:operatorA} The operator $A \colon \dot{H}^1(\mathbb{R}^3) \to \dot{H}^1(\mathbb{R}^3) $, \begin{align} (Au)(x) = e^{-|x|} u(x), \end{align} is a bounded linear operator with range $\mathcal{R}(A) \subset X$. Moreover, the composition $JA$, where $J$ is the inclusion operator from Definition \ref{def:X}, is a bounded operator from $\dot{H}^1(\mathbb{R}^3)$ to $\dot{H}^{-1}(\mathbb{R}^3)$. \end{lemma} \begin{proof} We observe that the range of $A$ satisfies $\mathcal{R}(A) \subset \dot{H}^1(\mathbb{R}^3) \cap L^{6/5}(\mathbb{R}^3) \subset X$. The first inclusion follows from the Gagliardo-Nirenberg-Sobolev inequality $ \| w \|_{L^6(\mathbb{R}^3)} \leq C \|\nabla w\|_{L^2(\mathbb{R}^3)}$ and Hölder's inequality. The second one is deduced by the Gagliardo-Nirenberg-Sobolev inequality, too, since this implies boundedness of the functional $ F(w):= \int_{\mathbb{R}^3} u w \, d x $ in $\dot{H}^1$ if $ u \in \dot{H}^1(\mathbb{R}^3) \cap L^{6/5}(\mathbb{R}^3)$, providing in turn a solution $ v \in \dot{H}^1(\mathbb{R}^3) $ to $ - \Delta v = u$. 
The second assertion follows from $\| Ju \|_{\dot{H}^{-1}(\mathbb{R}^3)} = \| v \|_{\dot{H}^1(\mathbb{R}^3)}$ and the reasoning above. \end{proof} \begin{proposition} \label{pro:pseqweaklimit} Let $ u \in \dot{H}^{1}(\mathbb{R}^3) $ and $ n \in \mathbb{N}_\ast $. Then, in the limit $r \to 0$ with fixed $\mu$, \begin{equation} \label{eq:pseqweaklimit} L_r^n u \rightharpoonup \sum_{l=1}^n \sum_{ \substack{ \beta \in \mathbb{N}_\ast^l \\ |\beta| = n}} \Bigg(\prod_{j=1}^l \mu G_0 J A^{\beta_j}\Bigg) u = \mu G_0 J A ( \mu G_0 J A + A)^{n-1} u =: R_n u \quad \text{in} ~ \dot{H}^1(\mathbb{R}^3). \end{equation} In particular, for all $ \gamma > 0 $ and all $M \in \mathbb{N}$ \[ (1-\gamma L_{r})^M u \rightharpoonup \bigg(1+ \sum_{n=1}^M \binom{M}{n} (-\gamma)^n R_n \bigg) u =: S_M u \quad \text{in} ~ \dot{H}^1(\mathbb{R}^3) \] \end{proposition} The fact that the complicated looking weak limit of $L_r^n$ equals $R_n$ follows from the combinatorial consideration that, expanding the power in the definition of $R_n$, each term in the sum on the right hand side will appear exactly once. As mentioned above, the proof of Proposition \ref{pro:pseqweaklimit} is based on a Riemann sum argument using the operators $T_x$ and $M_x$ from Definition \ref{def:monopole}. This is not very difficult but technical. Therefore, we first show how to derive the homogenization result from Proposition \ref{pro:pseqweaklimit} and the results from Section \ref{sec:Poisson}. \begin{proposition} \label{pro:formalLimit} Let $M \in \mathbb{N}$ and $S_M$ be the pointwise weak limit of $(1-\gamma L_{r})^M$ from Proposition \ref{pro:pseqweaklimit}. Then, for all $\mu>0$ there exists $\gamma_0 >0 $ such that, for all $\gamma \leq \gamma_0 $ and all $f \in \dot{H}^{-1}(\mathbb{R}^3)$, \[ \lim_{M\to\infty} S_M G_0 f = u, \] where $u$ is the unique weak solution to \begin{equation} \label{eq:homoPDE} -\Delta u + \mu u = f \quad \text{in} ~\mathbb{R}^3. 
\end{equation} \end{proposition} \begin{proof} We observe that $\mu G_0 J + 1$ as an operator from $X$ to $\dot{H}^1(\mathbb{R}^3)$ is invertible. Indeed, we know that for any $f \in \dot{H}^{-1}(\mathbb{R}^3) \subset H^{-1}(\mathbb{R}^3)$, Problem \eqref{eq:homoPDE} has a unique weak solution $ u \in H^1(\mathbb{R}^3) \subset \dot{H}^1(\mathbb{R}^3)$. Moreover, $ u = - \mu^{-1} \Delta (v- u)$, where $v \in \dot{H}^1(\mathbb{R}^3)$ is the solution to $ -\Delta v = f $. Hence, we have $ u = (G_0^{-1} + \mu J)^{-1} f \in X$. Thus, $(\mu G_0 J + 1)^{-1} = (G_0^{-1} + \mu J)^{-1} G_0^{-1}$. Additionally, we see that $(\mu G_0 J + 1)^{-1}$ is a bounded operator since for $u $ and $ f $ as above we have $ \| \nabla u \|_{L^2(\mathbb{R}^3)} \leq \| f \|_{\dot{H}^{-1}(\mathbb{R}^3)}$. Therefore, inserting the definitions of $S_M$ and $R_n$ from the previous proposition, we deduce \begin{align} S_M = 1 + \sum_{n=1}^M \binom{M}{n}(-\gamma)^{n} R_n & = 1 + \sum_{n=1}^M \binom{M}{n}(-\gamma)^{n} \mu G_0 J A ( \mu G_0 J A + A)^{n-1} \\[-3\jot] &= 1 + \mu G_0 J (\mu G_0 J + 1)^{-1} \sum_{n=1}^M \binom{M}{n}(-\gamma)^{n} ((\mu G_0 J +1)A)^n \\ &= 1 + \mu G_0 J (\mu G_0 J + 1)^{-1} ((1-\gamma(\mu G_0 J +1)A)^M - 1). \end{align} Next, we show that $(1-\gamma(\mu G_0 J +1)A)^M \to 0$ pointwise in $ \dot{H}^1(\mathbb{R}^3)$ as $ M \to \infty$. First, by Lemma \ref{lem:operatorA}, we know that $G_0JA$ is a bounded operator. Second, $G_0JA$ is also a positive operator since \[ (G_0 J Au,u)_{\dot{H}^1(\mathbb{R}^3)} = \langle JAu,u\rangle = \int Au \cdot u \, d x = \int e^{-|x|} |u(x)|^2 \, d x. \] Finally, $G_0JA$ is clearly self-adjoint since \[ (G_0 J Au,v)_{\dot{H}^1(\mathbb{R}^3)} = \int Au \cdot v \, d x = \int Av \cdot u \, d x. \] Therefore, using the spectral theorem for bounded self-adjoint operators as in the proof of Proposition \ref{pro:abstractProjection}, we conclude $(1-\gamma(\mu G_0 J +1)A)^M \to 0$ pointwise in $ \dot{H}^1$ for small enough $\gamma$.
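For orientation, we record the model computation behind this convergence (a sketch under the assumption that matters have been reduced, as in Proposition \ref{pro:abstractProjection}, to a bounded, nonnegative, self-adjoint operator $B$ on a Hilbert space $H$ with $\gamma \|B\| \leq 1$): by the spectral theorem and dominated convergence,
\[
 \|(1-\gamma B)^M u\|_H^2 = \int_{[0,\|B\|]} (1-\gamma \lambda)^{2M} \, d (E_\lambda u, u)_H \longrightarrow \|P_{\ker B}\, u\|_H^2 \qquad \text{as} \quad M \to \infty,
\]
where $(E_\lambda)_\lambda$ denotes the spectral family of $B$ and $P_{\ker B}$ the orthogonal projection onto $\ker B$; in particular, $(1-\gamma B)^M u \to 0$ for every $u$ whenever $B$ is injective.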
Furthermore, \[ \mu G_0 J (\mu G_0 J + 1)^{-1} = 1 - (\mu G_0 J + 1)^{-1}, \] and hence, this is a bounded operator, as well. Therefore, multiplying by $G_0$ from the right and taking the limit $M \to \infty$ yields \begin{equation} (1 - (1-(\mu G_0 J+1)^{-1})) G_0 = (1 + \mu G_0 J)^{-1} G_0 = (G_0^{-1} + \mu J)^{-1} = ( -\Delta + \mu)^{-1}, \end{equation} which is the desired result. \end{proof} \subsection{Uniform Estimates and Proof of Theorem \ref{HomogLambZero}} \label{sec:UniformEstimatesAndInterchangingTheLimits} Combining Propositions \ref{pro:pseqweaklimit} and \ref{pro:formalLimit}, we see that $(1-\gamma L_r)^M G_0 f$ converges weakly to the solution of \eqref{eq:homoPDE} if we take the limits in the order $r \to 0$ followed by $M \to \infty$. In order to prove Theorem \ref{HomogLambZero}, it remains to interchange the order of taking the limits. For this purpose, we will prove that the speed of convergence of $(1-\gamma L_{r})^M G_0 f$ to $u_r$ in $\dot{H}^1_{\mathrm{loc}}(\mathbb{R}^3)$ as $M$ tends to infinity is uniform in $r$ for fixed $\mu$. Corresponding to Lemma \ref{lem:LCoercive}, we have the following lemma. It implies that, in any fixed bounded region, the sequence $(1-\gamma L_r)^M G_0 f$ comes close to attaining zero boundary conditions on the particles, uniformly in $r$ as $M \to \infty$. \begin{lemma} \label{lem:LCoerciveInCompacta} Let $ u \in \dot{H}_0^1(\mathbb{R}^3 \backslash K_r)^\perp $ and $R > 0$, and define $v \in \dot{H}^1(\mathbb{R}^3)$ to be the solution to \begin{align} -\Delta v &= 0 \quad \text{in} ~ \mathbb{R}^3 \backslash (K_r \cap \overline{B_R(0)}), \\ v &= u \quad \text{in} ~ K_r \cap \overline{B_R(0)}. \end{align} Then, \[ (L_r u,u)_{\dot{H}^1(\mathbb{R}^3)} \geq c e^{-R} \| v\|_{\dot{H}^1(\mathbb{R}^3)}^2, \] where $c>0$ is a universal constant. \end{lemma} \begin{proof} Let $ \eta_x \in C_c^\infty (B_{2r}(x)) $ such that $ \eta_x = 1 $ in $ B_{r}(x) $ and $ |\nabla \eta_x | \leq \frac{C}{r} $.
Now, we observe that for all $ w \in \dot{H}^1(\mathbb{R}^3) $ \[ \| w \|_{L^2(B_{2r}(x))} \leq \| w \|_{L^6(B_{2r}(x))} \|1\|_{L^3(B_{2r}(x))} \leq C r \| \nabla w \|_{L^2(\mathbb{R}^3)}, \] and hence, \begin{equation} \| \eta_x w \|_{\dot{H}^1(\mathbb{R}^3)} \leq \|w\|_{\dot{H}^1(\mathbb{R}^3)} + \frac{C}{r} \| w \|_{L^2(B_{2r}(x))} \leq C \| w \|_{\dot{H}^1(\mathbb{R}^3)}. \end{equation} On the other hand, by the variational form of the equation for $ v $, we know that $v$ is the function of minimal norm in the set $ X_v := \{ w \in \dot{H}^1(\mathbb{R}^3) \colon w = v ~ \text{in} ~ K_r \cap \overline{B_R} \}$. Since $Q_x u = u$ in $B_x$ and $u = v$ in $K_r \cap \overline{B_R}$, clearly $\sum_{x \in B_{R+r}} \eta_x Q_x u \in X_v$, and hence, \begin{align} (L_r u , u)_{\dot{H}^1(\mathbb{R}^3)} &= \sum_x e^{-|x|} \|Q_x u \|_{\dot{H}^1(\mathbb{R}^3)}^2 \\ &\geq c e^{-R} \sum_{x \in B_{R+r}} \|\eta_x Q_x u \|_{\dot{H}^1(\mathbb{R}^3)}^2 \\ &= c e^{-R} \|\sum_{x \in B_{R+r}} \eta_x Q_x u \|_{\dot{H}^1(\mathbb{R}^3)}^2 \\ &\geq c e^{-R} \|v\|^2_{\dot{H}^1(\mathbb{R}^3)}. \qedhere \end{align} \end{proof} The next lemma is needed to ensure that the values of $(1-\gamma L_r)^M G_0 f$ in a fixed bounded region are only very little affected by particles far away from this region. \begin{lemma} \label{lem:decayQuasiHarmonic} For all $\mu > 0$, there exists a nonincreasing function $ e_\mu \colon \mathbb{R}_+ \to \mathbb{R}_+$ with $ \lim_{s \to \infty} e_\mu(s) = 0$ that has the following property. For all $0 \leq \rho \leq R$, all $w \in \dot{H}_0^1(\mathbb{R}^3 \backslash K_r)^\perp $ with $w = 0$ in $ K_r \cap B_R(0)$ satisfy \[ \| \nabla w\|_{L^2(B_\rho(0))} \leq e_\mu(R-\rho) \| \nabla w \|_{L^2(\mathbb{R}^3)}, \] if $r$ is sufficiently small. \end{lemma} \begin{proof} The proof uses Widman's classical hole-filling technique (see e.g. \cite{Gia83}). Fix a particle configuration with capacity $\mu $ and $d<1/(2\sqrt{3})$, and fix $R$, $\rho$, and $w$ according to the assumptions.
For $1 \leq s \leq R - 1$, we define $\eta_s \in C_c^\infty (B_{1+s}(0))$ such that $\eta_s = 1 $ in $B_s(0)$, $|\eta_s| \leq 1$, and $ |\nabla \eta_s | \leq C $. We use $\eta_s^2 w$ as a test function in the weak form of the equation $w$ satisfies, namely, \begin{align} -\Delta w &= 0 \quad \text{in} ~ \mathbb{R}^3 \backslash K_r \\ w &= 0 \quad \text{in} ~ K_r \cap B_{s+1}. \end{align} This yields \[ 0 = \int_{B_{s+1}} \nabla w \nabla (\eta_s^2 w) \, d x = \int_{B_{s+1}} (\eta_s \nabla w)^2 + 2 \eta_s \nabla w \nabla \eta_s w \, d x. \] Using the Cauchy-Schwarz inequality and the Poincaré inequality in the annulus $B_{s+1} \backslash B_s $, provided by Lemma \ref{lem:poincareAnnulus}, we deduce \[ \|\nabla w\|^2_{L^2(B_s)} \leq \| \eta_s \nabla w\|^2_{L^2(B_{s+1})} \leq C \| w \|^2_{L^2(B_{s+1} \backslash B_s)} \leq C (1+\mu^{-1}) \| \nabla w \|^2_{L^2(B_{s+1} \backslash B_s)}. \] Let us denote $ a_k := \|\nabla w\|^2_{L^2(B_{\rho+k})} $. Then, the above estimate implies for all $ k $ such that $ \rho + k \leq R - 1$ \[ a_k \leq C(1 + \mu^{-1}) (a_{k+1} - a_k). \] Therefore, \[ a_k \leq \frac{C(1 + \mu^{-1})}{C(1+\mu^{-1}) +1} a_{k+1} =: \lambda_\mu a_{k+1}, \] and $\lambda_\mu < 1$. By iterating up to $ n = \lfloor R-\rho-1 \rfloor$, we conclude \[ \|\nabla w\|^2_{L^2(B_\rho)} \leq \lambda_\mu^n \|\nabla w\|^2_{L^2(\mathbb{R}^3)}. \] This is the desired estimate with $e_\mu(s) := \lambda_\mu^\frac{{\lfloor s-1 \rfloor}}{2} $ (for $s \geq 1$ and $ e_\mu = 1$ otherwise). \end{proof} \begin{remark} \label{rem:exponentialDecay} As seen in the proof, the decay $e_\mu$ is exponential. This can be interpreted as a screening effect due to the presence of the particles. This effect can be exploited to prove homogenization results also for sources $f \in L^{\infty}(\mathbb{R}^3)$ (cf. \cite{NV1}, \cite{NV}). \end{remark} \begin{lemma} \label{lem:poincareAnnulus} Let $s \geq 1$ and $d < 1/(2\sqrt{3}) $.
Then, for all $u \in \dot{H}^1_0(\mathbb{R}^3 \backslash K_r)$, \[ \| u\|^2_{L^2(B_{s+1} \backslash B_s)} \leq \frac{2\sqrt{3}}{\mu} \| \nabla u\|^2_{L^2(B_{s+1} \backslash B_s)}. \] \end{lemma} \begin{proof} As for the Poincaré inequality in the whole space $\mathbb{R}^3$, Corollary \ref{cor:PoincarePerforated}, this basically follows from the estimate \[ \|u\|_{L^2(B_R(z))}^2 \leq \frac{R^3}{\rho} \|\nabla u\|^2_{L^2(B_R(z))} \] if $u=0$ in $B_\rho(z)$, which is the statement of Lemma \ref{lem:poincare}. However, there are certain technical issues due to the nonconvexity of the annulus. Let us denote the annulus $B_{s+1}(0) \backslash B_s(0) $ by $A_s$. First observe that Lemma \ref{lem:poincare} remains true if we replace $B_R(z)$ by any $\Omega \subset B_R(z)$ that is star-shaped with respect to $z$. The reason is that we only integrated over line segments with endpoint $z$. Therefore, the assertion follows, once we have shown that there exists a covering \[ A_s \subset \cup_{x \in \Gamma_r} B_{R_x}(x), \] such that for all $x \in \Gamma_r$ the set $ A_s \cap B_{R_x}(x)$ is star-shaped with respect to $x$ and $R_x \leq 2 \sqrt{3}d$. Equivalently, for every point $y$ in the annulus, we have to find $x \in A_s \cap \Gamma_r$ and $R_x \leq 2\sqrt{3}d$ such that $y \in B_{R_x}(x)$ and $ A_s \cap B_{R_x}(x)$ is star-shaped with respect to $x$. For $y \in A_s$, there exists a ball $B_{2\sqrt{3}d}(z_1) \subset A_s$ that contains $y$, since $d < 1/(2\sqrt{3})$. In this ball we find $B_{\sqrt{3}d}(z_2) \subset B_{2 \sqrt{3} d}(z_1)$ with distance $\sqrt{3}d$ from the inner boundary of the annulus, i.e., $\operatorname{dist}\{\partial B_s(0), B_{\sqrt{3}d}(z_2)\} \geq \sqrt{3}d$. By definition of the particle configuration, there exists $x \in B_{\sqrt{3}d}(z_2) \cap \Gamma_r$. Moreover, by construction, we have $ y \in B_{2\sqrt{3}d}(x)$. Finally, we prove that $A_s \cap B_{2\sqrt{3}d}(x)$ is star-shaped with respect to $x$.
Clearly, the only problem can occur by removing the inner ball $B_s(0)$ from $B_{2\sqrt{3}d}(x)$. If $B_{2\sqrt{3}d}(x) \cap B_s(0)$ is empty, then, we are done. If not, $A_s \cap B_{2\sqrt{3}d}(x)$ is star-shaped with respect to $x$ if, for any $z \in \partial B_s \cap \partial B_{2\sqrt{3}d}(x)$, the line segment $l$ from $z$ to $x$ is disjoint from $B_s(0)$. Clearly, it is equivalent to check that $l$ has smaller length than any line segment $t$ from $x$ to some $w \in \partial B_s(0)$ that is tangential to $\partial B_s(0)$. Since $\operatorname{dist} \{\partial B_s(0), B_{\sqrt{3}d}(z_2)\} \geq \sqrt{3}d$ and $s \geq 1$, it follows \[ |t|^2 \geq (s+\sqrt{3}d)^2-s^2 = 2s\sqrt{3}d + 3 d^2 \geq 2\sqrt{3}d \geq 12 d^2 = |l|^2. \] This finishes the proof. \end{proof} \begin{proposition} \label{pro:pSolByScattering} Let $f\in \dot{H}^{-1}\left( \mathbb{R}^{3}\right)$. For all $0 < \mu_1 \leq \mu_2 < \infty$, there exists a $\gamma >0$ depending only on $\mu_1$ and $\mu_2$ such that the sequence \begin{equation} \bigg( 1-\gamma\sum_{j}e^{-\left\vert x_{j}\right\vert }Q_{j}\bigg) ^{N} G_0 f \end{equation} converges, as $N\rightarrow\infty$, to the solution of \eqref{S1E1} uniformly in $\dot{H}^{1}_{\mathrm{loc}}(\mathbb{R}^3)$ for all particle configurations with capacity $ \mu_1 \leq \mu \leq \mu_2$ and sufficiently small $r$. \end{proposition} \begin{proof} As in the proof of Theorem \ref{ConvWholeSpace}, we choose $\gamma \leq 1/\|L_r\|$. Lemma \ref{lem:Lbounded} ensures that this is possible such that $\gamma$ depends only on $\mu$ if $r$ is sufficiently small. Let $\rho >0 $, $ \varepsilon>0$, and $u := G_0 f \in \dot{H}^1(\mathbb{R}^3)$. Since $\ker (L_r) = \dot{H}_0^1(\mathbb{R}^3 \backslash K_r)$, it suffices to consider $u \in \dot{H}_0^1(\mathbb{R}^3 \backslash K_r)^\perp$. Define $u_M := (1-\gamma L_r)^M u$.
Then, we know from Proposition \ref{pro:nonuniformAbstractProjection} that \begin{align} \|(1-\gamma L_r)u\|_{\dot{H}^1(\mathbb{R}^3)}^2 &= \|u\|_{\dot{H}^1}^2 - 2(\gamma L_r u,u)_{\dot{H}^1} +\|\gamma L_r u\|^2_{\dot{H}^1} \\ &\leq \|u\|_{\dot{H}^1}^2 - \gamma (L_r u ,u)_{\dot{H}^1}. \end{align} Iterating and using monotonicity of $(L_r u_M, u_M)_{\dot{H}^1}$ in $M$, which follows from the estimate \eqref{eq:monotonPos} in Proposition \ref{pro:nonuniformAbstractProjection}, yields \[ 0 \leq \|u_{M+1}\|^2_{\dot{H}^1} \leq \|u\|_{\dot{H}^1}^2 - (M+1) \gamma (L_r u_M, u_M)_{\dot{H}^1}. \] Thus, \[ (L_r u_M, u_M)_{\dot{H}^1} \leq \frac{1}{(M+1) \gamma} \|u\|_{\dot{H}^1}^2. \] For $R > \rho$, define $v_M \in \dot{H}^1(\mathbb{R}^3)$ to be the solution to \begin{align} -\Delta v_M &= 0 \quad \text{in} ~ \mathbb{R}^3 \backslash (K_r \cap \overline{B_R}), \\ v_M &= u_M \quad \text{in} ~ K_r \cap \overline{B_R}, \end{align} and $w_M := u_M -v_M$. Then, Lemma \ref{lem:decayQuasiHarmonic} implies \begin{align} \| \nabla w_M\|_{L^2(B_\rho(0))} \leq e_\mu(R-\rho) \| w_M \|_{\dot{H}^1} &\leq e_\mu(R-\rho) (\| u_M \|_{\dot{H}^1} +\| v_M \|_{\dot{H}^1}) \\ & \leq e_\mu(R-\rho) (\| u \|_{\dot{H}^1} +\| v_M \|_{\dot{H}^1}), \end{align} and it is possible to choose $R$ large enough such that $ e_\mu(R-\rho) < \frac{\varepsilon}{3}$. On the other hand, by Lemma \ref{lem:LCoerciveInCompacta}, we have \[ c e^{-R} \| v_M\|_{\dot{H}^1(\mathbb{R}^3)}^2 \leq (L_r u_M,u_M)_{\dot{H}^1(\mathbb{R}^3)} \leq \frac{1}{(M+1) \gamma} \|u\|_{\dot{H}^1}^2. \] Therefore, choosing $M_0$ large enough yields for all $M \geq M_0$ \[ \| v_M\|_{\dot{H}^1(\mathbb{R}^3)} < \frac{\varepsilon}{3} \|u\|_{\dot{H}^1}. \] By combining the estimates for $v_M$ and $w_M$, we conclude (assuming without restriction $\varepsilon \leq 3$) \[ \| \nabla u_M \|_{L^2(B_\rho(0))} < \varepsilon \|u\|_{\dot{H}^1(\mathbb{R}^3)} = \varepsilon \|f\|_{\dot{H}^{-1}(\mathbb{R}^3)}.
\qedhere \] \end{proof} \begin{proof}[Proof of Theorem \ref{HomogLambZero}] We first prove that $u_r$ converges weakly to $u$ in $\dot{H}^1(\mathbb{R}^3)$ for all sources $f \in \dot{H}^{-1}(\mathbb{R}^3)$. Since the sequence is bounded, it suffices to consider test functions in $C_c^\infty(\mathbb{R}^3)$. Let $ \varphi \in C_c^\infty(\mathbb{R}^3) $ and choose $R>0$ such that $\operatorname{supp} \varphi \subset B_R(0)$. Further, let $\gamma < \gamma_0 $ from Proposition \ref{pro:pSolByScattering} and denote by $S_M$ the corresponding pointwise weak limit of $(1-\gamma L_{r})^M$ from Proposition \ref{pro:pseqweaklimit}. Then, for all $M>0$, \begin{align} |(u_r - u,\varphi)_{\dot{H}^1}| &\leq |(S_M G_0 f - u,\varphi)_{\dot{H}^1}| + |((1- \gamma L_r)^M G_0 f - S_M G_0 f ,\varphi)_{\dot{H}^1}| \\ {} &+ |(u_r - (1- \gamma L_r)^M G_0 f,\varphi)_{\dot{H}^1}|. \end{align} The third term on the right hand side is estimated by \[ \|\nabla(u_r - (1- \gamma L_r)^M G_0 f)\|_{L^2(B_R)} \|\varphi\|_{\dot{H}^1}, \] and Proposition \ref{pro:pSolByScattering} ensures that this term becomes small, independently of $r$, by choosing $M$ sufficiently large. On the other hand, also the first term becomes small by choosing $M$ large, and the second term vanishes in the limit $r \to 0$. Weak convergence in $\dot{H}^1(\mathbb{R}^3)$ is equivalent to weak convergence in $L^2(\mathbb{R}^3)$ of the gradients. However, due to Corollary \ref{cor:Existence}, the sequence $u_r$ is uniformly bounded in $H^1(\mathbb{R}^3)$. Therefore, we can extract subsequences that converge weakly in $H^1(\mathbb{R}^3)$. Since their weak limit is uniquely determined by the weak limit of their gradients, the whole sequence converges weakly in $H^1(\mathbb{R}^3)$. The result for $f \in H^{-1}(\mathbb{R}^3)$ follows from density of $\dot{H}^{-1}(\mathbb{R}^3) $ in $H^{-1}(\mathbb{R}^3)$, using again that the solution operators for Problem \eqref{eq:poissonPerforated} are uniformly bounded.
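In detail, this density argument can be sketched as follows (where, for this sketch only, we write $R_r$ and $R_0$ for the solution operators of Problem \eqref{eq:poissonPerforated} and of the homogenized problem, both of which are bounded from $H^{-1}(\mathbb{R}^3)$ to $H^1(\mathbb{R}^3)$ uniformly in $r$): given $\delta > 0$, choose $f_\delta \in \dot{H}^{-1}(\mathbb{R}^3)$ with $\| f - f_\delta \|_{H^{-1}(\mathbb{R}^3)} \leq \delta$; then, for every $\varphi \in H^1(\mathbb{R}^3)$,
\[
 |(R_r f - R_0 f, \varphi)_{H^1(\mathbb{R}^3)}| \leq |(R_r f_\delta - R_0 f_\delta, \varphi)_{H^1(\mathbb{R}^3)}| + C \delta \|\varphi\|_{H^1(\mathbb{R}^3)},
\]
where the first term on the right hand side tends to zero as $r \to 0$ by the case of sources in $\dot{H}^{-1}(\mathbb{R}^3)$, and $\delta$ is arbitrary.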
\end{proof} \subsection{Proof of Proposition \ref{pro:pseqweaklimit}} \begin{lemma} \label{lem:pMonopapprox} The following holds for the operators defined in Definitions \ref{def:monopole} and \ref{def:A_beta}. \begin{enumerate}[(i)] \item There exists a constant $C$ such that, for all $x \in \mathbb{R}^3$ and $u \in \dot{H}^1$, \begin{equation} \label{eq:pMonopapprox} \|(T_x- M_{x})u\|_{\dot{H}^{-1}(\mathbb{R}^3)} \leq C \| \nabla u \|_{L^2(B_x)}. \end{equation} \item For fixed $ \mu $, we have for all $ u \in \dot{H}^{1}(\mathbb{R}^3) $, all $ n \in \mathbb{N}$, and all $\beta \in \mathbb{N}_\ast^n$, \begin{equation} \label{eq:pMonopgood} \|(M_\beta^{(r)} - A_\beta^{(r)})u\|_{\dot{H}^1} \to 0 \qquad \text{as} \quad r \to 0. \end{equation} \end{enumerate} \end{lemma} For the proof we need the following lemma. \begin{lemma} \label{lem:extest} For $ r>0 $ and $ x \in \mathbb{R}^3$, let $ H_r := \{ u \in H^1(B_r(x)) \colon \int_{B_r(x)} u = 0 \} $. Then, for all $ r >0 $, there exists an extension operator $ E_r \colon H_r \to H^1_0(B_{2r}(x)) $ such that \begin{equation} \label{eq:extest} \| \nabla E_r u \|_{L^2(B_{2r}(x))} \leq C \| \nabla u \|_{L^2(B_r(x))} \qquad \text{for all} \quad u \in H_r, \end{equation} where the constant $ C $ is independent of $ r $. \end{lemma} \begin{proof} For $ r = 1 $ let $ E_1 \colon H^1(B_1(x)) \to H^1_0(B_2(x)) $ be a continuous extension operator. Then, by the Poincaré inequality on $ H_1 $, we get for all $ u \in H_1 $ \begin{equation} \| \nabla E_1u \|_{L^2(B_{2}(x))} \leq \| E_1u \|_{H^1(B_{2}(x))} \leq C \| u \|_{H^1(B_1(x))} \leq C \| \nabla u \|_{L^2(B_1(x))}. \label{eq:extest1} \end{equation} The assertion for general $ r > 0 $ follows from scaling by defining $ (E_r u)(x) := (E_1u_r)({\frac{x}{r}}) $, where $ u_s(x) := u(sx) $. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:pMonopapprox}] Let $u \in \dot{H}^1(\mathbb{R}^3)$.
First, we observe by a straightforward calculation that \begin{equation} (\tilde{Q}_{x} u)(y) = \begin{cases} (u)_{x}, & \text{if} \quad y \in B_{x},\\ (u)_{x} \displaystyle\frac{r}{|y-x|}, & \text{otherwise}. \end{cases} \end{equation} Now, we use again that $G_0$ is an isometry and that $ Q_x = G_0 T_{x} $ is the orthogonal projection to the subspace \[ \dot{H}_0^1(\mathbb{R}^3 \backslash \overline{B_{x}})^\perp = \left\{ u \in \dot{H}^1(\mathbb{R}^3) \colon -\Delta u = 0 ~ \text{in} ~ \mathbb{R}^3 \backslash \overline{B_{x}} \right\}. \] Therefore, we can characterize $Q_x u$ as the function $v \in \dot{H}^1(\mathbb{R}^3)$ that solves \begin{align} -\Delta v &= 0 \quad \text{in} ~ \mathbb{R}^3 \backslash \overline{B}_x, \\ v &= u \quad \text{in} ~ \overline{B}_x. \end{align} Hence, $v$ is the function of minimal norm that coincides with $u$ inside the ball $B_x$. Clearly, $\tilde{Q}_{x} u \in \dot{H}_0^1(\mathbb{R}^3 \backslash \overline{B_{x}})^\perp$, and thus, $Q_{x} \tilde{Q}_{x} = \tilde{Q}_x$. Therefore, \[ (Q_{x} - \tilde{Q}_{x})u = Q_x (u - \tilde{Q}_{x} u). \] Since $ \tilde{Q}_{x} u = (u)_{x} $ in $ B_{x} $, we can use the extension operator $ E_r $ from Lemma \ref{lem:extest} (since, by the Sobolev embedding theorem, the restriction of a $\dot{H}^1$ function to a ball is a $H^1$ function in that ball) and estimate \begin{align} \|(Q_{x} - \tilde{Q}_{x})u\|_{\dot{H}^1(\mathbb{R}^3)} &\leq \| E_r ((u - \tilde{Q}_{x} u) \left. \! \! \right|_{B_{x}})\|_{\dot{H}^1(\mathbb{R}^3)} \\ &= \| \nabla E_r (( u - (u)_{x}) \left. \! \! \right|_{B_{x}})\|_{L^2(\mathbb{R}^3)} \\ & \leq C \| \nabla u \|_{L^2(B_{x})}. \end{align} This concludes the proof of assertion (i). Observe that $ M_{x} u $ satisfies $ \operatorname{supp} (M_{x} u) \subset \partial B_{x}$. It can easily be seen that Lemma \ref{lem:pLocH^-1} still holds true when replacing the cutoff $e^{-|x|}$ by $e^{-j|x|}$ for any $j \in \mathbb{N}_\ast$.
Therefore, we get for $ n = 1$ \[ \|(M^{(r)}_\beta - A_\beta^{(r)})u\|_{\dot{H}^1(\mathbb{R}^3)}^2 \leq (1+C\mu) \sum_{x} e^{-\beta_1 |x|}\|(Q_{x} - \tilde{Q}_{x})u \|_{\dot{H}^1(\mathbb{R}^3)}^2. \] Hence, the convergence \eqref{eq:pMonopgood} for $ n = 1$ follows directly from the estimate \eqref{eq:pMonopapprox}, since, for fixed capacity, the volume of the particles inside a fixed bounded domain tends to zero as $ r $ tends to zero. For $n \geq 2$, we first argue that it suffices to prove the assertion for functions $u$ in the dense set $H^1(\mathbb{R}^3) \subset \dot{H}^{1} (\mathbb{R}^3) $. Indeed, this follows once we have shown that, for any $\beta$, both $ A_\beta^{(r)}$ and $M_\beta^{(r)}$ are bounded, uniformly for all particle configurations with the same capacity $\mu$. For $ A_\beta^{(r)}$, this is the second statement of Lemma \ref{lem:powersOfL}. To estimate $M_\beta^{(r)}$, we consider first $\beta = 1$. By part (i), we have for all $u \in \dot{H}^1$ \[ \sum_x \|(\tilde{Q}_x - Q_x)u\|^2_{\dot{H}^1} \leq C \|u\|_{\dot{H}^1}^2. \] Note that we can use Lemma \ref{lem:pLocH^-1} to take the sum out of the norm in the definition of $M_1^{(r)}$, since $M_x u$ is supported on $\partial B_x$ for all $u \in \dot{H}^1$. Using additionally the bound for $L_r$ from Lemma \ref{lem:Lbounded}, we estimate \begin{align} \|M_1^{(r)} u\|_{\dot{H}^1}^2 &= \Big\| \sum_x e^{-|x|} \tilde{Q}_x u\Big\|_{\dot{H}^1}^2 \\ &\leq 2\Big\|\sum_x e^{-|x|} (\tilde{Q}_x- Q_x)u\Big\|_{\dot{H}^1}^2 + 2\|L_r u\|_{\dot{H}^1}^2 \\ &\leq 2(1 + C \mu) \sum_x e^{-|x|}\|(\tilde{Q}_x- Q_x)u\|_{\dot{H}^1}^2 + 2(1+ C\mu)^2 \|u\|_{\dot{H}^1}^2 \\ &\leq C(1+\mu)^2 \| u\|_{\dot{H}^1}^2. \end{align} For general $n \in \mathbb{N}_\ast$ and $\beta \in \mathbb{N}_\ast^n$, one can argue as in the proof of Lemma \ref{lem:powersOfL} by taking powers of $M_1^{(r)}$ in order to deduce $\|M_\beta^{(r)}\| \leq (C(1+\mu))^n$.
Indeed, the only ingredient for the proof of the formula, which we derived in Lemma \ref{lem:powersOfL} for $(L_r)^n$, was that $Q_x$ is a projection. We did not use orthogonality. Since $\tilde{Q}$ is a projection as well, the analogous version of that formula holds for the powers of $M_1^{(r)}$. The general assertion now follows by induction. For $ n=2 $, we have \begin{equation} \begin{aligned} \| (M^{(r)}_\beta - A_\beta^{(r)})u \|^2_{\dot{H}^1(\mathbb{R}^3)} &\leq 2 \Big \| \sum_{x_1} \sum_{{x_2} \neq {x_1}} e^{-\beta_1|x_1|} e^{-\beta_2|x_2|} Q_{x_1}(Q_{x_2} - \tilde{Q}_{x_2}) u \Big \|^2_{\dot{H}^1(\mathbb{R}^3)} \\ \quad{} &+ 2 \Big \| \sum_{x_1} \sum_{{x_2} \neq {x_1}} e^{-\beta_1|x_1|} e^{-\beta_2|x_2|} (\tilde{Q}_{x_1} - Q_{x_1}) \tilde{Q}_{x_2} u \Big \|^2_{\dot{H}^1(\mathbb{R}^3)}. \end{aligned} \label{eq:pSplitMM} \end{equation} To further estimate the first term on the right hand side, we use that $ \sum_{x_1} e^{-\beta_1|x_1|} Q_{x_1}$ is a bounded operator. Together with part (i) and using again $Q_x Q_x = Q_x$ and $Q_x \tilde{Q}_x = \tilde{Q}_x$, we get (with a constant that depends on $\mu$) \begin{equation} \begin{aligned} &\Big \| \sum_{x_1} \sum_{{x_2} \neq {x_1}} e^{-\beta_1|x_1|} e^{-\beta_2|x_2|} Q_{x_1}(Q_{x_2} - \tilde{Q}_{x_2}) u \Big \|_{\dot{H}^1(\mathbb{R}^3)} \\ &\leq \Big \| \sum_{x_1} e^{-\beta_1|x_1|} Q_{x_1} \sum_{{x_2}} e^{-\beta_2|x_2|} (Q_{x_2} - \tilde{Q}_{x_2}) u \Big \|_{\dot{H}^1(\mathbb{R}^3)} + \Big \| \sum_{x_1} e^{-(\beta_1+\beta_2)|x_1|} Q_{x_1} (Q_{x_1} - \tilde{Q}_{x_1}) u \Big \|_{\dot{H}^1(\mathbb{R}^3)}\\ &\leq C \Big \| \sum_{{x_2}} e^{-\beta_2|x_2|}(Q_{x_2} - \tilde{Q}_{x_2}) u \Big \|_{\dot{H}^1(\mathbb{R}^3)} + \Big \|\sum_{x_1}e^{-(\beta_1+\beta_2)|x_1|}(Q_{x_1} - \tilde{Q}_{x_1}) u \Big \|_{\dot{H}^1(\mathbb{R}^3)} \to 0,
\end{aligned} \end{equation} where both terms tend to zero by the argument for the case $n = 1$ (the second one with the cutoff $e^{-(\beta_1+\beta_2)|x|}$ in place of $e^{-|x|}$). For the second term on the right hand side of \eqref{eq:pSplitMM}, recall \[ (M_{x_1} - T_{x_1}) v = 0 \quad \text{in} ~ \mathbb{R}^3\backslash \overline{B_{x_1}} \] for all $ v \in \dot{H}^1(\mathbb{R}^3) $. Hence, for $u \in H^1(\mathbb{R}^3)$, we can use Lemma \ref{lem:pLocH^-1} to take out the sum in $ {x_1} $, and we use the estimate for the uniform charge density approximation from part (i), \begin{equation} \label{eq:pInductionStart} \begin{aligned} \Big \| \sum_{x_1} \sum_{{x_2} \neq {x_1}} e^{-\beta_1|x_1|} e^{-\beta_2|x_2|} (\tilde{Q}_{x_1} - Q_{x_1}) \tilde{Q}_{x_2} u \Big \|^2_{\dot{H}^1(\mathbb{R}^3)} \leq C \sum_{x_1} e^{-\beta_1|x_1|} \Big \|\! \sum_{{x_2} \neq {x_1}} e^{-\beta_2|x_2|}\nabla \tilde{Q}_{x_2} u \Big \|^2_{L^2(B_{x_1})}. \end{aligned} \end{equation} Inserting the definition of $ \tilde{Q}_{x_2} $, expanding the square of the sum over $ {x_2} $, and estimating the integral yields \begin{equation} \label{eq:pr^5sum} \begin{aligned} &\sum_{x_1} e^{-\beta_1|x_1|} \Big \| \! \sum_{{x_2} \neq {x_1}} e^{-\beta_2|x_2|}\nabla \tilde{Q}_{x_2} u \Big \|^2_{L^2(B_{x_1})} \\ &\leq C \sum_{x_1} \sum_{{x_2} \neq {x_1}} \sum_{{x_3} \neq {x_1}} e^{-\beta_1|x_1|} r^5 \frac{e^{-\beta_2|x_2|}|(u)_{x_2}|}{|x_1-x_2|^2} \frac{e^{-\beta_2|x_3|} |(u)_{x_3}| }{|x_1-x_3|^2}. \end{aligned} \end{equation} Consider the off-diagonal terms first, i.e., $ {x_2} \neq {x_3} $. We estimate \[ e^{-\beta_1|x_1|} e^{-\beta_2|x_2|} e^{-\beta_2|x_3|} \leq e^{-\frac{|x_1-x_2|}{2}} e^{-\frac{|x_1-x_3|}{2}} \] and use $ r = (4 \pi)^{-1} \mu d^3 $ to bound the sum over $ {x_1} $ by an integral, \[ \sum_{x_1} r \frac{e^{-\frac{|x_1-x_2|}{2}}}{|x_1-x_2|^2} \frac{e^{-\frac{|x_1-x_3|}{2}}}{|x_1-x_3|^2} \leq C \mu\int_{\mathbb{R}^3} \frac{e^{-\frac{|y-x_2|}{2}}}{|y-x_2|^2} \frac{e^{-\frac{|y-x_3|}{2}}}{|y-x_3|^2} \, d y, \] for all $ {x_2} \neq {x_1}$, ${x_3} \neq {x_1}$.
To estimate the integral, we denote $ z = x_2 - x_3 \neq 0$ for the moment and split the integral to get \[ \begin{aligned} \int_{\mathbb{R}^3} \frac{e^{-\frac{|y|}{2}}}{|y|^2} \frac{e^{-\frac{|y-z|}{2}}}{|y-z|^2} \, d y \leq \int_{\mathbb{R}^3 \backslash B_{|z|/2}(0)} \frac{4 e^{-\frac{|z|}{4}}}{|z|^2} \frac{e^{-\frac{|y-z|}{2}}}{|y-z|^2} \, d y + \int_{B_{|z|/2}(0)} \frac{e^{-\frac{|y|}{2}}}{|y|^2} \frac{4 e^{-\frac{|z|}{4}}}{|z|^2} \, d y \leq C \frac{e^{-\frac{|z|}{4}}}{|z|^2}. \end{aligned} \] Hence, using $ |(u)_{x_2}| |(u)_{x_3}| \leq \frac{1}{2} ((u)_{x_2}^2 + (u)_{x_3}^2) $ and symmetry, we deduce \[ \begin{aligned} \sum_{x_1} \sum_{{x_2} \neq {x_1}} \sum_{\substack{{x_3} \neq {x_1} \\ {x_3} \neq {x_2}}} r^5 |(u)_{x_2}| |(u)_{x_3}| \frac{e^{-\frac{|x_1-x_2|}{2}}}{|x_1-x_2|^2} \frac{e^{-\frac{|x_1-x_3|}{2}}}{|x_1-x_3|^2} &\leq \sum_{x_2} \sum_{{x_3} \neq {x_2}} C \mu r^4 |(u)_{x_2}| |(u)_{x_3}| \frac{e^{-\frac{|x_2 - x_3|}{4}}}{|x_2 - x_3|^2} \\ & \leq \sum_{x_2} C \mu^2 r^3 (u)_{x_2}^2 \int_{\mathbb{R}^3} \frac{e^{-\frac{|x_2 - y|}{4}}}{|x_2 - y|^2} \, d y \\ & \leq C \mu^2 \sum_{x_2} r^3 \left(\fint_{B_{x_2}} u(y) \, d y \right)^2 \\ & \leq C \mu^2 \| u \|^2_{L^2(\cup_{x_2} B_{x_2})} \to 0, \end{aligned} \] where we finally used $u \in H^1(\mathbb{R}^3)$ in order to have control of the $L^2$-norm. It remains to bound the diagonal terms in \eqref{eq:pr^5sum}. For those, we use the estimate \[ r \sum_{{x_1} \neq {x_2}} \frac{e^{-\frac{|x_1-x_2|}{2}}}{|x_1-x_2|^4} \leq C \mu \int_{\mathbb{R}^3 \backslash B_d(0)} \frac{e^{-\frac{|y|}{2}}}{|y|^4} \, d y \leq C \mu d^{-1}. \] Hence, \[ \sum_{x_1} \sum_{{x_2} \neq {x_1}} r^5 (u)_{x_2}^2\frac{e^{-\frac{|x_1-x_2|}{2}}}{|x_1-x_2|^4} \leq C \mu r d^{-1} \| u \|^2_{L^2(\cup_{x_2} B_{x_2})} \to 0. \] For $ n \geq 3 $, one does exactly the same thing. For $ n = 3 $, instead of $ \| u \|^2_{L^2(\cup_{x_2} B_{x_2})} $, one ends up with \begin{equation} \begin{aligned} \sum_{x_2} \Big \| \!
\sum_{{x_3} \neq {x_2}} \tilde{Q}_{x_3} u \Big \|^2_{L^2(B_{x_2})} \end{aligned} \end{equation} and this can be estimated exactly as the right hand side of Equation \eqref{eq:pInductionStart}. The only difference is that the gradient is not there, but, due to the exponential cutoff, this does not matter. Thus, for $n \geq 3$, the assertion follows by induction. \end{proof} \begin{proof}[Proof of Proposition \ref{pro:pseqweaklimit}] By Lemma \ref{lem:powersOfL} it suffices to prove \begin{equation} \label{eq:sepseqweaklimit} A_\beta^{(r)} u \rightharpoonup \Bigg( \prod_{j=1}^n \mu G_0 J A^{\beta_j} \Bigg) u \quad \text{in} ~ \dot{H}^1, \end{equation} for all $ u \in \dot{H}^{1}(\mathbb{R}^3) $, all $ n \in \mathbb{N}_\ast $, and all $ \beta \in \mathbb{N}_\ast^n$. Since $G_0$ is an isometry, for $ n = 1$, it suffices to show \[ \sum_x e^{-\beta_1|x|} T_x u \rightharpoonup \mu J A^{\beta_1} u \quad \text{in} ~ \dot{H}^{-1}(\mathbb{R}^3) \] for all $ u \in \dot{H}^1(\mathbb{R}^3) $ and analogously for $n \geq 2$. Since, by Lemma \ref{lem:powersOfL}, we have a uniform bound on $A_\beta^{(r)}$, it suffices to prove the assertion for the dense subset $ \dot{H}^1(\mathbb{R}^3) \cap C^1(\mathbb{R}^3)$. Lemma \ref{lem:pMonopapprox} implies that we can replace all the operators $T_x$ by $ M_x $. Moreover, it suffices to consider test functions from the dense set $C_c^\infty(\mathbb{R}^3)$. Let $ u \in \dot{H}^1(\mathbb{R}^3) \cap C^1(\mathbb{R}^3) $ and $ \varphi \in C_c^\infty(\mathbb{R}^3) $. Then, we estimate \begin{align} |\langle \varphi , M_x u \rangle - 4 \pi r u(x) \varphi(x)| &\leq \frac{1}{r} \int_{\partial B_x} | (u)_x \varphi(y) - u(x) \varphi(x)| \, d y \\ & \leq C r^2 \|u\|_{C^1(\mathbb{R}^3)} \|\varphi\|_{C^1(\mathbb{R}^3)}.
\end{align} On the other hand, defining $ q_x $ to be the cube centered at $x$ with edges of length $ d $ parallel to the coordinate axes, we find \begin{align} \bigg| \mu \int_{q_x} e^{-\beta_1 |y|}u(y) \varphi(y) \, d y - 4 \pi r e^{-\beta_1 |x|}u(x) \varphi(x) \bigg| &\leq \mu\int_{q_x} | e^{-\beta_1 |y|}u(y) \varphi(y) - e^{-\beta_1 |x|}u(x) \varphi(x)| \, d y \\ & \leq C r d \|e^{-\beta_1 |\cdot|}u\|_{C^1(\mathbb{R}^3)} \|\varphi\|_{C^1(\mathbb{R}^3)}. \end{align} Now, we take the sum in $ x $ and use that $\cup_x q_x = \mathbb{R}^3$ up to a nullset. Furthermore, we observe that we only have to take into account those cubes that lie in the support of $ \varphi $. The number of such cubes is bounded by $ C d^{-3} = C \mu r^{-1} $. Therefore, combining the above estimates leads to \[ |\langle \varphi , \sum_x e^{-\beta_1 |x|} M_x u - \mu J A^{\beta_1}u \rangle| \leq C \mu d. \] This proves the convergence for $ n = 1 $. For $ n = 2 $, we write \begin{equation} \label{eq:pSplitTT} \begin{aligned} &\sum_{x_1} \sum_{x_2 \neq x_1} e^{-\beta_1 |x_1|}e^{-\beta_2 |x_2|} M_{x_1} G_0 M_{x_2} u - \mu^2 J A^{\beta_1} G_0 J A^{\beta_2} u \\ &= \bigg(\sum_{x_1} e^{-\beta_1 |x_1|} M_{x_1} - \mu JA^{\beta_1} \bigg) \mu G_0 J A^{\beta_2}u \\ {} &+ \sum_{x_1} e^{-\beta_1 |x_1|} M_{x_1} G_0 \bigg(\sum_{x_2 \neq x_1} e^{-\beta_2 |x_2|}M_{x_2} - \mu JA^{\beta_2}\bigg)u. \end{aligned} \end{equation} The first term converges to zero weakly in $\dot{H}^{-1}(\mathbb{R}^3)$ by the assertion for $n = 1$. We observe that for all $x_2 \neq x_1 \in \Gamma_r$, and all $z \in B_{x_1}$, \begin{align} &\bigg | \frac{1}{r} \int_{\partial B_{x_2}} e^{-\beta_2 |x_2|} (u)_{x_2} \Phi (z-y) \, d y - \mu \int_{q_{x_2}} e^{-\beta_2 |y|}u(y) \Phi(z-y) \, d y \bigg | \\ & \leq C r d e^{d-|x_2|}\|u\|_{C^1(\mathbb{R}^3)} \|\Phi(z-\cdot)\|_{C^1(q_{x_2})} \\ & \leq C r d \|u\|_{C^1(\mathbb{R}^3)} e^{d-|x_2 - x_1|} \left( \frac{1}{|x_2 - x_1|} + \frac{1}{|x_2 - x_1|^2} \right).
\end{align} Taking the sum over $ x_2 \neq x_1 $ yields \begin{equation} \label{eq:pEstimateOtherBall} e^{-\beta_1 |x_1|} \bigg | \bigg(G_0 \bigg( \sum_{x_2 \neq x_1} e^{-\beta_2 |x_2|} M_{x_2} u - \mu J A^{\beta_2} u \bigg) \bigg)_{x_1} \bigg | \leq C \mu e^d \|u\|_{C^1(\mathbb{R}^3)} d. \end{equation} Note that it is crucial for deriving this bound that the sum runs only over $x_2$ different from $x_1$. Testing again by $ \varphi \in C_c^\infty(\mathbb{R}^3) $, we conclude that also the second term in Equation \eqref{eq:pSplitTT} tends to zero weakly in $ \dot{H}^{-1}(\mathbb{R}^3)$. Convergence of the higher order terms is proven by induction. \end{proof} \section{The Poisson Equation} \label{sec:Poisson} In order to apply the method directly to the Poisson equation, we need to change the spaces we work in, so that the Poisson equation becomes solvable in the whole space. \begin{definition} We define the homogeneous Sobolev space $\dot{H}^1(\mathbb{R}^3)$ as the closure of $ C_c^\infty(\mathbb{R}^3) $ with respect to the $L^2$-norm of the gradient and denote its dual by $\dot{H}^{-1}(\mathbb{R}^3)$. Moreover, for an open set $\Omega \subset \mathbb{R}^3$, we define the space $\dot{H}_0^1(\Omega)$ to be $\{u \in \dot{H}^1 \colon u = 0 ~ \text{in}~ \mathbb{R}^3 \backslash \Omega\}$. \end{definition} Note that, with these definitions, the Laplacian is an isometry from $\dot{H}^1(\mathbb{R}^3)$ onto $ \dot{H}^{-1}(\mathbb{R}^3)$. As in the previous section, we denote $G_0 = (-\Delta)^{-1}$. The following lemma corresponds to Lemma \ref{lem:scrpoibdry}. \begin{lemma} \label{lem:poibdry} Let $\Omega \subset \mathbb{R}^3$ be open. Then, for every $ f \in \dot{H}^{-1}(\mathbb{R}^3) $, the problem \begin{equation} \begin{aligned} \label{eq:poissonBall} -\Delta u &= f \quad \text{in} ~ \mathbb{R}^3 \backslash \overline{\Omega}, \\ u &= 0 \quad \text{in} ~ \overline{\Omega} \end{aligned} \end{equation} has a unique weak solution $ u \in \dot{H}^1(\mathbb{R}^3) $.
Moreover, the solution of Problem \eqref{eq:poissonBall} is given by \begin{equation} P_{\Omega} G_0 f, \end{equation} where $ P_{\Omega} $ is the orthogonal projection from $ \dot{H}^1 (\mathbb{R}^3) $ to the subspace $ \dot{H}^1_0(\mathbb{R}^3 \backslash \overline{\Omega})$. \end{lemma} As before, we define \[ Q_i = 1 - P_i, \] where $P_{i} := P_{B_i}$ are the projection operators from Lemma \ref{lem:poibdry}. Moreover, we note as in \eqref{eq:characterizationOfOrthogonal} that $Q_i$ is the orthogonal projection to \[ \dot{H}^1_0(\mathbb{R}^3 \backslash \overline{B_i})^\perp = \{ v \in \dot{H}^1(\mathbb{R}^3) \colon -\Delta v = 0 \text{ in } \mathbb{R}^3 \backslash \overline{B_i}\}. \] As mentioned before, the operator $ \sum_i Q_i$, which we have denoted $L$ for the screened Poisson equation, will in general not be a bounded operator for infinitely many particles. This is due to the long range interactions of the Laplacian. Therefore, we use a spatial cutoff to define the operator $L$ for the Poisson equation. \begin{definition} \label{def:pL} We define \[ L := \sum_i e^{-|x_i|} Q_i. \] \end{definition} \begin{remark} The choice of the specific exponential cutoff was only made for definiteness and to make the proof of the estimate for $L$ (cf. Lemma \ref{lem:Lbounded}) as analogous to the screened Poisson equation as possible. However, any cutoff that maps $\dot{H}^1(\mathbb{R}^3)$ to $ \dot{H}^{-1}(\mathbb{R}^3)$ would work (note that $\dot{H}^1(\mathbb{R}^3)$ is not contained in $ \dot{H}^{-1}(\mathbb{R}^3)$). In particular, we could choose a polynomial cutoff with sufficiently fast decay.
\end{remark} \subsection{Convergence of the Modified Method of Reflections} \label{sec:ConvergenceOfTheSolutionRepresentation} \begin{lemma} \label{lem:Lbounded} The operator $L$ from Definition \ref{def:pL} is a well defined, bounded, nonnegative, self-adjoint operator on $\dot{H}^1(\mathbb{R}^3)$ with \[ \|L\| \leq (1+C\mu_0), \] where the constant $C$ depends only on $\kappa$ from Condition \ref{cond:particlesNotToClose}. \end{lemma} The proof follows the lines of the proof of the corresponding result for the screened Poisson equation, Proposition \ref{pro:Aone}. The only difference is that the exponential cutoff in the definition of $L$ replaces the exponential decay of the fundamental solution of the screened Poisson equation \eqref{eq:FundamentalScreened}. We omit the details of the proof. However, we state the lemma corresponding to Lemma \ref{lem:locH^-1} for further reference. \begin{lemma} \label{lem:pLocH^-1} Assume $ (f_i)_{i \in I} \subset \dot{H}^{-1}(\mathbb{R}^3) $ satisfy $ \operatorname{supp} f_i \subset \overline{B_i} $. Then, \[ \Big \| \sum_i e^{-|x_i|} f_i \Big \|_{\dot{H}^{-1}(\mathbb{R}^3)}^2 \leq (1+C\mu_0) \sum_i e^{-|x_i|} \|f_i \|_{\dot{H}^{-1}(\mathbb{R}^3)}^2, \] where the constant $C$ depends only on $\kappa$ from Condition \ref{cond:particlesNotToClose}. \end{lemma} As in Proposition \ref{pro:abstractProjection}, we would like to prove convergence of \[ (1-\gamma L)^n G_0 f = (1 - \sum_i \gamma e^{-|x_i|} Q_i)^n G_0 f \] for sufficiently small $\gamma >0$. The only difference is that, instead of putting the same small factor $\gamma$ in front of all the operators $Q_i$, we now have factors depending on the particle position due to the spatial cutoff $e^{-|x_i|}$ in Definition \ref{def:pL}. Nevertheless, we will see in Proposition \ref{pro:nonuniformAbstractProjection} below that convergence to the desired solution still holds for sufficiently small $\gamma$.
However, due to the spatial cutoff, $L$ lacks the coercivity on $\dot{H}_0^1(\mathbb{R}^3 \backslash K)^\perp$ the analogue of which we had in the case of the screened Poisson equation (cf. Lemma \ref{lem:LCoercive}): Clearly, if $u \in \dot{H}_0^1(\mathbb{R}^3 \backslash K)^\perp$ is non-zero only in particles very far away from the origin, then $\|Lu\|_{\dot{H}^1}$ is very small compared to $\|u\|_{\dot{H}^1}$. Hence, we cannot expect any result about uniform convergence of $(1-\gamma L)^n G_0$ from a purely abstract argument as in Proposition \ref{pro:abstractProjection}. Indeed, the farther the mass of the source term $f$ is from the origin, the slower we expect the convergence to take place. \begin{proposition} \label{pro:nonuniformAbstractProjection} Let $ H $ be a Hilbert space and let $ V_k \subset H$, $ k \in J$, be closed subspaces, where $J$ is a finite or countable index set. Define $Q_k$ to be the orthogonal projections from $ H$ to $V_k^\perp$. Let $ V = \cap_{k \in J} V_k$ and define $ P $ to be the orthogonal projection from $ H$ to $V$. Assume $\gamma_k >0$, $k \in J$, are chosen such that $ S:= \sum_{k \in J} \gamma_k Q_k$ defines a bounded operator with $\|S\| < 2$. Then, \[ \lim_{M\to\infty} (1-S)^M = P, \] pointwise in $H$. If $\|S\| \leq 1$, then for all $x \in H$, \begin{equation} \label{eq:posLargerNorm} ( S x, x)_H \geq \| S x \|_H^2, \end{equation} and \begin{equation} \label{eq:monotonPos} (S(1-S)x,(1-S)x)_H \leq (Sx,x)_H. \end{equation} \end{proposition} \begin{proof} The statement about convergence is proven in the same way as in Proposition \ref{pro:abstractProjection}. Observe that estimates \eqref{eq:posLargerNorm} and \eqref{eq:monotonPos} are trivially satisfied in $V$. We define again $T$ as the restriction of $ S $ to $ V^ \perp $ (in both the domain and the range).
Using the spectral theorem, we can assume $T$ to be a multiplication operator on $ H = L^2_\nu(X)$ for some measure space $(X,\mathcal{A},\nu)$, i.e., there exists a function $f \in L^\infty_\nu (X)$ such that $ T \varphi = f \varphi $ for all $ \varphi \in L^2_\nu(X)$. By assumption, we know $0 < f \leq 1$. Therefore, \[ (T \varphi, \varphi)_H = \int_X f \varphi^2 \, d \nu \geq \int_X f^2 \varphi^2 \, d \nu = \| T \varphi \|_H^2, \] and \[ (T (1-T) \varphi, (1-T)\varphi)_H = \int_X f(1-f)^2 \varphi^2 \, d \nu \leq \int_X f \varphi^2 \, d \nu = (T \varphi, \varphi)_H. \qedhere \] \end{proof} \begin{proof}[Proof of Theorem \ref{ConvWholeSpace}] We choose $ \gamma_0 \leq 1/\|L_r\| $. Proposition \ref{pro:Aone} ensures that this is possible in such a way that $\gamma_0$ depends only on $\mu_0$ and $\kappa$. Then, the assertion follows directly from Proposition \ref{pro:nonuniformAbstractProjection} and Lemma \ref{lem:poibdry}. \end{proof} \subsection{The Modified Method of Reflections on the Level of the Original Series} In this subsection, we will show how to compute the expansion of the term $(1-\gamma L)^n$ in order to obtain a series similar to the original series obtained by the Method of Reflections \eqref{eq:ProjectionSeries}. This is not only interesting in itself, but will also be used to derive the homogenization results, Theorems \ref{HomogLambZero} and \ref{HomogStokes}, in Section 4. This leads to the following definition and lemma. \begin{definition} \label{def:A_beta} Let $n\in \mathbb{N}_\ast$ and $ \beta \in \mathbb{N}_\ast^n$, where we denote $\mathbb{N}_\ast := \mathbb{N} \backslash \{0\}$. Then, we define the operator $A_\beta \colon \dot{H}^1(\mathbb{R}^3) \to \dot{H}^1(\mathbb{R}^3)$ by \begin{align} A_\beta = \sum_{i_1} e^{-\beta_1|x_{i_1}|} Q_{i_1} \sum_{i_2 \neq i_1} e^{-\beta_2|x_{i_2}|} Q_{i_2} \cdots \!\! \sum_{i_n \neq i_{n-1}} \!\! e^{-\beta_n|x_{i_n}|} Q_{i_n}.
\end{align} \end{definition} \begin{lemma} \label{lem:powersOfL} For all $n \in \mathbb{N}_\ast $, the following identity holds \[ (L_r)^n = \sum_{l=1}^n \sum_{\substack{\beta \in \mathbb{N}_\ast^l \\ |\beta| = n}}A_\beta^{(r)}. \] In particular, for all $\beta \in \mathbb{N}_\ast^n$, $A_\beta$ is a bounded operator with \[ \|A_\beta\| \leq (1 + C \mu_0)^n, \] where $C$ is a universal constant. \end{lemma} \begin{proof} For $n=1$, the assertion is trivial. Let $n \geq 1$ and $\beta \in \mathbb{N}_\ast^l$ with $|\beta| = n$. We write $\beta = (\beta_1,\beta')$ for some $\beta' \in \mathbb{N}_\ast^{l-1}$. Using $Q_i^2 = Q_i$, it is easy to see that \[ L_r A_\beta = A_{(1,\beta)} + A_{(\beta_1 + 1 ,\beta')}. \] Observe that for every $1 \leq l \leq n+1$ and every $\gamma \in \mathbb{N}_\ast^l$ with $|\gamma| = n+1$, either $\gamma_1 = 1$, in which case there exists a unique $\beta \in \mathbb{N}_\ast^{l-1}$ with $|\beta| = n$ such that $\gamma = (1,\beta)$, or $\gamma_1 > 1$, in which case $l \leq n$ and there exists a unique $\beta \in \mathbb{N}_\ast^l$ with $|\beta| = n$ such that $\gamma = (\beta_1 + 1,\beta')$. Therefore, the assertion for $n+1$ follows from the one for $n$. For $\beta \in \mathbb{N}_\ast^n$ with $\beta_j =1$ for all $1 \leq j \leq n$, the estimate for the operators $A_\beta$ follows directly from the bound on $L$ (see Lemma \ref{lem:Lbounded}) and the identity that we have just proven, since all the operators $Q_{i}$ are positive. For general $\gamma \in \mathbb{N}_\ast^n$, we clearly have $\|A_\gamma\| \leq \| A_\beta\|$ if $\beta $ is chosen as above. This concludes the proof. \end{proof} \section{Introduction} In this paper, we consider the Poisson and Stokes equations in perforated domains \begin{equation} -\Delta u=f\ \ \text{in\ }\mathbb{R}^{3}\diagdown K\ \ ,\ \ u=0\ \ \text{in\ }% K, \label{S1E1}% \end{equation}% and \begin{equation} -\Delta v+\nabla p=f\ ,\ \ \nabla\cdot v=0\ \ \text{in\ }\mathbb{R}% ^{3}\diagdown K\ \ ,\ \ v=0\ \ \text{in\ }K.
\label{S1E2}% \end{equation} where $u$ is a scalar function, and $v$ is a vector field with values in $\mathbb{R}^{3}.$ Here, the set $K$ consists of mutually disjoint balls, \begin{equation} K=\bigcup_{i \in I }\overline{B_{r_i}\left( x_i\right) }, \label{S1E3}% \end{equation} where $I$ is a finite or countable index set. Problems analogous to \eqref{S1E1} and \eqref{S1E2} have often been studied in the physics literature using the so-called Method of Reflections. This method allows one to obtain formal series for the solutions of these equations which eventually should approximate them. However, the series obtained by means of the Method of Reflections are divergent for problems like \eqref{S1E1} and \eqref{S1E2} where $K$ extends to the whole space. This divergence takes place even if the source term $f$ is compactly supported or decays very fast at infinity. The purpose of this paper is to clarify the precise mathematical meaning of the formal series obtained by means of the Method of Reflections and to explain how these series can be used to obtain the asymptotic behaviour of the solutions of \eqref{S1E1} and \eqref{S1E2} in the limit of small balls and the number of balls per unit volume tending to infinity. \subsection{The Method of Reflections} The Method of Reflections in hydrodynamic equations was introduced by Smoluchowski (cf. \cite{Smo11}). This method allows one to approximate the solutions of boundary value problems for the Poisson or Stokes equations in domains with complex boundaries consisting of many connected components. We write any of those equations as \begin{equation} \mathcal{L}\phi=f\ \ \text{in\ \ }\Omega\, \label{S1E4}% \end{equation} where $\phi$ is the solution to be computed and $f$ is a suitable source term, and where $\mathcal{L}$ could be in principle any linear elliptic operator.
We will assume for definiteness that we wish to solve these equations in the domain $\Omega=\mathbb{R}^{d}\diagdown\bigcup_{j}C_{j},$ where the sets $C_{j},$ which from now on will be referred to as particles, are compact sets and $C_{j}\cap C_{k}=\emptyset$ if $j\neq k.\ $The boundary conditions might be Dirichlet, Neumann, Robin, or of any other type, as long as they are linear. We will write the boundary condition at each set $C_{j}$\ as \begin{equation} \mathcal{B}\phi=g_{j}\ \ \text{on\ }\partial C_{j}. \label{S1E5}% \end{equation} Suppose that the exterior boundary value problem outside each of the sets $C_{j}$ can be solved explicitly, i.e., we have explicit formulas (typically in terms of integrals) for the problems% \begin{equation} \mathcal{L}\psi_{j}=0\ \ \text{in\ \ }\mathbb{R}^{d}\diagdown C_{j}% ,\ \ \mathcal{B}\psi_{j}=h_{j}\text{ on }\partial C_{j}. \label{S1E6}% \end{equation} It is then possible to compute iteratively a solution for the boundary value problem \eqref{S1E4}, \eqref{S1E5} in $\Omega$ as follows. As zeroth order approximation to the solution of \eqref{S1E4}, \eqref{S1E5}, we take the solution $\Phi_{0}$ of \begin{equation} \mathcal{L}\Phi_{0}=f\ \text{\ in }\mathbb{R}^{d}.\ \ \ \ \ \label{S1E6a}% \end{equation} This solution cannot be expected to satisfy the boundary condition \eqref{S1E5}. We then define a first order approximation to $\phi$ by adding to $\Phi_{0}$ the solutions of the problems \eqref{S1E6} where $h_{j}$ is chosen as the difference between the desired boundary condition and the one given by $\Phi_{0}.$ More precisely, we define $\Phi_{1,j}$ as the solution of% \begin{equation} \mathcal{L}\Phi_{1,j}=0\ \ \text{in\ \ }\mathbb{R}^{d}\diagdown C_{j}% ,\ \ \mathcal{B}\Phi_{1,j}=g_{j}-\mathcal{B}\Phi_{0}\text{ on }\partial C_{j}.
\label{S1E7}% \end{equation} We then define $\Phi_{1}=\sum_{j}\Phi_{1,j}.$ Then $\Phi_{0}+\Phi_{1}$ yields a new approximation to $\phi.$ This new approximation does not satisfy the boundary conditions on $\bigcup_{j}\partial C_{j}.$ We can then define a new correction $\Phi_{2},$ defining functions $\Phi_{2,j}$ in a manner analogous to \eqref{S1E7}. More precisely, we inductively define functions $\Phi_{k,j}$ as% \begin{align} \mathcal{L}\Phi_{k,j} & =0\ \ \text{in\ \ }\mathbb{R}^{d}\diagdown C_{j},\ \ \mathcal{B}\Phi_{k,j}=-\mathcal{B}\left( \sum_{\ell\neq j}% \Phi_{k-1,\ell}\right) \ \text{on }\partial C_{j}\ \text{for\ }% k=2,3,...,\label{S1E7a}\\ \Phi_{k} & =\sum_{j}\Phi_{k,j} \label{S1E7b}% \end{align} Iterating the method, we obtain the partial sums $\Psi_{N}=\Phi_{0}+\Phi_{1}+\Phi _{2}+...+\Phi_{N}$. The reason why this sequence can be expected to converge to the solution of the boundary value problem \eqref{S1E4}, \eqref{S1E5} is that $\Psi_N$ satisfies \eqref{S1E4} and, by induction, \[ \mathcal{B} \Psi_{N} = g_j - \mathcal{B} \Phi_{N+1,j} \ \text{on }\partial C_{j}. \] There are several variations of the Method of Reflections in the literature. In some cases the corrections $\Phi_{k,j}$ are not computed simultaneously for all the particles but sequentially in each of the particles (cf. for instance \cite{Luke}). Nevertheless, the main idea of the method is always the same: it consists in recursively adding the corrective terms required to have the desired boundary conditions. Variations of the Method of Reflections have been used extensively to compute solutions of Poisson and Stokes equations (cf. \cite{HapBre}, \cite{IB01}, \cite{TT97}, and \cite{Kirp} to mention only a few). However, the mathematical results yielding rigorous conditions on the convergence of this method and its precise range of applicability are much more limited.
Convergence of the Method of Reflections has been considered in \cite{Luke} for a particular type of boundary conditions which arise naturally in sedimentation problems, and in \cite{Tra} for Dirichlet boundary conditions. There are several clear difficulties that one encounters when trying to prove the convergence of the method described above to the solution $\phi.$ If there are infinitely many particles $C_{j}$, it is not clear whether the functions $\Phi_{k}$ are well defined since they are given by a series with infinitely many terms. Actually, the divergence of these series might be expected in this situation because the solutions of Poisson and Stokes equations yield long range interactions which decay as power laws, i.e., too slowly. Even if the functions $\Phi_{k}$ are well defined, the convergence of $\left\{ \Psi_{N}\right\} _{N}$ as $N\rightarrow\infty$ is not clear. Divergence of this series might happen if the particles $C_{j}$ are too close and their mutual interactions do not tend to zero sufficiently fast. More precisely, divergence is expected if \[ \Big | \sum_{l\neq j}\Phi_{k,l}(x_{j}) \Big | >\left\vert \Phi _{k,j}(x_{j})\right\vert \] for most of the particles $j$. Indeed, this condition implies that adding $\Phi_{k}$ does not bring the function closer to the right boundary conditions at those particles $j$. In order to investigate the application of the Method of Reflections to Problem \eqref{S1E1}, let us consider for simplicity the special case of particles with equal radii distributed on a lattice, i.e., \begin{equation} \label{eq:lattice} K=\bigcup_{x \in (d \mathbb{Z})^3 }\overline{B_{r}\left( x\right) } , \end{equation} where $d>0$ is the particle distance. For the analysis of the convergence, it turns out that some characteristic length is of great importance, namely the screening length. This concept was introduced in the physics literature in \cite{MarqRoss}.
A precise mathematical discussion of this length and its relevance in phase transition problems driven by diffusive effects can be found in \cite{NO}, \cite{NV}. The following precise definition of the screening length is well suited to the context of the Method of Reflections. We consider equal charges on all particles that are contained in a ball of radius $\rho$. Then, we look at the potential at the particle which is at the center of this ball. This potential is the sum of the potential that is induced by the charge on that particular particle and the potential due to all the other particles. Then, the screening length is the critical radius $\rho$ at which those two portions are equal. More precisely, we define $u_j$ to be the unique solution with $u_j(x) \to 0$ as $|x| \to \infty$ of the problem \begin{align*} -\Delta u_{j} & =0\ \ \text{in \ }\mathbb{R}^{3}\diagdown B_{j},\\ u_{j} & =1\ \ \ \text{on\ \ }\partial B_{j}.% \end{align*} Then, the screening length is defined as \begin{equation} \Lambda:=\sup\Bigg\{ \rho>0:\sup_{\partial B_{j}}\Bigg( \sum_{l\neq j,\ x_{l}\in B_{\rho}}u_{l}\left( x\right) \Bigg) <1\Bigg\}. \label{S3E3}% \end{equation} If we now apply the Method of Reflections for the Poisson equation to the system containing only the particles in a cloud of radius $R$, i.e., for $K_R = K \cap B_R(0)$ with $K$ as in \eqref{eq:lattice}, a sufficient condition for convergence would be \[ R<\Lambda. \] Indeed, adding $\Phi_{k}$ would then really bring the function closer to the right boundary conditions for most of the particles, leading to the estimate \[ \Vert\Phi_{k+1}\Vert\leq\theta\Vert\Phi_{k}\Vert \] in a suitable norm, where \begin{equation} \label{condConvergence} \theta:=\sup_{\partial B_{k}}\left( \sum_{j\neq k,\ x_{j}\in B_{R}}% u_{j}\left( x\right) \right) <1.
\end{equation} This condition is similar to the sufficient condition obtained in \cite{Tra} for the convergence of the Method of Reflections for the Laplace equation in exterior domains with Dirichlet boundary conditions. The condition there reads \begin{equation} \label{cond:Traytak} \max_{i}\sum_{k\neq i}\frac{\frac{r}{\left\vert x_{i}-x_{k}\right\vert }% }{1-\frac{r}{\left\vert x_{i}-x_{k}\right\vert }}<1. \end{equation} In many-particle systems with small radii and typical distance between particles $d$, the Conditions \eqref{condConvergence} and \eqref{cond:Traytak} are roughly equivalent to% \[ d^{-3}r\max_{i}\sum_{k\neq i}\frac{d^{3}}{\left\vert x_{i}-x_{k}\right\vert }<C% \] with $C$ of order one. Approximating the sum by an integral and assuming that the particles are contained in a ball of radius $R$, this would be equivalent to% \[ d^{-3}r\int_{B_{R}\left( 0\right) }\frac{dy}{\left\vert y\right\vert }<C.% \] Thus, the screening length $\Lambda$ is of order $\sqrt{r^{-1}d^{3}}$. Therefore, for general particle distributions, it is natural to define \begin{equation} \label{eq:capacityBound} \mu_0 := \sup_{i \in I} r_i d_i^{-3}, \end{equation} where $d_i$ denotes the distance of the particle $i$ to the closest other particle. We will give the precise conditions for the particle distributions at the beginning of Section 2.
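For the reader's convenience, we note that the order of magnitude of $\Lambda$ follows from an explicit computation: \[ d^{-3}r\int_{B_{R}\left( 0\right) }\frac{dy}{\left\vert y\right\vert }=4\pi d^{-3}r\int_{0}^{R}\rho\, d\rho=2\pi d^{-3}rR^{2}, \] so the condition above holds precisely for $R<C^{\prime}\sqrt{r^{-1}d^{3}}$ with $C^{\prime}=\sqrt{C/(2\pi)}$.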
\subsection{Main Results for the Screened Poisson Equation} In order to avoid divergences but still allow for infinitely many particles, instead of the Poisson equation, we will first consider a modified version of the problem \eqref{S1E1}, namely the screened Poisson equation \begin{equation} -\Delta u+\xi^{-2}u=f\ \ \text{in\ }\mathbb{R}^{3}\diagdown K\ \ ,\ \ u=0\ \ \text{in\ }K\ \label{S1E8}% \end{equation} for some $\xi>0.$ The basic difference between \eqref{S1E1} and \eqref{S1E8} is that the Green's function associated to the second problem decreases exponentially over distances of order $\xi.$ Thus, the series defining the functions $\Phi_{k}$ are well defined. Moreover, particle interactions decay exponentially over distances of order $\xi$. Therefore, the series in the Method of Reflections converges for infinitely many particles provided $\mu_0 < C \xi^{-2} $, as stated in the following theorem. \begin{theorem} \label{SeriesPoissonScreened}Suppose that $\mathcal{L}=-\Delta+\xi^{-2}$ and $\mathcal{B}=I.$ There exists $C_0>0$ depending only on $\alpha$ and $\kappa$ from Conditions \ref{cond:particlesNotToClose} and \ref{cond:radiusSmallerXi}, and there exists $\varepsilon <1 $ depending only on $\kappa$ with the following properties. Let $\Omega=\mathbb{R}^{3}\diagdown K$ with $K$ as in \eqref{S1E3}, and let $g_{j}=0$ on $\partial C_{j}.$ Suppose also that $f\in H^{-1}\left( \mathbb{R}^{3}\right) $ and define $\Phi_{0}$ as in \eqref{S1E6a} and inductively the functions $\Phi_{k}$ by means of \eqref{S1E7a}, \eqref{S1E7b}. Suppose that $\mu_0$ defined in \eqref{eq:capacityBound} satisfies \begin{equation} \mu_0 < C_0 \xi^{-2}. \label{S1E9}% \end{equation} Then the series $\sum_{k=0}^{\infty}\Phi_{k}$ converges to the unique solution $u$ of \eqref{S1E8} in $H^{1}\left( \mathbb{R}^{3} \backslash K\right) $.
Moreover, \begin{equation} \label{eq:expConv} \Big\| \sum_{k=0}^{N}\Phi_{k} - u \Big\|_{H^1(\mathbb{R}^3 \backslash K)} \leq C \varepsilon^N \| f \|_{H^{-1}(\mathbb{R}^3)}, \end{equation} where $C$ depends only on $\xi$. In particular, the convergence is uniform in all particle configurations satisfying \eqref{S1E9}. \end{theorem} As indicated above, if $ \mu_0 \gtrsim\xi^{-2}$, the condition \eqref{S1E9} fails. In that case the series $\sum_{k=0}^{\infty}\Phi_{k}$ is in general divergent and the Method of Reflections cannot be applied, at least not in the form stated in Theorem \ref{SeriesPoissonScreened}. However, it turns out that it is possible to give a meaning to the formal series arising in the Method of Reflections in order to obtain a modified series which converges to the solution of \eqref{S1E8}. \begin{theorem} \label{CapOrderOne} Suppose that Conditions \ref{cond:particlesNotToClose} and \ref{cond:radiusSmallerXi} hold with some constants $\alpha$ and $\kappa$, and suppose \[ \mu_0 \leq C_\ast \xi^{-2}, \] for some constant $C_\ast < \infty$. Then, there exists a double sequence $q\left( k,N\right) $ defined for $k, N\in\mathbb{N}$ and $0\leq k\leq N,$ depending only on $\alpha$, $\kappa$, and $C_\ast$ with the following properties. For all $ k \in \mathbb{N}$, $\lim_{N \rightarrow \infty} q\left( k,N\right) =1$, and for all $ f \in H^{-1}(\mathbb{R}^3)$, the sequence \[ \Psi_{N}=\sum_{k=0}^{N}q\left( k,N\right) \Phi_{k}% \] converges as $N\rightarrow\infty$ to the unique solution $u$ of \eqref{S1E8} in $H^{1}\left( \mathbb{R}^{3} \backslash K\right) $. Moreover, there exists a constant $\varepsilon <1$ depending only on $\alpha$, $\kappa$, and $C_\ast$ such that \[ \| \Psi_N - u \|_{H^1(\mathbb{R}^3 \backslash K)} \leq C \varepsilon^N \| f \|_{H^{-1}(\mathbb{R}^3)}, \] where $C$ depends only on $\xi$.
\end{theorem} \subsection{The Summation Procedure and the Main Result for the Poisson Equation} Theorem \ref{CapOrderOne} can be thought of as a summation method for the original series $\sum_{k=0}^{\infty}\Phi_{k}.$ The precise construction of the sequence $q\left( k,N\right) $ will be given in Section 2. Theorems \ref{SeriesPoissonScreened} and \ref{CapOrderOne} refer to the Dirichlet problem for the screened Poisson equation \eqref{S1E8} containing a parameter $\xi$ which restricts the range of interaction between particles to the finite value $\xi.$ It is natural to ask if the result can be generalized to the Dirichlet problem for the Poisson equation \eqref{S1E1}, which corresponds to $\xi=\infty.$ In this case, the series \eqref{S1E7b} defining the functions $\Phi_{k}$ does not converge if the particles extend to the whole space $\mathbb{R}^3$ and then the Method of Reflections as formulated in Theorem \ref{SeriesPoissonScreened} becomes meaningless (see also \eqref{S1E9}). Nevertheless, using the formal series $\sum_{k=0}^{\infty}\Phi_{k}$, it is possible to construct an alternative series which converges to the solution of \eqref{S1E1}. However, the relation between the original (divergent) series and the modified one is much more involved than in the case of the screened Poisson equation (Theorem \ref{CapOrderOne}). Therefore, we will first give an idea of the summation method. The summation method is based on an interpretation of the Method of Reflections using an abstract idea from Functional Analysis in Hilbert spaces. It is well known that by means of convenient choices of Hilbert spaces $H$, the solution of many boundary value problems for a large class of equations of the form \eqref{S1E4} is equivalent to the orthogonal projection of $\mathcal{L}^{-1}f$ to the subspace of the Hilbert space for which the boundary conditions hold.
We denote here by $\mathcal{L}^{-1}$ the operator solving \eqref{S1E4} in the whole space, which can be easily computed using the Green's function associated to \eqref{S1E4}. We will denote this orthogonal projection operator providing the solution of the boundary value problem \eqref{S1E4} by $P$. This projection maps the Hilbert space $H$ into the subspace satisfying the boundary conditions, which will be denoted by $V.$ On the other hand, we can associate another orthogonal projection operator $P_{j}$ to the solution of the boundary value problem for a single particle $j$. This projection maps $H$ onto a subspace $V_{j}$ for which the boundary conditions are satisfied at the particle $j.$ We have $ V=\cap_{j}V_{j}.$ Let $Q_{j}$ denote the orthogonal projection from $H$ onto the orthogonal complement of $V_{j}$ in $H.$ It turns out that the partial sums of the Method of Reflections $\sum _{k=0}^{N}\Phi_{k}$ can be written as% \[ \bigg( 1-\sum_{j}Q_{j}\bigg) ^{N}\mathcal{L}^{-1}f. \] Thus, the Method of Reflections converges to the solution of \eqref{S1E4} if% \begin{equation} P=\lim_{N\rightarrow\infty}\bigg(1 -\sum_{j}Q_{j}\bigg) ^{N} \label{S3E4}% \end{equation} in some suitable way. This result would hold trivially if the subspaces $\left\{ V_{j}\right\} $ were mutually orthogonal. However, if the angles between some of these subspaces are too small, a geometrical argument shows that \eqref{S3E4} will fail. It is precisely condition \eqref{S1E9} that ensures that the convergence \eqref{S3E4} takes place for the Dirichlet problem for the screened Poisson equation \eqref{S1E8}. This is the main idea in the Proof of Theorem \ref{SeriesPoissonScreened}. A related geometrical interpretation of the Method of Reflections has been analyzed in \cite{Luke}.
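In the case of homogeneous boundary conditions $g_{j}=0$, this identity for the partial sums is readily verified for $N=1$: the function $\Phi_{1,j}$ belongs to $V_{j}^{\perp}$, while $\Phi_{0}+\Phi_{1,j}$ satisfies the boundary condition at the particle $j$, i.e., $\Phi_{0}+\Phi_{1,j}\in V_{j}$. By the uniqueness of the orthogonal decomposition, $\Phi_{1,j}=-Q_{j}\Phi_{0}$, whence \[ \Psi_{1}=\Phi_{0}+\sum_{j}\Phi_{1,j}=\bigg( 1-\sum_{j}Q_{j}\bigg) \mathcal{L}^{-1}f. \] The general case follows by induction.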
The method used in \cite{Luke} can be applied to systems with finitely many particles, and the convergence of the Method of Reflections used there, which does not treat all the particles simultaneously but sequentially, amounts to showing that \[ \lim_{N\rightarrow\infty}\bigg( \prod_{j}P_{j}\bigg) ^{N}=P,% \] where the product is taken over the finite number of particles chosen in any order. Actually, the Method of Reflections used in \cite{Luke} cannot be applied in the case of Dirichlet boundary conditions. Instead, it is applied to the Stokes system imposing the set of mixed boundary conditions at the particles satisfied by sedimenting inertialess particles, and to the Poisson equation with analogous boundary conditions. As indicated above, the convergence stated in \eqref{S3E4} cannot be expected if \eqref{S1E9} fails. However, a geometrical argument shows that, as long as the sum $\sum_{j}Q_{j}$ is convergent, the following convergence takes place:% \begin{equation} P=\lim_{N\rightarrow\infty}\bigg( I-\gamma\sum_{j}Q_{j}\bigg) ^{N}, \label{S3E5}% \end{equation} if $\gamma>0$ is small enough. Actually, the right-hand side can be written as a series directly related to the original series $\sum_{k=0}^{N}\Phi_{k}$, as in Theorem \ref{CapOrderOne}. For the Poisson equation with particles extending to the whole space, the series $\sum_{j}Q_{j}$ is in general divergent. However, a similar idea can be applied by including in $\gamma$ an additional dependence on the particle position.
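A trivial example already illustrates why the damping factor $\gamma$ is needed: take $H=\mathbb{R}$ and $V_{1}=V_{2}=\left\{ 0\right\} $, so that $Q_{1}=Q_{2}=1$ and $P=0$. Then \[ \bigg( 1-\sum_{j}Q_{j}\bigg) ^{N}=\left( -1\right) ^{N}% \] does not converge as $N\rightarrow\infty$, whereas $\big( 1-\gamma\sum_{j}Q_{j}\big) ^{N}=\left( 1-2\gamma\right) ^{N}\rightarrow0=P$ for every $0<\gamma<1$.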
\begin{theorem} \label{ConvWholeSpace} Let $f\in \dot{H}^{-1}\left( \mathbb{R}^{3}\right) .$ There exists a $\gamma_0 >0$ depending only on $\mu_0$ from Equation \eqref{eq:capacityBound} and $\kappa$ from Condition \ref{cond:particlesNotToClose} such that the sequence \begin{equation} \bigg( 1-\gamma\sum_{j}e^{-\left\vert x_{j}\right\vert }Q_{j}\bigg) ^{N} (-\Delta)^{-1} f\ \label{S3E6}% \end{equation} converges as $N\rightarrow\infty$ to the solution of \eqref{S1E1} in $\dot{H}^{1}\left( \mathbb{R}^{3}\right) $ for all $\gamma < \gamma_0$. \end{theorem} \begin{remark} We denote by $\dot{H}^1(\mathbb{R}^3) := \{ v \in L^6(\mathbb{R}^3) \colon \nabla v \in L^2 (\mathbb{R}^3) \}$ the homogeneous Sobolev space and by $\dot{H}^{-1}\left( \mathbb{R}^{3}\right) $ its dual space. \end{remark} \subsection{Homogenization Results} To illustrate the possible use of the Method of Reflections, we will give a proof of classical homogenization results in perforated domains using only the tools developed in this paper. For simplicity we will only consider regular particle configurations \eqref{eq:lattice}. We have already explained the importance of the quantity $d^{-3}r$ when we introduced the screening length $\Lambda$. Furthermore, we can draw the following analogy to the theory of electrostatics. The electrostatic capacity of a conductor is the charge induced on it by a unit difference of potential. In the case of the system under consideration, we consider the difference of $u$ between the surface of a sphere and points sufficiently far from it, at distances of the order of the particle distance. It turns out that $d^{-3}r$ is of the order of the density of the electrostatic capacity of the particles of the system. We recall that the electrostatic capacity of a sphere of radius $r$ is $4\pi r$ (cf. \cite{Jackson}).
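Indeed, this value follows from a standard computation: the capacitary potential of $\overline{B_{r}\left( 0\right) }$ is $u\left( x\right) =r/\left\vert x\right\vert $ for $\left\vert x\right\vert \geq r$, and \[ \operatorname{cap}\big( \overline{B_{r}\left( 0\right) }\big) =\int_{\mathbb{R}^{3}\diagdown B_{r}\left( 0\right) }\left\vert \nabla u\right\vert ^{2}dx=4\pi r^{2}\int_{r}^{\infty}\frac{d\rho}{\rho^{2}}=4\pi r. \]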
The role of the electrostatic capacity in the solution of the Dirichlet problem for the Laplace equation in perforated domains was already recognized in \cite{CiMu}, \cite{MK}. The question considered in those works was the homogenization problem% \begin{equation} -\Delta u_{r}=f\ \ \text{in\ }\Omega\diagdown K_{r}\ \ ,\ \ u_{r}% =0\ \ \text{in\ }K_{r}\cup\partial\Omega,\ \ \label{S2E2}% \end{equation} where $\Omega$ is an open bounded subset of $\mathbb{R}^{n}$ and $K_{r}$ is the sequence of domains% \begin{equation} K_{r}=\bigcup_{x\in\left( d\mathbb{Z}\right)^{3} }\overline{B_{r}\left( x\right) }, \label{S2E6}% \end{equation} where the density of electrostatic capacity $\mu=\frac{4\pi r}{d^{3}}$ is assumed to be constant. It was proved in \cite{CiMu} that for $f\in L^{2}\left( \Omega\right) $ the sequence of solutions $u_{r}$ converges weakly in $H^{1}\left( \Omega \right) $ as $r\rightarrow0$ to the solution of \begin{equation} -\Delta u+\mu u=f\ \ \text{in\ }\Omega\ \ ,\ \ u=0\ \ \text{in\ }% \partial\Omega. \label{S2E3}% \end{equation} The results of \cite{CiMu} do not require the assumption that $\mu$ is constant, and more general particle configurations than the ones in \eqref{S1E3} can be considered. Generalizations have been developed, including more general elliptic operators, in particular Stokes equations \cite{All}, \cite{DGR}, \cite{MK}. Most of the homogenization results for elliptic problems have been obtained in bounded domains. The homogenization problem associated to \eqref{S2E2} has been considered in \cite{NV1}, \cite{NV}. In particular, it was proved in those papers that, assuming $f\in L^{\infty}\left( \mathbb{R}^{n}\right) $, the unique bounded solutions of \eqref{S2E2} converge weakly in $H_{loc}^{1}\left( \mathbb{R}^{n}\right) $ as $r\rightarrow0$ to the solution of \eqref{S2E3} with $\Omega=\mathbb{R}^{n}$.
The proof of the homogenization results in \cite{NV1}, \cite{NV} relies heavily on the derivation of the so-called screening estimate, which states that the fundamental solution for the Laplace equation in a perforated domain with homogeneous Dirichlet boundary conditions decreases exponentially over distances of the order of the screening length $\Lambda=\frac{1}{\sqrt{\mu}}.$ The proof of this estimate given in \cite{NV} uses the maximum principle for second order elliptic operators and therefore the proof cannot be easily generalized to higher order operators. Since the convergence result in Theorem \ref{ConvWholeSpace} is uniform in particle configurations as in \eqref{S2E6} if the capacity density remains bounded, it can also be used to derive homogenization results without resorting to Maximum Principle arguments. \begin{theorem} \label{HomogLambZero} Suppose that $f\in H^{-1}\left( \mathbb{R}^{3}\right) .$ Then, the problems \eqref{S1E1} with $K=K_{r}$ as in \eqref{S2E6} and constant $\mu = \frac{4\pi r}{d^{3}}$ have unique solutions $u_{r} \in H^{1}\left( \mathbb{R}^{3}\right) $. In the limit $ r \to 0$, $u_r$ converges weakly in $H^{1}\left( \mathbb{R}^{3}\right)$ to the unique solution $u\in H^{1}\left( \mathbb{R}^{3}\right) $ of the problem \begin{equation} -\Delta u+\mu u=f\ \ \text{in\ }\mathbb{R}^{3}. \end{equation} \end{theorem} An analogous result can also be proved for the solutions of the equation \eqref{S1E8} with $K=K_{r}$ and $r\rightarrow0$. In that case, the limit equation reads \[ -\Delta u+(\xi^{-2}+\mu) u=f. \] The previous results can also be obtained for the Stokes equations. In this case we need to make precise the meaning of solving the equations \eqref{S1E6a}, \eqref{S1E7}. We will use the standard procedure of solving the equations in the space of divergence free functions using the pressure as a suitable Lagrange multiplier.
We will say that $\phi$ is a solution of the equation $\mathcal{L}_{Stokes}\left( \phi\right) =f$ in $U$\ with $\phi=0$ in $\partial U$ with $f\in \dot{H}^{-1}\left( U;\mathbb{R}^3\right) $ and $U$ an open set of $\mathbb{R}^{3}$ if $\phi\in \dot{H}^{1}\left( U;\mathbb{R}^3\right) ,$ we have $\nabla\cdot\phi=0$, and there exists $p\in L^{2}\left( U\right) $ such that $\phi$ is a weak solution of% \begin{equation} -\Delta\phi+\nabla p=f\ ,\ \nabla\cdot\phi=0\ \label{S2E8}% \end{equation} in the domain $U.$ \begin{theorem} \label{StokesConvergence}Let $f\in \dot{H}^{-1}\left( \mathbb{R}^{3};\mathbb{R}^3\right) .$ There exists a $\gamma_0 >0$ depending only on $\mu_0$ from Equation \eqref{eq:capacityBound} and $\kappa$ from Condition \ref{cond:particlesNotToClose} such that the sequence \begin{equation} \bigg( 1-\gamma\sum_{j}e^{-\left\vert x_{j}\right\vert }Q_{j}\bigg) ^{N} \mathcal{L}_{Stokes}^{-1} f\ % \end{equation} converges, as $N \rightarrow \infty$, to the solution of \eqref{S1E2} in $\dot{H}^{1}\left( \mathbb{R}^{3};\mathbb{R}^3\right) $ for all $\gamma < \gamma_0$. \end{theorem} Using Theorem \ref{StokesConvergence}, we can also derive homogenization results for the Stokes equations. \begin{theorem} \label{HomogStokes} Suppose that $f\in H^{-1}\left( \mathbb{R}^{3};\mathbb{R}^3\right) .$ Then, the problems \eqref{S1E2} with $K=K_{r}$ as in \eqref{S2E6} and constant $\mu = \frac{6\pi r}{d^{3}}$ have unique solutions $u_{r} \in H^{1}\left( \mathbb{R}^{3} ; \mathbb{R}^3\right) $. In the limit $ r \to 0$, $u_r$ converges weakly in $ H^{1}\left( \mathbb{R}^{3} ;\mathbb{R}^3\right) $ to the unique solution $u \in H^{1}\left( \mathbb{R}^{3} ;\mathbb{R}^3\right)$ of \begin{equation} -\Delta u+\nabla p+\mu u=f\ \ \text{in\ }\mathbb{R}^{3},\ \ \nabla\cdot u=0\ \label{S2E9}. \end{equation} \end{theorem} \bigskip Related results have been obtained in \cite{All}, \cite{DGR}, \cite{MK}.
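The constant $6\pi r$ in the capacity density $\mu=\frac{6\pi r}{d^{3}}$ of Theorem \ref{HomogStokes} is the Stokes counterpart of the electrostatic capacity $4\pi r$. Indeed, by the classical Stokes formula, a sphere of radius $r$ translating with velocity $V$ through a fluid of unit viscosity experiences the drag force
\[
F=-6\pi rV,
\]
so that $6\pi r$ plays the role of the capacity of a single particle, and $\frac{6\pi r}{d^{3}}$ is again a density of capacity. This is consistent with the appearance of the friction term $\mu u$ in the limit equations \eqref{S2E9}.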
The system of equations \eqref{S2E9} is known as the Brinkman equations, which are a well established model in the theory of filtration. They provide an interpolation between the Stokes equation and Darcy's law in porous media (see \cite{SP} and \cite{All2}). All the results in those papers have been obtained in bounded domains. Theorem \ref{HomogStokes} above provides a new proof of this type of homogenization results by means of the Method of Reflections. Note that the homogenization result in Theorem \ref{HomogStokes} is valid for particle distributions in the whole space. However, we do not think that the Method of Reflections is really needed to prove homogenization results in unbounded domains, since the methods of \cite{DGR} can presumably be adapted to prove Theorem \ref{HomogStokes}. We just want to emphasize that the convergence result in Theorem \ref{StokesConvergence} is strong enough to allow the derivation of the homogenization limit. \subsection{Plan of the Paper} The rest of the paper is organized as follows. In Section 2, we will prove Theorems \ref{SeriesPoissonScreened} and \ref{CapOrderOne}. To do so, after recalling a basic lemma from Functional Analysis, we will give the precise formulation of the Method of Reflections in terms of orthogonal projections in Section 2.2, which will directly lead to necessary and sufficient conditions for convergence of the series obtained by the Method of Reflections. In Section 2.3, we will provide the necessary estimate to prove Theorem \ref{SeriesPoissonScreened}. In Section 2.4, we will explain in detail the geometrical idea leading to the summation method yielding Theorem \ref{CapOrderOne}. In Section 2.5, we will analyze the summation method on the level of the original series obtained by the Method of Reflections. In Section 3, we will explain the modification needed to adapt the method derived in Section 2 to the Poisson equation.
This modification basically consists in a spatial cutoff in order to solve the problem of divergent series due to the long range structure of the Poisson equation. This leads to the proof of Theorem \ref{ConvWholeSpace}. In Section 4, we prove the homogenization result, Theorem \ref{HomogLambZero}. In Section 4.1, we show that Problem \eqref{S1E1} with $K$ as in \eqref{eq:lattice} is well posed in $H^1(\mathbb{R}^3)$ due to the existence of a Poincaré inequality in $H^1_0(\mathbb{R}^3 \backslash K)$. Thereafter we give a formal derivation of the homogenization result based on the original formal series obtained by the Method of Reflections. Finally, we give the rigorous proof of Theorem \ref{HomogLambZero} using the tools and results from the previous sections. In Section 5, we apply the method to the Stokes equations \eqref{S1E2} in order to prove Theorems \ref{StokesConvergence} and \ref{HomogStokes}. Since most parts work exactly the same way as for the Poisson equation, we refrain from going through all the details again, but rather point out the necessary modifications. \input{ScreenedPoisson2.tex} \input{PoissonReflection.tex} \input{Homogenization.tex} \input{Stokes.tex} \input{Conclusion.tex} \section*{Acknowledgement} The authors acknowledge support through the CRC 1060, the mathematics of emergent effects, of the University of Bonn, that is funded through the German Science Foundation (DFG). \printbibliography \end{document} \section{The Screened Poisson Equation} \label{sec:ScreenedPoisson} We will now specify the particle distributions that we consider throughout Section \ref{sec:ScreenedPoisson} and Section \ref{sec:Poisson}. In Section \ref{sec:Homogenization}, we only consider special configurations. For a finite or countable index set $I$ we denote by $(x_i)_{i \in I}$ and $(r_i)_{i \in I}$ the positions and radii of the particles.
We denote the space that the particles occupy by \[ K := \bigcup_{i \in I} B_i, \] where we abbreviate $B_i = B_{r_i}(x_i)$. We only consider spherical particles, but everything also works if we instead assume that the $i$-th particle is contained in $B_i$. For each particle $i \in I$ we define the distance to the nearest other particle \[ d_i := \inf_{j \neq i} |x_i - x_j|. \] Then the sets $B_{\frac{d_i}{2}}(x_i)$ are disjoint. In the following, we will always assume that the following two conditions are satisfied. \begin{condition} \label{cond:Capacity} There exists a constant $\mu_0$ such that \[ r_i d_i^{-3} \leq \mu_0 \quad \text{for all} ~ i \in I. \] \end{condition} \begin{condition} \label{cond:particlesNotToClose} There exists a constant $\kappa > 1$ such that \[ \frac{d_i}{2} > \kappa r_i \quad \text{for all} ~ i \in I. \] \end{condition} \begin{remark} Without loss of generality, we will always assume $\kappa \leq 2$ in the following. \end{remark} The second condition is not very restrictive. First, it is satisfied for any finite number of non-touching particles. Second, it is also satisfied for infinitely many particles, if all the radii are sufficiently small and Condition \ref{cond:Capacity} holds. Condition \ref{cond:Capacity} can be viewed as an upper bound for the capacity density of the particles. In this Section, we will additionally impose the following condition, which is only important when considering the screened Poisson equation \eqref{S1E8} and is trivially satisfied for sufficiently small particles. \begin{condition} \label{cond:radiusSmallerXi} There exists a constant $ \alpha $ such that \[ r_i \leq \alpha \xi \quad \text{for all} ~ i \in I. \] \end{condition} \subsection{Preliminaries of Functional Analysis.} In the following, $G_0 := (-\Delta + \xi^{-2})^{-1}$ will denote the solution operator for the screened Poisson equation in the whole space $\mathbb{R}^3$.
Then, $ G_0 f = W_\xi \ast f$, where \begin{equation} \label{eq:FundamentalScreened} W_\xi(x) = \frac{e^{-\frac{|x|}{\xi}}}{4 \pi |x|}. \end{equation} Moreover, $G_0$ is an isometric isomorphism from $ H^{-1}(\mathbb{R}^3) $ to $ H^1(\mathbb{R}^3) $ if we modify the standard scalar product in $ H^1(\mathbb{R}^3) $ according to \[ (u,v)_{H^1_\xi} := (\nabla u,\nabla v)_{L^2} + \xi^{-2} (u,v)_{L^2}. \] We will always consider $H^1(\mathbb{R}^3)$ endowed with this scalar product. Furthermore, we will denote the dual pairing between $H^{-1}$ and $H^1$ by $\langle \cdot , \cdot \rangle$. Moreover, we will use the following notation that differs slightly from the usual terminology. Given any closed set $K\subset\mathbb{R}^{3}$ we will denote as $H_{0}^{1}\left( \mathbb{R}^{3}\diagdown K\right) $ the closure in the $H^{1}\left( \mathbb{R}^{3}\right) $ topology of the set of functions $u\in C_{c}^{\infty}\left( \mathbb{R}^{3}\right) $ such that $u=0$ in $K.$ Notice that with this convention the elements of $H_{0}^{1}\left( \mathbb{R}^{3}\diagdown K\right) $ are also elements of $H^{1}\left( \mathbb{R}^{3}\right) .$ We now recall a classical Functional Analysis result which allows us to interpret the solutions of the Dirichlet problem for elliptic equations using projections. These projection operators will be an essential tool in the rest of this paper. \begin{lemma} \label{lem:scrpoibdry} Let $\Omega \subset \mathbb{R}^3$ be open. Then, for every $ f \in H^{-1}(\mathbb{R}^3) $, the problem \begin{equation} \begin{aligned} \label{eq:screenedPoissonBall} -\Delta u + \xi^{-2} u &= f \quad \text{in} ~ \mathbb{R}^3 \backslash \overline{\Omega}, \\ u &= 0 \quad \text{in} ~ \overline{\Omega} \end{aligned} \end{equation} has a unique weak solution $ u \in H^1(\mathbb{R}^3) $.
Moreover, the solution of Problem \eqref{eq:screenedPoissonBall} is given by \begin{equation} P_\Omega G_0 f, \end{equation} where $ P_\Omega $ is the orthogonal projection from $ H^1 (\mathbb{R}^3) $ to the subspace $ H^1_0(\mathbb{R}^3 \backslash \overline{\Omega})$. \end{lemma} \begin{proof} Existence and uniqueness follow directly from the Riesz Representation Theorem since the weak formulation reads \[ (u,v)_{H^1(\mathbb{R}^3)} = \langle v, f \rangle \qquad \text{for all} \quad v \in H^1_0(\mathbb{R}^3 \backslash \overline{\Omega}). \] Furthermore, denoting by $u$ the solution to Problem \eqref{eq:screenedPoissonBall}, we have for $ v \in H^1_0(\mathbb{R}^3 \backslash \overline{\Omega}) $ \[ ( G_0 f - u, v)_{H^1(\mathbb{R}^3)} = \langle v,f\rangle - \langle v,f\rangle = 0. \] Hence, $u = P_{\Omega} G_0 f$. \end{proof} \subsection{Formulation of the Method of Reflections Using Orthogonal Projections} We now recall the Method of Reflections and give directly an interpretation involving the projection operators mentioned in the introduction. These projection operators are defined by \begin{equation} \label{eq:Q} Q_{i} = 1 - P_{i}, \end{equation} where $P_{i} := P_{B_i}$ are the projection operators from Lemma \ref{lem:scrpoibdry}. Thus, $Q_{i}$ is the orthogonal projection in $H^1(\mathbb{R}^3)$ to the subspace $H^1_0(\mathbb{R}^3 \backslash \overline{B_i})^\perp$. Equivalently, for $u \in H^1(\mathbb{R}^3)$, $Q_{i} u$ solves \begin{align} \label{eq:characterizationOfQ} -\Delta Q_{i} u + \xi^{-2} Q_{i} u &= 0 \quad \text{in} ~ \mathbb{R}^3 \backslash \overline{B_i}, \\ Q_{i} u &= u \quad \text{in} ~ \overline{B_i}. \end{align} This also yields the characterization \begin{equation} \label{eq:characterizationOfOrthogonal} H_0^1(\mathbb{R}^3 \backslash \overline{B_i})^\perp = \{ v \in H^1(\mathbb{R}^3) \colon -\Delta v + \xi^{-2} v = 0 \text{ in } \mathbb{R}^3 \backslash \overline{B_i}\}.
\end{equation} Here and in the following, we write "$ f = 0$ in $\Omega$" for some $f \in H^{-1}(\mathbb{R}^3)$ if $f$ is supported in $\mathbb{R}^3 \backslash \Omega$. For $f \in H^{-1}(\mathbb{R}^3)$, we define $\Phi_0 := G_0 f$. Then, the first order correction for a particle $i$ is given by $\Phi_{1,i} := -Q_{i} \Phi_0$, and the first order approximation for the solution is obtained by subtracting from $\Phi_0$ the correctors $\Phi_{1,i}$ for all the particles, i.e., \[ \Psi_1 = \Phi_0 + \sum_{i\in I} \Phi_{1,i}. \] Similarly, the $k$-th order correction for a particle $i$ is given by \begin{equation} \Phi_{k,i} = - Q_{i} \sum_{j \neq i} \Phi_{k-1,j}. \end{equation} Then, we define \begin{equation} \label{eq:kthOrderCorrection} \Phi_k = \sum_i \Phi_{k,i} \end{equation} and the $k$-th order approximation $\Psi_k = \Phi_0 + \dots + \Phi_k$. Therefore, the Method of Reflections yields the series \begin{equation} \label{eq:ProjectionSeries} G_0 f - \sum_{i_1} Q_{i_1} G_0 f + \sum_{i_1} \sum_{ i_2 \neq i_1} Q_{i_1} Q_{i_2} G_0 f - \sum_{i_1} \sum_{ i_2 \neq i_1} \sum_{ i_3 \neq i_2} Q_{i_1} Q_{i_2} Q_{i_3}G_0 f+ \dots. \end{equation} As mentioned in the introduction, we want to rewrite this series in terms of powers of a certain operator. To do so, the key observation is that \begin{equation} \label{eq:keyForPowers} \Phi_{k,i} = - Q_{i} \sum_{j \neq i} \Phi_{k-1,j} = -Q_{i} \Psi_{k-1}. \end{equation} This is due to the fact that \begin{equation} \label{eq:boundaryOfPsi} \Psi_{k-1} = \sum_{j \neq i} \Phi_{k-1,j} = -\Phi_{k,i} \quad \text{in} ~ B_i, \end{equation} which follows inductively from the definition of $\Psi_{k}$ and $\Phi_{k,i}$ together with the fact that $Q_{i} v$ only depends on the values of $v$ in $\overline{B_i}$. Therefore, we have \[ \Psi_{k+1} = \Psi_k -\sum_i Q_{i} \Psi_k, \] and thus, the partial sums of the scattering series are given by \begin{equation} \label{eq:SeriesAsPowers} \left (1- \sum_i Q_{i}\right)^n G_0 f.
\end{equation} \begin{definition} \label{def:L} The operator $ L \colon H^1(\mathbb{R}^3) \supset \mathcal{D}(L) \to H^1(\mathbb{R}^3) $ is defined as \begin{equation} \label{eq:definitionL} L = \sum_i Q_i. \end{equation} The domain $\mathcal{D}(L) $ of this operator consists of all functions $u \in H^1(\mathbb{R}^3)$ such that the series $\sum_i Q_i u$ converges. \end{definition} \begin{remark} We will show below (cf. Proposition \ref{pro:Aone}) that $L$ is a bounded operator on the whole of $H^1(\mathbb{R}^3)$. As mentioned in the introduction, this is due to the exponential decay in the fundamental solution of the screened Poisson equation and fails for the Poisson equation. \end{remark} \begin{remark} \label{rem:LselfAdjoint} We note that $\mathcal{D}(L) = H^1(\mathbb{R}^3)$ implies that $ L $ is a nonnegative self-adjoint operator, since the operators $ Q_i $ are orthogonal projections. \end{remark} \begin{theorem} \label{th:SeriesConvergentImpliesSolution} \begin{enumerate}[(i)] \item If the series \eqref{eq:ProjectionSeries} obtained by the Method of Reflections is absolutely convergent, then it yields a solution to the Dirichlet problem \eqref{S1E8}. \item The series \eqref{eq:ProjectionSeries} is absolutely convergent for every $f \in H^{-1}(\mathbb{R}^3)$ if the operator $L$ from Definition \ref{def:L} is a bounded operator on $H^1(\mathbb{R}^3)$ with $\|L\| < 2$. If the series \eqref{eq:ProjectionSeries} is convergent for every $f \in H^{-1}(\mathbb{R}^3)$, then $L$ defines a bounded operator on $H^1(\mathbb{R}^3)$ with $\|L\| \leq 2$. \item Assume $L$ is a bounded operator on $H^1(\mathbb{R}^3)$ with $\|L\| < 2$, and $L$ has a spectral gap, i.e., \[ \inf \{\lambda \in \sigma(L) \backslash \{0\}\} = c > 0, \] where $\sigma(L)$ denotes the spectrum of $L$.
Then, there exists $\varepsilon < 1$ depending only on $\|L\|$ and $c$ such that \begin{equation} \|(1-L)^n G_0 f - u \|_{H^1_\xi(\mathbb{R}^3)} \leq \varepsilon^n \|f\|_{H^{-1}_\xi(\mathbb{R}^3)} \qquad \text{for all} \quad f \in H^{-1}(\mathbb{R}^3), \label{eq:expConvergence} \end{equation} where $u$ denotes the solution to the Dirichlet problem \eqref{S1E8}. \end{enumerate} \end{theorem} \begin{proof} As above, we denote the partial sums of the series \eqref{eq:ProjectionSeries} by $\Psi_n$. Since $(-\Delta +\xi^{-2}) Q_i v = 0$ in $\mathbb{R}^3 \backslash \overline{B_i}$ for all $v \in H^1(\mathbb{R}^3)$ (cf. \eqref{eq:characterizationOfQ}), it follows that \[ (-\Delta +\xi^{-2}) \Psi_n = f \quad \text{in} ~ \mathbb{R}^3 \backslash K. \] Thus, this equation is also satisfied by the limit. By \eqref{eq:boundaryOfPsi} we have $\Psi_n \to 0$ in $B_i$, since $\Phi_{n+1,i}$ appears in the series \eqref{eq:ProjectionSeries} which we assumed to be absolutely convergent. This implies that the limit indeed solves \eqref{S1E8}. To prove the second statement, we observe that by \eqref{eq:SeriesAsPowers}, the partial sums of the series \eqref{eq:ProjectionSeries} can be written as $(1-L)^n G_0 f$. Since $G_0$ is an isometry, these partial sums only exist if $\mathcal{D}(L) = H^1(\mathbb{R}^3)$. Then, by Remark \ref{rem:LselfAdjoint}, $L$ is a nonnegative self-adjoint operator. Thus, by the spectral theorem for self-adjoint operators, up to an isometry, $ L $ is a multiplication operator $T$ on $ H := L^2_\nu(X)$ for some measure space $(X,\mathcal{A},\nu)$, i.e., there exists a measurable function $m \colon X \to [0,\infty)$ such that $ T \varphi = m \varphi $ for all $ \varphi \in \mathcal{D}(T)$. Thus, $(1-L)^n G_0 f$ corresponds to \[ (1-m)^n \varphi \] which converges for every $\varphi$ iff \[ -1 < (1-m) \leq 1 ~ \nu\text{-a.e.} \] Since $L$ is nonnegative, this is equivalent to $m < 2 $, $\nu$-a.e., and hence, a sufficient condition for convergence is $\|L \| < 2$, and a necessary condition is $\|L \| \leq 2$. If, in addition, $L$ has a spectral gap, then for $\nu$-a.e. $x$, $m(x) = 0$ or $ m(x) \geq c$ and \eqref{eq:expConvergence} follows. \end{proof} \begin{remark} \label{rem:kerL} It is essential to observe the following. If $L$ defines a bounded operator on $H^1(\mathbb{R}^3)$ with $\|L\| < 2$, then $(1-L)^n$ converges to the orthogonal projection to the kernel of $L$. Indeed, by decomposing any $u \in H^1(\mathbb{R}^3)$ into $u = u_1 + u_2$, where $u_1 \in \ker L$ and $u_2 \in (\ker L)^\perp$, we see that $(1-L)^n u_1 = u_1 $ and $ (1-L)^n u_2 \to 0$ using the spectral theorem as in the proof above. We recall that $L = \sum_i Q_i$, where $Q_i$ are orthogonal projections to $ H_0^1(\mathbb{R}^3 \backslash \overline{B_i})^\perp$. Therefore, \begin{equation} \label{eq:kerLasIntersection} \ker L = \bigcap_i H_0^1(\mathbb{R}^3 \backslash \overline{B_i}) = H_0^1(\mathbb{R}^3 \backslash K) =: V. \end{equation} Hence, the series \eqref{eq:ProjectionSeries} written as $(1-L)^n G_0 f$ converges to $P G_0 f$, where $P$ denotes the orthogonal projection onto $V$. However, this is just a different way to see that the series indeed converges to the solution of Problem \eqref{S1E8}. Indeed, the fact that $P G_0 f$ solves Problem \eqref{S1E8} follows directly from Lemma \ref{lem:scrpoibdry}. \end{remark} \subsection{Estimates for the Operator $L$} \label{sec:AnEstimateForTheFirstOrderTermInTheScatteringSeries} \begin{proposition} \label{pro:Aone} There exists a constant $C_1 > 0 $ depending only on $\kappa$ and $\alpha$ from Conditions \ref{cond:particlesNotToClose} and \ref{cond:radiusSmallerXi} such that \begin{equation} \label{eq:Aone} \| L \| \leq (1 + C_1 \xi^2\mu_0). \end{equation} \end{proposition} The key estimate for the proof of the above proposition is the following lemma. Roughly speaking, it states that correlations between $H^{-1}$ functions which are supported in the particles are controlled by the capacity density times the norms of the functions themselves.
\begin{lemma} \label{lem:locH^-1} Assume $(f_i)_{i \in I} \subset H^{-1}(\mathbb{R}^3)$ satisfies $ \operatorname{supp} f_i \subset \overline{B_i}$ for all $i \in I$. Then, \begin{equation} \label{eq:locH^-1} c \sum_i \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 \leq \Big \| \sum_i f_i \Big \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 \leq (1+C_1 \mu_0\xi^2) \sum_i \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)}^2, \end{equation} where $ c>0 $ is a universal constant and $C_1$ depends only on $\kappa$ and $\alpha$ from Conditions \ref{cond:particlesNotToClose} and \ref{cond:radiusSmallerXi}. \end{lemma} For the proof we need the following lemma. \begin{lemma} \label{lem:correlationest} Let $i,j \in I$ with $i \neq j$. Assume $ f \in H^{-1}(\mathbb{R}^3) $ is supported in $ \overline{B_j} $. Then, there exists a function $ v \in H^1_0(B_{\kappa r_i}(x_i)) $ such that $ v = G_0 f $ in $ B_i$, and \begin{equation} \|v\|_{H^1_\xi(\mathbb{R}^3)} \leq C \sqrt{r_i r_j} \frac{e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|} \|f\|_{H^{-1}_\xi(\mathbb{R}^3)}, \end{equation} for a constant $ C $ that depends only on $\kappa$ and $\alpha$ from Conditions \ref{cond:particlesNotToClose} and \ref{cond:radiusSmallerXi}. \end{lemma} \begin{proof} For $ z \in B_{\kappa r_i}(x_i)$, we define $ \theta \in C_c^\infty (B_{\kappa r_j}(z - x_j)) $ such that $ \theta = 1$ in $ B_{r_j}(z - x_j) $ and $ |\nabla \theta| \leq \frac{C}{r_j} $, where the constant depends on $\kappa$. We use that $ f $ is supported in $ \overline{B_j} $. Therefore, using the fundamental solution \eqref{eq:FundamentalScreened}, \begin{equation} \begin{aligned} |(G_0 f) (z)| &= |(W_\xi \ast f)(z)| = |((\theta W_\xi) \ast f)(z)| \\ &= |\langle (\theta W_\xi)(z-\cdot),f \rangle| \leq \|f\|_{H^{-1}_\xi(\mathbb{R}^3)} \|\theta W_\xi \|_{H^1_\xi(\mathbb{R}^3)}, \end{aligned} \label{eq:ptw1} \end{equation} and \begin{equation} \label{eq:ptw2} |\nabla (G_0 f) (z)| \leq \| f \|_{H_\xi^{-1}(\mathbb{R}^3)} \| \theta \nabla W_\xi \|_{H^1_\xi(\mathbb{R}^3)}.
\end{equation} Using Conditions \ref{cond:particlesNotToClose} and \ref{cond:radiusSmallerXi}, we estimate \begin{align} \|\theta W_\xi \|_{H^1_\xi(\mathbb{R}^3)} &\leq \| W_\xi \|_{H^1_\xi(B_{\kappa r_j}(z - x_j))} + \frac{C}{r_j}\| W_\xi \|_{L^2(B_{\kappa r_j}(z - x_j))} \\ &\leq C r_j^{3/2} e^{-\frac{|x_i - x_j|-\kappa r_j} {\xi}} \left( \frac{1}{(|x_i - x_j|-\kappa r_j)^2} + \frac{1}{r_j(|x_i - x_j|-r_j)} + \frac{1}{\xi(|x_i - x_j|-r_j)}\right) \\ & \leq C r_j^{1/2} \frac{ e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|}, \end{align} and \[ \|\theta \nabla W_\xi \|_{H^1_\xi(\mathbb{R}^3)} \leq C r_j^{1/2}\frac{e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|^2}. \] Now, we use another cutoff function $ \eta \in C_c^\infty (B_{\kappa r_i}(x_i)) $ such that $ \eta = 1$ in $ B_i $ and $ |\nabla \eta| \leq \frac{C}{r_i} $ to define $ v := \eta(G_0 f)$. Then, we get from the pointwise estimates on $ G_0 f $, \eqref{eq:ptw1} and \eqref{eq:ptw2}, \begin{align} \| v\|_{H^1_\xi(\mathbb{R}^3)} = \|\eta (G_0 f) \|_{H^1_\xi(\mathbb{R}^3)} &\leq \| G_0 f \|_{H^1_\xi(B_{\kappa r_i}(x_i))} + \frac{C}{r_i}\| G_0 f \|_{L^2(B_{\kappa r_i}( x_i))} \\ &\leq C \sqrt{r_i r_j} \frac{e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|} \|f\|_{H^{-1}_\xi(\mathbb{R}^3)}. \qedhere \end{align} \end{proof} \begin{proof}[Proof of Lemma \ref{lem:locH^-1}.] Let $ \eta_i \in C_c^\infty (B_{\kappa r_i}(x_i)) $ such that $ \eta_i = 1 $ in $ B_i $ and $ |\nabla \eta_i | \leq \frac{C}{r_i} $. Now, we observe that for all $ u \in H^1(\mathbb{R}^3) $ \[ \| u \|_{L^2(B_{\kappa r_i}(x_i))} \leq \| u \|_{L^6(B_{\kappa r_i}(x_i))} \|1\|_{L^3(B_{\kappa r_i}(x_i))} \leq C r_i \| \nabla u \|_{L^2(\mathbb{R}^3)}, \] where we have used the Gagliardo-Nirenberg-Sobolev inequality $\|u\|_{L^6(\mathbb{R}^3)} \leq C \| \nabla u \|_{L^2(\mathbb{R}^3)}$. Hence, \begin{equation} \| \eta_i u \|_{H^1_\xi(\mathbb{R}^3)} \leq \|u\|_{H^1_\xi(\mathbb{R}^3)} + \frac{C}{r_i} \| u \|_{L^2(B_{\kappa r_i}(x_i))} \leq C \| u \|_{H^1_\xi(\mathbb{R}^3)}.
\label{eq:cutoffest} \end{equation} On the other hand, denoting $f = \sum_i f_i$, \begin{align} \sum_i \| f_i \|_{H_\xi^{-1}(\mathbb{R}^3)}^2 &= \sum_i\langle G_0 f_i,f_i \rangle = \sum_i \langle \eta_i G_0 f_i,f_i\rangle \\ &= \sum_i \langle \eta_i G_0 f_i,f \rangle \leq \| f \|_{H^{-1}_\xi(\mathbb{R}^3)} \Big \|\sum_i \eta_i G_0 f_i \Big \|_{H^1_\xi(\mathbb{R}^3)}. \end{align} By taking squares on both sides and using the fact that the balls $ B_{\kappa r_i}(x_i) $ are disjoint together with the preliminary estimate \eqref{eq:cutoffest}, we deduce \[ \bigg (\sum_i \| f_i \|_{H_\xi^{-1}(\mathbb{R}^3)}^2 \bigg )^2 \leq C \| f \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 \sum_i \|G_0 f_i \|_{H^1_\xi(\mathbb{R}^3)}^2. \] Since $ G_0 $ is an isometry, this yields the first inequality in \eqref{eq:locH^-1}. For the second inequality, we use again that $ G_0 $ is an isometry to get \begin{align} \Big \| \sum_i f_i \Big \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 &= \Big \| \sum_i G_0 f_i \Big \|_{H^1_\xi(\mathbb{R}^3)}^2 \\ & = \sum_i \| G_0 f_i \|_{H^1_\xi(\mathbb{R}^3)}^2 + \sum_i \sum_{j \neq i} (G_0 f_i, G_0 f_j)_{H^1_\xi(\mathbb{R}^3)} \\ &= \sum_i \| f_i \|_{H_\xi^{-1}(\mathbb{R}^3)}^2 + \sum_i \sum_{j \neq i} \langle G_0 f_j , f_i\rangle. \end{align} Let $ i \neq j $. Since $ f_i $ is supported in $ \overline{B_i}, $ we have \[ \langle G_0 f_j, f_i \rangle = \langle v, f_i \rangle, \] for any $ v \in H^1(\mathbb{R}^3) $ such that $ v = G_0 f_j $ in $B_i$. Therefore, application of Lemma \ref{lem:correlationest} yields \[ |\langle G_0 f_j , f_i\rangle| \leq C \sqrt{r_i r_j} \frac{e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|} \|f_i\|_{H^{-1}_\xi(\mathbb{R}^3)} \|f_j\|_{H^{-1}_\xi(\mathbb{R}^3)}.
\] Finally, taking the sum in $ i $ and $ j $ and using \[ \sqrt{r_i r_j} \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)} \| f_j \|_{H^{-1}_\xi(\mathbb{R}^3)} \leq \frac{1}{2} \left(r_i \| f_i \|_{H_\xi^{-1}(\mathbb{R}^3)}^2 + r_j\| f_j\|_{H_\xi^{-1}(\mathbb{R}^3)}^2 \right) \] and symmetry in $ i $ and $ j $, we conclude, using Condition \ref{cond:Capacity} and estimating the Riemann sum over $j$ by the corresponding integral, \begin{align} \sum_i \sum_{j \neq i} \langle G_0 f_j , f_i\rangle & \leq C\sum_i \sum_{j \neq i} r_i \frac{e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|} \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 \\ & \leq C \mu_0 \sum_i \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 \, d_i^3 \sum_{j \neq i} \frac{e^{-\frac{|x_i - x_j|}{\xi}}}{|x_i - x_j|} \\ & \leq C \mu_0 \sum_i \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)}^2 \int_{\mathbb{R}^3} \frac{e^{-\frac{|z|}{\xi}}}{|z|} \, d z \\ & \leq C \mu_0 \xi^2 \sum_i \| f_i \|_{H^{-1}_\xi(\mathbb{R}^3)}^2. \qedhere \end{align} \end{proof} \begin{proof}[Proof of Proposition \ref{pro:Aone}.] We choose an enumeration of the index set $I$ and define \[ L_N := \sum_{i=1}^N Q_i, \] where $Q_i$ was defined in \eqref{eq:Q}. From \eqref{eq:characterizationOfOrthogonal} we see that every function in the image of $G_0 ^{-1} Q_i$ is supported in $\overline{B_i}$. Using that $G_0$ is an isometry, Lemma \ref{lem:locH^-1} implies \begin{equation} \label{eq:estf} \begin{aligned} \|L_N u\|_{H^1_\xi(\mathbb{R}^3)}^2 \leq (1+C_1 \xi^2 \mu_0) \sum_{i=1}^N \| Q_{i} u \|_{H^1_\xi(\mathbb{R}^3)}^2 &= (1+C_1 \xi^2\mu_0) \sum_{i=1}^N (Q_{i} u,u)_{H^1_\xi(\mathbb{R}^3)} \\ &= (1+C_1 \xi^2\mu_0) (L_N u,u )_{H^1_\xi(\mathbb{R}^3)}. \end{aligned} \end{equation} As a sum of orthogonal projections, $L_N$ is self-adjoint. Thus, by the spectral theorem for self-adjoint bounded operators, up to an isometry, $ L_N $ is a multiplication operator $S$ on $ H := L^2_\nu(X)$ for some measure space $(X,\mathcal{A},\nu)$, i.e., there exists a function $m \in L^\infty_\nu (X)$ such that $ S \varphi = m \varphi $ for all $ \varphi \in L^2_\nu(X)$. The estimate \eqref{eq:estf} above yields \[ \int_X m^2 \varphi^2 \, d \nu \leq (1+C_1 \xi^2\mu_0) \int_X m \varphi^2 \, d \nu, \] implying \[ \|L_N\| = \|m\|_{L^\infty_\nu(X)} \leq 1 + C_1 \mu_0\xi^2. \] On the other hand, convergence of $L_N u$ as $N \to \infty$ holds for any $u \in H^1(\mathbb{R}^3)$ that is compactly supported, because particles lying outside of the support of $u$ do not play any role. By an $\frac{\varepsilon}{3}$-argument, $L u = \sum_{i = 1}^\infty Q_{i} u $ is well defined for all $u \in H^1(\mathbb{R}^3)$ and $\|L\| \leq 1 + C_1 \xi^2 \mu_0$. \end{proof} \begin{remark} \label{rem:EstimateLSharp} The second estimate in \eqref{eq:locH^-1} is sharp in the following sense. For all particle configurations, $\|L\| \geq 1$, and there exist particle configurations satisfying Conditions \ref{cond:Capacity}, \ref{cond:particlesNotToClose}, and \ref{cond:radiusSmallerXi}, such that \[ \|L \| \geq c \xi^2 \mu_0, \] for a constant $c > 0$ depending only on $\alpha$ from Condition \ref{cond:radiusSmallerXi}. \end{remark} \begin{proof} Consider any particle configuration and a function supported in one particle, i.e., $u \in H^1_0(B_i)$ for some $i \in I$. Then $u$ is a fixed point of the operator $L = \sum_i Q_i$, because $Q_i u = u$ and $Q_j u = 0$ for all $j \neq i$. Hence $\| L \| \geq 1$. The fact that also the capacity $\mu_0$ has to appear on the right hand side follows more or less directly from the definition of the electrostatic capacity. The capacity of a set $K$ is defined as \[ \|\nabla v \|_{L^2(\mathbb{R}^3 \backslash K)}^2, \] where $v$ is the solution to \begin{align} -\Delta v &= 0 \quad \text{in} ~ \mathbb{R}^3 \backslash K, \\ v &= 1 \quad \text{in} ~ K. \end{align} Now we consider particles distributed on a lattice with equal radius $r$, i.e., the set $K$ occupied by the particles is \[ K=\bigcup_{x\in\left( d\mathbb{Z}\right)^{3} }\overline{B_{r}\left(x\right) }. \] We choose $d \ll 1$ and consider $u \in H^1(\mathbb{R}^3)$ such that $ u = 1 $ in $B := B_1(0)$.
Then, for each $x_i \in \left(d\mathbb{Z}\right)^{3} \cap B$, we have for $y \in \mathbb{R}^3 \backslash B_i$ \[ (Q_i u)(y) = r e^{\frac{r}{\xi}} \frac{e^{-\frac{|y-x_i|}{\xi}}}{|y-x_i|}, \] and thus, using $r \leq \alpha \xi$ from Condition \ref{cond:radiusSmallerXi}, \[ \| Q_i u \|_{H^1_\xi(\mathbb{R}^3)}^2 \geq \| \nabla Q_i u \|_{L^2(\mathbb{R}^3)}^2 \geq c\, r^2 \int_r^\infty \frac{e^{-\frac{2s}{\xi}}}{s^2} \, d s \geq c\, r, \] where $c$ depends only on $\alpha$. Therefore, using again that $Q_i$ is an orthogonal projection and that $\|u\|_{H^1_\xi(\mathbb{R}^3)}^2 \leq C \xi^{-2}$ (say, for $\xi \leq 1$), \[ \|L\| \geq \frac{(L u,u)_{H^1_\xi(\mathbb{R}^3)}}{\| u \|_{H^1_\xi(\mathbb{R}^3)}^2} \geq c\, \xi^2 \!\! \sum_{x_i \in \left( d\mathbb{Z}\right)^{3} \cap B} \!\! (Q_{i} u,u)_{H^1_\xi(\mathbb{R}^3)} = c\, \xi^2 \!\! \sum_{x_i \in \left( d\mathbb{Z}\right)^{3} \cap B} \!\! \|Q_{i} u\|^2_{H^1_\xi(\mathbb{R}^3)} \geq c\, \xi^2 \!\! \sum_{x_i \in \left( d\mathbb{Z}\right)^{3} \cap B} \!\! r, \] where we used that $u$ has been chosen independently of the particle distribution. Since the number of points $x_i$ in $\left( d\mathbb{Z}\right)^{3} \cap B$ is of order $ d^{-3} = \mu_0 r^{-1}$, we conclude $ \|L\| \geq c\, \xi^2 \mu_0$. \end{proof} Using the bound on the norm of $L$ that we proved in Proposition \ref{pro:Aone}, it follows from Theorem \ref{th:SeriesConvergentImpliesSolution} that, provided $C_1 \xi^2 \mu_0 < 1$ so that $\|L\| < 2$, the series \eqref{eq:ProjectionSeries} obtained by the Method of Reflections converges to the solution of Problem \eqref{S1E8}. Uniform exponential convergence also follows from Theorem \ref{th:SeriesConvergentImpliesSolution} and the following lemma. \begin{lemma} \label{lem:LCoercive} There exists a constant $c_1 > 0$ depending only on $\kappa$ from Condition \ref{cond:particlesNotToClose} such that \[ (L u,u)_{H^1(\mathbb{R}^3)} \geq c_1 \| u\|_{H^1(\mathbb{R}^3)}^2, \] for all $u \in H_0^1(\mathbb{R}^3 \backslash K)^\perp$. \end{lemma} \begin{proof} Let $ \eta_i \in C_c^\infty (B_{\kappa r_i}(x_i)) $ such that $ \eta_i = 1 $ in $ B_i $ and $ |\nabla \eta_i | \leq \frac{C}{r_i} $.
Now, we observe that for all $ v \in H^1(\mathbb{R}^3) $ \[ \| v \|_{L^2(B_{\kappa r_i}(x_i))} \leq \| v \|_{L^6(B_{\kappa r_i}(x_i))} \|1\|_{L^3(B_{\kappa r_i}(x_i))} \leq C r_i \| \nabla v \|_{L^2(\mathbb{R}^3)}, \] and hence, \begin{equation} \| \eta_i v \|_{H^1(\mathbb{R}^3)} \leq \|v\|_{H^1(\mathbb{R}^3)} + \frac{C}{r_i} \| v \|_{L^2(B_{\kappa r_i}(x_i))} \leq C \| v \|_{H^1(\mathbb{R}^3)}. \end{equation} On the other hand, we know that every $u \in H_0^1(\mathbb{R}^3 \backslash K)^\perp$ satisfies $-\Delta u + \xi^{-2} u = 0$ in $\mathbb{R}^3 \backslash K$ (cf. Equation \eqref{eq:characterizationOfOrthogonal}). Thus, the variational form of this equation implies that $ u $ is the function of minimal norm in the set $ X_u := \{ v \in H^1(\mathbb{R}^3) \colon v = u ~ \text{in} ~ K\}$. Clearly, $\sum_i \eta_i Q_i u \in X_u$, and hence, \begin{align} (L u , u)_{H^1(\mathbb{R}^3)} &= \sum_i ( Q_i u, u)_{H^1(\mathbb{R}^3)} = \sum_i \|Q_i u \|_{H^1(\mathbb{R}^3)}^2 \\ &\geq c \sum_i \|\eta_i Q_i u \|_{H^1(\mathbb{R}^3)}^2 = c \|\sum_i \eta_i Q_i u \|_{H^1(\mathbb{R}^3)}^2 \geq c \|u\|^2_{H^1(\mathbb{R}^3)}. \qedhere \end{align} \end{proof} \begin{proof}[Proof of Theorem \ref{SeriesPoissonScreened}] By Proposition \ref{pro:Aone}, we have $\| L \| \leq 1 + C_1 \xi^2 \mu_0$. Defining $C_0 := \frac{1}{2 C_1}$, we have $\|L\| \leq \frac{3}{2}$ if $\mu_0 \leq C_0 \xi^{-2}$. Furthermore, Lemma \ref{lem:LCoercive} implies \begin{equation} \label{eq:SpectralGap} \|L u\| \geq c_1 \|u\| \end{equation} for all $u \in H_0^1(\mathbb{R}^3 \backslash K)^\perp$. By Remark \ref{rem:kerL}, we have $\ker L = H_0^1(\mathbb{R}^3 \backslash K)$. Thus, Estimate \eqref{eq:SpectralGap} implies that $L$ has a spectral gap.
Therefore, Theorem \ref{th:SeriesConvergentImpliesSolution} implies the exponential convergence \begin{equation} \|(1-L)^n G_0 f - u \|_{H^1_\xi(\mathbb{R}^3)} \leq \varepsilon^n \|f\|_{H^{-1}_\xi(\mathbb{R}^3)} \qquad \text{for all} \quad f \in H^{-1}(\mathbb{R}^3), \end{equation} for some $\varepsilon < 1$ depending only on $c_1$ and thus depending only on $\kappa$ from Condition \ref{cond:particlesNotToClose}. Since the norm $\| \cdot \|_{H^1_\xi(\mathbb{R}^3)}$ is equivalent to the standard $H^1$-norm, this concludes the proof. \end{proof} \subsection{Convergence of a Modified Method of Reflections} In the previous subsection, we proved that the series \eqref{eq:ProjectionSeries} obtained by the Method of Reflections converges for small capacities. Recall that the series is given by \begin{equation} \label{eq:ReflectionSeriesCompressed} \lim_{n \to \infty} (1-L)^n G_0 f. \end{equation} First of all, we note that the series is indeed divergent if the capacity is sufficiently large. Indeed, as shown in Remark \ref{rem:EstimateLSharp} the operator norm of $L$ diverges as the capacity tends to infinity and we have already observed in Theorem \ref{th:SeriesConvergentImpliesSolution} that the series is divergent if the operator norm of $L$ is larger than $2$. Now we want to give the series a meaning for arbitrary capacities. As seen in Remark \ref{rem:kerL}, the solution to Problem \eqref{S1E8}, which we want to obtain by the Method of Reflections, is given by $P G_0 f$, where $P$ is the orthogonal projection to the kernel of $L$. Therefore, the modification simply consists in replacing \eqref{eq:ReflectionSeriesCompressed} by \begin{equation} \label{eq:ResummationCompressed} \lim_{n \to \infty} (1-\gamma L)^n G_0 f, \end{equation} with $\gamma := 1/ \|L\|$. Using again the spectral theorem, we will show in Proposition \ref{pro:abstractProjection} below that this ensures convergence to the solution to Problem \eqref{S1E8}.
However, let us first give a heuristic explanation why this can be expected. We can give the following interpretation of the Method of Reflections using the representation \eqref{eq:ReflectionSeriesCompressed}. To the solution of the equation without boundary conditions $G_0 f$, we add the sum of all the correctors, which is $-L$. Doing this, we expect to push the function towards zero boundary conditions. By iterating this, we hope to obtain a sequence converging to the solution to the Dirichlet problem \eqref{S1E8}. However, if $G_0 f$ has the same sign in several particles that are close to each other and sufficiently large (i.e., large capacity), then the effect of $L$ is too large: The boundary conditions in each of those particles are not only corrected by the corresponding projection operator, but they also undergo a push in the same direction by the effect of all the other particles. In other words, we push in the right direction but too far. Therefore, reducing the push by multiplying with $\gamma$ might solve this problem. We can also give a purely geometrical interpretation. Let $P$ denote the orthogonal projection to $\ker L $, and $Q$ the projection to its orthogonal complement. We recall that $L$ is the sum of the operators $Q_i$, which are orthogonal projections. Let us denote the kernel of $Q_i$ by $V_i$. Then \begin{equation} \ker L = \bigcap_i V_i =: V. \end{equation} If the subspaces $V_i$ were orthogonal to each other, then we would have \[ 1-L = 1-\sum_i Q_i = 1 - Q = P, \] and the convergence of $(1-L)^n$ to $P$ would trivially hold. However, they are not orthogonal to each other. Indeed, the closer two particles are, the more they interact with each other. Interaction of the particles, however, means lack of orthogonality. Therefore, the series diverges if there is too much interaction between particles close to each other -- too large capacity $\mu_0$ -- or if the interaction does not decay fast enough -- too large $\xi$.
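This damping effect can be reproduced in a small finite-dimensional experiment (a numerical sketch in Python, not part of the analysis; the normals, iteration counts, and thresholds below are chosen purely for illustration). For three planes $V_k \subset \mathbb{R}^3$ with nearly parallel normals, the operator $S = \sum_k Q_k$ has norm larger than $2$, so the plain iteration $(1-S)^M x$ diverges, while the damped iteration with $\gamma = 1/\|S\|$ converges to the projection of $x$ onto $V = \bigcap_k V_k$:

```python
import numpy as np

# Three planes V_k = n_k^perp in R^3 with nearly parallel normals n_k,
# so that the "interaction" between the subspaces is strong.
s = np.sqrt(1 - 0.9**2)
normals = [np.array([1.0, 0.0, 0.0]),
           np.array([0.9,  s,   0.0]),
           np.array([0.9, -s,   0.0])]

# Q_k = orthogonal projection onto V_k^perp = span(n_k);  S = sum_k Q_k.
S = sum(np.outer(n, n) for n in normals)

norm_S = np.linalg.eigvalsh(S).max()
assert norm_S > 2  # the plain method of reflections must diverge

# V = intersection of the three planes = the e_3-axis; P = projection onto V.
P = np.diag([0.0, 0.0, 1.0])
x = np.array([1.0, 2.0, 3.0])
I = np.eye(3)

plain = np.linalg.matrix_power(I - S, 50) @ x             # blows up
damped = np.linalg.matrix_power(I - S / norm_S, 200) @ x  # converges to P x

assert np.linalg.norm(plain) > 1e6
assert np.allclose(damped, P @ x, atol=1e-6)
print("damped iterate:", damped)
```

The damped iterate approaches $Px = (0,0,3)$, in accordance with the geometric picture above.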
In Figure \ref{fig:1}, we see what happens in the orthogonal complement $V^\perp$ if the angles between the subspaces $V_i$ are small. We consider the simplest non-trivial case in which only two particles are present. As we see in Figure \ref{fig:1}, $(1-L)x$ might end up on the other side of the origin than $x$. In this example, $(1-L)x$ is still closer to the origin than $x$. This is a feature of the case of only two subspaces since $\|L\| < 2$ as long as the subspaces $V_i^\perp$ have trivial intersection. Therefore, the Method of Reflections always yields a convergent sequence if there are only two particles and they do not intersect. However, if more particles are present and the angles between the subspaces are sufficiently small, $(1-L)x$ will be larger than $x$. In that case, adding a small parameter $\gamma$ in front of $L$ will solve this problem. Indeed, as in Figure \ref{fig:1}, we can ensure that $(1-\gamma L)x$ lies on the same side of the origin as $x$ by choosing $\gamma < 1/\|L\|$. \begin{figure}[ht] \centering \includegraphics[scale= 0.5]{FigureNew.pdf} \caption{The case of two subspaces $V_1, V_2$ with a small angle between them: $(1-L)x$ may lie on the other side of the origin than $x$, while $(1-\gamma L)x$ with $\gamma < 1/\|L\|$ lies on the same side.}\label{fig:1} \end{figure} \begin{proposition} \label{pro:abstractProjection} Assume $ H $ is a Hilbert space and $ V_k \subset H$ are closed subspaces for $ k \in J$, where $J$ is a finite or countable index set. Define $Q_k$ to be the orthogonal projections from $ H$ to $V_k^\perp$. Let $ V = \cap_{k \in J} V_k$ and define $ P $ to be the orthogonal projection from $ H$ to $V$. If $ S:= \sum_{k \in J} Q_k$ defines a bounded operator, then, for all $ 0< \gamma < \frac{2}{\|S\|} $, \[ \lim_{M\to\infty} (1-\gamma S)^M = P, \] pointwise in $H$. Moreover, if $S$ is strictly positive in $V^\perp$, i.e., there exists $ c > 0$ such that \begin{equation} \label{eq:coercivityCondition} (Sx,x)_H \geq c \|x\|_H^2 \qquad \text{for all} \quad x \in V^\perp, \end{equation} then, \begin{equation} \label{eq:exponentialConvergence} \|(1-\gamma S)^M - P \| \leq \max \{1-\gamma c, \gamma \|S\| - 1 \}^M .
\end{equation} \end{proposition} \begin{remark} To optimize the exponential convergence in \eqref{eq:exponentialConvergence}, one can choose \[ \gamma = \frac{2}{\|S\| + c}. \] \end{remark} \begin{proof} By definition of $ S $, we have $ \ker S =V $. Thus, $(1-\gamma S)^M x = x $ for all $x \in V$. On the other hand, as $S$ is self-adjoint, we have $ \mathcal{R}(S) \subset (\ker S)^\perp = V^\perp $. We define $T$ to be the restriction of $ S $ to $ V^\perp $ (in both the domain and the range). Thus, it suffices to show that $(1-\gamma T)^M \to 0$ pointwise in $V^\perp$ and that $ \|1- \gamma T\| \leq \max\{1-\gamma c,\gamma \|S\| - 1\}$ provided \eqref{eq:coercivityCondition} holds. Being a sum of orthogonal projections, $ S $ and also $T$ are self-adjoint operators. Hence, by the spectral theorem, we can assume that $ T $ is a multiplication operator on $ H = L^2_\nu(X)$ for some measure space $(X,\mathcal{A},\nu)$, i.e., there exists a function $f \in L^\infty_\nu (X)$ such that $ T \varphi = f \varphi $ for all $ \varphi \in L^2_\nu(X)$. Since $T$ is positive and bounded by $\|S\|$, we have $0 < f \leq \|S\|$. Therefore, \begin{align} \|(1-\gamma T)^M \varphi\|^2_H &= \int_X |\varphi|^2 (1-\gamma f)^{2M} \, d \nu \to 0. \end{align} If in addition, \eqref{eq:coercivityCondition} holds, then $c \leq f \leq \|S\|$. Thus, \begin{align} \|(1-\gamma T) \varphi\|^2_H &= \int_X |\varphi|^2 (1-\gamma f)^{2} \, d \nu \\ &\leq \|1-\gamma f \|^2_{L^\infty_\nu(X)} \|\varphi\|^2_H \\ &\leq \max\{1-\gamma c,\gamma \|S\|- 1 \}^2 \|\varphi\|^2_H. \qedhere \end{align} \end{proof} \begin{corollary} \label{cor:solbyprojection} Let $C_1$ be the constant from Proposition \ref{pro:Aone}. Then, for all particle configurations satisfying \begin{equation} C_1 \mu_0 \xi^{2} \leq C_2, \end{equation} for some $C_2 < \infty$, there exists a constant $ \gamma_0 $, which depends only on $C_2$, with the following property.
For all $\gamma \leq \gamma_0$, \[ (1-\gamma L)^M \to P \qquad \text{in} ~ \mathcal{L}(H^1(\mathbb{R}^3)) \quad \text{as} ~ M \to \infty, \] where $ P $ is the orthogonal projection from $ H^1(\mathbb{R}^3) $ to $H_0^1(\mathbb{R}^3 \backslash K)$. Moreover, there exists $\varepsilon <1 $ depending only on $\kappa$ and $C_2$ such that \[ \|(1-\gamma_0 L)^M - P\|_{\mathcal{L}(H^1(\mathbb{R}^3))} \leq C \varepsilon^M, \] where $C$ depends only on $\xi$. \end{corollary} \begin{proof} We define $\gamma_0 = 1/(1+C_2)$. Proposition \ref{pro:Aone} implies $ \gamma_0 \leq 1/\|L\| $. Then, the assertion follows directly from Proposition \ref{pro:abstractProjection} and Lemma \ref{lem:LCoercive}. \end{proof} \subsection{The Modified Method of Reflections as a Summation Method} \label{sec:TheResummationOnTheLevelOfTheScatteringSeries} \begin{lemma} \label{lem:combinatorics} Let $f \in H^{-1}(\mathbb{R}^3)$. Let $\Phi_n$, as in \eqref{eq:kthOrderCorrection}, be the $n$-th order correction obtained by the Method of Reflections. Then, for all $\gamma > 0$ \begin{equation} (1-\gamma L)^M G_0 f = \sum_{n=0}^M q(n,M,\gamma) \Phi_n, \end{equation} where $q(0,M,\gamma) := 1 $, $q(n,M,\gamma) = 0 $ for $n > M$, and \[ q(n,M,\gamma) = \frac{M!}{(M-n)!(n-1)!} \int_0^{\gamma} t^{n-1}(1-t)^{M-n} \, d t = \frac{M!}{(M-n)!(n-1)!} B(\gamma;n,M-n+1), \] for $0 < n \leq M$. Here, $ B $ denotes the incomplete Beta function. In particular, for all $\gamma >0$ and $n\in\mathbb{N}$ it holds \[ \lim_{M\to\infty} q(n,M,\gamma) = 1. \] \end{lemma} \begin{proof} As we have seen in \eqref{eq:SeriesAsPowers}, it holds \[ \sum_{n=0}^M \Phi_n = (1-L_r)^M G_0 f. \] By induction, this leads to the following identity \begin{equation} \label{eq:combinatorics} (-L_r)^M G_0 f = \sum_{n=1}^M (-1)^{M-n} \binom{M-1}{n-1} \Phi_n.
\end{equation} Expanding $(1- \gamma L)^M$ and using \eqref{eq:combinatorics} leads to $q(0,M,\gamma) = 1$, $q(n,M,\gamma) = 0 $ for $n > M$, and, for $0 < n \leq M$, \begin{equation} \begin{aligned} q(n,M,\gamma) &= \sum_{l=n}^M \binom{M}{l}\gamma^{l} (-1)^{l-n} \binom{l-1}{n-1} \\ &= (-1)^n \sum_{l=n}^M \frac{M! }{l(M-l)!}\frac{(-\gamma)^{l}}{(n-1)!(l-n)!} \\ &= (-1)^n \frac{M!}{(n-1)!} \sum_{k=0}^{M-n} \frac{1}{k+n}\frac{ (-\gamma)^{k+n}}{(M-n-k)!k!}. \end{aligned} \end{equation} Defining \[ \psi(z) := \sum_{k=0}^{M-n} \frac{1}{k+n}\frac{z^{k+n}}{(M-n-k)!k!}, \] we find \begin{align} \frac{d}{dz} \psi(z) &= \sum_{k=0}^{M-n} \frac{z^{k+n-1}}{(M-n-k)!k!} \\ &= \frac{z^{n-1}}{(M-n)!}(1+z)^{M-n}, \end{align} and hence, \[ \psi(z) = \frac{1}{(M-n)!} \int_0^z t^{n-1}(1+t)^{M-n} \, d t. \] Inserting this in the above equation, we finally get \begin{equation} \begin{aligned} \sum_{l=n}^M \binom{M}{l}\gamma^{l} (-1)^{l-n} \binom{l-1}{n-1} &= (-1)^n \frac{M!}{(M-n)!(n-1)!} \int_0^{-\gamma} t^{n-1}(1+t)^{M-n} \, d t \\ &= \frac{M!}{(M-n)!(n-1)!} \int_0^{\gamma} t^{n-1}(1-t)^{M-n} \, d t \\ &= \frac{M!}{(M-n)!(n-1)!} B(\gamma;n,M-n+1). \end{aligned} \end{equation} \end{proof} \begin{proof}[Proof of Theorem \ref{CapOrderOne}] The result is a direct consequence of Corollary \ref{cor:solbyprojection} and Lemma \ref{lem:combinatorics}. \end{proof} \section{Adaptation to Stokes Equations} \label{sec:SpacesAndOperators} In this section, we will adapt the previous results for the Poisson equation to the case of the Stokes equations. We will not repeat everything from the previous sections but rather point out the necessary modifications. Working only in spaces of divergence free functions, the presence of the pressure in the Stokes equations can in principle be ignored for the definition of all the operators needed.
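Before carrying out the adaptation, we record a quick sanity check (a standalone sketch by exact rational arithmetic, not part of the proofs; the parameter values are arbitrary) of the resummation coefficients from Lemma \ref{lem:combinatorics}: the binomial-sum expression for $q(n,M,\gamma)$ obtained by expanding $(1-\gamma L)^M$ agrees with the incomplete-Beta expression, and $q(n,M,\gamma)$ approaches $1$ for large $M$.

```python
from fractions import Fraction
from math import comb, factorial

def q_sum(n, M, g):
    # Coefficient of Phi_n obtained by expanding (1 - g L)^M directly.
    return sum(comb(M, l) * g**l * (-1)**(l - n) * comb(l - 1, n - 1)
               for l in range(n, M + 1))

def q_beta(n, M, g):
    # M!/((M-n)!(n-1)!) * B(g; n, M-n+1), with the Beta integral
    # evaluated exactly via the binomial expansion of (1-t)^(M-n).
    integral = sum(comb(M - n, j) * (-1)**j * g**(n + j) / (n + j)
                   for j in range(M - n + 1))
    return factorial(M) // (factorial(M - n) * factorial(n - 1)) * integral

g = Fraction(3, 10)
assert all(q_sum(n, M, g) == q_beta(n, M, g)
           for M in range(1, 13) for n in range(1, M + 1))
# q(n, M, gamma) -> 1 as M -> infinity, for fixed n and gamma:
assert abs(float(q_beta(2, 120, g)) - 1) < 1e-9
print("identity verified for M <= 12; q(2, 120, 3/10) =", float(q_beta(2, 120, g)))
```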
\begin{definition} We define $\dot{H}^1_\sigma(\mathbb{R}^3;\mathbb{R}^3) \subset \dot{H}^1(\mathbb{R}^3;\mathbb{R}^3) $ to be the closed subspace of divergence free functions, and $\dot{H}^{-1}_\sigma(\mathbb{R}^3;\mathbb{R}^3)$ its dual space. \end{definition} \begin{notation} To improve readability, we will from now on write $\dot{H}^1(\mathbb{R}^3)$ instead of $\dot{H}^1(\mathbb{R}^3;\mathbb{R}^3)$ and similarly for $\dot{H}^{-1}(\mathbb{R}^3;\mathbb{R}^3)$, $\dot{H}^1_\sigma(\mathbb{R}^3;\mathbb{R}^3)$, etc. \end{notation} \begin{remark} Note that $\dot{H}^{-1}_\sigma(\mathbb{R}^3) \subset \dot{H}^{-1}(\mathbb{R}^3)$. Here, the inclusion identifies $f \in \dot{H}^{-1}_\sigma(\mathbb{R}^3)$ with the functional in $\dot{H}^{-1}(\mathbb{R}^3)$ given by $\langle u, f\rangle := \langle P_\sigma u, f \rangle$ for all $ u \in \dot{H}^1(\mathbb{R}^3)$, where $P_\sigma$ is the orthogonal projection from $\dot{H}^1(\mathbb{R}^3)$ to $\dot{H}^1_\sigma(\mathbb{R}^3)$. \end{remark} \begin{lemma} \label{lem:sto} Let $ f \in \dot{H}^{-1} (\mathbb{R}^3) $. Then, the Stokes equations \begin{equation} \begin{aligned} -\Delta u & = -\nabla p + f, \\ \operatorname{div} u &= 0 \end{aligned} \label{eq:sto} \end{equation} have a unique weak solution $ (u,p) \in \dot{H}^1_\sigma(\mathbb{R}^3) \times L^2(\mathbb{R}^3) $. The solution operator $ \bar{G}_0$ for the velocity field is given by \begin{equation} \bar{G}_0 f = \Phi \ast f, \end{equation} where \begin{equation} \Phi(x) := \frac{1}{8 \pi} \left( \frac{\operatorname{Id}}{|x|} + \frac{x \otimes x}{|x|^3} \right). \end{equation} Moreover, the restriction of the solution operator to $\dot{H}^{-1}_\sigma$, which we denote by $G_0$, is an isometric isomorphism. \end{lemma} \begin{lemma} \label{lem:stobdry} Let $\Omega \subset \mathbb{R}^3$ be open.
Then, for every $ f \in \dot{H}^{-1}(\mathbb{R}^3) $, the problem \begin{equation} \begin{aligned} \label{eq:stokesBall} -\Delta u &= - \nabla p + f \quad \text{in} ~ \mathbb{R}^3 \backslash \overline{\Omega}, \\ \operatorname{div} u &= 0,\\ u &= 0 \quad \text{in} ~ \overline{\Omega}, \\ p &= 0 \quad \text{in} ~ \overline{\Omega} \end{aligned} \end{equation} has a unique weak solution $ (u,p) \in \dot{H}^1_\sigma(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. Moreover, \begin{equation} u = P_{\Omega} \bar{G}_0 f, \end{equation} where $ P_{\Omega} $ is the orthogonal projection from $ \dot{H}^1_\sigma (\mathbb{R}^3) $ to the subspace $ \dot{H}^1_{0,\sigma}(\mathbb{R}^3 \backslash \overline{\Omega})$. \end{lemma} \begin{remark} Analogous to $H^1_0(\mathbb{R}^3 \backslash \overline{\Omega})$, we use the convention \[ \dot{H}^1_{0,\sigma}(\mathbb{R}^3 \backslash \overline{\Omega}) := \{u \in \dot{H}^1_{\sigma}(\mathbb{R}^3) \colon u=0 \text{ in } \Omega \}. \] \end{remark} \begin{remark} The condition $p = 0 $ in $\overline{\Omega}$ in Equation \eqref{eq:stokesBall} ensures uniqueness. Indeed, dropping this condition, $p$ can be chosen equal to any constant in every bounded connected component of $\Omega$. \end{remark} Again, for a particle $i$, we define the orthogonal projection $Q_i = 1 - P_i$, where $P_i = P_{B_i}$ and notice that \[ \dot{H}^1_{0,\sigma}(\mathbb{R}^3 \backslash \overline{B_i})^{\perp_\sigma} = \{ u \in \dot{H}^1_\sigma(\mathbb{R}^3) \colon -\Delta u = -\nabla p \text{ in } \mathbb{R}^3 \backslash \overline{B_i} \text{ for some } p \in L^2(\mathbb{R}^3 \backslash \overline{B_i})\}, \] where $\perp_\sigma$ indicates that we take the orthogonal complement with respect to $\dot{H}^1_\sigma(\mathbb{R}^3)$. Notice that $G_0^{-1} Q_i u \in \dot{H}^{-1}_\sigma(\mathbb{R}^3)$ is supported in $\overline{B_i}$, i.e., $\langle v,G_0^{-1} Q_i u\rangle = 0$ for every $v$ in $ \dot{H}^1_{0,\sigma}(\mathbb{R}^3 \backslash B_i)$.
This, however, does not mean that $G_0^{-1} Q_i u$, viewed as an element of $\dot{H}^{-1}(\mathbb{R}^3)$, is supported in $\overline{B_i}$. In the case of the Poisson equation, we often used cutoff functions to exploit that a function $f \in \dot{H}^{-1}_\sigma(\mathbb{R}^3)$ is supported in $\overline{B_i}$. However, multiplication with a cutoff function destroys the divergence-free property of a function. Therefore, the following Lemma is needed. \begin{lemma} \label{lem:pressureEstimate} Assume $ f \in \dot{H}^{-1}_\sigma(\mathbb{R}^3) $ is supported in $ \overline{B_i} $. Then, there exists a unique $p \in L^2(\mathbb{R}^3)$ with $p=0$ in $B_i$ such that $ \tilde{f} := f +\nabla p $ is supported in $\overline{B_i}$ as a function in $\dot{H}^{-1}(\mathbb{R}^3)$. Moreover, $\|\tilde{f}\|_{\dot{H}^{-1}(\mathbb{R}^3)} \leq C \|f\|_{\dot{H}^{-1}(\mathbb{R}^3)}$ for a universal constant $ C $. We denote by $S$ the operator that maps $f$ to $\tilde{f}$. \end{lemma} \begin{proof} Since $ f \in \dot{H}^{-1}_\sigma(\mathbb{R}^3) $ is supported in $ \overline{B_i} $, we have $\langle f,v\rangle = 0$ for all $v \in \dot{H}^1_{0,\sigma}(\mathbb{R}^3 \backslash B_i)$. Hence, there exists a unique $p \in L^2(\mathbb{R}^3\backslash \overline{B_i})$ such that $f = -\nabla p$ in $\mathbb{R}^3 \backslash \overline{B_i}$ and we can set $p=0$ in $B_i$. By Lemma \ref{lem:divergenceSolution} below, we can find $u \in \dot{H}^1_0(\mathbb{R}^3 \backslash B_i)$ such that $\operatorname{div} u = p$ and $\|u\|_{\dot{H}^1(\mathbb{R}^3)} \leq C \|p\|_{L^2(\mathbb{R}^3)}$. Hence, \[ \|u\|_{\dot{H}^1(\mathbb{R}^3)} \|f\|_{\dot{H}^{-1}(\mathbb{R}^3)} \geq \langle u, f \rangle = \langle u, - \nabla p \rangle = \|p\|^2_{L^2(\mathbb{R}^3)}, \] and thus $\|p\|_{L^2(\mathbb{R}^3)} \leq C \|f\|_{\dot{H}^{-1}(\mathbb{R}^3)}$.
Hence, $\tilde{f} := f + \nabla p$ is supported in $\overline{B_i}$ as a function in $\dot{H}^{-1}(\mathbb{R}^3)$, and $\|\tilde{f}\|_{\dot{H}^{-1}(\mathbb{R}^3)} \leq C \|f\|_{\dot{H}^{-1}(\mathbb{R}^3)}$. \end{proof} The following Lemma can be found in every standard textbook on the Stokes equations, e.g., in \cite{Ga11}. \begin{lemma} \label{lem:divergenceSolution} Let $\Omega \subset \mathbb{R}^3$ be a locally Lipschitz bounded or exterior domain. Then there exists a constant $C$ with the following property. For all $f \in L^2(\Omega)$ that satisfy \[ \int_\Omega f \, d x = 0 \] in the case that $\Omega$ is a bounded domain, there exists $u \in H^1_0(\Omega)$ such that \[ \operatorname{div} u = f \] and \[ \|\nabla u\|_{L^2(\Omega)} \leq C \|f\|_{L^2(\Omega)}. \] \end{lemma} \begin{remark} The constant $C$ is invariant under scaling of $\Omega$. \end{remark} Now one can define the operator $L$ analogously to the corresponding operator for the Poisson equation from Definition \ref{def:pL}. Using Lemma \ref{lem:pressureEstimate}, the estimate for $L$ (cf. Lemma \ref{lem:Lbounded}) follows in the same manner as before. Then, Theorem \ref{StokesConvergence} follows immediately from Proposition \ref{pro:nonuniformAbstractProjection} and Lemma \ref{lem:stobdry}. \subsection{Homogenization} Corresponding to Definition \ref{def:monopole}, we introduce the following operators. \begin{definition} We define $T_x : \dot{H}^1_\sigma(\mathbb{R}^3) \to \dot{H}^{-1}(\mathbb{R}^3)$ by $T_x = S G_0^{-1} Q_x$, where $S$ is the operator from Lemma \ref{lem:pressureEstimate}. Moreover, we define the uniform force density approximation of the operator $T$ to be the operator $M_x : \dot{H}^1_\sigma(\mathbb{R}^3) \to \dot{H}^{-1}(\mathbb{R}^3)$, \[ (M_x u)(y) = \frac{3(u)_{x,r}}{2r}. \] \end{definition} Note that the definition of $M_x$ differs from the corresponding operator for the Poisson equation (cf. Definition \ref{def:monopole}) by a factor $3/2$.
The reason for this is that the electrostatic capacity of a ball of radius $r$ is $4 \pi r$. The corresponding quantity for the Stokes equations, however, is the absolute value of the Stokes drag force acting on a ball of radius $r$ moving with unit speed in a fluid which is at rest at infinity, which is $6 \pi r$. Lemma \ref{lem:extest} used in the proof of Lemma \ref{lem:pMonopapprox} has to be replaced by the following Lemma. \begin{lemma} \label{lem:Stokesextest} For $ r>0 $ and $ x \in \mathbb{R}^3$, let $ H_r := \left\{ u \in H^1(B_r(x)) \colon \int_{B_r(x)} u = 0 \right\} $. Then for all $ r >0 $ there exists an extension operator $ E_r \colon H_r \to H^1_0(B_{2r}(x)) $ such that \begin{equation} \label{eq:Stokesextest} \| \nabla E_r u \|_{L^2(B_{2r}(x))} \leq C \| \nabla u \|_{L^2(B_r(x))} \qquad \text{for all} \quad u \in H_r, \end{equation} where the constant $ C $ is independent of $ r $. \end{lemma} \begin{proof} For $ r = 1 $ let $ E_1 \colon H^1(B_1(x)) \to H^1_0(B_2(x)) $ be a continuous extension operator. Then, by the Poincaré inequality on $ H_1 $ we get for all $ u \in H_1 $ \begin{equation} \| \nabla E_1u \|_{L^2(B_{2}(x))} \leq \| E_1u \|_{H^1(B_{2}(x))} \leq C \| u \|_{H^1(B_1(x))} \leq C \| \nabla u \|_{L^2(B_1(x))}. \label{eq:Stokesextest1} \end{equation} The assertion for general $ r > 0 $ follows from scaling by defining $ (E_r u)(x) := (E_1 u_r)\left(\frac{x}{r}\right) $ where $ u_s(x) := u(sx) $. \end{proof} These are the only things that change in the proof of the homogenization result, Theorem \ref{HomogStokes}, except for the result about locally uniform convergence in the particle configuration. For the Poisson equation, this result was stated in Proposition \ref{pro:pSolByScattering}. The analogous statement for the Stokes equations remains valid.
However, the proofs of Lemmas \ref{lem:LCoerciveInCompacta} and \ref{lem:decayQuasiHarmonic}, needed in the proof of Proposition \ref{pro:pSolByScattering}, have to be modified due to the use of cutoff functions. Corresponding to Lemmas \ref{lem:LCoerciveInCompacta} and \ref{lem:decayQuasiHarmonic}, we will prove Lemmas \ref{lem:SLCoerciveInCompacta} and \ref{lem:SpolynomialDecay}. For the proof of Lemma \ref{lem:SLCoerciveInCompacta}, we need the following lemma. \begin{lemma} \label{cor:solenoidalExtension} Let $\Omega \subset \mathbb{R}^3$ be a bounded and locally Lipschitz domain and assume $v \in H^1(\Omega)$ satisfies \[ \int_{\partial\Omega} v \cdot \nu = 0. \] Then, for any $R>0$ and $x \in \mathbb{R}^3$ such that $\Omega \subset \subset B_R(x)$, there exists $u \in H^1_0(B_R(x))$ such that \begin{align} u &= v \text{ in } \Omega \\ \operatorname{div} u &= 0 \text{ in } B_R(x) \backslash \overline{\Omega} \end{align} and \[ \| u\|_{H^1(B_R(x))} \leq C \|v\|_{H^1(\Omega)}, \] where the constant depends only on the domains $\Omega$ and $B_R(x)$. In particular, for any $v \in H^1(B_r(x))$ with $\int_{\partial B_r(x)} v \cdot \nu = 0$, we can find $u \in H^1_0(B_{2r}(x))$ such that \begin{align} u &= v \text{ in } B_r(x) \\ \operatorname{div} u &= 0 \text{ in } B_{2r}(x) \backslash B_r(x) \end{align} and \[ \|\nabla u\|^2_{L^2(B_{2r}(x))} \leq \frac{C}{r^2} \|v\|^2_{L^2(B_r(x))} + C\|\nabla v\|^2_{L^2(B_r(x))} \leq C \|\nabla v\|^2_{L^2(\mathbb{R}^3)}, \] where the constant is independent of $r$ and $v$. \end{lemma} \begin{proof} We take any (not necessarily divergence free) extension $u_1 \in H^1_0(B_R(x))$ of $v$ that satisfies the estimate, and take a solution $u_2 \in H^1_0(B_R(x) \backslash \overline{\Omega})$ of $\operatorname{div} u_2 = -\operatorname{div} u_1$ provided by Lemma \ref{lem:divergenceSolution} and define $ u = u_1 + u_2$. Note that Lemma \ref{lem:divergenceSolution} is applicable since, by the divergence theorem, $\int_{B_R(x) \backslash \overline{\Omega}} \operatorname{div} u_1 \, d x = - \int_{\partial\Omega} v \cdot \nu = 0$. The second assertion follows from scaling, and the last inequality is a consequence of H\"older's inequality and the Gagliardo-Nirenberg-Sobolev inequality.
\end{proof} \begin{lemma} \label{lem:SLCoerciveInCompacta} Let $ u \in \dot{H}_{0,\sigma}^1(\mathbb{R}^3 \backslash K_r)^{\perp_\sigma} $ and $R > 0$. We define $v \in \dot{H}^1_\sigma(\mathbb{R}^3)$ to be the solution to \begin{align} -\Delta v &= -\nabla p \quad \text{in} ~ \mathbb{R}^3 \backslash (K_r \cap \overline{B_R(0)}), \\ \operatorname{div} v &= 0, \\ v &= u \quad \text{in} ~ K_r \cap \overline{B_R(0)}. \end{align} Then, \[ (L_r u,u)_{\dot{H}^1(\mathbb{R}^3)} \geq c e^{-R} \| v\|_{\dot{H}^1(\mathbb{R}^3)}^2, \] where $c>0$ is a universal constant. \end{lemma} \begin{proof} By the variational form of the equation for $ v $, we know that $v$ is the function of minimal norm in the set $ X_v := \{ w \in \dot{H}^1_\sigma(\mathbb{R}^3) \colon w = v ~ \text{in} ~ K \cap \overline{B_R} \}$. For every $x$ in $\Gamma_r \cap B_{R+r}$, Lemma \ref{cor:solenoidalExtension} provides functions $v_x \in H^1_0(B_{2r}(x))$ with $\|v_x\|_{\dot{H}^1(\mathbb{R}^3)} \leq C \|Q_x v\|_{\dot{H}^1(\mathbb{R}^3)}$ such that $v_x = Q_x v = v $ in $B_x$. Clearly, $\sum_{x \in B_{R+r}} v_x \in X_v$, and hence, \begin{align} \langle L_r v , v \rangle &= \sum_x e^{-|x|} \|Q_x v \|_{\dot{H}^1(\mathbb{R}^3)}^2 \\ &\geq c e^{-R} \sum_{x \in B_{R+r}} \|v_x \|_{\dot{H}^1(\mathbb{R}^3)}^2 \\ &= c e^{-R} \|\sum_{x \in B_{R+r}} v_x \|_{\dot{H}^1(\mathbb{R}^3)}^2 \\ &\geq c e^{-R} \|v\|^2_{\dot{H}^1(\mathbb{R}^3)}. \qedhere \end{align} \end{proof} For the proof of Lemma \ref{lem:SpolynomialDecay}, we need the following lemma. \begin{lemma} \label{lem:projest} Let $ u \in H^1(\mathbb{R}^3) $ and $ x \in \mathbb{R}^3 $. Assume $ 0<\rho<R$. Then \begin{equation} \| u \|^2_{L^2(B_{\rho}(x))} \leq C \left( \frac{\rho^3}{R^3} \|u\|_{L^2(B_R(x))}^2 + \rho^2 \|\nabla u\|_{L^2(B_R(x))}^2\right), \end{equation} where $ C$ is a universal constant.
In particular, for all particle configurations with capacity $\mu$ and all $u \in H^1(\mathbb{R}^3)$, we have \begin{equation} \| u \|^2_{L^2(K_r)} \leq C \mu \|u\|_{L^2(\mathbb{R}^3)}^2 + C \mu \|\nabla u\|_{L^2(\mathbb{R}^3)}^2. \end{equation} \end{lemma} \begin{proof} Define $ (u)_{R,x} = \fint_{B_{R}(x)} u $. Then, using Lemma \ref{lem:extest} we get \begin{equation} \begin{aligned} \|u-(u)_{R,x}\|_{L^2(B_{\rho}(x))} &\leq \|u-(u)_{R,x}\|_{L^6(B_{\rho}(x))} \|1\|_{L^3(B_{\rho}(x))} \\ &\leq C \rho \|\nabla E_{R}(u-(u)_{R,x})\|_{L^2(B_{2R}(x))} \leq C \rho \|\nabla u \|_{L^2(B_{R}(x))}. \end{aligned} \end{equation} Furthermore, \begin{equation} \| (u)_{R,x} \|^2_{L^2(B_{\rho}(x))} = C \rho^3 \left(\fint_{B_{R}(x)} u \, d x \right)^2 \leq C \rho^3 \fint_{B_{R}(x)} u^2 \, d x= C \frac{\rho^3}{R^3}\| u \|^2_{L^2(B_{R}(x))}. \end{equation} Combining these two estimates yields the assertion. \end{proof} \begin{lemma} \label{lem:SpolynomialDecay} For all $\mu > 0$ and $\rho>0$, there exists a nonincreasing function $ e_{\mu,\rho} \colon \mathbb{R}_+ \to \mathbb{R}_+$ with $ \lim_{s \to \infty} e_{\mu,\rho}(s) = 0$ that has the following property. All $w \in \dot{H}_0^1(\mathbb{R}^3 \backslash K_r)^\perp $ with $w = 0$ in $ K_r \cap B_R(0)$ satisfy \[ \| \nabla w\|_{L^2(B_\rho(0))} \leq e_{\mu,\rho}(R) \| \nabla w \|_{L^2(\mathbb{R}^3)}, \] for all $R \geq \rho$ if $r$ is sufficiently small. \end{lemma} \begin{proof} Fix a particle configuration with capacity $\mu $ and $d<1/(2\sqrt{3})$, and fix $R$, $\rho$, and $w$ according to the assumptions. Assume $s \geq 1$ satisfies $ 2s \leq R $. Note that $w$ is the function of minimal norm in the set \[ X_w := \{ v \in \dot{H}^1_\sigma \colon v=0 \text{ in } K_r \cap B_{2s}(0), ~ v = w \text{ on } \partial B_{2s}(0)\}. \] Define $\eta \in C^1(\mathbb{R}^3) $ to be a cut-off function with $\eta = 1 $ in $\mathbb{R}^3 \backslash B_{2s(1-3r)}(0)$, $\eta = 0 $ in $B_{s(1+3r)}$, and $|\nabla \eta | \leq C/s$.
Then, $v_1:= \eta w$ has the right boundary condition to be in the set $X_w$ but fails to be divergence free. Indeed, $\operatorname{div} v_1 = \nabla \eta \cdot w$. Therefore, we use Lemma \ref{lem:divergenceSolution} to find a function $v_2 \in \dot{H}^1_0 (B_{2s} \backslash B_s)$ with $\operatorname{div} v_2 = - \operatorname{div} v_1$ and \[ \| \nabla v_2 \|_{L^2(B_{2s} \backslash B_s)} \leq C \| \operatorname{div} v_1 \|_{L^2(B_{2s} \backslash B_s)} \leq \frac{C}{s} \| w \|_{L^2(B_{2s} \backslash B_s)}. \] Now $v_1 + v_2$ is divergence free and equals $w$ on $\partial B_{2s}$. To match the boundary conditions in $K_r \cap B_{2s}(0)$, we use Lemma \ref{cor:solenoidalExtension}. For $x \in \Gamma_r \cap (B_{2s(1-2r)} \backslash B_{s(1+2r)}) $ it provides a function $v_x \in H^1_{0,\sigma}(B_{2r}(x))$ with $v_x = - v_2$ in $B_x$ and \begin{align} \| v_x \|^2_{\dot{H}^1} & \leq \frac{C}{r^2} \| v_2 \|^2_{L^2(B_x)} + C \| \nabla v_2 \|^2_{L^2(B_x)} \\ & \leq C \left( \mu \| v_2 \|^2_{L^2(B_{d/2}(x))} + \| \nabla v_2 \|^2_{L^2(B_{d/2}(x))} \right) \\ &\leq C(s^2 \mu + 1) \| \nabla v_2 \|^2_{L^2(B_{d/2}(x))}, \end{align} where we used Lemma \ref{lem:projest} for the second estimate and the Poincaré inequality in $ \dot{H}^1_0 (B_{2s} \backslash B_s)$ for the last one. By construction, $v:= v_1 + v_2 + \sum_{x \in \Gamma_r\cap (B_{2s(1-2r)} \backslash B_{s(1+2r)})} v_x $ is an element of $X_w$. Therefore, \begin{align} 0 &\leq \| \nabla v \|_{L^2(\mathbb{R}^3)}^2 - \| \nabla w \|_{L^2(\mathbb{R}^3)}^2 \\ & \leq C\|\nabla w \|^2_{L^2(B_{2s} \backslash B_s)} + C(\frac{1}{s^2} + \mu) \| w \|^2_{L^2(B_{2s} \backslash B_s)} - \|\nabla w\|^2_{L^2(B_s)}. \end{align} Since $s \geq 1 $ by assumption, the factor $s^{-2}$ can be dropped. Using the Poincaré inequality in the annulus $B_{2s} \backslash B_s $, provided by Lemma \ref{lem:poincareAnnulus} below, we deduce \[ \|\nabla w\|^2_{L^2(B_s)} \leq C (1+\mu^{-1}) \| \nabla w \|^2_{L^2(B_{2s} \backslash B_s)}.
\] Using again the hole filling technique and iterating from $s:= \max\{\rho,1\}$ until $2^k s \geq R/2 $ concludes the proof. \end{proof}
\section{Introduction} Preissmann and Mischler \cite{PreissmannMischler} proved the following, confirming a conjecture of R. Bacher. \begin{teorema}\label{TeoRey} Let $p=2n+1$ be an odd prime. Suppose we are given $n$ elements $d_1,\ldots,d_n\in (\mathbb{Z}/p)^*$. Then there exists a partition of $\mathbb{Z}/p - \{0\}$ into pairs with differences $d_1,\ldots,d_n$. \end{teorema} A simpler proof of this theorem can be found in \cite{KohenSadofschi}. Karasev and Petrov, independently, gave a proof of this result along the same lines and provided further generalizations in \cite{KarasevPetrov}. In that work, they also conjectured two generalizations of \cref{TeoRey}, replacing $p$ by an arbitrary integer $N$. The conjecture in the case that $N$ is even is originally due to Adamaszek. \begin{conjetura}[{\cite[Conjecture 1]{KarasevPetrov}}]\label{ConjImpar} Let $N = 2n + 1$ be a positive integer. Suppose we are given $n$ elements $d_1,\ldots,d_n\in (\mathbb{Z}/N)^*$. Then there exists a partition of $\mathbb{Z}/N - \{0\}$ into pairs with differences $d_1,\ldots,d_n$. \end{conjetura} We will prove the conjecture when $N$ is even: \medskip \begin{teoremaRepetido}[\ref{TeoPar}]{\normalfont ({\cite[Conjecture 2]{KarasevPetrov}}){\bfseries .}} Let $N=2n$ be a positive integer. Suppose we are given $n$ elements $d_1,d_2,\ldots,d_n \in (\mathbb{Z}/N)^*$. Then there exists a partition of $(\mathbb{Z}/N)$ into pairs with differences $d_1, d_2,\ldots,d_n$. \end{teoremaRepetido} \medskip While finishing this paper we found out that, in his master's thesis \cite{Mezei}, T.R. Mezei suggests a possible way to solve the conjecture that is similar to ours. Furthermore, he shows that Theorem \ref{TeoPar} holds whenever $N=2p$ for $p$ a prime number. \section{The even case} We recall the following version of the Cauchy-Davenport theorem.
\begin{teorema}[{\cite[1.4]{Granville}}]\label{CauchyDavenport} If $A$ and $B$ are nonempty subsets of $\mathbb{Z}/N$ where $0\in B$, and $\gcd(b,N)=1$ for all $b\in B\setminus\{0\}$, then $$|A+B|\geq \min\{N,|A|+|B|-1\}.$$ \end{teorema} Suppose that we have a partition as in \cref{TeoPar}. Since the $d_{i}$ are odd numbers, each pair contains exactly one even number. Therefore, if \cref{TeoPar} holds there exist signs $s_i$ such that $s_1d_1+\ldots+s_nd_n\equiv 1-2+3-\ldots +(2n-1)-2n\equiv n\bmod N$. \begin{teorema}\label{TeoSuma} Let $N=2n$ and let $d_1,\ldots,d_n\in (\mathbb{Z}/N)^*$. Then there exist $s_1,\ldots,s_n\in \{1,-1\}$ such that $$s_1d_1+\ldots+s_nd_n\equiv n\bmod {2n}.$$ \begin{proof} It is enough to prove that there exists $I\subset\{1,\ldots,n\}$ such that $$\sum_{i \in I} 2d_{i} \equiv d_{1}+d_{2}+ \cdots +d_{n} + n \bmod{2n}.$$ Since $d_i$ is odd for every $i$, $d_{1}+d_{2}+ \cdots+ d_{n} + n$ is even and therefore our task is equivalent to finding $I$ such that $$\sum_{i \in I} d_{i} \equiv \frac{d_{1}+d_{2}+ \cdots +d_{n} + n}{2} \bmod{n}.$$ Let $A_{i}=\left\lbrace d_{i},0 \right\rbrace$. Applying \cref{CauchyDavenport} inductively, we see that $$ \#(A_{1}+ \cdots + A_{n}) \ge \min \left\lbrace n, \sum \#A_{i} -(n-1) \right\rbrace=n,$$ concluding the proof. \end{proof} \end{teorema} The last ingredient is the following theorem by Hall. \begin{teorema}[{\cite{Hall}}]\label{TeoHall} Let $A$ be an abelian group of order $n$ and $a_1,\ldots,a_n$ be a numbering of the elements of $A$. Let $d_1,\ldots,d_n\in A$ be elements such that $d_1+\ldots+d_n=0$. Then there are permutations $\sigma,\tau\in S_n$ such that $$a_i-a_{\sigma(i)}=d_{\tau(i)}.$$ \end{teorema} \begin{teorema}\label{TeoPar} Let $N=2n$ be a positive integer. Suppose we are given $n$ elements $d_1,d_2,\ldots,d_n \in (\mathbb{Z}/N)^*$. Then there exists a partition of $\mathbb{Z}/N$ into pairs with differences $d_1, d_2,\ldots,d_n$.
\begin{proof} First, from \cref{TeoSuma}, we may assume that $d_1+\ldots+d_n\equiv n \bmod{2n}$. Now it is enough to find a numbering $a_1,\ldots,a_n$ of the odd numbers in $\mathbb{Z}/N$ and $\sigma \in S_n$ such that $2i-a_i\equiv d_{\sigma(i)}\bmod {2n}$ for every $i=1,\ldots,n$, for then the partition into pairs $\{2,a_1\},\{4,a_2\},\ldots,\{2n,a_n\}$ works. Equivalently, we need to find a numbering $b_1,\ldots,b_n$ of the even numbers in $\mathbb{Z}/N$ such that $2i-b_i\equiv d_{\sigma(i)}+1\bmod {N}$ for some $\sigma \in S_{n}$. Now since $d_i+1$ is even for all $i$, this is the same as finding a permutation $c_1,\ldots,c_n$ of $\{1,\ldots,n\}$ such that $i-c_i\equiv \frac{d_{\sigma(i)}+1}{2} \bmod n$, for some $\sigma \in S_n$. If we verify that $\frac{d_1+1}{2}+\ldots+\frac{d_n+1}{2}\equiv 0\bmod n$, this will follow from \cref{TeoHall}. But this holds, since $d_1+\ldots+d_n\equiv n\bmod {2n}$ and therefore $(d_1+1)+\ldots+(d_n+1)\equiv 0\bmod {2n}$, proving that $\frac{d_1+1}{2}+\ldots+\frac{d_n+1}{2}\equiv 0\bmod n$. \end{proof} \end{teorema} \bibliographystyle{plain}
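As a sanity check (ours, not part of the paper), the statement of Theorem \ref{TeoPar} can be verified exhaustively for small even $N$: the following Python sketch searches for a partition by backtracking, assigning each difference $d$ to a pair $\{x, x+d\}$, and confirms that every choice of differences admits a partition when $N \le 8$.

```python
from itertools import product
from math import gcd

def units(N):
    """The invertible residues (Z/N)^*."""
    return [d for d in range(1, N) if gcd(d, N) == 1]

def can_partition(N, diffs):
    """Try to split {0,...,N-1} into pairs {x, x+d}, using each given
    difference d exactly once (backtracking search)."""
    def rec(free, ds):
        if not free:
            return True
        a = min(free)                      # pair up the smallest free element
        for i, d in enumerate(ds):
            for b in ((a + d) % N, (a - d) % N):
                if b in free and b != a:
                    if rec(free - {a, b}, ds[:i] + ds[i+1:]):
                        return True
        return False
    return rec(frozenset(range(N)), list(diffs))

# Exhaustive check of the even case for small N
for N in (2, 4, 6, 8):
    assert all(can_partition(N, ds)
               for ds in product(units(N), repeat=N // 2))
```

Note that the backtracking must try both $x + d$ and $x - d$, since the pair containing the smallest free element may sit on either side of it.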
\section{Introduction} In the near future, robots will become trustworthy helpers of humans, performing a variety of services at homes and in workplaces. A basic, but essential capability for such robots is to fetch common objects of daily life, e.g., cups or TV remote controllers, and hand them to humans. Today robots perform object handover in a limited manner: typically the robot holds an object statically in place and waits for the human to take it. This is far from the fluid handover between humans and is generally inadequate for the elderly, the very young, or the physically weak who require robot services. The long-term goal of our research is to develop the algorithmic framework and the experimental system that enable robots to perform \emph{fluid} object handover in a \emph{dynamic} setting and to adapt over human preferences and object characteristics. This work takes the first step and focuses on a robot handing over a water bottle in a dynamic setting (Fig.~\ref{fig:handoverScenarios}), e.g., handing over flyers to people walking by or handing over water bottles to marathon runners. Object handover appears deceptively simple. Humans are experts at object handover. We perform it many times a day almost flawlessly without thinking and \emph{adapt} over widely different contexts: \begin{itemize} \item \emph{Dynamics:} We hand over objects to others whether they sit, stand, or walk by. \item \emph{Object characteristics:} We hand over objects of different shape, weight, and surface texture. \item \emph{Human preferences:} While typical human object handover occurs very fast, we adapt our strategy and slow down when handing over objects to the elderly or young children. \end{itemize} The success of humans, however, belies the complexity of object handover as collaborative physical interaction between two agents with limited communication. 
Manually programming robot handover with comparable robustness and adaptivity poses a great challenge, as we lack even a moderately comprehensive and reliable model for handover in a variety of contexts. \begin{figure}[t] \centering \captionsetup{width=.8\linewidth} \includegraphics[height = 1.5in]{sit.pdf}\quad \includegraphics[height =1.5in]{walk.pdf}\quad \includegraphics[height=1.5in]{run.pdf} \caption{Hand over a water bottle to a person sitting, walking, or running. } \label{fig:handoverScenarios} \end{figure} Alternatively, the robot can learn the handover skill by interacting with the human and generalize from experience. In this work, we formulate the learning task as \emph{contextual policy search}~\citep{Kupcsik2013}. Policy search is a general approach to reinforcement learning and has been very successful in skill learning for robots with many degrees of freedom~\citep{Deisenroth2013}. Policy search algorithms parametrize robot control policies and search for the best parameter values by maximizing a reward function that captures the policy performance. Contextual policy search introduces a set of \emph{context variables} that depend on the task context, e.g., object type or size for the handover task, and the policy parameters are conditioned on the context variables. A reward function that accurately measures policy performance is key to the success of policy search. However, handcrafting a good reward function is often tedious and error-prone, in particular, for learning object handover. It is unclear what quantitative measures capture fluid object handover. Instead, we propose to learn the latent reward function from human feedback. Humans are experts at object handover and can easily provide reward feedback. However, the feedback is often noisy. To be robust against noise and avoid overfitting, we apply a Bayesian optimization approach to latent reward learning.
Importantly, our learning algorithm allows for both \emph{absolute feedback}, e.g., ``Is the handover good or bad?'', and \emph{preference feedback}, e.g., ``Is the handover better than the previous one?''. Combining latent reward learning and policy search leads to a holistic contextual policy search algorithm that learns object handover directly from human feedback. Our preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. \section{Related Work} \label{sec:related} \subsection{Object Handover} Object handover has intrigued the research community for a long time from both the physical and social-cognitive perspectives. Early work on handover dates back to at least the 1990s~\citep{Agah_ICRA_1997,NagOos98}. Recent work suggests that object handover consists of three stages conceptually: approach, signal, and transfer~\citep{Strabala_JHRI_2013}. They do not necessarily occur sequentially and may partially overlap. In the first stage, the giver approaches the taker and poses the object to get ready for handover~\citep{CakSri11,MaiGha12,SisAla05}. In the second stage, the giver and taker signal to each other and exchange information, often through non-verbal communication, such as motion~\cite{Dragan_RSS_2013}, eye gaze, or head orientation~\citep{Grigore_IROS_2013}, in order to establish joint intention of handover. In the final stage, they complete the physical transfer of the object. The transfer stage can be further divided into two sub-stages, before and after the giver and the taker establish joint contact of the object, respectively. Earlier work on object transfer generally assumes that the object remains stationary once joint contact is established and relies on handcrafted controllers~\citep{Agah_ICRA_1997,Chan_ICRA_2014,HuaCak15,NagOos98}. Our work focuses on the final physical transfer stage only. The algorithm learns a controller directly from human feedback.
It does not make the stationary assumption and caters for dynamic handover. Object transfer is an instance of the more general problem of cooperative manipulation~\citep{BruKha08}: it involves two asymmetric agents with limited communication. Human-human object handover provides the yardstick for handover performance. Understanding how humans perform handover (e.g., \citep{ChaPar13,HubKup13}) paves the way towards improved robot handover performance. \subsection{Policy Search} Robot skill learning by policy search has been highly successful in recent years \cite{Deisenroth2013}. Policy search algorithms learn a skill represented as a probability distribution over parameterized robot controllers, by maximizing the expected reward. To allow robot skills to adapt to different situations, contextual policy search learns a contextual policy that conditions a skill on context variables~\citep{Silva2012, Daniel2012, Kupcsik2013}. To represent robot skills, policy search typically makes use of parametrized controllers, such as \emph{dynamic movement primitives}~\citep{Ijspeert2003} or \emph{interaction primitives} \citep{Heni_ICRA_2014}. The latter is well-suited for human-robot interaction tasks. Our work, on the other hand, exploits domain knowledge to construct a parameterized impedance controller. To learn robot skills, policy search requires that a reward function be given to measure learning performance. However, handcrafting a good reward function is often difficult. One approach is inverse reinforcement learning (IRL), also called inverse optimal control, which learns a reward function from expert demonstration~\cite{Ng_ICML_2000, Ratliff_2009_AR}. Demonstrations by human experts can be difficult or tedious to acquire, in particular, for robot-human object handover. An alternative is to learn directly from human feedback, without human expert demonstration. Daniel et al. 
use reward feedback from humans to learn manipulation skills for robot hands~\citep{Daniel_RSS_2014}. Wilson et al. consider learning control policies from trajectory preferences using a Bayesian approach without explicit reward feedback~\citep{Wilson_NIPS_2012}. Jain et al. learn manipulation trajectories from human preferences \citep{Jain_CoRR_2013}. Preference-based reinforcement learning algorithms generally do not use absolute reward feedback and rely solely on preference feedback \citep{Wirth_ECML_2013}. Our algorithm combines both absolute and preference feedback in a single Bayesian framework to learn a reward function and integrate with policy search for robot skill learning. \section{Learning Dynamic Handover from Human Feedback} \label{sec:handover} \subsection{Overview} Assume that a robot and a human have established the joint intention of handover. Our work addresses the physical transfer of an object from the robot to the human. The robot controller $u(\cdot\, ; \a )$ specifies the control action $u_t$ at the state $\x_t$ at time $t$ for $t=1, 2, \dots$. The controller $u(\cdot\, ; \a )$ is parametrized by a set of parameters $\a$, and the notation makes the dependency on $\a$ explicit. A reward function $R(\a)$ assigns a real number that measures the performance of the policy $u(\cdot\, ; \a )$. To handle the dynamics of handover, we introduce a context variable $\s$ representing the velocity of the human hand and condition the controller parameters $\a$ on $\s$, giving rise to the reward function $R(\a, \s)$. In general, context variables may include other features, such as human preferences and object characteristics as well. A contextual policy $\pi(\a | \s)$ is a probability distribution over parametrized controllers, conditioned on the context $\s$.
Our goal is to learn a contextual policy that maximizes the expected reward: \begin{align} \pi^* = \arg\max_{\pi} \int_{\s} \int_{\a} R(\a, \s) \pi(\a|\s) \mu(\s)\; \mathrm{d}\a \,\mathrm{d}\s, \end{align} where $\mu(\s)$ is a given prior distribution over the contexts. Contextual policy search iteratively updates $\pi$ so that the distribution peaks on controllers with higher rewards. In each iteration, the robot learner observes context~$\s$ and samples a controller with parameter value $\a$ from the distribution $ \pi(\cdot |\s)$. It executes the controller $u(\cdot\,; \a)$ and observes the reward $R(\a,\s)$. After repeating this experiment $L$ times, it updates $\pi$ with the gathered data $\{\a_i,\s_i,R(\a_i,\s_i)\}_{i=1}^L$ and proceeds to the next iteration. See Fig.~\ref{fig:hri-setup} for the overall learning and control architecture and Table~\ref{alg:handover} for a sketch of our learning algorithm. The reward function $R(\a, \s)$ is critical in our algorithm. Unfortunately, it is difficult to specify manually a good reward function for learning object handover, despite the many empirical studies of human-human object handover~\citep{CakSri11, ChaPar13,HubKup13, Strabala_JHRI_2013}. We propose to learn a reward function $\hat{R}(\a, \s)$ from human feedback. Specifically, we allow both \emph{absolute} and \emph{preference} human feedback. Absolute feedback provides a direct assessment of the robot controller performance on an absolute scale from 1 to 10. Preference feedback compares one controller with another. While the former has higher information content, the latter is usually easier for humans to assess. We take a Bayesian approach and apply Gaussian process regression to latent reward estimation. The learned reward model $\hat{R}(\a, \s)$ generalizes the human feedback data.
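To make the iteration concrete, the following Python sketch runs contextual policy search on a synthetic problem (our toy stand-in, not the paper's setup): a 1-D context, 2-D controller parameters, an exponentiated-reward weighting in the spirit of REPS with a fixed temperature instead of the dual optimization, and a weighted maximum-likelihood update of the contextual Gaussian policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic reward (assumption): maximized at a = [s, 1 - s]
def reward(a, s):
    return -np.sum((a - np.array([s, 1.0 - s]))**2)

# Contextual Gaussian policy pi(a|s) = N(a | b + A s, Sigma)
b, A, Sigma = np.zeros(2), np.zeros((2, 1)), np.eye(2)

for _ in range(20):
    S = rng.uniform(0.0, 1.0, size=(30, 1))              # observed contexts
    Aa = b + S @ A.T + rng.multivariate_normal(np.zeros(2), Sigma, 30)
    R = np.array([reward(a, s[0]) for a, s in zip(Aa, S)])
    # Exponentiated-reward weights (fixed temperature eta = 1, for brevity)
    w = np.exp(R - R.max()); w /= w.sum()
    # Weighted maximum-likelihood update of b, A and Sigma
    X = np.hstack([np.ones((30, 1)), S])                 # features [1, s]
    theta = np.linalg.solve(X.T @ (w[:, None] * X) + 1e-6 * np.eye(2),
                            X.T @ (w[:, None] * Aa))
    b, A = theta[0], theta[1:].T
    diff = Aa - (b + S @ A.T)
    Sigma = diff.T @ (w[:, None] * diff) + 1e-6 * np.eye(2)
```

Over the iterations the policy mean $b + A s$ drifts toward the high-reward region for each context while the covariance contracts; in the full algorithm the experiment loop is replaced by human trials and the synthetic reward by the learned model $\hat{R}(\a,\s)$.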
It provides estimated rewards on arbitrarily sampled $(\a,\s)$ without additional experiments and drastically reduces the number of robot experiments required for learning a good policy. \begin{figure}[t] \centering \captionsetup{width=.8\linewidth} \includegraphics[width = 8cm]{human-robot-handover-learning-setup2.pdf} \caption{The human-robot handover skill learning framework. The robot observes context $\s$, then samples $\a$ using the policy $\pi(\a|\s)$. The experiment is executed with a robot controller with parametrization $\a$. The robot controller $u(\x;\a)$ provides deterministic control signals $\bs{u}$ given the state of the robot and its environment $\bs{x}$. After the experiment the human provides high-level feedback $\mathcal{F}$, which is used to estimate the latent reward $\hat{R}(\a,\s)$. Finally, the policy is updated with the latest data.} \label{fig:hri-setup} \end{figure} \begin{table} \centering \begin{tabular}{ p{0.8\textwidth} } \hline The C-REPS Algorithm with Human Feedback \\ \hline \textbf{Input:} relative entropy bound $\epsilon$, initial policy $\pi (\a|\s)$, maximum number of policy updates $H$.
\\ \textbf{for} $h = 1,\dots,H$ \\ \quad\begin{tabular}{| p{90mm} } \textbf{Collect human feedback data:} \\ \quad\begin{tabular}{| p{\textwidth} } {\it Observe context $\s_i\sim \mu(\s)$, $i = 1,\dots,L$}\\ {\it Draw parameters $\a_i \sim \pi(\a| \s_i)$}\\ {\it Collect human feedback $\mathcal{F}_i$}\\ \end{tabular} \textbf{Estimate latent rewards of all previously seen samples} $\{\a_i, \s_i, \mathcal{F}_i\}_{i=1}^{E}$ \\ \textbf{Predict rewards:} \\ \quad\begin{tabular}{| p{.9\textwidth} } {\it Draw context $\s_j \sim \hat{\mu}(\s),~j=1,\dots,Q$}\\ {\it Draw parameters $\a_j \sim \pi(\a|\s_j)$}\\ {\it Predict $\hat{R}(\a_j,\s_j)$ with reward model } \\ \end{tabular} \textbf{Update policy:} \\ \quad\begin{tabular}{| p{.9\textwidth} } {\it Update policy $\pi(\a|\s)$ using C-REPS with samples $\{\a_j, \s_j, \hat{R}(\a_j, \s_j)\}_{j=1}^Q$} \end{tabular} \textbf{end} \\ \end{tabular} \\ \textbf{Output:} policy $\pi(\a| \s)$ \\ \hline \caption{The learning framework for human-robot object transfer.} \label{alg:handover} \end{tabular} \end{table} \subsection{Representing the Object Handover Skill} \label{sec:skill} In this section we discuss how we encode the handover skill and what the parameters $\a$ refer to. In our work we use a trajectory generator, a robot arm controller and a robot hand controller to encode the handover skill. A trajectory generator provides reference Cartesian coordinates for the robot end-effector to follow. In robot learning tasks, Movement Primitives (MPs) are often used to encode a trajectory with a limited number of parameters. MPs encode the shape, speed and magnitude of the trajectory in Cartesian space, or in joint space for each degree of freedom. While MPs can encode a wide variety of skills, they typically require a higher number of parameters to tune, which might slow down the learning process. For handover tasks, however, we can use human expert knowledge to define robot hand trajectories.
This approach allows for a more compact representation of the trajectory generator with fewer parameters to tune. Furthermore, we can address safety by reducing the workspace of the robot and we can easily synchronize with the human motion. In our experiments we use visual data from a Kinect sensor, which tracks the right hand of the human. As soon as the human hand is within $d_{max}$ distance from the robot hand, the robot moves the object towards the human hand location. We assume that a path planner computes the reference trajectory from the current robot hand location to the human hand location. The reference trajectory is updated every time the human hand location is updated. As soon as the distance between the human and the robot hand falls below $d_{min}$, we do not use visual information due to possible occlusion and measurement error. Instead, we use the recorded visual data to predict the human hand trajectory for the next second when the physical interaction is likely to happen. The values of $d_{min}$ and $d_{max}$ may depend on different factors, such as experiment setup, robot configuration, etc. In order to ensure robust human-robot handover, we need to allow compliant robot arm motion. We use Cartesian impedance control \citep{BruKha08} where the wrench $\bs{F}_{6 \times 1}$ concatenating forces and torques exerted in the end-effector frame is computed according to $ \bs{F} = \bs{M} \Delta \bs{\ddot{\x}} + \bs{D} \Delta\bs{\dot{\x}} + \bs{P} \Delta\x, $ where $\Delta\x_{6\times 1}$ is the deviation from the reference trajectory. The gain parameters $\bs{M}$, $\bs{D}$ and $\bs{P}$ determine the amount of exerted forces and torques. $\bs{M}$ is typically replaced with the robot inertia at the current state. We choose the damping $\bs{D}$ such that the closed-loop control system is critically damped.
We use a diagonal stiffness matrix $\bs{P} = \mbox{diag}([\bs{p}_t^T, \bs{p}_r^T])$, where $\bs{p}_t$ is the translational and $\bs{p}_r$ is the rotational stiffness. Finally, the applied torque commands are $\bs{\tau} = \bs{J}^T \bs{F} + \bs{\tau}_{ff}$, where $\bs{J}$ is the Jacobian of the robot and $\bs{\tau}_{ff}$ are feed-forward torques compensating for gravity and other nonlinear effects. Motivated by recent work in human-human handover experiments \cite{ChaPar13}, a robot grip force controller \citep{Chan_ICRA_2014} has been proposed, $ \bs{F}_g = k \bs{F}_l + \bs{F}_{ovl}, $ where $\bs{F}_g$ is the commanded grip force, $\bs{F}_l$ is the measured load force and $\bs{F}_{ovl}$ is the preset overloading force. The slope parameter $k$ depends on object properties, such as mass, shape and material properties. When using this controller, the robot will release the object in case the total load force on the robot drops below a threshold value. For robot hands with only finger position control we cannot use the above control approach. Instead, we directly command finger positions by identifying the finger position with minimal grip force that still holds the object. Then, we use a control law to change finger positions linearly in the load force, $\bs{f}_{pos}= \bs{f}_{min} + m \bs{F}_l$. The value of $m$ depends on many factors, such as object type, weight and other material properties. For learning the object handover, we tune $7$ parameters of the control architecture. For the trajectory generator we tune the minimal and maximal tracking distances $d_{min}$ and $d_{max}$. For the compliant arm controller we learn the translational stiffness parameters and one parameter for all the rotational stiffness values. Finally, for the finger controller we tune the slope parameter. All these parameters are collected in $\a_{7\times 1}$.
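The control laws above can be sketched compactly. The Python fragment below is illustrative only: it evaluates the impedance wrench, the resulting joint torques, and the finger position law, taking $\bs{M} = \bs{I}$ and $\bs{D} = 2\sqrt{\bs{P}}$ (critical damping for unit mass) as our simplifying assumptions.

```python
import numpy as np

def impedance_wrench(dx, dx_dot, dx_ddot, M, D, P):
    """Cartesian impedance law: F = M*ddx + D*dx_dot + P*dx (6-D wrench)."""
    return M @ dx_ddot + D @ dx_dot + P @ dx

def joint_torques(J, F, tau_ff):
    """Map the end-effector wrench to joint torques: tau = J^T F + tau_ff."""
    return J.T @ F + tau_ff

def finger_position(F_load, f_min, m):
    """Finger law for position-controlled hands: f_pos = f_min + m*F_load."""
    return f_min + m * F_load

# Example with the initial stiffness values used in the experiments
# (275/450/275 N/m translational, 2.75 Nm/rad rotational)
P = np.diag([275.0, 450.0, 275.0, 2.75, 2.75, 2.75])
D = 2.0 * np.sqrt(P)          # critical damping for unit mass (assumption)
M = np.eye(6)
dx = np.array([0.01, -0.02, 0.0, 0.0, 0.0, 0.05])  # deviation from reference
F = impedance_wrench(dx, np.zeros(6), np.zeros(6), M, D, P)
```

With zero velocity and acceleration error, the wrench reduces to the spring term $\bs{P}\Delta\x$, which is why the learned stiffness values directly shape how compliantly the arm yields during the physical transfer.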
\subsection{Estimating the Latent Reward Function} In this section we propose a Bayesian latent reward estimation technique based on previous work \citep{Chu_ICML_2005}. Assume that we have observed a set of samples $\{\s_i, \a_i\}_{i=1}^E$ and human feedback $\{\mathcal{F}_i\}_{i=1}^E$, where $\mathcal{F}_i = \tilde{R}(\y)$, in case the human gives an absolute evaluation (denoted by $\tilde{R}$) on parametrization $\a_i$ in context $\s_i$, $ \y= [\s^T, \a^T]^T$. In case of preference feedback, $\mathcal{F}_i = \y_k \succ \y_l$ if $\y_k$ is preferred over $\y_l$, $k \neq l$. Note that for a given sample there may exist both preference and absolute evaluations. We define the prior distribution over the latent rewards as a Gaussian Process \citep{Rasmussen2006}, $\hat{\bs{R}} \sim \mathcal{N}(\bs 0, \bs K)$, with $\bs{K}_{ij} = k(\y_i, \y_j)$. Without loss of generality we assume a $\bs{0}$ prior mean, but more informative priors can be constructed with expert knowledge. The likelihood function for preference-based feedback is given by $p(\y_i \succ \y_j|\hat{\bs{R}}) = \bs \Phi((\hat{R}_i - \hat{R}_j)/(\sqrt{2}\sigma_p))$ \citep{Chu_ICML_2005}, where $\bs{\Phi}(\cdot)$ is the c.d.f. of $\mathcal{N}(0,1)$ and $\sigma_p$ is a noise term accounting for feedback noise. For absolute feedback data we simply define the likelihood by $p(\tilde{R}_i|\hat{\bs{R}}) = \mathcal{N}(\hat{R}_i, \sigma_r^2)$, where $\sigma_r^2$ represents the variance of absolute human feedback. Finally, the posterior distribution of the latent rewards can be approximated by \begin{equation} p(\hat{\bs{R}}|\mathcal{D}) \propto \prod_{i=1}^N p(\y_{i,1} \succ \y_{i, 2}|\hat{\bs{R}}) \prod_{j=1}^M p(\tilde{{R}}_j|\hat{R}_j,\sigma_r^2) p(\hat{\bs{R}}|\bs 0, \bs K), \end{equation} where we used the notation $p(\y_{i,1}\succ \y_{i,2}|\hat{\bs{R}})$ to highlight that $\mathcal{F}_i$ is a preference feedback comparing $\y_{i,1}$ to $\y_{i,2}$.
For finding the optimal latent rewards, we minimize \begin{equation} J(\hat{\bs{R}}) = - \sum_{i=1}^N \log \bs \Phi (\z_i) + \frac{\sigma_r^{-2}}{2} \sum_{j=1}^M (\tilde{R}_j - \hat{R}_j)^2 + \hat{\bs{R}}^T \bs K^{-1} \hat{\bs{R}}, \label{eq:optpref} \end{equation} with $\z_i = (\hat{R}(\y_{i,1}) - \hat{R}(\y_{i,2}))/(\sqrt{2}\sigma_p)$. It was shown in \citep{Chu_ICML_2005} that minimizing $J$ w.r.t. $\hat{\bs{R}}$ is a convex problem in case there is only preference-based feedback ($M=0$). However, it is easy to see that the Hessian of $J(\hat{\bs{R}})$ will only be augmented with non-negative elements in the diagonal in case $M> 0$, which will leave the Hessian positive semi-definite and the problem convex. The hyper-parameters of the kernel function $\bs \theta$ and the noise terms can be optimized by maximizing the evidence $p(\mathcal{D}|\bs \theta, \sigma_p, \sigma_r)$. While the evidence cannot be given in a closed form, we can estimate it by Laplace approximation. It is interesting to note that in case there is only preference feedback, that is, $M=0,~N>0$, we obtain the exact same algorithm as in \citep{Chu_ICML_2005}. At the other extreme, in case there is only absolute feedback ($M>0,~N=0$) we get Gaussian Process regression, which provides a closed-form solution for $p(\hat{\bs{R}})$. Overall, our extension provides an opportunity to mix preference and absolute feedback in a unified Bayesian framework. Also note that after obtaining $p(\hat{\bs{R}})$ we can use Bayesian linear regression to query the expected reward $R^*$ of unseen samples $\bs{y}^*$ \cite{Chu_ICML_2005, Rasmussen2006}. We can use the resulting generative model of the reward to query the reward for a large number of samples from the current control distribution $\y \sim \mu(\s)\pi(\a|\s)$, without the need for real experimental evaluation.
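A minimal sketch of this estimation step, under our own toy assumptions (scalar samples standing in for $[\s^T, \a^T]^T$, a squared-exponential kernel, and plain gradient descent instead of a second-order method), minimizes the objective $J(\hat{\bs{R}})$ with one preference and one absolute feedback term:

```python
import numpy as np
from math import erf, sqrt, pi, exp

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal c.d.f.
phi = lambda z: exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal p.d.f.

# Toy samples and a squared-exponential kernel (our assumptions)
Y = np.array([0.0, 0.5, 1.0])
K = np.exp(-0.5 * (Y[:, None] - Y[None, :])**2 / 0.5**2) + 1e-6 * np.eye(3)
Kinv = np.linalg.inv(K)

prefs = [(2, 0)]      # feedback: y_2 preferred over y_0
abs_fb = {1: 0.3}     # feedback: absolute rating 0.3 for y_1
sig_p, sig_r = 0.1, 0.2

def grad_J(R):
    g = 2.0 * Kinv @ R                       # gradient of the GP prior term
    for i, j in prefs:                       # preference terms: -log Phi(z)
        z = (R[i] - R[j]) / (sqrt(2.0) * sig_p)
        c = phi(z) / max(Phi(z), 1e-300) / (sqrt(2.0) * sig_p)
        g[i] -= c; g[j] += c
    for i, r in abs_fb.items():              # absolute-feedback terms
        g[i] += (R[i] - r) / sig_r**2
    return g

R = np.zeros(3)
for _ in range(5000):                        # gradient descent (J is convex)
    R -= 1e-3 * grad_J(R)
```

After convergence the estimate respects the preference ($\hat{R}_2 > \hat{R}_0$) and is pulled toward the absolute rating at $\y_1$, shrunk by the GP prior; the fitted $\hat{\bs{R}}$ then serves as the generative reward model queried in place of real experiments.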
Such a data-efficient model-based approach has been demonstrated to reduce the required number of experiments up to two orders of magnitude \citep{Kupcsik2013,Daniel_RSS_2014}. \subsection{Contextual Relative Entropy Policy Search} To update the policy $\pi(\a|\s)$, we rely on the contextual extension of Relative Entropy Policy Search \citep{Kupcsik2013,Deisenroth2013}, or C-REPS. The intuition of C-REPS is to maximize the expected reward over the joint context-control parameter distribution, while staying close to the observed data to balance out exploration and experience loss. C-REPS uses an information theoretic approach, where the relative entropy between consecutive parameter distributions is bounded $ \int_{\s,\a} p(\s, \a) \log \frac{p(\s, \a)}{q(\s,\a)}d\s d\a \leq \epsilon , $ where $p(\s,\a)$ and $q(\s, \a)$ represent the updated and the previously used context-parameter distributions. The parameter $\epsilon \in \mathbb{R}^+$ is the upper bound of the relative entropy. The emerging constrained optimization problem can be solved by the Lagrange multiplier method (see e.g. \citep{Kupcsik_AIJ_2015}). The closed form solution for the new distribution is given by $ \pSA \propto \qSA \exp\left((R(\a, \s) - V(\s))/\eta\right). $ Here, $V(\s)$ is a context dependent baseline, while $\eta$ and $\bs{\theta}$ are Lagrangian parameters. The baseline is linear in some context features and it is parametrized by $\bs{\theta}$. To update the policy we use the computed probabilities $\pSA$ as sample weights and perform a maximum likelihood estimation of the policy model parameters. \section{Experiments} \label{sec:expr} For the handover experiment we use the 7-DoF KUKA LBR arm (Figure 3). For the robot hand we use the Robotiq 3-finger hand. The fingers are position controlled, but the maximum grip force can be indirectly adjusted by limiting the finger currents. 
For accurate measurement of external forces and torques, a wrist-mounted force/torque sensor is installed. \begin{figure}[t] \centering \captionsetup{width=.8\linewidth} \includegraphics[width = 0.6\textwidth]{robot-setup2.png} \begin{center} \caption{Robot setup for experiments. We use the 7-DoF KUKA LBR arm with the 3-finger Robotiq hand. We use Kinect to track the human hand motion. } \end{center} \label{fig:experiment} \end{figure} \subsection{Experimental Setup} An experiment is executed as follows. First, a 1.5l water bottle is placed at a fixed location, which the robot is programmed to pick up. Subsequently, the robot moves the bottle to a predefined position. At this point we enable compliant arm control and we use a Kinect sensor (Figure 3) to track the hand of the human. Subsequently, the human moves towards the robot to take the bottle. While approaching the robot, we use the Kinect data to estimate the hand velocity $\s$ of the human, which we assume to be constant during the experiment. We only use data when the human is relatively far (above $1$m) from the robot to avoid occlusion. After the context variable is estimated, the robot sets its parameters by drawing a controller parametrization $\a \sim \pi(\a|\s)$. Subsequently, the robot and the human make physical contact and the handover takes place. Finally, the human evaluates the robot performance (preference or absolute evaluation on a 1-10 scale, where 1 is worst and 10 is best) and walks away such that the next experiment may begin. We presented the pseudocode of our learning algorithm in Table \ref{alg:handover}. As input to the algorithm we have to provide the initial policy $\pi(\a|\s)$, and several other parameters. We use a Gaussian distribution to represent the policy $\pi(\a|\s) = \mathcal{N}(\a|\bs{a} + \bs{A} \s, \bs{\Sigma})$. At the beginning of learning we set $\bs{A}=\bs{0}$, that is, the robot uses the same controller distribution over all possible context values.
During learning all the parameters ($\bs{a},~ \bs{A},~ \bs{\Sigma}$) of the policy will be tuned according to the C-REPS update rule. The initial policy mean $\bs{a}$ and the diagonal elements of the covariance matrix $\bs{\Sigma}$ are set as follows. For the rotational stiffness we set $2.75$ Nm/rad mean and $0.5^2$ variance. For the translational stiffness parameters we chose $275,~450,~275$ N/m in the x, y, and z directions in the hand frame (Fig.~\ref{fig:robothand}). The variances are $50^2, 75^2$, and $50^2$, respectively. For the finger control slope parameter we chose $2.5$ 1/N with a variance of $0.5^2$. This provides a firm grip of the water bottle. The robot will not move the fingers until the force generated by the human hand reaches half the weight of the bottle. With a slope parameter of $0$ the robot exerts a minimal grip force that can still support the bottle. With a slope value above $5$ the robot only releases the bottle if the human can support $1.2\times$ the object weight. Thus, we can avoid dropping the object, even with the initial policy. Finally, for the minimal and maximal trajectory tracking control distances we set means of $200$mm and $600$mm. As variances we chose $50^2$ and $150^2$. The parameters are therefore initialized as $\bs{a} = (2.75,~ 275,~ 450,~275,~2.5,~200,~600)^T$, $\bs{A} = \bs{0}$ and $\bs{\Sigma} = \mbox{diag}(0.5^2,~ 50^2,~75^2,~50^2,~0.5^2,~50^2,~150^2)$. \begin{figure} \centering \captionsetup{width=.8\linewidth} \includegraphics[width = 0.3 \textwidth]{handFrames.pdf} \caption{The robot hand frame orientation. } \label{fig:robothand} \end{figure} For the C-REPS learning algorithm in Table \ref{alg:handover} we chose $\epsilon = 0.75$ and we updated the policy after evaluating $L = 10$ human-robot handover experiments. However, before the first policy update we used $L = 40$ handover experiments, such that we have a reliable estimation of the latent rewards.
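The initial contextual policy described above can be written down directly. The sketch below (illustrative Python, not the experiment code) assembles the stated means and variances and draws controller parameters for an observed hand velocity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Initial contextual policy pi(a|s) = N(a | a0 + A s, Sigma) with the
# hand-tuned values from the text; A = 0 at the start of learning.
# Parameter order: rotational stiffness, translational stiffness x/y/z,
# finger slope, d_min, d_max.
a0 = np.array([2.75, 275.0, 450.0, 275.0, 2.5, 200.0, 600.0])
A = np.zeros((7, 1))
Sigma = np.diag([0.5, 50.0, 75.0, 50.0, 0.5, 50.0, 150.0])**2

def sample_controller(s):
    """Draw controller parameters for an observed hand velocity s [m/s]."""
    mean = a0 + (A @ np.atleast_1d(s)).ravel()
    return rng.multivariate_normal(mean, Sigma)

params = sample_controller(0.6)   # e.g., a 0.6 m/s approach
```

Since $\bs{A} = \bs{0}$ initially, the sampled parameters are independent of the context; C-REPS updates then fill in $\bs{A}$ so that the drawn stiffness and tracking distances adapt to the hand velocity.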
Before each policy update we estimate the latent rewards for all the previously seen experiments $\{\a_i, \s_i, \mathcal{F}_i\}_{i=1}^E$. Here, $E$ represents the total number of observed samples. Note that $E$ is increased by the number of latest experiments $L$ before each policy update. Therefore, $E$ represents how much experimental evaluation, or information, we used to reach a certain level of performance. After estimating the latent rewards we use the resulting generative reward model to evaluate $Q=500$ \emph{artificial} context-control parameter pairs drawn from $\hat{\mu}(\s)\pi(\a|\s)$. We use these artificial samples to update the policy. This way we obtain a highly data-efficient algorithm, similar to the one in \citep{Kupcsik2013}. After the policy is updated, we start a new loop and evaluate $L$ new experiments. We not only use this information to update our dictionary to estimate latent rewards, but also to estimate the performance of the current policy. The performance of the policy is measured by the expected latent reward of the newly evaluated $L$ experiments. We expect the performance measure to increase with the amount of information $E$ and policy updates. After updating the policy $H$ times (Table \ref{alg:handover}) we terminate the learning. \subsection{Results} As the learning algorithm uses randomly sampled data for policy updates and noisy human feedback, the learned policy and its performance may vary. In order to measure the consistency of the learning progress we repeated the complete learning trial several times. A trial means evaluating the learning algorithm starting with the initial policy and with an empty dictionary, $E=0$, but using the same parameters for $L$ and $\epsilon$. We evaluated $5$ learning trials and recorded the expected performance of the robot before each policy update.
The expected learning performance over $5$ trials with 95$\%$ confidence bounds against the amount of real robot experiments $E$ used for policy updates is shown in Figure \ref{fig:learningResults}. We can see that learning indeed improved the performance of the initial policy, which has an expected value of $6.8$. Over the learning trials, we noticed that the human mostly gave absolute feedback for very good or very bad solutions. This is expected, as humans can confidently say if a handover skill feels close to that of a human, or if it does something unnatural (e.g., not releasing the object). By the end of the learning, the expected latent reward rose to the region of $8$. Note that the variance of the learning performance over different trials depends not only on the stochastic learning approach, but also on noisy human feedback. Thus, we can conclude that the learning indeed improved the expected latent reward of the policy, but how did the policy and the experiments change with the learning? \begin{figure} \centering \captionsetup{width=.8\linewidth} \includegraphics[width = 0.4 \textwidth]{handoverLearningResults2.pdf} \caption{The expected latent reward mean and standard deviation over $5$ independent learning trials. Humans may give absolute feedback on a 1 to 10 scale. Initially the latent reward is estimated to be around $6.8$, which rises to around $8$ over the course of learning.} \label{fig:learningResults} \end{figure} \textbf{The learned policy.} We first discuss the mean value $\bs{a}$ of the learned policy and then we show how the policy generalizes to more dynamic tasks. Over several learning trials we observed that a high quality policy provides a lower rotational stiffness compared to the hand-tuned initial policy. On expectation, the learned rotational stiffness is $1.29$ Nm/rad, which is lower than the initial $2.75$ Nm/rad. This helped the robot to quickly orient the object with the human hand upon physical contact.
We observed similar behavior in the translational stiffness values in the $x$ and $z$ directions (see Figure \ref{fig:robothand}). The learned values were almost $100$ N/m lower compared to the initial values. This helps the robot to become more compliant in horizontal motions. Interestingly, the learned stiffness in the $y$ direction became slightly higher ($474$ N/m) compared to its initial value. During physical interaction the forces acting along the y-axis are mostly responsible for supporting the weight of the object. With a higher stiffness value, interaction times became lower, which also helped avoid situations where the robot did not release the object. The learned slope parameter of the finger controller became more conservative ($3.63$ 1/N). This prevented any finger movement until the human force reached at least $0.8\times$ the weight of the object. Finally, the learned minimal and maximal tracking distances on expectation became $269$ and $541$mm, respectively. \begin{figure}[t] \centering \captionsetup{width=.8\linewidth} \includegraphics[width = 0.9\textwidth]{polpars.pdf} \caption{The initial and the learned policy parameters against the context value. Top row, from left to right: the rotational stiffness, translational stiffness in the x-y-z directions. Bottom row, from left to right: finger control slope, minimal and maximal visual hand tracking distance.} \label{fig:parameters} \end{figure} The policy generalizes the controller parametrization with mean $\bs{a} + \bs{A}\s$. We discussed above how $\bs{a}$ changed on expectation after the learning. We now turn our attention to $\bs{A}$ and show how generalization to more dynamic tasks happens. We typically executed experiments with hand speeds between $0.1$ and $1$m/s. We observed that on expectation the rotational stiffness values were lowered for more dynamic tasks ($\s =1$m/s) by $0.31$ Nm/rad. This helped the robot to orient with the human hand more quickly.
Interestingly, we observed that the stiffness in the x direction is slightly increased, by $56$ N/m. However, the stiffness in the $y$ direction is dramatically decreased, by $281$ N/m. This significantly reduces the forces acting on the human during faster physical interaction. The stiffness in the $z$ direction is decreased by $10$ N/m, which is just a minor change. Interestingly, the slope parameter of the robot finger controller increases by $0.6$ 1/N, which leads to an even more conservative finger control. Finally, we observed that on expectation the minimal hand tracking distance is increased by $46$mm and the maximal distance remains almost the same, with an additional $9$mm. A visual representation of the learned parameters against the context value is shown in Figure \ref{fig:parameters}. In the following, we will analyze some static and dynamic handover experiments to give more insight into why humans prefer the learned policy as opposed to the initial hand-tuned controller. \begin{figure} \centering \subfigure[]{ \captionsetup{width=.8\linewidth} \label{fig:static} \includegraphics[width = .4\textwidth]{staticHandoverNew.pdf} }\quad \subfigure[]{ \label{fig:dynamic} \includegraphics[width = .4\textwidth]{dyamicHandoverNew.pdf} } \captionsetup{width=.8\linewidth} \caption{\textbf{(a)} Two examples of experimental results of the forces acting between the human and the robot during physical interaction. The forces are plotted starting right before the physical interaction until the handover is finished. \textbf{(b)} Two examples of experimental results in dynamic handover situations. The forces are plotted starting right before the physical interaction until the handover is finished.} \end{figure} \textbf{Human preferences for static handovers.} For static handover tasks we observed that a robust and quick finger control was always preferred and highly rated. In Figure \ref{fig:static} we can see the forces and jerks of two typical static handover solutions.
The weight of the bottle was around $20$N. We can see that the preferred solution always maintained a low jerk, and forces remained limited. Moreover, a successful handover happens relatively fast. In our experiments we observed that a high quality solution happens within $0.6$ seconds and no faster than $0.4$ seconds. Similar results have been reported in human-human object transfer experiments \citep{ChaPar13}. Typically disliked parameterizations have low translational stiffness and a stiff finger control, resulting in the robot not releasing the object quickly enough, which is considered a failure. These experiments typically lasted for $1$ to $2$ seconds until the bottle was released. \textbf{Human preferences for dynamic handovers.} In dynamic handover situations contact forces and jerks were significantly higher compared to the static case (Figure \ref{fig:dynamic}). A typical preferred dynamic handover controller has lower rotational and translational stiffness, and a firmer finger controller. In our experiments the human always approached from one direction while taking the bottle from the robot; in the robot hand frame this was the x-direction. As we can see, a preferred controller achieves a significantly lower contact force and jerk in this direction. We noticed that the physical contact time in a dynamic handover scenario is around $0.3-0.6$ sec. Based on the latent rewards, we noticed that there is a strong preference towards faster handovers, as opposed to the static case, where we did not observe such a strong correlation for handovers within $0.6$ seconds. Interestingly, we noticed that humans preferred stiffer finger controllers in dynamic handovers. We assume that this helps a robust transfer of the object from giver to taker.
In a dynamic handover situation vision might not provide enough feedback about the handover situation during physical contact, and thus an excess of grip force is necessary to ensure a robust transfer and to compensate for inaccurate position control. Video footage of some typical experiments before and after the learning is available at www.youtube.com/watch?v=2OAnyfph3bQ. By analyzing these experiments we can see that the learned policy indeed provides a controller parametrization that decreases handover time and reduces the forces and jerks acting on the human over a wide variety of dynamic situations. While the initial policy provides a reasonable performance in less dynamic experiments, learning and generalization significantly improve the performance of the policy. Based on our observations, for static handovers a fast and smooth finger control was necessary for success, while in dynamic handover situations higher compliance and a firm finger control were preferred. \section{Discussion} This paper presents an algorithm for learning dynamic robot-to-human object handover from human feedback. The algorithm learns a latent reward function from both absolute and preference feedback, and integrates reward learning with contextual policy search. Experiments show that the robot adapts to the dynamics of human motion and learns to hand over a water bottle successfully, even in highly dynamic situations. The current work has several limitations. First, it is evaluated on a single object and a small number of people. We plan to generalize the learning algorithm to adapt over human preferences and object characteristics. While contextual policy search works well for adapting over handover dynamics, object characteristics exhibit much greater variability and may pose a greater challenge. Second, our handover policy does not consider the human response during the handover or its change over time.
We want to model key features of the human response and exploit them for effective and fluid handover. For both, combining model-free learning and model-based planning seems a fruitful direction for exploration. \bigskip \noindent\textbf{Acknowledgements.} This research was supported in part by an A*STAR Industrial Robotics Program grant (R-252-506-001-305) and a SMART Phase-2 Pilot grant (R-252-000-571-592). \bibliographystyle{plain}
\section{Introduction} Data annotation is a labor-intensive and expensive process. Because humans can make mistakes, stringent quality-control procedures such as double-confirmation and cross-checking are required to reduce annotation errors. Modern deep neural networks (DNNs) require large training datasets to determine hundreds of thousands of model parameters. Because accurate annotation of large datasets requires a large amount of effort and expense, various cost-effective methods such as crowdsourcing and web annotation have been adopted as alternatives. As these methods inevitably introduce labeling errors, it is essential to develop a robust training strategy to minimize the impact of mislabeled examples on test performance. Unfortunately, standard DNN training methods have not been successful in handling such labeling errors because DNNs have a large capacity to memorize noisy labels \cite{arpit2017closer,zhang2016understanding}. Thus, numerous studies have attempted to reduce the influence of noisy labels on DNN training. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figure/overall7.pdf} \caption{{\bf Overall structure of the proposed SRT method:} Temporal self-ensemble models generated using the cyclical learning rate are used to filter the noisy labels. The filtering criterion is based on the cross-entropy loss and the Jensen-Shannon divergence between multi-view predictions evaluated with the previously generated self-ensemble models. The training is performed on the filtered samples using the loss function derived from the current model.} \label{fig:overall_framwork} \end{figure*} One approach to robust training against noisy labels is to regularize the model by weighting the loss terms for samples that possess noisy labels \cite{patrini2017fcorrection,ren2018learning,liu2015classification,wang2017multiclass,reed2014bootstrap}.
Because this approach uses noisy labels throughout the training phase, it is difficult to prevent memorization of noisy labels by DNNs. Therefore, many recent studies have proposed strategies for filtering noisy labels during training \cite{wei2020jocor,jiang2018mentornet,malach2017decoupling,bo2018coteaching,yu2019coteaching+,song2019selfie,lee2020ltec,shen2019itlm,kim2019nlnl,yao2021josrc}. The objective of this approach is to identify the samples whose labels appear to be correct and use them for training. One promising solution is to consider the samples associated with small losses as clean samples. However, if we use the loss function used to train the network for filtering, some noisy labels memorized by the DNNs may not be filtered in the subsequent training phase. We call this the {\it self-bias issue} or {\it self-confirmation issue} because model training and the filtering of noisy labels influence each other to degrade performance. To overcome this self-bias issue, Co-teaching \cite{bo2018coteaching} trains two independent networks simultaneously and updates the weights of one network with the small-loss samples found by the other network. Initialized with different weights, the two networks can be co-trained in a decoupled manner, thereby reducing the effect of memorization. Decoupling \cite{malach2017decoupling} and Co-teaching+ \cite{yu2019coteaching+} further strengthen such a decoupling effect by updating their weights with the training samples whose predictions from the two networks do not match. The benefit of deploying multiple networks for robust training is also justified by {\it ensemble learning}. A model ensemble captures the distribution of model parameters given a training dataset. Hence, the distribution of predictions from the model ensemble can indicate whether a particular sample has a noisy label. This is analogous to making a better decision by listening to the opinions of different experts.
Because the experts' opinions are more likely to agree on clean labels than on noisy labels, we can judge whether labels are noisy depending on their consensus. SELF \cite{nguyen2020self} formed a model ensemble by taking a moving average of the weight checkpoints obtained over stochastic gradient descent (SGD) iterations. LEC \cite{lee2020ltec} produced the ensemble networks by injecting perturbations into the model parameters. JoCoR \cite{wei2020jocor} incorporated consensus measures among jointly trained networks into the loss function and used small-loss samples for training. O2U-Net \cite{huang2019o2u} performed filtering based on the loss averaged over multiple cycles of a cyclical learning rate. Recently, Jo-SRC \cite{yao2021josrc} proposed the strategy of relabeling noisy samples using an ensemble model generated by an exponential moving average of the network weights. In this paper, we propose an effective self-training scheme that filters incorrectly labeled samples during training. The proposed method, referred to as {\it self-ensemble-based robust training} (SRT), generates temporal self-ensemble networks while training a single network with stochastic gradient optimization. We adopt the learning rate scheduling used to generate self-ensemble models for {\it stochastic weight averaging} (SWA) \cite{izmailov2018averaging}, which was used to find an improved inference model through ensemble averaging. We build a sample acquisition function based on these self-ensembles to evaluate the likelihood that a given sample is incorrectly labeled. The proposed acquisition function comprises two terms. First, the cross-entropy loss term is obtained using the temporal self-ensemble networks generated during the latest $P$ learning rate (LR) cycles. The second term, a contrastive loss, indicates the consensus between the {\it multi-view predictions} produced by feeding differently transformed input samples into the temporal self-ensemble networks.
In addition, the multi-view predictions obtained for the model under training are used to enforce the consistency of model predictions through the contrastive loss. Fig. \ref{fig:overall_framwork} illustrates the proposed SRT method. We conduct experiments on several widely used public datasets. Our results demonstrate that the filtering criterion derived from both the temporal self-ensemble and the multi-view predictions can offer significant performance gains over the baselines, and the proposed method achieves state-of-the-art performance for some categories. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figure/js.pdf} \caption{JS loss vs. epoch, averaged over clean samples and noisy samples for each epoch, evaluated on the CIFAR-10 dataset.} \label{fig:js} \end{figure*} The contributions of our work are summarized as follows. \begin{itemize} \item We propose a simple yet effective learning method that is robust against labeling errors. The proposed SRT method generates the temporal self-ensemble networks by adopting the learning rate scheduling used for SWA. Without training multiple networks, our approach effectively mitigates the self-bias problem that arises when filtering out noisy labels. \item We construct a new acquisition criterion for detecting accurately labeled samples using the temporal self-ensemble networks. We transform the input in different ways, and measure the consistency between the multi-view predictions generated by these transformed inputs for each self-ensemble network. While the temporal self-ensemble captures the posterior under model uncertainty, the multi-view predictions capture that under data uncertainty. Previous works have not considered both of these uncertainties together to robustly train a model with noisy labels, and this is how our method yields superior performance compared to the existing methods. \item A closely related work, O2U-Net \cite{huang2019o2u}, also employed the cyclical LR for robust training.
O2U-Net generates the loss trajectory over epochs using the cyclical LR and ranks the samples based on the averaged loss. It filters the noisy samples only after the cyclical training step is complete. This is in contrast with our SRT, which filters the noisy samples while training the model. While O2U-Net employs the cyclical LR to better differentiate noisy samples from clean samples, SRT uses the cyclical LR to produce temporal ensembles. After cleaning up the noisy samples, O2U-Net needs to go through a re-training phase, which requires a much longer training time. Our experimental results confirm that the online filtering approach of SRT achieves better performance than the post-filtering approach of O2U-Net when the self-confirmation issue is well controlled. \item The source code will be publicly available. \end{itemize} \section{Related Work} To deal with the problem of noisy labels, there have been attempts to correct the loss function such that the impact of noisy labels is minimized. Several works tried to estimate the noise transition matrix and correct the training loss accordingly \cite{patrini2017fcorrection,natarajan2013learning,xia2019anchor}. However, the performance of these methods is sensitive to the quality of the noise transition matrix estimate, in particular on datasets with a large number of classes. Other methods proposed a noise-tolerant loss function which penalizes the loss terms associated with misclassified noisy samples \cite{ren2018learning,liu2015classification,wang2017multiclass,reed2014bootstrap}. Recent works have actively studied sample selection strategies which filter the mislabeled data according to the loss value of each sample. MentorNet \cite{jiang2018mentornet} selects the small-loss data as correctly labeled data.
This is based on the observation that memorization occurs progressively for noisy labels after the DNN learns easy patterns during the initial training phase \cite{arpit2017closer,liu2020earlylearning}. However, self-paced learning \cite{jiang2018mentornet}, in which the network itself selects the small-loss data, causes {\it{self-bias}}. Due to the memorization effect, the errors made in sample selection affect the next phase of training without the ability to correct them \cite{bo2018coteaching}. To overcome this issue, many studies have used the co-training scheme, which employs two independent networks trained simultaneously to find the noisy labels \cite{bo2018coteaching,yu2019coteaching+,wei2020jocor,li2020dividemix}. The co-training scheme uses the samples selected by one network to train the other network to reduce error propagation \cite{bo2018coteaching}. Based on the co-training scheme, Co-teaching+ \cite{yu2019coteaching+} adopts the disagreement strategy to keep the two networks decoupled. On the other hand, JoCoR \cite{wei2020jocor} adopts the agreement strategy, which trains on the samples whose predictions from both networks are similar. The proposed method, SRT, addresses the self-bias issue without a co-training scheme. This is possible by using the temporal self-ensemble generated by SGD optimization and by utilizing the contrastive loss derived from multi-view predictions to identify incorrectly labeled samples. Consequently, the proposed SRT outperforms the existing state-of-the-art methods. \begin{algorithm} \caption{Summary of SRT algorithm} \label{algorithm:SRT2} \begin{algorithmic}[1] \REQUIRE Training dataset $\mathcal{D}$, the total number of cycles $K$, learning rate cycle $C$ (in epochs), the number of the self-ensemble networks $P'$, the memory $\mathcal{Z}$.
\\ \STATE Initialize $f(x,\Theta_0)$\\ \FOR{cycle $k = 1, ..., K$} \FOR{epoch $t = C*(k-1)+1,..., C*k$} \FOR{iter $j = 1,..., \frac{N}{|\mathcal{B}|}$} \STATE Update the learning rate $\epsilon(t, j)$. \STATE Sample a mini-batch ${\mathcal{B}}$ from the dataset $\mathcal{D}$ \STATE Calculate $\mathcal{L}_{filt}(x_i, y_i) $ with $P = \min\{k-1, P'\}$, $\forall (x_i, y_i)\in\mathcal{B}$ \STATE Filter the noisy labels \\ ${\mathcal{B}}'$ = $\underset{\tilde{\mathcal{B}}:|\tilde{\mathcal{B}}|\geq{r(t)|{{\mathcal{B}}}}|}{\arg\min}$ $\sum_{(x_i,y_i)\in \tilde{\mathcal{B}}} \mathcal{L}_{filt}(x_i, y_i)$. \\ \STATE Update the model weights using the training samples in ${\mathcal{B}}'$. \ENDFOR \ENDFOR \STATE Store $f(x; \Theta_k)$ in the memory $\mathcal{Z}$\\ \ENDFOR \RETURN Trained model \\ \end{algorithmic} \end{algorithm} \section{Proposed Robust Training Method} In this section, we present the details of the proposed SRT method. \subsection{Problem Description} Consider an $M$-class classification task. We train the DNN model using a training dataset of size $N$, $\mathcal{D} = \{x_i, y_i\}^{N}_{i=1}$, where $x_i$ is the $i$-th training sample, and $y_i = [y_{i}^{1},...,y_{i}^{M}]^{T}\in\{0, 1\}^{M}$ is the $i$-th label encoded by a one-hot vector. The DNN model is expressed as $f(x;\Theta)=[f^{1}(x;\Theta), ..., f^{M}(x;\Theta)]^T$, where $x$ is the input data and $\Theta$ denotes the model parameters. Assuming that some of the samples in $\mathcal{D}$ are corrupted by labeling errors, we aim to filter them in the training phase. Specifically, we intend to design a suitable criterion for identifying correctly labeled samples in the mini-batch, $\mathcal{B} (\subset \mathcal{D})$. \subsection{Proposed SRT Method } The structure of the SRT is depicted in Fig. \ref{fig:overall_framwork}. In this subsection, we will explain the main components of the SRT in detail. 
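To make the procedure concrete, a minimal Python sketch of the cyclical learning rate and of the per-batch filtering step of Algorithm \ref{algorithm:SRT2} is given below; `acquisition_values` stands in for the per-sample filtering loss $\mathcal{L}_{filt}$ defined in the following subsections, and the default constants mirror the CIFAR setup used in the experiments.

```python
import math
import numpy as np

def cyclical_lr(t, j, eps1=0.001, eps2=0.0, C=20, batch_size=128, N=50000):
    """Cosine-cyclical LR; t is the 1-based epoch index, j the 1-based iteration."""
    s = (t - 1) % C + (batch_size / N) * (j - 1)
    return eps2 + 0.5 * (eps1 - eps2) * (1.0 + math.cos(math.pi * s / C))

def filter_batch(acquisition_values, r):
    """Return indices of the fraction r of samples with the smallest
    acquisition values; these are kept as presumably clean samples."""
    keep = max(1, int(round(r * len(acquisition_values))))
    return np.argsort(acquisition_values)[:keep]
```

At the start of every cycle the LR equals $\epsilon_1$ and decays towards $\epsilon_2$ at the end of the cycle, where a self-ensemble snapshot is stored.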
\subsubsection{Generation of temporal self-ensemble} \input{table/main_results} \input{table/clothing1M_result} \input{table/o2u_new} We generate the temporal self-ensemble while training a single neural network $f(x;\Theta)$ using SGD. We adopt the learning rate scheduling proposed in \cite{loshchilov2016sgdr} and capture the network parameters at intermediate check-points of the SGD iterations. We start with the initial learning rate during a warm-up period of $C$ epochs. Then, we periodically increase the LR rapidly and decrease it gradually. The LR change according to the number of epochs is illustrated in Fig.~\ref{fig:overall_framwork}. The LR $\epsilon(t, j)$, specified by the maximum LR $\epsilon_1$, minimum LR $\epsilon_2$, and cycle $C$ in the number of epochs, is expressed as \cite{loshchilov2016sgdr} \begin{align} \epsilon(t, j) &= \epsilon_2 + \frac{1}{2}\left(\epsilon_1 - \epsilon_2\right)\left(1+ \cos\left(\frac{s(t, j)}{C}\pi \right)\right), \label{eq:cyclical learning rate} \end{align} where \begin{align} s(t, j) &= \mod(t-1, C) + \frac{|\mathcal{B}|}{N}(j-1), \end{align} and $t$ and $j$ are the epoch and iteration indices, respectively. Thus, whenever $\epsilon(t, j)$ reaches the minimum value $\epsilon_2$, we capture the weights of a temporal self-ensemble network. By storing the weights in each LR cycle, the proposed method generates a series of temporal self-ensemble networks $f(x; \Theta_1), f(x; \Theta_2), ..., f(x; \Theta_{k})$, where $k$ denotes the LR cycle index. \subsubsection{Generation of multi-view predictions} Multi-view predictions are generated by applying data transformations $g(x)$ to the input data $x$ in the mini-batch $\mathcal{B}$. Two different transformed samples $g(x)$ and $g'(x)$ are generated, where $g(x)$ denotes an augmented view of the sample $x$ and $g'(x)$ denotes another view of the same sample $x$. Note that $g(x)$ and $g'(x)$ are determined by a parameter that defines the transformation function.
We provide examples of such transformation functions in subsection 4.1. These transformed samples are input to the self-ensemble model $f(x;\Theta_k)$ to produce the multi-view predictions $f(g(x); \Theta_k)$ and $f(g'(x); \Theta_k)$. \subsubsection{Filtering the samples with noisy labels} The proposed SRT method uses the {\it sample acquisition function} to identify a sample with a noisy label in the mini-batch. The role of the acquisition function is to quantify the likelihood of an input sample being mislabeled. We construct the sample acquisition function using the latest $P$ temporal self-ensemble networks $f(x; \Theta_{k-P}), ..., f(x; \Theta_{k-1})$ obtained up to the previous LR cycles. The acquisition function for the given training sample $(x_i, y_i)$ is expressed as \begin{align} & \mathcal{L}_{filt}(x_i,y_i) = \sum_{p=1}^P \lambda \mathcal{L}_{CE}(f(x_i;\Theta_{k-p}), y_i) \nonumber \\ & + (1 - \lambda) \mathcal{L}_{JS}(f(g(x_i);\Theta_{k-p}),f(g'(x_i);\Theta_{k-p})). \label{eq:filtering objective} \end{align} Note that the individual terms of the summation in (\ref{eq:filtering objective}) are associated with the latest $P$ self-ensemble networks. We do not include the model under training, to avoid the self-bias issue. Each term consists of two components: 1) the cross-entropy (CE) loss term and 2) the Jensen-Shannon (JS) divergence loss term. $\lambda$ is a parameter that balances the two terms. The CE loss determines how confident the self-ensemble model is regarding the relationship between the input $x_i$ and the label $y_i$ \begin{align} \mathcal{L}_{CE}(f(x_i;\Theta_{k-p}), y_i) = - \sum_{m=1}^{M} y_{i}^{m}\log{f^m(x_i;\Theta_{k-p})}. \label{eq:cross-entropy objective} \end{align} The CE loss term is likely to be higher for incorrect labels.
The second term measures the consensus of the multi-view predictions $g(x_i)$ and $g'(x_i)$ through the JS divergence \begin{align} &\mathcal{L}_{JS}(f(g(x_i);\Theta_{k-p}), f(g'(x_i);\Theta_{k-p})) \nonumber \\&= D_{KL}(f(g(x_i);\Theta_{k-p})|f(g'(x_i); \Theta_{k-p})) \nonumber \\ & \quad + D_{KL}(f(g'(x_i);\Theta_{k-p})|f(g(x_i);\Theta_{k-p})), \label{eq:Jenson-shannon objective} \end{align} where $ D_{KL}(p|p') = \sum_{m=1}^{M} p^m \log{ \left(\frac{p^m }{p'^m }\right)}$ and $p=[p^1,...,p^M]^{T}$. We use the simplified form of the JS divergence proposed in \cite{wei2020jocor} to reduce the computation. The multi-view predictions are more likely to fluctuate for the mislabeled samples than for the correctly labeled samples \cite{lee2020ltec}. Thus, the JS divergence loss term tends to be large for the mislabeled samples and helps detect these samples. Fig. \ref{fig:js} shows the values of the JS loss averaged separately over the noisy, clean, and total samples for each epoch on the CIFAR-10 dataset. The JS loss value tends to be larger for the noisy samples than for the clean samples, and the gap between the two increases with the epochs. The proposed method computes the acquisition function for all samples in the mini-batch and uses only the $r(t)$ \% of samples with the smallest acquisition values for training, where $t$ is the epoch index. Further, we assume that $\eta$ \% of the dataset is corrupted by labeling errors, where the noise rate $\eta$ can be determined from known statistics or empirical estimation. At the beginning of training, $r(t)$ is set slightly larger than $(100-\eta)$ \% to avoid filtering out clean samples and then gradually reduces to $(100-\eta)$ \% as the model improves with training epochs \cite{bo2018coteaching}. \subsubsection{Training with filtered samples} We consider $k$ as the current LR cycle index and $\mathcal{B}'$ as the set of filtered samples in the mini-batch.
The SRT method trains the model by minimizing the following loss function \begin{align} &\mathcal{L}_{train} = \frac{1}{|\mathcal{B'}|}\sum_{(x_i, y_i) \in \mathcal{B}'} \lambda \mathcal{L}_{CE}(f(x_i;\Theta), y_i) \nonumber \\ & \quad \quad \quad + (1 - \lambda) \mathcal{L}_{JS}(f(g(x_i);\Theta), f(g'(x_i);\Theta)), \label{eq:training objective} \end{align} where $f(x;\Theta)$ denotes the model under training. As in the filtering criterion, the CE loss and the JS divergence loss terms are combined, here to train the model. The JS divergence loss term in the training loss corresponds to the {\it consistency regularization} used for semi-supervised learning \cite{laine2017temporal}. While the CE loss is used to learn from the target label, the JS divergence loss regularizes the model to make consistent predictions for perturbed inputs of the same data. The combination of these two loss terms enables an improvement in classification performance. A back-propagation algorithm is executed to minimize the loss function $\mathcal{L}_{train}$. \subsubsection{Summary of SRT algorithm} Algorithm \ref{algorithm:SRT2} summarizes the detailed procedure of the proposed SRT method. \section{Experiments} In this section, we evaluate the performance of the proposed SRT method. \subsection{Experimental Setup} \subsubsection{Datasets} We used four popular public datasets, MNIST \cite{lecun1998mnist}, CIFAR-10, CIFAR-100 \cite{krizhevsky2009learning}, and Clothing1M \cite{xiao2015clothing1m}, for the evaluation. For MNIST, CIFAR-10, and CIFAR-100, both symmetric and asymmetric label noise was synthetically generated. We followed the recommendations from previous studies \cite{bo2018coteaching, wei2020jocor} to generate synthetic label noise. Clothing1M \cite{xiao2015clothing1m} is a large-scale dataset containing real-world noisy labels. Its annotation was conducted by crawling online websites.
\subsubsection{Evaluation} The performance was evaluated by measuring the test accuracy achieved by the models for each dataset. The test accuracy was averaged over the last 10 epochs. The mean and standard deviation of the test accuracy were obtained by repeating the experiments five times with different random seeds. To evaluate the accuracy of the filtering methods, we also measured the {\it label precision}, which is defined as the average ratio of the clean samples to the selected samples in the mini-batch \cite{bo2018coteaching}. We compared the proposed SRT method with the existing methods including Standard (vanilla supervised learning with noisy labels), Decoupling \cite{malach2017decoupling}, Co-teaching \cite{bo2018coteaching}, Co-teaching+ \cite{yu2019coteaching+}, JoCoR \cite{wei2020jocor}, and Jo-SRC \cite{yao2021josrc}. We reproduced the performance of these methods except for Jo-SRC. The performance of Jo-SRC was taken from the original paper\footnote{In our experiment, we could not reproduce the performance of Jo-SRC.}. We also compared SRT with O2U-Net, which used the cyclical LR for a different purpose. Because the training configurations of O2U-Net were provided for a different network architecture (e.g., a nine-layer convolutional neural network), we present this comparison separately in Table \ref{table:o2u}. Note that we compared SRT with the ensemble-based robust training methods and did not include the semi-supervised learning-based methods \cite{li2020dividemix, song2019selfie, nguyen2020self, yao2021josrc, jiang2020synthetic, Kim_2021_CVPR, zhou2021robust} for comparison, since they use unlabeled data samples for training. \subsubsection{Experimental setup} The experimental setup of previous studies was employed \cite{wei2020jocor}. For CIFAR-10 and CIFAR-100, we used a seven-layer convolutional neural network with a batch size of 128. For MNIST, we used a two-layer perceptron with a batch size of 128.
For Clothing1M, we used ResNet-18 \cite{he2016deep} with a batch size of 64. The parameter $\lambda$ in (\ref{eq:training objective}) was set to 0.9 for MNIST and was set to 0.7 for CIFAR-10, CIFAR-100, and Clothing1M. For all experiments, we used the Adam optimizer with a momentum of 0.9 and set $P=2$ for sample selection. Except for Clothing1M, we set $\epsilon_1 = 0.001$, $\epsilon_2=0$, and $C=20$ epochs for cyclical LR scheduling in (\ref{eq:cyclical learning rate}). For Clothing1M, we set $\epsilon_1 = 0.0005$, $\epsilon_2=0$, and $C=10$ epochs. Following Co-teaching \cite{bo2018coteaching}, we used $r(t) = 1 - \frac{\eta}{100} \cdot \min \{\frac{t}{T_k}, 1\}$, where $t$ is the epoch index and $T_k$ is the hyper-parameter. We set $T_k=10$ for MNIST, $T_k=15$ for CIFAR-10 and CIFAR-100, and $T_k=5$ for Clothing1M. Each $T_k$ was empirically determined. We conducted all experiments using a TITAN XP GPU. Data transformations used to generate multi-view predictions include random translation, horizontal flipping, random cropping, and normalization \cite{shorten2019survey}. We applied a different set of data transformations for each dataset. For CIFAR-10 and CIFAR-100, we used a combination of random translation by up to four pixels, horizontal flipping, and normalization \cite{he2016deep}. For MNIST, we used random translation by up to one pixel followed by normalization \cite{visin2015renet}. For Clothing1M, we performed horizontal flipping, resizing to 256$\times$256, and random cropping to 224$\times$224, followed by normalization \cite{wei2020jocor}. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figure/test_accuracy.pdf} \caption{Test accuracy versus epoch index evaluated on MNIST, CIFAR-10 and CIFAR-100 datasets.} \label{fig:test_accuracy} \end{figure*} \subsection{Performance Comparison} Table \ref{table:main results} presents the performance comparison of several robust training methods evaluated on MNIST, CIFAR-10, and CIFAR-100. 
We consider four combinations of different noise types and ratios, i.e., symmetric-20\%, symmetric-50\%, symmetric-80\%, and asymmetric-40\%. The proposed SRT consistently achieves a better performance than the existing robust training methods for all setups considered. The performance gain of SRT increases as the noise ratio grows. This shows that SRT is particularly better at coping with severe labeling errors than the existing methods. On the MNIST dataset, the SRT method outperforms the second best, JoCoR, by 0.45\% in the symmetric-20\% category. The performance gain of SRT goes up to 5.56\% for the more difficult symmetric-80\% setup. The SRT exhibits higher performance gains on the CIFAR-10 and CIFAR-100 datasets. In the symmetric-80\% category, SRT achieves up to 7.37\% and 15.31\% performance gains over JoCoR on CIFAR-10 and CIFAR-100, respectively. Furthermore, SRT beats the current state-of-the-art, Jo-SRC, by up to 4.42\% and 4.35\% under the same noise settings. Table \ref{table:Clothing1M} presents the performance evaluated on Clothing1M, the real-world dataset. We followed the evaluation protocol presented in \cite{wei2020jocor}. {\it Best} indicates the test accuracy achieved when the best validation accuracy is observed, and {\it last} indicates the test accuracy obtained after the training is complete. SRT achieves the best performance among the candidate methods. In particular, SRT outperforms the second best, Jo-SRC, by 0.70\% in {\it best} accuracy. In Table \ref{table:o2u}, we also compare the performance of SRT and O2U-Net. We used a nine-layer convolutional neural network to match the configurations used in O2U-Net. We observe that SRT significantly outperforms O2U-Net on both the CIFAR-10 and CIFAR-100 datasets. Note that the performance gap increases with the noise ratio, which confirms that SRT is particularly strong in harsh noisy-label scenarios.
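The small-loss keep ratio $r(t)$ from the experimental setup above is simple enough to state in code. A minimal sketch follows; the function name is ours, and $\eta$ denotes the noise ratio in percent:

```python
def keep_ratio(t, T_k, eta):
    """r(t) = 1 - (eta/100) * min(t/T_k, 1): the fraction of small-loss
    samples kept at epoch t. It decays linearly from 1 during the first
    T_k epochs and then stays at 1 - eta/100."""
    return 1.0 - (eta / 100.0) * min(t / T_k, 1.0)
```

For example, with $\eta = 50$ and $T_k = 15$ (the CIFAR setting), the keep ratio decays from 1.0 to 0.5 over the first 15 epochs and remains there, so the filter discards roughly as many samples as the assumed noise ratio once the warm-up ends.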
\input{table/cifar10_ablation_50}
\input{table/js_loss_ablation}
\subsection{Performance Analysis} In this subsection, we investigate the behavior of the SRT over training epochs. We measure the intermediate test accuracy of several algorithms after each training epoch. Fig. \ref{fig:test_accuracy} depicts the variation of the test accuracy over epochs. {\it Standard} is the baseline without any noisy label filtering. During the initial training phase, the test accuracy of {\it Standard} rapidly increases; however, it drastically decays at a certain point as the incorrect labels are memorized by the model. Therefore, an effective training method that can alleviate this memorization effect is necessary. The robust training methods successfully compensate for the performance degradation exhibited by {\it Standard}. Nevertheless, Co-teaching, Decoupling, Co-teaching+, and JoCoR exhibit inconsistent performances in that none of them consistently outperforms the rest. While JoCoR achieves the best performance for the symmetric-20\% and symmetric-50\% settings, it falls behind the other methods in harder settings. In contrast, the SRT method outperforms the other methods for all setups considered. In most cases, the test accuracy of the SRT does not decay over the entire training run and maintains a high level. The performance gain of the SRT method increases for more difficult noise settings, which is consistent with the results in Table \ref{table:main results}. The slight fluctuations in the test accuracy observed for SRT appear to be caused by the cyclical LR scheduling. \input{table/cifar10_selection_sym80} \subsection{Ablation Study} In this section, we conduct an ablation study to analyze the contribution of the proposed concepts behind the SRT method. Table \ref{table:cifar10_ablation} presents the test accuracy obtained by adding each idea to the baseline for symmetric-50\% noise on the CIFAR-10 dataset.
{\it Temporal Ensemble} indicates the temporal self-ensemble, and {\it Multi-view} indicates the multi-view prediction. The baseline filters noisy labels using the small-loss criterion. Since {\it Multi-view} requires a data augmentation strategy, we also present the performance of the baseline with data augmentation enabled. {\it Temporal Ensemble} improves the mean accuracy by 16.54\% over the baseline. {\it Multi-view} yields a 16.62\% improvement. With both {\it Temporal Ensemble} and {\it Multi-view} enabled, the overall performance gain goes up to 18.56\%. Since the JS divergence term is also used for training, it is worth examining its effect. Table \ref{table:js} shows how much the classification accuracy increases by adding the JS divergence loss to the CE loss. We see that the JS divergence loss provides a significant improvement in classification accuracy for all settings considered. In particular, the JS divergence loss achieves a 1.88\% performance improvement for symmetric-50\% noise on CIFAR-10. Next, we investigate how the performance varies depending on the way the sample acquisition function is constructed from the temporal self-ensemble. Table \ref{table:cifar10_selection_sym80} presents the test accuracy achieved when different combinations of the ensemble networks $f(x;\Theta_{k-2})$ and $f(x;\Theta_{k-1})$, and the current model $f(x;\Theta_{k})$, are used for the acquisition function. For a fair comparison, the configurations of cyclical LR and multi-view prediction are set identically for all cases. We consider the symmetric-80\% noise setting on the CIFAR-10 dataset. Method (a) is the baseline that filters noisy labels based on small-loss samples. Note that method (c) corresponds to SRT in that it uses the latest two self-ensemble networks. We observe that method (a) suffers from the self-bias issue.
Because methods (b) and (c) use temporal ensemble networks to filter noisy labels, their performance improves over method (a). We also note that method (c), which uses two self-ensemble networks, is better than method (b), which uses a single network. Since method (d) includes the current model $f(x;\Theta_{k})$ in the acquisition function, it might suffer from the self-bias issue. Table \ref{table:cifar10_selection_sym80} shows that method (d) performs worse than method (c), i.e., SRT, in terms of mean accuracy. \section{Conclusions} In this paper, we proposed a simple yet effective method for robustly training a DNN model in the presence of mislabeled data samples. The proposed SRT method filtered out mislabeled samples based on new acquisition criteria derived from temporal self-ensembles. These self-ensemble networks were obtained by periodically sampling the model weights over the SGD weight trajectory during training. The cyclical LR scheduling, which is widely used for SWA \cite{izmailov2018averaging}, was adopted to generate better self-ensemble networks. Furthermore, we added an additional acquisition metric that measures the inconsistency between the multi-view predictions. These predictions were obtained by feeding the transformed inputs into each self-ensemble. Combining the above two components, the SRT filtered out the mislabeled samples and trained the model using the filtered data samples. The experiments conducted on widely used public datasets demonstrated that the proposed SRT method offers significant performance gains over the existing methods.
\section{Introduction} In the past decades, significant progress has been achieved in a variety of fields of quantum science and technology, including quantum computation \cite{DiVincenzo255}, quantum communication \cite{qci} and quantum sensing \cite{RevModPhys.89.035002}. In these applications, it is often necessary to develop efficient estimation methods to acquire information about quantum systems and quantum system identification has attracted wide attention \cite{D1,NURDIN2022,7130587}. In quantum estimation and quantum system identification, a common and essential step is to perform measurement on the quantum system of interest. Quantum detector tomography (QDT), as the standard technique to characterize an unknown measurement process, is fundamental for device benchmarking and subsequent tasks such as quantum state tomography (QST) \cite{Qi2013,Hou2016,MU2020108837}, quantum Hamiltonian identification \cite{8022944,9026783,PhysRevLett.113.080401,PhysRevA.95.022335,zhang2,PhysRevA.96.062334}, quantum process tomography \cite{WANG2019269,PhysRevA.63.020101,xiao} and quantum control \cite{DONG2022}. When the operators describing a detector are diagonal in the photon number basis, they are called phase-insensitive (otherwise phase-sensitive) detectors and can be straightforwardly identified using function fitting \cite{Renema:12} or convex optimization \cite{Feito_2009,Lundeen2009,natarajan2013quantum}. For phase-sensitive detectors, generally they can not be simultaneously diagonalized and their reconstruction is thus more complicated. Existing methods include Maximum Likelihood Estimation \cite{PhysRevA.64.024102,PhysRevLett.93.250407}, linear regression \cite{Grandi_2017}, convex-quadratic optimization \cite{zhang2012mapping,Zhang_2012}, and analytical two-stage solution \cite{wang2019twostage}. 
In particular, binary detectors can always be simultaneously diagonalized and thus their estimation has an analytical scheme based on Frobenius-norm projection \cite{9029759}. For $ d $-dimensional QDT, we prepare $ M $ different types of quantum states, and the total number of copies of these states, $ N $, is called the resource number. Many identification algorithms assume that the experimental resource in QDT is diverse enough, i.e., any detector can be uniquely determined by the measurement outcome statistics. This scenario is called informationally complete (I.C.) \cite{ic1,ic2} and the opposite scenario is called informationally incomplete (I.I.). In practice, the I.C. condition may not be satisfied for QDT, which results in an I.I. scenario (e.g., when $ M<d^2 $ for a $ d $-dimensional detector). In the I.I. scenario, and in certain I.C. scenarios where the probe states lie close to the I.I. set although they are still in the I.C. set, the QDT problem is ill-conditioned. To solve this problem, convex optimization methods with regularization were proposed in \cite{Feito_2009,Lundeen2009} for phase-insensitive detectors and in \cite{zhang2012mapping,Zhang_2012} for phase-sensitive detectors. In experiments, a regularized least-square method was used in \cite{add1,add2} for phase-insensitive detectors. However, there is still a lack of closed form solutions for QDT with regularization in these existing methods. To solve this problem, we develop QDT with regularization inspired by classical transfer function identification. In the previous literature, a kernel-based regularization was proposed in \cite{PILLONETTO201081,PILLONETTO2011291,CHEN20121525,Pillonetto2014}, which can cope with the bias–variance trade-off. For kernel-based regularization, an important problem is how to design a suitable kernel matrix. Refs. \cite{CHEN20121525,Chen2018} proposed different kernels and Refs.
\cite{Chen2013,Mu2018,CHEN2021109682,mu2021asymptotic} discussed how to tune hyper-parameters in the kernel matrix and the asymptotic properties of these parameters. Further work about kernel-based regularization was studied in \cite{MU2018327,Pillonetto2016,chentac,pillonetto2022regularized}. In this paper, we develop regularization methods in QDT which are applicable to both phase insensitive and phase sensitive detectors. We give a closed form solution, applicable to both the cases of I.C. and I.I.. We then discuss different regularization forms and explain the advantages of using regularization in QDT. We consider no regularization as a special case. In the I.C. scenario, a common step (see e.g. \cite{wang2019twostage,9029759}) is to uniformly distribute the resource for each quantum state as $ N/M $, which is often not the optimal distribution. Without regularization, we discuss how to optimize the resource distribution for different types of probe states based on minimizing the mean squared error (MSE) of QDT. We convert this optimization problem into a semidefinite programming (SDP) problem, which can be solved efficiently. In comparison, if the resource distribution is given, the probe state design problem was discussed in \cite{xiao2021optimal}. In both the I.C. and I.I. scenarios, we also prove that under the static assumption (specific definitions in Section \ref{sec51}), the MSE scales as $ O(\frac{1}{N}) $ or tends to a constant, and we characterize when the MSE can reach the optimal scaling $ O(\frac{1}{N}) $. We propose an exact characterization of the best regularization for identifiable parameters to achieve the minimum MSE, allowing the probe states to be I.C. or I.I.. In the I.C. scenario, we obtain the same best regularization form as proposed in \cite{CHEN20121525}. We also prove the best regularization can reach the optimal scaling $ O\left(\frac{1}{N}\right) $ even in the I.I. scenario. 
Numerical examples demonstrate that the optimization of resource distribution and regularization can reduce the MSE. Then we give the reason why adaptive rank-1 regularization motivated from the best regularization fails to show an $ O(\frac{1}{N}) $ scaling in QDT, and we find an indication that full-rank regularization might be better. Finally, we apply our algorithm to quantum optical experiments using two-mode coherent states for binary detectors. The experimental results show that the adaptive regularization has a lower MSE compared with the Tikhonov regularization method in \cite{wang2019twostage}. The main contributions of this paper are summarized as follows. \begin{enumerate} \item[(i)] A closed form of regularized QDT solution is established with different regularization forms in the I.C. and I.I. scenarios. The motivations and advantages to apply regularization in QDT are discussed. \item[(ii)] Without regularization, we optimize the resource (probe state) distribution by converting it to a semidefinite programming (SDP) problem in the I.C. scenario. \item[(iii)] Under the static assumption, we prove that the MSE scales as $ O(\frac{1}{N}) $ or tends to a constant and we characterize when the MSE can reach the optimal scaling $ O(\frac{1}{N}) $. In addition, an exact characterization of the best regularization for identifiable parameters to achieve the minimum MSE is given in the I.C. and I.I. scenarios. \item[(iv)] Simulation results are provided to verify the effectiveness of resource distribution optimization and regularized QDT. Quantum optics experimental results are presented to demonstrate the necessity of choosing a proper regularization form to further reduce the QDT error. \end{enumerate} This paper is organized as follows. In Section \ref{sec2}, we introduce the background knowledge and weighted least squares for QDT. In Section \ref{sec3}, we discuss different regularization forms for QDT. 
In Section \ref{sec5}, we characterize the scaling of MSE under static assumptions and the best regularization for identifiable parameters. In Section \ref{ns}, we give numerical examples, and in Section \ref{secexp} we present experimental results. Conclusions are presented in Section \ref{con}. Notation: For a matrix $ A $, $A \geq 0$ means $A$ is positive semidefinite. The transpose of $A$ is $A^{T}$ and the conjugate transpose of $A$ is $A^{\dagger}$. The trace of $A$ is $\operatorname{Tr}(A)$. The rank of $ A $ is $ \operatorname{Rank}(A) $. The identity matrix is $I$. The real and complex domains are $\mathbb{R}$ and $\mathbb{C}$, respectively. The tensor product is $\otimes$. The set of all $d$-dimensional complex/real vectors is $ \mathbb{C}^{d}/\mathbb{R}^{d} $. Row and column vectors are also denoted as $ \langle\psi| $ and $|\psi\rangle$, respectively. The Frobenius norm for a matrix and the 2-norm for a vector are $\|\cdot\|$. The Kronecker delta function is $\delta$. $\mathrm{i}=\sqrt{-1}$. The diagonal matrix $ X $ formed from vector $ b $ is denoted as $ X=\operatorname{diag}(b) $. For any $X_{d \times d}\geq 0$ with spectral decomposition $X=U P U^{\dagger},$ define $\sqrt{X}$ or $X^{\frac{1}{2}}$ as $U \operatorname{diag}\left(\sqrt{P_{11}}, \sqrt{P_{22}}, \ldots, \sqrt{P_{d d}}\right) U^{\dagger}$. \section{Preliminaries and identification algorithm}\label{sec2} Here we present the background knowledge and briefly introduce the QDT identification algorithm in \cite{wang2019twostage}. Based on this QDT identification algorithm, we introduce weighted least squares (WLS) in QDT and explain its advantages. \subsection{Quantum state and measurement} For a $d$-dimensional quantum system, its state can be described by a $d \times d$ Hermitian matrix $\rho$, which satisfies $\rho\geq0$ and $\operatorname{Tr}(\rho)=1$. When $\rho=|\psi\rangle\langle\psi|$ for some $|\psi\rangle\in\mathbb{C}^{d}$, we call $ \rho $ a pure state.
Otherwise, $\rho$ is called a mixed state, and can be represented using pure states $\left\{\left|\psi_{i}\right\rangle\right\}$ as $\rho=\sum_{i} c_{i}\left|\psi_{i}\right\rangle\left\langle\psi_{i}\right|$ where $c_{i} \in \mathbb{R}$ and $\sum_{i} c_{i}=1 $ with $c_{i}\geq 0 $. A set of operators $\left\{P_{i}\right\}_{i=1}^{n}$ named Positive-operator-valued measure (POVM) characterizes a corresponding detector as a measurement device. Each POVM element $P_{i}$ is Hermitian and positive semidefinite, and together they satisfy the completeness constraint $\sum_{i=1}^{n} P_{i}=I$. When the measurements corresponding to $\left\{P_{i}\right\}$ are performed on $\rho$, the probability of obtaining the $i$-th result is given by the Born's rule \begin{equation} p_{i}=\operatorname{Tr}\left(P_{i} \rho\right). \end{equation} From the completeness constraint, we have $\sum_{i} p_{i}=1$. \subsection{Problem formulation of QDT} Suppose the true values of the POVM elements are $\left\{P_{i}\right\}_{i=1}^{n}$. We design $ M $ different types of quantum states $\rho_{j}$ (called probe states) and record the measurement results $\hat{p}_{i j}$ as the estimate of $p_{i j}=\operatorname{Tr}\left(P_{i} \rho_{j}\right)$. Each probe state has resource number $N_j$ (i.e., $ N_{j} $ copies) and the total resource number is $N=\sum_{j=1}^{M} N_{j} $. Given experimental data $\left\{\hat{p}_{i j}\right\}$, the problem of QDT \cite{wang2019twostage} can be formulated as \begin{equation} \min _{\left\{\hat{P}_{i}\right\}_{i=1}^n} \sum_{i=1}^{n} \sum_{j=1}^{M}\left[\hat{p}_{i j}-\operatorname{Tr}\left(\hat{P}_{i} \rho_{j}\right)\right]^{2} \end{equation} such that $\hat P_i=\hat P_i^{\dagger},\hat{P}_{i} \geq 0 $ for $1 \leq i \leq n$ and $\sum_{i=1}^{n} \hat{P}_{i}=I$. Let $\left\{\Omega_{i}\right\}_{i=1}^{d^{2}}$ be a complete basis set of orthonormal operators with dimension $d$. 
Without loss of generality, let $ \operatorname{Tr}\left(\Omega_{i}^{\dagger} \Omega_{j}\right)=\delta_{i j},\Omega_{i}=\Omega_{i}^{\dagger} $ and $\operatorname{Tr}\left(\Omega_{i}\right)=0$ except $ \Omega_{1}=I / \sqrt{d} $. Then we can parameterize the detector and probe states as \begin{equation} \label{para} \begin{aligned} P_{i} =\sum_{a=1}^{d^{2}} \lambda_{i}^{a} \Omega_{a},\rho_{j} =\sum_{b=1}^{d^{2}} \phi_{j}^{b} \Omega_{b}. \end{aligned} \end{equation} Using Born's rule, we can obtain \begin{equation} p_{i j}=\sum_{a=1}^{d^{2}} \phi_{j}^{a} \lambda_{i}^{a} \triangleq \phi_{j}^{T} \lambda_{i}, \end{equation} where $\phi_{j}\triangleq\left(\phi_{j}^{1}, \ldots \phi_{j}^{d^{2}}\right)^{T}$ and $\lambda_{i}\triangleq\left(\lambda_{i}^{1}, \ldots \lambda_{i}^{d^{2}}\right)^{T}$ are the parameterization vectors of $ \rho_{j} $ and $ P_i $, respectively. Suppose the outcome for $P_{i}$ of $ \rho_j $ appears $n_{i j}$ times, and then $\hat{p}_{i j}=n_{i j} /N_{j} $. Denote the estimation error as $e_{i j}=\hat{p}_{i j}-p_{i j} $. According to the central limit theorem, $ e_{ij} $ converges in distribution to a normal distribution with mean zero and variance $ \left(p_{i j}-p_{i j}^{2}\right) /N_j$. We thus have the least squares (LS) equation \begin{equation} \hat{p}_{i j}=\phi_{j}^{T} \lambda_{i}+e_{i j}. \end{equation} To propose least squares (LS) and weighted least squares (WLS) solutions in QDT, in the following we write down and solve the linear equation for each POVM element \emph{individually}. This can be achieved by rearranging the data after implementing all the measurements. Collect the parameterization of the probe states as $X=\left(\phi_{1}, \phi_{2}, \ldots, \phi_{M}\right)^{T}$. 
For the $ i $-th POVM element $ P_i $, let \begin{equation*} \begin{aligned} \hat{y}_{i}&\triangleq\left(\hat{p}_{i 1}, \hat{p}_{i 2}, \ldots, \hat{p}_{i M}\right)^{T},\\ y_{0}&\triangleq\left((1, \ldots, 1)_{1 \times M}\right)^{T}=\sum_{i} \hat{y}_{i},\\ \mathcal d_{d^{2} \times 1}&\triangleq(\sqrt{d}, 0, \ldots, 0)^{T},\\ e_{i}&\triangleq\left[e_{i 1}, \ldots, e_{i M}\right]^{T}. \end{aligned} \end{equation*} Define $ {\bar { y_i}}\triangleq\hat{ y}_{i}-\frac{1}{n} y_{0}$ and $ \theta_{i}\triangleq \lambda_i-\frac{1}{n}{\mathcal d}$. Then we have \begin{equation} \label{eqsmall} \bar{y_i}=X\theta_{i}+e_i. \end{equation} Now the QDT problem can be transformed into the following form: \begin{problem}\label{problem2} For $ 1\leq i\leq n $, given experimental data $ \bar{y}_{i} $, solve $ \min _{\hat{P}_{i}}\|\bar{y}_{i}-X\theta_{i}\|^{2} $ with $\hat{P}_{i} \geq 0 $, where $ \lambda_i= \theta_{i}+\frac{1}{n}{\mathcal d} $ is the parametrization of $ \hat{P}_{i} $. \end{problem} \subsection{Weighted least squares in QDT} To solve Problem \ref{problem2}, the standard LS solution is \begin{equation}\label{ls1} \hat \theta_{i,\text{LS}}=\left(X^{T} X\right)^{-1} X^{T}\bar{y_i}, \end{equation} and then the estimator for each detector is $\hat \lambda_{i,\text{LS}}=\hat \theta_{i,\text{LS}}+ \frac{1}{n}{\mathcal d} $, which is equivalent to equation (9) in \cite{wang2019twostage}. Although all the estimation errors $ e_{ij} $ have zero mean, they have different variances, a property called heteroscedasticity in statistics. The constrained least squares in equation (6) of \cite{wang2019twostage} and the standard LS \eqref{ls1} do not account for heteroscedasticity. In contrast, WLS accounts for heteroscedasticity and achieves the minimum MSE.
We thus consider WLS estimate \begin{equation} \label{wls} \hat{\theta}_{i,\text{WLS}}=\left({X}^{T}W_i {X}\right)^{-1} {X}^{T}W_i \bar y_i, \end{equation} where \begin{equation} {W_i}= \operatorname{diag}\left(\left[\frac{N_{1}}{p_{i1}-p_{i1}^{2}}, \ldots, \frac{N_{M}}{p_{iM}-p_{iM}^{2}}\right]\right) \end{equation} is the weighting matrix. We assume that $ p_{ij} $ is not equal to $ 0 $ or $ 1 $, which is reasonable because $ p_{ij} \in [0,1] $ and generally the probability for $ p_{ij}=0 $ or $ 1 $ is $ 0 $ in theory. The following are the two main advantages of using WLS: \begin{itemize} \item We can normalize the estimation errors to normal Gaussian errors and solve the heteroscedasticity problem. With increasing measurements, each $ e_{ij} $ will converge asymptotically to a Gaussian random variable with mean zero and variance $\sigma_{ij}=\left(p_{ij}-p_{ij}^{2}\right) / N_{j}$. Thus, we have $\mathbb{E}\left({e_{i}e_{i}}^{T}\right)={W_i}^{-1}$. Define $ Q_{i}\triangleq\sqrt{W_{i}}^{-1}/\sigma $ for certain $ \sigma>0 $. Then we multiply by $ Q_{i}^{-1} $ in \eqref{eqsmall} as \begin{equation} Q_{i}^{-1}{\bar y_i}=Q_{i}^{-1}{X} {\theta_i}+Q_{i}^{-1}{e_i}. \end{equation} Let $ \mathbb{E}(\cdot) $ denote the expectation with respect to all possible measurement results. The new errors have an independent identical normal distribution (i.i.d.) with \begin{equation}\label{sigma2} \mathbb{E}\left({Q_{i}}^{-1}{e_{i}e_{i}}^{T} {Q_{i}}^{-1}\right)=\sigma^{2}I. \end{equation} Thus, all the variances of the estimation errors are normalized to $ \sigma^2 $. 
\item For any unbiased linear estimator $ \hat \theta_i $ for $ \theta_i $, we have \cite{MU2020108837} \begin{equation} \begin{aligned} \operatorname{MSEM}\left(\hat{\theta}_{{i,\text{WLS}}}\right)&=\mathbb{E}\left[\left(\hat{\theta}_{{i,\text{WLS}}}-\theta_i\right)\left(\hat{\theta}_{i,\text{WLS}}-\theta_i\right)^{T}\right]\\ &=\left(X^{T}W_iX\right)^{-1}\leqslant \operatorname{MSEM}\left(\hat{\theta}_{{i}}\right), \end{aligned} \end{equation} where $ \operatorname{MSEM}\left(\cdot\right) $ is the MSE matrix. The MSE of all the POVM elements is \begin{equation} \label{upmse} \begin{aligned} \mathbb{E}\left(\sum_{i=1}^{n}\left\|\hat{P}_{i}-P_{i}\right\|^{2}\right) &=\sum_{i=1}^{n}\mathbb{E}\left(\left\|\hat{\theta}_i-\theta_i\right\|^{2}\right) \\ &=\sum_{i=1}^{n}\operatorname{Tr}\left(\operatorname{MSEM}\left(\hat{\theta}_{i}\right)\right). \end{aligned} \end{equation} Hence, the WLS solution to Problem \ref{problem2} has the minimum MSE. \end{itemize} In practice, the weighting matrix $W_i$ is unknown and a feasible solution is to use the asymptotic estimate \begin{equation}\label{hatw} \hat{W}_i=\operatorname{diag}\left(\left[\frac{N_{1}}{\hat{p}_{i1}-\hat{p}_{i1}^{2}}, \ldots, \frac{N_{M}}{\hat{p}_{iM}-\hat{p}_{iM}^{2}}\right]\right). \end{equation} Denote $\hat Q_{i}^{-1}\triangleq\sqrt{\hat W_{i}}^{-1}/\sigma $, $ \tilde{y_{i}}\triangleq\hat Q_{i}^{-1}{\bar y_i}$, $\tilde{X}_{i}\triangleq \hat Q_{i}^{-1}{X}$, $\tilde{e_i}\triangleq \hat Q_{i}^{-1}{e_i} $ and the model equivalent to \eqref{eqsmall} is \begin{equation} \label{weightmodel} {\tilde y_i}={\tilde X}_i {\theta_i}+\tilde{e}_{i}, \end{equation} where the variance of $ \tilde{e}_{i} $ is $ \sigma^{2}I$ and the practical asymptotic WLS (AWLS) estimate is \begin{equation} \label{awls} \begin{aligned} \hat{\theta}_{i,\text{AWLS}}&=\left({X}^{T}\hat W_i {X}\right)^{-1} {X}^{T}\hat W_i \bar y_i\\ &=\left(\tilde X_{i}^{T} \tilde X_{i}\right)^{-1}\tilde X_{i}^{T}\tilde{y_i}. 
\end{aligned} \end{equation} The difference between $\hat{{\theta}}_{{i,\text{AWLS}}}$ and $\hat{{\theta}}_{{i,\text{WLS}}}$ is asymptotically small in comparison with $ \hat{{\theta}}_{{i,\text{WLS}}}$ \cite{MU2020108837}. Thus, the estimate \eqref{awls} is accurate enough and asymptotically coincides with \eqref{wls}. Note that the LS estimate \eqref{ls1} or WLS estimate \eqref{awls} might result in nonphysical POVM elements $ \{\hat P_i\} $ which have negative eigenvalues due to the randomness of the measurement results. Thus, we need to use the algorithm in \cite{wang2019twostage} to further obtain a positive semidefinite estimate. \begin{remark} One may notice that \eqref{eqsmall} has the same linear regression form $ y=X\theta+e $ as transfer function identification in system identification \cite{CHEN20121525}. However, there are some differences between QDT and transfer function identification for classical (non-quantum) systems. First, in QDT, more measurement data will only enhance the data accuracy in $ y $ and the dimension of $ y $ is fixed with given probe states. In transfer function identification, the dimension of $ y $ increases for more data. Second, the parameterization matrix $ X $ is determined by the given probe states and $ X^{T}X $ can be singular (e.g., $ M<d^2 $) in QDT. In transfer function identification, $ X $ depends on the input data and measurement data. In practice, $ X^{T}X $ is therefore always invertible but the condition number may be large. Thus, the standard LS cannot give an accurate estimate. Finally, the variance of the noise $ e $ is often assumed to be a constant in transfer function identification. However, in QDT, the variances of noise are usually different and decrease as $ O(\frac{1}{N}) $ where $ N $ is the resource number. 
\end{remark} \section{Regularization in QDT}\label{sec3} In QDT, when the different types of probe states are similar or I.I., leading to an ill-conditioned problem, convex optimization methods with regularization were proposed in \cite{Feito_2009,Lundeen2009} for phase-insensitive detectors and in \cite{zhang2012mapping,Zhang_2012} for phase-sensitive ones. The motivation of introducing regularization is to mitigate the ill-conditioned property. For phase-insensitive detectors, the regularization form is chosen such that the diagonal elements of the reconstructed detector have smooth variations \cite{Zhang_2012}. However, for phase-sensitive detectors, a suitable regularization form is not easy to find. In addition, convex optimization methods cannot give a closed form solution. Therefore, in this section, we use regularization in the WLS of QDT which can give a closed form solution. \subsection{Regularized weighted least squares} In the ill-conditioned scenario, the condition number of $ \tilde X_{i}^{T}\tilde X_{i}$ can be large or even infinite. To solve this problem, we add regularization in the weighted model \eqref{weightmodel} as \begin{equation} \left\|\tilde y_{i}-\tilde X_{i} \theta_{i}\right\|^{2}+\theta_{i}^{T} D_{i} \theta_{i}, \end{equation} where $D_{i}$ is positive semi-definite and called a regularization matrix. Denote $ R_i\triangleq\tilde X_{i}^{T}\tilde X_{i} $. After we add regularization, the estimate is changed to be \begin{equation} \begin{aligned} \hat{\theta}_{i,\text{RWLS}}&=\left(R_i+D_{i}\right)^{-1} \tilde X_{i}^{T}\tilde y_i.\\ \end{aligned} \end{equation} The expectation of $ \hat{\theta}_{i,\text{RWLS}} $ is \begin{equation} \mathbb{E} \left(\hat{\theta}_{i,\text{RWLS}}\right)=\left(R_{i}+D_{i}\right)^{-1} R_{i} {\theta}_{i}. 
\end{equation} The bias is \begin{equation}\label{bias} \theta_{i,\text{RWLS}}^{\text{bias}}\triangleq\mathbb{E} \left(\hat{\theta}_{i,\text{RWLS}}\right)-\theta_{i}=-\left(R_{i}+D_{i}\right)^{-1} D_{i} \theta_{i}. \end{equation} Define \begin{equation} \begin{aligned} \tilde{\theta}_i\triangleq&\hat{\theta}_{i,\text{RWLS}}-\mathbb{E} \left(\hat{\theta}_{i,\text{RWLS}}\right)\\ =&\left(R_{i}+D_{i}\right)^{-1}\tilde{X}_{i}^{T}\left(\tilde{y}_{i}-\tilde{X}_{i} \theta_{i}\right)\\ =&\left(R_{i}+D_{i}\right)^{-1}\tilde{X}_{i}^{T}\tilde{e}_{i}, \end{aligned} \end{equation} and then the MSE matrix of $ \hat{\theta}_{i,\text{RWLS}} $ is \begin{equation}\label{msed} \begin{aligned} &\operatorname{MSEM}\left(\hat{\theta}_{i,\text{RWLS}}\right)=\mathbb{E}\left[\left(\hat{\theta}_{i,\text{RWLS}}-\theta_{i}\right)\left(\hat{\theta}_{i,\text{RWLS}}-\theta_{i}\right)^{T}\right] \\ &=\mathbb{E} \left(\tilde{\theta}_i \tilde{\theta}_{i}^{T}\right)+\theta_{i,\text{RWLS}}^{\text{bias}}\left(\theta_{i,\text{RWLS}}^{\text{bias}}\right)^{T}\\ &=\left(R_i+D_i\right)^{-1}\left(\sigma^{2} R_i+D_i \theta_{i} \theta_{i}^{T} D_{i}^{T}\right)\left(R_i+D_i\right)^{-1}. \end{aligned} \end{equation} An MSE matrix similar to \eqref{msed} can be found in \cite{CHEN20121525} for transfer function identification with standard LS estimation. The MSE of QDT is $ \operatorname{Tr}(\text{MSEM}) $ and depends on the true parameter $ \theta_{i} $. When the probe states are I.C., we can obtain an estimate without regularization (i.e., $ D_i=0 $) and the MSE matrix becomes \begin{equation}\label{mseic} \begin{aligned} &\operatorname{MSEM}\left(\hat{\theta}_{i,\text{RWLS}}\right)=\sigma^{2}R_i^{-1}, \end{aligned} \end{equation} which is independent of the true parameter $ \theta_{{i}} $. One can appreciate applying regularization in QDT from the following three aspects. \begin{enumerate} \item[1)] Regularization is a typical solution to ill-conditioned problems. 
In the field of classical transfer function identification (see e.g., \cite{Chen2013}), the input signal is band-limited, and then the matrix $R_i$ may become ill-conditioned as the amount of data increases. Similarly in QDT, the input probe states can be ``band-limited", in the sense that the types of the probe states are not rich enough (especially when coherent states are employed) and the problem transitions from I.C. to I.I. The current inability to prepare perfect number states endows $R_i$ with a large condition number, which can be reduced by regularization while still maintaining a closed-form solution. \item[2)] POVM elements satisfying physical constraints always have eigenvalues in $[0,1]$. On the contrary, direct LS estimation for ill-conditioned QDT usually gives an estimate with large $ \|\hat{\theta}_i\| $, whose corresponding POVM element may have eigenvalues outside $[0,1]$ and become nonphysical \cite{wang2019twostage}. Therefore, the regularization $ \theta_{i}^{T} D_{i} \theta_{i} $ is added as a penalty term on the amplitude of the estimation result, promoting the satisfaction of the physical constraints. \item[3)] From an alternative point of view, regularization leverages the bias-variance tradeoff. The regularized estimate is biased, as shown in \eqref{bias}, which can lead to an MSE smaller than that of the standard LS estimation both in the I.C. and I.I. scenarios. \end{enumerate} \begin{remark} Ref. \cite{MU2020108837} has discussed regularized weighted regression in quantum state tomography. Their motivation is that the quantum state $ \rho $ is usually of low rank and thus it is reasonable to add a Tikhonov regularization as in Sec. \ref{tik}, which is different from our motivation in QDT. \end{remark} \subsection{Different regularization forms in QDT} Here we discuss different regularization forms in QDT. Firstly, we consider no regularization (i.e., $ D_i=0 $) as a special regularization form in the I.C. scenario.
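Before turning to the specific forms, the regularized WLS estimate, its bias \eqref{bias}, and the MSE matrix \eqref{msed} can be checked numerically. The following minimal Python sketch uses a synthetic, nearly collinear design matrix as a stand-in for $\tilde X_i$; all dimensions and constants are illustrative assumptions rather than values from our experiments:

```python
import numpy as np

# Minimal sketch of regularized WLS: closed-form bias and MSE matrix
# versus a Monte Carlo estimate, on an ill-conditioned synthetic design.
rng = np.random.default_rng(0)
M, p = 30, 6                      # probe-state types, parameter dimension
X = rng.normal(size=(M, p))
X[:, -1] = X[:, 0] + 1e-6 * rng.normal(size=M)   # makes X^T X ill-conditioned
theta = rng.normal(size=p)
sigma = 0.1
D = 0.05 * np.eye(p)              # Tikhonov-type regularization matrix

R = X.T @ X
P = np.linalg.inv(R + D)          # (R_i + D_i)^{-1}
# Closed forms: bias = -(R+D)^{-1} D theta,
# MSEM = (R+D)^{-1} (sigma^2 R + D theta theta^T D^T) (R+D)^{-1}.
bias = -P @ D @ theta
msem = P @ (sigma**2 * R + D @ np.outer(theta, theta) @ D.T) @ P

# Monte Carlo check: theta_hat = (R+D)^{-1} X^T y over many noise draws.
ys = X @ theta + sigma * rng.normal(size=(20000, M))
errs = ys @ X @ P - theta         # P is symmetric, so this is the estimate error
assert np.allclose(errs.mean(axis=0), bias, atol=5e-3)
assert np.abs(errs.T @ errs / len(errs) - msem).max() < 5e-3
print("MSE (trace of MSEM):", np.trace(msem))
```

The empirical mean error and second-moment matrix match the closed forms, illustrating the bias-variance tradeoff described in item 3) above.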
Since the MSE in \eqref{mseic} does not depend on the true parameter $ \theta_{i} $, we propose resource distribution optimization of $ N_j $ to minimize the MSE with given probe states. Then we present some common regularization forms. With regularization, the MSE in \eqref{msed} depends on the true parameter $ \theta_{i} $. Thus we cannot optimize the resource distribution as in the unregularized case, and we use a uniform distribution $ N_j=N/M $. \subsubsection{Without regularization} Without regularization, Refs. \cite{wang2019twostage,9029759} choose $ N_j=N/M $ for given probe states, which is often not the optimal distribution. Similar input design problems in classical systems and control have been widely studied and there are many existing results, e.g., D-, A-, and E-optimal input design \cite{Boyd2004Convex}. Here, we formulate and solve the problem within the framework of the A-optimal design problem, where the trace of the covariance matrix (i.e., MSE) is minimized. Let $ \eta_j=\frac{N_{j}}{N} $, and the resource distribution optimization problem can be formulated as \begin{equation} \label{rs} \begin{aligned} \min_{\{\eta_{j}\}_{j=1}^{M}} & \sum_{i=1}^{n}\operatorname{Tr}\left( \sum_{j=1}^{M}\left(\eta_{j}w_{ij} \phi_{j} \phi_{j}^{T}\right)\right)^{-1} \\ \text { s.t. } & \eta_{j} \geq 0, \sum_{j=1}^{M} \eta_{j}=1, \end{aligned} \end{equation} where $ \phi_{j} $ is the given parameterization vector of $ \rho_{j} $ and $ w_{ij} $ is a weighting constant which may be obtained from prior information. If we do not have prior information, we can set $ w_{ij}=1 $. This optimization problem is convex and it can be converted to a semidefinite programming (SDP) problem \begin{equation} \label{rs2} \begin{aligned} &\min_{\{\eta_{j}\}_{j=1}^{M},\{u_{k}\}_{k=1}^{d^2}} \sum_{k=1}^{d^2} u_k \\ &\quad\quad\text { s.t.
} \left[\begin{array}{cc} \sum_{j=1}^{M}\eta_{j} w_{ij}\phi_{j} \phi_{j}^{T} & v_{k} \\ v_{k}^{T} & u_{k} \end{array}\right] \geq 0,\\ &\quad\quad 1 \leq k \leq d^{2}, 1 \leq i \leq n, \\ &\quad\quad\eta_{j} \geq 0, \sum_{j=1}^{M} \eta_{j}=1, \end{aligned} \end{equation} where $v_{k}$ is the $k$-th unit vector. Using CVX \cite{cvx,gb08}, we can solve \eqref{rs2} efficiently. Note that $ N_j=\eta_jN $ may not be an integer, and we need to round it up or down. In comparison, when the resource distribution is given, the probe state design problem was discussed in \cite{xiao2021optimal} based on minimizing an upper bound on the MSE and the condition number. \subsubsection{Tikhonov regularization}\label{tik} The most common regularization form is of the Tikhonov type \cite{Boyd2004Convex}. In QDT, a natural method is to choose the regularization matrix as \begin{equation} \label{Ti} D_i^{\text{Tikhonov}}=cI, \end{equation} where $ c $ is a positive constant. Ref. \cite{wang2019twostage} did not use WLS and chose $D_i=\frac{c}{N}I $, which is Tikhonov regularization, because \begin{equation}\label{wangtik} \begin{aligned} \hat{\theta}_{i,\text{RWLS}}&=\left(X^{T}X+\frac{c}{N}I\right)^{-1} X^{T} \bar y_i\\ &=\left(X^{T}NI X+{c}I\right)^{-1} X^{T} NI\bar y_i,\\ \end{aligned} \end{equation} where the weighted matrix is $ NI $ instead of \eqref{hatw}. \subsubsection{Kernel-based regularization}\label{difference} In transfer function identification, Refs. \cite{PILLONETTO201081,PILLONETTO2011291,CHEN20121525,Pillonetto2014} proposed kernel-based regularization and explained regularization from a Bayesian perspective. We assume the true parameter $ \theta_{i}$ is a random variable and has a Gaussian distribution with zero mean and covariance matrix $ S_i $: \begin{equation} \theta_{i} \sim \mathcal{N}\left(0, S_i\right).
\end{equation} Therefore, the posterior estimate is \begin{equation}\label{rwlssolution} \begin{aligned} \hat{\theta}_{i}^{\text {post }}=&\left(S_iR_{i}+\sigma^{2} I\right)^{-1} S_i F_{i} \\ =&\left(R_{i}+\sigma^{2} S_{i}^{-1}\right)^{-1} F_{i}, \end{aligned} \end{equation} where $ F_{i}\triangleq\tilde{X}_{i}^{T}\tilde{y_i} $. If $ S_i $ is singular, we can use the first equality of \eqref{rwlssolution} to obtain the estimate. This posterior estimate is the same as the regularized estimate if the regularization matrix $D_i$ is chosen as \cite{CHEN20121525} \begin{equation} D_i=\sigma^{2} S_{i}^{-1}. \end{equation} This gives an insight into how to choose the regularization matrix $ D_i $ or kernel matrix $ S_{i} $: Let it reflect the correlations of the parameters \cite{CHEN20121525}. To use kernel-based regularization in QDT, we need to solve two problems: \begin{enumerate} \item[(i)] In the Bayesian perspective for kernel-based regularization, the mean of the unknown parameters is zero. But in QDT, the mean of the unknown parameters $ \lambda_i $ is usually not zero. \item[(ii)] Heteroscedasticity: In transfer function identification, it is usually assumed that the noises have the same variances. But the estimation errors $ e_{ij} $ usually have different variances in QDT. \end{enumerate} The first problem is solved by the modeling in \eqref{eqsmall}, where the unknown parameter $ \theta_i $ becomes zero-mean. For the second problem, the WLS formulation \eqref{weightmodel} resolves the heteroscedasticity. There are two advantages of using kernel-based regularization in QDT compared with transfer function identification: \begin{enumerate} \item[(i)] In transfer function identification, the noise variance must first be identified, while in QDT the approximate variance of the estimation error is already known from the measurement data.
\item[(ii)] In transfer function identification, the problem dimension increases as more data is generated, resulting in increased difficulty. In contrast, in QDT more data only enhances the data accuracy, and the dimension is fixed for given probe states. \end{enumerate} One limitation of using kernel-based regularization in QDT is that the parameter $ \theta_i$ does not have the property of impulse responses of transfer functions, which usually decay exponentially \cite{CHEN20121525}. In this paper, we choose the DI (diagonal) kernel, which only represents the auto-correlation of each coefficient of QDT, as \begin{equation} \label{DI} S_i^{\text{DI}}(k, j)=\left\{\begin{array}{ll} c \mu^{k}, & \text {if } k=j,\\ 0, & \text {otherwise}, \end{array}\right. \end{equation} where $ c \geq 0$, $0 \leq \mu \leq 1 $. If we have more prior knowledge such as the correlation between different coefficients, we can design more suitable kernels as in transfer function identification. \subsubsection{Best regularization (in the I.C. scenario)} For the true parameter $ \theta_i $, two natural questions are whether an optimal regularization matrix exists and, if so, whether it depends on $ \theta_i $. Ref. \cite{CHEN20121525} has discussed these problems in transfer function identification and the result also holds for QDT. The MSE matrix in \eqref{msed} can be rewritten using $ S_i $ as \begin{equation}\label{msesi} \begin{aligned} \operatorname{MSEM}\left(\hat{\theta}_{i,\text{RWLS}}\right)=&\left(S_iR_i+\sigma^{2} I\right)^{-1}(\sigma^{2} S_{i}R_iS_{i}\\ &+\sigma^{4} \theta_{i} \theta_{i}^{T})\left(R_iS_{i}+\sigma^{2} I\right)^{-1}.
\end{aligned} \end{equation} When $ R_i $ is invertible, the following matrix inequality \cite{CHEN20121525,1658250} \begin{equation}\label{ls} \left.\operatorname{MSEM}\left(\hat{\theta}_{i, \mathrm{RWLS}}\right)\right|_{S_{i}=K} \geq\left.\operatorname{MSEM}\left(\hat{\theta}_{i, \mathrm{RWLS}}\right)\right|_{S_{i}=\theta_{i} \theta_{i}^{T}} \end{equation} holds for any $ K \geq 0 $. Later, in Theorem \ref{theorem2}, we will extend this inequality to the case where $ R_i $ is singular. Thus, ideally the best choice of regularization always includes \begin{equation}\label{best} S_i^{\text{best}}=\theta_{i} \theta_{i}^{T}, \end{equation} which yields the corresponding optimal regularized estimate \begin{equation} \hat{\theta}_{i}^{\text{best}}=\left(\theta_{i} \theta_{i}^{T} R_i+\sigma^{2} I\right)^{-1} \theta_{i} \theta_{i}^{T} F_i, \end{equation} with $ R_i=\tilde X_{i}^{T}\tilde X_{i} $ and $ F_{i}=\tilde{X}_{i}^{T}\tilde{y_i} $. The theoretically best regularization depends on the unknown parameter and cannot be used in practice. A natural question is whether $ \theta_{i} \theta_{i}^{T} $ is the only choice for $ S_{i} $ that results in the best regularization. Ref. \cite{1658250} has given a positive answer for the I.C. scenario. For the I.I. scenario we will give a negative answer in Sec. \ref{sec52}. \subsubsection{Adaptive regularization} Motivated by the best regularization, we propose adaptive regularization with a rank-1 kernel matrix, similar to the rank-1 kernel matrix for transfer function identification in \cite{chentac}. Firstly, we consider a two-step adaptive regularization. In the first step, we use Tikhonov or kernel-based regularization with a certain kernel matrix $ S_i^{(1)} $ to obtain a rough estimate $ \hat\theta_{i}^{0} $.
Then in the second step, we reuse the measurement data from the first step, but now the regularization matrix is adaptively chosen as \begin{equation} \label{first} S^{\text{rank-1}}_{i}=\hat\theta_{i}^{0}\left(\hat\theta_{i}^{0}\right)^T. \end{equation} The following analysis and Theorem \ref{theorem1} in the next section indicate that a full-rank kernel matrix may be better than a rank-1 one, because a full-rank $ S_i $ does not induce a dimension reduction from $ R(B) $ to $ R(S_iB) $. Therefore, we also consider using a full-rank kernel matrix as \begin{equation}\label{full} S_{i}^{\text{full-rank}}=S^{\text{rank-1}}_{i}+S^{\text{DI}}_{i}, \end{equation} in Sec. \ref{ns}. Determining the kernel matrix is an important problem, and several different kernels have been proposed in transfer function identification. For a kernel matrix with given structure, optimization of the hyper-parameters (such as $c$, $ \mu $ in \eqref{DI}) has been discussed in \cite{chentac,Chen2018,Chen2013}. However, how to choose the optimal adaptive kernel matrix is still an open problem. \section{Characterizing the MSE of QDT with regularization }\label{sec5} \subsection{On the MSE scaling}\label{sec51} To analyze the performance of different regularization methods, we characterize the asymptotic behavior of the estimation error, e.g., MSE. Without loss of generality, we can always normalize the variances of the estimation errors to $ 1 $, i.e., $ \sigma^{2}=1 $ in \eqref{sigma2}. We give the following assumptions. \begin{assumption}\label{assum1} The probe state parameterization matrix $ X $ is given. The kernel matrix $S_i$ is given. For each $1\leq j\leq M$, $\lim _{N \rightarrow \infty} \frac{N_{j}}{N}= h(j)$ where $ h(j) $ is a constant in $ [0,1] $ depending on $ j $. \end{assumption} We refer to Assumption \ref{assum1} as the \emph{static assumption}.
Under Assumption \ref{assum1}, the probe state parameterization matrix and the kernel matrix are given constant matrices which do not change in our analysis, while the resource distribution for each probe state can change as $ N $ increases; the limit of each ratio is a constant, which can be $ 0 $ or $ 1 $. We say that the random sequence $\left\{\xi_{N}\right\} $ converges almost surely to a random variable $\xi $ if $\operatorname{P}\left(\lim_{N \rightarrow \infty}\left\|\xi_{N}-\xi\right\|_{2}=0\right)=1$, which can be written as $\xi_{N} \stackrel{a. s.}{\rightarrow} \xi$ as $N \rightarrow \infty$. For the weighted matrix $\hat{W}_{i} $, its deviation from the true value $ {W}_{i} $ has been derived in \cite{MU2020108837} as \begin{equation}\label{weight} \begin{aligned} \hat{W}_{i}=&\operatorname{diag}\left(\left[\frac{N_{1}}{\hat{p}_{i 1}-\hat{p}_{i 1}^{2}}, \ldots, \frac{N_{M}}{\hat{p}_{i M}-\hat{p}_{i M}^{2}}\right]\right)\\ =&\left(1+O\left(\frac{1}{\sqrt{N}}\right)\right)W_{i}.\\ \end{aligned} \end{equation} We define \begin{equation} B\triangleq\lim _{N \rightarrow \infty}\frac{X^{T} W_i X}{N}, \hat B_{N}\triangleq\frac{X^{T} \hat W_i X}{N}, \end{equation} where the normalized weighted parameterization matrix $\hat B_{N}=\left(1+O\left(\frac{1}{\sqrt{N}}\right)\right) B $ for a constant matrix $ B $ because $\lim _{N \rightarrow \infty} \frac{N_{j}}{N}$ is a constant. Therefore, $ \hat B_{N}\stackrel{a. s.}{\rightarrow}B $ as $ N \rightarrow \infty $. We denote by $ R(X) $ the range space of $ X $ and by $ N(X) $ the null space of $ X $. Then we propose the following theorem to characterize the MSE. \begin{theorem}\label{theorem1} In the regularization-based QDT, if the $i$-th POVM element satisfies the static assumption, then its MSE scales as $ O(\frac{1}{N}) $ if and only if the true values of the unknown parameters satisfy $\theta_i\in R(S_{i} B) $. Otherwise, the MSE converges to a positive value independent of $ N $.
\end{theorem} \begin{proof} For the $ i $-th POVM element, according to \eqref{msesi} and $ \sigma^{2}=1 $, the MSE is \begin{equation} \begin{aligned} &\operatorname{Tr}\left[\left.\operatorname{MSEM}\right|_{S_{i}}\right]\\ =&\operatorname{Tr}\big[\left(S_{i} R_{i}+ I\right)^{-1}( S_{i} R_{i} S_{i}+ \theta_{i} \theta_{i}^{T})\left(R_{i} S_{i}+ I\right)^{-1}\big]\\ =&\operatorname{Tr}\big\{\left[\left(S_{i} R_{i}+ I\right)\left(R_{i} S_{i}+ I\right)\right]^{-1}\left(S_{i} R_{i} S_{i}+ \theta_{i} \theta_{i}^{T}\right)\big\},\\ \end{aligned} \end{equation} where $ R_i=X^{T} \hat{W}_{i} X $. We define \begin{equation}\label{A1} \begin{aligned} A_{1}&\triangleq\left(S_{i} R_{i}+ I\right)\left(R_{i} S_{i}+ I\right)\\ &=\left(NS_{i}\hat{B}_{N}+ I\right)\left(N\hat{B}_{N} S_{i}+ I\right),\\ \end{aligned} \end{equation} and \begin{equation}\label{A2} \begin{aligned} A_{2}\triangleq S_{i} R_{i} S_{i}=NS_{i}\hat B_{N}S_{i}. \end{aligned} \end{equation} Now the MSE becomes $ \operatorname{Tr}\left(A_{1}^{-1}(A_{2}+\theta_{i}\theta_{i}^{T})\right) $. We then introduce the following lemma. \begin{lemma}\label{lemma2} \cite{WU198853,CUI201717} For an $n \times n$ complex matrix $T$, the following statements are equivalent: \begin{enumerate} \item $T=A B$, where $A, B \geqslant 0$; \item $T=A B$, where $A>0$ and $B \geqslant 0$; \item $T$ is similar to a nonnegative diagonal matrix. \end{enumerate} \end{lemma} From Lemma \ref{lemma2}, $ S_iB $ is similar to a nonnegative diagonal matrix and we write $ S_iB=Q^{-1}\Sigma_{1}Q$, where $\Sigma_{1}=\operatorname{diag}(\Sigma_{11},\Sigma_{12}) $, $\Sigma_{11}$ is a $ k\times k $ positive diagonal matrix, and $\Sigma_{12}$ is a $ (d^2-k)\times (d^2-k) $ zero matrix.
Therefore, $ NS_iB+I $ can also be diagonalized by $ Q $ as \begin{equation}\label{inv} \begin{aligned} N S_{i} B+I&=Q^{-1} \operatorname{diag}\left(\left[\tau_{1}, \cdots, \tau_{d^{2}}\right]\right) Q \\ &=Q^{-1}\operatorname{diag}\left(N \Sigma_{11}+I_{k}, I_{d^{2}-k} \right)Q, \end{aligned} \end{equation} where $ \tau_1\geq\cdots\geq\tau_{d^2}>0 $, $ \tau_j=O(N) $ for $ 1\leq j\leq k $ and $ \tau_j=1 $ for $ k+1\leq j\leq d^2 $ and the corresponding eigenvectors are $ \{u_j\}_{j=1}^{d^2} $. As $ N\rightarrow\infty $, we have \begin{equation} \begin{aligned} &\lim _{N \rightarrow \infty}\left(N S_{i} B+I\right)^{-1}\\ =&\lim _{N \rightarrow \infty} Q^{-1}\operatorname{diag}\left(N \Sigma_{11}+I_{k}, I_{d^{2}-k} \right)^{-1} Q\\ =&Q^{-1}\operatorname{diag}\left(0, I_{d^{2}-k} \right) Q, \end{aligned} \end{equation} and thus $ \left(N S_{i} B+I\right)^{-1} $ tends to a constant matrix. Since \begin{equation*} I-\left(NS_{i}{B}+ I\right)^{-1}=\left(NS_{i}{B}+ I\right)^{-1}NS_iB, \end{equation*} it is also a bounded matrix and tends to a constant matrix as $N\rightarrow\infty $. Let the spectral decomposition of $ B $ be \begin{equation}\label{b} B=V \Sigma_{2} V^{T}=V\operatorname{diag}\left(\Sigma_{21},0\right) V^{T}. \end{equation} Thus, the Moore-Penrose inverse of $ B $ is \begin{equation} \tilde{B}=V\operatorname{diag}\left(\Sigma_{21}^{-1},0\right) V^{T}, \end{equation} which is a constant matrix and $ B\tilde{B}B=B $. Therefore, the first term of MSE is \begin{equation} \begin{aligned} &\quad\operatorname{Tr}\left(A_{1}^{-1} A_{2}\right)\\ &\stackrel{a. s.}{\rightarrow}\operatorname{Tr}\left(\left(N B S_{i}+ I\right)^{-1}\left(N S_{i} B+ I\right)^{-1} N S_{i} B S_{i}\right) \\ &=\!\frac{1}{N}\! \operatorname{Tr}\left(\!\left(NS_{i} B+ I\right)^{-1} NS_{i} B\!\cdot\! \tilde{B}\!\cdot\! 
N{B} S_{i}\left(\!NB S_{i}+ I\right)^{-1}\right)\\ &= O\left(\frac{1}{N}\right), \end{aligned} \end{equation} because the term $ \operatorname{Tr}(\cdot) $ is bounded and tends to a constant. Therefore, the first term of the MSE always scales as $ O(\frac{1}{N}) $. Then we discuss the scaling of the second term of the MSE \begin{equation} \operatorname{Tr}\left(A_{1}^{-1} \theta_{i} \theta_{i}^{T}\right)\stackrel{a. s.}{\rightarrow}\theta_{i}^{T}\left(NBS_i+I\right)^{-1} \left(NS_iB+I\right)^{-1} \theta_{i}. \end{equation} If $ \theta_{i} $ is a linear combination of $ u_{j}$ for $1 \leq j \leq k $, we have \begin{equation} \operatorname{Tr}\left(A_{1}^{-1} \theta_{i} \theta_{i}^{T}\right)= O(\frac{1}{N^2}). \end{equation} Otherwise, if $ \theta_{i} $ is not a linear combination of $ u_{j}, 1 \leq j \leq k $, $ \operatorname{Tr}\left(A_{1}^{-1} \theta_{i} \theta_{i}^{T}\right) $ tends to a positive number independent of $ N $. Therefore, the MSE scales as $ O(\frac{1}{N}) $ if and only if the true parameter $ \theta_{i} $ is a linear combination of $ u_{j}$ for $1 \leq j \leq k $, i.e., $ {\theta}_i\in R(S_{i}{B}) $. Otherwise, the MSE tends to a positive value independent of $ N $. \end{proof} Note that when $ S_{i} {B} $ is full-rank, i.e., $ S_{i} $ and $B $ are both positive definite, the condition $ \theta_i\in R(S_{i} {B}) $ is always satisfied. Therefore, the MSE always scales as $ O(\frac{1}{N}) $. Thus, when the types of different probe states are I.C., for any positive definite kernel matrix $ S_i $, the MSE always scales as $ O(\frac{1}{N}) $. However, when the probe states are I.I., the condition $ \theta_i\in R(S_{i} {B}) $ is difficult to satisfy in practice. Thus, without special prior knowledge, for almost all regularization forms, the MSE will tend to a constant when $ N $ tends to infinity.
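This dichotomy can be illustrated numerically. The following Python sketch compares the closed-form MSE $ \operatorname{Tr}\left(A_{1}^{-1}(A_{2}+\theta_{i}\theta_{i}^{T})\right) $ for a parameter inside and outside $R(S_iB)$; the singular $B$ and all other values are synthetic illustrative choices:

```python
import numpy as np

# Toy check of the dichotomy in Theorem (theorem1): with a singular B
# (I.I. probe states) and sigma^2 = 1, the MSE scales as O(1/N) iff
# theta_i lies in R(S_i B); otherwise it saturates at a constant.
rng = np.random.default_rng(1)
d2 = 5
V = np.linalg.qr(rng.normal(size=(d2, d2)))[0]
B = V[:, :3] @ np.diag([2.0, 1.0, 0.5]) @ V[:, :3].T  # rank-3 PSD, singular
S = np.eye(d2)                                         # full-rank kernel

def mse(theta, N):
    """Closed-form MSE Tr[A1^{-1}(A2 + theta theta^T)] from the proof."""
    A1 = (N * S @ B + np.eye(d2)) @ (N * B @ S + np.eye(d2))
    A2 = N * S @ B @ S
    return np.trace(np.linalg.solve(A1, A2 + np.outer(theta, theta)))

theta_in = B @ rng.normal(size=d2)   # theta in R(B) = R(S B) since S = I
theta_out = V[:, 4]                  # theta in the null space of S B

for N in [1e2, 1e3, 1e4]:
    print(N, mse(theta_in, N), mse(theta_out, N))
# mse(theta_in, .) shrinks roughly by 10x per decade of N (O(1/N));
# mse(theta_out, .) saturates near ||theta_out||^2 = 1.
assert mse(theta_in, 1e4) < 1e-3
assert mse(theta_out, 1e4) > 0.9
```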
In addition, as $ M $ decreases, for given $ S_i $, this condition may become more difficult to satisfy because $ R\left(S_{i} {B}\right) $ may become smaller. Thus, rank-1 adaptive regularization as in \eqref{first} is not a good choice and a full-rank kernel matrix as in \eqref{full} may be better. The above analysis helps delineate the limits of what regularization can achieve in QDT. \begin{remark} A similar problem was also discussed as Theorem 2.1 in \cite{chentac} for transfer function identification in the I.C. scenario. There, a condition to realize unbiased estimation of the true parameters with regularization was given. Here, by allowing the probe states to be I.C. or I.I., we give a stronger result for QDT: the MSE either scales as $ O(\frac{1}{N}) $ or tends to a constant. Our result can also be applied to the case where the variance of the noise scales as $ O(\frac{1}{N}) $, which is typical in the scenario where only statistical noise is considered in quantum measurement. \end{remark} \subsection{On the best regularization allowing I.I.}\label{sec52} We now consider the best regularization, which achieves the minimum MSE. It is given by \eqref{best} in the I.C. scenario. Here we aim to characterize the I.I. case. From \eqref{inv} we know $ NS_{i}B+I $ is always invertible. Define \begin{equation} L_{i}\triangleq-\left(N S_{i} B+I\right)^{-1}, \end{equation} and thus \begin{equation} I+L_{i}=\left(N S_{i} B+I\right)^{-1} N S_{i} B=-N L_{i} S_{i} B. \end{equation} Therefore, we have \begin{equation}\label{d2b} \left(I+L_{i}\right) \tilde{B}= -NL_{i} S_{i} B \tilde{B}. \end{equation} Then we propose the following theorem to characterize the best kernel matrix, allowing $ B $ to be singular.
\begin{theorem}\label{theorem2} For the $ i $-th POVM element with true parameter $ \theta_{i} $ and normalized weighted parameterization matrix $ B $ as \eqref{b}, define $\Gamma\triangleq\big\{M \mid M=\theta_{i} \theta_{i}^{T}+V\operatorname{diag}\left(0,Z_{3}\right) V^{T}, Z_{3} \geq 0, \operatorname{dim}(Z_{3})=d^2-\operatorname{rank}(B)\big\}$. If $ \theta_{{i}}\in R(B) $, then $ S_i $ achieves the minimum MSE (i.e., $ S_i $ is the best regularization) if and only if $ S_{i}\in \Gamma$. \end{theorem} \begin{proof} The MSE with kernel matrix $ S_i $ can be rewritten as \begin{equation}\label{mseli} \begin{aligned} &\operatorname{Tr}\left[\left.\operatorname{MSEM}\right|_{S_{i}}\right]\\ =&\operatorname{Tr}\left[\left(N S_{i} B+I\right)^{-1}\left(N S_{i} B S_{i}+\theta_{{i}}\theta_{i}^{T}\right)\left(N B S_{i}+I\right)^{-1}\right] \\ =&\operatorname{Tr}\left[\frac{\left(I+L_{i}\right) \tilde{B}\left(I+L_{i}\right)^{T}}{N}+L_{i} \theta_{{i}}\theta_{i}^{T} L_{i}^{T}\right]. \end{aligned} \end{equation} Define $g(L_{i})$ to be the last line of \eqref{mseli}. Since $ g(L_{i}) $ is convex in $ L_i $, we can find the minimum value by setting the derivative to zero: \begin{equation}\label{grad} \frac{dg}{d L_{i}}=\frac{2 \tilde{B}+2 L_{i} \tilde{B}}{N}+2 L_{i} \theta_{{i}}\theta_{i}^{T} =0. \end{equation} If there exists $S_i\geq0$ such that \eqref{grad} holds for the corresponding $L_i|_{S_i}$, then such an $S_i$ is the optimal solution to minimize the MSE \eqref{mseli}. We tentatively plug $S_i$ in \eqref{grad}, which (using \eqref{d2b}) becomes $2 L_{i}\left(-S_{i}B\tilde{B}+\theta_{{i}}\theta_{i}^{T}\right)=0$, equivalent to \begin{equation}\label{besteq} \tilde{B}BS_{i} =\theta_{{i}}\theta_{i}^{T}.
\end{equation} Since $\theta_{i}\in R(B) $, we let $\theta_{i}=Bb $ and then \eqref{besteq} becomes \begin{equation}\label{besteq2} \begin{aligned} & V\left[\begin{array}{cc} I_{21} & 0 \\ 0 & 0 \end{array}\right] V^{T} S_{i}\\ =&V\left[\begin{array}{cc} \Sigma_{21} & 0 \\ 0 & 0 \end{array}\right] V^{T} b b^{T} V\left[\begin{array}{cc} \Sigma_{21} & 0 \\ 0 & 0 \end{array}\right] V^{T}. \end{aligned} \end{equation} Denote \begin{equation} V^{T} b=\left[\begin{array}{l} p \\ q \end{array}\right], V^{T} S_{i} V=\left[\begin{array}{cc} Z_{1} & Z_{2} \\ Z_{2}^{T} & Z_{3} \end{array}\right]. \end{equation} Then \eqref{besteq2} can be simplified as \begin{equation} \begin{aligned} &{\left[\begin{array}{cc} I_{21} & 0 \\ 0 & 0 \end{array}\right]\left[\begin{array}{ll} Z_{1} & Z_{2} \\ Z_{2}^{T} & Z_{3} \end{array}\right]=\left[\begin{array}{cc} Z_{1} & Z_{2} \\ 0 & 0 \end{array}\right]} \\ =&\left[\begin{array}{cc} \Sigma_{21} & 0\! \\ 0 & 0 \end{array}\right]\left[\begin{array}{l} p \\ q \end{array}\right]\left[\begin{array}{ll} p^{T} & q^{T} \end{array}\right]\left[\begin{array}{cc} \Sigma_{21} & 0 \\ 0 & 0 \end{array}\right]\\ =&\left[\begin{array}{cc} \Sigma_{21} p p^{T} \Sigma_{21} & 0 \\ 0 & 0 \end{array}\right], \end{aligned} \end{equation} and thus \begin{equation} Z_{1}=\Sigma_{21} p p^{T} \Sigma_{21}, Z_{2}=0. \end{equation} Since \begin{equation} \theta_{i} \theta_{i}^{T}=V\left[\begin{array}{cc} \Sigma_{21} p p^{T} \Sigma_{21} & 0 \\ 0 & 0 \end{array}\right] V^{T}, \end{equation} all solutions to \eqref{besteq} can be expressed as \begin{equation} S_{i}\!=\!V\left[\begin{array}{cc} \!\Sigma_{21} p p^{T} \Sigma_{21} & 0 \!\\ \!0 & Z_{3}\! \end{array}\right] V^{T}=\theta_{i} \theta_{i}^{T}\!+\!V\left[\begin{array}{cc} 0 & 0 \\ 0 & Z_{3} \end{array}\right] V^{T}, \end{equation} where $ Z_3 $ is positive semidefinite. 
Therefore, the solution set of \eqref{besteq} is exactly characterized by $ \Gamma $ where \begin{equation} \begin{aligned} \Gamma\triangleq&\big\{M \mid M=\theta_{i} \theta_{i}^{T}+V\operatorname{diag}\left(0,Z_{3}\right) V^{T}, Z_{3} \geq 0,\\ & \operatorname{dim}(Z_{3})=d^2-\operatorname{rank}(B)\big\}. \end{aligned} \end{equation} \end{proof} For all the best regularizations $ S_i $ in $ \Gamma $, we have $ S_iB=\theta_{i} \theta_{i}^{T}B $. This gives the minimum value of the MSE, which can be calculated as \begin{equation} \begin{aligned} &\operatorname{Tr}\left(\operatorname{MSEM}\mid_{S_{i}\in\Gamma}\right)=\operatorname{Tr}\left(\operatorname{MSEM}\mid_{\theta_{i} \theta_{i}^{T}}\right)\\ =&\operatorname{Tr}\bigg[\left(N \theta_{i} \theta_{i}^{T} B+I\right)^{-1}(N \theta_{i} \theta_{i}^{T} B \theta_{i} \theta_{i}^{T}\\ &+\theta_{{i}}\theta_{i}^{T})\left(N B \theta_{i} \theta_{i}^{T}+I\right)^{-1}\bigg] \\ =&\operatorname{Tr}\left[\theta_{{i}}\theta_{i}^{T}\left(N B \theta_{i} \theta_{i}^{T}+I\right)^{-1}\right]. \end{aligned} \end{equation} \begin{remark} In practice, we do not know the true values of $ B $ and $ \theta_{{i}} $. One possible solution is to use a rough estimate $ \hat\theta_i $ and $ \hat B_N $ to replace $ \theta_{i} $ and $ B $ in $ \Gamma $. In this case, there may exist an optimal choice of $ Z_3 $ to achieve the minimum MSE and we leave it as an open problem. \end{remark} Here, we compare Theorem \ref{theorem1} and Theorem \ref{theorem2}. If $ \theta_{{i}}\in R(B) $, then for any full-rank kernel matrix $ S_{i} $, $ \theta_{{i}}\in R(S_iB) $ and thus the MSE scales as $O\left(\frac{1}{N}\right) $. For any $ S_i \in \Gamma $, we can obtain the minimum MSE. In addition, $ \theta_{{i}}\in R\left(\theta_{{i}}\theta_{i}^{T}B\right)= R(S_iB) $, and thus the MSE also scales as $O\left(\frac{1}{N}\right) $. 
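The characterization of $ \Gamma $ can also be verified numerically. The following Python sketch (synthetic rank-deficient $B$, $\theta_{i}\in R(B)$, $\sigma^{2}=1$, with $N$ absorbed into $B$; all values are illustrative) checks that no positive semi-definite kernel attains a smaller MSE than the members of $\Gamma$, and that a non-rank-1 member of $\Gamma$ is equally optimal:

```python
import numpy as np

# Numerical check of Theorem (theorem2): for theta in R(B), the kernel
# S = theta theta^T minimizes the MSE, and any member of
# Gamma = {theta theta^T + V diag(0, Z3) V^T} attains the same minimum.
rng = np.random.default_rng(2)
d2, r = 4, 3
V = np.linalg.qr(rng.normal(size=(d2, d2)))[0]
B = V[:, :r] @ np.diag([3.0, 1.5, 0.7]) @ V[:, :r].T   # singular PSD design
theta = B @ rng.normal(size=d2)                         # theta in R(B)
I = np.eye(d2)

def mse(S):
    """Tr[(S B + I)^{-1} (S B S + theta theta^T) (B S + I)^{-1}]."""
    L = np.linalg.inv(S @ B + I)
    return np.trace(L @ (S @ B @ S + np.outer(theta, theta)) @ L.T)

best = mse(np.outer(theta, theta))
for _ in range(50):
    A = rng.normal(size=(d2, d2))
    assert mse(A @ A.T) >= best - 1e-9      # no PSD kernel beats Gamma

# A non-rank-1 member of Gamma is equally optimal (the negative answer
# to uniqueness in the I.I. scenario).
S_gamma = np.outer(theta, theta) + 0.8 * np.outer(V[:, 3], V[:, 3])
assert np.isclose(mse(S_gamma), best)
print("minimum MSE:", best)
```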
If $ \theta_{{i}}\in N(B) $, all the ideal measurement data $ p_{ij} $ are zero, i.e., we cannot obtain any information from the measurement data. Therefore, $ \theta_{{i}} $ is not identifiable. If $ \theta_{{i}}=\theta_{{i,1}}+\theta_{{i,2}} $ where $\theta_{{i,1}}\neq0, \theta_{{i,1}}\in R(B) $ and $\theta_{{i,2}}\neq 0, \theta_{{i,2}}\in N(B) $, then $ \theta_{{i,1}} $ is identifiable and $ \theta_{{i,2}} $ is not identifiable. Therefore, we only aim to identify $ \theta_{{i,1}} $ and the discussion is the same as for $ \theta_{{i}}\in R(B) $. We then consider two special cases. The first is that $ B $ is full-rank. Then $ \theta_{i}\in R(B) $ is always satisfied and the unique best kernel matrix is $ S_i=\theta_{{i}}\theta_{i}^{T} $, which is the same as in \cite{CHEN20121525}. The second is $ S_{i}=\gamma\theta_{{i}}\theta_{i}^{T} $ where $ \gamma $ is a positive constant. Even if $\theta_{{i}}\notin R(B) $, we still have $ \theta_{{i}}\in R(S_iB) $ (provided $ \theta_{{i}}\notin N(B) $, otherwise $ R(S_{i}B)=\{0\} $), and thus the MSE also scales as $O\left(\frac{1}{N}\right) $. Note that all the above discussion assumes that $ N $ tends to infinity. When $ N $ is small, the performance of the regularization forms will be shown through simulation in Sec. \ref{ns}. \section{Numerical simulation}\label{ns} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{re16.pdf} \caption{The error scalings of different regularization forms with WLS using $ 16 $ types of $ 4 $ dimensional pure states. When the resource number $ N>10^6 $, all the MSEs scale as $ O(\frac{1}{N}) $ satisfying Theorem \ref{theorem1}. The best regularization is $S_i^{\text{best}}=\theta_{i} \theta_{i}^{T}$, which gives the lower bound of the MSE and depends on the true value of $ \theta_i $. Therefore, it cannot be used in practice, and we aim to achieve regularization closest to the best regularization.
} \label{re16} \end{figure*} In this section, we discuss two commonly used classes of probe states for QDT. The first one involves $ d $ dimensional pure states $ \rho=\left|\psi\right\rangle\left\langle\psi\right|$ where $ \left|\psi\right\rangle$ is a superposition of $ d $ dimensional Fock states as \begin{equation}\label{pure} \left|\psi\right\rangle= \sum_{n=1}^{d} c_{n}\left|n\right\rangle. \end{equation} In \cite{xiao2021optimal}, an analysis indicates that pure states may perform better than mixed states for QDT in minimizing the MSE. Another class of commonly used probe states for QDT is the coherent states, because they are more straightforward to prepare. A coherent state is denoted as $|\alpha\rangle$ where $\alpha \in \mathbb{C}$ and it can be expanded using number states as \begin{equation} |\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}} \sum_{i=0}^{\infty} \frac{\alpha^{i}}{\sqrt{i !}}|i\rangle. \end{equation} Coherent states are in essence infinite dimensional. Denote the corresponding $ d $-dimensional truncation as \begin{equation*} |\alpha_{d}\rangle\triangleq\mathrm{e}^{-\frac{|\alpha|^{2}}{2}} \sum_{i=0}^{d-1} \frac{\alpha^{i}}{\sqrt{i !}}|i\rangle. \end{equation*} To estimate a $d$ dimensional detector, in the simulation we assume that the outcomes generated by the residual signal $ \operatorname{Tr}\left[\left(|\alpha\rangle-\left|\alpha_{d}\right\rangle\right)\left(\langle\alpha|-\left\langle\alpha_{d}\right|\right)\right] $ are all included in the outcomes of the last POVM element. Since we truncate the coherent state to $ d $ dimensions, $ \operatorname{Tr}\left(\left|\alpha_{d}\right\rangle\langle\alpha_{d}|\right)<1$, while for the pure states in \eqref{pure} $ \operatorname{Tr}\left(\rho\right)=1 $. In the following, we discuss resource distribution optimization without regularization, and compare different regularization forms under uniformly distributed resources.
\subsection{Superposed Fock states}\label{purenum} We consider a $ 4$ dimensional three-valued phase-sensitive detector as \begin{equation}\label{detector4} \begin{aligned} P_{1}^{(4)}&=\left[\begin{array}{cccc}0.1 & 0 & 0.02-0.05 \mathrm{i} & 0.03+0.07 \mathrm{i} \\ 0 & 0.2 & 0 & 0 \\ 0.02+0.05 \mathrm{i} & 0 & 0.3 & 0 \\ 0.03-0.07 \mathrm{i} & 0 & 0 & 0.4\end{array}\right],\\ P_{2}^{(4)}&=\left[\begin{array}{cccc} 0.2 & 0.01+0.02 \mathrm{i} & 0 & 0 \\ 0.01-0.02 \mathrm{i} & 0.2 & 0 & 0 \\ 0 & 0 & 0.3 & 0 \\ 0 & 0 & 0 & 0.4 \end{array}\right],\\ P_{3}^{(4)}&=I-P_{1}^{(4)}-P_{2}^{(4)}. \end{aligned} \end{equation} Using the algorithm in \cite{MISZCZAK2012118,qetlab}, we generate $ 16 $ different types of $ 4 $ dimensional pure states. We use different regularization forms including no regularization (\eqref{Ti} with $ c=0$), Tikhonov regularization (\eqref{Ti} with $ c=10$), kernel-based regularization (\eqref{DI} with $c=0.09, \mu=0.9 $), rank-1 adaptive regularization, full-rank adaptive regularization (see Sec. \ref{sec5}) and the best regularization \eqref{best}. The best regularization is the lower bound of MSE and depends on true value of $ \theta $. Therefore, it cannot be used in practice and we aim to achieve regularization closest to the best regularization. For rank-1 adaptive regularization, we use kernel-based regularization (\eqref{DI} with $c=0.09, \mu=0.9 $) in step 1 and \eqref{first} in step 2. For full-rank adaptive regularization, we use kernel-based regularization (\eqref{DI} with $c=0.09, \mu=0.9 $) in step 1 and \eqref{full} in step 2. For each resource number, we run the algorithm $ 100 $ times and obtain the average MSE and standard deviation. The results are shown in Fig. \ref{re16}. The best regularization scales as $ O(\frac{1}{N}) $ satisfying Theorem \ref{theorem1}. 
When the resource number $ N<10^6 $, the MSEs of kernel-based regularization and adaptive regularization are a little smaller than those of Tikhonov regularization and no regularization. In addition, full-rank adaptive regularization has a slightly smaller MSE than rank-1 adaptive regularization. When the resource number $ N>10^6 $, all the MSEs scale as $ O(\frac{1}{N}) $, satisfying Theorem \ref{theorem1}. Since these $ 4 $-dimensional pure states are I.C., we also consider resource distribution optimization without regularization. We compare the MSE of the case with evenly distributed resources $N/M$ (``Average'' in Fig. \ref{rsop}) and the MSE of the case with optimized resource distribution (``Optimized'' in Fig. \ref{rsop}) obtained by solving \eqref{rs2}. For each resource number $ N $, we run the algorithm $ 100 $ times and obtain the average MSE and standard deviation. The results are shown in Fig. \ref{rsop}. We obtain a lower MSE with resource distribution optimization, and both MSEs scale as $ O\left(\frac{1}{N}\right) $ when $ N>10^5 $. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{rsop.pdf} \caption{The MSE comparison between average and optimized resource distribution using $ 16 $ types of $ 4 $-dimensional pure states.} \label{rsop} \end{figure} Then we generate only $ 10 $ random types of $ 4 $-dimensional pure states using the same algorithm and regularization forms. The results are shown in Fig. \ref{re10}. In this I.I. scenario, there does not exist a unique solution for WLS \eqref{awls} without regularization. Therefore, we use the Moore-Penrose inverse of $\tilde X_{i}^{T}\tilde X_{i} $ to obtain an estimate instead of \eqref{awls}, which is called ``no regularization'' in Fig. \ref{re10}. The best regularization also scales as $ O(\frac{1}{N}) $, satisfying Theorem \ref{theorem1}. The MSEs of kernel-based regularization and adaptive regularization are always a little smaller than those of Tikhonov regularization and no regularization.
Moreover, the MSEs of adaptive regularization are close to the MSE of kernel-based regularization. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{re10.pdf} \caption{The error scalings of different regularization forms with WLS using $ 10 $ types of $ 4 $-dimensional pure states. Except for the best regularization, all the MSEs tend to constants as predicted by Theorem \ref{theorem1} because $ \theta_{i}\in R(S_{i}B) $ does not hold. Using the true parameters $ \theta_{i} $, the best regularization is $S_i^{\text{best}}=\theta_{i} \theta_{i}^{T}$ and thus $ \theta_{i}\in R(S_{i}B) $ always holds. According to Theorem \ref{theorem1}, the best regularization scales as $ O\left(\frac{1}{N}\right) $ for arbitrary detectors.} \label{re10} \end{figure*} Here we explain why adaptive regularization with a rank-1 kernel matrix fails to exhibit a clear advantage over the typical non-adaptive protocols (as shown in Fig. \ref{re10}) in the I.I. scenario. In the first step, for the chosen kernel matrix $ S_{i}^{(1)} $, the condition $ {\theta}_i\in R(S_{i}^{(1)}{B}) $ is usually not satisfied in the I.I. scenario. Thus, the estimate $ \hat{\theta}_{i}^{0} $ is biased, and the MSE tends to a positive constant $ c $ as \begin{equation}\label{adaptiveerror} \lim _{N \rightarrow \infty} \mathbb{E}\left\|\theta_{i}- \hat{\theta}_{i}^{0}\right\|=c>0. \end{equation} Then in the second step, if we choose the regularization as \eqref{first}, $ {\theta}_i\notin R(S^{\text{rank-1}}_{i}{B}) $ because $ R(S^{\text{rank-1}}_{i} B) $ is spanned only by $ {\hat\theta}_{i}^{0} $ and $ \lim _{N \rightarrow \infty} \mathbb{E}\left\|\theta_{i}- \hat{\theta}_{i}^{0}\right\|=c>0 $. Moreover, even if we use multi-step regularization with rank-1 kernel matrices as above, the estimation result is still biased and the MSE always tends to a constant, because the number of adaptive steps is always finite.
As $ N $ increases, all the MSEs except that of the best regularization tend to constants, as predicted by Theorem \ref{theorem1}, because $ \theta_{i}\in R(S_{i}B) $ does not hold. \subsection{Coherent states} Since coherent states are truncated, we consider a higher-dimensional three-valued phase-sensitive detector as \begin{equation} \begin{aligned} &P_{1}^{(8)}=U_{1} \operatorname{diag}\left(P_{1}^{(4)}, P_{1}^{(4)}\right) U_{1}^{\dagger}, \\ &P_{2}^{(8)}=U_{2} \operatorname{diag}\left(P_{2}^{(4)}, P_{2}^{(4)}\right) U_{2}^{\dagger},\\ &P_{3}^{(8)}=I-P_{1}^{(8)}-P_{2}^{(8)}, \end{aligned} \end{equation} where $ d=8 $ and $ U_1 $, $ U_2 $ are random unitary matrices \cite{Zyczkowski_1994,qetlab}. We also ensure that $ P_{3}^{(8)} $ is positive semidefinite. We use different regularization forms including no regularization (\eqref{Ti} with $ c=0$), Tikhonov regularization (\eqref{Ti} with $ c=10$), kernel-based regularization (\eqref{DI} with $c=0.6, \mu=0.95 $), rank-1 adaptive regularization, full-rank adaptive regularization (see Sec. \ref{sec5}) and the best regularization \eqref{best}. For rank-1 adaptive regularization, we use kernel-based regularization (\eqref{DI} with $c=0.6, \mu=0.95 $) in step 1 and \eqref{first} in step 2. For full-rank adaptive regularization, we use kernel-based regularization (\eqref{DI} with $c=0.6, \mu=0.95 $) in step 1 and \eqref{full} in step 2. For each resource number, we run the algorithm $ 100 $ times and obtain the average MSE and standard deviation. Since coherent states are more similar to each other, we generate $ 640 $ different random types of coherent states using the probe state preparation in \cite{wang2019twostage}, where the real and imaginary parts of $ \alpha $ are randomly generated in the interval $ [-1,1] $. The results are shown in Fig. \ref{co640}. When $ N<10^8 $, the MSEs of kernel-based regularization and adaptive regularization are a little smaller than those of Tikhonov regularization and no regularization.
In addition, full-rank adaptive regularization has a slightly smaller MSE than rank-1 adaptive regularization. When $ N>10^{8} $, all the MSEs scale as $ O(\frac{1}{N}) $, satisfying Theorem \ref{theorem1}. Since these coherent states are I.C., we also consider resource distribution optimization without regularization. The simulation results are shown in Fig. \ref{co640op}. We also obtain a lower MSE with resource distribution optimization, and both MSEs scale as $ O\left(\frac{1}{N}\right) $ for $ N>10^{7} $. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{co640.pdf} \caption{The error scalings of different regularization forms with WLS using $ 640 $ types of coherent states. When $ N>10^{8} $, all the MSEs scale as $ O(\frac{1}{N}) $, satisfying Theorem \ref{theorem1}. The best regularization is $S_i^{\text{best}}=\theta_{i} \theta_{i}^{T}$, which attains the lower bound of the MSE and depends on the true value of $ \theta $. Therefore, it cannot be used in practice, and we aim to come as close to it as possible.} \label{co640} \end{figure*} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{co640op.pdf} \caption{The MSE comparison between average and optimized resource distribution using $ 640 $ types of coherent states.} \label{co640op} \end{figure} Then, using the same algorithm as in \cite{wang2019twostage}, we generate only $ 48 $ random types of coherent states where the real and imaginary parts of $ \alpha $ are randomly generated in the interval $ [-1,1] $. We change the kernel-based regularization to \eqref{DI} with $c=0.6, \mu=0.01 $. The results are shown in Fig. \ref{co48}. Kernel-based regularization and adaptive regularization always have smaller MSEs compared with Tikhonov regularization and no regularization. When $ N>10^{10} $, all the MSEs except that of the best regularization tend to constants, as predicted by Theorem \ref{theorem1}, because $ \theta_{i}\in R(S_{i}B) $ does not hold.
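All the estimators compared above are regularized WLS estimates; the precise forms \eqref{Ti}, \eqref{DI}, and \eqref{awls} are defined earlier in the paper. Purely as an illustration, a minimal sketch of a generic Tikhonov-type regularized WLS step (assumed closed form $\hat\theta=(X^TWX+D)^{-1}X^TWy$, with illustrative random data, not the paper's exact estimator) is:

```python
import numpy as np

def regularized_wls(X, y, W, D):
    """Generic regularized weighted least squares: minimizes
    (y - X theta)^T W (y - X theta) + theta^T D theta."""
    return np.linalg.solve(X.T @ W @ X + D, X.T @ W @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))       # illustrative design matrix
theta_true = rng.normal(size=5)
y = X @ theta_true                 # noiseless data for a sanity check
W = np.eye(50)
theta_hat = regularized_wls(X, y, W, np.zeros((5, 5)))  # D = 0: plain WLS
```

With $D=0$ and noiseless data the estimate recovers the true parameter exactly; a positive-definite $D$ shrinks the estimate, trading bias for variance as in the comparisons above.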
\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{co48.pdf} \caption{The error scalings of different regularization forms with WLS using $ 48 $ types of coherent states. Except for the best regularization, all the MSEs tend to constants as predicted by Theorem \ref{theorem1} because $ \theta_{i}\in R(S_{i}B) $ does not hold. Using the true parameters $ \theta_{i} $, the best regularization is $S_i^{\text{best}}=\theta_{i} \theta_{i}^{T}$ and thus $ \theta_{i}\in R(S_{i}B) $ always holds. According to Theorem \ref{theorem1}, the best regularization scales as $ O\left(\frac{1}{N}\right) $ for arbitrary detectors.} \label{co48} \end{figure*} \section{Experimental examples}\label{secexp} We consider the same quantum optical experimental system for QDT as in \cite{Yokoyama:19} and \cite{wang2019twostage}. Ref. \cite{wang2019twostage} used Tikhonov regularization based on standard LS to complete the QDT. Here, we consider the same experimental data and instead employ kernel-based regularization based on WLS to further improve the QDT accuracy. \subsection{Experimental setup}\label{expset} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{setting.pdf} \caption{The detailed structure and description of the experimental setup can be found in \cite{wang2019twostage,Yokoyama:19}. Att., Attenuator; PBS, Polarization beam splitter; H, Half wave plate; Q, Quarter wave plate. In the experiment, we generate two-mode coherent states as in \eqref{twomode} and input them to the emulated two-mode detector. Then we obtain the detector outcomes and identify this two-mode detector. } \label{fig} \end{figure} The entire experimental setup is given in Fig. \ref{fig}, which determines the structure of the detector to be estimated. More details about this setup can be found in \cite{wang2019twostage,Yokoyama:19}.
It leads to block-diagonal binary detectors $ P_0+P_1=I $ as \begin{equation}\label{eqa43} P_i=L^{(i)}_1\oplus L^{(i)}_2\oplus \cdots\oplus L^{(i)}_m, \end{equation} where $m$ is the number of different blocks and $L^{(i)}_j\geq 0$ is $d_j\times d_j$ dimensional, with $\sum_{j=1}^m d_j=d$. Hence, we need to identify each block $L^{(i)}_j$. Two-mode coherent states are prepared for detector tomography by using an adequately attenuated continuous-wave (CW) fiber coupled laser as depicted in the green dashed box in Fig. \ref{fig} \cite{wang2019twostage,Yokoyama:19}. We express the general two-mode coherent state without global phase as $|\alpha,\beta \text{e}^{\text{i}\delta}\rangle$ ($\delta\in\mathbb{R}$, $\alpha,\beta\geq0$), which can be expanded in the photon number basis as \begin{equation}\label{twomode} |\alpha,\beta \text{e}^{\text{i}\delta}\rangle=\exp[-\frac{1}{2}(\alpha^2+\beta^2)] \sum_{j,k}^{\infty}\frac{\alpha^j\beta^k\text{e}^{\text{i}k\delta}} {\sqrt{j!k!}}|j,k\rangle, \end{equation} and the parameters of the $ 19 $ probe states used are shown in \cite{wang2019twostage,Yokoyama:19}. The amplitudes of these coherent states satisfy $(\alpha, \beta) \in\{(0.316,0.316),(0.447,0),(0,0.447),(0.194,0.112),(0.112,$ $0.194), (0,0)\}$. Although the probe states are I.C., the condition number of the probe states' parameterization matrix $ X $ is large and the problem is ill-conditioned. Thus, we add regularization to identify each block $ L_j $. After regularized WLS, we obtain an estimate $\{\hat E_i\}$ which might not be positive semidefinite. Then we use the Stage 2 algorithm as in \cite{wang2019twostage} in each block and obtain $ \hat Q^{(i)}_j $. The final estimation is thus $\hat P_i=\hat Q^{(i)}_1\oplus \hat Q^{(i)}_2\oplus \cdots\oplus \hat Q^{(i)}_m$, which is physical and also satisfies the block-diagonal requirement. \subsection{Result comparison} Ref. 
\cite{wang2019twostage} considered experiments for two different sets of detectors, denoted as Group I and Group II, respectively. For the true value of Group I, $P_1=L_1^{(1)}\oplus L_2^{(1)}\oplus L_3^{(1)}$, and we have $L_1^{(1)}=2.91\times 10^{-4}$, \begin{equation} L_2^{(1)}=\left[ \begin{array}{cc} 0.202&0.00109\text{i}\\ -0.00109\text{i}&0.202\\%-9.48\times10^{-20}i \end{array}\right],\nonumber \end{equation} and \begin{equation} L_3^{(1)}=\left[ \begin{array}{ccc} 0.363&0.00123\text{i}&1.20\times10^{-6}\\%+1.07\times10^{-19}i+2.07\times10^{-22}i -0.00123\text{i}&0.363&0.00123\text{i}\\ 1.20\times10^{-6}&-0.00123\text{i}&0.363\\%-1.07\times10^{-19}i \end{array}\right].\nonumber \end{equation} For the true value of Group II, we have $L_1^{(1)}=1.27\times 10^{-4}$, \begin{equation} L_2^{(1)}=\left[ \begin{array}{cc} 0.0763&-0.0440+0.0879\text{i}\\ -0.0440-0.0879\text{i}&0.127\\%-9.48\times10^{-20}i \end{array}\right],\nonumber \end{equation} and $L_3^{(1)}=$ \begin{equation} \!\!\left[\!\! \setlength{\arraycolsep}{1.6pt} \begin{array}{ccc} 0.147&-0.0574+0.115\text{i}&0.00580+0.00773\text{i}\\ -0.0574-0.115\text{i}&0.184&-0.0543+0.109\text{i}\\ 0.00580-0.00773\text{i}&-0.0543-0.109\text{i}&0.238\\%-1.07\times10^{-19}i \end{array}\!\right]\!\!\!.\nonumber \end{equation} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{exp1.pdf} \caption{Experimental and simulation QDT results of Tikhonov regularization (LS), rank-1 adaptive regularization (WLS) and full-rank adaptive regularization (WLS) for Group I.} \label{exp1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{exp2.pdf} \caption{Experimental and simulation QDT results of Tikhonov regularization (LS), rank-1 adaptive regularization (WLS) and full-rank adaptive regularization (WLS) for Group II.} \label{exp2} \end{figure} Ref. \cite{wang2019twostage} recorded $10^{6}$ measurement outcomes for each input state, and repeated the process $6$ times. 
We use these measurement data to identify the detectors, and also plot the identification results using simulated measurement data as a comparison in Fig. \ref{exp1} and Fig. \ref{exp2}. For the QDT problem, Ref. \cite{wang2019twostage} employed Tikhonov regularization with standard LS estimation, where they chose $ D_i^{\text{Tikhonov}}=\frac{10^3}{N} I $ and the estimation is given in \eqref{wangtik}, while here we use rank-1 adaptive regularization and full-rank adaptive regularization with WLS. Since the result of kernel-based regularization is similar to that of adaptive regularization, we only show the results of adaptive regularization. In Group I, for rank-1 adaptive regularization, we choose $c=0.001, \mu=0.8 $ in \eqref{DI} in step 1 and \eqref{first} in step 2; for full-rank adaptive regularization, we choose $c=0.001, \mu=0.8 $ in \eqref{DI} in step 1 and \eqref{full} in step 2. The results are shown in Fig. \ref{exp1}. Adaptive regularization (WLS) performs better than the Tikhonov regularization (LS) of \cite{wang2019twostage}, especially for large resource numbers $ N $. In addition, the MSE of full-rank adaptive regularization is slightly smaller than that of rank-1 adaptive regularization. In Group II, for rank-1 adaptive regularization, we choose $c=0.0008, \mu=0.9 $ in \eqref{DI} in step 1 and \eqref{first} in step 2, and for full-rank adaptive regularization, we choose $c=0.0008, \mu=0.9 $ in \eqref{DI} in step 1 and \eqref{full} in step 2. The results are shown in Fig. \ref{exp2}. Adaptive regularization (WLS) performs better than Tikhonov regularization (LS) when $ N>10^{2.5} $, and the MSE of full-rank adaptive regularization is always slightly smaller than that of rank-1 adaptive regularization. Moreover, the MSE of Group II is a little larger than that of Group I because the amplitudes of the nondiagonal elements in Group II are significantly larger than zero.
\section{Conclusion}\label{con} In this paper, using regularization, we improve QDT accuracy with given probe states. In the I.C. and I.I. scenarios, we have employed WLS estimation, discussed different regularization forms, proved the scaling of MSE under the static assumption and characterized the best regularization. In the I.C. scenario, without regularization, we have studied resource distribution optimization and converted it to an SDP problem. The numerical examples have demonstrated the effectiveness of different regularization forms and resource distribution optimization. In a quantum optical experiment, our adaptive regularization with WLS has achieved lower mean squared errors compared with Tikhonov regularization with LS. It remains an open problem how to choose the kernel optimally in adaptive regularization.
\section{Introduction}\label{s:intro} Negative feedback from active galactic nuclei (AGN) and starbursts plays a fundamental role in the evolution of galaxies according to theoretical models and numerical simulations (e.g., \citealt{Narayanan2011, Scannapieco2012, Hopkins2012, Nelson2015, Schaye2015}). This feedback occurs through the injection of material, energy, and momentum into the interstellar medium (ISM); it gives rise to massive gas outflows and regulates the growth of the stellar mass and the black-hole accretion. Such energetic and massive outflows have been detected in galaxies at low and high redshift. In particular, they have been detected in ultra-luminous infrared galaxies (ULIRGs; \hbox{$L_{\rm IR}$}$>10^{12}$\hbox{$L_{\rm \odot}$}) in their atomic ionized (e.g., \citealt{Westmoquette2012,Arribas2014}), atomic neutral (e.g., \citealt{Rupke2005, Cazzoli2016}), and cold molecular (e.g., \citealt{Fischer2010, Feruglio2010, Sturm2011, Cicone2014}) phases. Local ULIRGs are major gas-rich mergers mainly powered by star-formation (SF), although AGN accounting for a significant fraction of the total IR luminosity ($10-60\%$) are usually detected too \citep{Farrah2003,Nardini2010}. Since local ULIRGs host the most extreme starbursts in the local Universe, with star-formation rates (SFR) greater than $\sim$150\,\hbox{$M_{\rm \odot}$}\,yr$^{-1}$ based on their IR luminosities \citep{Kennicutt2012}, they are well-suited objects for studying the negative feedback from both AGN and SF. In this paper, we focus on the molecular phase of these outflows. This phase includes molecular gas with a wide range of temperatures. The hot ($T>1500$\,K) and the warm ($T>200$\,K) molecular phases can be observed using the near-IR ro-vibrational H$_2$ transitions (e.g., \citealt{Emonts2014, Dasyra2015, Emonts2017}) and the mid-IR rotational H$_2$ transitions (e.g., \citealt{Hill2014}).
However, it is thought that the energy and mass of these outflows are dominated by the cold molecular phase (e.g., \citealt{Feruglio2010, Cicone2014, Saito2018}), although some observations and models seem to contradict this view (e.g., \citealt{Hopkins2012,Dasyra2016}). The cold molecular phase has been detected using multiple CO transitions (e.g., \citealt{Feruglio2010, Chung2011, Cicone2014, GarciaBurillo2015, Pereira2016b}), HCN transitions (e.g., \citealt{Aalto2012, Walter2017, BarcosMunoz2018}), and far-IR OH absorption (\citealt{Fischer2010, Sturm2011, Spoon2013, GonzalezAlfonso2017}). All these observations have revealed that cold molecular outflows are common in ULIRGs and that they can be massive enough to play a relevant role in the regulation of the SF in their host galaxies. Knowing the distribution of the outflowing gas is important to derive accurate outflow properties, like the outflow mass, energy, and momentum rates, which are key to determining the impact of these outflows on their host galaxies. However, spatially resolved observations of outflows in ULIRGs are still limited to a few sources (e.g., \citealt{GarciaBurillo2015, Veilleux2017, Saito2018, BarcosMunoz2018}). Here, we present new high-angular resolution ($\sim0\farcs3-0\farcs4$) ALMA observations of the CO(2--1) transition in three low-$z$ ULIRGs where the cold molecular outflow phase is spatially resolved on scales of $\sim$500\,pc. This provides a direct measurement of the outflow size and, therefore, allows us to derive the outflow properties more accurately. This paper is organized as follows: the sample and the ALMA observations are described in Section~\ref{s:data}. In Section~\ref{s:ana}, we analyze the 248\,GHz continuum and CO(2--1) emission and measure the outflow properties in these systems. The energy source of the outflows, as well as their impact, launching mechanism, and multi-phase structure, are discussed in Section \ref{s:discu}.
Finally, in Section \ref{s:conclusions}, we summarize the main results of the paper. Throughout this article we assume the following cosmology: $H_{\rm 0} = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm m}=0.3$, and $\Omega_{\rm \Lambda}=0.7$. \section{Observations and data reduction}\label{s:data} \subsection{Sample of ULIRGs}\label{s:sample} \begin{table*}[ht] \caption{Sample of local ULIRGs} \label{tbl_sample} \centering \begin{small} \begin{tabular}{llccccccccccc} \hline \hline \\ {IRAS} Name & Component & R.A.\tablefootmark{a} & Dec.\tablefootmark{a} & v$_{\rm sys}$\tablefootmark{b} & $z$\tablefootmark{~c} & $d_{\rm L}$\tablefootmark{d} & Scale\tablefootmark{~d} & $\log \frac{L({\rm AGN})}{ L_\odot}$\tablefootmark{~e} &$\log \frac{L_{\rm IR}}{L_\odot}$\tablefootmark{~f} \\ & & (ICRS) & (ICRS) & (km\,s$^{-1}$) & & (Mpc) & (pc\,arcsec$^{-1}$)\\ \hline\\[-2ex] 12112$+$0305 & & & & & 0.0731 & 330 & 1390 & 11.4 & 12.19 \\ & SW & 12h13m45.939s & 2d48m39.10s & 21167 & & & \\ & NE & 12h13m46.056s & 2d48m41.53s & 21045 & & & \\ 14348$-$1447 & & & & & 0.0825 & 375 & 1554 & 11.6 & 12.27 \\ & SW & 14h37m38.280s & --15d00m24.24s & 23766 & & & \\ & NE & 14h37m38.396s & --15d00m21.29s & 23676 & & & \\ 22491$-$1808 & & & & & 0.0778 & 353 & 1469 & 11.5 & 12.03 \\ & E & 22h51m49.348s & --17d52m24.12s & 22412 & & & \\ & W\tablefootmark{*} & \nodata & \nodata & \nodata \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{Coordinates of the 248\,GHz rest-frame continuum emission for each nucleus (see Section \ref{ss:continuum}). The astrometric uncertainty is $\sim$25\,mas (see Section~\ref{ss:nuclear_kin}).} \tablefoottext{b}{CO(2--1) systemic velocity using the relativistic velocity definition in the kinematic local standard of rest (LSRK; see Section \ref{ss:nuclear_kin}). 
Typical uncertainties are $\lesssim$10\,km\,s$^{-1}$.} \tablefoottext{c}{Redshift using the average systemic velocity of the system.} \tablefoottext{d}{Luminosity distance and scale for the assumed cosmology (see Sect~\ref{s:intro}).} \tablefoottext{e}{Luminosity of the AGN in the system estimated from mid-IR spectroscopy \citep{Veilleux2009}.} \tablefoottext{f}{IR luminosity of the system based on the SED fitting of the {\it Spitzer} and {\it Herschel} { mid- and far-IR} photometry (see Section~\ref{ss:ir_lum}).} \tablefoottext{*}{No 248\,GHz continuum is detected at the position of the near-IR W nucleus of IRAS~22491$-$1808.}} \end{table*} In this paper, we study three low-$z$ ($d\sim350$\,Mpc) ULIRGs (six individual nuclei) with \hbox{$\log L_{\rm IR}\slash L_\odot=12.0-12.3$} (see Table~ \ref{tbl_sample}) { based on their mid- and far-IR spectral energy distribution modeling (Section~\ref{ss:ir_lum})}. These three ULIRG systems seem to be in a similar dynamical state. They were classified as type III by \citet{Veilleux2002}, which corresponds to a pre-merger stage characterized by two identifiable nuclei with well defined tidal tails and bridges. They also belong to the subclass of ``close binary'' (i.e., ``b'') as the projected separation of their nuclei is smaller than 10\,kpc. Their nuclei are classified as low ionization nuclear emission-line regions (LINER; IRAS~12112+0305 and IRAS~14348$-$1447; e.g., \citealt{Colina2000, Evans2002}) or \ion{H}{ii} (IRAS~22491$-$1808; e.g., \citealt{Veilleux1999}) and in all systems a weak AGN contribution ($10-15$\%) is detected in their mid-IR \textit{Spitzer}\ spectra \citep{Veilleux2009}. For IRAS~14348$-$1447, high-angular resolution mid-IR imaging { suggests} that the AGN is located in the SW nucleus \citep{AlonsoHerrero2016}. For IRAS~12112+0305 and IRAS~22491$-$1808, we assume that the AGN is at the brightest nucleus in the radio/sub-mm continuum, i.e., IRAS~12112+0305 NE and IRAS~22491$-$1808 E (see below). 
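The luminosity distances and scales in Table~\ref{tbl_sample} follow from the assumed flat cosmology ($H_{\rm 0}=70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm m}=0.3$, $\Omega_{\rm \Lambda}=0.7$). As an illustrative cross-check (a numerical sketch, not part of the original analysis), the comoving-distance integral can be evaluated directly:

```python
import math

C_KMS = 299792.458            # speed of light (km/s)
H0, OM, OL = 70.0, 0.3, 0.7   # assumed flat LCDM cosmology

def comoving_distance_mpc(z, steps=10000):
    """D_C = (c/H0) * integral_0^z dz'/E(z'), trapezoid rule."""
    dz = z / steps
    inv_e = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + OL)
             for i in range(steps + 1)]
    integral = dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return C_KMS / H0 * integral

def luminosity_distance_mpc(z):
    return (1 + z) * comoving_distance_mpc(z)

def scale_pc_per_arcsec(z):
    d_a_pc = comoving_distance_mpc(z) / (1 + z) * 1e6  # angular-diameter distance
    return d_a_pc * math.pi / (180 * 3600)             # 1 arcsec in radians
```

For IRAS~12112$+$0305 ($z=0.0731$) this gives $d_{\rm L}\approx330$\,Mpc and $\approx$1390\,pc\,arcsec$^{-1}$, matching Table~\ref{tbl_sample}.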
In addition, vibrationally excited HCN $J=4-3$ emission is detected in IRAS~12112+0305 NE and IRAS~22491$-$1808 E, which can be a signature of hot dust heated by an AGN \citep{Imanishi2016, Imanishi2018}. Furthermore, these three ULIRGs belong to a representative sample of local ULIRGs studied by \citet{GarciaMarin2009, GarciaMarin2009Part1}, \citet{Arribas2014}, and \citet{Piqueras2012} using optical and near-IR integral field spectroscopy. \subsection{ALMA data} We obtained Band 6 ALMA CO(2--1) 230.5\,GHz and continuum observations for these three local ULIRGs (see Table~\ref{tbl_sample}) as part of the ALMA projects 2015.1.00263.S and 2016.1.00170.S (PI: Pereira-Santaella). The observations were carried out between June 2016 and May 2017. The total on-source integration times per source were $\sim30-40$\,min split into two scheduling blocks. The baseline lengths range between 15 and 1100\,m, providing a synthesized beam full-width at half-maximum (FWHM) of $\sim0\farcs3-0\farcs4$ ($400-500$\,pc at the distance of these ULIRGs). Details on the observations for each source are listed in Table~\ref{tbl_alma_log}. Two spectral windows of 1.875\,GHz bandwidth (0.976\,MHz\,$\equiv\sim$1.3\,km\slash s channels) were centered at the sky frequencies of the $^{12}$CO(2--1) and CS(5--4) transitions (see Table~\ref{tbl_alma_log}). In addition, a continuum spectral window was set at $\sim$248\,GHz ($\sim$1.2\,mm). In this paper, we analyze the CO(2--1) and continuum spectral windows; the CS(5--4) data will be presented in a future paper. The data were calibrated using the ALMA reduction software CASA (v4.7; \citealt{McMullin2007}). The amplitude calibrators for each scheduling block are listed in Table \ref{tbl_alma_log}. For the CO(2--1) spectral window, a constant continuum level was estimated using the line-free channels and then subtracted in the $uv$ plane. For the image cleaning, we used the Briggs weighting with a robustness parameter of 0.5 \citep{Briggs1995PhDT}.
The synthesized beam ($\sim0\farcs3-0\farcs4$) and maximum recoverable scale ($\sim4\arcsec$) are presented in Table \ref{tbl_alma_log} for each observation. { To our knowledge, there are no single-dish CO(2--1) fluxes published for these ULIRGs so it is not straightforward to estimate if we filter part of the extended emission. However, the bulk of the CO(2--1)} emission of these systems is relatively compact (see Section~\ref{ss:molgas} { and Appendix~\ref{apx_channels}}), so we expect to recover most of the CO(2--1) emission with these array configurations. The final datacubes have 300$\times$300 spatial pixels of 0\farcs08 and 220 spectral channels of 7.81\,MHz ($\sim10$\,km\slash s). For the CO(2--1) { cubes}, the 1$\sigma$ sensitivity is $\sim310-450$\,$\mu$Jy\,beam$^{-1}$ per channel and $\sim30-45$\,$\mu$Jy\,beam$^{-1}$ for the continuum images. A primary beam correction (FWHM$\sim$20\arcsec) was applied to the data. \subsection{Near-IR \textit{HST} imaging} We downloaded the near-IR \textit{HST}\slash NICMOS F160W ($\lambda_{\rm c}=$1.60\hbox{\,$\mu$m}, FWHM=0.34\hbox{\,$\mu$m}) and F222M ($\lambda_{\rm c}=$2.21\hbox{\,$\mu$m}, FWHM=0.15\hbox{\,$\mu$m}) reduced images from the Mikulski Archive for Space Telescopes (MAST). The angular resolutions of these images are 0\farcs14 and 0\farcs20 for the F160W and F222M filters, respectively, which is slightly better than the resolution of the ALMA data. The ALMA and \textit{HST} images were aligned using the positions of the nuclei in the 248\,GHz and F222M images. The F222M filter was used because it is less affected by dust obscuration than F160W. { If the 2.2\hbox{\,$\mu$m}\ near-IR and 248\,GHz continua have similar spatial distributions in these ULIRGs, the uncertainty of the image alignment is about 0\farcs08 ($\sim$120\,pc) limited by the centroid accuracy in the {\it HST} data. 
} \begin{table*}[ht] \caption{ALMA observation log} \label{tbl_alma_log} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & Date & Observed & On-source & Maximum & Synthesized & \multicolumn{2}{c}{Amplitude calibrator} & Sensitivity\tablefootmark{c}\\ \cline{7-8}\\[-2.1ex] & & frequency\tablefootmark{a} & time & recoverable scale & beam\tablefootmark{b} & Name & Flux\\ & & (GHz) & (min) & & & & (Jy) & ($\mu$Jy\,beam$^{-1}$) \\ \hline\\[-2ex] 12112+0305 & 2017-05-08 & 214.81 & 37 & 3\farcs9 & 0\farcs36$\times$0\farcs27, --82\ensuremath{^\circ} & J1229+0203 & 7.01$\pm$0.27 & 310\slash 33 \\ & 2017-05-09 & & & & & & \\ 14348$-$1447 & 2017-05-09 & 212.87 & 39 & 4\farcs0 & 0\farcs32$\times$0\farcs26, --78\ensuremath{^\circ} & J1517-2422 & 1.79$\pm$0.11 & 340\slash 32 \\ & 2017-05-22 & & & & & & 2.15$\pm$0.15 \\ 22491$-$1808 & 2016-06-21 & 213.92 & 29 & 4\farcs0 & 0\farcs48$\times$0\farcs34, --84\ensuremath{^\circ} & Pallas & Butler\tablefootmark{d} & 450\slash 46 \\ & 2016-07-21 & & & & & & \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{Central observed frequency of the CO(2--1) spectral window.} \tablefoottext{b}{FWHM and position angle of the synthesized beam using Briggs weighting with a robustness parameter of 0.5.} \tablefoottext{c}{1$\sigma$ line\slash continuum sensitivities after combining the two scheduling blocks for each object. For the line sensitivity, we use the 7.8\,MHz ($\sim$10\,km\,s$^{-1}$) channels of the final data cube.} \tablefoottext{d}{Flux estimated using the Butler-JPL-Horizons 2012 models and ephemeris information (see ALMA Memo \#594).} } \end{table*} \section{Data analysis}\label{s:ana} \subsection{Morphology} In Figures~\ref{fig_data_i12112}, \ref{fig_data_i14348}, and \ref{fig_data_i22491}, we present the CO(2--1) and 248\,GHz continuum emission maps of the three ULIRGs. 
\subsubsection{Molecular gas}\label{ss:molgas} The molecular gas traced by the CO(2--1) transition, which is dominated by the emission from the central $\sim1-2$\,kpc, has an irregular morphology with multiple large-scale tidal tails (up to $\sim$10\,kpc) and isolated clumps. These characteristics are very likely connected to the ongoing galaxy interactions taking place in these systems. Similar tidal tails are observed in the stellar component in the near-IR {\it HST}\slash NICMOS NIC2 images (right hand panel of Figures~\ref{fig_data_i12112}--\ref{fig_data_i22491}). However, there are noticeable offsets, $\sim1-2$\,kpc, between the positions of these stellar and molecular tidal tails. To measure the total CO(2--1) emission of each system, we first defined the extent of this emission by selecting all the contiguous pixels where the CO(2--1) line peak is above 6$\sigma$ (see second panel of Figures~\ref{fig_data_i12112}, \ref{fig_data_i14348}, and \ref{fig_data_i22491}). Then, we integrated the line flux in this area. The resulting fluxes are presented in Table \ref{tbl_integrated}. The flux strongly peaks at the nuclei of these objects, so we calculate an effective radius based on the area, $A$, which encloses half of the total CO(2--1) emission as $R_{\rm eff} = \sqrt{A/\pi}$. This $R_{\rm eff}$ provides a better estimate of the actual size of the CO(2--1) emission. For these galaxies, the effective radius varies between 400\,pc and 1\,kpc (see Table \ref{tbl_integrated}). In both IRAS~12112 and IRAS~22491, the CO(2--1) emission is completely dominated by one of the galaxies, which produces 80\% and 90\%, respectively, of the total flux of the merging system. In IRAS~14348, the CO(2--1) emission is also dominated by one of the nuclei (SW), but this one is only two times brighter than the NE nucleus.
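The effective radius defined above is a simple function of the half-flux area. A minimal sketch (the half-flux area value used below is illustrative, chosen only to be consistent with the quoted $R_{\rm eff}$ of IRAS~12112 SW):

```python
import math

def effective_radius_pc(half_flux_area_kpc2: float) -> float:
    """R_eff = sqrt(A / pi) for the area A (in kpc^2) enclosing half of
    the total CO(2-1) flux; returns the radius in parsec."""
    return math.sqrt(half_flux_area_kpc2 / math.pi) * 1e3

def arcsec_to_pc(r_arcsec: float, scale_pc_per_arcsec: float) -> float:
    """Convert an angular radius to a physical one at the source distance."""
    return r_arcsec * scale_pc_per_arcsec
```

For example, 0\farcs31 at 1390\,pc\,arcsec$^{-1}$ corresponds to $\approx$430\,pc, as listed for IRAS~12112 SW in Table~\ref{tbl_integrated}; an assumed half-flux area of 0.58\,kpc$^2$ yields the same $R_{\rm eff}$.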
\begin{table}[t] \caption{Integrated CO(2--1) emission} \label{tbl_integrated} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & $S_{\rm CO}$\tablefootmark{a} & Total size\tablefootmark{b} & \multicolumn{2}{c}{$R_{\rm eff}$\tablefootmark{c}} \\ & (Jy\,km\,s$^{-1}$) & (kpc$^2$) & (arcsec) & (pc) \\ \hline\\[-2ex] I12112 SW & 24.5 & 9.3 $\pm$ 0.3 & 0.31 & 430 $\pm$ 30 \\ I12112 NE & 117.2 & 16.5 $\pm$ 0.4 & 0.41 & 570 $\pm$ 30 \\ I14348 SW & 105.9 & 18.9 $\pm$ 0.5 & 0.50 & 780 $\pm$ 40 \\ I14348 NE & 53.3 & 19.6 $\pm$ 0.5 & 0.38 & 590 $\pm$ 40 \\ I22491 E & 59.4 & 8.3 $\pm$ 0.3 & 0.30 & 450 $\pm$ 30 \\ I22491 W & 4.1 & 7.0 $\pm$ 0.3 & 0.81 & 1100 $\pm$ 40 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{Total CO(2--1) flux. { The absolute flux uncertainty is $\sim$10\%}.} \tablefoottext{b}{Size of the area where the CO(2--1) emission is detected at $>6\sigma$. { This is calculated as the number of pixels with emission $>6\sigma$ multiplied by the projected pixel area on the sky. The uncertainties are the square root of the number of pixels times the projected pixel area.}} \tablefoottext{c}{Effective radius of the region which encloses 50\%\ of the total CO(2--1) emission defined as $R_{\rm eff} = \sqrt{A/\pi}$ where $A$ is the area of this region.} } \end{table} \subsubsection{248\,GHz continuum}\label{ss:continuum} Except for the western nucleus of IRAS~22491, which is not seen at 248\,GHz, the remaining nuclei are clearly detected in both the CO(2--1) and continuum images. In all the cases, the 248\,GHz continuum emission is produced by a relatively compact source. To accurately measure the continuum properties, we used the {\sc uvmultifit} library \citep{MartiVidal2014} within CASA. This library can simultaneously fit various models to the visibility data. First, we tried a 2D circular Gaussian model which provided a good fit for all the sources except two (I12112 NE and I22491 E). 
For these two sources, we added a delta function with the same center as a second component to account for the unresolved continuum emission. This second unresolved component represents $70-80$\% of the total continuum emission in these objects. The results of the fits are presented in Table~\ref{tbl_continuum} and the center coordinates listed in Table~\ref{tbl_sample} (see also Appendix~\ref{apx_uv_fit}). The resolved continuum emission (2D circular Gaussian component of the model) has a FWHM between 260 and 1000\,pc, which is more compact than the CO(2--1) emission. For comparison, the CO(2--1) effective radius is 3 to 6 times larger than the FWHM\slash 2 of this 2D Gaussian component. Only in I22491 E do both have similar sizes, although, in this galaxy, the continuum emission is dominated by the unresolved component. Therefore, in all these ULIRGs, the 248\,GHz continuum emission is considerably more compact than the molecular CO(2--1) emission. { This is similar to what is observed in other local LIRGs and ULIRGs (e.g., \citealt{Wilson2008, Sakamoto2014, Saito2017}).} In IRAS~12112 and IRAS~22491, the continuum emission is dominated by the same nucleus that dominates the CO(2--1) emission (see Section \ref{ss:molgas}). The fractions of the 248\,GHz continuum produced by these dominant nuclei are 90\%\ and $>$95\%, respectively, which are slightly higher than their contributions to the total CO(2--1) luminosity of their systems (80\%\ and 90\%, respectively). In IRAS~14348, the SW nucleus produces 60\%\ of the continuum emission and the NE nucleus the remaining 40\%. These fractions are similar to those of the CO(2--1) produced in each nucleus (65\%\ and 35\%, respectively). The 248\,GHz continuum emission is possibly produced by a combination of thermal dust continuum, free-free radio continuum, and synchrotron emission.
The latter can be dominant at this frequency in the case of AGN, and, as discussed in Section~\ref{s:sample}, AGN emission is detected in the three ULIRGs. To determine the non-thermal contribution to the measured 248\,GHz fluxes, we use the available interferometric radio (1.49 and 8.44\,GHz) observations for these systems \citep{Condon1990, Condon1991}. The position of the 248\,GHz continuum sources is compatible with the location of the 1.49 and 8.44\,GHz radio continuum emission within 0\farcs15 ($\sim$half of the beam FWHM). Therefore, we assume that the radio and the 248\,GHz emissions are produced in the same regions. We also note that the western nucleus of IRAS~22491 is also undetected at radio wavelengths. For the rest of the sources, we fit a power-law to the 1.49 and 8.44\,GHz fluxes and obtain spectral indices between 0.42 and 0.72. Then, we use these spectral indices to extrapolate the non-thermal emission to 248\,GHz. On average, this represents 20\%\ of the 248\,GHz emission for these ULIRGs (see Table~\ref{tbl_continuum}), with a minimum (maximum) contribution of 14\% (43\%). Therefore, most of the 248\,GHz emission is likely due to thermal dust emission { and free-free radio continuum} produced in the compact nuclear region.
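The extrapolation amounts to fitting a power law $S_\nu\propto\nu^{-\alpha}$ through the two radio points and evaluating it at 248\,GHz. A minimal sketch with illustrative fluxes (not the measured photometry of these nuclei):

```python
import numpy as np

def nonthermal_fraction(s_low, s_high, nu_low, nu_high, s_mm, nu_mm):
    """Spectral index alpha (S ~ nu^-alpha) from two radio fluxes and the
    extrapolated non-thermal fraction at the mm frequency.
    Fluxes in mJy, frequencies in GHz."""
    alpha = np.log(s_low / s_high) / np.log(nu_high / nu_low)
    s_nonthermal = s_high * (nu_mm / nu_high) ** (-alpha)
    return alpha, s_nonthermal / s_mm

# illustrative values: 7.5 mJy at 1.49 GHz, 3.0 mJy at 8.44 GHz,
# and a 2.4 mJy continuum at 248 GHz
alpha, f_nt = nonthermal_fraction(s_low=7.5, s_high=3.0,
                                  nu_low=1.49, nu_high=8.44,
                                  s_mm=2.4, nu_mm=248.0)
```

For these numbers the spectral index is $\sim$0.53 and the non-thermal fraction $\sim$20\%, in the range quoted above.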
\begin{table*}[ht] \caption{ALMA continuum models} \label{tbl_continuum} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & Obs freq.\tablefootmark{a} & Rest freq.\tablefootmark{a} & Total flux\tablefootmark{b} & Delta\tablefootmark{c} & Gaussian\tablefootmark{c} & \multicolumn{2}{c}{FWHM\tablefootmark{d}} & Non-thermal\\ & (GHz) & (GHz) & (mJy) & (mJy) & (mJy) & (arcsec) & (pc) & fraction\tablefootmark{e} \\ \hline\\[-2ex] I12112 SW & 231.12 & 247.99 & 0.69 $\pm$ 0.05 & \nodata & 0.69 $\pm$ 0.05 & 0.19 $\pm$ 0.02 & 260 & 0.43 \\ I12112 NE & & & 6.81 $\pm$ 0.14 & 4.60 $\pm$ 0.10 & 2.21 $\pm$ 0.10 & 0.35 $\pm$ 0.02 & 480 & 0.20 \\ I14348 SW & 229.08 & 247.98 & 2.42 $\pm$ 0.05 & \nodata & 2.42 $\pm$ 0.05 & 0.17 $\pm$ 0.01 & 260 & 0.21 \\ I14348 NE & & & 1.63 $\pm$ 0.05 & \nodata & 1.63 $\pm$ 0.05 & 0.17 $\pm$ 0.01 & 270 & 0.21 \\ I22491 E & 229.64 & 247.50 & 5.16 $\pm$ 0.11 & 4.09 $\pm$ 0.06 & 1.07 $\pm$ 0.10 & 0.68 $\pm$ 0.08 & 1000 & 0.14 \\ I22491 W & & & $<$0.14$^{f}$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ \hline \end{tabular} \end{small} \tablefoot{The flux uncertainties are statistical uncertainties from the fit. The absolute flux calibration uncertainty is about 10\%. \tablefoottext{a}{Observed and rest frame continuum frequencies.} \tablefoottext{b}{Total flux of the continuum model.} \tablefoottext{c}{Flux of the delta (unresolved) and Gaussian components of the models.} \tablefoottext{d}{Deconvolved FWHM of the Gaussian component.} \tablefoottext{e}{Non-thermal emission fraction at 248\,GHz estimated from the radio 1.49 and 8.44\,GHz fluxes (\citealt{Condon1990, Condon1991}; see Section \ref{ss:continuum}).} \tablefoottext{f}{3$\sigma$ flux upper limit for an unresolved source.} } \end{table*} \begin{figure*} \centering \includegraphics[width=\textwidth]{alma_data_i12112.pdf} \caption{ALMA and \textit{HST} maps for IRAS~12112+0305. 
The first and second panels are the CO(2--1) integrated flux and { peak intensity for $\sim$10\,km\,s$^{-1}$ channels}, respectively. The contour levels in the second panel correspond to (6, 18, 54, 162, 484)$\times\sigma$, where $\sigma$ is the line sensitivity (Table \ref{tbl_alma_log}). The third panel is the ALMA 248\,GHz continuum. The contours in this panel are (3, 27, 81)$\times\sigma$ where $\sigma$ is the continuum sensitivity (Table \ref{tbl_alma_log}). The fourth panel shows the near-IR \textit{HST}\slash NICMOS F160W map with the CO(2--1) peak contours. The position of the two nuclei is marked with a cross in all the panels. The red hatched ellipse represents the FWHM { and PA} of the ALMA beam (Table \ref{tbl_alma_log}). \label{fig_data_i12112}} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{alma_data_i14348.pdf} \caption{Same as Figure~\ref{fig_data_i12112} but for IRAS~14348$-$1447.\label{fig_data_i14348}} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{alma_data_i22491.pdf} \caption{Same as Figure~\ref{fig_data_i12112} but for IRAS~22491$-$1808.\label{fig_data_i22491}} \end{figure*} \subsection{Molecular gas kinematics} \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{alma_v_i12112_SW.pdf} \includegraphics[width=0.48\textwidth]{alma_v_i12112_NE.pdf} \includegraphics[width=0.48\textwidth]{alma_v_i14348_SW.pdf} \includegraphics[width=0.48\textwidth]{alma_v_i14348_NE.pdf} \includegraphics[width=0.48\textwidth]{alma_v_i22491_E.pdf} \hspace{0.48\textwidth} \caption{First (left panel) and second (right panel) moments of the CO(2--1) emission for each component of the ULIRGs. The spacings between the contour levels in the first and second moment maps are 100\,km\,s$^{-1}$ and 25\,km\,s$^{-1}$, respectively. For the first moment maps, the velocities are relative to the systemic velocity (see Table~\ref{tbl_sample}). 
The red box in this panel indicates the field of view presented in Figure~\ref{fig_alma_posdiagram} for each object. The dashed and dotted lines mark the kinematic major axis and the outflow axis, respectively, defined in Section \ref{ss:nuclear_kin} (see also Figure~\ref{fig_alma_posdiagram}). The black cross marks the position of the 248\,GHz continuum peak. The red hatched ellipse shows the beam FWHM { and PA}. \label{fig_alma_vel}} \end{figure*} In Figure~\ref{fig_alma_vel}, we show the first and second moments of the CO(2--1) emission for each galaxy of the three ULIRG systems and indicate the outflow axis (dotted line) and the kinematic major axis of the nuclear disk (dashed line) { defined in Section~\ref{ss:nuclear_kin} (see Figure~\ref{fig_alma_posdiagram})}. The first moment maps indicate a complex velocity field, although a rotating disk pattern is present in all the systems. The second moment maps show that the velocity dispersion maximum ($120-170$\,km\,s$^{-1}$) is almost coincident with the location of the nucleus and that it is enhanced more or less along the molecular outflow axis (dotted line). The latter is expected since the high-velocity outflowing gas produces broad wings in the CO(2--1) line profile which enhance the observed second moment. 
\subsubsection{Nuclear disks and molecular outflows}\label{ss:nuclear_kin} \begin{table*}[ht] \caption{Nuclear molecular { emission}} \label{tbl_gaskin} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & PA\tablefootmark{a} & PA$_{\rm out}$\tablefootmark{b} & PA - PA$_{\rm out}$\tablefootmark{c} & v\tablefootmark{~d} & $\sigma$\tablefootmark{~e} & v\slash $\sigma$\tablefootmark{~f} & $i$\tablefootmark{~g} \\[0.1ex] & (deg) & (deg) & (deg) & (km\,s$^{-1}$) & (km\,s$^{-1}$) & & (deg) \\ \hline\\[-2ex] I12112 SW & 289 $\pm$ 2 & 213 $\pm$ 10 & 75 $\pm$ 11 & 81 $\pm$ 6 & 130 $\pm$ 9 & 0.62 $\pm$ 0.09 & 25 $\pm$ 14 \\ I12112 NE & 80 $\pm$ 2 & 353 $\pm$ 5 & 87 $\pm$ 6 & 120 $\pm$ 8 & 168 $\pm$ 3 & 0.71 $\pm$ 0.06 & 28 $\pm$ 15 \\ I14348 SW & 232 $\pm$ 4 & 107 $\pm$ 8 & 126 $\pm$ 9 & 60 $\pm$ 10 & 148 $\pm$ 4 & 0.40 $\pm$ 0.09 & 15 $\pm$ 10 \\ I14348 NE & 202 $\pm$ 5 & 112 $\pm$ 7 & 90 $\pm$ 9 & 120 $\pm$ 40 & 138 $\pm$ 6 & 0.89 $\pm$ 0.30 & 36 $\pm$ 31 \\ I22491 E & 348 $\pm$ 2 & 36 $\pm$ 20 & 133 $\pm$ 20 & 110 $\pm$ 10 & 122 $\pm$ 5 & 0.88 $\pm$ 0.12 & 36 $\pm$ 23 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a, b}{Position angle (East of North) of the receding part of the kinematic major axis and the high-velocity outflowing gas, respectively (see Section \ref{ss:nuclear_kin} and Figure~\ref{fig_alma_posdiagram}).} \tablefoottext{c}{Difference between the position angles of the outflow and the kinematic major axis.} \tablefoottext{d}{Semi-amplitude of the observed CO(2--1) rotation curve.} \tablefoottext{e}{Second moment of the nuclear CO(2--1) emission profile (Figures~\ref{fig_outflow_i12112_SW}--\ref{fig_outflow_i22491_E}).} \tablefoottext{f}{Observed dynamical ratio.} \tablefoottext{g}{{ Disk} inclination assuming an intrinsic dynamical ratio for local ULIRGs of 1.5 $\pm$ 0.6 (See Section \ref{ss:nuclear_kin}; \citealt{GarciaMarin06, Bellocchi2013}).} } \end{table*} \begin{figure*} \centering 
\includegraphics[width=0.32\textwidth]{pos_diagram_i12112_SW.pdf} \includegraphics[width=0.32\textwidth]{pos_diagram_i12112_NE.pdf} \includegraphics[width=0.32\textwidth]{pos_diagram_i14348_SW.pdf} \includegraphics[width=0.32\textwidth]{pos_diagram_i14348_NE.pdf} \includegraphics[width=0.32\textwidth]{pos_diagram_i22491_E.pdf} \hspace{0.32\textwidth} \caption{Centroid of the CO(2--1) emission measured in each { $\sim$10\,km\,s$^{-1}$} velocity channel. The color of the points indicates the CO(2--1) velocity with respect to the systemic velocity not corrected for inclination. The { rose} diamond marks the position of the 248\,GHz continuum peak. The dashed line is the linear fit to the low velocity gas and corresponds to the kinematic major axis of the rotating disk. The dotted line is the linear fit to the high-velocity red- and blue-shifted gas which traces the projection of the outflow axis in the sky. The error bars in each point indicate the statistical uncertainty in the centroid position. The gray error bars represent the astrometric accuracy of these observations for channels with SNR$>$10 (see Section~\ref{ss:nuclear_kin}). \label{fig_alma_posdiagram}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{rotcurve_i12112_SW.pdf} \includegraphics[width=0.32\textwidth]{rotcurve_i12112_NE.pdf} \includegraphics[width=0.32\textwidth]{rotcurve_i14348_SW.pdf} \includegraphics[width=0.32\textwidth]{rotcurve_i14348_NE.pdf} \includegraphics[width=0.32\textwidth]{rotcurve_i22491_E.pdf} \hspace{0.32\textwidth} \caption{Rotation curves of the ULIRGs extracted along the kinematic major axis. The red line shows the best fit arctan model to the data. The fit results are presented in Table~\ref{tbl_gaskin}. 
\label{fig_rotcurve}} \end{figure*} Similar to \citet{GarciaBurillo2015}, we derive the centroid of the CO(2--1) emission in each velocity channel to study the nuclear gas kinematics and identify high-velocity gas decoupled from the rotating disks. The results are presented in Figure~\ref{fig_alma_posdiagram}. { Thanks to the high SNR of the data, we are able to determine the centroid positions with statistical uncertainties $<$10\,mas for most of the channels. The astrometric accuracy for the frequency and array configuration of these observations is $\sim$25\,mas for channels with a SNR higher than 10. For channels with a SNR of $\sim$3, this accuracy is reduced to $\sim$80\,mas\footnote{ see Section 10.5.2 of the ALMA Cycle 6 Technical Handbook.}. Therefore, the shifts of the centroid positions shown in Figure~\ref{fig_alma_posdiagram} are expected to be real.} For all the objects, the low-velocity emission centroids follow a straight line. This is consistent with the emission from a rotating disk which is not completely resolved. The direction of this line traces the position angle (PA) of the kinematic major axis of the rotating disk. Therefore, we did a linear fit to these points and derived the disk PA (Table~\ref{tbl_gaskin}). Also, using these fits, we determined the systemic velocity as the velocity of the point along the major axis closest to the continuum peak (Table~\ref{tbl_sample}). By contrast, the centroids of the high-velocity gas do not lie on the kinematic major axis and they cluster at two positions, one for the red-shifted and the other for the blue-shifted gas. These two positions are approximately symmetric with respect to the nucleus. This is strong evidence of the decoupling of the high-velocity gas from the global disk rotation and, as we discuss in Section \ref{ss:mol_outflow}, this is compatible with the expected gas distribution of a massive molecular outflow originating at the nucleus. 
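The per-channel centroid analysis described above can be sketched as follows; the inputs are synthetic (a Gaussian blob for the channel map and noiseless centroids along an assumed PA of 80\ensuremath{^\circ}), not the ALMA cubes:

```python
import numpy as np

def channel_centroid(channel_map, x, y):
    """Flux-weighted centroid of a single ~10 km/s channel map.
    The statistical precision scales roughly as beam FWHM / (2 x SNR),
    which is why mas-level shifts are measurable with a ~0.2" beam."""
    w = channel_map / channel_map.sum()
    return (w * x).sum(), (w * y).sum()

def kinematic_pa(xc, yc):
    """PA (East of North) of the straight line traced by the centroids,
    taken as the principal direction of the scatter of the points
    (x = RA offset, East positive; y = Dec offset, North positive)."""
    pts = np.vstack([xc - np.mean(xc), yc - np.mean(yc)])
    _, vecs = np.linalg.eigh(np.cov(pts))
    dx, dy = vecs[:, -1]                       # principal eigenvector
    return np.degrees(np.arctan2(dx, dy)) % 180.0

# synthetic low-velocity centroids along a PA = 80 deg major axis
t = np.linspace(-0.2, 0.2, 25)                 # arcsec along the axis
xs = t * np.sin(np.radians(80.0))
ys = t * np.cos(np.radians(80.0))
pa = kinematic_pa(xs, ys)
```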
Alternatively, if we assume a coplanar geometry for the high-velocity gas, these PA twists could be explained by a strong nuclear bar-like structure. However, because of the extremely high radial velocities implied by this geometry, up to 400\,km\,s$^{-1}$, in the following, we only discuss the out-of-plane outflow possibility. The inclination of the disks is an important parameter for deriving accurate outflow properties. It is commonly derived from the ratio between the photometric major and minor axes assuming that the galaxy is circular. However, in these merger systems, it is not obvious how to define these axes and the circular morphology assumption might also be incorrect. Therefore, we use an alternative approach based on the kinematic properties of the nuclear disks to estimate the inclination. First, we extract the rotation curve along the kinematic major axis and fit an arctan model (e.g., \citealt{Courteau1997}) to determine the curve semi-amplitude, v (Figure~\ref{fig_rotcurve} and Table~\ref{tbl_gaskin}). Then, we measure the velocity dispersion, $\sigma$, of the nuclear region (1--2\,kpc) and calculate the observed dynamical ratio v$\slash\sigma$. \citet{GarciaMarin06} and \citet{Bellocchi2013} measured the v$\slash\sigma$ ratios in a sample of 25 ULIRGs (34 individual galaxies) with H$\alpha$ integral field spectroscopy. Assuming a mean inclination of 57\ensuremath{^\circ} ($\sin i = 0.79$; see \citealt{Law2009}), we can correct their v values for inclination and determine an intrinsic v$\slash\sigma$ ratio of 1.5$\pm$0.6 for ULIRGs. Then, we compare the observed dynamical ratios in each galaxy with this intrinsic ratio to estimate their inclinations ($15-40$\ensuremath{^\circ}).
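This inclination estimate amounts to inverting $({\rm v}\slash\sigma)_{\rm obs} = ({\rm v}\slash\sigma)_{\rm int}\,\sin i$, since only the rotation semi-amplitude is projected. A one-line sketch using the I12112~SW values from Table~\ref{tbl_gaskin}:

```python
import numpy as np

def inclination_from_vsigma(v_obs_kms, sigma_kms, intrinsic_ratio=1.5):
    """Disk inclination from the dynamical ratio: only the rotation
    semi-amplitude is projected, so (v/sigma)_obs = (v/sigma)_int * sin i.
    The intrinsic ratio 1.5 +/- 0.6 is the mean value derived for local
    ULIRGs (Garcia-Marin et al. 2006; Bellocchi et al. 2013)."""
    return np.degrees(np.arcsin((v_obs_kms / sigma_kms) / intrinsic_ratio))

# I12112 SW: v = 81 km/s, sigma = 130 km/s  ->  i ~ 25 deg, as in the table
i_sw = inclination_from_vsigma(81.0, 130.0)
```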
\subsection{Properties of the molecular outflows}\label{ss:mol_outflow} \subsubsection{Observed properties: PA, size, flux, and velocity}\label{ss:observed_properties} \begin{table*}[ht] \caption{Nuclear CO(2--1) emission and observed outflow properties} \label{tbl_outflow_obs} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & $r$\tablefootmark{~a} & v range\tablefootmark{~b} & \multicolumn{3}{c}{$S_{\rm CO}$} & $|$v$_{\rm high}|$\tablefootmark{e} & v$_{\rm max}$\tablefootmark{f} & $R_{\rm c}$\tablefootmark{~g} & $R_{\rm max}$\tablefootmark{~h} \\[0.1ex] \cline{4-6} \\[-1.9ex] & & & Total\tablefootmark{c} & Blue-shifted\tablefootmark{~d} & Red-shifted\tablefootmark{~d} & & & & \\ & (arcsec [kpc]) &(km\,s$^{-1}$) & (Jy\,km\,s$^{-1}$) & (Jy\,km\,s$^{-1}$) & (Jy\,km\,s$^{-1}$) & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (arcsec) & (arcsec) \\ \hline\\[-2ex] I12112 SW& 0.55\,[0.8] & [230, 550]& 19.5 $\pm$ 0.1 & 0.25 $\pm$ 0.04 & 0.32 $\pm$ 0.04 & 324 $\pm$ 12 & 470 $\pm$ 20 & 0.24 $\pm$ 0.03 & 0.55 $\pm$ 0.05 \\ I12112 NE& 1.3\,[1.8] & [220, 900]& 115.9 $\pm$ 0.2 & { 5.4} $\pm$ 0.1 & 5.3 $\pm$ 0.1 & 465 $\pm$ 30 & 800 $\pm$ 90 & 0.24 $\pm$ 0.03 & 1.15 $\pm$ 0.30 \\ I14348 SW& 1.1\,[1.7] & [230, 800]& 97.7 $\pm$ 0.2 & 3.5 $\pm$ 0.1 & 4.1 $\pm$ 0.1 & 419 $\pm$ 38 & 740 $\pm$ 30 & 0.18 $\pm$ 0.03 & 1.05 $\pm$ 0.05 \\ I14348 NE& 0.7\,[1.1] & [240, 560]& 42.1 $\pm$ 0.1 & 0.48 $\pm$ 0.05 & 1.0 $\pm$ 0.1 & 373 $\pm$ 5 & 520 $\pm$ 110 & 0.22 $\pm$ 0.03 & 0.60 $\pm$ 0.05 \\ I22491 E& 1.0\,[1.5] & [210, 600]& 57.4 $\pm$ 0.1 & 1.1 $\pm$ 0.1 & 0.9 $\pm$ 0.1 & 325 $\pm$ 33 & 400 $\pm$ 110 & 0.10 $\pm$ 0.01 & 0.35 $\pm$ 0.05 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{Radius of the aperture used to measure the CO(2--1) emission in arcsec and kpc.} \tablefoottext{b}{Absolute value of the velocity range considered to measure the blue- and red-shifted wings of the CO(2--1) profile with respect to the systemic velocity.} 
\tablefoottext{c}{Total CO(2--1) flux measured within an aperture of radius $r$ centered on the object nucleus.} \tablefoottext{d}{Blue- and red-shifted high-velocity CO(2--1) emission measured in the indicated velocity range { after subtracting the best-fit 2-Gaussian model (see Section~\ref{ss:observed_properties} and Figures~\ref{fig_outflow_i12112_SW}$-$\ref{fig_outflow_i22491_E})}.} \tablefoottext{e}{Absolute value of the intensity weighted average velocity of the high-velocity gas with respect to the systemic velocity.} \tablefoottext{f}{Maximum velocity at which CO(2--1) emission is detected at more than 3$\sigma$.} \tablefoottext{g}{Half of the maximum separation between the centroids of the blue- and red-shifted high-velocity emission with the same $|$v$-$v$_{\rm sys}|$ (see Figure~\ref{fig_alma_posdiagram}).} \tablefoottext{h}{Largest radius at which high-velocity CO(2--1) emission is detected.} } \end{table*} \begin{figure*} \centering \hfill \includegraphics[width=0.46\textwidth]{alma_outflow_i12112_SW.pdf} \hfill \includegraphics[width=0.48\textwidth]{alma_spiral_i12112_SW.pdf} \hfill \caption{The blue and red contours in panel \textit{a} represent the integrated blue- and red-shifted high-velocity CO(2--1) emission, respectively. The specific velocity range is listed in Table~\ref{tbl_outflow_obs}. { The lowest contour corresponds to the 3$\sigma$ level. The next contour levels are (0.5, 0.9)$\times$ the peak of the high-velocity emission when these are above the 3$\sigma$ level. For I12112~SW, $\sigma=$\,30\,mJy\,km\,s$^{-1}$\,beam$^{-1}$ and the red and blue peaks are at 110 and 240\,mJy\,km\,s$^{-1}$\,beam$^{-1}$, respectively.} The dotted and the dashed lines are the outflow axis and the kinematic major axis, respectively (see Table~\ref{tbl_gaskin}). The red hatched ellipse represents the beam FWHM { and PA}. The dashed circle indicates the region from which the nuclear spectrum was extracted. 
Panel \textit{b} shows the nuclear spectrum in yellow and the { best-fit} model in gray. Panel \textit{c} shows the difference between the observed spectrum and the { best-fit} model. The shaded blue and red velocity ranges in these panels correspond to the velocity ranges used for the contours of panel \textit{a}. The gray line in panel \textit{c} marks the 3$\sigma$ noise level per channel. Panel \textit{d} shows the CO(2--1) emission at the velocities indicated by the numbers at the top-right corner of the panel as black and red contours, respectively. The black and red double lines trace the morphological features (spiral arms, tidal tails) observed at those velocities. { Panel \textit{e} shows the CO(2--1) mean velocity field (same as in Figure~\ref{fig_alma_vel})}. The black dashed line is the kinematic major axis and the gray dot-dashed line the kinematic minor axis. The far and near sides of the rotating disk are indicated, assuming that the observed morphological features are trailing.\label{fig_outflow_i12112_SW}} \end{figure*} \begin{figure*} \centering \hfill \includegraphics[width=0.46\textwidth]{alma_outflow_i12112_NE.pdf} \hfill \includegraphics[width=0.48\textwidth]{alma_spiral_i12112_NE.pdf} \hfill \caption{Same as Figure~\ref{fig_outflow_i12112_SW} but for I12112 NE. { In panel \textit{a}, $\sigma=$\,45\,mJy\,km\,s$^{-1}$\,beam$^{-1}$ and the red and blue peaks are at 2.4 and 2.9\,Jy\,km\,s$^{-1}$\,beam$^{-1}$, respectively.} \label{fig_outflow_i12112_NE}} \end{figure*} \begin{figure*} \centering \hfill \includegraphics[width=0.46\textwidth]{alma_outflow_i14348_SW.pdf} \hfill \includegraphics[width=0.48\textwidth]{alma_spiral_i14348_SW.pdf} \hfill \caption{Same as Figure~\ref{fig_outflow_i12112_SW} but for I14348 SW. 
{ In panel \textit{a}, $\sigma=$\,46\,mJy\,km\,s$^{-1}$\,beam$^{-1}$ and the red and blue peaks are at 1.4 and 1.0\,Jy\,km\,s$^{-1}$\,beam$^{-1}$, respectively.} \label{fig_outflow_i14348_SW}} \end{figure*} \begin{figure*} \centering \hfill \includegraphics[width=0.46\textwidth]{alma_outflow_i14348_NE.pdf} \hfill \includegraphics[width=0.48\textwidth]{alma_spiral_i14348_NE.pdf} \hfill \caption{Same as Figure~\ref{fig_outflow_i12112_SW} but for I14348 NE. { In panel \textit{a}, $\sigma=$\,32\,mJy\,km\,s$^{-1}$\,beam$^{-1}$ and the red and blue peaks are at 0.74 and 0.56\,Jy\,km\,s$^{-1}$\,beam$^{-1}$, respectively.} \label{fig_outflow_i14348_NE}} \end{figure*} \begin{figure*} \centering \hfill \includegraphics[width=0.46\textwidth]{alma_outflow_i22491_E.pdf} \hfill \includegraphics[width=0.48\textwidth]{alma_spiral_i22491_E.pdf} \hfill \caption{Same as Figure~\ref{fig_outflow_i12112_SW} but for I22491 E. { In panel \textit{a}, $\sigma=$\,43\,mJy\,km\,s$^{-1}$\,beam$^{-1}$ and the red and blue peaks are at 1.1 and 2.2\,Jy\,km\,s$^{-1}$\,beam$^{-1}$, respectively.} \label{fig_outflow_i22491_E}} \end{figure*} In the previous section, we presented the detection of high-velocity gas in 5 out of 6 ULIRG nuclei which is compatible with the presence of massive molecular outflows. But depending on which side of the rotating disk is closest to us, this high-velocity gas can be interpreted as an inflow or as an outflow. To investigate this, we plot the morphological features (spiral arms or tidal tails) we identified in the CO(2--1) channel maps (panels $d$ of Figures~\ref{fig_outflow_i12112_SW}--\ref{fig_outflow_i22491_E} { and Appendix~\ref{apx_channels}}). Then, in panels $e$ of these figures, we plot the identified morphological features over the velocity fields and, assuming that these features are trailing, we can determine the near- and far-side of the rotating disk. 
For all the galaxies where the high-velocity emission is spatially resolved, the blue-shifted high-velocity emission appears on the far side of the disk and the red-shifted emission on the near side. This is a clear signature of outflowing gas. In addition, we can measure the PA of these outflows by fitting the position of the red and blue centroid clusters with a straight line (Figure~\ref{fig_alma_posdiagram} and Table~\ref{tbl_gaskin}). We calculated the difference between the PA of the high-velocity gas and that of the kinematic major axis of the disk (Table~\ref{tbl_gaskin}) and found values around 90\ensuremath{^\circ}\ for 3 cases. This PA difference is the expected value for an outflow perpendicular to the rotating disk. For IRAS~14348 SW, the PA difference is $\sim$126$\pm$9\ensuremath{^\circ}, which suggests a different outflow orientation. Finally, the outflow PA of IRAS~22491 E seems to deviate from a perpendicular orientation although with less significance due to the large uncertainty ($\sim$133$\pm$20\ensuremath{^\circ}). In panel {\it a} of Figures~\ref{fig_outflow_i12112_SW}--\ref{fig_outflow_i22491_E}, we show the spatial distribution of the high-velocity gas emission. This emission is spatially resolved, except for IRAS~22491 E, and reaches projected distances, $R_{\rm max}$, up to 0\farcs4--1\farcs2 ($0.5-1.8$\,kpc; see Table~\ref{tbl_outflow_obs}) from the nucleus. We note that these sizes are a factor of $3-5$ larger than the sizes derived from the separation between the blue- and red-shifted emission centroids ($R_{\rm c}$). In the following, we use $R_{\rm c}$ as the outflow size because, being a flux-weighted estimate, it better traces the extent of the region where most of the outflowing molecular gas is located. On the other hand, $R_{\rm max}$ is dominated by the faint CO(2--1) emission at larger radii and it is also likely dependent on the sensitivity of the observations.
The outflows are clearly spatially resolved ($2\times R_{\rm max}> $ FWHM of the beam). However, the angular resolution is not high enough to allow us to measure the outflow properties as a function of radius. For this reason, we only consider the integrated outflow emission and measure a total outflow flux. To do so, we extracted the spectrum of the regions where high-velocity gas is detected (panels $b$ of Figures~\ref{fig_outflow_i12112_SW}--\ref{fig_outflow_i22491_E}). We fitted a two-Gaussian model to the CO(2--1) line profile. { This model reproduces the core of the observed line profile well}. The residual blue and red wings (i.e., the outflow emission) are shown in panels $c$ of Figures~\ref{fig_outflow_i12112_SW}--\ref{fig_outflow_i22491_E} for each galaxy. From these spectra, we also estimate the flux-weighted average velocity of the outflowing gas ($320-460$\,km\,s$^{-1}$) and the maximum velocity at which we detect CO(2--1) emission ($400-800$\,km\,s$^{-1}$). The total CO(2--1) flux, as well as the flux in the high-velocity wings, is presented in Table~\ref{tbl_outflow_obs}.
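The core-plus-wings decomposition can be illustrated on a synthetic profile (all amplitudes, velocities, and widths below are illustrative, not the observed spectra, and SciPy's `curve_fit` is assumed to be available): a two-Gaussian model absorbs the line core, and the residual is integrated over the high-velocity wings:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    """Sum of two Gaussians modeling the double-peaked line core."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

dv = 10.0                                              # ~10 km/s channels
v = np.arange(-1000.0, 1000.0, dv)
core = two_gauss(v, 1.0, -80.0, 90.0, 0.9, 90.0, 90.0)  # rotating-disk core
wings = 0.05 * np.exp(-0.5 * (v / 400.0) ** 2)          # faint broad outflow
spec = core + wings                                     # flux per channel (Jy)

# fit the two-Gaussian core model and keep the residual wings
popt, _ = curve_fit(two_gauss, v, spec, p0=(1, -100, 100, 1, 100, 100))
resid = spec - two_gauss(v, *popt)

# integrate the residual over the high-velocity range, |v| = 220-900 km/s
mask = (np.abs(v) > 220.0) & (np.abs(v) < 900.0)
s_out = resid[mask].sum() * dv                          # wing flux (Jy km/s)
```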
\subsubsection{Derived properties}\label{ss:derived_properties} \begin{table*}[ht] \caption{Derived molecular outflow properties} \label{tbl_outflow_derived} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & $M_{\rm out}$\tablefootmark{a} & $M_{\rm tot}$\tablefootmark{b} & v$_{\rm out}$\tablefootmark{c} & v$_{\rm max}$\tablefootmark{d} & $R_{\rm out}$\tablefootmark{e} & $R_{\rm max}$\tablefootmark{f} & $\log t_{\rm dyn}$\tablefootmark{g} & $\log \dot{M}_{\rm out}$\tablefootmark{h} & $\log t_{\rm dep}$\tablefootmark{i} & $\log \dot{P}_{\rm out}$\tablefootmark{j} & $\log L_{\rm out}$\tablefootmark{k} \\ & (10$^8$\,\hbox{$M_{\rm \odot}$}) & (10$^9$\,\hbox{$M_{\rm \odot}$}) & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (pc) & (kpc) & (yr) & (\hbox{$M_{\rm \odot}$}\,yr$^{-1}$) & (yr) & (g\,cm\,s$^{-2}$) & (erg\,s$^{-1}$) \\ \hline\\[-2ex] I12112 SW & 0.31 & 1.0 & 360 & 520 & 810 & 1.8 & 6.3 & 1.1 & 7.9 & 34.49 & 41.74 \\ I12112 NE & { 5.7} & 6.1 & 530 & 910 & 710 & 3.4 & 6.1 & 2.6 & 7.2 & 36.13 & 43.55 \\ I14348 SW & 5.2 & 6.7 & 430 & 770 & 1000 & 6.0 & 6.4 & 2.4 & 7.5 & 35.79 & 43.13 \\ I14348 NE & 1.0 & 2.9 & 460 & 650 & 590 & 1.5 & 6.1 & 1.9 & 7.5 & 35.38 & 42.75 \\ I22491 E & 1.2 & 3.5 & 400 & 500 & 250 & 0.9 & 5.8 & 2.3 & 7.2 & 35.72 & 43.02 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a,b}{Outflow and integrated molecular gas masses, respectively, assuming a ULIRG-like conversion factor $\alpha_{\rm CO}$ of 0.78 \hbox{$M_{\rm \odot}$} (K\,km\,s$^{-1}$\,pc$^{2}$)$^{-1}$ and $r_{21}$ ratio of 0.91 \citep{Bolatto2013}.} \tablefoottext{c}{Inclination corrected outflow velocity $|{\rm v}_{\rm high}|$\slash $\cos i$ (see Table~\ref{tbl_outflow_obs}).} \tablefoottext{d}{{ Inclination corrected maximum outflow velocity $|{\rm v}_{\rm max}|$\slash $\cos i$ (see Table~\ref{tbl_outflow_obs}).}} \tablefoottext{e}{Inclination corrected outflow radius estimated from $R_{\rm c}$\slash $\sin i$ (see Table~\ref{tbl_outflow_obs}).}
\tablefoottext{f}{{ Inclination corrected outflow maximum radius derived using $R_{\rm max}$\slash $\sin i$ (see Table~\ref{tbl_outflow_obs}).}} \tablefoottext{g}{Outflow dynamical time $t_{\rm dyn}=R_{\rm out}$\slash v$_{\rm out}$ (see Table~\ref{tbl_outflow_obs}).} \tablefoottext{h}{Outflow rate $\dot{M}_{\rm out} = {\rm v}_{\rm out} \times M_{\rm out}$\slash $R_{\rm out}$. The uncertainty is { $\sim$0.4\,dex (see Section~\ref{ss:derived_properties}).}} \tablefoottext{i}{Depletion time $t_{\rm dep}=M_{\rm tot}$\slash $\dot{M}_{\rm out}$.} \tablefoottext{j}{Outflow momentum rate $\dot{P}_{\rm out} = \dot{M}_{\rm out}\times {\rm v}_{\rm out}$.} \tablefoottext{k}{Outflow kinetic luminosity $L_{\rm out} = \hbox{1 \slash 2}\times \dot{M}_{\rm out}\times {\rm v}_{\rm out}^2$.} } \end{table*} \begin{table*}[ht] \caption{Escape Outflow} \label{tbl_outflow_escape} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & v$_{\rm esc}$\tablefootmark{a} & v$_{\rm range}$\tablefootmark{b} & $S_{\rm CO}^{\rm esc}$\tablefootmark{c} & $S_{\rm CO}^{\rm esc}$\slash $S_{\rm CO}^{\rm out}$\tablefootmark{d} & $\log \dot{M}_{\rm esc}$ \\ & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (Jy\,km\,s$^{-1}$) & & (\hbox{$M_{\rm \odot}$}\,yr$^{-1}$)\\ \hline\\[-2ex] I12112 SW & 465 & [425, 550] & 0.14 $\pm$ 0.03 & 0.24 & 0.49 \\ I12112 NE & 465 & [414, 900] & { 3.0} $\pm$ 0.1 & { 0.28} & 2.1 \\ I14348 SW & 590 & [570, 800] & 1.2 $\pm$ 0.1 & 0.16 & 1.6 \\ I14348 NE & 590 & [459, 560] & 0.28 $\pm$ 0.04 & 0.18 & 1.2 \\ I22491 E & 400 & [319, 600] & 0.7 $\pm$ 0.1 & 0.34 & 1.8 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{Escape velocity at 2\,kpc (see \citealt{Emonts2017}).} \tablefoottext{b}{Observed velocity range used to measure the molecular gas with ${\rm v}>{\rm v_{\rm esc}}$ taking into account the inclination of the object.} \tablefoottext{c}{CO(2--1) emission with ${\rm v}>{\rm v_{\rm esc}}$.} \tablefoottext{d}{Ratio between the CO(2--1) emission from 
molecular gas with ${\rm v}>{\rm v_{\rm esc}}$ and the total outflowing gas from Table \ref{tbl_outflow_obs}.} \tablefoottext{e}{Molecular gas escape rate.} } \end{table*} In Table~\ref{tbl_outflow_derived}, we present the derived properties for these outflows based on the observations and assuming that they are perpendicular to the rotating disk. The latter is consistent with the $\sim$90\ensuremath{^\circ}\ PA difference between the kinematic major axis and the outflow axis measured in 3 of the galaxies. For the other 2 cases (PA difference $\sim$120\ensuremath{^\circ}), { this assumption might introduce a factor of $\sim$2 uncertainty in the derived outflow rates.} To convert the CO(2--1) fluxes into molecular masses, we assume a ULIRG-like $\alpha_{\rm CO}$ conversion factor of 0.78 \hbox{$M_{\rm \odot}$} (K\,km\,s$^{-1}$\,pc$^{2}$)$^{-1}$ and a ratio between the CO(2--1) and CO(1--0) transitions, $r_{21}$, of 0.91 \citep{Bolatto2013}. The outflow velocity is corrected for the inclination by dividing by $\cos i$, where $i$ is the disk inclination (Table~\ref{tbl_gaskin}). Similarly, the outflow radius is corrected by dividing by $\sin i$. The average corrected outflow velocity is $\sim$440\,km\,s$^{-1}$ and the average deprojected radius $\sim$700\,pc. Using these quantities, we calculate the dynamical time as $t_{\rm dyn} = R_{\rm out}\slash {\rm v}_{\rm out}$ ($\sim$1\,Myr for these outflows). Then, we estimate the outflow rate using $\dot{M}_{\rm out}=M_{\rm out}\slash t_{\rm dyn}$. We find $\dot{M}_{\rm out}$ values between $\sim$12 and $\sim$400\,\hbox{$M_{\rm \odot}$}\,yr$^{-1}$. From these estimates, we can derive the depletion time ($t_{\rm dep}$), outflow momentum rate ($\dot{P}_{\rm out}$), and the outflow kinetic luminosity ($L_{\rm out}$; see e.g., \citealt{GarciaBurillo2015}).
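These scalings are straightforward to evaluate; the sketch below reproduces the I12112~SW row of Table~\ref{tbl_outflow_derived} from its deprojected mass, velocity, and radius:

```python
import numpy as np

# cgs constants: solar mass (g), parsec (cm), km (cm), year (s)
M_SUN, PC, KM, YR = 1.989e33, 3.086e18, 1.0e5, 3.156e7

def outflow_quantities(m_out_msun, v_out_kms, r_out_pc):
    """t_dyn = R/v, Mdot = M/t_dyn, Pdot = Mdot*v, L = 1/2 Mdot v^2."""
    t_dyn = (r_out_pc * PC) / (v_out_kms * KM) / YR        # yr
    mdot = m_out_msun / t_dyn                              # Msun/yr
    mdot_cgs = mdot * M_SUN / YR                           # g/s
    pdot = mdot_cgs * (v_out_kms * KM)                     # g cm s^-2
    lout = 0.5 * mdot_cgs * (v_out_kms * KM) ** 2          # erg/s
    return t_dyn, mdot, pdot, lout

# I12112 SW: M_out = 3.1e7 Msun, v_out = 360 km/s, R_out = 810 pc
# -> log(t_dyn, Mdot, Pdot, L) ~ (6.3, 1.1, 34.5, 41.7), as tabulated
t_dyn, mdot, pdot, lout = outflow_quantities(3.1e7, 360.0, 810.0)
```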
{ The uncertainties on the outflow rate, and the quantities derived from it, are dominated by the uncertainty in the value of the $\alpha_{\rm CO}$ conversion factor, which is not well established for the outflowing gas (e.g., \citealt{Aalto2015}), and by the outflow geometry (inclination). For the conversion factor, we assume a ULIRG-like value ($\alpha_{\rm CO}=0.78$\hbox{$M_{\rm \odot}$} (K\,km\,s$^{-1}$\,pc$^{2}$)$^{-1}$), although depending on the gas conditions, this factor can vary within 0.3\,dex (e.g., \citealt{Papadopoulos2012b, Bolatto2013}). Therefore, from the uncertainty in the inclination and the conversion factor, we assume a 0.4\,dex uncertainty for these values.} Finally, in Table~\ref{tbl_outflow_escape}, we estimate the fraction of the outflowing gas that would escape the gravitational potential of these galaxies. We use the escape velocities at 2\,kpc calculated by \citet{Emonts2017} for these systems, which range from $\sim$400 to 600\,km\,s$^{-1}$. We integrate the CO(2--1) emission with velocities higher than these escape values (taking into account the inclination of the outflows) and obtain that $15-30$\%\ of the high-velocity gas will escape to the intergalactic medium. The escape outflow rates are $3-120$\,\hbox{$M_{\rm \odot}$}\,yr$^{-1}$. However, these escape rates can be lower if the velocity of the outflowing gas is decreased by dynamical friction.
\subsection{Nuclear SFR}\label{ss:sfrrate} \begin{table*}[ht] \caption{Outflows and nuclear SFR} \label{tbl_outflow_agn_sfr} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & $\log$\,SFR(IR)\tablefootmark{a} & $\log$\,SFR(1.4\,GHz)\tablefootmark{b} & $\log \eta$\,\tablefootmark{c} & $\log \frac{\dot{P}_{\rm out}}{P_{\rm SNe}}$\tablefootmark{d} & $\log \frac{L_{\rm out}}{L_{\rm SNe}}$\tablefootmark{e}\\ & (\hbox{$M_{\rm \odot}$}\,yr$^{-1}$) & (\hbox{$M_{\rm \odot}$}\,yr$^{-1}$) \\ \hline\\[-2ex] I12112 SW & 1.12 & 1.57 & --0.02 & --0.65 & --1.4 \\ I12112 NE & 2.26 & 2.21 & 0.34 & --0.12 & --0.67 \\ I14348 SW & 2.14 & 2.36 & 0.26 & --0.28 & --0.91 \\ I14348 NE & 1.97 & 2.12 & --0.07 & --0.58 & --1.2 \\ I22491 E & 2.04 & 1.76 & 0.26 & --0.32 & --0.99 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{SFR derived from the IR luminosity assigned to each nucleus (see Section~\ref{ss:ir_lum}) using the calibration of \citet{Kennicutt2012}. This is the adopted SFR for these nuclei.} \tablefoottext{b}{SFR derived from the non-thermal radio continuum using the \citet{Murphy2011} SFR calibration (see Section~\ref{ss:radio_cont}).} \tablefoottext{c}{Logarithm of the mass loading factor \hbox{$\eta = \dot{M}_{\rm out}\slash {\rm SFR}$(IR)}.} \tablefoottext{d}{Ratio between the outflow momentum rate and the momentum injected by supernova explosions. 
We assume that the $P_{\rm SN}$ per SN is {$1.3\times 10^{5}$\hbox{$M_{\rm \odot}$}\,km\,s$^{-1}\times(n_0/100$\,cm$^{-3})^{-0.17}$} \citep{Kim2015} using $n_0 = 100$\,cm$^{-3}$ and that the SN rate, $\nu_{\rm SN}$\,(yr$^{-1}$), is 0.012$\times$SFR(IR)(\hbox{$M_{\rm \odot}$}\,yr$^{-1}$) for the adopted IMF \citep{Leitherer1999}.} \tablefoottext{e}{Ratio between the total kinetic luminosity of the molecular outflow and that injected by supernova explosions ($L_{\rm SNe}$(erg\,s$^{-1}$)=$9\times 10^{41} {\rm SFR}$(IR)(\hbox{$M_{\rm \odot}$}\,yr$^{-1}$); \citealt{Leitherer1999} adapted for a \citealt{Kroupa2001} IMF).} } \end{table*} \begin{figure*}[t] \centering \includegraphics[height=0.25\textwidth]{fit_sed_12112.pdf} \includegraphics[height=0.25\textwidth]{fit_sed_14348.pdf} \includegraphics[height=0.25\textwidth]{fit_sed_22491.pdf} \caption{Mid- and far-IR spectral energy distribution of the three ULIRGs. The yellow circle corresponds to the \textit{Spitzer}\slash MIPS 24\hbox{\,$\mu$m}\ flux, the blue squares to the \textit{Herschel}\slash PACS 60, 100, and 160\hbox{\,$\mu$m}\ fluxes, and the green triangles to the \textit{Herschel}\slash SPIRE 250, 350, and 500\hbox{\,$\mu$m}\ fluxes. The solid red line is the best fit to the data using a single temperature gray body. \label{fig_sed_fits}} \end{figure*} Measuring the SFR in these ULIRGs is important to evaluate the impact of the molecular outflows. Most of the outflowing molecular gas is concentrated in the central $1-2$\,kpc, so to determine the local effect of the outflows, we must compare them with the nuclear SFR. However, the nuclei of local ULIRGs are extremely obscured regions (e.g., \citealt{GarciaMarin2009, Piqueras2013}) and estimating their SFR is not straightforward. In this section, we use two approaches to measure the nuclear SFR: the IR luminosity, apportioned between the nuclei using the 248\,GHz continuum, and the radio continuum. Neither tracer should be heavily affected by extinction.
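The SFR calibrations and SN budgets used in this section and in the table notes above can be collected into a short sketch. We assume the standard forms of the \citet{Kennicutt2012} IR and \citet{Murphy2011} radio calibrations (log\,SFR = log\,$L_{\rm IR}$(erg\,s$^{-1}$) $-$ 43.41 and SFR = $6.35\times10^{-29}\,L_{\rm 1.4\,GHz}$(erg\,s$^{-1}$\,Hz$^{-1}$), respectively); the example luminosity is a made-up input:

```python
# Sketch of the SFR calibrations and SN budgets used in this section.
L_SUN = 3.828e33  # erg/s

def sfr_ir(l_ir_lsun):
    """SFR (Msun/yr) from the total IR luminosity in Lsun."""
    return l_ir_lsun * L_SUN / 10**43.41

def sfr_radio(l_1p4_erg_s_hz):
    """SFR (Msun/yr) from the rest-frame 1.4 GHz luminosity (erg/s/Hz)."""
    return 6.35e-29 * l_1p4_erg_s_hz

def sn_budget(sfr):
    """Momentum (Msun km/s per yr) and energy (erg/s) injected by SNe."""
    nu_sn = 0.012 * sfr              # SN rate, yr^-1
    p_dot_sn = 1.3e5 * nu_sn         # momentum rate, for n0 = 100 cm^-3
    l_sn = 9e41 * sfr                # kinetic luminosity
    return p_dot_sn, l_sn

# e.g. an SF-dominated nucleus with L_IR(SF) ~ 1e12 Lsun
sfr = sfr_ir(1e12)          # ~150 Msun/yr
p_dot_sn, l_sn = sn_budget(sfr)
```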
\subsubsection{IR luminosity}\label{ss:ir_lum} The total IR luminosity ($L_{\rm IR}$) is a good tracer of the SF in dusty environments such as the nuclei of ULIRGs (e.g., \citealt{Kennicutt2012}). However, there are no far-IR observations in which the two nuclei of these systems are spatially resolved. For this reason, we first derived the integrated $L_{\rm IR}$ of each system. We fit a single temperature gray body model to the 24\hbox{\,$\mu$m}\ to 500\hbox{\,$\mu$m}\ fluxes from \textit{Spitzer}\ and \textit{Herschel}\ \citep{Piqueras2016, Chu2017} following \citet{Pereira2016b}. The resulting $L_{\rm IR}$ are $\sim$0.2\,dex lower than those derived using the {IRAS} fluxes, but we consider these new $L_{\rm IR}$ more accurate since we are using more data points, which cover a wider wavelength range, to fit the IR emission (7 points between $24-500$\hbox{\,$\mu$m}\ vs. 4 points between $12-100$\hbox{\,$\mu$m}) and we also avoid flux contamination from unrelated sources thanks to the higher angular resolution of the new data ($6\arcsec-35\arcsec$ vs. $0\farcm5-2\arcmin$). The \hbox{$L_{\rm IR}$}\ values are listed in Table~\ref{tbl_sample} and the best fits are shown in Figure~\ref{fig_sed_fits}. Then, we assign to each nucleus a fraction of the \hbox{$L_{\rm IR}$}\ due to star-formation (i.e., after subtracting the AGN luminosity from Table~\ref{tbl_sample}) proportional to its contribution to the total thermal continuum at 248\,GHz (dust emission plus free-free radio continuum; see Table~\ref{tbl_continuum}) of the system. By doing this, we assume that all the \hbox{$L_{\rm IR}$}\ is produced in the central $300-1000$\,pc, which is consistent with the compact distribution of the molecular gas around the nuclei (Section~\ref{ss:molgas}), and that the 248\,GHz continuum scales with the \hbox{$L_{\rm IR}$}.
The latter is true for the free-free radio continuum contribution at this frequency, which is proportional to the ionizing flux and, therefore, to the SFR and \hbox{$L_{\rm IR}$}. The dust emission at 248\,GHz depends on the average dust temperature of each nucleus ($f_\nu\slash L\propto T^{-3}$ in the Rayleigh-Jeans tail of the black body). However, given the similar temperatures we obtained for the integrated emission ($T\sim65-70$\,K; see Figure~\ref{fig_sed_fits}), this assumption seems reasonable. Finally, we converted these nuclear IR luminosities into SFR using the \citet{Kennicutt2012} calibration (see Table~\ref{tbl_outflow_agn_sfr}). The SFRs range from 13 to 180\,\hbox{$M_{\rm \odot}$}\,yr$^{-1}$. \subsubsection{Radio continuum}\label{ss:radio_cont} We also estimated the SFR from the non-thermal radio continuum observations of these galaxies (see Section~\ref{ss:continuum}). Using the observed 1.49 and 8.44\,GHz fluxes and the derived spectral indices, we estimated the rest-frame 1.4\,GHz continuum and applied the \citet{Murphy2011} SFR calibration. Here, we ignore any contribution from an AGN to the radio emission. These objects seem to be dominated by SF and have only a small AGN contribution, but the radio emission of these AGN is uncertain. The radio SFRs obtained in this way are listed in Table~\ref{tbl_outflow_agn_sfr}. These values are comparable to those obtained from the IR luminosity. The average difference between the two estimates is 0.2\,dex with a maximum of 0.4\,dex. Therefore, the two methods provide compatible SFR values and, in the following, we adopt the SFR(IR) with a 0.2\,dex uncertainty. \section{Discussion}\label{s:discu} \subsection{Outflow energy source}\label{ss:energy} \begin{figure*} \centering \includegraphics[height=0.3\textwidth]{fig_sfr_mout.pdf} \hfill \includegraphics[height=0.3\textwidth]{fig_pout_sfr.pdf} \hfill \includegraphics[height=0.3\textwidth]{fig_lkin_sfr.pdf} \caption{{ Mass outflow rate} vs.
nuclear SFR ({\it left}), outflow momentum rate vs. nuclear SFR ({\it middle}), and outflow kinetic luminosity vs. nuclear SFR ({\it right}). { Red circles indicate nuclei with outflows launched by an AGN, green diamonds are objects hosting an AGN with molecular outflows of uncertain SF\slash AGN origin, and stars represent star-formation dominated nuclei}. The blue circles mark the ULIRGs presented in this work. The white stars are the lower luminosity starbursts compiled by \citet{Cicone2014}. The remaining points correspond to local U\slash LIRGs from the literature: NGC~1614 and IRAS~17208-0014 \citep{GarciaBurillo2015, Pereira2015_n1614, Piqueras2016}; NGC~3256 N and S \citep{Sakamoto2014, Emonts2014, Ohyama2015, Pereira2011, Lira2002}; ESO~320-G030 \citep{Pereira2016b}; { Arp~220 W \citep{BarcosMunoz2018}; and NGC~6240 \citep{Saito2018}.} The crosses at the lower right corners represent the typical error bars of the points. The black lines in the {\it left} panel correspond to mass loading factors, $\eta=\dot{M}$\slash SFR, of 1 and 10. The dashed orange line in the {\it middle panel} marks the total momentum injected by SNe as a function of the SFR. The dashed green lines indicate the L(SFR)\slash c ratio and 10 times this value. The dashed orange lines in the {\it right} panel indicate the $L_{\rm out}$ = $a\times L_{\rm SNe}$ with $a=$1, 0.1, and 0.01 as a function of the SFR. The solid red lines in the middle and right panels are the best linear fits to the star-forming objects. \label{fig_sfr_mout}} \end{figure*} In the left panel of Figure~\ref{fig_sfr_mout}, we show the relation between the outflow rate and the nuclear SFR (i.e., the mass loading factor $\eta$).
In this figure, we include local U\slash LIRGs with spatially resolved observations (filled symbols) as well as the lower luminosity starbursts compiled by \citet{Cicone2014}\footnote{For NGC~3256, we use the newer observations presented by \citet{Sakamoto2014}, which distinguish between the Northern and Southern nuclei, instead of the \citet{Sakamoto2006} data used by \citet{Cicone2014}.}. In total, we include observations for { 7 ULIRG nuclei, 5 LIRG nuclei, and 4 starbursts.} { For the 2 nuclei classified as AGN, we derive the nuclear SFR from their IR luminosity after subtracting the AGN contribution (see \citealt{GarciaBurillo2015, Ohyama2015}).} { The 5 new ULIRG nuclei (encircled stars in this figure) have mass loading factors}, $\eta$, $\sim0.8-2$ (see Table~\ref{tbl_outflow_agn_sfr}). These are similar to those observed in local starburst galaxies, which are typically lower than $\sim2-3$ (e.g., \citealt{Bolatto2013, Cicone2014, Salak2016}). This suggests that the outflows in these ULIRGs are also powered by SF. To further investigate the energy source, in Table~\ref{tbl_outflow_agn_sfr}, we list the ratios between the kinetic luminosity and momentum rates of the outflows and the total energy and momentum injected by supernovae (SNe), respectively. We assume that the SNe total energy and momentum are upper limits on the energy and momentum that the starburst can inject into the outflow (independent of the launching mechanism; see Section~\ref{ss:launch}). For all the galaxies, both the energy and momentum in the molecular outflowing gas are lower than those produced by SNe. Although this does not prove an SF origin, we cannot rule out an SF origin based on the energy or momentum of these outflows. Molecular outflows from AGN usually have maximum velocities up to $\sim$1000\,km\,s$^{-1}$ { (e.g., \citealt{Cicone2014, Veilleux2017})} which are higher than those due to SF (few hundreds of km\,s$^{-1}$).
We found the maximum outflow velocities in IRAS~12112 NE and IRAS~14348 SW ($\sim700-800$\,km\,s$^{-1}$). These are not as high as those observed in other AGN, but are $1.5-2$ times higher than in the rest of our sample and might indicate an AGN powered outflow in these objects. However, there are molecular outflows detected in more nearby starbursts which also reach these high velocities (e.g., \citealt{Sakamoto2014}). Therefore, the velocities of the outflows in these ULIRGs are not high enough to claim an AGN origin. Similarly, the orientation of the outflow gives information on its origin. Outflows produced by starbursts tend to be perpendicular to the disk of the galaxy where it is easier for the gas to escape. On the contrary, the angle of AGN outflows is, in principle, independent of the disk orientation (e.g., \citealt{Pjanka2017}). We found that the PA of these outflows are compatible with being perpendicular to the disk (i.e., possible SF origin) except for IRAS~14348 SW (i.e., possible AGN origin; see Table~\ref{tbl_gaskin}). In summary, the mass, energy, momentum, velocity, and geometry of these outflows seem to be compatible with those expected for a SF powered outflow. The only exception could be the outflow of IRAS~14348 SW. This outflow has a relatively high velocity compared to the others and also a different geometry, so it might be powered by an AGN. X-ray observations also suggest the presence of a Compton-thick AGN, although the bolometric luminosity of this AGN seems to be $<$10\%\ of the total IR luminosity \citep{Iwasawa2011} and would not be able to produce the observed outflow. Therefore, since there is no clear evidence for an AGN origin, we assumed a SF origin in this case too. \subsection{Outflow effects on the star-formation} The nuclear outflow depletion times are $15-80$\,Myr which are comparable to those found in other ULIRGs \citep{Cicone2014, GarciaBurillo2015, GonzalezAlfonso2017}. 
These times do not include the possible inflow of molecular gas into the nuclear region. However, between 70\% and 90\% of the molecular gas is already in these central regions (see Tables~\ref{tbl_integrated} and \ref{tbl_outflow_obs}), so we do not expect significant molecular inflows to occur. Inflows of atomic gas might be present too, but there are no spatially resolved \ion{H}{i} observations available for these objects to infer the atomic gas distribution. In addition, we have to take into account that most of the outflowing gas ($\sim60-80$\%; Table~\ref{tbl_outflow_escape}) will not escape the gravitational potential of these systems and will become available to form new stars in the future. We can estimate how long it will take for the outflowing gas to rain back into the system from the average outflow velocity, the outflow radius, and the escape velocity (Tables~\ref{tbl_outflow_derived} and \ref{tbl_outflow_escape}). From the escape velocity we obtain the gravitational parameter, $\mu=GM$, using the following relation: \begin{equation} \mu = \frac{1}{2}\times r_{\rm esc} \times {\rm v_{esc}}^2 \end{equation} where v$_{\rm esc}$ and r$_{\rm esc}$ are the escape velocity and the radius at which it is calculated, respectively. Then, assuming that the outflowing gas moves radially and that it is not affected by any dynamical friction, the equations of motion are: \begin{equation} \begin{aligned} \frac{dr}{dt} &= {\rm v} \\ \frac{d{\rm v}}{dt} &= -\frac{\mu}{r^2} \end{aligned} \end{equation} with the initial conditions $t_0 = t_{\rm dyn}$, $r_0=R_{\rm out}$, and ${\rm v}_0={\rm v}_{\rm out}$. Integrating these equations numerically, we can determine when $r$ becomes 0, and obtain an estimate of the outflow cycle duration. By doing this, we find cycle durations of $5-10$\,Myr (these can be shorter if the dynamical friction is important). 
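The ballistic estimate described above can be sketched with a simple leapfrog integration of these equations of motion. The inputs below are illustrative values of the order quoted in the tables (escape velocity at 2\,kpc, launch radius, and outflow velocity), not the exact per-object numbers:

```python
# Ballistic fall-back time for an outflowing gas parcel, integrating
# dr/dt = v, dv/dt = -mu/r^2 as above (no dynamical friction).
KPC_PER_KMS_IN_MYR = 977.8  # 1 kpc/(km/s) expressed in Myr

def fall_back_time(v_esc, r_esc_kpc, r0_kpc, v0, t0_myr=0.0, dt=1e-5):
    """Time (Myr) until the parcel returns to r = 0; velocities in km/s."""
    mu = 0.5 * r_esc_kpc * v_esc**2     # GM in kpc (km/s)^2
    r, v, t = r0_kpc, v0, 0.0
    while r > 0.0:
        v -= mu / r**2 * dt / 2.0       # half kick
        r += v * dt                     # drift
        if r <= 0.0:
            break
        v -= mu / r**2 * dt / 2.0       # half kick
        t += dt
    return t0_myr + t * KPC_PER_KMS_IN_MYR

# v_esc = 465 km/s at 2 kpc; parcel at 0.7 kpc moving outward at
# 440 km/s, launched t_dyn ~ 1.6 Myr ago
t_cycle = fall_back_time(465.0, 2.0, 0.7, 440.0, t0_myr=1.6)
```

For these inputs the parcel reaches $\sim$1\,kpc before turning around and falls back after $\sim$5$-$6\,Myr in total, consistent with the $5-10$\,Myr cycle durations quoted above.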
Therefore, even if the outflow depletion times are slightly shorter than the SF depletion times ({ $M_{\rm tot}\slash $SFR$\sim30-80$\,Myr)}, the outflowing gas will return to the starburst region after a few Myr, where it will be available again to form stars. In consequence, the main effects of these outflows are to delay the formation of stars in the nuclear starbursts and to expel a fraction of the total molecular gas ($\sim15-30$\%) into the intergalactic medium. However, they will not completely quench the nuclear star-formation. { \citet{Walter2017} suggested that the molecular outflow detected in the low-luminosity starburst galaxy NGC~253 is accelerating at a rate of 1 km\,s$^{-1}$\,pc$^{-1}$ when observed at 30\,pc resolution. For these ULIRGs, we find that the higher velocity outflowing molecular gas is not located farther from the nucleus than the lower velocity gas (see Figure~\ref{fig_alma_posdiagram} and Appendix~\ref{apx_channels}). Therefore, outflow acceleration does not seem to be important for these outflows at $\sim$500\,pc scale and will likely not affect the cycle duration and outflow effects discussed above.} \subsection{Outflow launching mechanism in starbursts}\label{ss:launch} There are two main mechanisms that can launch outflows in starbursts. Radiation pressure from young stars can deposit momentum into dust grains. Dust and gas are assumed to be dynamically coupled and, therefore, this process can increase the gas outward velocity and produce an outflow. This class of outflows is known as momentum-driven (e.g., \citealt{Murray2005, Thompson2015}). The second mechanism is related to the energy injection into the interstellar medium (ISM) by SNe. If the gas does not cool efficiently, this energy increase translates into an adiabatic expansion of the gas which drives the outflow. These outflows are known as energy-driven (e.g., \citealt{Chevalier1985, Costa2014}).
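Distinguishing these two regimes from the data amounts to a least-squares slope fit of the mass loading factor versus the outflow velocity in log-log space. A minimal version of such a fit, applied to a made-up momentum-driven toy sequence rather than the measured galaxy values, is:

```python
import math

# Minimal log-log slope estimator for the eta - v_out relation.
# Momentum-driven outflows are expected to follow eta ~ v^-1 and
# energy-driven ones eta ~ v^-2.

def loglog_fit(xs, ys):
    """Least-squares slope and intercept of log10(y) vs log10(x)."""
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    slope = num / den
    return slope, my - slope * mx

v_out = [100.0, 200.0, 400.0, 800.0]       # km/s (toy values)
eta = [100.0 / v for v in v_out]           # exact eta ~ v^-1 sequence
slope, intercept = loglog_fit(v_out, eta)  # slope -> -1
```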
\begin{figure} \centering \includegraphics[height=0.31\textwidth]{fig_eta_vout.pdf} \caption{Mass loading factor vs. outflow velocity. Only outflows powered by SF are plotted in this figure. Galaxy symbols are as in Figure~\ref{fig_sfr_mout}. The dashed black lines are the best fits with fixed slopes of --1 and --2 (see Section~\ref{ss:launch}). The red line is the best linear fit. \label{fig_eta_vout}} \end{figure} The scaling relation between the mass loading factor and the outflow velocity is different for energy- and momentum-driven outflows ($\eta\sim {\rm v^{-2}}$ for energy-driven and $\eta\sim {\rm v^{-1}}$ for momentum-driven; e.g., \citealt{Murray2005}). \citet{Cicone2014} found a slope of --1 for this relation and suggested that the molecular phase of outflows is possibly momentum-driven. However, Figure~\ref{fig_eta_vout} shows that, after adding the new data points, the slope of the $\eta$ vs. v relation is shallower than --1. The best linear fit is: \begin{equation} \log\,\eta = (0.8 \pm 0.4) - (0.3 \pm 0.2) \log {\rm v_{out}} ({\rm km\,s^{-1}}) \end{equation} This does not necessarily imply that these outflows are not momentum-driven. Actually, the --1 slope for momentum-driven outflows implicitly ignores the dependence of the outflow velocity on the optical depth, $\tau_{\rm FIR}$, of the launching region. When the FIR optical depth increases, the momentum transfer from the radiation to the dust\slash gas can be considerably more efficient \citep{Thompson2015, Hopkins2013}. For instance, if $\tau_{\rm FIR}>1$, the momentum boost factor, $\dot{P}_{\rm out}\slash (L\slash c)$, can significantly exceed $\sim$2 \citep{Thompson2015}. To test this, in the middle panel of Figure~\ref{fig_sfr_mout}, we plot the outflow momentum rate as a function of the SFR.
The best linear fit to the starburst data is: \begin{equation} \log \dot{P}_{\rm out} ({\rm g\,cm\,s^{-2}}) = (32.7 \pm 0.3) + (1.5 \pm 0.2) \log {\rm SFR} (M_\odot {\rm yr^{-1}}) \end{equation} which has a slope $>$1. That is, for those starbursts with lower SFRs, the momentum boost factor is $\sim$1 (see also \citealt{Cicone2014}). But this factor increases for objects with higher SFR up to $\sim$8. For one of these starbursts, ESO~320-G030, we measured a very high optical depth ($\gtrsim$8 at 100$\hbox{\,$\mu$m}$) in the region launching the outflow \citep{Pereira2017Water}. Therefore, higher dust opacities in the more vigorous starbursts could explain these momentum boost factors $>2$. We also explore the possible role of SNe in the launching of these outflows. We plot the momentum injected by SNe in the middle panel of Figure~\ref{fig_sfr_mout}, which is more than a factor of 10 higher than the radiation pressure. For all the starbursts, the momentum rate of their outflows is lower than the momentum due to SN explosions. Therefore, these outflows could be launched by SNe. If this is the case, the momentum coupling between the SNe and the ISM seems to be more efficient at higher SFR. While for the low SFR objects the outflows carry less than 10\% of the SNe momentum, the outflows in higher SFR objects carry up to 75\% of the momentum injected by SNe. Similarly, in the right panel of Figure~\ref{fig_sfr_mout}, we compare the kinetic luminosity of the outflows with the energy produced by SNe. The outflow kinetic luminosity represents $4-20$\% of the energy produced by SNe for the U\slash LIRGs, whereas for the lower luminosity starbursts, this fraction is $\lesssim$1\%. Therefore, if these outflows are driven by SNe, this suggests that the coupling efficiency between the SNe and the ISM increases with increasing SFR.
The best linear fit is: \begin{equation} \log L_{\rm out} ({\rm erg\,s^{-1}}) = (39.0 \pm 0.3) + (2.0 \pm 0.2) \log {\rm SFR} (M_\odot {\rm yr^{-1}}) \end{equation} We also note that, for the { AGN U\slash LIRGs}, the observed kinetic luminosities of the outflows are $1-5$\% of the AGN luminosity \citep{Cicone2014, GarciaBurillo2015}. Thus, if SNe are the main drivers of outflows in starbursts, the coupling between the SN explosions and the ISM must be more efficient than for AGN, at least, when the SFR is sufficiently high. \subsection{Multi-phase outflows}\label{ss:multiphase} \begin{table}[t] \caption{Hot-molecular outflow phase} \label{tbl_multiphase} \centering \begin{small} \begin{tabular}{lcccccccccccc} \hline \hline \\ Object & v$_{\rm cold\,H_2}$\tablefootmark{a} & v$_{\rm hot\,H_2}$\tablefootmark{b} & $M_{\rm hot\,H_2}$\tablefootmark{c} & $M_{\rm hot\,H_2}\slash M_{\rm cold\,H_2}$\tablefootmark{d}\\ & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (10$^3$\hbox{$M_{\rm \odot}$}) & (10$^{-5}$) \\ \hline\\[-2ex] I12112 NE & 465 & 430 & 6.8$\pm$3.7 & 1.3$\pm$0.7 \\ I14348 SW & 419 & 520 & 8.4$\pm$2.2 & 1.6$\pm$0.5 \\ I22491 E & 325 & 320 & 5.9$\pm$1.9 & 4.9$\pm$1.6 \\ \hline \end{tabular} \end{small} \tablefoot{ \tablefoottext{a}{Cold molecular outflow velocity (see Table~\ref{tbl_outflow_obs}).} \tablefoottext{b,c}{Velocity and mass of the hot-molecular outflowing gas \citep{Emonts2017}.} \tablefoottext{d}{Hot-to-cold molecular gas ratio in the outflows.} } \end{table} We measure similar outflow dynamical times, around 1\,Myr, in all the galaxies. These are much shorter than the age of the star-formation burst expected in ULIRGs ($\sim$60-100\,Myr; \citealt{RodriguezZaurin2010}) and also much shorter than the outflow depletion times ($\sim$15-80\,Myr; see Section~\ref{ss:sfrrate}). This might be connected to the evolution of the gas within the outflow. 
For instance, if the molecular gas is swept from the nuclear ISM, it might be able to survive only $\sim$1\,Myr in the hot gas outflow environment before it dissociates and becomes neutral atomic gas (e.g., \citealt{Decataldo2017}). These dynamical times are also consistent with those measured in the molecular outflow of a local starburst observed at much higher spatial resolution \citep{Pereira2016b, Aalto2016}. Alternatively, if the outflow has a bi-conical geometry, its projected area increases proportionally to $r^2$ as it expands. Therefore, even if the molecular gas is not dissociated, its column density rapidly decreases with increasing $r$ and, eventually, the CO emission will be below the detection limit of the observations. The present data do not allow us to distinguish between these possibilities because the outflow structure is just barely spatially resolved and, therefore, it is not possible to accurately measure the radial dependence of the outflow properties. It has been suggested that molecular gas forms in the outflow (e.g., \citealt{Ferrara2016, Richings2018}). If so, these observations indicate that molecular gas does not efficiently form in outflows, at least, beyond 1\,kpc or after 1\,Myr. \subsubsection{Hot and cold molecular phase} There are observations of the ionized and hot molecular phases of the outflows in I12112, I14348, and I22491 that demonstrate their multi-phase structure and suggest that transitions between the different phases are possible (\citealt{Arribas2014, Emonts2017}). { For these galaxies, a direct comparison of the CO(2--1) data with the observations of the ionized phase (H$\alpha$) is not possible due to the relatively low angular resolution of the H$\alpha$ data ($\gtrsim$1\arcsec). However, the detection of a broad H$\alpha$ component indicates the presence of ionized gas in the outflow.
The comparison between the cold molecular and the ionized phases of the outflow in NGC~6240 is presented by \citet{Saito2018}. They show that the outflow mass is dominated by the cold molecular phase in that object.} For the hot molecular phase, we have maps at higher angular resolution \citep{Emonts2017}. This hot phase is traced by the near-IR ro-vibrational H$_2$ transitions and is detected in three cases (I12112 NE, I14348 SW, and I22491 E). The two cases where no outflow was detected in the hot phase, I12112 SW and I14348 NE, contain the least massive of the CO outflows in our ALMA sample, and may therefore have been below the detection limit of the near-IR data. In general, there is a good agreement between the outflow velocity structures (see figures 2, 3, and 4 of \citealt{Emonts2017}). Also, there is a good agreement between the average outflow velocities (see Table~\ref{tbl_multiphase}). Interestingly, for the hot molecular H$_{2}$ gas, only the blueshifted part of the outflows was unambiguously detected. The redshifted part of the outflows, as seen in CO, may have suffered from very high obscuration in the near-IR H$_{2}$ lines, although the poorer spectral resolution and lower sensitivity of the near-IR data compared to the ALMA data makes this difficult to verify. The average hot-to-cold molecular gas mass ratio is (2.6$\pm$1.0)$\times$10$^{-5}$. If we only consider the blueshifted part of the outflows, this ratio would be higher by up to a factor of about two. These estimates are slightly lower but comparable to the ratio of 6$-$7$\times$10$^{-5}$ observed in the outflows of local LIRGs \citep{Emonts2014, Pereira2016b}, and well within the $10^{-7}-10^{-5}$ range found for molecular gas in starburst galaxies and AGN \citep{Dale2005}. This ratio provides information on the temperature distribution of molecular gas (e.g., \citealt{Pereira2014}) and the excitation of the outflowing gas (e.g., \citealt{Emonts2014, Dasyra2014}). 
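As a quick check, the average ratio quoted above follows from the three tabulated detections; the simple standard error of the mean used here gives a slightly larger uncertainty than the quoted $\pm$1.0:

```python
import math

# Mean hot-to-cold molecular mass ratio from the three detections in
# the table above (values in units of 1e-5).
ratios = [1.3, 1.6, 4.9]
mean = sum(ratios) / len(ratios)
var = sum((r - mean) ** 2 for r in ratios) / (len(ratios) - 1)
sem = math.sqrt(var / len(ratios))  # standard error of the mean
print(f"({mean:.1f} +/- {sem:.1f}) x 1e-5")  # (2.6 +/- 1.2) x 1e-5
```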
The hot-to-cold molecular gas mass ratio can also be used to obtain a rough estimate of the total outflowing mass of molecular gas when only near-IR H$_2$ data are available. This method was used in \citet{Emonts2017} to extrapolate total molecular mass outflow rates in I12112 NE, I14348 SW, and I22491 E, based on the near-IR H$_{2}$ data alone. This resulted in mass outflow estimates that were significantly lower than those found in CO and OH surveys of starburst galaxies and ULIRGs \citep{Sturm2011, Spoon2013, Veilleux2013, Cicone2014, GonzalezAlfonso2017}. However, our new ALMA results reveal higher molecular mass outflow rates, bringing them back in agreement with these earlier surveys. This shows the importance of directly observing the cold component of molecular outflows, and it highlights the synergy between ALMA and the James Webb Space Telescope for studying the role of molecular outflows in the evolution of galaxies. \section{Conclusions}\label{s:conclusions} We have analyzed new ALMA CO(2--1) observations of 3 \hbox{low-$z$} ULIRG systems ($d\sim 350$\,Mpc). Thanks to the high SNR and spatial resolution of these data, we have been able to study the physical properties and kinematics of the molecular gas around 5 out of 6 nuclei of these 3 ULIRGs. Then, we have used data from the literature to investigate the properties of these outflows and their impact on the evolution of the ULIRG systems. The main results of this paper are the following: \begin{enumerate} \item We have detected fast { (deprojected v$_{\rm out}\sim 350-550$\,km\,s$^{-1}$; v$_{\rm max}\sim 500-900$\,km\,s$^{-1}$)} massive molecular outflows ($M_{\rm out}\sim(0.3-5)\times$10$^{8}$\,\hbox{$M_{\rm \odot}$}) in the 5 well-detected nuclei of these 3 low-$z$ ULIRGs. The outflow emission is spatially resolved and we measure { deprojected outflow effective radii} between 250\,pc and 1\,kpc.
The PA of the outflow emission is compatible with an outflow perpendicular to the rotating molecular disk in 3 cases. { Only in one case, the outflow PA is clearly not along the kinematic minor axis and suggests a different outflow orientation.} \item The outflow dynamical times are between 0.5 and 3\,Myr and the outflow rates between 12 and 400\,\hbox{$M_{\rm \odot}$}\,yr$^{-1}$. Taking into account the nuclear SFR, the mass loading factors are 0.8 to $\sim$2. { These values are similar to those found in other local ULIRGs}. The total molecular gas mass in the regions where the outflows originate is $(1-7)\times$10$^9$\,\hbox{$M_{\rm \odot}$}. Therefore, the outflow depletion times are $15-80$\,Myr. { We also estimate that only $15-30$\% of the outflowing gas has ${\rm v}>{\rm v_{\rm esc}}$ and will escape the gravitational potential of the nucleus.} \item We use multiple indicators to determine the power source of these molecular outflows (e.g., mass loading factor, outflow { energy and momentum vs. those injected by SNe}, maximum outflow velocity, geometry, etc.). For all the nuclei, { the observed molecular outflows} are compatible with being powered by the strong nuclear starburst. \item The outflow depletion times are slightly shorter than the SF depletion times { ($30-80$\,Myr)}. However, we find that most of the outflowing molecular gas does not have enough velocity to escape the gravitational potential of the nucleus. Assuming that the outflowing gas is not affected by any dynamical friction, we estimate that most of this outflowing material will return to the molecular disk after $5-10$\,Myr and become available to form new stars.
Therefore, the main effects of these outflows are to expel part of the total molecular gas ($\sim15-30\%$) into the intergalactic medium and delay the formation of stars { but, possibly, they are not completely quenching the nuclear star-formation.} \item \citet{Cicone2014} suggested that outflows in starbursts are driven by the radiation pressure due to young stars (i.e., momentum-driven) based on the --1 slope of the mass loading factor { vs.} outflow velocity relation. After adding more points to this relation, we find a shallower slope of --(0.3$\pm$0.2). For momentum-driven outflows, this shallower slope can be explained if the dust optical depth increases for higher luminosity starbursts, enhancing the momentum boost factor. One of the nuclear starbursts in our sample has an optical depth $\gtrsim$8 at 100$\hbox{\,$\mu$m}$ and might support this scenario. Alternatively, these outflows might be launched by SNe. If so, the coupling efficiency between the ISM and SNe increases with increasing SFR. For the stronger starbursts, { these molecular} outflows carry up to 75\% and 20\% of the momentum and energy injected by SNe, respectively. \item We explore the possible evolution of the cold molecular gas in the outflow. The relatively small sizes ($<$1\,kpc) and short dynamical times ($<$3\,Myr) of the outflows suggest that molecular gas cannot survive longer in the outflow environment or that it cannot form efficiently beyond these distances or times. The detection of other outflow phases, hot molecular and ionized, for these galaxies suggests that transformation between the different outflow gas phases might exist. Alternatively, in a uniform bi-conical outflow geometry, the CO column density will eventually fall below the detection limit { which would explain the non-detection of the outflowing molecular gas beyond $\sim$1\,kpc. New high-spatial resolution observations of similar outflows will help to distinguish between these possibilities.
} \end{enumerate} \begin{acknowledgements} { We thank the anonymous referee for useful comments and suggestions.} MPS acknowledges support from STFC through grant ST/N000919/1. LC, SGB, and AL acknowledge financial support by the Spanish MEC under grants ESP2015-68964 and AYA2016-76682-C3-2-P. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2015.1.00263.S, ADS/JAO.ALMA\#2016.1.00170.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \end{acknowledgements}
\subsection{Synthetic dataset: Mixture of Gaussians} \label{sec:2d} \newcommand{\DrawGroup}[2]{ \begin{subfigure}{} \begin{minipage}{0.05\textwidth} \centering \vspace{-0.6in} \rotatebox{90}{#1} \rotatebox{90}{iter:#2} \end{minipage} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth, height=.11\textwidth]{results/2d_#1/sampling_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/2d_#1/h1_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/2d_#1/h2_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/2d_#1/h3_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/2d_#1/h4_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/2d_#1/D_prob_itr#2.png} \end{subfigure} } \newcommand{\DrawCaption}{ \begin{subfigure}{} \begin{minipage}{0.05\textwidth} \centering \rotatebox{90}{~~} \rotatebox{90}{~~} \end{minipage} \end{subfigure} \begin{subfigure}{} \begin{minipage}{.11\textwidth} \centering MoG samples \end{minipage} \end{subfigure} \begin{subfigure}{} \begin{minipage}{.11\textwidth} \centering h1 \end{minipage} \end{subfigure} \begin{subfigure}{} \begin{minipage}{.11\textwidth} \centering h2 \end{minipage} \end{subfigure} \begin{subfigure}{} \begin{minipage}{.11\textwidth} \centering h3 \end{minipage} \end{subfigure} \begin{subfigure}{} \begin{minipage}{.11\textwidth} \centering h4 \end{minipage} \end{subfigure} \begin{subfigure}{} \begin{minipage}{.11\textwidth} \centering D \end{minipage} \end{subfigure} } \newcommand{\DrawGroupVTWO}[3]{ \begin{subfigure}{} \begin{minipage}{0.05\textwidth} \centering \vspace{-0.6in} \rotatebox{90}{#1} \rotatebox{90}{iter:#2} \end{minipage} \end{subfigure} \begin{subfigure}{} \centering
\includegraphics[width=.11\textwidth, height=.11\textwidth]{results/#3/2d_#1/sampling_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/#3/2d_#1/h1_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/#3/2d_#1/h2_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/#3/2d_#1/h3_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/#3/2d_#1/h4_itr#2.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.11\textwidth]{results/#3/2d_#1/D_prob_itr#2.png} \end{subfigure} } \begin{figure}[h] \DrawGroupVTWO{control}{0}{extra_2d} \DrawGroupVTWO{control}{4000}{extra_2d} \DrawGroupVTWO{control}{20000}{extra_2d} \hrule \DrawGroupVTWO{treat}{0}{extra_2d} \DrawGroupVTWO{treat}{4000}{extra_2d} \DrawGroupVTWO{treat}{20000}{extra_2d} \DrawCaption \caption{Fitting 2D Mixture of Gaussian (MoG): $h1$-$h4$ show the input space linear region defined by different binary activation patterns on each layer; each colour corresponds to one unique binary pattern; the last column shows probability of being real according to \D; BRE is applied on $h2$ and $h3$. Experimental details and more visualization in Appendix\ \ref{sec:hyp_detail} and \ref{sec:moreexp}, including one set of comparison for fitting an imbalanced mixture in Fig.\ \ref{fig:more2dcontrol_v2_imbalanced}-\ref{fig:more2dtreat_v2_imbalanced}. \label{fig:2d_main} } \label{fig:2dmog} \end{figure} We first demonstrate BRE regularizer's effect on fitting a 2D mixture of Gaussian. In Figure \ref{fig:2dmog}, the top three rows and bottom three ones correspond respectively to experiments without the BRE regularizer (control) and with the regularizer (treat). Within each setting, each row represents one iteration during GAN training, selected to be at the beginning, middle, and the end of the training process. 
The first column shows real data points (blue) and generated data points (red). The second to fifth columns show hidden layers $1$ to $4$ of \D, where contiguous pixels with the same colour have the same binary activation pattern on that particular layer. The last column shows the probability of real data according to \D. The BRE regularizer is added on layers $h2$ and $h3$. More results are shown in Appendix \ref{sec:moreexp}. By adding BRE, the input domain is partitioned more finely, as reflected by the visualization for layers $h2$, $h3$ and $h4$. The richer \D representation allows more effective exploration of different input regions because the gradient signals provided by \D to \G are more diverse than in the degenerate baseline case, where \D is linear in large regions of the input. Once a real data mode is discovered, \G locks onto it without oscillation. This shows that with better \D capacity usage, the GAN optimisation converges faster and is more stable, while the resulting equilibrium suffers much less from mode dropping. \section{Model and hyperparameter details} \label{sec:hyp_detail} \subsection{2D example} \G is a 4-layer (excluding the noise layer) MLP with ReLU hidden activations and a tanh visible activation; \D is a 5-layer MLP with LeakyReLU($.2$). Both \D and \G have 10 units on each hidden layer; no batch or other normalization is used in \D or \G, and no other stabilization techniques are used. For Fig.\ \ref{fig:2d_main}, Fig.\ \ref{fig:more2dcontrol_v2}, and Fig.\ \ref{fig:more2dtreat_v2}, lr=.001 with adam(.0, .999), and BRE regularizer weight $1.$, applied on h2 and h3; both lr and BRE weight linearly decay over iterations to $10^{-6}$ and $0$, respectively. For Fig.\ \ref{fig:more2dcontrol} and Fig.\ \ref{fig:more2dtreat}, lr=.002 with adam(.5, .999), and BRE regularizer weight $1.$, applied on h2.
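To make the activation-pattern view behind these plots concrete, the sketch below (our own illustration with random, untrained weights and layer sizes, not the paper's code) extracts the joint binary on/off pattern of a small two-hidden-layer ReLU network over a grid of 2D inputs and counts the distinct patterns; each distinct joint pattern corresponds to a distinct linear region of the kind coloured in the $h1$--$h4$ panels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU net with 10 units per layer, mirroring the
# 10-unit D of the 2D experiment (weights are random here, not trained).
W1 = rng.normal(size=(2, 10)); b1 = rng.normal(size=10)
W2 = rng.normal(size=(10, 10)); b2 = rng.normal(size=10)

def activation_patterns(X):
    """Binary on/off pattern of each hidden layer for every input row."""
    h1 = np.maximum(X @ W1 + b1, 0.0)
    h2 = np.maximum(h1 @ W2 + b2, 0.0)
    return (h1 > 0).astype(np.int8), (h2 > 0).astype(np.int8)

# Sample a grid over the input square; every distinct joint pattern
# indexes a distinct linear region of the piecewise-linear network.
xs = np.linspace(-2, 2, 80)
grid = np.array([(x, y) for x in xs for y in xs])
p1, p2 = activation_patterns(grid)
joint = np.concatenate([p1, p2], axis=1)
n_regions = len({tuple(row) for row in joint})
print("distinct linear regions sampled:", n_regions)
```

Colouring each grid point by its per-layer pattern reproduces the kind of region maps shown in the $h1$--$h4$ columns.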
\subsection{Unsupervised Learning CIFAR10} In Table\ \ref{table:architectures}, ``single'' means that BRE is applied on a single layer (the middle one of all nonlinear layers of \D), while ``multi'' means all nonlinear layers except the first one and the last two (the final classification layer and the nonlinear layer before it). For Fig.\ \ref{cifar_unsup}, the default optimization setting (left column, i.e.\ (a) and (c)) is $\text{lr}\!=\!\expnumber{2}{-4}$ and one $\D$ update per $\G$ update, with the lr for both $\D$ and $\G$ annealed to $10^{-6}$ over $90K$ \G updates; the aggressive setting (right column, i.e. (b) and (d)) is $\text{lr}\!=\!\expnumber{2}{-3}$ and three $\D$ updates for every $\G$ update, with the lr for both $\D$ and $\G$ annealed to $10^{-6}$ over $10K$ \G updates. \subsection{Semi-Supervised Learning} We used exactly the same code and GAN hyperparameters \footnote{https://github.com/openai/improved-gan} from \citet{salimans2016improved}. $R_{BRE}$ regularization is applied on every second layer, starting from the second layer up to four layers before the classification layer (applied on three layers in total). On CIFAR10, we used a regularizer weight of $.01$, and on SVHN we used $0.1$. BRE is applied on real, fake and interpolated data. \section{Proof of Proposition \ref{prop:Covar}} \label{sec:proofs} \begin{proof} Let $M_i = U_i\tU_i$.
Then \begin{align*} & \E{\left( \sum_{i=1}^d U_i\tU_i\right)^2} = \E{\left( \sum_{i=1}^d M_i\right)^2} \\ & \quad = \sum_{i=1}^d \E{M_i^2} + \sum_{i\neq t} \E{M_iM_t} \\ & \quad = \sum_{i=1}^d \E{U_i^2\tU_i^2} + \sum_{i\neq t} \E{U_i\tU_iU_t\tU_t} \\ & \quad \stackrel{(1)}{=} \sum_{i=1}^d \E{U_i^2}^2 + \sum_{i\neq t} \E{U_iU_t}^2 \\ & \quad \stackrel{(2)}{=} d + \sum_{i\neq t} \E{U_iU_t}^2\\ & \quad \stackrel{(3)}{=} d + \sum_{i\neq t} \text{Cov}\left(U_i, U_t\right)^2, \end{align*} where Equation (1) is due to the independence of $U$ and $\tU$, Equation (2) is due to the fact that $U_i^2 = 1$ with probability $1$, and Equation (3) is because $\E{U_i} = 0$. \end{proof} \section{Corollary 3.3 of \citet{gavinsky2015joint}} \label{sec:cor33} \begin{thm}[Corollary 3.3 of \citet{gavinsky2015joint}] Let $H_{\min}(\Prob) = -\log\left( \max_x \Prob(X=x) \right)$. Also let $(U_1, \ldots, U_d)$ be pairwise independent Bernoulli($0.5$) random variables with joint distribution $\Prob$. Then, \[ H(\Prob) \ge H_{\min}(\Prob) \geq \log(d+1). \] \end{thm} \section{Additional Experimental Results} \label{sec:moreexp} \subsection{More samples from DCGAN-ReLU} \label{sec:more_samp_dcgan} We show more samples generated from the DCGAN-ReLU model, mentioned in Sec. \ref{sec:unsup}, without the BRE regularizer (with red frames) and with the BRE regularizer (with blue frames) in Fig. \ref{fig:more_samp_dcgan}. In Fig. \ref{fig:more_samp_dcgan}, images are arranged based on $L_2$ distances in the original pixel value space. Images that are similar in pixel values are roughly grouped together. To achieve this, we apply t-SNE \citep{maaten2008visualizing} to reduce the dimensionality of these images into 2D points. These 2D coordinates are then transformed to a 2D grid using RasterFairy \footnote{https://github.com/Quasimondo/RasterFairy}. At the same time, the neighborhood relations of the rastered 2D points are preserved to a certain degree. We then use these rastered 2D points to arrange the locations of these images.
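The embedding-to-grid step can be sketched as follows. Since RasterFairy's API is not shown here, the sketch substitutes a generic assignment solver (`linear_sum_assignment`) that maps 2D embedding points to grid cells while minimizing total squared displacement, and uses random points in place of the actual t-SNE output; it is an illustrative stand-in, not the pipeline used for the figure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Stand-ins for the 2D t-SNE embedding of the sample images.
pts = rng.uniform(size=(64, 2))

# Target: an 8x8 grid of cell centres, one image per cell.
side = 8
gx, gy = np.meshgrid(np.linspace(0, 1, side), np.linspace(0, 1, side))
cells = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Assign each point to a unique cell, minimising total squared
# displacement; this roughly preserves the embedding's neighborhoods.
cost = cdist(pts, cells, metric="sqeuclidean")
row, col = linear_sum_assignment(cost)
grid_position = {i: tuple(cells[c]) for i, c in zip(row, col)}
print("image 0 placed at grid cell", grid_position[0])
```

The optimal-assignment formulation is one way to rasterize an embedding; RasterFairy uses its own heuristics toward the same goal.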
\begin{figure}[h] \centering \includegraphics[width=\textwidth]{results/grid_tsne_cifar_dcgan.png} \caption{Rastered t-SNE visualization of DCGAN-ReLU CIFAR10 samples. Images with red frames are generated without BRE and images with blue frames are generated with BRE. Locations roughly indicate similarity between images in the pixel value space.} \label{fig:more_samp_dcgan} \end{figure} \subsection{Semi-supervised learning curves} \label{sec:cifar_semi_curves} Fig.\ \ref{fig:cifar_semi} shows the learning curves for semi-supervised learning on CIFAR10. \begin{figure}[h!] \begin{subfigure}{} \centering \includegraphics[width=.43\textwidth]{results/semi_sup/train_err.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.43\textwidth]{results/semi_sup/test_err.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.43\textwidth]{results/semi_sup/loss_lab.png} \end{subfigure} \begin{subfigure}{} \centering \includegraphics[width=.43\textwidth]{results/semi_sup/loss_unl.png} \end{subfigure} \caption{Improved semi-supervised learning on CIFAR-10: BRE regularizer placed on every second layer, starting from the second layer up to four layers before the classification layer. Regularizer weight is $.01$ and not decayed.} \label{fig:cifar_semi} \end{figure} \subsection{Reconstruction as auxiliary task to regularize \D worsens results} \label{sec:recon_aux_worsens} \begin{table}[h] \centering \begin{tabular}{ll} \toprule {} & D Recon, no BRE \\ \midrule ln, $\lambda_{recon} = 10$ & $6.1958 \pm 0.2438$ \\ ln, $\lambda_{recon} = 1$ & $6.2218 \pm 0.2390$ \\ ln, $\lambda_{recon} = .1$ & $6.2437 \pm 0.2346$ \\ ln, $\lambda_{recon} = 0$ & $6.4025 \pm 0.2187$ \\ \midrule bn, $\lambda_{recon} = 1$ & $6.5356 \pm 0.2176$ \\ bn, $\lambda_{recon} = .1$ & $6.5475 \pm 0.2798$ \\ bn, $\lambda_{recon} = 0$ & $6.5865 \pm 0.1837$ \\ \bottomrule \end{tabular} \label{table:recon_aux_worsens} \caption{Reconstruction as an auxiliary task worsens results.
$\lambda_{recon}$ is the weight of the $l_{2}$ reconstruction loss term. With both batch and layer normalization, the reconstruction auxiliary task hurts the final results.} \end{table} \input{unsup_celeba} \subsection{Additional 2D MoG Results} We show more 2D mixture of Gaussian results in Fig. \ref{fig:more2dcontrol_v2}, \ref{fig:more2dtreat_v2}, \ref{fig:more2dcontrol}, and \ref{fig:more2dtreat}. \begin{figure}[h!] \centering \DrawGroupVTWO{control}{0}{extra_2d} \DrawGroupVTWO{control}{100}{extra_2d} \DrawGroupVTWO{control}{500}{extra_2d} \DrawGroupVTWO{control}{1000}{extra_2d} \DrawGroupVTWO{control}{2000}{extra_2d} \DrawGroupVTWO{control}{3000}{extra_2d} \DrawGroupVTWO{control}{4000}{extra_2d} \DrawGroupVTWO{control}{5000}{extra_2d} \DrawGroupVTWO{control}{10000}{extra_2d} \DrawGroupVTWO{control}{20000}{extra_2d} \DrawGroupVTWO{control}{49900}{extra_2d} \DrawCaption \caption{More Results on Fitting 2D Mixture of Gaussian on the control group. See Figure \ref{fig:2dmog} for detailed description.} \label{fig:more2dcontrol_v2} \end{figure} \begin{figure}[h!] \centering \DrawGroupVTWO{treat}{0}{extra_2d} \DrawGroupVTWO{treat}{100}{extra_2d} \DrawGroupVTWO{treat}{500}{extra_2d} \DrawGroupVTWO{treat}{1000}{extra_2d} \DrawGroupVTWO{treat}{2000}{extra_2d} \DrawGroupVTWO{treat}{3000}{extra_2d} \DrawGroupVTWO{treat}{4000}{extra_2d} \DrawGroupVTWO{treat}{5000}{extra_2d} \DrawGroupVTWO{treat}{10000}{extra_2d} \DrawGroupVTWO{treat}{20000}{extra_2d} \DrawGroupVTWO{treat}{49900}{extra_2d} \DrawCaption \caption{More Results on Fitting 2D Mixture of Gaussian on the treat group. See Figure \ref{fig:2dmog} for detailed description.} \label{fig:more2dtreat_v2} \end{figure} \begin{figure}[h!]
\centering \DrawGroup{control}{1000} \DrawGroup{control}{10000} \DrawGroup{control}{20000} \DrawGroup{control}{30000} \DrawGroup{control}{40000} \DrawGroup{control}{50000} \DrawGroup{control}{60000} \DrawGroup{control}{70000} \DrawGroup{control}{80000} \DrawGroup{control}{90000} \DrawGroup{control}{98000} \DrawCaption \caption{More Results on Fitting 2D Mixture of Gaussian on the control group. See Figure \ref{fig:2dmog} for detailed description.} \label{fig:more2dcontrol} \end{figure} \begin{figure}[h!] \centering \DrawGroup{treat}{1000} \DrawGroup{treat}{10000} \DrawGroup{treat}{20000} \DrawGroup{treat}{30000} \DrawGroup{treat}{40000} \DrawGroup{treat}{50000} \DrawGroup{treat}{60000} \DrawGroup{treat}{70000} \DrawGroup{treat}{80000} \DrawGroup{treat}{90000} \DrawGroup{treat}{98000} \DrawCaption \caption{More Results on Fitting 2D Mixture of Gaussian on the treat group. See Figure \ref{fig:2dmog} for detailed description.} \label{fig:more2dtreat} \end{figure} \begin{figure}[h!] \centering \DrawGroupVTWO{control}{0}{extra_2d_unbalanced} \DrawGroupVTWO{control}{100}{extra_2d_unbalanced} \DrawGroupVTWO{control}{500}{extra_2d_unbalanced} \DrawGroupVTWO{control}{1000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{2000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{3000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{4000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{5000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{10000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{20000}{extra_2d_unbalanced} \DrawGroupVTWO{control}{49900}{extra_2d_unbalanced} \DrawCaption \caption{More Results on Fitting imbalanced 2D Mixture of Gaussian (probabilities $[.1, .3, .3, .3]$) on the control group. See Figure \ref{fig:2dmog} for detailed description.} \label{fig:more2dcontrol_v2_imbalanced} \end{figure} \begin{figure}[h!] 
\centering \DrawGroupVTWO{treat}{0}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{100}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{500}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{1000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{2000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{3000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{4000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{5000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{10000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{20000}{extra_2d_unbalanced} \DrawGroupVTWO{treat}{49900}{extra_2d_unbalanced} \DrawCaption \caption{More Results on Fitting imbalanced 2D Mixture of Gaussian (probabilities $[.1, .3, .3, .3]$) on the treat group. See Figure \ref{fig:2dmog} for detailed description.} \label{fig:more2dtreat_v2_imbalanced} \end{figure} \subsection{Faster and better convergence in unsupervised learning} \label{sec:unsup} \vspace{-.1cm} We quantitatively measure the resulting \G using the Inception score \citep{salimans2016improved}. Table \ref{table:architectures} shows improved final Inception scores on DCGAN, as well as on the following non-standard architectures (only mentioning differences from the standard DCGAN): a densely connected convnet \citep{huang2016densely} for \D; \G and \D with an equal number of filters on each layer; \D with ReLU nonlinearity. In all cases, the models regularized by BRE improve over the baseline counterparts without regularization (no-BRE). Furthermore, vanilla GANs with BRE applied on multiple \D layers (BRE\_multi) always outperform WGAN-GP \citep{gulrajani2017improved}. Fig.\ \ref{fig:unsup_samples_relu} shows some generated samples from a DCGAN-ReLU model without and with BRE regularization. Fig. \ref{fig:more_samp_dcgan} of Appendix \ref{sec:more_samp_dcgan} shows more samples laid out by t-SNE visualization to better illustrate mode collapsing.
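For reference, the Inception score used throughout is $\exp(\mathbb{E}_x\,\mathrm{KL}(p(y|x)\,\|\,p(y)))$, computed from classifier softmax outputs on generated samples. The sketch below uses synthetic probability vectors rather than actual Inception-network outputs, purely to show the computation and its extremes:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """exp(E_x KL(p(y|x) || p(y))) from rows of class probabilities.

    probs: (N, K) array of p(y|x); in practice these come from the
    Inception network applied to generated images."""
    p_y = probs.mean(axis=0, keepdims=True)                      # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

K = 10
# Collapsed generator: every sample classified identically -> score 1.
collapsed = np.tile(np.eye(K)[0], (100, 1))
# Diverse, confident samples covering all classes -> score K.
diverse = np.eye(K)[np.arange(100) % K]
print(inception_score(collapsed), inception_score(diverse))
```

The two extremes bracket the scores reported in the tables: higher values indicate samples that are both individually confident and collectively diverse.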
\begin{figure}[h] \begin{minipage}[c]{\linewidth} \begin{tabular}{ll} \toprule {} & densenet \D \\ \midrule WGAN-GP BRE\_single & $3.9589 \pm 0.6632$ \\ WGAN-GP no-BRE & $4.1046 \pm 0.3443$ \\ no-BRE & $6.3662 \pm 0.1465$ \\ BRE\_multi & $6.5650 \pm 0.1979$ \\ {\bf BRE\_single} & $\mathbf{6.6261 \pm 0.1529}$ \\ \toprule {} & Equal Size \G and \D \\ \midrule no-BRE & $5.1330 \pm 0.5491$ \\ BRE\_single & $6.0375 \pm 0.3669$ \\ BRE\_multi\_tanh & $6.3455 \pm 0.2132$ \\ WGAN-GP BRE\_single & $6.4515 \pm 0.2315$ \\ WGAN-GP no-BRE & $6.6993 \pm 0.1705$ \\ {\bf BRE\_multi} & $\mathbf{7.0569 \pm 0.2031}$ \\ \bottomrule \end{tabular} \begin{tabular}{ll} \toprule {} & ReLU \D\\ \midrule ln WGAN-GP no-BRE & $4.4359 \pm 0.2975$ \\ no-BRE & $5.5409 \pm 0.2363$ \\ WGAN-GP no-BRE & $5.9606 \pm 0.3584$ \\ WGAN-GP BRE\_single & $6.2105 \pm 0.3607$ \\ BRE\_single & $6.2526 \pm 0.2239$ \\ BRE\_multi\_tanh & $6.3754 \pm 0.2870$ \\ {\bf BRE\_multi} & $\mathbf{6.7715 \pm 0.3162}$ \\ \toprule {} & DCGAN\\ \midrule WGAN-GP no-BRE & $6.3284 \pm 0.4642$ \\ no-BRE & $6.5865 \pm 0.1837$ \\ BRE\_single & $6.6908 \pm 0.2539$ \\ {\bf BRE\_multi} & $\mathbf{6.7312 \pm 0.1365}$ \\ \bottomrule \end{tabular} \captionof{table}{BRE on various architectures: no-BRE is the baseline in each case; the BRE weight in the other cases is set to $1$; {\it single} and {\it multi} signify whether BRE is applied on one layer in the middle of \D or on multiple layers (see Appendix \ref{sec:hyp_detail} for more details); {\it ln} for layer normalization in \G and \D (default is batchnorm); {\it tanh} means the softsign nonlinearity in BRE is replaced by tanh.} \label{table:architectures} \end{minipage} \end{figure} Fig.\ \ref{fig:dcgan_convergence_speed} shows that with BRE, DCGAN training converges faster, as measured by the Inception score. The $1\sigma$ error bars are estimated from ten different random runs.
Because the DCGAN architecture is engineered to be stable, the baseline DCGAN can, in the end, still achieve a comparable Inception score to the regularized version on average. But clearly the convergence is much faster with BRE during the initial transient phase, confirming our intuition that BRE improves exploration. Fig.\ \ref{cifar_unsup} shows the Inception score and (thresholded) activation correlation values ($\RAC$) during one particular set of runs with the default DCGAN optimization setting and with a more aggressive optimization setting. In both cases, BRE regularization results in similarly faster convergence, and in a higher final Inception score in the unstable case with the more aggressive optimization. The bottom row in Fig.\ \ref{cifar_unsup} shows that BRE regularization is indeed making a qualitative difference to the activation correlation ($\RAC$) by keeping it low during training in both cases. \begin{figure}[h] \begin{minipage}[c]{.495\linewidth} \centering \includegraphics[width=\textwidth]{results/DCGAN_BRE0_BRE1/DCGAN_BRE1_vs_BRE0.png} \vspace{-.2in} \captionof{figure}{\small{Even with the stable DCGAN architecture, BRE makes convergence faster.}} \label{fig:dcgan_convergence_speed} \end{minipage} \begin{minipage}[c]{.495\linewidth} \centering \includegraphics[width=.492\textwidth]{results/cifar_archi_exp/relu/relu_bre0_middle_run0_cifar10_70e93b29d6428abf341ab5cbce4bba8d_iter_39999resampled.pdf} \centering \includegraphics[width=.492\textwidth]{results/cifar_archi_exp/relu/relu_bre1_lfl1_run8_cifar10_7e787b211f97f1f79fbd5accfee8f9b3_iter_39999resampled.pdf} \captionof{figure}{\small{Samples for DCGAN-ReLU without BRE (left) vs.\ with BRE (right)}} \label{fig:unsup_samples_relu} \end{minipage} \end{figure} \begin{figure}[h] \centering \begin{subfigure}[] { \centering \includegraphics[width=.42\textwidth]{results/unsup_inceplr0002_ndsteps1_fbfd9a88637788144083df23f2cd4524_vs_0e6d5e11b05016a7d10e7f50a21f1a50/inception_score.png} \label{small_lr_incep1} } \end{subfigure}
\centering \begin{subfigure}[] { \centering \includegraphics[width=.42\textwidth]{results/unsup_incep_lr002_ndsteps3_10ef0b6b5c7e79d37b02bd9167821b35_vs_de1f2402973ee7e6e49e3afae8ca46ee/inception_score.png} } \end{subfigure} \vspace{-.15in} \centering \begin{subfigure}[] { \centering \includegraphics[width=.42\textwidth]{results/unsup_inceplr0002_ndsteps1_fbfd9a88637788144083df23f2cd4524_vs_0e6d5e11b05016a7d10e7f50a21f1a50/stats.png} } \end{subfigure} \centering \begin{subfigure}[] { \centering \includegraphics[width=.42\textwidth]{results/unsup_incep_lr002_ndsteps3_10ef0b6b5c7e79d37b02bd9167821b35_vs_de1f2402973ee7e6e49e3afae8ca46ee/stats.png} } \end{subfigure} \vspace{-.15in} \caption{Inception scores and regularizer values during training: (left column, i.e.\ (a) and (c)) default optimization setting; (right column, i.e. (b) and (d)) more aggressive optimization. Details in Appendix \ref{sec:hyp_detail}. (top row, i.e. (a) and (b)) Inception scores during training; (bottom row, i.e. (c) and (d)) the $\RAC$ term of BRE on fake, real, and interpolated points in between. Even though BRE is not applied on real data, the model still allocates enough capacity when BRE is applied on fake and interpolated data.} \label{cifar_unsup} \end{figure} \subsection{Improved Semi-supervised Learning on CIFAR10 and SVHN} BRE regularization is not only compatible with semi-supervised learning using GANs, but also improves classification accuracy. Table\ \ref{table:semisup_cifar} shows results on CIFAR10 with the feature matching semi-supervised learning GAN. BRE allows the learning process to discover a better solution during training that also generalizes better, as indicated by a lower training classification loss as well as lower test classification error rates. We used the same code and hyperparameters \footnote{https://github.com/openai/improved-gan} from \citet{salimans2016improved}.
Details on BRE hyperparameters are in Appendix \ref{sec:hyp_detail}, and learning curve plots are in Appendix \ref{sec:cifar_semi_curves}. On the Street View House Numbers (SVHN) dataset \citep{netzer2011reading}, with the same setup from \citet{salimans2016improved}, learning is not always stable when trained for a long time. Fig.\ \ref{fig:svhn_semi_plots} (top row) shows that without BRE regularization, when trained for a very long time, learning sometimes diverges. Such failures are dramatically reduced by BRE (bottom row of Fig.\ \ref{fig:svhn_semi_plots}). \begin{table}[h] \centering \small{ \begin{tabular}{lll} \toprule {} & Test error rate ($\%$) & Train classification loss\\ \midrule FM (reported in \cite{salimans2016improved}) & $ 18.63 \pm 2.32$ & {}\\ FM, 10 ensemble (reported in \cite{salimans2016improved}) & $15.59 \pm 0.47$ & {}\\ \midrule FM (our run) & $17.42 \pm 0.50$ & $\expnumber{9.25}{-4} \pm \expnumber{5.05}{-4}$\\ FM $+$ BRE & $16.98 \pm 0.52$ & $\expnumber{5.03}{-4} \pm \expnumber{3.50}{-4}$\\ FM, 10 ensemble (our run) & $14.25$ & {}\\ FM $+$ BRE, 10 ensemble & $13.93$ & {}\\ \bottomrule \end{tabular} } \caption{Semi-supervised learning on CIFAR10: feature matching (FM) from \cite{salimans2016improved}; 1000 labeled training examples.} \label{table:semisup_cifar} \end{table} \begin{figure}[h!]
\begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/control_loss_lab.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/control_loss_unl.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/control_train_err.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/control_test_err.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/treat_loss_lab.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/treat_loss_unl.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/treat_train_err.png} } \end{subfigure} \begin{subfigure} { \centering \includegraphics[width=0.229\textwidth]{results/semi_sup_svhn/treat_test_err.png} } \end{subfigure} \caption{Improved semi-supervised learning on SVHN: each curve corresponds to a different random seeding; we repeat the same set of seeds for runs without BRE (top row) and with BRE (bottom row); both the random seed for selecting labeled examples and the random seeds for model parameter initialization are varied. } \label{fig:svhn_semi_plots} \end{figure} \section{Capacity usage of rectifier nets and its effects on GAN training} \label{sec:nn_capacity_and_gan} \vspace{-.1cm} \begin{figure}[ht] \center \includegraphics[width=0.7\linewidth]{figs/iclr_capacity} \vspace{-0.1in} \caption{Capacity usage of the rectifier discriminator $\D$ in different scenarios. $\D$ (a rectifier net) cuts the input space into different linear regions, since rectifier nets compute piece-wise linear functions.
{\bf left}: $\D$ uniformly spreads its capacity in the input space, but does not have enough capacity to distinguish all subtle variations within the data distribution. {\bf middle}: $\D$ uses its capacity in the region with no data; while real and fake data are correctly separated, variations within the real data distribution are not represented by $\D$, so they cannot possibly be communicated to $\G$ if this degeneracy persists through training; meanwhile, all fake points in the same linear region pass the same gradient information to $\G$, even if they are visually distinct. {\bf right}: $\D$ spends most capacity on real and fake data, but also in regions where \G might move its mass to in future iterations.} \label{fig:capacity_regeme} \end{figure} The motivation of our regularizer starts with the observation that during GAN training, the generator \G receives information about the input space \emph{only} indirectly through the gradient of \D, $\nabla_x \D(x)$. Typically \D is a rectifier net. Absent the final sigmoid nonlinearity, \D computes a piecewise linear function, meaning that the learning signal to \G is (almost) piecewise constant. The final sigmoid nonlinearity does not change the direction of the input gradient in each linear region, but merely the scale of the gradient vectors. The learning of \G in GANs can be interpreted as the movement of the fake samples generated by $G$ toward the real data distribution, guided by the (almost) piecewise constant vectorial signals according to the input space partitioning by \D. Hence, the diversity and informativeness of the learning signals to \G are closely related to how the input space is partitioned, i.e.\ how \D's model capacity is allocated. How much capacity \D allocates to a region of the input space can be approximately measured by the number of linear pieces in that region \citep{montufar2014number}.
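The piecewise-constant nature of the signal can be checked directly: for a rectifier net, the input gradient of the pre-sigmoid logit depends only on the layers' on/off masks, so two inputs in the same linear region receive exactly the same gradient. A minimal numeric sketch (random weights, our own construction, standing in for \D):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny rectifier "discriminator" logit: x -> w3 . relu(W2 relu(W1 x + b1) + b2)
W1 = rng.normal(size=(5, 2)); b1 = rng.normal(size=5)
W2 = rng.normal(size=(5, 5)); b2 = rng.normal(size=5)
w3 = rng.normal(size=5)

def logit_and_grad(x):
    a1 = W1 @ x + b1; m1 = (a1 > 0).astype(float); h1 = a1 * m1
    a2 = W2 @ h1 + b2; m2 = (a2 > 0).astype(float); h2 = a2 * m2
    # Within one activation region the net is linear in x, so the input
    # gradient depends only on the binary masks (m1, m2).
    grad = W1.T @ (m1 * (W2.T @ (m2 * w3)))
    return w3 @ h2, grad, (tuple(m1), tuple(m2))

x = rng.normal(size=2)
_, g_a, pat_a = logit_and_grad(x)
_, g_b, pat_b = logit_and_grad(x + 1e-9)   # tiny perturbation, same region
print("same pattern:", pat_a == pat_b,
      "| gradient unchanged within region:", np.allclose(g_a, g_b))
```

Since the sigmoid only rescales this gradient, every fake sample inside one region pushes \G in the same direction, which is exactly the degeneracy the regularizer targets.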
When \D spreads out its model capacity in the right way, the evenly dispersed partitioning helps \G to explore better and discover the real data manifold, while avoiding large unstable jumps due to overconfident extrapolation made by \D. Ideally, when GAN training is stable, the min-max game eventually forces \D to represent subtle variations in the real data distribution and pass this information on to \G. However, the discriminator \D is solely tasked to separate real samples from the generated fake ones. Thus \D has no incentive to represent such variations, especially when the classification task for \D is too simple. Such is always the case in the early stage of training, and it may persist to the later stages if the input space has high dimensionality or if \G has already collapsed. In these situations, \D could overfit, and its internal layers could have degenerate representations whereby large portions of the input space are modelled as single linear regions, as pictorially depicted in Fig.\ \ref{fig:capacity_regeme} and shown in the synthetic experiment in Sec.\ \ref{sec:2d}. With such degeneracy, the learning signals from \D are not diverse and fail to capture the differences among different modes or the subtle variations of the real data. Furthermore, such degeneracy could also cause the learning of \G to bluntly extrapolate, resulting in large updates, which in turn drop already discovered real data modes and/or lead to oscillations. We observe this phenomenon in the synthetic data problem in Sec.\ \ref{sec:2d}. In this paper, we propose a new regularizer for training GANs where \D is a rectifier net. Our regularizer encourages the discriminator $D$ to cut the input space more finely around where the current \G distribution is supported, as well as where training might transport the generated data distribution in the near future, as depicted in Fig.\ \ref{fig:capacity_regeme} ({right}).
In this way, \G receives rich guidance for \emph{faster exploration} and \emph{more stable convergence}. The regularizer facilitates exploration because \D tells apart the generated fake samples from the real ones in \emph{distinct} ways: if the fake data points $x$ lie in different regions, the learning signals $\nabla_{x}\D(x)$ to \G are likely to point in different directions. Hence a concentrated mass in the fake data distribution has a better chance of being spread apart. On the other hand, the convergence to equilibrium is more stable because there are fewer large piecewise linear regions, where \G learning constantly receives the same transportation direction, potentially leading to overshoot and oscillation. Our regularizer is constructed to encourage the activation patterns of the internal representations of points in a mini-batch to be diverse. As shown in \citet{raghu2016expressive}, the different local linear regions defined by $\D$ are closely related to the different activation patterns of $D$. In particular, two input points into \D with different activation patterns on all layers of $\D$ are guaranteed to lie in different linear regions. More details are presented in Sec.\ \ref{sec:BRE}, where the regularizer is defined, along with the analysis of its properties. \vspace{-.1cm} \subsection{Related works} \vspace{-.1cm} \label{subsec:relatedworks} Other regularization/gradient penalty techniques have also been proposed recently to stabilize GAN training \citep{gulrajani2017improved, nagarajan2017gradient}. \citet{gulrajani2017improved} adds an input gradient penalty to the update of \D, so that the magnitude of the signals passed to \G is controlled. \citet{nagarajan2017gradient} modifies the update of \G to avoid going where the magnitude of $\nabla_x \D(x)$ is large. These methods, as well as other similar works that constrain the input gradient norm or the Lipschitz constant of \D, all try to stabilize the training dynamics by regularizing the learning signal magnitude. This is different from our method, which diversifies the learning signal directions.
As discussed in the previous section, the diversified signal directions help both the convergence speed and the stability of the training. In Sec.\ \ref{sec:unsup}, we empirically demonstrate that our proposed method achieves better results than Wasserstein GAN with gradient penalty (WGAN-GP) \citep{gulrajani2017improved}. The role of the model capacity of the discriminator $D$ in training generative adversarial networks (GANs) has been explored previously by \citet{arora2017generalization}. They show that a $\D$ with a finite number of parameters has limited capacity in distinguishing real data from the generated ones, and they suggest increasing the discriminator $\D$'s capacity among other modifications. Our work can be viewed as a continuation along this direction, except that we treat the model capacity not as a static number, but as a dynamic function over the different parts of the input space during training: even with a large number of parameters, \D might not use its capacity in the right way to help convergence, as discussed previously. We explore the question of \textit{where} and \textit{how} $\D$ can utilize its limited capacity effectively for better training convergence. Encouraging $\D$ to use its capacity in a constructive way is non-trivial. One theoretically sound potential approach to regularize \D is to use a Bayesian neural net, whose model capacity away from data is not degenerate. However, computationally scalable deep Bayesian neural networks are still an active area of research \citep{hernandez2015probabilistic, hasenclever2017distributed} and are not easy to use. Alternatively, we can use auxiliary tasks to regularize $D$'s capacity usage. Given labelled data, semi-supervised learning as an auxiliary task for \D, as shown in \citet{salimans2016improved}, improves GAN training stability and the resulting generative model.
We hypothesize that if the data domain has other structures that can be exploited as supervised learning signals, then exploiting these structures could similarly improve GAN training stability, as in \citet{salimans2016improved}. When no supervised task is available, auto-encoding is another possibility. Energy-Based GAN (EBGAN) \citep{zhao2016energy} and Boundary Equilibrium GAN (BEGAN) \citep{berthelot2017began} use auto-encoders as their discriminators. However, both EBGAN and BEGAN have objectives different from that of the vanilla GAN. Furthermore, instead of using auto-encoding simply to regularize \D, the auto-encoder loss is used to discriminate real data from fake ones. Hence, it is unclear whether their benefits stem from the regularization effects or from the alternative classification approach. Another set of works that use auto-encoding as an auxiliary task learn an inference network along with the GAN \citep{donahue2016adversarial,dumoulin2016adversarially}. However, they both modify the input to \D, so that \D classifies not just the data, but the data together with the corresponding latent codes from \G. Again, in this case, it is unclear whether the regularization effect on the model capacity of \D is the source of any improvement in learning stability. Our preliminary results on using an auxiliary auto-encoding loss on real data show that it does not lead to improvement (see Discussion and Future Work in Sec.\ \ref{sec:discussion}). \section{Discussion and future work} \label{sec:discussion} There are still many unexplored avenues along this line of research. For example, how can our new regularizer be combined with other GAN training techniques to further improve the training of GANs? We leave such explorations for future work. Meanwhile, there are two interesting questions related to the central theme of this work. \vspace{-.2cm} \subsection*{Does directly regularizing the diversity of $\nabla_x \D(x)$ work?
If not, why not?} To diversify \G's learning signals, it might be tempting to enforce the gradient directions ${\nabla_x \D(x_k)}$ to be diverse. However, in rectifier networks, if two inputs share the same activation pattern, the input gradients located at the two points are co-linear; hence any gradient-based learning with such a diversity regularizer would have difficulty pulling them apart. In general, unlike BRE, which operates directly on both activated and non-activated portions of \D's internal units, an input gradient regularizer can only access information on the activated paths in the network, so that it can only encourage existing non-shared activated paths, but cannot directly create new ones. In theory, tanh nonlinearities as activations for \D could avoid this problem, but such networks are hard to train in the first place. In our preliminary studies on networks with tanh, an input gradient diversity regularizer based on either cosine similarity or a soft-sign relaxation like BRE's did not work. \vspace{-.1cm} \subsection*{Could auxiliary tasks help regularize \D?} As discussed in Sec.\ \ref{subsec:relatedworks}, auxiliary tasks could potentially regularize \D and stabilize training. One possible auxiliary task is a reconstruction loss. We performed some preliminary experiments and found that reconstructing real data as an auxiliary task worsens the resulting learned \G. See Appendix \ref{sec:recon_aux_worsens} for results. Further study is needed but is beyond the scope of this work. \vspace{-.2cm} \section{Conclusions} In this paper, we proposed a novel regularizer to guide the discriminator in GANs to better allocate its model capacity. Based on the relation between the model capacity and the activation pattern of the network, we constructed our regularizer to encourage a high joint entropy of the activation patterns on the hidden layers of the discriminator $D$.
Experimental results demonstrated the benefits of our new regularizer: faster progress in the initial phase of learning thanks to improved exploration, more stable convergence, and better final results in both unsupervised and semi-supervised learning. \section{Experiments} \label{subsec:experiments} Using a 2D synthetic dataset and the CIFAR10 dataset \citep{krizhevsky2009learning}, we show that our BRE improves unsupervised GAN training in two ways: \begin{enumerate*}[label={(\alph*)}] \item when GAN training is unstable (e.g.\ due to architectures that are less well tuned than DCGAN \citep{radford2015unsupervised}), BRE stabilizes the training and achieves much-improved results, often surpassing tuned configurations; \item with architecture and hyperparameter settings that are already engineered to be stable, BRE makes GAN learning converge faster. \end{enumerate*} We then demonstrate that BRE regularization improves semi-supervised classification accuracy on the CIFAR10 and SVHN datasets \citep{netzer2011reading}. Additional results on an imbalanced 2D mixture as well as the CelebA dataset are presented in Appendix \ref{sec:moreexp}. \vspace{-.2cm} \input{2d_example} \input{cifar} \section{Introduction} \vspace{-.2cm} Generative Adversarial Network (GAN) \citep{goodfellow2014generative} has been a promising new approach to unsupervised learning of complex high-dimensional data in the last two years, with successful applications to image data \citep{isola2016image,shrivastava2016learning}, and high potential for predictive representation learning \citep{mathieu2015deep} as well as reinforcement learning \citep{finn2016connection,henderson2017optiongan}. In a nutshell, GANs learn from unlabeled data by engaging the generative model (\G) in an adversarial game with a discriminator (\D). \D learns to tell apart fake data generated by \G from real data, while \G learns to fool \D, having access to \D's input gradient.
Despite its success in generating high-quality data, such an adversarial game setting also raises challenges for the training of GANs. Many architectures and techniques have been proposed \citep{radford2015unsupervised,salimans2016improved,gulrajani2017improved} to reduce extreme failures and improve the sample quality of generated data. However, many theoretical and practical open problems remain, which have impeded the ease of use of GANs on new problems. In particular, $\G$ often fails to capture certain variations or modes in the real data distribution, while $\D$ fails to exploit this failure to provide better training signals for $\G$, leading to subtle mode collapse. Recently, \citet{arora2017generalization} showed that the capacity of $D$ plays an essential role in giving $G$ sufficient learning guidance to model the complex real data distribution. With insufficient capacity, $\D$ could fail to distinguish real and generated data distributions even when their Jensen-Shannon divergence or Wasserstein distance is not small. In this work, we demonstrate that even with sufficient maximum capacity, $\D$ might not allocate its capacity in a desirable way that facilitates convergence to a good equilibrium. We then propose a novel regularizer to guide \D toward a better model capacity allocation. Our regularizer is constructed to encourage \D's hidden binary activation patterns to have high joint entropy, based on a connection between the model capacity of a rectifier net and its internal binary activation patterns. Our experiments show that such a high entropy representation leads to faster convergence, improved sample quality, as well as lower errors in semi-supervised learning. Code will be made available at \url{https://github.com/BorealisAI/bre-gan}. \section{Related} Model capacity in generative adversarial networks (GANs) has been explored previously \cite{arora2017generalization}. While that work shows that a discriminator $D$ with a finite number of parameters cannot detect insufficient diversity in generator $G$'s samples, the question of \textit{where} and \textit{how} $D$ can utilize its limited capacity effectively is left unanswered. In this direction, our work can be thought of as a method to direct $D$'s finite parameter resources onto generated data as well as interpolations between real and generated data. This is ``where'' capacity is addressed. Moreover, not only should $D$ do well on its classification task, but it also needs to tell apart each generated or interpolated sample in \textit{distinct} ways. This is ``how'' capacity is used.
Both are important for GANs, because $D$'s role is to learn all features from real data and pass them to $G$ via $\nabla_{x}D(x)$; this also distinguishes $D$'s role from regular supervised learning tasks, where features irrelevant to the labels can be forgotten. However, the number of parameters of a neural net is a very weak notion of model capacity. A finer notion for measuring capacity, particularly suited for rectifier nets, is the activation pattern \cite{raghu2016expressive}. In their work, it is shown that the number of activation patterns is upper bounded by the network's depth and width. Under GANs' finite capacity constraint, it is thus important to allocate activation patterns to interpolated and generated data and not waste capacity on places where there is no data. Since the activation patterns correspond to the piecewise linear regions into which $D$ cuts the input space \cite{montufar2014number}, each piecewise linear region should contain as few data points as possible. When data points $x$ lie in different regions, the gradients $\nabla_{x}D(x)$ are likely to be diverse, so $G$ in turn can receive diverse learning signals from $\nabla_{x}D(x)$. It is in this sense that we encourage the model capacity usage of $D$. Henceforth, we use the activation pattern as a measure of where and how $D$ uses its model capacity. See Figure \ref{fig:capacity_regeme} for a pictorial illustration. \section{Binarized Representation Entropy} \label{sec:BRE} We now introduce our regularizer, the binarized representation entropy regularizer (BRE). Recall that we would like to encourage diverse activation patterns in $D$ across different samples. For a sample $x$, its activation pattern on a particular internal layer of $D$ can be represented by a binary vector, as shown in Figure \ref{fig:brelocation}. In particular, let $\vc{h} \in \real^d$ be the immediate pre-nonlinearity activity of a sample $x$ on a particular layer of $d$ hidden units\footnote{We use column vectors in this paper.}.
The activation pattern of $x$ on this layer can be represented by the sign vector of $\vc{h}$, defined as $\ba = \sign{\vc{h}} := \frac{\vc{h}}{|\vc{h}|} \in \{\pm1\}^d$, where $|\cdot|$ is the entry-wise absolute value. \begin{wrapfigure}{r}{0.5\textwidth} \vspace{-.3cm} \begin{framed} \centering \includegraphics[width = \textwidth] {figs/BRE_location} \vspace{-0.5cm} \caption{\small{Activation vector $\ba_k$ of a sample $x_k$ on a layer $L$ immediately before nonlinearity.}} \label{fig:brelocation} \end{framed} \vspace{-0.3cm} \end{wrapfigure} We call this binary vector $\ba \in \{\pm1\}^d$ the activation vector of the sample $x$ on this particular layer. In this work, we model the activation vector of each sample in a particular layer of $D$ as a random binary vector. Given a mini-batch $\{x_1, \ldots, x_K\}$ of size $K$, assume that each binary activation vector $\ba_k$ of $x_k$, $k = 1, \ldots, K$, on a particular layer with $d$ hidden units is an independent sample of a random binary vector $U = (U_1, \ldots, U_d)$, where $U_i$ denotes a Bernoulli random variable\footnote{The Bernoulli distribution is defined over $\{ +1, -1\}$ instead of $\{0,1\}$.} with parameter $p_i$ and distribution function $\Prob_i$ for $i=1, \ldots, d$. Denote the joint distribution function of $(U_1, \ldots, U_d)$ by $\Prob$. To have diverse activation patterns, we would like to construct a regularizer that encourages $\Prob$ to have a large joint entropy. Ideally, one could use an empirical estimate of the entropy function as the desired regularizer. However, sample-based estimation of the entropy of a high-dimensional distribution is well known to be difficult, especially with a small mini-batch size \citep{darbellay1999estimation,miller2003new,kybic2007high,kybic2012approximate,scott2015multivariate}.
We instead propose a simple {\em binarized representation entropy} (BRE) regularizer, which encourages the entropy of $\Prob$ to be larger (in a weak manner). For a particular layer in $D$, our BRE regularizer $\RBRE$ is computed over a mini-batch of $\{x_1, \ldots, x_K\}$, and consists of two terms, {\em marginal entropy} $\RME$ and {\em activation correlation} $\RAC$, both acting on the binarized activation vectors of the hidden units\footnote{We may apply this regularizer to multiple layers in $D$. In that case, we will sum all the $\RBRE$'s of each layer.}: $\RBRE = \RME + \RAC$, where \vspace{-.2cm} \begin{equation} \RME = \frac{1}{d}\sum_{i=1}^d \bar{\ba}_{(i)}^2 = \frac{1}{d}\sum_{i=1}^d \left( \frac{1}{K} \sum_{k=1}^{K} \ba_{k,i}\right)^2; \quad \text{and } \quad \RAC = \frac{1}{K(K-1)} \sum_{\overset{j, k = 1}{j \neq k}}^{K} \frac{| \ba_{j}^{\T}\ba_{k}| }{d}. \end{equation} \vspace{-.2cm} \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-.7cm} \begin{framed} \centering \includegraphics[width = \textwidth] {figs/BRE_explained_6} \caption{\small{Notations for defining $\RBRE$.}} \label{fig:breexplanation} \end{framed} \vspace{-0.7cm} \end{wrapfigure} Here $\bar{\ba}_{(i)} = \frac{1}{K} \sum_{k=1}^{K} \ba_{k,i}$ is the average of the $i$th element (corresponding to the $i$th hidden unit) of the activation vectors $\ba_k$ across the mini-batch, where $\ba_k$ is the activation vector of $x_k$ for $k=1,\ldots, K$, as shown in Figure \ref{fig:breexplanation}. Thus $\RME$ can be interpreted as an empirical estimate of $\frac{1}{d}\sum_{i=1}^d \E{U_i}^2$, and $\RAC$ as an empirical estimate of $\frac{1}{d}\E{\vert U^\top\tU \vert} =\frac{1}{d}\E{\vert \sum_{i=1}^{d}U_i\tU_i \vert}$, where $U$, $\tU$ are two i.i.d. random vectors with probability function $\Prob$. 
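As an illustrative sketch (ours, not the authors' released code), the two terms can be computed over a mini-batch of binarized activation vectors as follows, where the $(K, d)$ array \texttt{A} of $\pm 1$ entries is an assumed input:

```python
import numpy as np

def r_bre(A):
    """BRE regularizer R_BRE = R_ME + R_AC for one layer and one mini-batch.

    A: (K, d) array whose rows are the {-1, +1} activation
    vectors a_k of the K samples on a layer of d hidden units.
    """
    K, d = A.shape
    # R_ME: squared per-unit means, averaged over the d units
    r_me = np.mean(np.mean(A, axis=0) ** 2)
    # R_AC: average of |a_j^T a_k| / d over all ordered pairs j != k
    G = np.abs(A @ A.T) / d
    r_ac = (G.sum() - np.trace(G)) / (K * (K - 1))
    return r_me + r_ac
```

For instance, the two orthogonal rows $(1,1,-1,-1)$ and $(1,-1,1,-1)$ give $\RAC = 0$ but $\RME = 0.5$, since two of the four units are constant across the batch.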
As shown in Section \ref{subsec:suf_cond}, our regularizer encourages a large joint entropy of this random binary vector. In particular, we show that the first term, $\RME$, encourages individual hidden units to be active half of the time on average, i.e.\ to have high {\em marginal entropy}; the second term, $\RAC$, encourages low {\em activation correlation} between each pair of the hidden units. We further show in Section \ref{subsec:nes_cond} that having the regularizer close to $0$ is a necessary condition for $(U_1, \ldots, U_d)$ to achieve its maximum entropy. Details of the practical implementation of our regularizer are discussed in Section \ref{subsec:GANtraining}. \subsection{BRE encourages high joint entropy of the activation patterns} \label{subsec:suf_cond} Note that each summand $\bar{\ba}_{(i)}$ in $\RME$ is an empirical estimate of $2p_i - 1$, the mean of the marginal distribution $\Prob_i$. Thus minimizing $\bar{\ba}_{(i)}^2$ leads to $p_i=\frac12$, i.e.\ $U_i$ is zero-mean for $i = 1, \ldots, d$. In other words, $\RME$ is $0$ when each hidden unit outputs $+1$ and $-1$ equally often across the mini-batch. Moreover, for $j,k = 1,\ldots, K$ where $j\neq k$, minimizing $|\ba_j^{\top}\ba_k|$ in the second term $\RAC$ is essentially equivalent to minimizing $\left(\ba_j^{\top}\ba_k\right)^2$. Thus, minimizing $\RAC$ can be seen as minimizing $\E{\left(U^{\top}\tU\right)^2}$ where $U, \tU$ are i.i.d. from $\Prob$. Since minimizing $\RME$ enforces $U_i$ to be zero-mean for $i = 1,\ldots, d$, as shown in Proposition \ref{prop:Covar}, minimizing $\RAC$ then enforces the pairwise independence of the $U_i$'s.
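The zero-covariance case of Proposition \ref{prop:Covar} stated below, where the right-hand side reduces to exactly $d$, can be checked by a quick Monte Carlo simulation (our illustration, with arbitrarily chosen sizes; not part of the method):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 200_000
# two independent batches of zero-mean {-1, +1}^d vectors (Cov(U_i, U_t) = 0)
U = rng.choice([-1.0, 1.0], size=(n, d))
V = rng.choice([-1.0, 1.0], size=(n, d))
est = np.mean(np.sum(U * V, axis=1) ** 2)  # estimates E[(U^T U~)^2]
print(est)  # close to d = 64 in this fully independent case
```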
Lastly, assuming the hidden units $U_i$ are zero-mean and pairwise independent, by Corollary 3.3 of \citet{gavinsky2015joint} (which we restate in Appendix \ref{sec:cor33} for completeness), we have that the entropy of $\Prob$ satisfies \[ H(\Prob) \geq \log(d+1). \] \begin{prop} \label{prop:Covar} Let $U = (U_1, \ldots, U_d)$ be a zero-mean multivariate Bernoulli vector with distribution $\Prob$, and let $\tU = (\tU_1, \ldots, \tU_d)$ denote another random vector with distribution $\Prob$ that is independent of $U$. Then \[ \E{\left(U^{\top}\tU\right)^2} = \E{\left( \sum_{i=1}^d U_i\tU_i\right)^2} = d + \sum_{\overset{i,t = 1}{i\neq t}}^{d} \text{Cov}\left(U_i, U_t\right)^2. \] \end{prop} We defer the proof of this proposition to Appendix \ref{sec:proofs}. \subsection{Maximum entropy representation has $\RBRE \approx 0$} \label{subsec:nes_cond} We further show that $\RBRE \approx 0$ is a necessary condition for $\Prob$ to achieve the maximum entropy. It is straightforward that the maximum entropy of $\Prob$ is achieved if and only if $\E{U_i} = 0$ ($p_i=1/2$) for all $i \in \{1, ..., d\}$, i.e.\ each hidden unit is activated half of the time, and $(U_1, \ldots, U_d)$ are mutually independent. In this case, the $i$th element of the average activation vector $\bar{\ba}_{(i)}$ is approximately zero for $i \in \{1, ..., d\}$, and so is $\RME$. Further, note that $\RAC$ is an empirical estimate of $\E{\left| \sum_{i=1}^d M_i /d\, \right|}$ where $M_i = U_i\tU_i$ for $i = 1, \ldots, d$. Given that $p_i=1/2$ and the $U_i$'s are mutually independent, one can show that the $M_i$'s are mutually independent and Bernoulli($0.5$) distributed as well. Therefore, by the Central Limit Theorem, $\sum_{i=1}^d M_i$ converges in distribution to the Gaussian $\mathcal{N}(0, d)$. Given sufficiently large $d$, the distribution of $\frac{\sum_{i=1}^d M_i}{d} $ is approximately $\mathcal{N}(0, 1/d)$, and thus $\RAC$ is approximately zero\footnote{Note that the expectation of $R_{AC}$ under the maximum entropy assumption is not exactly zero, but a small number on the order of $10^{-3}$.}. \subsection{GAN training with BRE regularizer} \label{subsec:GANtraining} \vspace{-.1cm} \subsubsection*{\textbf{Practical implementations of $R_{BRE}$}} \vspace{-.2cm} In practice, due to the degenerate gradient of the sign function, we replace $\ba$ in $\RME$ by its smooth approximation $\vc{a} = \softsign{\vc{h}} := \frac{\vc{h}}{|\vc{h}| + \epsilon}$, where $\epsilon$ is a hyperparameter to be chosen. If $\epsilon$ is too small, the nonlinearity becomes too non-smooth for stochastic gradient descent; if it is too large, it fails to be a good approximation to the sign function. Furthermore, not only could different layers have different scales of $\vc{h}$, requiring different $\epsilon$'s, but the scale of $\vc{h}$ could also change during training. Therefore, instead of setting a fixed $\epsilon$, we set $\epsilon = \zeta ~\text{avg}({|\vc{h}|})$, where $\zeta$ is a small constant and $\text{avg}({|\vc{h}|})$ is a scalar average running over the samples in the minibatch and the $d$ dimensions of the layer.
In this way, $\softsign{\cdot}$ is invariant with respect to any multiplicative scaling of $\vc{h}$ in the forward pass of the computation; in the backward pass for the gradient computation, we do not backpropagate through $\epsilon$. We choose $\zeta\!=\!0.001$, as we observe empirically that this usually makes $90\%$ to $99\%$ of the units have absolute value at least $0.9$. An alternative to this softsign is the tanh nonlinearity. However, tanh lacks the scale invariance of our proposed softsign with varying $\epsilon$, and hence is potentially less effective in capturing the nature of the input space partitioning. In Sec.\ \ref{sec:unsup} (Table \ref{table:architectures}), we confirm empirically that using tanh instead of softsign decreases the effectiveness of BRE. We also relax $\RAC$ by allowing a soft margin term, as $\RAC = \text{avg}_{j \neq k} \max \left(0, | \vc{a}_{j}^{\T}\vc{a}_{k}| / d - \eta \right)$. Recall that $\vc{a}_{j}^{\T}\vc{a}_{k} / d$ has an approximate distribution of $\mathcal{N}(0,1/d)$, so a good choice for the margin threshold is $\eta = c \sqrt{{1}/{d}}$, where we adopt the ``$3\sigma$ rule'' and choose $c=3$ to leave $99.7\%$ of the $j,k$ pairs unpenalized in the maximum entropy case. To regularize GAN training, $R_{BRE}$ is applied to the immediate pre-nonlinearity activities on selected layers of \D. Therefore, if there is any normalization layer before the nonlinearity, $R_{BRE}$ needs to be applied after the normalization. We emphasize that we use softsign for the regularizer only; we do not modify the nonlinearity or any other structure of the neural net. \vspace{-.2cm} \subsubsection*{\textbf{Which layers should $R_{BRE}$ be applied on?}} \vspace{-.2cm} Technically, $R_{BRE}$ can be applied to any rectifier layer before the nonlinearity.
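Putting the practical pieces together (softsign with a scale-adaptive $\epsilon$ and the soft-margin $\RAC$ with threshold $\eta = 3\sqrt{1/d}$), a minimal sketch follows. This is our illustration assuming the paper's choices $\zeta = 0.001$ and $c = 3$, and it omits the stop-gradient on $\epsilon$ that actual training would require:

```python
import numpy as np

def soft_r_bre(H, zeta=0.001, c=3.0):
    """Differentiable surrogate of R_BRE on pre-nonlinearity activities.

    H: (K, d) array of pre-nonlinearity activities h_k for one mini-batch.
    (In training, no gradient would be propagated through eps.)
    """
    K, d = H.shape
    eps = zeta * np.mean(np.abs(H))        # scale-adaptive epsilon
    A = H / (np.abs(H) + eps)              # softsign(h), approximately in {-1, +1}
    r_me = np.mean(np.mean(A, axis=0) ** 2)
    eta = c * np.sqrt(1.0 / d)             # "3 sigma" margin threshold
    G = np.abs(A @ A.T) / d
    off_diag = G[~np.eye(K, dtype=bool)]   # entries with j != k
    r_ac = np.mean(np.maximum(0.0, off_diag - eta))
    return r_me + r_ac
```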
However, having it on the last hidden rectifier layer before classification might hinder \D's ability to separate real from fake, as the high entropy representation encouraged by $R_{BRE}$ might not be compatible with linear classification. Therefore, for unsupervised learning, we apply $R_{BRE}$ to all rectifier nonlinearities except the last one before the final classification; for semi-supervised tasks using the augmented class setup from \citet{salimans2016improved}, we apply $R_{BRE}$ only to the $2$nd, $4$th, and $6$th convolutional layers, and leave the three nonlinear layers before the final softmax untouched. \vspace{-.2cm} \subsubsection*{\textbf{Which part of the data should $R_{BRE}$ be applied on?}} \vspace{-.2cm} Recall from Sec.\ \ref{sec:nn_capacity_and_gan} that we want $\D$ to spend enough capacity on both the real data manifold and the current generated data manifold of \G, as well as to have adequate capacity in regions where we do not currently observe real or fake points but might in future iterations. To enforce this, we apply $R_{BRE}$ to the generated data minibatch, as well as to random interpolations between real and generated data. Specifically, let $\vc{x}_k$ and $\vc{\tilde{x}}_k$ be a real and a fake data point, respectively; we sample $\alpha_k \sim U(0,1)$, let $\vc{\hat{x}}_k = \alpha_k \vc{x}_k+ (1-\alpha_k) \vc{\tilde{x}}_k$, and apply $R_{BRE}$ to the selected layers' representations computed on the interpolated data points $\{\vc{\hat{x}}_k\, \vert \, k = 1,\ldots, K\}$ as well. \subsection{CelebA} \label{sec:celeba} We compare stable and unstable runs of DCGAN on the CelebA dataset \citep{celeba}, as well as the effect of the BRE regularizer. Fig.\ \ref{AC_celeba} shows the thresholded $\RAC$ term (defined in Sec.\ \ref{subsec:GANtraining}) throughout training. The model being investigated is a 4-layer DCGAN for both \G and \D, with batch normalization.
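The random per-sample interpolation between real and generated points used above can be sketched as follows (our illustration; `interpolate` is a hypothetical helper name):

```python
import numpy as np

def interpolate(x_real, x_fake, rng):
    """Per-sample random interpolation x_hat_k = alpha_k * x_k + (1 - alpha_k) * x_tilde_k.

    x_real, x_fake: (K, ...) arrays of matching shape; one alpha_k ~ U(0, 1)
    is drawn per sample and broadcast over the remaining dimensions.
    """
    K = x_real.shape[0]
    alpha = rng.uniform(size=(K,) + (1,) * (x_real.ndim - 1))
    return alpha * x_real + (1.0 - alpha) * x_fake
```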
The unstable run (Fig.\ \ref{unstable_celeba}) uses a large initial learning rate of $0.01$ and $3$ \D update steps for each \G update, whereas the stable run (Fig.\ \ref{stable_celeba}) uses an initial $lr=2e-3$ and $1$ \D update for each \G update. Even without the BRE regularizer, we can see that when GAN training is stable, \D uses more capacity around fake and real data as well as in between, as measured by the $\RAC$ values in Fig.\ \ref{AC_celeba}. When the BRE regularizer is applied, the usage of \D's capacity improves further, resulting in more diversity in the distribution learned by \G (Fig.\ \ref{bre_celeba}). \begin{figure}[h!] \begin{subfigure}{\label{AC_celeba}} \centering \includegraphics[width=.45\textwidth]{results/celeba_stats_419096259667f7e0b6ac09b58226a10d_vs_989c6a4daefc5b062863e648e894fca1/stats_hand_modified.png} \end{subfigure} \begin{subfigure}{\label{unstable_celeba}} \centering \includegraphics[width=.45\textwidth]{results/celeba_stats_419096259667f7e0b6ac09b58226a10d_vs_989c6a4daefc5b062863e648e894fca1/refactored-main_celeba_be98f5c67306eedb405a2d0e8eea2182_iter_10000resampled.pdf} \end{subfigure} \begin{subfigure}{\label{bre_celeba}} \centering \includegraphics[width=.45\textwidth]{results/celeba_stats_419096259667f7e0b6ac09b58226a10d_vs_989c6a4daefc5b062863e648e894fca1/refactored-main_celeba_419096259667f7e0b6ac09b58226a10d_iter_10000resampled.pdf} \end{subfigure} \begin{subfigure}{\label{stable_celeba}} \centering \includegraphics[width=.45\textwidth]{results/celeba_stats_419096259667f7e0b6ac09b58226a10d_vs_989c6a4daefc5b062863e648e894fca1/refactored-main_celeba_989c6a4daefc5b062863e648e894fca1_iter_10000resampled.pdf} \end{subfigure} \caption{(Thresholded) Activity correlation (AC) values (top left) and samples at iteration 10K: (top right) DCGAN unstable run ($lr=0.01$ and $3$ \D update steps for each \G update); (lower right) DCGAN stable run ($lr=2e-3$ and $1$ \D
update steps for each \G update); (lower left) BRE-DCGAN: DCGAN training with the BRE regularizer, using the same hyperparameters as the stable DCGAN run in the lower right plot. BRE-DCGAN results are visibly more diverse.} \end{figure}
\section{Introduction} Deep learning has made a great deal of success in processing images, audio, and natural language [1-3], influencing academia and industry dramatically. It is essentially a collection of various methods for effectively training neural networks with deep structures. A neural network is usually regarded as a hierarchical system composed of many nonlinear computing units (or neurons, nodes). The most popular neural network was once the multilayer perceptron (MLP) [4]. A MLP consists of an input layer, a number of hidden layers, and an output layer, as shown in Figure 1. Its depth is the number of layers excluding the input layer. If the depth is greater than 2, a neural network is now called “deep”. For training MLPs, backpropagation (BP) is certainly the most well-known algorithm in common use [4], but it seemed to work only for shallow networks. In 1991, Hochreiter indicated that typical deep neural networks (DNNs) suffer from the problem of vanishing or exploding gradients [5]. To overcome training difficulties in DNNs, Hinton et al. started the new field of deep learning in 2006 [6, 7]. Besides deep MLPs, DNNs also include convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Here, we omit RNNs to save space. Theoretically, a CNN can be regarded as a special MLP or feedforward neural network. It generally consists of an input layer, alternating convolutional and pooling layers, a fully connected layer, and an output layer, as shown in Figure 2. Note that “convolutional layers” are also called “detection layers”, and “pooling layers” are also called “downsampling layers”. There have been a large number of CNN variants, for example, LeNet [8], AlexNet [1], VGGNet [9], GoogLeNet [10], ResNet [11], Faster R-CNN [12], DenseNet [13], Mask R-CNN [14], YOLO [15], SSD [16], and so on.
They take the lead not only in competitions of image classification and recognition as well as object localization and detection [9-12], but also in other applications such as deep Q-networks [17], AlphaGo [18], speech recognition [2], and machine translation [3]. To cope with the disadvantages of CNNs, in 2017 Hinton et al. further proposed a capsule network [19], which is more convincing from the neurobiological point of view. So many deep models are dazzling with their different structures. Some of them add shortcut connections, parallel connections, and even nested structures to traditional layered structures. How to establish a unified framework for DNNs is thus becoming an increasingly important theoretical issue, and we are motivated to address it. \begin{figure} \centering \includegraphics[width=3in,height=1.3in]{figure1.png} \caption{The structure of an MLP.} \end{figure} \begin{figure} \centering \includegraphics[width=4.5in,height=1.2in]{figure2.png} \caption{The structure of a CNN.} \end{figure} This paper is organized as follows. In Section 2, we propose a mathematical definition to formalize neural networks, give their directed graph representations, and prove a generation theorem about the induced networks of connected directed acyclic graphs. In Section 3, we use the concept of capsule to extend neural networks, define an induced model for capsule networks, and establish a unified framework for deep learning with a universal backpropagation algorithm. Finally, in Section 4, we draw conclusions summarizing the significance of the capsule framework for advancing deep learning in theory and application. \section{Formalization of Neural Networks} \subsection{Mathematical definition} A neural network is a computational model composed of nodes and connections. Nodes are divided into input nodes and neuron nodes. Input nodes can be represented by real variables, e.g. $x_{1},x_{2},\cdots,x_{n}$.
The set of input nodes is denoted as $X=\{x_{1},x_{2},\cdots,x_{n}\}$ . A neuron node can receive signals through connections both from input nodes and the outputs of other neuron nodes, and perform a weighted sum of these signals for a nonlinear transformation. Note that the weight measures the strength of a connection, and the nonlinear transformation is the effect of an activation function. Let $F$ be a set of activation functions, such as sigmoid, tanh, ReLU, and so on. On $X$ and $F$, a neural network can be formally defined as a 4-tuple $net=(S,H,W,Y)$, where $S$ is a set of input nodes, $H$ is a set of neuron nodes, $W$ is a set of weighting connections, and $Y$ is a set of outputs. The neural network is recursively generated by four basic rules as follows: 1) \textbf{Rule of variable}. For any $z\in X$, let $y_z = z$. If $S=\{z\}$, $H=\emptyset$, $W=\emptyset$, $Y=\{y_z\}$, then the 4-tuple $net=(S,H,W,Y)$ is a neural network. 2) \textbf{Rule of neuron}. For any nonempty subset $S\subseteq X$, $\forall f \in F$, $\forall b \in \mathbb{R}$, construct a node $h\not\in X$ that depends on $(f,b)$ and select a set of weighting connections $w_{x_i\rightarrow h} (x_i\in S)$. Let $y_h = f(\sum_{x_i \in S} {w_{x_i\rightarrow h}x_i}+b)$ be the output of node $h$. If $H=\{h\}$, $W=\{w_{x_i\rightarrow h} |x_i\in S\}$, and $Y=\{y_h\}$, then $net=(S,H,W,Y)$ is a neural network. 3) \textbf{Rule of growth}. Suppose $net=(S,H,W,Y)$ is a neural network. For any nonempty subset $N\subseteq S\cup H$, $\forall f\in F$, $\forall b\in \mathbb{R}$, construct a node $h\not\in S\cup H$ that depends on $(f,b)$ and select a set of weighting connections $w_{z_j\rightarrow h}(z_j\in N)$. Let $y_h = f(\sum_{z_j\in N} {w_{z_j\rightarrow h}y_{z_j}}+b)$ be the output of node $h$. If $S'=S$, $H'=H\cup \{h\}$, $W'=W\cup \{w_{z_j\rightarrow h}|z_j \in N\}$, and $Y'=Y\cup \{y_h\}$, then $net'=(S',H',W',Y')$ is also a neural network. 4) \textbf{Rule of convergence}. 
Suppose $net_k=(S_k,H_k,W_k,Y_k)(1\leq k \leq K)$ are $K$ neural networks, satisfying that $\forall 1\leq i\neq j \leq K$, $(S_i \cup H_i) \cap (S_j\cup H_j)=\emptyset$. For any nonempty subsets $A_k \subseteq S_k \cup H_k(1\leq k \leq K)$, $N=\bigcup_{k=1}^K {A_k}$, $\forall f\in F$, $\forall b \in \mathbb{R}$, construct a node $h\not\in \bigcup_{k=1}^K (S_k\cup H_k)$ that depends on $(f,b)$, select a set of weighting connections $w_{z\rightarrow h}(z\in N)$. Let $y_h=f(\sum_{z\in N} {w_{z\rightarrow h}y_z}+b)$ be the output of the node $h$. If $S=\bigcup_{k=1}^K S_k$, $H=(\bigcup_{k=1}^K H_k)\cup \{h\}$, $W=(\bigcup_{k=1}^K W_k)\cup \{w_{z\rightarrow h}|z\in N\}$, and $Y=(\bigcup_{k=1}^K Y_k)\cup \{y_h\}$, then $net=(S,H,W,Y)$ is also a neural network. Among the four generation rules, it should be noted that the rule of neuron is not independent. This rule can be derived from the rule of variable and the rule of convergence. Moreover, the weighting connection $w_{z\rightarrow h}$ should be taken as a combination of the weight and the connection, rather than just the weight. Additionally, if a node $h$ depends on $(f,b)$, $f$ is called the activation function of $h$, and $b$ is called the bias of $h$. \subsection{Directed graph representation} Let $X$ be a set of real variables and $F$ be a set of activation functions. For any neural network $net=(S,H,W,Y)$ on $X$ and $F$, a directed acyclic graph $G_{net}=(V,E)$ can be constructed with the vertex set $V=S\cup H$ and the directed edge set $E=\{z\rightarrow h|w_{z\rightarrow h} \in W\}$. $G_{net}=(V,E)$ is called the directed graph representation of $net=(S,H,W,Y)$. Two cases of the representation generation are discussed in the following. \textbf{1)The case of $X=\{x_1\}$} Using the rule of variable, for $x_1 \in X$, let $y_{x_1}=x_1$. If $S=\{x_1\}$, $H=\emptyset$, $W=\emptyset$, and $Y=\{y_{x_1}\}$, then $net=(S,H,W,Y)$ is a neural network. 
Since this network has only one input node without any function for nonlinear transformation, it is also called a trivial network, as shown in Figure 3(a). Using the rule of neuron, for a nonempty subset $S=\{x_1\} \subseteq X$, $\forall f\in F$, $\forall b \in \mathbb{R}$, construct a node $h_1 \not\in S$ that depends on $(f,b)$, select a weighting connection $w_{x_1\rightarrow h_1}$, and let $y_{h_1}=f(w_{x_1\rightarrow h_1}x_1 +b)$. If $H=\{h_1\}$, $W=\{w_{x_1\rightarrow h_1}\}$, and $Y=\{y_{h_1}\}$, then $net=(S,H,W,Y)$ is a neural network, which has one input and one neuron. It is also called a 1-input-1-neuron network, as shown in Figure 3(b). Using the rule of growth on the network, three new neural networks with different structures can be generated, as shown in Figures 4(a-c). Likewise, they are called 1-input-2-neuron networks. Using the rule of growth on the three networks, a total of twenty-one further neural networks with different structures can be generated. Seven of them, derived from the network in Figure 4(a), are displayed in Figures 5(a-g). They are called 1-input-3-neuron networks. \begin{figure} \centering \includegraphics[width=2.3in,height=0.5in]{figure3.png} \caption{(a) A trivial network; (b) A 1-input-1-neuron network.} \end{figure} \begin{figure} \centering \includegraphics[width=4.7in,height=0.9in]{figure4.png} \caption{Three 1-input-2-neuron networks.} \end{figure} \begin{figure} \centering \includegraphics[width=5.5in,height=2in]{figure5.png} \caption{Seven 1-input-3-neuron networks.} \end{figure} \textbf{2)The case of $X=\{x_1,x_2\}$} Using the rule of variable, for $x_1, x_2\in X$, let $y_{x_1}=x_1$ and $y_{x_2}=x_2$. If $S_1=\{x_1\}$, $S_2=\{x_2\}$, $H_1=H_2=\emptyset$, $W_1=W_2=\emptyset$, $Y_1=\{y_{x_1}\}$, and $Y_2=\{y_{x_2}\}$, then $net_1=(\{x_1\},\emptyset,\emptyset,\{y_{x_1}\})$ and $net_2=(\{x_2\},\emptyset,\emptyset,\{y_{x_2}\})$ are neural networks. Obviously, both of them are trivial networks.
Using the rule of neuron, for a nonempty subset $S\subseteq X$, if $S=\{x_1\}$ or $S=\{x_2\}$, the neural network can be constructed similarly to the case of $X=\{x_1\}$. If $S=\{x_1,x_2\}$, $\forall f\in F$, $\forall b \in \mathbb{R}$, construct a node $h_1\not\in S$ that depends on $(f,b)$, select a set of weighting connections $w_{x_i\rightarrow h_1}(x_i \in S)$, and let $y_{h_1}=f(\sum_{x_i\in S}{w_{x_i\rightarrow h_1}x_i}+b)$. If $H=\{h_1\}$, $W=\{w_{x_1\rightarrow h_1}, w_{x_2\rightarrow h_1}\}$, and $Y=\{y_{h_1}\}$, then $net=(S,H,W,Y)$ is a neural network. This is a 2-input-1-neuron network, as depicted in Figure 6. Using the rule of growth on this network, seven 2-input-2-neuron networks with different structures can be generated, as shown in Figures 7(a-g). \begin{figure} \centering \includegraphics[width=1.2in,height=0.6in]{figure6.png} \caption{A 2-input-1-neuron network.} \end{figure} \begin{figure} \centering \includegraphics[width=5.5in,height=1.8in]{figure7.png} \caption{Seven 2-input-2-neuron networks.} \end{figure} Finally, the rule of convergence is necessary: not all neural networks can be generated using only the three rules of variable, neuron, and growth. For example, the network in Figure 8(c) cannot be generated without using the rule of convergence on the two in Figures 8(a-b). \begin{figure} \centering \includegraphics[width=4.3in,height=0.9in]{figure8.png} \caption{A necessary explanation for the rule of convergence.} \end{figure} \subsection{Induced network and its generation theorem} Suppose $G=(V,E)$ is a connected directed acyclic graph, where $V$ denotes the vertex set and $E$ denotes the directed edge set. For any vertex $h\in V$, let $IN_h=\{z|z\in V, z\rightarrow h \in E\}$ be the set of vertices each with a directed edge to $h$, and $OUT_h=\{z|z\in V,h\rightarrow z \in E\}$ be the set of vertices to which $h$ has a directed edge. If $IN_h=\emptyset$, then $h$ is called an input node of $G$.
If $OUT_h=\emptyset$, then $h$ is called an output node of $G$. Otherwise, $h$ is called a hidden node of $G$. Let $X$ stand for the set of all input nodes, $O$ for the set of all output nodes, and $M$ for the set of all hidden nodes. Obviously, $V=X\cup M\cup O$, and $M=V-(X\cup O)$. Furthermore, let $y_h$ be the output of node $h$, and $w_{z\rightarrow h}$ be the weighting connection from $z$ to $h$. Then, a computational model of graph $G$ can be defined as follows: \hspace{0.4cm}1) $\forall z \in X$, $y_z=z$. \hspace{0.4cm}2) $\forall h\in M\cup O$, select $f\in F$ and $b \in \mathbb{R}$ to compute $y_h=f(\sum_{z\in IN_h}{w_{z\rightarrow h}y_z}+b)$. If $S=X$, $H=M\cup O$, $W=\{w_{z\rightarrow h}|z\rightarrow h \in E\}$, and $Y=\{y_h|h\in V\}$, then $net_G = (S,H,W,Y)$ is called an induced network of graph $G$. The following generation theorem holds for the induced network. \textbf{Generation Theorem:} For any connected directed acyclic graph $G=(V,E)$, its induced network $net_G$ is a neural network that can be recursively generated by the rules of variable, neuron, growth, and convergence. \textbf{Proof:} By induction on $|V|$ (i.e., the number of vertices), we prove the theorem as follows.\\ 1) When $|V|=1$, we have $|X|=1$ and $|O|=0$, so the induced network $net_G$ is a neural network that can be generated directly by the rule of variable.\\ 2) When $|V|=2$, we have $|X|=1$ and $|O|=1$, so the induced network $net_G$ is a neural network that can be generated directly by the rule of growth.\\ 3) Assume that the theorem holds for $|V|\leq n$. When $|V|= n+1\geq 3$, the induced network $net_G$ has at least one output node $h\in O$. Let $E_h=\{z\rightarrow h\in E\}$ denote the set of edges heading to the node $h$. Moreover, let $V'=V-\{h\}$ and $E'=E-E_h$.
Based on the connectedness of $G'=(V',E')$, we have two cases to discuss in the following: \begin{enumerate} \item[i)] If $G'=(V',E')$ is connected, then applying the induction assumption for $|V'|\leq n$, the induced network $net_{G'}=(S',H',W',Y')$ can be recursively generated by the rules of variable, neuron, growth, and convergence. Let $N=IN_h$. In $net_G=(S,H,W,Y)$, we use $f\in F$ and $b\in \mathbb{R}$ to stand for the activation function and bias of node $h$, and $w_{z\rightarrow h}(z\in N)$ for the weighting connection from node $z$ to the node $h$. Then, $net_G$ can be obtained by using the rule of growth on $net_{G'}$, to generate the node $h$ and its output $y_h = f(\sum_{z\in N}{w_{z\rightarrow h}y_z}+b)$. \item[ii)] Otherwise, $G'$ comprises a number of disjoint connected components $G_k=(V_k,E_k)(1\leq k \leq K)$. Using the induction assumption for $|V_k|\leq n(1\leq k \leq K)$, the induced network $net_{G_k}=(S_k,H_k,W_k,Y_k)$ can be recursively generated by the rules of variable, neuron, growth, and convergence. Let $A_k=(S_k\cup H_k)\cap IN_h$, and $N= \bigcup_{k=1}^K{A_k}$. In $net_G=(S,H,W,Y)$, we use $f\in F$ and $b\in \mathbb{R}$ to stand for the activation function and bias of the node $h$, and $w_{z\rightarrow h}(z\in N)$ for the weighting connection from node $z$ to node $h$. Then, $net_G$ can be obtained by using the rule of convergence on $net_{G_k}(1\leq k \leq K)$, to generate the node $h$ and its output $y_h = f(\sum_{z\in N}{w_{z\rightarrow h}y_z}+b)$. \end{enumerate} As a result, the theorem always holds. \section{Capsule framework of Deep learning} \subsection{Mathematical definition of capsules} In 2017, Hinton et al. pioneered the idea of capsules and considered a nonlinear “squashing” capsule [19]. From the viewpoint of mathematical models, a capsule is essentially an extension of the traditional activation function. It is primarily defined as an activation function with a vector input and a vector output. 
More generally, a capsule can be an activation function with a tensor input and a tensor output. \begin{figure} \centering \includegraphics[width=3in,height=0.9in]{figure9.png} \caption{Mathematical model of a general capsule.} \end{figure} As shown in Figure 9, a general capsule may have $n$ input tensors $X_1,X_2,\cdots,X_n$, $n$ weight tensors $W_1,W_2,\cdots,W_n$, $n$ weighting operations $\otimes_1,\otimes_2,\cdots,\otimes_n$, and a capsule bias $B$. Note that a weighting operation may be taken as an identity transfer, a scalar multiplication, a vector dot product, a matrix multiplication, a convolution operation, and so on. Meanwhile, $W_i\otimes_iX_i(1\leq i \leq n)$ and $B$ must be tensors with the same dimension. The total input of the capsule is $U=\sum_i{W_i\otimes_iX_i}+B$, and the output $Y$ is a tensor computed by a nonlinear capsule function $cap$, namely, \begin{equation} Y=cap(U)=cap(\sum_i{W_i\otimes_iX_i}+B). \end{equation} For convenience, we use $\mathcal{F}$ to stand for a nonempty set of capsule functions, and $\mathbb{T}$ for the set of all tensors. \subsection{Capsule Networks} Suppose $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is a connected directed acyclic graph, where $\mathcal{V}$ denotes the vertex set and $\mathcal{E}$ denotes the directed edge set. For any vertex $H\in \mathcal{V}$, let $IN_H$ be the set of vertices each with a directed edge to $H$, and $OUT_H$ be the set of vertices to which $H$ has a directed edge. If $IN_H=\emptyset$, then $H$ is called an input node of $\mathcal{G}$. If $OUT_H=\emptyset$, then $H$ is called an output node of $\mathcal{G}$. Otherwise, $H$ is called a hidden node of $\mathcal{G}$. Let $\mathcal{X}$ stand for the set of all input nodes, $\mathcal{O}$ for the set of all output nodes, and $\mathcal{M}$ for the set of all hidden nodes. Obviously, $\mathcal{V}=\mathcal{X}\cup \mathcal{M}\cup \mathcal{O}$, and $\mathcal{M}=\mathcal{V}-(\mathcal{X}\cup \mathcal{O})$.
Furthermore, let $Y_H$ be the output of node $H$, and $(W_{Z\rightarrow H},\otimes_{Z\rightarrow H})$ be the tensor-weighting connection from $Z$ to $H$. If, $\forall H\in \mathcal{M}\cup \mathcal{O}$ and $\forall Z \in IN_H$, $W_{Z\rightarrow H}\otimes_{Z\rightarrow H}Y_Z$ and the capsule bias $B$ of $H$ are tensors with the same dimension, then a tensor-computational model of graph $\mathcal{G}$ can be defined as follows: \hspace{0.4cm}1) $\forall Z \in \mathcal{X}$, $Y_Z=Z$. \hspace{0.4cm}2) $\forall H\in \mathcal{M}\cup \mathcal{O}$, select $cap\in \mathcal{F}$ and $B \in \mathbb{T}$ to compute $Y_H=cap(\sum_{Z\in IN_H}{W_{Z\rightarrow H}\otimes_{Z\rightarrow H}Y_Z}+B)$. If $\mathcal{S}=\mathcal{X}$, $\mathcal{H}=\mathcal{M}\cup \mathcal{O}$, $\mathcal{W}=\{(W_{Z\rightarrow H},\otimes_{Z\rightarrow H})|Z\rightarrow H\in \mathcal{E}\}$, and $\mathcal{Y}=\{Y_H|H\in \mathcal{V}\}$, then $net_{\mathcal{G}}=(\mathcal{S},\mathcal{H},\mathcal{W},\mathcal{Y})$ is called a tensor-induced network of graph $\mathcal{G}$. This network is also called a capsule network. \begin{figure} \centering \includegraphics[width=3.5in,height=0.4in]{figure10.png} \caption{The capsule structure of an MLP.} \end{figure} Using a capsule network, an MLP can be simplified into a directed acyclic path of capsules. For example, the MLP in Figure 1 has five layers: an input layer, three hidden layers, and an output layer. On the whole, each layer could be thought of as a capsule. Let $X=(x_1,x_2,\cdots,x_5)^T$ stand for the input capsule node, $H_i=(cap_i,B_i)(i=1,2,3)$ for the hidden capsule nodes, and $O=(cap_4,B_4)$ for the output capsule node. Note that capsule function $cap_i$ and capsule bias $B_i$ are defined by the elementwise activation function and the bias vector, respectively, of the corresponding layer in the MLP.
If the weighting operations $\otimes_{X\rightarrow H_1}$, $\otimes_{H_1\rightarrow H_2}$, $\otimes_{H_2\rightarrow H_3}$, and $\otimes_{H_3\rightarrow O}$ are all taken as matrix multiplication “$\times$”, then we have $(W_{X\rightarrow H_1},\otimes_{X\rightarrow H_1})=((w_{m,n}^{X\rightarrow H_1})_{7\times5},\times)$, $(W_{H_1\rightarrow H_2},\otimes_{H_1\rightarrow H_2})=((w_{m,n}^{H_1\rightarrow H_2})_{7\times7},\times)$, $(W_{H_2\rightarrow H_3},\otimes_{H_2\rightarrow H_3})=((w_{m,n}^{H_2\rightarrow H_3})_{7\times7},\times)$ and $(W_{H_3\rightarrow O},\otimes_{H_3\rightarrow O})=((w_{m,n}^{H_3\rightarrow O})_{4\times7},\times)$, which are the tensor-weighting connections from $X$ to $H_1$, $H_1$ to $H_2$, $H_2$ to $H_3$ and $H_3$ to $O$. Finally, let $Y_{H_i}(i=1,2,3)$ stand for the output vector of $H_i$, and $Y_O$ for the output vector of $O$. Setting $Y_{H_0}=X$ and $Y_{H_4}=Y_O$, we obtain $Y_{H_i}=cap_i(W_{H_{i-1}\rightarrow H_i}\times Y_{H_{i-1}}+B_i)$ for $i=1,2,3,4$. Therefore, the capsule structure of the MLP is a directed acyclic path, as displayed in Figure 10. Besides MLPs, capsule networks can also be used to simplify the structures of other DNNs. Let us consider the CNN in Figure 2. This CNN has 7 layers: one input layer, two convolutional layers, two downsampling (pooling) layers, one fully connected layer, and one output layer. On the whole, each of the layers could be thought of as a capsule. Let $X$ stand for the input capsule node, $H_i=(cap_i,B_i)(i=1,\cdots,5)$ for the hidden capsule nodes, and $O=(cap_6,B_6)$ for the output capsule node. Note that $cap_1$ and $cap_3$ are capsule functions defined by elementwise ReLUs. $cap_2$ and $cap_4$ are capsule functions defined by downsampling “$\downarrow$”. $cap_5$ is an identity function. $cap_6$ is a capsule function defined by softmax. In addition, $B_i(i=1,\cdots,6)$ are capsule biases each defined by the bias tensor of the corresponding layer in the CNN.
Let both $\otimes_{X\rightarrow H_1}$ and $\otimes_{H_2\rightarrow H_3}$ be the convolution operation “$\ast$”, both $\otimes_{H_1\rightarrow H_2}$ and $\otimes_{H_3\rightarrow H_4}$ be the identity transfer “$\rightarrow $”, $\otimes_{H_4\rightarrow H_5}$ be the tensor-reshaping operation “$\triangleleft$”, and $\otimes_{H_5\rightarrow O}$ be the matrix multiplication “$\times$”. Then, $(W_{X\rightarrow H_1},\otimes_{X\rightarrow H_1})=(W_{X\rightarrow H_1},\ast)$, $(W_{H_1\rightarrow H_2},\otimes_{H_1\rightarrow H_2})=("",\rightarrow)$, $(W_{H_2\rightarrow H_3},\otimes_{H_2\rightarrow H_3})=(W_{H_2\rightarrow H_3},\ast)$, $(W_{H_3\rightarrow H_4},\otimes_{H_3\rightarrow H_4})=("",\rightarrow)$, $(W_{H_4\rightarrow H_5},\otimes_{H_4\rightarrow H_5})=("",\triangleleft)$, and $(W_{H_5\rightarrow O},\otimes_{H_5\rightarrow O})=(W_{H_5\rightarrow O},\times)$, which are the tensor-weighting connections from $X$ to $H_1$, $H_1$ to $H_2$, $H_2$ to $H_3$, $H_3$ to $H_4$, $H_4$ to $H_5$, and $H_5$ to $O$. Finally, let $Y_{H_i}(i=1,2,3,4,5)$ stand for the output tensor of $H_i$, and $Y_O$ for the output tensor of $O$. This leads to the following computations: \begin{equation} \begin{cases} Y_{H_1} = cap_1(W_{X\rightarrow H_1}\ast X+B_1) = \textrm{ReLU}(W_{X\rightarrow H_1}\ast X+B_1), \\ Y_{H_2} = cap_2(W_{H_1\rightarrow H_2}\otimes_{H_1\rightarrow H_2} Y_{H_1}+B_2) = cap_2(\rightarrow Y_{H_1}+B_2) = \downarrow Y_{H_1}+B_2, \\ Y_{H_3} = cap_3(W_{H_2\rightarrow H_3}\ast Y_{H_2}+B_3) = \textrm{ReLU}(W_{H_2\rightarrow H_3}\ast Y_{H_2}+B_3), \\ Y_{H_4} = cap_4(W_{H_3\rightarrow H_4}\otimes_{H_3\rightarrow H_4} Y_{H_3}+B_4) = cap_4(\rightarrow Y_{H_3}+B_4) = \downarrow Y_{H_3}+B_4, \\ Y_{H_5} = cap_5(\triangleleft Y_{H_4}+B_5) = \triangleleft Y_{H_4}, \\ Y_O = cap_6(W_{H_5\rightarrow O}\times Y_{H_5}+B_6) = \textrm{softmax}(W_{H_5\rightarrow O}\times Y_{H_5}+B_6). \end{cases} \end{equation} Therefore, the capsule structure of the CNN is also a directed acyclic path, as depicted in Figure 11.
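To make the path-of-capsules computation concrete, the following Python sketch (our own illustration, not code from the paper; the ReLU and softmax capsule functions and the random weights are assumptions) evaluates a directed acyclic path of capsules $Y_H=cap_H(W\otimes Y_Z+B_H)$, taking matrix multiplication as every weighting operation and using the layer sizes of the MLP capsule chain in Figure 10.

```python
import numpy as np

# Illustrative sketch only: forward evaluation of a directed acyclic *path*
# of capsules, Y_H = cap_H(W (x) Y_Z + B_H), with matrix multiplication as
# every weighting operation.  Layer sizes follow the MLP capsule chain of
# Figure 10 (5 -> 7 -> 7 -> 7 -> 4); the ReLU/softmax choices and the random
# weights are assumptions, not taken from the paper.

def relu(u):
    return np.maximum(u, 0.0)

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def matmul(W, y):
    return W @ y

def forward_path(x, edges, nodes):
    """edges: list of (W, op) per connection; nodes: list of (cap, B) per capsule."""
    y = x
    for (W, op), (cap, B) in zip(edges, nodes):
        y = cap(op(W, y) + B)   # total input U = W (x) Y_Z + B, output Y = cap(U)
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(5)                          # input capsule: a 5-vector
edges = [(rng.standard_normal((7, 5)), matmul),     # X  -> H1
         (rng.standard_normal((7, 7)), matmul),     # H1 -> H2
         (rng.standard_normal((7, 7)), matmul),     # H2 -> H3
         (rng.standard_normal((4, 7)), matmul)]     # H3 -> O
nodes = [(relu, np.zeros(7)),
         (relu, np.zeros(7)),
         (relu, np.zeros(7)),
         (softmax, np.zeros(4))]

y = forward_path(x, edges, nodes)
print(y.shape)                                      # (4,)
```

Replacing the matrix multiplications by convolutions, identity transfers, and a reshaping operation reproduces the CNN chain of Figure 11 in the same way.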
\begin{figure} \centering \includegraphics[width=4.5in,height=0.4in]{figure11.png} \caption{The capsule structure of a CNN, with “$\ast$” standing for convolution, “$\rightarrow$” for identity transfer, “$\triangleleft$” for tensor reshaping, and “$\times$” for matrix multiplication.} \end{figure} Besides simplifying the description of existing DNNs, capsule networks can also be used to graphically design a variety of new structures for complex DNNs, such as the one displayed in Figure 12. \begin{figure} \centering \includegraphics[width=3.5in,height=1.8in]{figure12.png} \caption{Structure of a general capsule network.} \end{figure} \subsection{Universal backpropagation of capsule networks} Suppose $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is a connected directed acyclic graph. Let $\mathcal{X}=\{X_1,X_2,\cdots,X_n\}$ stand for the set of all input nodes, $\mathcal{O}=\{O_1,O_2,\cdots,O_m\}$ for the set of all output nodes, and $\mathcal{M}=\mathcal{V}-(\mathcal{X}\cup \mathcal{O})=\{H_1,H_2,\cdots,H_l\}$ for the set of all hidden nodes. $net_{\mathcal{G}}=(\mathcal{S},\mathcal{H},\mathcal{W},\mathcal{Y})$ is a tensor-induced network of graph $\mathcal{G}$. This is also a capsule network. If the number of nodes $|\mathcal{S}\cup \mathcal{H}|\geq 2$, then $\forall H\in \mathcal{H}$, \begin{equation} \begin{cases} U_H = \sum_{Z\in IN_H}{W_{Z\rightarrow H}\otimes_{Z\rightarrow H}Y_Z}+B_H, \\ Y_H = cap_H(U_H)=cap_H(\sum_{Z\in IN_H}{W_{Z\rightarrow H}\otimes_{Z\rightarrow H}Y_Z}+B_H). \end{cases} \end{equation} For any output node $H\in \mathcal{O}$, let $Y_H$ and $T_H$ be its actual output and expected output for input $\mathcal{X}$, respectively. The loss function between them is defined as $L_H=Loss(Y_H,T_H)$. Accordingly, we have the total loss function $L=\sum_{H\in \mathcal{O}}L_H$. Let $\delta_H=\frac{\partial L}{\partial U_H}$ denote the backpropagated error signal (or sensitivity) for capsule node $H$.
By the chain rule, we further obtain: \begin{equation} \forall H\in \mathcal{O}, \begin{cases} \delta_H & = \frac{\partial L}{\partial U_H} = \frac{\partial Loss(Y_H,T_H)}{\partial Y_H}\cdot \frac{\partial cap_H}{\partial U_H}, \\ \frac{\partial L}{\partial B_H} & = \frac{\partial L}{\partial U_H}\cdot \frac{\partial U_H}{\partial B_H} = \delta_H, \\ \frac{\partial L}{\partial W_{Z\rightarrow H}} & = \frac{\partial L}{\partial U_H}\cdot \frac{\partial U_H}{\partial W_{Z\rightarrow H}} =\delta_H \cdot \frac{\partial U_H}{\partial W_{Z\rightarrow H}}. \end{cases} \end{equation} \begin{equation} \forall H\in \mathcal{M}, \begin{cases} \delta_H & = \frac{\partial L}{\partial U_H} = \sum_{P\in OUT_H}{\frac{\partial L}{\partial U_P}\cdot \frac{\partial U_P}{\partial Y_H}\cdot \frac{\partial Y_H}{\partial U_H}} \\ & = \sum_{P\in OUT_H}{\delta_P \cdot \frac{\partial U_P}{\partial Y_H}\cdot \frac{\partial cap_H}{\partial U_H}}, \\ \frac{\partial L}{\partial B_H} & = \frac{\partial L}{\partial U_H}\cdot \frac{\partial U_H}{\partial B_H}=\delta_H, \\ \frac{\partial L}{\partial W_{Z\rightarrow H}} & = \frac{\partial L}{\partial U_H}\cdot \frac{\partial U_H}{\partial W_{Z\rightarrow H}}=\delta_H \cdot \frac{\partial U_H}{\partial W_{Z\rightarrow H}}. \end{cases} \end{equation} Note that in formulae (4)-(5), $\frac{\partial cap_H}{\partial U_H}$ depends on the specific form of capsule function $cap_H$. For example, when $cap_H$ is an elementwise sigmoid function, the result is $\frac{\partial cap_H}{\partial U_H}=sigmoid(U_H)(1-sigmoid(U_H))$. Meanwhile, $\frac{\partial U_H}{\partial W_{Z\rightarrow H}}$ and $\frac{\partial U_P}{\partial Y_H}$ also depend on the specific choice of the weighting operation $\otimes_{Z\rightarrow H}$. Based on formulae (4)-(5), a universal backpropagation algorithm can be designed theoretically for capsule networks, with one iteration detailed in \textbf{Algorithm 1}. 
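The sensitivity recursion in formulae (4)-(5) can be checked numerically. The Python sketch below is our own illustration (the two-capsule chain, the sizes, the sigmoid capsules, and the squared loss are assumptions, not taken from the paper): it computes $\delta_O$ and $\delta_H$ for a chain $X\rightarrow H\rightarrow O$ and verifies one weight gradient against a finite difference.

```python
import numpy as np

# Hypothetical numerical check of formulae (4)-(5): a two-capsule chain
# X -> H -> O with elementwise sigmoid capsules, matrix multiplication as the
# weighting operation, and the squared loss L = 0.5 * ||Y_O - T||^2.

def sig(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(1)
X = rng.standard_normal(3)
W_XH, B_H = rng.standard_normal((4, 3)), np.zeros(4)
W_HO, B_O = rng.standard_normal((2, 4)), np.zeros(2)
T = np.array([1.0, 0.0])                      # expected output T_O

# forward pass: U_H, Y_H, U_O, Y_O as in formula (3)
U_H = W_XH @ X + B_H; Y_H = sig(U_H)
U_O = W_HO @ Y_H + B_O; Y_O = sig(U_O)
L = 0.5 * np.sum((Y_O - T) ** 2)

# formula (4): sensitivity of the output capsule, then its weight gradient
d_O = (Y_O - T) * Y_O * (1 - Y_O)             # dL/dU_O
g_WHO = np.outer(d_O, Y_H)                    # dL/dW_{H->O}

# formula (5): sensitivity of the hidden capsule via OUT_H = {O}
d_H = (W_HO.T @ d_O) * Y_H * (1 - Y_H)        # dL/dU_H
g_WXH = np.outer(d_H, X)                      # dL/dW_{X->H}

# verify one weight gradient against a finite difference
eps, i, j = 1e-6, 0, 0
Wp = W_XH.copy(); Wp[i, j] += eps
Lp = 0.5 * np.sum((sig(W_HO @ sig(Wp @ X + B_H) + B_O) - T) ** 2)
num = (Lp - L) / eps
print(bool(abs(num - g_WXH[i, j]) < 1e-4))    # True
```

For a general directed acyclic graph, the same recursion is applied to the hidden nodes in reverse topological order, each node summing the contributions of all its successors in $OUT_H$.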
In practice, this algorithm should be adapted into one of its many variants when used with training data [20]. \begin{table*} \centering \begin{tabular}{lll} \multicolumn{3}{c}{\textbf{Algorithm 1}: One iteration of the universal backpropagation algorithm.} \\ \toprule 1) Select a learning rate $\eta > 0$, \\ 2) $\forall H\in \mathcal{M}\cup \mathcal{O}$, $\forall Z \in IN_H$, initialize $W_{Z\rightarrow H}$ and $B_H$, \\ 3) $\forall H\in \mathcal{O}$, compute $\delta_H = \frac{\partial Loss(Y_H,T_H)}{\partial Y_H}\cdot \frac{\partial cap_H}{\partial U_H}$, \\ 4) $\forall H \in \mathcal{M}$ (in reverse topological order), compute $\delta_H = \sum_{P\in OUT_H}{\delta_P \cdot \frac{\partial U_P}{\partial Y_H}\cdot \frac{\partial cap_H}{\partial U_H}}$, \\ 5) Compute $\Delta W_{Z\rightarrow H}=\delta_H\cdot \frac{\partial U_H}{\partial W_{Z\rightarrow H}}$ and $\Delta B_H = \delta_H$, \\ 6) Update $W_{Z\rightarrow H} \leftarrow W_{Z\rightarrow H}- \eta \cdot \Delta W_{Z\rightarrow H}$, $B_H \leftarrow B_H- \eta \cdot \Delta B_H$.\\ \bottomrule \end{tabular} \end{table*} \section{Conclusions} Based on the formalization of neural networks, we have developed capsule networks to establish a unified framework for deep learning. This capsule framework can not only simplify the description of existing DNNs, but also provide a theoretical basis for the graphical design and programming of new deep learning models. As future work, we will try to define an industrial standard and implement a graphical platform for the advancement of deep learning with capsule networks, and even extend the framework similarly to recurrent neural networks. \section*{References} \medskip \small [1] Krizhevsky, A., Sutskever, I.\ \& Hinton, G.E.\ (2012) Imagenet classification with deep convolutional neural networks. In F.\ Pereira, C.J.C.\ Burges, L.\ Bottou and K.Q.\ Weinberger (eds.), {\it Advances in neural information processing systems 25}, pp.\ 1097--1105. Cambridge, MA: MIT Press.
[2] Amodei, D., Ananthanarayanan, S.\ \& Anubhai, R.\ et al.\ (2016) Deep speech 2: End-to-end speech recognition in English and Mandarin. {\it International Conference on Machine Learning}, pp.\ 173--182. [3] Wu, Y., Schuster, M.\ \& Chen, Z.\ et al.\ (2016) Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. {\it arXiv preprint arXiv:1609.08144}. [4] Rumelhart, D.E.\ (1986) Learning internal representations by error propagation. {\it Parallel distributed processing: Explorations in the microstructure of cognition} {\bf 1}:319-362. [5] Schmidhuber, J.\ (2014) Deep learning in neural networks: An overview. {\it Neural Networks} {\bf 61}:85-117. [6] Hinton, G.E.\ \& Salakhutdinov, R.R.\ (2006) Reducing the dimensionality of data with neural networks. {\it Science} {\bf 313}(5786):504-507. [7] Hinton, G.E., Osindero, S.\ \& Teh, Y.W.\ (2006) A fast learning algorithm for deep belief nets. {\it Neural computation} {\bf 18}(7):1527-1554. [8] LeCun, Y., Bottou, L.\ \& Bengio, Y.\ et al.\ (1998) Gradient-based learning applied to document recognition. {\it Proceedings of the IEEE} {\bf 86}(11):2278-2324. [9] Simonyan, K.\ \& Zisserman, A.\ (2014) Very Deep Convolutional Networks for Large-Scale Image Recognition. {\it arXiv preprint arXiv:1409.1556}. [10] Szegedy, C., Liu, W.\ \& Jia, Y.\ et al.\ (2015) Going deeper with convolutions. {\it IEEE Conference on Computer Vision and Pattern Recognition}. [11] He, K., Zhang, X.\ \& Ren, S.\ et al.\ (2016) Deep residual learning for image recognition. {\it Proceedings of the IEEE conference on computer vision and pattern recognition}, pp.\ 770--778. [12] Ren, S., He, K.\ \& Girshick, R.\ et al.\ (2015) Faster R-CNN: Towards real-time object detection with region proposal networks. {\it Advances in neural information processing systems}, pp.\ 91--99. [13] Huang, G., Liu, Z.\ \& Weinberger, K.Q.\ et al.\ (2017) Densely connected convolutional networks.
{\it Proceedings of the IEEE conference on computer vision and pattern recognition}. [14] He, K., Gkioxari, G.\ \& Dollár, P.\ et al.\ (2017) Mask R-CNN. {\it Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE}, pp.\ 2980--2988. [15] Redmon, J., Divvala, S.\ \& Girshick, R.\ et al.\ (2016) You only look once: Unified, real-time object detection. {\it Proceedings of the IEEE conference on computer vision and pattern recognition}, pp.\ 779--788. [16] Liu, W., Anguelov, D.\ \& Erhan, D.\ et al.\ (2016) SSD: Single shot multibox detector. {\it European conference on computer vision. Springer, Cham}, pp.\ 21--37. [17] Mnih, V., Kavukcuoglu, K.\ \& Silver, D.\ et al.\ (2015) Human-level control through deep reinforcement learning. {\it Nature} {\bf 518}(7540):529. [18] Silver, D., Schrittwieser, J.\ \& Simonyan, K.\ et al.\ (2017) Mastering the game of Go without human knowledge. {\it Nature} {\bf 550}(7676):354-359. [19] Sabour, S., Frosst, N.\ \& Hinton, G.E.\ (2017) Dynamic routing between capsules. In I.\ Guyon, U.V.\ Luxburg, S.\ Bengio, H.\ Wallach, R.\ Fergus, S.\ Vishwanathan and R.\ Garnett (eds.), {\it Advances in Neural Information Processing Systems 30}, pp.\ 3859-3869. Cambridge, MA: MIT Press. [20] Ruder, S.\ (2016) An overview of gradient descent optimization algorithms. {\it arXiv preprint arXiv:1609.04747}. \end{document}
\section{Introduction \label{sec:I}} Spin-orbit (SO) coupling was recently engineered in a neutral atomic Bose-Einstein condensate (BEC) by dressing two atomic spin states (hyperfine states $|F=1,\,m_{F}=\pm1\rangle$ of a spin-$1$ $^{87}\text{Rb}$ BEC) with a pair of laser beams \cite{Lin_NAT11}. This new scenario has motivated further studies on vector solitons and other nonlinear waves, such as self-trapped states \cite{Merkl_PRL10}, vortices \cite{Xu_PRL11,Radic_PRA11,Ramachandhran_PRA12,Lobanov_PRL14,Sakaguchi_PRE14-2}, Skyrmions \cite{Kawakami_PRL12}, Dirac monopoles \cite{Conduit_PRA12}, dark solitons \cite{Fialko_PRA12,Achilleos_EPL13}, bright solitons \cite{Achilleos_PRL13}, gap solitons \cite{Kartashov_PRL13,Zhang_PRA15,Sakaguchi_PRA18}, exotic complexes \cite{Belobo_SR18}, etc. Furthermore, many studies of BECs with SO coupling have revealed interesting effects such as the chiral confinement in quasirelativistic BECs \cite{Merkl_PRL10}, the existence of a \textquoteleft stripe phase\textquoteright{} \cite{Ho_PRL11,Sinha_PRL11}, tunneling dynamics \cite{Zhang_PRA12,Garcia-March_PRA14,Wang_PRA15}, partial wave scattering \cite{Williams_SCI12}, the phenomenon of Zitterbewegung \cite{LeBlanc_NJP13,Qu_PRA13,Achilleos_PRA14}, the tunability of the SO coupling strength \cite{Jimenez-Garcia_PRL15}, traveling Majorana solitons \cite{Zou_PRL16}, steadily moving solitons in a helicoidal gauge potential \cite{Kartashov_PRL17}, negative-mass hydrodynamics \cite{Khamehchi_PRL17}, etc.
Analytical developments in the search for localized solutions in BECs with SO coupling were recently reported in quasi-one- \cite{Salasnich_PRA13,Kartashov_PRL13,Achilleos_PRL13,Xu_PRA13,Zezyulin_PRA13,Achilleos_EPL13,Kartashov_PRA14,Sakaguchi_PRE14,Chiquillo_LP14,Gautam_PRA15,Zhang_PRA15,Cao_JOSAB15,Wen_PRA16,Li_CPL16,Sakaguchi_PRA17,Li_NJP17,Belobo_SR18,Sakaguchi_PRA18} and quasi-two-dimensional \cite{Salasnich_PRA14,Sakaguchi_PRE14,Sakaguchi_PRE14-2,Liao_PRA17,Li_PRA17,Kato_PRA17,Sakaguchi_PRA18,Huang_PRA18} systems. Specifically, in Ref. \cite{Salasnich_PRA13}, effective 1D coupled nonpolynomial Schr\"odinger equations were derived from the system of 3D Gross-Pitaevskii equations. Next, this study was extended to quasi-two-dimensional BECs with SO and Rabi couplings \cite{Salasnich_PRA14}. Detailed studies of stationary and moving bright solitons in BECs with SO and Rabi couplings were presented in Refs. \cite{Xu_PRA13,Liu_EPL14,Li_CPL16,Wen_PRA16,Sakaguchi_PRA17,Li_NJP17} and in Refs. \cite{Chiquillo_LP14,Liao_PRA17}, the latter also including interatomic magnetic dipole-dipole interactions. In Ref. \cite{Zezyulin_PRA13}, the existence of even, odd, and asymmetric nonlinear modes was reported in the effectively 1D self-repulsive binary BEC with SO coupling and Zeeman splitting, confined by an axial harmonic-oscillator (HO) potential. The emergence of a number of nontrivial soliton properties due to a localized SO coupling was presented in Ref. \cite{Kartashov_PRA14}. In Ref. \cite{Sakaguchi_PRE14}, discrete and continuum composite solitons were studied in BECs with the Rashba SO coupling loaded into a deep 1D or 2D optical-lattice potential. The spontaneous symmetry breaking in an SO-coupled $f=2$ spinor condensate was reported in \cite{Gautam_PRA15}. In Ref. \cite{Cao_JOSAB15}, the ground-state properties and dynamical generation of dark solitons in SO-coupled BECs were numerically investigated. Recently, it was reported in Ref.
\cite{Huang_PRA18} the possibility to stabilize excited states of semi-vortex and mixed-mode solitons (originally unstable) in a setting based on repulsive dipole-dipole interactions induced by a polarizing field, oriented perpendicular to the plane in which the dipolar BEC is trapped. In addition, it has also been predicted that 2D and 3D solitons can be stabilized in spinor (two-component) BECs with the help of Rashba-type SO coupling \cite{Wilson_PRL13,Sakaguchi_PRE14-2,Jiang_PRA16,Sakaguchi_PRE14,Gautam_PRA17,Chen_CNSNS17,Zhang_PRL15,Li_NJP17,Liao_PRA17}. In a more complex scenario, collisions of solitary waves can show nontrivial structures since, due to the nonintegrability of the system, the collision outcome can depend on the initial conditions, presenting in some cases a fractal pattern \cite{Yang_PRL00,Tan_PRE01,Dmitriev_CHAOS02,Dmitriev_PRE02,Zhu_PRE07,Zhu_PRL08,Zhu_PD08,Zhu_SAM09,Hause_PRA10,Teixeira_PLA16}. Fractal structures in collisions of solitons are also reported in systems described by other models, such as, in the $\phi^{4}$ model \cite{Goodman_CHAOS08,Goodman_CHAOS15}, the sine-Gordon model \cite{Fukushima_PLA95,Higuchi_CSF98,Dmitriev_PRE01,Dmitriev_PB02,Dmitriev_PRE08}, etc. However, there are still few works dedicated to exploring collisions of localized structures in BECs with SO coupling \cite{Sakaguchi_PRE14,Sakaguchi_PRE14-2,Kartashov_PRL17,Li_PRA17,Gautam_PRA17}. Indeed, in Ref. \cite{Kartashov_PRL17} was reported the existence and stability of families of steadily moving solitons in a helicoidal gauge potential, where in the absence of Zeeman splitting, such solitons interact elastically similarly to solitons in integrable systems. Also, in Ref. \cite{Sakaguchi_PRE14-2} was verified that in two-dimensional SO-coupled self-attractive BECs in free space, collisions between two moving solitons lead to their merger into a single one. The scattering process due to the collisions of solitons was used in Ref. 
\cite{Sakaguchi_PRE14} to verify the stability of 1D and 2D solitons. Ref. \cite{Li_PRA17} studied the mobility and collisions of gap solitons in dipolar BECs with SO coupling, revealing negative and positive effective masses of the isotropic and anisotropic solitons, respectively. In addition, Ref. \cite{Gautam_PRA17} presented a study of the formation and dynamics of 2D vortex-bright solitons in a three-component SO-coupled spin-1 spinor condensate, revealing that when two moving vortex-bright solitons collide at small velocities, in-phase solitons either collapse or merge into a single entity, whereas out-of-phase solitons repel and avoid each other without ever having overlapping profiles. Here, we investigate the influence of the SO coupling on the collisional dynamics of solitons in BECs. To this end, we employ a reduced ordinary-differential-equation (ODE) model based on a variational approach, which allows us to analytically investigate the formation of fractal-like patterns and the properties of the scattered solitons. The rest of the paper is organized as follows. In Sec. \ref{sec:II}, we describe the effective mean-field coupled Gross-Pitaevskii (GP) equations with SO coupling used to study the collisional dynamics of solitons. By means of a variational approach, we obtain a reduced ODE model in Sec. \ref{sec:III}. In Sec. \ref{sec:IV}, we analyze the width oscillations in the $|\xi|\gg1$ regime and the initial conditions to be used in the numerical simulations presented in Sec. \ref{sec:V}. Finally, in Sec. \ref{sec:Conclusion}, we give a summary of our findings.
\section{Theoretical Model \label{sec:II}} We start by considering a BEC confined in a quasi-one-dimensional parabolic trap (with frequencies $\omega_{x}\ll\omega_{\perp}$), described by an effective 1D-GP equation system with SO and Rabi couplings, which is written in scaled form as \cite{Achilleos_PRL13} (length in units of $a_{\perp}\equiv\sqrt{\hbar/m\omega_{\perp}}$, time in units of $\omega_{\perp}^{-1}$, and energy in units of $\hbar\omega_{\perp}$) \begin{eqnarray} i\partial_{t}A_{k} & = & \left[-\dfrac{1}{2}\partial_{x}^{2}+i(-1)^{k-1}\gamma\,\partial_{x}+V(x)\right.\nonumber \\ & & \left.+g_{k}\left|A_{k}\right|^{2}+g_{12}\left|A_{3-k}\right|^{2}\right]A_{k}+\Gamma A_{3-k}\,,\label{1D_NLSEs} \end{eqnarray} where $A_{k}$ ($k=1,2$) are the wave functions of the two pseudospin components of the BEC. The strengths of the intra- and interspecies interactions are $g_{k}\equiv2a_{k}/a_{\perp}$ and $g_{12}\equiv2a_{12}/a_{\perp}$, with $a_{k}$ and $a_{12}$ being the respective s-wave scattering lengths. The strengths of the SO and Rabi couplings are $\gamma\equiv k_{L}a_{\perp}$ and $\Gamma\equiv\Omega/(2\omega_{\perp})$, respectively, where $k_{L}$ is the wave number of the Raman lasers that couple the two atomic hyperfine states in the $x$ direction \cite{Hamner_PRL15}, and $\Omega$ is the frequency of the Raman coupling, responsible for the Rabi mixing between the states. In the following, we will assume a null interspecies interaction, $g_{12}=0$ (which can be properly adjusted by means of a Feshbach resonance \cite{Inouye_NAT98}), i.e., we consider cases where the interspecies interaction is provided only by the Rabi term. Also, in a completely attractive binary BEC (negative $g_{1}=g_{2}=g$ and $\Gamma$), one can obtain localized solutions even in the absence of axial confinement, because under specific conditions the self-trapping of the cigar-shaped cloud prevents spreading. Accordingly, in our model we set $V(x)=0$.
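Although our analysis relies on a variational reduction of Eq. \eqref{1D_NLSEs} rather than on its direct simulation, the coupled GP system can be integrated with a standard split-step Fourier scheme, in which the kinetic, SO, and Rabi terms are applied exactly in Fourier space through a $2\times2$ matrix exponential. The following minimal sketch illustrates this (the grid, time step, and parameter values are illustrative assumptions, not taken from this work):

```python
import numpy as np

# Illustrative grid and parameters (our choices, not from the paper)
N, Lx, dt = 512, 80.0, 1e-3
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
q = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)
g1 = g2 = -4.0; g12 = 0.0; gam = 0.5; Gam = -0.04

# Exact half-step propagator of the linear part, exp(-i (dt/2) L(q)), where
# L(q) = (q^2/2) I - gam*q*sigma_z + Gam*sigma_x in Fourier space
wq = np.sqrt((gam * q) ** 2 + Gam ** 2)
c = np.cos(wq * dt / 2)
s = np.sin(wq * dt / 2) / np.where(wq == 0.0, 1.0, wq)
ph = np.exp(-1j * (q ** 2 / 2) * dt / 2)
U11 = ph * (c + 1j * gam * q * s)
U22 = ph * (c - 1j * gam * q * s)
U12 = ph * (-1j * Gam * s)

def step(A1, A2):
    """Strang split step: half linear (kinetic + SO + Rabi), full nonlinear,
    half linear; the nonlinear part is a pure phase rotation."""
    F1, F2 = np.fft.fft(A1), np.fft.fft(A2)
    F1, F2 = U11 * F1 + U12 * F2, U12 * F1 + U22 * F2
    A1, A2 = np.fft.ifft(F1), np.fft.ifft(F2)
    A1 = A1 * np.exp(-1j * dt * (g1 * np.abs(A1) ** 2 + g12 * np.abs(A2) ** 2))
    A2 = A2 * np.exp(-1j * dt * (g2 * np.abs(A2) ** 2 + g12 * np.abs(A1) ** 2))
    F1, F2 = np.fft.fft(A1), np.fft.fft(A2)
    F1, F2 = U11 * F1 + U12 * F2, U12 * F1 + U22 * F2
    return np.fft.ifft(F1), np.fft.ifft(F2)
```

Because the linear half-steps are unitary and the nonlinear step is a pure phase rotation, the total norm $\int dx\,(|A_{1}|^{2}+|A_{2}|^{2})$ is conserved to machine precision, which provides a convenient correctness check.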
In order to investigate the details of this physical process, specifically the collisional dynamics of two solitons, in the next section we derive a reduced ODE model that provides an effective description of the collision dynamics. \section{The reduced ODE model\label{sec:III}} \begin{figure*}[t] \begin{centering} \includegraphics[width=0.85\paperwidth]{F1.eps} \par\end{centering} \caption{(Color online) Pictorial representation of the pre-collisional scenario of two symmetric solitons in a SO- and Rabi-coupled BEC. In (a) and in the top frame of (b), the pre-collisional scenario consists of both solitons (initially with peak positions at $x=\pm p_{0}$) moving toward the origin ($\mathcal{O}$) with propagation velocity $\protect\vec{v_{0}}'=(-1)^{k}\left(v_{0}+\gamma\right)\hat{x}$ (for $k=1,2$ and $v_{0}'=v_{0}+\gamma>0$) for the $k$-soliton component, which is induced by the initial phase velocity and by the Raman laser field pumped in the $(-1)^{k}\hat{x}$ direction. The remaining three frames in (b) illustrate the evolution of the initial configuration, i.e., they show the beginning of the interaction stage, which is followed by the first collision process with maximum overlap at $t=t_{\text{col}}^{\text{\tiny(\ensuremath{1})}}$. The last frame depicts the post-collisional scenario, with the scattered solitons moving away from each other with propagation velocity $v_{\infty}'$ and eventually reaching their initial separation at $t=t_{\infty}$.} \label{Fig1} \end{figure*} For convenience, we relabel the component indices using the rule $k\rightarrow\mathrm{sgn}[(-1)^{k}]$ ($k=1,2$).
Then, we assume an approximate solution of fixed functional form for symmetric bright solitons, which can be written as \begin{equation} A_{\pm}=\eta\,\textrm{sech}\left(\dfrac{x\pm p}{w}\right)\mathrm{e}^{i\left[\pm v\left(x\pm p\right)+\frac{b}{2w}(x\pm p)^{2}+\sigma\right]},\label{ANSATZ} \end{equation} with the variational parameters in $A_{\pm}$ being time-dependent functions, namely: amplitude ($\eta$), velocity ($v$), width ($w$), peak position ($p$), chirp ($b$), and global phase ($\sigma$). The phase follows from the Galilean invariance of Eq. \eqref{1D_NLSEs}, except for the quadratic term in $x$, which gives a parabolic phase offset to the waves and thereby promotes width oscillations. The parameter $\sigma$ plays an important role in the model, because it is responsible for the global phase invariance of the system. Note that momentum conservation arises naturally from the \emph{ansatz}, because the total momentum of the symmetric solitons is always zero. The Lagrangian density corresponding to Eq. \eqref{1D_NLSEs} can be written as $\mathscr{L}=\mathcal{L}_{+}+\mathcal{L}_{-}$, in which \begin{eqnarray} \mathcal{L}_{\pm} & = & \Im\left(A_{\pm}^{*}\partial_{t}A_{\pm}\right)\pm\gamma\,\Im\left(A_{\pm}^{*}\partial_{x}A_{\pm}\right)\nonumber \\ & & +\dfrac{1}{2}\left|\partial_{x}A_{\pm}\right|^{2}+\dfrac{g_{\pm}}{2}\left|A_{\pm}\right|^{4}+\Gamma\Re\left(A_{\pm}^{*}A_{\mp}\right),\label{LAG_DENS} \end{eqnarray} where $\Im(\xi)$ and $\Re(\xi)$ denote the imaginary and real parts of the complex argument $\xi$, respectively. The variational approach yields a reduced ODE model, which is obtained by substituting the \emph{ansatz} \eqref{ANSATZ} into the effective Lagrangian density \eqref{LAG_DENS} and then integrating over the whole $x$-axis.
The resulting Lagrangian is given in terms of the variational parameters and their time derivatives, as follows \begin{align} L & =4\eta^{2}w\left(v\dot{p}+\dot{\sigma}\right)+\dfrac{\pi^{2}\eta^{2}w}{6}\left(\dot{b}w-b\dot{w}\right)+4\gamma\eta^{2}wv\nonumber \\ & +2\eta^{2}w\left(v^{2}+\dfrac{1}{3w^{2}}+\frac{\pi^{2}b^{2}}{12}\right)+\dfrac{4g\eta^{4}w}{3}+4\pi\Gamma\eta^{2}wG\ ,\label{RM_LAG} \end{align} where the coupling function $G=G\left(\xi,\zeta,w\right)$, written in terms of the auxiliary variables $\xi=2p/w$ and $\zeta=2v+\xi b$ together with the parameter $w$, is given by \begin{align} & G\left(\xi,\zeta,w\right)=\dfrac{\sin\left(\zeta p\right)}{\sinh\left(\xi\right)\sinh\left(\pi\zeta w/2\right)}\ .\label{G_FUNC} \end{align} Since the resulting Lagrangian depends on the global phase only through the term $\left(4\eta^{2}w\right)\dot{\sigma}$, the Euler-Lagrange equation for $\sigma$ provides the norm conservation in the reduced ODE model, i.e., \begin{equation} K=4\eta^{2}w, \end{equation} which simply states that $\int_{-\infty}^{\infty}dx\,(\left|A_{+}\right|^{2}+\left|A_{-}\right|^{2})=K\,$, allowing one to obtain $\eta(t)$ directly from $w(t)\,$. The remaining Euler-Lagrange equations arising from the Lagrangian \eqref{RM_LAG} yield a system of four coupled ODEs, the so-called reduced model, written as\begin{subequations} \begin{equation} \dot{v}=\pi\Gamma\dfrac{\partial G}{\partial p},\label{vp} \end{equation} \begin{equation} \dot{w}=b+\dfrac{12\Gamma}{\pi}\dfrac{\partial G}{\partial b},\label{wp} \end{equation} \begin{equation} \dot{p}=-\left(v'+\pi\Gamma\dfrac{\partial G}{\partial v}\right),\label{pp} \end{equation} \begin{equation} \dot{b}=\dfrac{3}{\pi^{2}}\left(\dfrac{4}{3w^{3}}+\dfrac{gK}{3w^{2}}-4\pi\Gamma\dfrac{\partial G}{\partial w}\right),\label{bp} \end{equation} \end{subequations}with $v'=v+\gamma\,$.
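To make the structure of Eqs. \eqref{vp}-\eqref{bp} concrete, a minimal Python sketch of the right-hand side follows, with the partial derivatives of $G$ evaluated by central finite differences (our implementation choice; the removable singularities of $G$ at $p=0$ and $\zeta=0$ are not treated here):

```python
import math

g, Gam, gam, K = -4.0, -0.04, 0.0, 1.0   # g, Gamma, gamma, and the norm K

def G(p, v, w, b):
    """Coupling function of Eq. (G_FUNC), with xi = 2p/w and zeta = 2v + xi*b."""
    xi = 2.0 * p / w
    zeta = 2.0 * v + xi * b
    return math.sin(zeta * p) / (math.sinh(xi) * math.sinh(math.pi * zeta * w / 2.0))

def rhs(p, v, w, b, h=1e-6):
    """Right-hand sides (pdot, vdot, wdot, bdot) of the reduced model."""
    dGdp = (G(p + h, v, w, b) - G(p - h, v, w, b)) / (2.0 * h)
    dGdv = (G(p, v + h, w, b) - G(p, v - h, w, b)) / (2.0 * h)
    dGdw = (G(p, v, w + h, b) - G(p, v, w - h, b)) / (2.0 * h)
    dGdb = (G(p, v, w, b + h) - G(p, v, w, b - h)) / (2.0 * h)
    pdot = -((v + gam) + math.pi * Gam * dGdv)
    vdot = math.pi * Gam * dGdp
    wdot = b + (12.0 * Gam / math.pi) * dGdb
    bdot = (3.0 / math.pi ** 2) * (4.0 / (3.0 * w ** 3) + g * K / (3.0 * w ** 2)
                                   - 4.0 * math.pi * Gam * dGdw)
    return pdot, vdot, wdot, bdot
```

For well-separated fundamental solitons ($p=10$, $w=w_{f}=1$, $b=0$) this yields $\dot{p}\approx-(v+\gamma)$ and $\dot{v}\approx\dot{w}\approx\dot{b}\approx0$, as expected in the noninteracting regime.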
These equations govern the evolution of the four independent variational parameters that characterize the system of symmetric solitons possessing the fixed functional form given by the \emph{ansatz} \eqref{ANSATZ}. The set of parameters $\mathcal{C}(t)=\{p(t),v(t),w(t),b(t)\}$ expresses the configuration of the system at an instant of time $t>0$, which evolves from an initial configuration $\mathcal{C}_{0}=\{p_{0},v_{0},w_{0},b_{0}\}$ (here we use the notation $q(0)=q_{0}$). To properly investigate the scattering of symmetric solitons in this variational model, one needs to build a set of $\mathcal{C}_{0}$ that corresponds to a desired pre-collisional scenario. In Fig. \ref{Fig1}, two illustrative representations of such a pre-collisional scenario are shown. In this case, we have $\left|\xi\right|\gg1$, which means that the separation of the solitons (given by $2|p|$) is much greater than their width, so that the tail overlap at the origin of the coordinate system is negligible and the system can be represented by two noninteracting symmetric solitons. This correspondence is no longer valid when the interaction stage begins, i.e., at the ``moment'' in which the decreasing separation satisfies $\left|\xi\right|\gtrsim1$ and the increasing overlap of the solitons' tails eventually becomes large enough for the effects of the Rabi interaction to become substantial. We will see (in the next section) that the interacting solitons can collide once or several times. In the latter case, they can form a bound state that endures until the last collision. Each collision is a process that mostly affects the dynamics of the solitons near the instant of maximal overlap (as depicted in Fig.
\ref{Fig1}(b) for the first collision), which is denoted by $t=t_{\text{col}}^{\text{\tiny(\ensuremath{j})}}$ for the $j$-th collision (hence, $p(t_{\text{col}}^{\text{\tiny(\ensuremath{j})}})=0$), with $j=1\,,\dots,\,n_{\text{col}}$ and $n_{\text{col}}$ being the total number of collisions during the bound state. These collision processes can induce width oscillations in the solitary waves, a dynamical property that manifests itself when part of the solitons' kinetic energy is stored in a vibration of the wave profile. This property plays a very important role in the bound-state dynamics and can persist after the unbinding. One can thus expect the post-collisional scenario to be characterized by scattered solitons moving away from each other and endowed with width oscillations (this scenario is illustrated in Fig. \ref{Fig1}(b) for a transmission case). As their separation gradually increases, the inequality $\left|\xi\right|\gg1$ eventually holds, allowing the noninteracting-solitons correspondence to be applied again. In this work we focus on the scattering of solitary waves that take the form of fundamental soliton solutions during the pre-collisional scenario; this means that the solitons' shape remains practically the same until the interaction stage (no width change: $\dot{w}=0$). Width oscillations during the post-collisional scenario are expected and analytically tractable due to the simplifications allowed by the $\left|\xi\right|\gg1$ regime in the reduced model equations (Eqs. (\ref{vp})-(\ref{bp})). Hence, the width dynamics in this regime is studied in the next section, which also introduces some important concepts and definitions regarding the total energy of the system, which are essential in the discussions concerning the main issue of this article.
\section{Initial conditions and width oscillations\label{sec:IV}} In order to build the general form of a set of parameters $\mathcal{C}_{0}$ for pre-collisional scenarios, some basic insight into the solitons' dynamics in the reduced model is required, and hence Eqs. \eqref{vp}-\eqref{bp} need to be analyzed. Firstly, note that in all four equations there is a term directly proportional to $\Gamma(\partial G/\partial q)$ (with $q=v,\,w,\,p$ or $b\,$), which couples the variational parameters to each other. When the solitons are far from each other (as in pre- or post-collisional scenarios), i.e., for $\left|\xi\right|\gg1$, these coupling terms become negligible, since the denominator of $G$ grows exponentially for large $|\xi|$ and $G$ is suppressed by a dominating factor $\propto\exp\left[-\left|\left(1+\pi wb/2\right)\xi\right|\,\right]\,$, allowing one to assume that $\partial_{q}G\approx0$ and $G\approx0\,$. In this regime, the reduced model describes noninteracting solitons (with null acceleration, $\dot{v}=0$ in \eqref{vp}) moving toward (outward from) the origin when $\dot{p}<0$ ($\dot{p}>0$), with constant absolute velocity $\left|v'\right|$, as stated by Eq. \eqref{pp}. This equation also shows that $v$ can be identified with the propagation velocity (given by $\dot{p}\,$) only in the absence of the SO coupling ($\gamma=0$). Note that the above approximations fail when the solitons get closer to each other, such that the term $\pi\Gamma\partial_{v}G$ becomes relevant. In fact, the role of the variational parameter $v$ consists in emulating the effect of the phase velocity that, together with the group velocity $\gamma$ induced by the SO coupling, promotes the collisional scenario of solitons moving initially with propagation velocity $v_{0}'=v_{0}+\gamma>0$ (as previously pointed out in Fig. \ref{Fig1}). Eqs. \eqref{wp} and \eqref{bp} govern the dynamics of the shape parameters $(w,b)$.
In the regime $\left|\xi\right|\gg1$, the parameter $b$ dictates the variations of the width, since $b=\dot{w}$, and the conditions for a fixed profile can be derived by simultaneously imposing $b=0$ and $\dot{b}=0$. The solutions are $w_{f}=4/\left(\left|g\right|K\right)$ and $b_{f}=0$, with $f$ standing for \textit{fundamental} (without oscillation). Then, to get a pre-collisional configuration consisting of fundamental solitons, one can simply use a set of initial parameters of the form $\mathcal{C}_{0}^{f}=\{p_{0},v_{0},w_{f},b_{f}\}$ such that $v_{0}>-\gamma$ and $\left|\xi_{0}\right|\gg1$. Next, by considering slightly different shape parameters, an analytical study of the width behavior can be performed directly from the dynamical equations. To this end, the width parameter is rewritten as $w(t)=\left[1+W(t)\right]w_{f}$, with the new parameter $W(t)\ll1$ being the relative deviation from $w_{f}$. The latter assumption allows one to expand Eq. \eqref{bp} in a Taylor series ($(1+W)^{-n}=1-n\,W+{\cal O}(W^{2})$ for $n>0$), leading to the following equations: \begin{align} \begin{cases} w_{f}\dot{W}-b=0\\ W+\dot{b}/\mathcal{B}\hspace{0.5pt}=0 \end{cases} & ,\quad\mathcal{B}=\dfrac{4}{\pi^{2}w_{f}^{3}}\;,\label{LVeqs} \end{align} neglecting terms of order ${\cal O}(W^{2})$. Eqs. \eqref{LVeqs} can be cast in the decoupled form $\ddot{q}+\left(\mathcal{B}/w_{f}\right)q=0$ (with $q=W$ or $b\,$), which reveals that both $w$ and $b$ undergo harmonic oscillations with angular frequency $\omega_{w}^{\textrm{{\tiny\,(LO)}}}=\sqrt{\mathcal{B}/w_{f}}=g^{2}K^{2}/\left(8\pi\right)$ (LO stands for low-amplitude oscillations). Additionally, Eqs. \eqref{LVeqs} show that these parameters oscillate out of phase by $\pi/2$ radians, with oscillation amplitudes $\hat{W}$ and $\hat{b}$ related through the ratio $\hat{b}/\hat{W}=\omega_{w}^{\textrm{{\tiny\,(LO)}}}$; hence the condition $\hat{W}\ll1$ implies $\hat{b}\ll1$.
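As a quick numerical cross-check of the fixed point and the low-amplitude frequency (a sketch; $g$ and $K$ take the values adopted later in Sec. \ref{sec:V}):

```python
import math

g, K = -4.0, 1.0                       # values adopted in Sec. V
w_f = 4.0 / (abs(g) * K)               # fundamental width (here, 1.0)
B = 4.0 / (math.pi ** 2 * w_f ** 3)    # the constant B of Eqs. (LVeqs)
omega_LO = math.sqrt(B / w_f)          # low-amplitude angular frequency
# agrees with the closed form quoted in the text:
assert math.isclose(omega_LO, g ** 2 * K ** 2 / (8.0 * math.pi))
```

For these parameter values, $\omega_{w}^{\textrm{{\tiny\,(LO)}}}=2/\pi$, the asymptotic value that reappears in the high-energy scattering analysis of Sec. \ref{sec:V}.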
In the interaction stage, the coupling terms containing $\partial_{q}G$ influence the system's dynamics in a nontrivial way that is not analytically tractable. Since the shape parameters are altered during the collision processes, width oscillations are expected to occur, but their behavior is far from quasi-harmonic, because the inequality $\left|\xi\right|\gg1$ does not hold and $\hat{W}$ is not small. The latter condition also applies to the post-collisional scenarios, i.e., the scattered solitons can be endowed with highly nonharmonic width oscillations. To investigate this case, one can exploit the fact that the total energy of the system is a conserved quantity, given by the Hamiltonian \begin{equation} H(p,v,w,b)=H_{\textrm{{\tiny TM}}}+H_{\textrm{{\tiny VM}}}+\pi\Gamma\left(G-G_{0}\right),\label{RM_HAM} \end{equation} where \begin{eqnarray*} H_{\textrm{{\tiny TM}}}(p,v) & = & \dfrac{1}{2}\left(v+\gamma\right)^{2}+\pi\Gamma G_{0},\\ H_{\textrm{{\tiny VM}}}(w,b) & = & \dfrac{Kg}{12w}+\dfrac{1}{6w^{2}}+\frac{\pi^{2}}{24}b^{2},\\ G_{0}=\left.G\,\right|_{(w,b)=(w_{f},0)} & = & \dfrac{\sin\left(2pv\right)}{\sinh\left(2p/w_{f}\right)\sinh\left(\pi vw_{f}\right)}. \end{eqnarray*} The first and second terms in the Hamiltonian correspond to the energy within the solitons' translational mode (TM) and vibrational mode (VM), respectively, and the third is an energy term due to the interaction of these modes \cite{Yang_10}. The idea of casting the Hamiltonian as in \eqref{RM_HAM} is to highlight the energy contributions arising from each type of motion of the solitons in the reduced model. The Hamiltonian \eqref{RM_HAM} in its entire form will be used in the next section. For now, the focus is on the general behavior of the width oscillations emerging in post-collisional scenarios.
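In numerical work, the conservation of Eq. \eqref{RM_HAM} provides a useful check on any trajectory of the reduced model. A minimal sketch follows (the parameter values are those adopted later in Sec. \ref{sec:V}, with $\gamma=0$, as an assumed instantiation):

```python
import math

g, K, Gam, gam = -4.0, 1.0, -0.04, 0.0
w_f = 4.0 / (abs(g) * K)

def G(p, v, w, b):
    """Coupling function of Eq. (G_FUNC)."""
    xi = 2.0 * p / w
    zeta = 2.0 * v + xi * b
    return math.sin(zeta * p) / (math.sinh(xi) * math.sinh(math.pi * zeta * w / 2.0))

def H(p, v, w, b):
    """Total energy, Eq. (RM_HAM): H_TM + H_VM + mode-interaction term."""
    G0 = G(p, v, w_f, 0.0)                  # G evaluated at (w, b) = (w_f, 0)
    H_TM = 0.5 * (v + gam) ** 2 + math.pi * Gam * G0
    H_VM = K * g / (12.0 * w) + 1.0 / (6.0 * w ** 2) + (math.pi ** 2 / 24.0) * b ** 2
    return H_TM + H_VM + math.pi * Gam * (G(p, v, w, b) - G0)
```

For a fundamental pre-collisional configuration with $|\xi|\gg1$, this reduces to $(v')^{2}/2+H_{\textrm{{\tiny VM}}}^{(0)}$ up to exponentially small $G$ terms.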
In this regime, terms originating from the function $G$ are negligible, allowing one to identify the solitons' TM energy with their kinetic energy, i.e., $H_{\textrm{{\tiny TM}}}(v')=\left(v'\right)^{2}/2$. By considering the configurations at $t=0$, given by $\mathcal{C}_{0}^{f}$, one obtains the Hamiltonian \begin{equation} H^{(0)}(v_{0})=H_{\textrm{{\tiny TM}}}^{(0)}+H_{\textrm{{\tiny VM}}}^{(0)}, \end{equation} where the first term, $H_{\textrm{{\tiny TM}}}^{(0)}=H_{\textrm{{\tiny TM}}}(v'_{0})$, is the initial TM energy, and the second, $H_{\textrm{{\tiny VM}}}^{(0)}=H_{\textrm{{\tiny VM}}}(w_{0},b_{0})=-g^{2}K^{2}/96\,$, is the self-energy of the fundamental solitons. After an ``infinitely'' long time interval $\left(t\rightarrow\infty\right)$, the Hamiltonian of the scattered solitons can be written as \begin{equation} H^{(\infty)}=H_{\textrm{{\tiny TM}}}^{(\infty)}+H_{\textrm{{\tiny VM}}}^{(\infty)},\label{RM_HAM_F} \end{equation} where $H_{\textrm{{\tiny TM}}}^{(\infty)}=H_{\textrm{{\tiny TM}}}(v_{\infty}')$, with $v_{\infty}=v(t\rightarrow\infty)$, and \[ H_{\textrm{{\tiny VM}}}^{(\infty)}=\left.\left(\dfrac{Kg}{12w}+\dfrac{1}{6w^{2}}+\frac{\pi^{2}b^{2}}{24}\right)\right|_{t\rightarrow\infty}. \] Here, $v_{\infty}$ is the final (constant) value of the phase velocity, and $H_{\textrm{{\tiny TM\,(VM)}}}^{(\infty)}$ is the final TM (VM) energy. We stress that the parameter $v$ approaches $v_{\infty}$ asymptotically during the post-collisional scenario; in practice, however, one can set $t_{\infty}$ as the instant at which the initial separation is reattained ($p(t_{\infty})=p_{0}$), with $t\rightarrow\infty$ in \eqref{RM_HAM_F} replaced by $t=t_{\infty}$ (as shown in the last frame of Fig. \ref{Fig1}(b)). The energy conservation implies that $\Delta H=H^{(\infty)}-H^{(0)}=0$.
By using this result combined with the equations $b=\dot{w}$ and $w=\left(1+W\right)w_{f}$, one can obtain the following equation for the parameter $W(t)$ ($t>t_{\infty}$) in terms of the initial and final propagation velocities \begin{equation} \left(\pi\,\dot{W}\right)^{2}+\left(\dfrac{g^{4}K^{4}}{64}\right)\dfrac{W^{2}}{(1+W)^{2}}=-\left(\dfrac{3g^{2}K^{2}}{2}\right)\Delta H_{\textrm{{\tiny TM}}}\ ,\label{POST_WO} \end{equation} where $\Delta H_{\textrm{{\tiny TM\,(VM)}}}=H_{\textrm{{\tiny TM\,(VM)}}}^{(\infty)}-H_{\textrm{{\tiny TM\,(VM)}}}^{(0)}$ is the TM (VM) energy variation, obeying the relation $\Delta H_{\textrm{{\tiny TM}}}=-\Delta H_{\textrm{{\tiny VM}}}$. Based on the positiveness of all terms on the left-hand side of Eq. \eqref{POST_WO}, the energy variations of the modes are such that $\Delta H_{\textrm{{\tiny TM}}}\leq0$ and $\Delta H_{\textrm{{\tiny VM}}}\geq0$, which implies $\left|v_{\infty}'\right|\leq v_{0}'$ (recall that $v_{0}'>0$), with the equalities holding when the scattered solitons have no vibrational profile ($W=\dot{W}=0$). Except for this latter trivial case, $W$ has two critical values (denoted by $W_{c}^{\pm}$) that are obtained from Eq. \eqref{POST_WO} subjected to the condition $\dot{W}=0$. These critical values are found to be \begin{align} W_{c}^{\pm} & =\pm\dfrac{\sqrt{6\,\Delta H_{\textrm{{\tiny VM}}}\phantom{^{0}}}}{|g|K/4\mp\sqrt{6\,\Delta H_{\textrm{{\tiny VM}}}\phantom{^{0}}}}\ ,\label{CRIT_WIDTH} \end{align} with $W_{c}^{+}\geq0$ being the positive critical value and $W_{c}^{-}\leq0$ the negative one. Separating variables in this first-order differential equation for $W$, one gets \begin{equation} dt=\dfrac{8\pi}{|g|K}\dfrac{(1+W)\,dW}{\sqrt{96\,\Delta H_{\textrm{{\tiny VM}}}(1+W)^{2}-g^{2}K^{2}W^{2}\phantom{^{0}}}}\ .\label{W_INTEGRAL} \end{equation} Solving Eq. \eqref{W_INTEGRAL} for $W(t)$ in closed form appears to be a hard task. However, the behavior of the width parameter is periodic.
So, one can set $t(W_{c}^{+})-t(W_{c}^{-})$ equal to half of the width-oscillation period $\left(\,T_{w}/2\,\right)$. Hence, by using the relation $\omega_{w}=2\pi/T_{w}\,$, the angular frequency of the width oscillations is found to be \begin{equation} \omega_{w}=\omega_{w}^{\textrm{{\tiny\,(LO)}}}\left[1-\dfrac{\Delta H_{\textrm{{\tiny VM}}}}{|H_{\textrm{{\tiny VM}}}^{(0)}|}\right]^{3/2},\hspace*{1em}\Delta H_{\textrm{{\tiny VM}}}\leq|H_{\textrm{{\tiny VM}}}^{(0)}|\,.\label{WIDTH_AFREQ} \end{equation} Eqs. \eqref{CRIT_WIDTH} and \eqref{WIDTH_AFREQ} characterize the width oscillations in the post-collisional scenario in terms of the initial and final propagation velocities, $v_{0}'$ and $v_{\infty}'$, which provide the energy increase in the VM ($\Delta H_{\textrm{{\tiny VM}}}=[\,\left(v_{0}'\right)^{2}-\left(v_{\infty}'\right)^{2}\,]/2\,$). Since $\left|v_{\infty}'\right|\leq v_{0}'\,$, the scattering can be of three types, namely, elastic ($|v_{\infty}'|=v_{0}'$), inelastic ($|v_{\infty}'|<v_{0}'$), and completely inelastic ($|v_{\infty}'|=0$). An elastic scattering occurs when the TM energy is completely recovered after the interaction stage, resulting in scattered solitons with fixed shape ($\Delta H_{\textrm{{\tiny VM}}}=0$ and $W_{c}^{\pm}=0$); otherwise, the amount of energy not recovered remains stored in the VM (inelastic scattering), and the scattered solitons will vibrate ($\Delta H_{\textrm{{\tiny VM}}}>0$ and $W_{c}^{\pm}\neq0$). If this amount of energy is very small, such that $\left|v_{\infty}'\right|\lesssim v_{0}'$ (quasi-elastic scattering), the vibration can be considered quasi-harmonic, because $\Delta H_{\textrm{{\tiny VM}}}\ll|H_{\textrm{{\tiny VM}}}^{(0)}|$ implies that $W_{c}^{+}\approx|W_{c}^{-}|\ll1$ and $\omega_{w}\approx\omega_{w}^{\textrm{{\tiny\,(LO)}}}$, which validates the results of the previous low-amplitude approach.
If the final TM energy is zero, the total energy of the system is entirely contained in the VM (completely inelastic scattering, $\Delta H_{\textrm{{\tiny VM}}}=\left(v_{0}'\right)^{2}/2$ or $H^{(\infty)}=H_{\textrm{{\tiny VM}}}^{(\infty)}$), resulting in scattered solitons with fixed separation vibrating with the largest (lowest) possible amplitude (frequency). In terms of width oscillations, this means that for a specific value of $v_{0}'$, the critical values of $|W|$ are maximal and $\omega_{w}$ is minimal. Since knowledge of $v_{\infty}'$ is enough to characterize both the TM and VM dynamics of the scattered solitons, the investigation of the solitons' scattering starts from the choice of the initial value of $v_{0}'$ and its influence on the interaction stage. \section{Numerical Results and Discussion \label{sec:V}} We set the value of the nonlinearity strength $g$ such that the width of the fundamental soliton solution is $w_{f}=1$ and the solitons' total norm is $K=1$. These constraints are attained for $g=-4$. Also, we set the Rabi coupling to $\Gamma=-0.04$, which allows us to obtain interesting dynamical effects. The interaction between the solitons is sufficiently small for a separation of $20$ units, which justifies our choice of $p_{0}=10$. The program developed for the simulations uses double precision for both real and complex numbers; it is written in the Fortran 95 language and employs a $4^{\text{th}}$-order Runge-Kutta method to numerically solve the coupled ODEs \eqref{vp}-\eqref{bp} with initial conditions given by $\mathcal{C}_{0}^{f}(v_{0}')$, with $v_{0}'=v_{0}+\gamma>0$ being a variable initial parameter defining the pre-collisional configuration. The time step is set to $10^{-4}$, which is small enough to provide a very good approximation for the evolution of the variational parameters under the conditions of our interest.
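For concreteness, the integration scheme just described can be sketched as follows (in Python rather than Fortran 95; the finite-difference evaluation of $\partial_{q}G$ and the stopping test are our simplifications, not a transcription of the actual program):

```python
import numpy as np

g, K, Gam, gam = -4.0, 1.0, -0.04, 0.0       # Sec. V parameters (gamma = 0 here)
w_f, b_f = 4.0 / (abs(g) * K), 0.0            # fundamental-soliton shape

def G(p, v, w, b):
    xi = 2.0 * p / w
    zeta = 2.0 * v + xi * b
    return np.sin(zeta * p) / (np.sinh(xi) * np.sinh(np.pi * zeta * w / 2.0))

def rhs(y, h=1e-6):
    p, v, w, b = y
    dGdp = (G(p + h, v, w, b) - G(p - h, v, w, b)) / (2 * h)
    dGdv = (G(p, v + h, w, b) - G(p, v - h, w, b)) / (2 * h)
    dGdw = (G(p, v, w + h, b) - G(p, v, w - h, b)) / (2 * h)
    dGdb = (G(p, v, w, b + h) - G(p, v, w, b - h)) / (2 * h)
    return np.array([
        -((v + gam) + np.pi * Gam * dGdv),                       # p-dot
        np.pi * Gam * dGdp,                                      # v-dot
        b + (12.0 * Gam / np.pi) * dGdb,                         # w-dot
        (3.0 / np.pi ** 2) * (4.0 / (3.0 * w ** 3) + g * K / (3.0 * w ** 2)
                              - 4.0 * np.pi * Gam * dGdw),       # b-dot
    ])

def evolve(v0, p0=10.0, dt=1e-4, t_max=500.0):
    """RK4 evolution from C0^f = {p0, v0, w_f, b_f}; stops once the initial
    separation is reattained in the post-collisional scenario."""
    y, t, collided = np.array([p0, v0, w_f, b_f]), 0.0, False
    while t < t_max:
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
        collided = collided or abs(y[0]) < 1.0
        if collided and abs(y[0]) >= p0:     # |p(t_inf)| = p0 reattained
            break
    return t, y
```

Sweeping \texttt{evolve} over a range of $v_{0}$ values reproduces the iterative routine described below.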
Also, in order to check the accuracy of the results obtained, we performed some tests considering smaller discretization steps, for which we obtained similar results. \begin{figure*}[t] \begin{centering} \includegraphics[width=1\textwidth]{F2.eps} \par\end{centering} \caption{(Color online)\textit{ Left panel}: Scattering results for $v_{\infty}$ versus $v_{0}$ obtained via iterative simulations of the reduced ODE model (Eqs. \eqref{vp}-\eqref{bp}) in four $v_{0}$-ranges (a)-(d), within the interval $[0,v_{c}]$ and with $\gamma=0$, i.e., without the SO coupling effect. The highlighted rectangular regions (gray) indicate the $v_{0}$-range of the plot immediately below, i.e., the panels in (b), (c) and (d) are successive ``zooms'' of the highlighted regions. The color scheme at the bottom of this figure uses the integer $n_{\text{ref}}$, called the reference number (its value is specified in the top right corner of each plot), to provide an adaptive rule for coloring the points $(v_{0},v_{\infty})$ according to the integer $n_{\text{col}}$ associated with the corresponding regular process (irregular ones are not plotted, since $n_{\text{col}}\gg1$). Also, some windows are labeled in each plot, with the notation explained in the right panel. \textit{Right panel: }(e) heatmap of the normalized function $|A'_{-}(x,t)|$. The $v_{0}$ value used in this simulation belongs to the interval of regularity of a $4$-pass collisional scattering window.
The notation used in the description of this heatmap is explained in the bottom boxes of this panel.} \label{Fig2} \end{figure*} To explore the influence of $v_{0}'$ on the solitons' dynamics, an iterative routine is implemented to perform a set of consecutive scattering simulations, each one using a different initial propagation velocity, $v_{0}'(j)$ (with $j\in\mathbb{N}$ being the iteration number), which can only assume values within a predefined $v_{0}'$-range $[v_{0}'(1),v_{0}'(n_{I})]$ (with $n_{I}$ being the total number of iterations). In this routine, the value of the SO coupling constant $\gamma$ is kept fixed, while $v_{0}$ is increased by a fixed amount $\delta v_{0}>0$ at the end of each iteration, i.e., $v_{0}'(j+1)=v_{0}'(j)+\delta v_{0}$. The length of the continuous interval defined by the $v_{0}'$-range is simply given by the difference between the $v_{0}'$ values used in the first and last scattering simulations, $L=v_{0}'(n_{I})-v_{0}'(1)$, and consequently $\delta v_{0}=L/(n_{I}-1)$. Moreover, for each scattering simulation the output data are obtained when the numerical evolution stops, after the program detects that the initial separation has been reattained in the post-collisional scenario. In this sense, the quantities analyzed are the number of collisions before unbinding, $n_{\text{col}}$, the exit velocity $v_{\infty}$, $W_{c}^{+}$, and $T_{w}$. We stress that we choose a convenient integer $n_{\text{ref}}$ as a reference, and only the points with $n_{\text{ref}}-4\leq n_{\text{col}}\leq n_{\text{ref}}-1$ are considered in our graphical analyses. The remaining points, with $n_{\text{col}}<n_{\text{ref}}-4$ or $n_{\text{col}}\geq n_{\text{ref}}$, are not plotted. \subsection{Scattering process without SO coupling ($\gamma=0$)} In this subsection we consider the system in the absence of SO coupling ($\gamma=0$).
This first step will provide us with a reference for the dynamical properties, which will be analyzed in detail in order to verify, in the next subsection, the influence of the SO coupling parameter $\gamma$ over them. The results of the iterative simulations show that in the high-energy collision regime ($v_{0}\gg1$) the solitons collide once ($n_{\text{col}}=1$) and their phase velocity barely diminishes ($v_{\infty}\lesssim v_{0}$), indicating that the scattering is quasi-elastic and that the single collision process promotes just a direct transmission (the solitons simply pass through each other). In this regime, as $v_{0}$ increases, the quantities $v_{\infty}$, $W_{c}^{+}$, and $\omega_{w}$ asymptotically approach the lines $v_{\infty}=v_{0}$, $W_{c}^{+}=0$, and $\omega_{w}=\omega_{w}^{\textrm{{\tiny\,(LO)}}}=2/\pi$, respectively, which are associated with the ``scattering'' of two noninteracting symmetric solitons. As $v_{0}$ is reduced, the scattering gradually becomes more inelastic, that is, $\Delta H_{\textrm{{\tiny VM}}}$ increases, causing $W_{c}^{+}$ to increase too and $\omega_{w}$ to decrease. When $v_{0}$ is close to the value $v_{\textrm{{\tiny(VM)}}}=0.374$, the excitation of the vibrational mode is maximal although the variation in the translational-mode energy is still relatively small (since $v_{\infty}\approx0.898\,v_{0}$); this means that $W_{c}^{+}$ is maximal too and $\omega_{w}$ is minimal, with $\max_{G}(W_{c}^{+})\approx0.397$ and $\min_{G}(\omega_{w})\approx0.881\,\omega_{w}^{\textrm{{\tiny\,(LO)}}}$ (the estimates were obtained from graphical analyses, and $G$ stands for global, i.e., for any $v_{0}>0$). Accordingly, as $v_{0}$ gets even smaller (low-energy collision regime, $v_{0}<v_{\textrm{{\tiny(VM)}}}$), $\Delta H_{\textrm{{\tiny VM}}}$ and, consequently, $W_{c}^{+}$ decrease too (the opposite holds for $\omega_{w}$).
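The estimates just quoted can be cross-checked directly against Eqs. \eqref{CRIT_WIDTH} and \eqref{WIDTH_AFREQ}; a minimal sketch (the input velocities are the rounded values quoted in the text):

```python
import math

g, K = -4.0, 1.0
omega_LO = g ** 2 * K ** 2 / (8.0 * math.pi)
absH_VM0 = g ** 2 * K ** 2 / 96.0           # |H_VM^(0)|

def postcollision(v0p, vinfp):
    """W_c^+ and omega_w/omega_LO from the initial and final propagation
    velocities, via Eqs. (CRIT_WIDTH) and (WIDTH_AFREQ)."""
    dH_VM = 0.5 * (v0p ** 2 - vinfp ** 2)   # energy transferred to the VM
    s = math.sqrt(6.0 * dH_VM)
    Wc_plus = s / (abs(g) * K / 4.0 - s)
    ratio = (1.0 - dH_VM / absH_VM0) ** 1.5
    return Wc_plus, ratio

print(postcollision(0.374, 0.898 * 0.374))  # ~(0.40, 0.88)
```

The returned values, $W_{c}^{+}\approx0.40$ and $\omega_{w}/\omega_{w}^{\textrm{{\tiny\,(LO)}}}\approx0.88$, agree with the graphical estimates $0.397$ and $0.881$ within the rounding of the input velocities.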
The origin of this inversion in the behavior of these quantities can be understood by analyzing the equation $\Delta H_{\textrm{{\tiny VM}}}=(v_{0}-v_{\infty})(v_{0}+v_{\infty})/2$ for decreasing $v_{0}$. The first factor always grows because the scattering becomes more inelastic, and it dominates during the high-energy collision regime. In contrast, the second factor always declines due to the decreasing amount of energy involved in the first collision; it exactly balances the growth promoted by the first one when $v_{0}=v_{\textrm{{\tiny(VM)}}}$, and dominates during the low-energy collision regime, causing $\Delta H_{\textrm{{\tiny VM}}}$ to decrease. This behavior persists until $v_{0}$ reaches a critical value $v_{c}\approx0.11755$, which corresponds to a completely inelastic scattering ($v_{\infty}=0$). If $v_{0}<v_{c}$, the solitons form a bound-state after the first collision process and $n_{\text{col}}\geq2$. The scattering simulations in this range reveal that the dynamics of this bound-state is very complex and rich in detail, requiring a quite extensive investigation in order to understand the underlying mechanism produced by the attractive Rabi interaction. From here on, the focus is on the correlations between the output quantities and the control (input) parameter $v_{0}'\in(0,v_{c}')$, and on how these arise from the reduced model description of the solitons' bound-state. In Fig. \ref{Fig2}, the left panel shows four plots of $v_{\infty}\times v_{0}$, which were generated from the data provided by the iterative simulations.
Specifically, panel (a) (with $n_{\text{ref}}=5$) covers a $v_{0}$-range in the low-energy collision regime, where $(v_{c},0)$ can be seen as a critical point that separates the region of direct transmission, or 1-pass collisional scattering (points with $n_{\text{col}}=n_{\text{ref}}-4=1$, see the color scheme at the bottom of the figure), from the region of multi-pass collisional scattering, where the post-collisional scenario is always preceded by the formation of a bound-state (points with $n_{\text{col}}=2,3$ and $4$). The distribution of points in this plot reveals that $v_{\infty}$ and $n_{\text{col}}$ obey the equation $\text{sign}(v_{\infty})=(-1)^{n_{\text{col}}-1}$, which states that a transmission-like scattering ($v_{\infty}>0$) always occurs when $n_{\text{col}}$ is odd, and a reflection-like scattering ($v_{\infty}<0$) always occurs when $n_{\text{col}}$ is even. Regarding the region of 1-pass collisional scattering (Fig. \ref{Fig2}(a)), one can verify that the points $(v_{0},v_{\infty})$ closely trace the upper segment of a hyperbola with functional form $x^{m}-y^{m}=v_{c}^{m}$ (with $v_{0}$ and $v_{\infty}$ taking the roles of $x$ and $y$, respectively), which has its vertex at the critical point and its asymptotes (the lines $y=\pm x$) represented by the dashed lines in Fig. \ref{Fig2}. Next, by a fitting procedure we get $m=1.814\pm0.007$, showing that the collision outcome can be predicted very accurately when $v_{0}\geq v_{c}$. This control is possible because a small variation in the initial velocity, $v_{0}\rightarrow v_{0}+\delta$, causes a small variation in the final velocity, $v_{\infty}\rightarrow v_{\infty}+\Delta$, with $\delta$ and $\Delta$ having the same order of magnitude, and the scattering is said to be regular in this sense. On the other hand, the same does not hold when $v_{0}<v_{c}$, since $v_{\infty}$ is found to be very sensitive to small changes in the value of $v_{0}$ in some regions.
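The estimation of the exponent $m$ can be illustrated as follows; this is a sketch using synthetic $(v_{0},v_{\infty})$ pairs generated from the fitted law itself (not the actual simulation data), with a simple grid search standing in for whatever least-squares routine one prefers:

```python
import numpy as np

v_c = 0.11755  # critical velocity separating direct transmission from bound states

def v_exit(v0, m):
    # upper branch of the hyperbola x^m - y^m = v_c^m, i.e. y = (x^m - v_c^m)**(1/m)
    return (v0**m - v_c**m) ** (1.0 / m)

# synthetic (v0, v_inf) pairs standing in for the simulation output (v0 >= v_c)
v0_data = np.linspace(v_c, 0.35, 200)
v_inf_data = v_exit(v0_data, 1.814)  # exponent reported in the text

# least-squares estimate of m by scanning a fine grid
m_grid = np.linspace(1.5, 2.2, 1401)
residuals = [np.sum((v_exit(v0_data, m) - v_inf_data) ** 2) for m in m_grid]
m_fit = float(m_grid[int(np.argmin(residuals))])
```

By construction the scan recovers the input exponent; applied to real data points it would return the best-fitting $m$ on the grid.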
Indeed, there are some regions with regularity for $v_{0}<v_{c}$, in which we can obtain predictable results. The most evident intervals of regularity are those where only 2-pass collision scattering ($n_{\text{col}}=2$) happens, called reflection windows, which are seen as valley-like shapes in Fig. \ref{Fig2}(a). The asymptote $y=-x$ is tangent to the curve defined by all these shapes, which means that an elastic 2-pass collision scattering is possible for a specific $v_{0}$ value within each reflection window. Interestingly, these windows appear to form a structure that presents self-similarity at any scale (a fractal-like scattering), i.e., any amplification of a smaller $v_{0}$-range containing the critical point reveals (given a sufficient point density) the same pattern of infinitely many reflection windows intertwined by regions in which $n_{\text{col}}>2$. This happens because both the length of a window and its separation distance to the nearest window can become arbitrarily small the closer one gets to the critical point. Regarding 3-pass collisional scattering ($n_{\text{col}}=3$), Fig. \ref{Fig2}(a) shows that it can happen if the $v_{0}$ value is sufficiently close to one of the edges of any reflection window, where some of the associated points are found to be within very small intervals of regularity, which technically requires a much higher local point density to be reasonably visualized. Therefore, in order to verify how these points are really distributed, iterative simulations were performed in $v_{0}$-ranges near the left and right sides of certain reflection windows. The complementary data acquired unfold some substructures of transmission windows that were previously hard to detect, and strongly indicate that 3-pass collisional scatterings can only occur when $v_{0}$ falls into an interval of regularity corresponding to one of these transmission windows, which assume lump-like shapes in Fig. \ref{Fig2}(b).
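These local rescans near window edges amount to a recursive zooming strategy, which can be sketched as follows; n_col_of is a hypothetical stand-in returning $n_{\text{col}}$ from the reduced-model integration at a given $v_{0}$, and the refinement criterion (rescan wherever $n_{\text{col}}$ changes between neighboring grid points) is our paraphrase of the procedure, not the authors' exact algorithm:

```python
def refine_windows(n_col_of, v_lo, v_hi, n_points, depth):
    """Scan [v_lo, v_hi]; wherever n_col changes between neighboring grid
    points a window edge lies in between, so zoom into that sub-interval."""
    dv = (v_hi - v_lo) / (n_points - 1)
    grid = [v_lo + k * dv for k in range(n_points)]
    found = [(v, n_col_of(v)) for v in grid]
    if depth > 0:
        for (va, na), (vb, nb) in zip(found[:-1], found[1:]):
            if na != nb:  # an edge of a reflection/transmission window
                found += refine_windows(n_col_of, va, vb, n_points, depth - 1)
    return found
```

Each zoom level unfolds the next layer of the self-similar substructures, provided the point density is sufficient at that scale.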
These substructures are endowed with the same self-similarity property previously discussed, but only those emerging at the left side of a reflection window present a pattern that resembles the one shown in panel (a) (the window heights in right-sided substructures decrease, rather than increase, in accordance with the asymptote $y=x$). Indeed, the range that encompasses the larger left-sided substructure, highlighted by a rectangular (gray) region in Fig. \ref{Fig2}(a), was simulated again with more points and displayed in Fig. \ref{Fig2}(b). This plot provides a wide view of the particular substructure chosen in Fig. \ref{Fig2}(a), where one can notice that both the window pattern and the distribution of points near the window edges are indeed very similar (``mirrored'') to those of the first plot. By investigating the surroundings of the transmission windows through some iterative simulations, other smaller substructures associated with 4-pass collisional scatterings are revealed. These are composed of reflection windows too and present a high degree of similarity with the previous plot, a signature of the fractal-like scattering, as one can attest by comparing it with the plot in Fig. \ref{Fig2}(c), which considers the left-sided substructure of the second transmission window (the highlighted (gray) region in Fig. \ref{Fig2}(b)). Thus, all the plotted points within the intervals intertwining the reflection windows in panel (a) are part of underlying substructures, which unfold whenever one investigates the distribution of points surrounding any reflection or transmission window. The whole structure composed of infinitely many reflection and transmission windows displays the main characteristic feature of a fractal, i.e., self-similarity.
Here, such a fractal-like structure consists of the main window pattern ($n_{\text{col}}=2$) plus the left-sided (right-sided) ones associated with $n_{\text{col}}\,$-pass collisional scatterings ($n_{\text{col}}\geq3$) that emerge in subregions within $v_{0}<v_{c}\,\land\,|v_{\infty}|\leq v_{0}$ that contain only the left (right) critical (or edge) point of a certain $(n_{\text{col}}-1)$-pass collisional scattering window, which is a point corresponding to a completely inelastic $(n_{\text{col}}-1)$-pass collisional scattering. A much higher degree of self-similarity is clearly noticed between the window patterns of the substructures, as one can realize by comparing Figs. \ref{Fig2}(b) and (c), which appear to be mirrored images (across the $v_{0}$-axis) of each other. The plot displayed in Fig. \ref{Fig2}(d) results from iterative simulations in the range highlighted in Fig. \ref{Fig2}(c); it emphasizes the fractal feature described and shows that the window pattern replicates more precisely in substructures that have the same type of window. As previously mentioned, another feature of the solitons' scattering is its high sensitivity to $v_{0}$ when this initial propagation velocity is not within an interval of regularity; this is a signature of chaos that allows us to infer that the scattering is predominantly chaotic when $v_{0}<v_{c}$, which is intrinsically related to the formation of bound-states generally involving a large number of collisions (i.e., $n_{\text{col}}\gg1$, except for the region of very low propagation velocities at the left of the larger reflection window). Hence, the fractal structure must arise from a recurrent internal mechanism that causes the scattering to become regular when specific conditions involving the solitons' translational and vibrational modes are attained. We stress that the fractal scattering of solitons in systems described by the (generalized) nonlinear Schr\"odinger equation was also verified in Refs.
\cite{Zhu_PRL08,Zhu_PD08,Dmitriev_CHAOS02,Teixeira_PLA16}. To unravel this internal mechanism, a detailed analysis of the solitons' dynamics during the interaction stage is needed. To this end, we first study the general aspects of the bound-states by examining the evolution of the solitons' profile from the perspective of the heatmaps of $|A'_{-}(x,t)|$. For that, several simulations are performed for different values of $v_{0}$ selected in some intervals of those reflection and transmission windows shown in Fig. \ref{Fig2}(a)-(d). By analyzing the bound-state formation for various input velocities within a same interval, one can only distinguish one scattering from another by comparing the shape vibrations and the exit angle ($\tan^{-1}(v_{\infty})$) in the post-collisional scenario; that is, before the final collision the dynamics is visually indistinguishable (this is more prominent when considering smaller windows). This means that each window has its own bound-state signature describing the consistent behavior of the solitons' modes that gives rise to the window itself. Moreover, this signature is unique and can be simply defined in terms of the number of complete shape vibrations (a full width oscillation period) between two consecutive collisions during the bound-state, as indicated in Fig. \ref{Fig2}(e). This full width oscillation period is taken as a time interval centered at an instant $t=t_{\text{peak}}$ of minimum profile width (or maximum profile amplitude). In this way one can count the number of peaks (spots in the heatmap where $|A'_{-}(x,t)|\apprle1$) between the $(j-1)$-th and the $j$-th collisions ($j=2,\dots,n_{\text{col}}$) and assign the resulting integer value to $\ensuremath{n_{\text{sv}}^{\text{\tiny(\ensuremath{j-1|j})}}}$ (see the notation introduced in Fig. \ref{Fig2}). Then, any $n_{\text{col}}\,$-pass collisional scattering window can be labeled in terms of these $n_{\text{col}}-1$ integers as pointed out by Fig.
\ref{Fig2}(e), where the heatmap displayed corresponds to the 4-pass collisional scattering window $W[3,4,5]$. Interestingly, the window signatures also follow a pattern that is naturally connected with the fractal-like structure. It is first seen in panel (a), where the label of the $j$-th window (always from left to right) is written as $W[j+1]$, i.e., $n_{\text{sv}}^{\text{\tiny(1\ensuremath{|}2)}}=j+1$. Then, based on the consistent window patterns previously discussed, one can infer from the heatmap analysis that the $k$-th window of the substructure emerging from the left side of $W[j+1]$ can be labeled as $W[j+1,k+1]$ (see Fig. \ref{Fig2}(b)), with the changing index $J=k+1$ defined as the main index. The same applies for the $l$-th window of the substructure emerging from the left side of $W[j+1,k+1]$, which has the label $W[j+1,k+1,l+1]$ ($J=l+1$ is the main index here), and so on. The integers $\ensuremath{n_{\text{sv}}^{\text{\tiny(\ensuremath{j-1|j})}}}$ ($j=2,\dots,n_{\text{col}}$) that define a $n_{\text{col}}\,$-pass collisional scattering window signature depend on the frequency of the shape vibration ($\ensuremath{\omega_{\text{sv}}}$) and on the time duration of each bounce, $\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{j-1|j})}}=t_{\text{col}}^{\text{\tiny(\ensuremath{j})}}-t_{\text{col}}^{\text{\tiny(\ensuremath{j-1})}}$. In analyzing the evolution of the shape parameters, we verified that $\ensuremath{\omega_{\text{sv}}}$ is approximately constant during the bouncing time intervals between collisions, when the tail overlap is small enough so that the interaction promotes an effective attraction maintaining the solitons' bound-state while exerting a weak influence over the previously induced shape oscillations.
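The peak counting that defines a window signature can be sketched as below; this is an illustrative implementation under the assumption that the profile amplitude is sampled at a fixed position (here the center $x=0$), with a peak taken as any local maximum above a threshold, mimicking the bright spots of the heatmap:

```python
import numpy as np

def window_signature(t, amplitude, t_collisions, threshold=0.95):
    """Count amplitude peaks (one per full width oscillation) between each
    pair of consecutive collisions, yielding the signature [n_sv(1|2), ...].

    `amplitude` samples |A'_-(x=0, t)| on the grid `t`; a peak is a strict
    local maximum exceeding `threshold`."""
    inner = amplitude[1:-1]
    is_peak = (inner > amplitude[:-2]) & (inner > amplitude[2:]) & (inner > threshold)
    t_peaks = t[1:-1][is_peak]
    return [int(np.sum((t_peaks > ta) & (t_peaks < tb)))
            for ta, tb in zip(t_collisions[:-1], t_collisions[1:])]
```

For the heatmap of Fig. \ref{Fig2}(e), such a count over the three bounces would return the label $[3,4,5]$ of the window $W[3,4,5]$.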
Also, we found that the quantity $\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}$ (time duration of the last bounce) strictly increases with $v_{0}$ as it covers the entire interval (from left to right) of a $n_{\text{col}}\,$-pass collisional scattering window (sub)structure, with the corresponding critical point being a singularity at which $\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}\rightarrow\infty$. If $v_{0}$ is within the interval of regularity of a window with main index $J$, i.e., $\ensuremath{n_{\text{sv}}^{\text{\tiny(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}=J}$, one can write $\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}=J\ensuremath{T_{\text{sv}}}+\delta t_{\text{col}}$, in which $\ensuremath{T_{\text{sv}}}=2\pi/\ensuremath{\omega_{\text{sv}}}$ is the shape vibration period and $\delta t_{\text{col}}$ is a $v_{0}$-dependent term accounting for the time duration associated with the $(n_{\text{col}}-1)$-th and $n_{\text{col}}$-th collisions, when $\ensuremath{\omega_{\text{sv}}}$ is no longer constant. We found that this linear behavior of $\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}$ as a function of $J$ occurs when the left or the right edge points of five consecutive windows ($J=1,\dots,5$) are considered. In this case, $\delta t_{\text{col}}$ tends to assume the same value when $v_{0}$ is about to leave the intervals of regularity. The slope of the fitted line provides a reasonable estimate of $\ensuremath{T_{\text{sv}}}$, which was obtained with a standard deviation always less than $2\%$ for two sets of five points of each plot in Fig. \ref{Fig2}.
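The extraction of $T_{\text{sv}}$ from the slope can be sketched as follows; the numbers are illustrative placeholders (generated from the linear law itself), not the paper's measured bounce durations:

```python
import numpy as np

# bounce durations at the edge points of five consecutive windows (J = 1..5),
# generated here from the reported law Delta t_bounce = J*T_sv + dt_col with
# illustrative values of the period and of the common collision-time offset
J = np.arange(1, 6)
T_sv_true, dt_col = 10.0, 18.5
dt_bounce = J * T_sv_true + dt_col

# the slope of the straight-line fit estimates T_sv; the intercept, dt_col
T_sv_est, dt_col_est = np.polyfit(J, dt_bounce, 1)
```

Applied to the measured edge-point durations, the same fit yields the $\left\langle T_{\text{sv}}\right\rangle$ estimates quoted below.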
Concerning the structure in panel (a), the average value obtained was $\left\langle \ensuremath{T_{\text{sv}}}\right\rangle =10.8\pm0.2\ (1.8\text{\%})$, while for the substructures in panels (b)-(d) the average values of $\ensuremath{T_{\text{sv}}}$ are the same, given by $\left\langle \ensuremath{T_{\text{sv}}}\right\rangle =9.98\pm0.02\ (0.21\text{\%})$. The numerical quantity $2\pi/\left\langle \ensuremath{T_{\text{sv}}}\right\rangle \approx0.63$ is a reasonable estimate for the shape vibration frequency, which indicates that such vibrational motion in regular processes indeed has a characteristic frequency. Next, we analyze the behavior of $\delta t_{\text{col}}$ in terms of $v_{0}$. We found that this quantity strictly increases with $v_{0}$ such that $1.7\lesssim\delta t_{\text{col}}/\ensuremath{T_{\text{sv}}}\lesssim2.0$, with the left (right) sided extreme value reached when $v_{0}$ assumes the value corresponding to the left (right) edge of a window. So, it follows that the bouncing frequency $\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}=2\pi/\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}$ must approximately satisfy the relation \begin{equation} \left(J+1.85+d\right)\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}=\omega_{\text{sv}}\quad\left(\ \left|d\right|\lesssim0.15\ \right),\label{BounceSV} \end{equation} which establishes the condition of motion synchronization involving the solitons' translational and vibrational modes that gives rise to the intervals of regularity.
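The synchronization condition of Eq. \eqref{BounceSV} can be turned into a simple membership test, sketched below (the tolerance $0.15$ is the bound $\left|d\right|\lesssim0.15$ quoted in the equation; the frequency value is illustrative):

```python
import math

def synchronized(omega_bounce, omega_sv, J, d_max=0.15):
    """Motion-synchronization condition (J + 1.85 + d)*omega_bounce = omega_sv,
    with |d| <= d_max; True when the last bounce falls inside a window of
    regularity with main index J."""
    d = omega_sv / omega_bounce - (J + 1.85)
    return abs(d) <= d_max

omega_sv = 2 * math.pi / 10.0   # illustrative shape-vibration frequency
```

For a given $J$, the test accepts bouncing frequencies slightly below the resonance value $\omega_{\text{sv}}/(J+2)$ and rejects those too far from or above it.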
This condition means that the bouncing motion is such that $\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}$ must approach a state of resonance with the shape vibration, $\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}=\omega_{\text{sv}}/(J+3)$, from below by a suitable amount provided by Eq. \eqref{BounceSV}. The process is always irregular if $\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}$ is not close enough to or exceeds a resonance value ($\omega_{\text{sv}}/3,\,\omega_{\text{sv}}/4,\,\omega_{\text{sv}}/5,\,\dots$). The narrowing of the windows of a given structure results from the behavior of $\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}-1|n_{\text{col}}})}}$ with $v_{0}$, which decreases faster the closer $v_{0}$ is to the corresponding critical value, in such a way that the greater the integer $J$, the smaller the $v_{0}$-interval in which condition \eqref{BounceSV} holds and, consequently, the narrower the window. \begin{figure}[t] \begin{centering} \includegraphics[width=1\columnwidth]{F3.eps} \par\end{centering} \caption{(Color online) Energy of the solitons' translational mode ($H_{\textrm{{\tiny TM}}}$) as a function of the time variable $t-\ensuremath{t_{\text{col}}^{(1)}}$, considering $20$ scattering processes (indexed with integers $j\in[1,20]$ in the heatmap). The $v_{0}$ values for each one of these processes are highlighted by horizontal lines crossing the rotated version of the plot seen in Fig. \ref{Fig2}(a) (right side). The dashed vertical line at the time $t=\ensuremath{t_{\text{col}}^{(1)}}$ highlights the instant of the first collision, i.e., when the interaction causes an effective decrease in $H_{\textrm{{\tiny TM}}}$, resulting in part of the initial TM energy being converted into VM energy, which in turn promotes shape vibrations.
For the $10$ last processes ($j=11,\dots,20$), $v_{0}<v_{c}$ and $H_{\textrm{{\tiny TM}}}$ becomes negative right after $t=\ensuremath{t_{\text{col}}^{(1)}}$.} \label{Fig3} \end{figure} \begin{figure*}[t] \begin{centering} \includegraphics[width=0.85\paperwidth]{F4.eps} \par\end{centering} \caption{(Color online) Energy of the solitons' translational mode ($H_{\textrm{{\tiny TM}}}$) as a function of the time variable $t-\ensuremath{t_{\text{col}}^{(1)}}$, considering $20$ scattering processes. Here, the $v_{0}$ values for each one of these processes are taken in the $W[5]$'s interval of regularity, analogously to the plot in Fig. \ref{Fig3}. In panel (a), $v_{0}$ covers the full range (as indicated by the inset, containing the corresponding $v_{\infty}\times v_{0}$ plot). In this case, the $20$ energy plots are almost indistinguishable. In panels (b) and (c), $v_{0}$ covers the left and the right half range, respectively, starting from the middle point and then moving toward the edges (see the inset panels). In each plot, $10$ processes are displayed and indexed with integers $j\in[1,10]$. The time range starts from the final point shown in panel (a). At $t-\ensuremath{t_{\text{col}}^{(1)}}=120$, the solitons will have spread out and $H_{\textrm{{\tiny TM}}}\approx H_{\textrm{{\tiny TM}}}^{(\infty)}$.} \label{Fig4} \end{figure*} From section \ref{sec:IV}, we bring back the quantities defined in \eqref{RM_HAM_F} to investigate the scattering mechanism in terms of the energy within the solitons' modes. To this end, we first considered $20$ distinct scattering processes with $v_{0}$ varying in the interval $[v_{c}-\Delta v,v_{c}+\Delta v]$ from right to left, with $v_{c}-\Delta v$ chosen to match the $v_{0}$ value of $W[2]$'s left edge point. The right half of this interval is in the direct transmission region, i.e., the first $10$ processes are regular ones consisting of just one collision. In Fig.
\ref{Fig3}, the temporal evolution of $H_{\textrm{{\tiny TM}}}$ is shown for each scattering process, with the index $j\in[1,20]$. We observe that for $j\in[1,10]$ ($v_{0}>v_{c}$), the first collision effectively causes a decrease in the energy of the TM, which reaches a stable positive constant value ($H_{\textrm{{\tiny TM}}}^{(\infty)}=\text{const.}>0$) as the solitons get far apart from each other. For $j=10$, the associated $v_{0}$ value is very close to $v_{c}$ and $H_{\textrm{{\tiny TM}}}^{(\infty)}\apprge0$. This is an expected result since the ``critical process'' ($v_{0}=v_{c}$) must end up with $H_{\textrm{{\tiny TM}}}^{(\infty)}=0$. From Fig. \ref{Fig3}, we verify that for $j\in[11,20]$ ($v_{0}<v_{c}$), $H_{\textrm{{\tiny TM}}}$ is negative and oscillatory (sometimes reaching the positive range again) until the moment of the last collision ($t=t_{\text{\,col}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}})}}$, which is close to the time $t-t_{\text{\,col}}^{\text{\tiny\,(\ensuremath{n_{\text{col}}})}}=40$ for the two last processes, with $v_{0}$ within $W[2]$). In this case the TM recovers enough energy to remain positive (unbinding) and eventually constant as the separation between the solitons increases. Therefore, these results show that a final negative TM energy value is a signature of the formation of bound-states. Also, during the evolution of this state one can attest that $H_{\textrm{{\tiny TM}}}$ is indeed a predominantly negative-valued function of time, i.e., it can eventually become positive-valued for a short time without triggering the unbinding and then return to the negative range, but we verified that this can happen only in chaotic processes. Within the regular windows, when $H_{\textrm{{\tiny TM}}}$ oscillates and reaches the positive range, the solitons unbind and scatter away ($H_{\textrm{{\tiny TM}}}\rightarrow H_{\textrm{{\tiny TM}}}^{(\infty)}>0$).
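This sign-based diagnosis can be sketched as a simple detector acting on a sampled $H_{\textrm{{\tiny TM}}}(t)$ trace; it assumes the trace already extends into the asymptotic regime, so the final sample decides between a scattered and a still-bound pair:

```python
import numpy as np

def unbinding_index(h_tm):
    """Index from which H_TM stays positive for good (the unbinding moment);
    returns None when the trace ends with negative translational-mode
    energy, i.e. when the solitons are still bound."""
    positive = h_tm > 0.0
    if not positive[-1]:
        return None                       # still a bound state at the end
    nonpos = np.nonzero(~positive)[0]
    return int(nonpos[-1] + 1) if nonpos.size else 0
```

Note that brief positive excursions during a chaotic bound-state are correctly ignored: only the last sign change counts.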
To clarify the above statement, we proceeded as before by considering $20$ distinct scattering processes with $v_{0}$ now covering a full window range. In Fig. \ref{Fig4}, the energy of the solitons' TM ($H_{\textrm{{\tiny TM}}}$) is shown for $v_{0}$ in the $W[5]$'s interval of regularity. In Fig. \ref{Fig4}(a) we observe that, before the last (second) collision, $H_{\textrm{{\tiny TM}}}$ is not affected by changes in $v_{0}$. This is because all variational parameters display this same behavior, embedded in $H_{\textrm{{\tiny TM\,(VM)}}}$, which prevails until the second collision, for which subtle differences accumulated during the bound-state evolution become enough to promote very different interaction outcomes, as one can note in Figs. \ref{Fig4}(b) and \ref{Fig4}(c). In fact, based on extensive analyses of the simulation data, we were able to infer that this initial dynamics of the mode energies is maintained until the imminence of the last collision for all observed collections of scattering processes within an arbitrary window $W[\dots,J]$. Also, it extends similarly to any irregular process in the chaotic region nearby, i.e., if the condition of motion synchronization (\ref{BounceSV}) is not met, the solitons do not unbind and any variation in $v_{0}$ causes the upcoming bound-state dynamics to radically diverge, giving rise to $v_{\infty}$'s great sensitivity to $v_{0}$. \begin{figure}[tp] \begin{centering} \includegraphics[width=1\columnwidth]{F5.eps} \par\end{centering} \caption{(Color online) Heatmap of $|A'_{+}(x,t-\ensuremath{t_{\text{col}}^{(1)}})|$ and the corresponding TM energy ($H_{\textrm{{\tiny TM}}}$) versus $t-\ensuremath{t_{\text{col}}^{(1)}}$, for three examples of scattering: (a) direct transmission, (b) regular scattering, and (c) irregular scattering. In panel (a), $v_{0}=0.13>v_{c}$ was used. In panel (b), $v_{0}=0.077984$ was set, belonging to $W[3,4,5]$'s interval of regularity.
In panel (c), $v_{0}=0.055$ was considered, which is located in a chaotic interval between the windows $W[2]$ and $W[3]$.} \label{Fig5} \end{figure} Next, in Fig. \ref{Fig5} we display the profile $|A'_{+}(x,t-\ensuremath{t_{\text{col}}^{(1)}})|$ and the corresponding $H_{\textrm{{\tiny TM}}}$ versus $t-\ensuremath{t_{\text{col}}^{(1)}}$ in order to clarify the basic features regarding both the bound-state and the TM energy dynamics, for each one of the three cases considered in this plot. Note that the heatmap in Fig. \ref{Fig5}(a) shows that the collision induces shape vibrations, as also indicated by the corresponding $H_{\textrm{{\tiny TM}}}$ evolution, where one can see that the TM energy is always positive and $H_{\textrm{{\tiny TM}}}^{(\infty)}<H_{\textrm{{\tiny TM}}}^{(0)}$, as expected since part of the initial energy is transferred to the VM. Also, the heatmap shown in Fig. \ref{Fig5}(b) is an example of a regular scattering, as previously displayed in Fig. \ref{Fig2}(e). This example of a regular process is useful for illustrating that the longer the bounce time duration ($\Delta t_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{j-1|j})}}$), the smaller the absolute value of the TM energy. Indeed, this happens because the solitons bind weakly to each other during these well-behaved bounces, owing to their larger separation. On the other hand, in irregular processes the bound-state frequently evolves to situations in which the solitons strongly bind to each other, which are characterized by very high bouncing frequencies $\omega_{\text{\,bounce}}^{\text{\tiny\,(\ensuremath{j-1|j})}}$ (or collision rates) that keep the average separation very small. The heatmap from the example in Fig. \ref{Fig5}(c) illustrates such behavior.
It takes place just after the second collision and is accompanied by a large effective decrease in $H_{\textrm{{\tiny TM}}}$, which reaches a range of negative values greater than $H_{\textrm{{\tiny TM}}}^{(0)}$ by more than an order of magnitude (in modulus). In fact, one can assess the binding strength by testing the inequality $|H_{\textrm{{\tiny TM}}}|\gg|H_{\textrm{{\tiny TM}}}^{(0)}|$, and thereby infer the type of scattering process. \subsection{Effects of SO coupling in the scattering process ($\gamma\protect\neq0$)} In the previous subsection, we considered the reduced ODE model in the absence of SO-coupling ($\gamma=0$), where the results of several scattering simulations revealed the existence of a very rich and complex dynamics that emerges when the initial velocity is smaller than a certain threshold value (i.e., $v_{0}<v_{c}$). Our extensive analysis of the data allowed us to better understand the underlying mechanism that gives rise to the many interesting features of the solitons in the variational description. Now we explore what happens to all these features when the SO-coupling is present ($\gamma\neq0$). In section \ref{sec:IV}, we pointed out that the initial propagation velocity cannot be identified with the parameter $v_{0}$ when $\gamma\neq0$; instead it is $v_{0}'=v_{0}+\gamma$, as indicated by Eq. (\ref{pp}) in the regime $\left|\xi\right|\gg1$. Regarding only the effective soliton dynamics, as can be seen in heatmap plots, a pre-collisional scenario with $v_{0}=V_{0}$ and $\gamma=0$ is indistinguishable from one with $v_{0}=V_{0}-\gamma$ and $\gamma\neq0$, since $v_{0}'=V_{0}$ in both cases. Hence, in order to simulate the effects of the SO-coupling over pre-collisional scenarios equivalent to those from the previous subsection, we have used a $v_{0}$-range similar to that from Fig. \ref{Fig2}(a) translated by $\gamma$ units to the left (right) if $\gamma>0$ ($\gamma<0$). In Fig.
\ref{Fig7}, the effect of the SO-coupling on the final propagation velocity $v_{\infty}'$ is shown for several cases in which $\gamma>0$. The plots in Fig. \ref{Fig7}(a)-(g) display similar window structures that basically differ from one another by some sort of transformation combining translation and scaling of the intervals of regularity. The critical point that separates the chaotic-like region from the regular one also translates along the $v_{0}'$-axis as $\gamma$ increases. One can see that $v_{c}$ grows from Fig. \ref{Fig7}(a) to \ref{Fig7}(d) and diminishes from Fig. \ref{Fig7}(d) to \ref{Fig7}(g). Besides these changes in the windows' placement, there are new transmission windows associated with 3-pass collisional scattering processes that now appear at the left side of $W[2]$. \begin{figure*}[!t] \begin{centering} \includegraphics[width=0.75\paperwidth]{F6.eps} \par\end{centering} \caption{(Color online) Scattering results for $v_{\infty}'$ versus $v_{0}'$ obtained via iterative simulations of the reduced ODE model (Eqs. \eqref{vp}-\eqref{bp}) in a fixed $v_{0}'$-range for different values of $\gamma>0$, starting from $\gamma=0.025$ in panel (a) and adding $\Delta\gamma=0.025$ at each step until $\gamma=0.15$ in panel (f). Next, we start from $\gamma=0.2$ in panel (g) and add $\Delta\gamma=0.1$ at each step until $\gamma=0.7$ in panel (l). The SO-coupling parameter is chosen as $\gamma=1.0$ and $\gamma=1.5$ in panels (m) and (n), respectively. At the right corner of the plots in (k) and (l), a zoom of the window structure is displayed to highlight the emerging gap that splits the chaotic-like region in two parts. In panel (o), the approximate values for the two types of critical velocity, $v_{c}^{\text{\tiny R}}(\gamma)$ and $v_{c}^{\text{\tiny T}}(\gamma)$, are shown in two graphs, with the smaller one focusing on the $\gamma$-range where $v_{c}^{\text{\tiny T}}(\gamma)$ reaches a peak value.
In panel (p), two heatmap plots are displayed to exemplify the characteristic dynamics of the two types of direct scattering, namely, direct reflection ($v_{0}'\leq v_{c}^{\text{\tiny R}}(\gamma)$) and direct transmission ($v_{0}'\geq v_{c}^{\text{\tiny T}}(\gamma)$). } \label{Fig7} \end{figure*} We emphasize that for $0<\gamma\lesssim0.2$ the effect of the SO-coupling in the variational dynamics is small, in the sense that it does not affect significantly the main structure of windows and its substructures. So, the mechanism described in the previous subsection still works when the SO-coupling is present and, after some analysis of the collision dynamics within several intervals of regularity, one can verify that those interesting features associated with the reflection/transmission windows remain. We performed several iterative simulations considering $\gamma$ values gradually increasing from $0.2$ up to $1.5$ with step $\Delta\gamma=0.05$. By comparing the obtained plots (some of which are shown in Fig. \ref{Fig7}(h)-(n)), one can notice that the critical velocity keeps decreasing as $\gamma$ increases, causing the whole window structure to be displaced toward the origin. Indeed, the window closest to the origin shrinks and eventually disappears when $\gamma$ reaches a certain value. The beginning of this process can be seen in the window $W[2]$ (left to right) in Fig. \ref{Fig7}(g). As this process goes on, the structure ``loses'' some windows and becomes smaller. When $\gamma=0.5$ (see Fig. \ref{Fig7}(j)), the structure can barely be seen and becomes even more confined due to the emergence of a new type of critical point that separates the chaotic-like region from a new one that extends to the origin ($v_{0}'=0$). This new region grows with $\gamma$ and speeds up the vanishing process of the chaotic-like region and the window structures within it, which are last seen in Fig. \ref{Fig7}(l). Following, in Fig.
\ref{Fig7}(m) the window structure is gone, and only a few points can barely be seen within what is left of the chaotic-like region, which has already completely vanished in Fig. \ref{Fig7}(n). Comparing these last two figures, we verify an inversion of the initial increasing behavior of the new region, since its interval was shortened. Back to Fig. \ref{Fig7}(l), we introduce a notation to distinguish the new type of critical velocity from the old one, with $v_{c}^{\text{\tiny R}}(\gamma)$ denoting the former and $v_{c}^{\text{\tiny T}}(\gamma)$ the latter (previously denoted by $v_{c}$). Here the dependence on $\gamma$ is written explicitly, and the superscripts R and T stand for reflection and transmission, respectively. With this notation we mean that every scattering process with $v_{0}'>v_{c}^{\text{\tiny T}}(\gamma)$ is a direct transmission, and that every scattering process with $0<v_{0}'<v_{c}^{\text{\tiny R}}(\gamma)$ is a direct reflection. The latter is a new type of regular scattering that cannot occur if $\gamma$ does not exceed a certain threshold value $\gamma_{R}$. As an example, Fig. \ref{Fig7}(p) displays two plots showing the behavior of these two types of direct scattering process. In the direct reflection scenario (bottom plot of Fig. \ref{Fig7}(p)) one can note that the peak position $p$ never reaches zero (without passing) and that there are no detectable shape vibrations after the collision, i.e., the scattering is practically elastic (the corresponding points in the $v_{\infty}^{\prime}\times v_{0}^{\prime}$ plots closely trace the line $y=-x\ |\ x\in[0,v_{c}^{\text{\tiny R}}(\gamma)]$, as can be seen in Fig. \ref{Fig7}(j)-(n)).
Defining $v_{c}^{\text{\tiny R}}(\gamma)=0\ \forall\ \gamma\ |\ 0\leq\gamma<\gamma_{\text{\tiny R}}\ $, the direct reflection critical point ($P_{R}$) always coincides with the origin of the coordinate system (i.e., $P_{R}=(0,0)$), and the direct transmission one is simply $P_{T}=(v_{c}^{\text{\tiny T}}(\gamma),0)$ as usual. For $\gamma>\gamma_{R}$, the results allow one to write, in a general way, that $P_{R}\approx(v_{c}^{\text{\tiny R}}(\gamma),-v_{c}^{\text{\tiny R}}(\gamma))$ and that $P_{T}=(v_{c}^{\text{\tiny T}}(\gamma),V_{\infty}^{\text{\tiny T}}(\gamma))$, with the exit velocity function defined as $V_{\infty}^{\text{\tiny T}}(\gamma)=f(\gamma)v_{c}^{\text{\tiny T}}(\gamma)$, such that $f(\gamma)=\Theta(\gamma-\gamma_{\text{\tiny R}})\,r_{\gamma}$, with $\Theta$ being the Heaviside step function and $r_{\gamma}\in[0,1]$. By graphically tracking the $P_{T}$ point, we found that $r_{\gamma}$ strictly increases with $\gamma$ and asymptotically approaches the value $1$, as shown in Fig. \ref{Fig7}(n) where $r_{\gamma}\approx1$, so that $P_{T}$ is very close to the line $y=x$. This means that the scattering process associated with this critical point tends to become an elastic one, with solitons simply crossing each other with almost no excitation of the vibrational mode. In order to check the behavior of the critical points $P_{T}$ and $P_{R}$ with more accuracy, i.e., for a smaller $\Delta\gamma$, we developed a numerical algorithm to locate these points within a precision $\log_{10}(\delta v_{c})\leq-5$ and without performing long iterative simulations over wide $v_{0}'$-ranges. We set $\Delta\gamma=0.05$ and executed the algorithm for $\gamma$ values in the interval $[-2.5,2.5]$. The corresponding results are shown in Fig. \ref{Fig7}(o).
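The numerical locator for the critical points can be sketched as a bisection on the outcome of a single simulated collision. The snippet below is a generic illustration rather than the actual implementation: \texttt{classify} stands in for one run of the reduced model reporting whether a given $v_{0}'$ yields a direct transmission, and the threshold used in the usage example is an arbitrary stand-in value.

```python
def bisect_critical(classify, lo, hi, tol=1e-5):
    """Locate the velocity at which classify() switches outcome.

    classify(v) must return True above the critical velocity and
    False below it (e.g. True = direct transmission); lo/hi must
    bracket the transition.  Returning the midpoint of a bracket
    narrower than tol gives log10(delta v_c) <= -5 for tol=1e-5.
    """
    assert not classify(lo) and classify(hi), "bracket must straddle v_c"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if classify(mid):
            hi = mid       # transition lies below mid
        else:
            lo = mid       # transition lies above mid
    return 0.5 * (lo + hi)

# Stand-in classifier with a known threshold, for illustration only;
# in practice classify() would run one simulation of the reduced model.
v_c_true = 0.1234
found = bisect_critical(lambda v: v > v_c_true, 0.0, 0.5)
```

Each evaluation halves the bracket, so the target precision is reached after a few dozen simulations instead of a dense scan over the whole $v_{0}'$-range.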
We found that the $P_{R}$ points distribution is symmetric with respect to the $\gamma=0$ axis, and also that none of these points appear in the interval $[-\gamma_{\text{\tiny R}},\gamma_{\text{\tiny R}}]$ (as indicated by our previous analysis for $\gamma>0$). Then, we can extend the $f$ function to the negative domain by redefining it as $f(\gamma)=\Theta(|\gamma|-\gamma_{\text{\tiny R}})\,r_{\gamma}$, with $r_{\gamma}\approx1$ for SO-coupling strengths $|\gamma|\gg1$. Additionally, the length of the direct reflection region is maximum, $\max\left[v_{c}^{\text{\tiny R}}(\gamma)\right]$, when $|\gamma|$ is about $1.15$, and $v_{c}^{\text{\tiny R}}(|\gamma|)$ strictly decreases for greater SO-coupling strengths. Regarding the $P_{T}$ points distribution, we observe that it is not symmetric and displays a special behavior when $\gamma\in[0,\gamma_{\text{\tiny R}}]$. In this interval, one notes that, for a certain SO-coupling strength $\gamma_{\text{\tiny T}}>0$, the length of the chaotic-like region is maximum (i.e., $\max\left[v_{c}^{\text{\tiny T}}(\gamma)\right]=v_{c}^{\text{\tiny T}}(\gamma_{\text{\tiny T}})$). By refining the discretization to $\Delta\gamma=0.0125$ over the interval $[0,0.25]$ (highlighted by an arrow in Fig. \ref{Fig7}(o)), we obtained that $\gamma_{\text{\tiny T}}$ is about $0.1125$. Indeed, this result was expected, since such behavior could be inferred from our previous analysis for $\gamma>0$. The asymmetric behavior of $v_{c}^{\text{\tiny T}}(\gamma)$ displayed in Fig. \ref{Fig7}(o) is explained as follows. For a given SO-coupling strength $|\gamma'|$, there are always two initial phases giving the same initial propagation velocity $V_{0}$, namely $v_{0}^{\pm}=V_{0}\pm|\gamma'|$ for $\gamma=\mp|\gamma'|$. The first term in Eq. (\ref{pp}) is simply $-v'$, hence it is equal to $-V_{0}$ for both initial conditions $v_{0}^{\pm}$.
Now, if the dependence of the coupling function $G$ on the variational parameter $v$ were through a term proportional to $v'$, then the reduced model would clearly be symmetric with respect to $\gamma$. However, this is not the case here, because the Rabi coupling has broken the SO-coupling inversion symmetry. Regarding the rest of the $P_{T}$ distribution points, residing in the intervals $[-2.5,-\gamma_{\text{\tiny R}}]$ and $[\gamma_{\text{\tiny R}},2.5]$, the data show that $v_{c}^{\text{\tiny T}}(\gamma)$ strictly decreases for increasing $|\gamma|$. Also, from Fig. \ref{Fig7}(o), we observe that when $|\gamma|\gtrsim\max[v_{c}^{\text{\tiny R}}]$ the difference $v_{c}^{\text{\tiny T}}(|\gamma|)-v_{c}^{\text{\tiny R}}(|\gamma|)$ (the length of the chaotic-like region) is of the order of $10^{-4}$ and quickly approaches $0^{+}$ as $|\gamma|$ grows, i.e., the $P_{T}$ and $P_{R}$ points tend to coalesce for large values of the SO-coupling strength. In the regime $|\gamma|\gg1$, one can infer from the behavior of the critical points that $v_{c}^{\text{\tiny T,R}}(|\gamma|)\approx0$, therefore the scattering tends to become a simple elastic direct transmission for any pre-collisional scenario ($\forall\,v_{0}'>0$), which is equivalent to turning off the Rabi coupling. By following the same protocol employed in the previous subsection, we considered here the cases in which $\gamma=\pm0.15$ and investigated some substructures. As an example, in Figs. \ref{Fig8}(a)-(c) we display the case with $\gamma=0.15$ (similar results are found for the negative sign). When analyzing the window distributions, we found that the pattern associated with reflection windows differs from the one associated with transmission windows, with the former having an overall larger window spacing than the latter. However, for the case $\gamma=-0.15$ one finds the opposite behavior.
Hence, the results indicate that the fractal-like behavior can indeed persist if the first window structure is weakly affected by the SO-coupling, and that the changes in the window patterns depend on the sign of $\gamma$. We also explored some cases in which the SO-coupling strength caused the chaotic-like regions to become very small, as in Figs. \ref{Fig7}(j)-(k). We found that the first substructures still emerge at the edges of the remaining windows that were not significantly affected by the vanishing process previously described. \begin{figure}[tb] \begin{centering} \includegraphics[width=1\columnwidth]{F7.eps} \par\end{centering} \caption{(Color online) Scattering results for $v_{\infty}'$ versus $v_{0}'$ obtained via iterative simulations of the reduced ODE model (Eqs. \eqref{vp}-\eqref{bp}) in three $v_{0}'$-ranges (a)-(c) within the interval $(0,v_{c}^{\text{\tiny T}}(\gamma)]$ with $\gamma=0.15$. The two highlighted regions (gray) indicate the $v_{0}'$-range of the plot immediately below. Panel (a) corresponds to a ``zoom'' of the $v_{0}'$-range containing a substructure near the left edge of the second window (from left to right) of the main structure displayed in Fig. \ref{Fig7}(f).} \label{Fig8} \end{figure} Hitherto, we have focused on the emergent effects caused by the SO-coupling, hence our analyses considered only the general aspects regarding the two types of regular scattering and their associated intervals, with more emphasis on the intermediate chaotic-like interval and the window structures within it. We first investigated how the parameter $\gamma$ modifies the coupling function $G$ and its derivatives $\partial_{q}G$. To this end, we rewrite Eq. (\ref{G_FUNC}) in terms of the propagation velocity by making $v=v'-\gamma$, which is equivalent to the variable exchange $\zeta\,\rightarrow\,\zeta'-2\gamma$, with $\zeta'=2v'+\xi b$ being analogous to $\zeta$ in the case of $\gamma=0$.
Then, we define $G'$ as \begin{align} G'(\xi,\zeta',w) & =\dfrac{\sin\left(\zeta'p-2\gamma p\right)}{\sinh\left(\xi\right)\sinh\left(\pi\zeta'w/2-\pi\gamma w\right)}\ ,\label{G'_FUNC}\\ \left.G'\right|_{b=0,w=1} & =\dfrac{\sin\left[2p(v'-\gamma)\right]}{\sinh\left(2p\right)\sinh\left[\pi(v'-\gamma)\right]}\ \ \left(\text{\small|\ensuremath{\xi}|\,\ensuremath{\gg}\,1}\right),\label{G'_b0} \end{align} with Eq. (\ref{G'_b0}) being valid before the collision. We stress that in the case of $\gamma=0$, Eq. (\ref{G'_FUNC}) recovers the form of $G$ obtained in the previous section, i.e., $[G',\zeta',v']{}_{\gamma=0}=[G,\zeta,v]$ (see Eq. (\ref{G_FUNC})). We performed an extensive study of the above functions to figure out how the terms $2\gamma p$ and $\pi\gamma w$ modify the variational dynamics, with focus on the derivatives $\partial_{p}G'$ and $\partial_{v}G'$, which are associated with the translational acceleration terms in the reduced model and play a more important role in the propagation dynamics. From this study we retrieved the most important qualitative aspects of the SO-coupling influence over the interaction. Considering only the denominator of Eq. (\ref{G'_FUNC}), the term $\pi\gamma w$ alters the interaction strength in different ways depending on the behavior of the width parameter $w$. During the bound-states, the oscillatory character of $w$ due to shape vibrations induces oscillations in the Rabi interaction strength, which are small when $\pi|\gamma|w\ll1$, i.e., if $|\gamma|\ll1$. For greater SO-coupling strengths, this oscillation can make the bound-state dynamics very complicated, as the binding strength keeping the solitons together alternates between weak and strong regimes. When the SO-coupling is such that $|\gamma|\gg v'$, the leading effect of the term $\pi\gamma w$ is the damping of the Rabi interaction strength, as one can clearly verify from Eq. (\ref{G'_b0}).
This can be related to the behavior of the critical point $P_{T}$, because, as the Rabi interaction weakens due to the increasing $|\gamma|$, the maximum propagation velocity for the bound-state formation ($v_{c}^{\text{\tiny T}}(\gamma)$) reduces to a certain value at which the attraction is still enough to trap the solitons. On the other hand, regarding now the numerators of Eqs. (\ref{G'_FUNC}) and (\ref{G'_b0}), one notes that the parameter $\gamma$ induces oscillations that develop when the solitons are moving, which occur at a fixed frequency $2|\gamma|$ when $p$ varies linearly during pre-collisional scenarios. This leads to oscillations in the sign of every term containing a derivative of $G'$, causing the Rabi interaction to oscillate between regimes of attraction ($\Gamma\partial_{p,v}G'>0$) and repulsion ($\Gamma\partial_{p,v}G'<0$). Since the denominator of Eq. (\ref{G'_b0}) is dominated by the term $\sinh(2p)\gg1$, the approximation $G',\partial_{q}G'\approx0$ is valid and the sign of the coupling terms does not matter during pre-collisional scenarios. Therefore, the sign oscillation becomes relevant only when $p$ is small enough that the translational acceleration terms, $\Gamma\partial_{p,v}G'$, can significantly alter the propagation. During the bound-states, $p$ is confined to a narrow interval of values ($|p|\lesssim5$); if the SO-coupling strength is small, such that $2|\gamma p|\ll1$, the sign oscillation barely alters the predominantly attractive Rabi interaction. In contrast, for greater SO-coupling strengths, $2|\gamma p|$ is not small and such oscillations are much more prominent, making the oscillations of the bound-state vary in an unpredictable way. For instance, one of the consequences of this nontrivial behavior is displayed in Figs.
\ref{Fig7}(k)-(l), where one can see a gap in the chaotic-like region that splits it into two smaller regions, i.e., there is a forbidden range of final velocities that establishes a threshold value for $|v_{\infty}'|$ if $v_{0}'\in[v_{c}^{\text{\tiny R}}(\gamma),v_{c}^{\text{\tiny T}}(\gamma)]$. This effect happens because the Rabi interaction becomes momentarily repulsive just after the unbinding, and then, due to the proximity of the solitons, the acceleration is large enough to increase the propagation velocity. The gain in velocity grows with the SO-coupling strength and also when the acceleration acts for a longer time, i.e., if $v'$ is very small just after the unbinding (as in those regular inelastic processes near the window edges). This growing gap explains the behavior of the parameter $r_{\gamma}$ in the $P_{T}$ critical point expression, since $V_{\infty}^{\text{\tiny T}}(\gamma)$ follows the gap's upper boundary. \begin{figure*}[t] \begin{centering} \includegraphics[width=0.85\paperwidth]{F8.eps} \par\end{centering} \caption{(Color online) Phase space trajectories governed by the effective reduced ODE model (given by Eqs. (\ref{vp_model})) providing a variational description for the direct reflection type of soliton scattering. In the three cases considered ($\gamma=0.5$, $1.0$, $1.5$), a total of $8$ trajectories with $p_{0}=10$ and $v_{0}'\in(0,v_{c}^{\text{\tiny R}}(\gamma)]$ are plotted. The background is a contour line plot of the function $\partial_{p}G_{0}'$. Since $\Gamma=-0.04<0$, the attraction zones are highlighted by the gray/black regions, corresponding to $\text{sgn}\left(\partial_{p}G_{0}'\right)=-1$, while the repulsion zones are identified by the white regions, corresponding to $\text{sgn}\left(\partial_{p}G_{0}'\right)=+1$.} \label{Fig9} \end{figure*} The alternation between attractive and repulsive Rabi interaction can be directly related to the emergence of the direct reflection region.
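For reference, the two translational derivatives of the effective coupling can be written in closed form in the pre-collisional regime of Eq. (\ref{G'_b0}). Writing $u=v'-\gamma$ (our own shorthand; the differentiation is elementary but worth re-checking),
\begin{align*}
\partial_{p}G_{0}' & =\dfrac{2u\cos\left(2pu\right)}{\sinh\left(2p\right)\sinh\left(\pi u\right)}-\dfrac{2\cosh\left(2p\right)\sin\left(2pu\right)}{\sinh^{2}\left(2p\right)\sinh\left(\pi u\right)}\ ,\\
\partial_{v}G_{0}' & =\dfrac{2p\cos\left(2pu\right)}{\sinh\left(2p\right)\sinh\left(\pi u\right)}-\dfrac{\pi\cosh\left(\pi u\right)\sin\left(2pu\right)}{\sinh\left(2p\right)\sinh^{2}\left(\pi u\right)}\ ,
\end{align*}
both of which are exponentially suppressed as $e^{-2|p|}$ for large separations, consistent with the negligible pre-collisional coupling noted in the text.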
We investigated several scattering processes with $v_{0}'\in(0,v_{c}^{\text{\tiny R}}(\gamma)]$ for various values of $\gamma$, observing that the role of the variational parameters $w$ and $b$ is negligible. Indeed, this occurs because the collision is quasi-elastic, with the energy stored within the vibrational mode being practically zero when compared with the energy within the translational mode. This finding allowed us to study this type of scattering in a more quantitative way, since we can set $w=1$ and $b=0$ to obtain the effective reduced ODE model \begin{equation} \dot{v}=\pi\Gamma\dfrac{\partial G_{0}'}{\partial p}\quad,\quad\dot{p}=-\left(v'+\pi\Gamma\dfrac{\partial G_{0}'}{\partial v}\right)\ ,\label{vp_model} \end{equation} with $G_{0}'=\left.G'\right|_{b=0,w=1}$ being the effective coupling function yielded by Eq. (\ref{G'_b0}). Note that if $\gamma=0$, $G_{0}^{\prime}$ coincides with $G_{0}$ of Eq. (\ref{RM_HAM}), introduced in section \ref{sec:IV}. We studied the phase space trajectories governed by Eq. (\ref{vp_model}) subjected to the initial conditions $(p_{0},v_{0}')$, with $p_{0}=10$ as usual and $v_{0}'(i)=V_{0}^{\text{(init)}}+\Delta_{\gamma}(i-1)\ |\ i\in\{1,2,\dots,I\}$, where $I=8$, $\Delta_{\gamma}=(v_{c}^{\text{\tiny R}}(\gamma)-V_{0}^{\text{(init)}})/(I-1)$, and $V_{0}^{\text{(init)}}=0.001$. When these phase space trajectories (two-dimensional curves) are plotted over the contour line plot of $\partial_{p}G_{0}'$ or $\partial_{v}G_{0}'$, one can visualize how the propagation is driven by the oscillatory Rabi interaction, and also how the SO-coupling strength increases the frequency of such oscillations and consequently alters the dynamics. This is exactly what is displayed in Fig. \ref{Fig9} for three different values of $\gamma>0$, with the background composed of the contour line plots of $\partial_{p}G_{0}'$.
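The effective model of Eq. (\ref{vp_model}) is straightforward to integrate numerically. The sketch below is our own scaffolding (a plain RK4 loop with finite-difference partials rather than analytic derivatives), shown only to illustrate the far-field behavior: with $p_{0}=10$ the coupling $G_{0}'$ is exponentially small, so $\dot{p}\approx-v'$ while $v'$ stays essentially constant. Only $\Gamma=-0.04$ is taken from the text; step sizes and the sample initial condition are arbitrary.

```python
import math

GAMMA_RABI = -0.04  # coupling constant Gamma used throughout the text

def G0(p, vp, gamma):
    """Effective coupling G0' = G'|_{b=0, w=1} as a function of v'."""
    u = vp - gamma
    return math.sin(2.0 * p * u) / (math.sinh(2.0 * p) * math.sinh(math.pi * u))

def dG_dp(p, vp, gamma, h=1e-6):
    return (G0(p + h, vp, gamma) - G0(p - h, vp, gamma)) / (2.0 * h)

def dG_dv(p, vp, gamma, h=1e-6):
    # u = v' - gamma is linear in v, so d/dv equals d/dv'
    return (G0(p, vp + h, gamma) - G0(p, vp - h, gamma)) / (2.0 * h)

def rhs(vp, p, gamma):
    """Right-hand side of the effective reduced model."""
    dv = math.pi * GAMMA_RABI * dG_dp(p, vp, gamma)
    dp = -(vp + math.pi * GAMMA_RABI * dG_dv(p, vp, gamma))
    return dv, dp

def integrate(vp0, p0, gamma, dt=0.01, steps=2000):
    """Plain RK4 integration; returns the phase-space trajectory [(v', p)]."""
    vp, p = vp0, p0
    traj = [(vp, p)]
    for _ in range(steps):
        k1 = rhs(vp, p, gamma)
        k2 = rhs(vp + 0.5 * dt * k1[0], p + 0.5 * dt * k1[1], gamma)
        k3 = rhs(vp + 0.5 * dt * k2[0], p + 0.5 * dt * k2[1], gamma)
        k4 = rhs(vp + dt * k3[0], p + dt * k3[1], gamma)
        vp += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        traj.append((vp, p))
    return traj

# Far-field check for gamma = 0.5: the coupling at p = 10 is ~1e-8,
# so over t = 20 the separation shrinks by ~v' * t while v' barely moves.
traj = integrate(0.05, 10.0, 0.5)
```

Plotting such trajectories over contour lines of $\partial_{p}G_{0}'$ reproduces the qualitative picture of Fig. \ref{Fig9}.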
The corresponding negative values provide the same results, and similar plots are obtained when $\partial_{v}G_{0}'$ is considered instead. The alternation between attraction (gray zones with $\Gamma\partial_{p}G_{0}'>0$) and repulsion (white zones with $\Gamma\partial_{p}G_{0}'<0$) is clearly depicted in Fig. \ref{Fig9}. Considering the case with $\gamma=0.5$, the trajectories show that the attraction zone immediately affecting all the processes in the far field (close to $p=10$) is negligible ($\partial_{p,v}G_{0}'\approx0$) due to the initially large separation. As $p$ reduces and reaches the repulsion zone ($p\simeq6$), the separation becomes small enough to cause a deceleration that can act during a long enough time interval to completely brake the solitons ($v'=0$), and then accelerate them away ($v'<0$) back to the far field in such a way that, in the post-collisional scenario, $v_{\infty}'\approx-v_{0}'$. Also, one can see that the shortest trajectory ($v_{0}'=0.001$) quickly turns back as it gets into the repulsion zone, and that the longest trajectory ($v_{0}'\approx v_{c}^{\text{\tiny R}}(\gamma)$) turns back after almost reaching the attraction zone that extends all the way toward $p=0$. Regarding the other two cases, with $\gamma=1.0$ and $\gamma=1.5$, an analogous behavior can be visualized. However, due to the greater SO-coupling strengths, there are more zones of attraction and repulsion that add more details to the dynamics. In both cases, the effect of the attraction/repulsion zones in the far field is once again negligible, and most of the trajectories begin to be significantly affected after reaching the next-to-last repulsion zone, which is the zone where the shortest trajectory turns back before reaching the last and most effective repulsion zone (see Fig. \ref{Fig9}).
In the case with $\gamma=1.0$, one observes that the next-to-last repulsion zone barely influences the other trajectories ($v_{0}'>0.001$), which make their way through the attraction zone until finally reaching the last repulsion zone and then turning back. In addition, in the case with $\gamma=1.5$, these final zones are narrower and closer to the $p=0$ axis, hence the acceleration effects are amplified, causing the trajectories to assume the shapes seen in Fig. \ref{Fig9}. For greater SO-coupling strengths, the zones depicted in this figure keep getting narrower and closer to the $p=0$ axis. The effectiveness of the acceleration and deceleration along the trajectories diminishes, and the maximum velocity for the occurrence of direct reflection scattering becomes smaller (this connects with the decreasing behavior of $v_{c}^{\text{\tiny R}}(|\gamma|)$ for $|\gamma|\gtrsim1.15$). As $|\gamma|$ increases further, the effects of the repulsion and attraction zones cancel each other out on average. Moreover, in this regime the Rabi interaction is weakened, i.e., the scattering tends to be a mere direct transmission for almost all $v_{0}'>0$. \section{Conclusion \label{sec:Conclusion}} In summary, we investigated the influence of the SO coupling on the collisional dynamics of solitons in binary BECs by using a reduced ordinary differential equation (ODE) model based on a variational approach, which allows us to analytically investigate the formation of fractal-like patterns and the properties of the scattered solitons. To this end, we first studied the collision of solitons in the absence of SO coupling, and then we examined its influence on the scattering patterns by varying the SO coupling parameter $\gamma$. We found exotic scattering structures by focusing on the values of the exit velocities $v_{\infty}^{\prime}$ for given input velocities $v_{0}^{\prime}$.
Also, we verified that these structures present a fractal-like pattern, i.e., periodic repetitions of the main structure in its substructures, corresponding to the zoomed views. The size of the region presenting window structures is drastically affected by the SO coupling. Indeed, we observe that for $|\gamma|\gtrsim1.15$ the structure of windows vanishes completely. Moreover, the SO-coupling promotes non-trivial oscillations in the Rabi interaction strength and in its sign, which are the sources of the emergent effects altering the window structure: the latter vanishes as the chaotic-like region is compressed in the $v_{0}^{\prime}$-direction by the regions of direct transmission and direct reflection, and in the $v_{\infty}^{\prime}$-direction by the growing gap of forbidden final propagation velocities. \subsection*{Acknowledgments} We acknowledge financial support from the Brazilian agencies CNPq, CAPES, FAPEG, and the National Institute of Science and Technology (INCT) for Quantum Information. \bibliographystyle{apsrev4-1} \phantomsection\addcontentsline{toc}{section}{\refname}
\section{INTRODUCTION} UAV delivery is growing from a concept to a reality. In places where ground transportation is unavailable or congested, UAV delivery could be a fast and low-cost alternative. It could serve infrequent use cases like the delivery of emergency relief supplies, or commercial delivery of packages for online shopping. In both use cases, path planning is important. Rescue teams want the needed aid delivered as soon as possible, while owners of commercial UAVs may wish to maximize the monetary benefits from their UAVs by taking shorter routes. However, there are many realistic constraints in path planning. One type of constraint is threats in the air, for instance, other aerial vehicles. Just like vehicles running on the road, UAVs may collide with each other in the air, and the chance of collision grows as the geographical density of UAVs increases. In densely populated areas, such a collision could be disastrous. Furthermore, in a modern city with many skyscrapers, there is limited space for maneuvers, thus we must avoid dispatching too many UAVs in a small space within a short period of time. In the setting of commercial delivery, each UAV owner may possess hundreds or even thousands of UAVs, which increases the chances of collision. Therefore, the owners should strive to avoid collisions during the path-planning stage. Another type of constraint is the unavailability of airspace due to no-fly zones or extreme weather in certain areas. It is worth noting that such a no-fly zone is assumed to be temporary; a permanent one would simply be removed from the usable airspace structure. Examples could be sports events or carnivals, where a no-fly zone is set up temporarily for the safety of participants. There could also be periodic no-fly zones, such as a downtown or Central Business District that is densely populated during the daytime and emptier at night.
In this paper, we propose an algorithm to output the shortest, collision-free, flyable path based on the A* algorithm. The main contribution is a heuristic function that accounts for the penalty incurred while waiting due to traffic control. We use a data structure called \textit{CurrentSchedule} for checking and updating the availability of airspace. We generated and simulated a large number of delivery requests with the airspace structure in Singapore. Simulation shows that the proposed algorithm scales almost linearly with the number of requests up to some equilibrium point. The rest of the paper is organized as follows. Related works are presented in Section 2. Section 3 gives a formal formulation of the problem, and Section 4 describes our algorithm followed by a discussion on its optimality. Section 5 details necessary information about the implementation of the proposed method. Analysis of the experimental results is conducted in Section 6, and finally, conclusions and future work are presented in Section 7. \section{RELATED WORK} UAV path planning algorithms generally fall under two broad categories: one is offline path planning, which uses global information about the environment to generate an optimal path; the other is online path planning, which employs information perceived through sensors and reroutes on-the-fly. Online path finding algorithms are designed to deal with uncertainties and emergencies during the flight. \cite{c1} presents some common online algorithms, including the potential field approach and particle swarm optimization (PSO). They have advantages and disadvantages: potential field approaches are simple and straightforward to understand, but may easily fall into a local optimum if no adjustment to the algorithm is made. PSO is also intuitive, but it can be too computationally expensive to reroute in real-time. Boivin et al.
\cite{c8} proposed a predictive control scheme that uses shared knowledge between nearby UAVs to find collision-free routes, meanwhile considering the dynamics of the UAV itself. A fundamental challenge for online algorithms is the modelling of uncertainties during the flight. To model the impact of trajectory deviation on routing, Kim \cite{c9} proposed a probabilistic trajectory model in 3D space. In his model, multiple trajectories are proposed with different degrees of deviation from the baseline trajectory. The probability of conflicts is calculated for each potential trajectory, and the one with the lowest conflict probability is selected. Offline algorithms plan the path prior to departure. The aircraft are usually modelled as point masses moving in two dimensions with constant speed and an upper-bounded turning rate. \cite{c2} uses a Genetic Algorithm (GA) to find an optimal flyable path through numerous iterations, with each iteration improving the previous path by rerouting at waypoints. The problem with the Genetic Algorithm is that the runtime grows exponentially as the number of iterations and flights grows. \cite{c3,c4} also use GA, but with a parallel approach to speed up the calculation. \cite{c10} formulates the vehicle dynamic model, collision avoidance constraint, and multiple waypoint constraint as a mixed-integer linear program (MILP). One of its novelties is that it uses trigonometric functions to better approximate the radius constraint. However, it does not consider the battery constraint, which is a limiting factor for a UAV's delivery capability. In this paper, we will address both the collision avoidance constraint and the battery constraint. Besides, there are other algorithms with emphasis on different aspects. There are also hybrid solutions where a combination of offline and online planning is used, which is often the common practice for commercial companies. In the use case of delivery, routing is often on-demand, which means we are required to route new requests frequently, even though the result may be sub-optimal.
Also, each new path should fit into the existing schedules, satisfying the collision-free and no-fly zone constraints. Given that we have global information regarding the existing schedules of UAVs and their current locations, we can optimize the route through offline planning. One natural way is to use the A* algorithm. The A* algorithm \cite{c5} is a best-first, efficient and optimal search algorithm. It uses heuristics to guide the search and, therefore, reduces the number of nodes that need to be explored. Another advantage of using A* is that it applies to both 2D and 3D spaces as long as the graph is connected. Zhang et al. \cite{c6} proposed an offline improved A* algorithm to deal with realistic constraints on UAV movement, including maximum moving angle, minimum route leg length, minimum flight height and maximum route length. Their algorithm can avoid collision with terrain while minimizing the flight height. However, it only handles static graphs, and can be computationally expensive. Zhang et al. proposed fixes by trimming some less-permissible nodes, or imposing some hard nodes that the route must follow, but doing so discards some potentially optimal solutions, and thus the algorithm loses its completeness and optimality. \section{PROBLEM FORMULATION} We model a UAV as a point mass moving in two dimensions with a constant cruising speed and a maximum flying time. The altitude is fixed because airspace is usually structured in layers \cite{c10, c11}. Therefore, our proposed collision-free dynamic routing problem is formulated as: \begin{quote} Given a set of existing flight schedules $M$, a no-fly zone schedule $S$, a graph $G(V,E)$ consisting of edges $E$ and vertices $V$, and a new request $R_i(o,d,t)$ to route from $o \in V$ to $d \in V$ at time $t$, we would like to find out whether there is a viable new route from $o$ to $d$ that does not pass through any no-fly zone or conflict with any existing flight. If such a solution exists, output the solution path and the time needed.
\end{quote} \subsection{Notations} We use two binary variables, $U_{e}(u,e_i,t)$ and $U_{v}(u,v_j,t)$, to represent whether a UAV is occupying an edge $e_i$ or a vertex $v_j$, respectively. When $U_{e}(u,e_i,t) = 1$, it means ``at time $t$, UAV $u$ is flying on edge $e_i$''. Similarly, $U_{v}(u,v_j,t) = 1$ means ``at time $t$, UAV $u$ is flying on node $v_j$''. The waiting penalty refers to the sum of the expected waiting times for taking an edge or node ($edgePen(e,t)$ and $nodePen(v,t)$). It can be set as a constant number, or a function whose value depends on traffic density, time, and other factors like the airspace structure. Thus, we use $P(u,t)$ to represent the waiting penalty at time $t$ for UAV $u$: \begin{equation} \label{eq:1} \begin{split} P(u,t) & = U_{e}(u,e_i,t)*edgePen(e_i,t)\\ & + U_{v}(u,v_j,t)*nodePen(v_j,t) \end{split} \end{equation} \subsection{Constraints} There are three realistic constraints in this problem, namely, the collision-free constraint, the no-fly zone constraint, and the maximum flyable time constraint (battery constraint): \subsubsection{Collision-free Constraint} Avoidance of collision is critical to proper routing for aircraft. Being collision-free has two aspects: \begin{equation} \begin{split} \forall v \in V,\ \forall u_i,u_j \in U,\ u_i\ne u_j:\ & U_{v}(u_i,v,t_i) \land U_{v}(u_j,v,t_j)\\ &\rightarrow t_i \ne t_j \end{split} \end{equation} There should be no two UAVs occupying the same node at the same time. There are two types of nodes, connecting nodes and landable nodes. Connecting nodes are geometrical nodes at which UAVs can make turns; thus, all nodes in the airspace graph are connecting nodes. Landable nodes are nodes where we have UAV dispatching stations to recharge batteries, load and unload packages, and conduct maintenance. Each trip must start and end at landable nodes.
\begin{equation} \begin{split} \forall e \in E,\ \forall u_i,u_j \in U,\ u_i\ne u_j:\ & U_{e}(u_i,e,t_i) \land U_{e}(u_j,e,t_j)\\ &\rightarrow t_i \ne t_j \end{split} \end{equation} There should be no more than one UAV occupying the same edge at the same time. This addresses the concern that two UAVs may collide with each other. \\ \subsubsection{No-fly Zone Constraint} A no-fly zone (NFZ) is a designated area where flight is prohibited for a period of time, but a UAV may fly across it before and after the designated time. This information can be obtained prior to path planning. To avoid flying across a no-fly zone, we pre-process the no-fly zone schedule and place some virtual UAVs on the ``no-fly'' edges and nodes. From equation \ref{eq:1}, the waiting penalty for using such edges will be prohibitively high. Thus, we can filter out the routes that would cross a no-fly zone using the A* algorithm. Let $E_{nfz}$ be the set of edges in the no-fly zone, $V_{nfz}$ be the set of nodes in the no-fly zone, $T_{start}$ be the start time and $T_{end}$ be the end time; then we have: \begin{equation} \label{nfz1} \begin{split} \forall u_i \in U, &U_{e}(u_i,e,t_i)\\ & \rightarrow e\not\in E_{nfz} \lor t_i < T_{start} \lor t_i > T_{end} \end{split} \end{equation} \begin{equation} \label{nfz2} \begin{split} \forall u_i \in U, &U_{v}(u_i,v,t_i)\\ & \rightarrow v\not\in V_{nfz} \lor t_i < T_{start} \lor t_i > T_{end} \end{split} \end{equation} Equation \ref{nfz1} means that a UAV cannot cross any edge of the no-fly zone from the start to the end of the no-fly period. Similarly, equation \ref{nfz2} means that a UAV cannot hover on nodes that belong to the no-fly zone. \subsubsection{Maximum Flight Time Constraint} Battery is a realistic constraint in UAV delivery. Because of it, we use time instead of distance as the weight in our A* algorithm, because battery time is a better measurement of the remaining flying capability. We define $T_{max}$ as the maximum flight time; therefore, only flights whose duration is smaller than $T_{max}$ are flyable.
Otherwise, we have to divide the delivery into multiple trips. \subsection{Assumptions} We assume that all UAVs fly at a constant speed $v_i$ of 30 km/h. Since most current UAV on-board batteries can only support around 30 minutes of flight time, we set $T_{max}$ to be 20 minutes for safe redundancy, which gives a maximum flight distance of 10 km. \section{Description of Algorithm} The key of the A* algorithm is an evaluation function $f() = g() + h()$, where $g()$ calculates the true path cost, and $h()$ is the heuristic function that estimates the travel time from the current node to the goal node. We expand $g()$ by adding a waiting penalty term to it so as to use information about node and edge availability. \begin{algorithm}[H] \caption{Collision-Free Dynamic Routing} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE A non-negative graph $G=(V,E,w)$,\\ current time $t_{curr}$, no-fly zone $nfz$, origin node $v_{ori}$,\\ destination node $v_{dest}$, current schedule $CurrSched$ \ENSURE Status code, expected travel time $t_{exp}$,\\ an ordered list of nodes $Path$ \STATE $CurrSched.updateNFZ(nfz)$ \STATE initialize OPEN priority queue \STATE initialize CLOSED list \STATE OPEN.push($v_{ori}$) \WHILE {!OPEN.isEmpty()} \STATE $q \gets OPEN.pop()$ \STATE $L \gets G.getSuccessors(q)$ \FOR {each node $succ_i$ in L} \STATE $w \gets CalcCost(succ_i,q,CurrSched,t_{curr},v_{dest})$ \IF {$succ_i$ == $v_{dest}$} \STATE $t_{exp}, Path \gets backTrack(PARENT)$ \STATE $CurrSched.update()$ \RETURN $SUCCESS, Path, t_{exp}$ \ENDIF \STATE OPEN.update() \STATE CLOSED.update() \STATE OPEN.push($succ_i$,w) \ENDFOR \STATE CLOSED.add(q) \ENDWHILE \RETURN $FAILURE$ \end{algorithmic} \end{algorithm} \textit{OPEN} stores the frontier nodes, and \textit{CLOSED} stores the nodes whose successors have been explored.
Lines 1 to 4 perform initialization; exploration of nodes starts at line 5; lines 6 and 7 retrieve the current node and its successor list; line 8 iterates over the successors of the currently expanding node; line 9 calculates the cost using Algorithm 2; lines 10 to 14 check whether the node is already the goal node and update the schedule accordingly; lines 15 to 17 check whether an existing node in the CLOSED or OPEN list can have a better upstream node; in the end, the currently explored node is added to \textit{CLOSED}. \begin{algorithm}[H] \caption{Calculate Cost} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Successor node $succ$, current node $v_{curr}$, current schedule $CurrSched$, current time $t_{curr}$, goal node $v_{goal}$ \ENSURE Cost for using this successor $w$ \STATE $hCost \gets euclideanDistance(v_{curr},v_{goal}) +edgePen(v_{curr},t_{curr}) +nodePen(succ,t_{curr})$ \STATE $gCost \gets v_{curr}.costSoFar+weight(v_{curr},succ)$ \RETURN $hCost+gCost$ \end{algorithmic} \end{algorithm} The \textit{CalcCost} function is the key to the proposed algorithm. The total weight of a path depends on both the time necessary to travel to that node and the waiting penalty along the path (equation \ref{eq:1}). \subsection{Discussion on Optimality} The proposed algorithm is optimal because the heuristic function used to compute the weight is admissible. The waiting penalty $P(u,t)$ reflects the actual cost of taking a successor. $edgePen(v_{curr},t_{curr})$ is the waiting time for taking the edge from the current node to the successor node, and $nodePen(succ,t_{curr})$ is the waiting time incurred when the agent reaches the successor node while other UAVs are hovering on that node. These two components are unavoidable, thus we view them as actual cost. The rest of the proof can therefore be reduced to that of the original A* algorithm \cite{c5}.
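To make the cost computation concrete, the following is a minimal Python sketch of an A*-style search whose accumulated cost includes the waiting penalties described above. The graph encoding, the penalty lookup tables, and all function names here are illustrative assumptions of ours, not the actual MUSE implementation; the default speed of 0.5 km/min corresponds to the 30 km/h assumption.

```python
import heapq
import math

def plan(graph, coords, origin, dest, t0, edge_pen, node_pen, speed=0.5):
    """A*-style search where the accumulated cost includes waiting penalties.

    graph:    {node: [(neighbor, travel_time), ...]}
    coords:   {node: (x, y)}, used by the straight-line heuristic
    edge_pen: hypothetical lookup: (u, v, t) -> wait before taking edge (u, v) at time t
    node_pen: hypothetical lookup: (v, t) -> wait caused by UAVs hovering on v at time t
    """
    def h(v):  # straight-line flight time: admissible, never overestimates
        (x1, y1), (x2, y2) = coords[v], coords[dest]
        return math.hypot(x2 - x1, y2 - y1) / speed

    open_pq = [(h(origin), t0, origin, [origin])]   # (f, arrival time, node, path)
    best = {}                                       # earliest known arrival per node
    while open_pq:
        f, t, u, path = heapq.heappop(open_pq)
        if u == dest:
            return t - t0, path                     # expected travel time, route
        if best.get(u, float("inf")) <= t:
            continue                                # already reached u earlier
        best[u] = t
        for v, w in graph[u]:
            wait = edge_pen.get((u, v, t), 0) + node_pen.get((v, t), 0)
            t_v = t + wait + w                      # true cost including waiting
            heapq.heappush(open_pq, (t_v - t0 + h(v), t_v, v, path + [v]))
    return None                                     # FAILURE: no flyable path
```

A waiting penalty on an edge makes the planner prefer a longer but unobstructed route, mirroring how the virtual-UAV trick above makes no-fly edges prohibitively expensive.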
\section{IMPLEMENTATION} Now that we have proven the optimality of the proposed algorithm via admissibility, we would like to verify our algorithm in a real-life scenario. We select Singapore as a case study for a few reasons: 1) Singapore, as a financial center in Southeast Asia, is likely to have a huge demand for commercial UAV delivery, and 2) Singapore has a high vehicle occupancy rate and a relatively small land area. Using UAVs for delivery may relieve the demand for ground transport, and therefore reduce traffic congestion. We implemented a UAV delivery simulation program called Multi-UAV Simulation Engine (MUSE) to test the proposed algorithm. \subsection{Preparation of Input} To simulate the UAV traffic, we must have an air space structure and delivery requests. In the Singaporean setting, we select the rooftops of multistory car parks as landable UAV delivery stations, of which there are 77. This is because the rooftop of a car park is usually unused, and each residential area is equipped with a multistory car park. We apply triangulation on the graph to reduce the number of nodes, and manually join collinear nodes in the same direction. The resulting airspace graph is shown in Fig.~1. \begin{figure} \centering \includegraphics[width=8cm]{2D_Airspace} \caption{The air space map of Singapore} \label{2D_Airspace} \end{figure} Due to a lack of information, we randomly generated the delivery requests, each with a requested departure time, origin ID, and destination ID. \subsection{Software} We developed a Java program to simulate the UAV delivery with the proposed algorithm. The program reads the airspace structure, the requests, and the no-fly zone schedule. It produces the routing results for each request in JSON format \footnote{The implementation code is open-sourced under the MIT license at https://github.com/StevenShi-23/MUSE.}. \subsection{Hardware Setup} The experiment was conducted on a PC with a 2.4 GHz Intel i7 6660U CPU and 16GB memory.
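For reproducibility, a synthetic request stream of the kind described above can be produced by a short script like the following. The field names and the fixed seed are our own illustrative assumptions; the 77-station range and the 720-minute horizon mirror the experimental setup in this paper.

```python
import random

def generate_requests(n, num_stations=77, horizon_min=720, seed=0):
    """Generate n synthetic delivery requests: departure time, origin, destination."""
    rng = random.Random(seed)           # fixed seed for reproducible experiments
    requests = []
    for _ in range(n):
        # pick two distinct stations so origin never equals destination
        origin, dest = rng.sample(range(num_stations), 2)
        requests.append({
            "t_depart": rng.randint(0, horizon_min - 1),  # minutes after 8:00 am
            "origin": origin,
            "dest": dest,
        })
    return requests
```

Repeating the simulation with different seeds gives the independent request sets used to average the results.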
\section{Analysis of Simulation} We simulated the UAV delivery from 8:00 am to 8:00 pm, i.e., 720 minutes. We assume a no-fly zone near the Central Business District from 10:00 am to 12:00 pm. We simulated from 1000 to 10000 requests, and for each number of requests, we repeat the simulation 100 times with different requests to calculate the average. \subsection{Runtime Analysis} Fig.~2 is a summary of the running time in seconds versus the number of requests. Notably, our proposed algorithm took on average only 19.45 seconds to route 5000 requests. However, as the number of requests increases, the chance of collision increases, thus more nodes need to be expanded and more calculation is done. \begin{figure} \centering \includegraphics[width=8cm]{runtime} \caption{Computational time for finding a path within maximum flight time.} \end{figure} \subsection{Success Rate and Average Delay} We use two metrics to measure the success of our algorithm. The success rate is the number of successfully routed requests divided by the total number of requests. Failure to route is defined as not being able to find a viable path whose distance is within the maximum flight distance. We can see from Fig.~3 that the success rate decreases as the number of requests increases, and it drops significantly after around 6000 requests. This may come from a few reasons: 1) the distance between the requested origin and destination is too far, or 2) collision avoidance takes too much time for the route to be possible. \begin{figure} \centering \includegraphics[width=8cm]{SuccessRate} \caption{Success rate for finding a flyable path.} \end{figure} Fig.~4 shows that as the number of requests served per minute increases, the average delay in the air increases. This may verify that 2) is the main cause of the routing failures after 6000 requests.
\begin{figure} \centering \includegraphics[width=8cm]{AverageDelay} \caption{Average delayed time in the air for collision-avoidance and no-fly zone} \end{figure} \section{CONCLUSION} We introduced an optimal collision-free path planning algorithm based on the A* algorithm. The key to obtaining the waiting penalty is to build an efficient lookup table for all current schedules. The proposed algorithm is capable of dealing with the collision-avoidance constraint, the no-fly zone constraint, and the battery constraint. We also give an implementation of the mentioned algorithm. The runtime of the algorithm scales almost linearly with the number of requests. The simulation also verifies the intuition that the routing success rate decreases with higher request density. Moreover, the success rate drops dramatically at a turning point where there is little space left to satisfy new requests. Although our proposed algorithm is optimal at routing a single new request, global UAV path planning remains an NP-hard problem \cite{c7}. Future work could use heuristics to develop sub-optimal algorithms for global path planning. Also, more realistic request information could be obtained from the authorities to design a more fine-grained airspace structure and to study the optimal locations of UAV delivery stations.
\section{Introduction} A \emph{hypergraph} $\mathcal{H}=(\mathcal{V},\mathcal{E})$ is defined by a set of nodes $\mathcal{V}$ and a set of \emph{hyperedges} $\mathcal{E}$. Unlike simple graphs, a hyperedge in a hypergraph can contain two or more nodes. (In this paper, we ignore hyperedges of size one, as for the problems we consider, these hyperedges can be trivially preprocessed.) The maximum hyperedge size of a hypergraph $\mathcal{H}$ is usually called the \emph{dimension} (or \emph{rank}) of $\mathcal{H}$. As Linial~\cite{linial13} and Kutten et al.~\cite{kutten14} have pointed out, while simple graphs capture \emph{pairwise} interactions well, hypergraphs are ideal for modeling \emph{multi-party} interactions. For example, social networks can contain multiple overlapping groups, each of which has multiple individuals; economic transactions often involve several parties, and each party can participate in several transactions at the same time. Despite their importance, solving graph-theoretic problems in hypergraphs in a distributed fashion is often highly non-trivial, and usually much less understood than the corresponding problems in simple graphs. Computing a \emph{maximal independent set (MIS)} is a prominent example. For a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$, an \emph{independent set} $\mathcal{I}$ of $\mathcal{H}$ is a subset of $\mathcal{V}$ such that for each hyperedge in $\mathcal{E}$, at least one node is not in $\mathcal{I}$. An independent set $\mathcal{I}$ is called \emph{maximal} if adding any new node to $\mathcal{I}$ would violate independence. Efficient computation of an MIS is an important problem in distributed computing theory: it is a fundamental symmetry breaking problem; it could also be a key building block for solving many other problems (such as matching and vertex coloring). 
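The definitions of independence and maximality above translate directly into a short check. The set-based encoding below (hyperedges as node sets) is an illustrative choice of ours, purely to pin down the definitions:

```python
def is_independent(hyperedges, I):
    """I is independent iff every hyperedge keeps at least one node outside I."""
    I = set(I)
    return all(not set(e) <= I for e in hyperedges)

def is_mis(nodes, hyperedges, I):
    """Maximal: independent, and adding any outside node breaks independence."""
    if not is_independent(hyperedges, I):
        return False
    return all(not is_independent(hyperedges, set(I) | {v})
               for v in set(nodes) - set(I))
```

For instance, with hyperedges $\{1,2,3\}$ and $\{3,4\}$, the set $\{1,2,4\}$ is an MIS, since each hyperedge avoids full containment and adding node $3$ would complete the first hyperedge.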
Efficient algorithms for computing an MIS in simple graphs have long been known, and improvements are still being made (see, e.g., \cite{alon86,luby86,barenboim16,ghaffari16,censor-hillel17}). In contrast, for nearly three decades, researchers have been seeking a parallel algorithm for computing hypergraph MIS within poly-logarithmic time, yet the answer is still unclear (see, e.g., \cite{karp88,beame90,kelsen92,luczak97,bercea14,harris17}). For distributed message-passing systems, the hypergraph MIS problem has received much less attention. The two classic computational models used to study distributed graph problems in message-passing systems are the LOCAL model and the CONGEST model. In both cases, the network is modeled as an $n$-vertex graph and communication happens in synchronous rounds. In the LOCAL model, the messages exchanged in every round can be of arbitrary size, while in the CONGEST model only messages of size $O(\log n)$ are allowed. Currently, to the best of our knowledge, poly-logarithmic time algorithms for the hypergraph MIS problem only exist in the LOCAL model, or in the CONGEST model if the input hypergraph has constant dimension~\cite{kutten14}. It is no coincidence that the hypergraph MIS problem has a poly-logarithmic time (randomized) LOCAL solution. As has been made explicit by Ghaffari et al.~\cite{ghaffari17}, so long as a graph problem has a ``sufficiently local'' sequential greedy algorithm, there exists a systematic way to build a randomized LOCAL algorithm that solves the problem in poly-logarithmic time. However, this strategy has two drawbacks: (a) large message sizes; and (b) the considered problem is actually solved in a somewhat centralized fashion (though at a smaller scale) which might involve non-trivial local computation.
On the contrary, to compute an MIS in simple graphs, both the classical algorithm by Alon et al.\ and Luby~\cite{alon86,luby86} and (the first part of) the latest solution proposed by Ghaffari~\cite{ghaffari16} work well in the CONGEST model, and incur little local computation. Therefore, an interesting open question is: do poly-logarithmic time CONGEST algorithms exist that can solve the hypergraph MIS problem? In this paper, we make some progress towards answering this open question. Particularly, we focus on \emph{linear hypergraphs}---a class of hypergraphs in which any two hyperedges intersect in at most one node---and devise efficient algorithms to compute MIS and another closely related structure in such hypergraphs, in the distributed CONGEST model. We note that although linear hypergraphs are a specific subclass of hypergraphs, unique challenges that do not arise in simple graphs persist. In general, our hope is that understanding the MIS problem for linear hypergraphs will be an important intermediate step toward solving MIS in general hypergraphs, in the distributed CONGEST model. \bigskip\noindent{\bf MIS in Linear Hypergraphs.} Our first result is a randomized algorithm that computes an MIS for a linear hypergraph in poly-logarithmic time in the distributed CONGEST model. Conceptually, the algorithm contains two parts. In the first part, we utilize \emph{network decomposition}~\cite{elkin16} to decompose the input hypergraph into multiple smaller ones, each with bounded diameter. (The motivation for doing so will be discussed shortly.) The second part contains multiple iterations. In each iteration, within the bounded-diameter subhypergraphs, we further generate \emph{equitable subhypergraphs}. (Roughly speaking, an equitable hypergraph is somewhat like a ``regular graph'' in the simple graph world.)
Then, within each equitable subhypergraph, we independently mark each node with a carefully chosen probability, and let marked nodes that do not violate independence constraints join the MIS. Since the subhypergraphs are equitable, we prove that many nodes will decide in each iteration. Hence, after not too many iterations, the algorithm will output a complete MIS. The second part of this algorithm can be seen as a distributed variant of a parallel hypergraph MIS algorithm proposed by \L{}uczak and Szyma\'nska~\cite{luczak97}. Nonetheless, to maintain the correctness and efficiency of the original algorithm, the conversion process is nontrivial. Particularly, the first issue is that the original algorithm depends on knowledge of some global parameters. To avoid incurring $\Omega(D)$ time complexity where $D$ is the diameter of the input hypergraph, we employ network decomposition. This is also the motivation for the first part of our algorithm. The second and more critical issue is that the original algorithm depends on $\Theta(n)$ global parameters, in the worst case. This information would take too much time to collect, even after decomposition. To resolve this problem, we have refined the detailed analysis so that our algorithm now only depends on $O(\log{n})$ parameters. \bigskip\noindent{\bf The Generalized MIS (GMIS) Problem.} One way to interpret the hypergraph MIS problem is: for each hyperedge $e\in\mathcal{E}$, associate a threshold $t_e=|e|-1$, then an MIS $\mathcal{I}$ is a maximal subset of $\mathcal{V}$ such that for each hyperedge $e\in\mathcal{E}$, the number of nodes in $\mathcal{I}$ does not exceed $t_e$. (I.e., $\forall e\in\mathcal{E}, |\mathcal{I}\cap e|\leq t_e=|e|-1$.) Now, by allowing $t_e$ to be any integer value between one and $|e|-1$, we obtain what we define as the \emph{generalized maximal independent set (GMIS)} problem.
That is, in the GMIS problem, for each hyperedge $e\in\mathcal{E}$, we define (as input) a threshold $t_e$ where $1\leq t_e\leq |e|-1$, and the goal is to find a maximal subset $\mathcal{I}$ of $\mathcal{V}$ such that for each hyperedge $e\in\mathcal{E}$, we have $|\mathcal{I}\cap e|\leq t_e$. As previously mentioned, hypergraphs are an ideal structure to capture multi-party interactions. The thresholds on hyperedges can be used to represent the constraints posed by various problems. Therefore, we believe the additional flexibility of GMIS (in comparison with MIS) would allow it to model a wider range of real-world problems. \bigskip\noindent{\bf GMIS in Linear Hypergraphs.} Allowing arbitrary thresholds on hyperedges makes the already hard hypergraph MIS problem even more challenging. For example, many hypergraph MIS algorithms critically rely on the property that an independent set in a subhypergraph is also an independent set in the original hypergraph. However, as we shall see, a generalized independent set in a subhypergraph is \emph{not} necessarily a generalized independent set in the original hypergraph. As a result, we might have to adjust the definition of subhypergraph accordingly, which in turn could significantly affect the performance and/or correctness of the original algorithm. In this paper, we show that GMIS can be solved in $O(\log^2{n})$ time in the LOCAL model. Moreover, by generalizing our previous hypergraph MIS algorithm, we are able to devise a CONGEST algorithm that can solve GMIS in poly-logarithmic time, subject to the constraint that the input hypergraph is linear and has constant dimension. It is also worth noting that although we use the same high-level strategy, important adjustments to both the algorithm and the analysis are made during the generalization process. At first glance, it may seem easy to obtain a poly-logarithmic time GMIS algorithm for constant dimension hypergraphs, even in the CONGEST model.
However, it turns out that the most intuitive strategies do not lead to the desired outcome. For instance, the approach of reducing the maximum hyperedge threshold one by one can be slow. This is because, in the simple graph setting, Luby's algorithm and its variants achieve high efficiency by considering both the nodes that decide to join and not to join the MIS. Yet, for hypergraph GMIS (as well as MIS), it is hard to analyze how many nodes will decide to not join, thus making it difficult to argue how fast nodes are removed, or how fast the maximum hyperedge threshold is reduced. \section{Related Work} Efficient computation of MIS in simple graphs has always attracted considerable attention. In two seminal papers, Alon, Babai, and Itai, as well as Luby~\cite{alon86,luby86}, provided a randomized algorithm which solves the problem in $O(\log{n})$ time. Since then, many other solutions have been proposed (see, e.g., Section 1.1 of \cite{barenboim16} for a brief survey), and the current best known (randomized LOCAL) algorithm is proposed by Ghaffari~\cite{ghaffari16}. Perhaps surprisingly, however, how to efficiently compute MIS in hypergraphs is much less well understood. As we have mentioned earlier, researchers have been seeking a parallel algorithm that can compute a hypergraph MIS within poly-logarithmic time under the PRAM model for decades, yet the answer is still unclear. More specifically, in 1990, Beame and Luby~\cite{beame90} introduced a randomized algorithm with poly-logarithmic runtime for computing an MIS in hypergraphs of dimension three. Kelsen~\cite{kelsen92} improved the analysis of \cite{beame90} so that the algorithm can work for all constant dimension hypergraphs. Later, \L{}uczak and Szyma\'nska~\cite{luczak97} showed that for all linear hypergraphs, the problem can also be solved within poly-logarithmic time. The second part of our hypergraph MIS algorithm is a refined distributed variant of \L{}uczak and Szyma\'nska's algorithm.
On the other hand, for general hypergraphs, an early result by Karp et al.~\cite{karp88} showed that an MIS can be obtained in $O(\sqrt{n}\cdot(\log{n}+\log{m}))$ time where $m$ is the number of hyperedges. Later, by repeatedly using the algorithm of \cite{beame90}, Bercea et al.~\cite{bercea14} gave an algorithm that works in $n^{o(1)}$ time, subject to the constraint that there are not too many hyperedges. More recently, Harris~\cite{harris17} improved the result of Kelsen~\cite{kelsen92} and devised an algorithm with runtime $O(\log^{2^r}{n})$ for hypergraphs with dimension $r$. Lastly, we note that in the original paper by Beame and Luby~\cite{beame90}, the authors also proposed another simple parallel algorithm and conjectured it can solve MIS within poly-logarithmic time, for any hypergraph. However, to the best of our knowledge, the correctness of this conjecture is still unknown. For message-passing distributed systems, even less attention has been paid to the hypergraph MIS problem, and the most recent result comes from Kutten et al.~\cite{kutten14}. In their paper, by employing network decomposition~\cite{linial93} and exploiting the local nature of the MIS problem, the authors provided an $O(\log^2{n})$ time LOCAL algorithm. In contrast, under the CONGEST model in which each message is of bounded size, the authors presented two other results: (a) for hypergraphs with constant dimension $d$, an $O(\log^{(d+4)!+4}{n})$ time algorithm; and (b) for general hypergraphs, an $O(\min\{\Delta^\epsilon\log^{(1/\epsilon)^{O(1/\epsilon)}}{n},\sqrt{n}\})$ time algorithm where $\Delta$ is the maximum degree and $1\geq\epsilon\geq 1/(\frac{\log\log{n}}{c\log\log\log{n}}-1)$ for some constant $c$. In our linear hypergraph MIS algorithm, the dimension can be arbitrary, and the degree of the poly-logarithmic term (in the running time) does not depend on the dimension.
\section{Model and Problem} A \emph{hypergraph} $\mathcal{H}=(\mathcal{V},\mathcal{E})$ is defined by a set of nodes $\mathcal{V}$, and a set of \emph{hyperedges} $\mathcal{E}$. We usually assume $|\mathcal{V}|=n$, and each node has a unique identity. Each hyperedge $e\in\mathcal{E}$ contains two or more nodes in $\mathcal{V}$. The maximum size of all hyperedges is called the \emph{dimension} (or \emph{rank}) of a hypergraph. A hypergraph is a \emph{linear hypergraph} if each pair of hyperedges overlaps on at most one node. For a set of nodes $\mathcal{V}'\subseteq\mathcal{V}$, define $\mathcal{H}'=(\mathcal{V}',\mathcal{E}')$ to be the induced \emph{subhypergraph} of $\mathcal{H}$ where $\mathcal{E}'=\{e\ |\ e\in\mathcal{E},e\subseteq\mathcal{V}'\}$. For each hyperedge $e\in\mathcal{E}$, we associate an integer threshold $t_e$ where $1\leq t_e\leq |e|-1$. For a subset $\mathcal{I}$ of $\mathcal{V}$, we call it a \emph{generalized independent set} if for each $e\in\mathcal{E}$, $|e\cap\mathcal{I}|\leq t_e$. We say a generalized independent set $\mathcal{I}$ is a \emph{generalized maximal independent set (GMIS)} if adding any extra node to $\mathcal{I}$ would violate some hyperedge's threshold constraint. Notice, if for each hyperedge $e\in\mathcal{E}$ we define $t_e=|e|-1$, then a generalized independent set becomes a classical hypergraph \emph{independent set}, and a generalized maximal independent set becomes a classical hypergraph \emph{maximal independent set (MIS)}. To model a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$, we consider a synchronous message-passing network in which time is divided into discrete \emph{slots}. We adopt the \emph{server-client} model used in \cite{kutten14}. In this model, $\mathcal{H}$ is realized as a simple bipartite graph $G_{\mathcal{H}}=(V_{\mathcal{H}},E_{\mathcal{H}})$. The nodes in $G_{\mathcal{H}}$ are partitioned into two sets: $S_{\mathcal{H}}$ and $C_{\mathcal{H}}$.
Each node in $S_{\mathcal{H}}$ represents a particular node in $\mathcal{V}$, and each node in $C_{\mathcal{H}}$ represents a particular hyperedge in $\mathcal{E}$. We call the nodes in $S_{\mathcal{H}}$ \emph{servers}, and the nodes in $C_{\mathcal{H}}$ \emph{clients}. For a node $u\in C_{\mathcal{H}}$ and a node $v\in S_{\mathcal{H}}$, there is an edge (i.e., a bidirectional communication link) connecting them if and only if the node represented by $v$ is contained within the hyperedge represented by $u$. Another model to represent a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ is called the \emph{vertex-centric} model. In this model, $\mathcal{H}$ is again realized as a simple graph $G_{\mathcal{H}}=(V_{\mathcal{H}},E_{\mathcal{H}})$. However, here $V_{\mathcal{H}}$ simply denotes the set of nodes in $\mathcal{V}$, and there is an edge between two nodes $u$ and $v$ iff there is a hyperedge in $\mathcal{E}$ containing both $u$ and $v$. We call $G_{\mathcal{H}}$ the \emph{server graph} of $\mathcal{H}$. Throughout this paper, at the network layer, we use the server-client model to represent hypergraphs. However, for ease of presentation, we will sometimes discuss the server graph of the specified hypergraph. Regarding the capacity of the communication links, we will mostly consider the \emph{CONGEST} model. More specifically, in each time slot, for each direction of each link, only a $O(\log{n})$-sized message can be sent. Sometimes, we will also discuss the implications of our results under the \emph{LOCAL} model. In that case, in each time slot, for each direction of each link, an arbitrarily large message can be sent. In this paper, we are interested in finding efficient distributed algorithms that can solve MIS and GMIS in linear hypergraphs in the CONGEST model.
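The two structural notions defined above, linearity and generalized independence with per-edge thresholds, can be pinned down by a brief sketch (an illustrative encoding of our own, with hyperedges as node sets):

```python
def is_linear(hyperedges):
    """A hypergraph is linear iff any two hyperedges share at most one node."""
    es = [set(e) for e in hyperedges]
    return all(len(es[i] & es[j]) <= 1
               for i in range(len(es)) for j in range(i + 1, len(es)))

def is_gmis(nodes, hyperedges, thresholds, I):
    """GMIS check: |I ∩ e| <= t_e for every e, and no outside node can be added."""
    I = set(I)
    def respects_thresholds(S):
        return all(len(S & set(e)) <= t for e, t in zip(hyperedges, thresholds))
    return respects_thresholds(I) and all(
        not respects_thresholds(I | {v}) for v in set(nodes) - I)
```

Setting every threshold to $|e|-1$ recovers the classical MIS check, as noted above.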
Particularly, we will develop (Monte Carlo) randomized algorithms that can solve the considered problems \emph{with high probability (w.h.p.)}, i.e., a probability that is at least $1-1/n^c$ for some constant $c\geq 1$. \section{Decomposing Hypergraphs} Network decomposition (see, e.g., \cite{linial93,elkin16}) is a widely used technique in distributed computing for solving graph theoretic problems. For a simple graph $G=(V,E)$, a \emph{$(d,c)$-network-decomposition} is a partition of $V$ so that: (a) for each slice of the partition (i.e., a subset of $V$), the induced subgraph has diameter at most $d$; and (b) we can assign each slice of the partition a color within a set of $c$ colors, and ensure any two adjacent nodes in $G$ of the same color must be in the same slice of the partition. Moreover, $d$ is called the \emph{weak diameter} if, when computing the diameters of the induced subgraphs, edges not in the subgraph (but in $E$) can be used; otherwise, $d$ is called the \emph{strong diameter}. For many network algorithms, network decomposition can be used to boost efficiency as it allows for parallelism: subgraphs with the same color can usually be processed at the same time without interfering with each other. Network decomposition is also helpful in that it bounds the diameter of the graph instances the algorithm will process. Our hypergraph MIS/GMIS algorithm also relies on network decomposition to achieve high efficiency: first, decompose the input hypergraph into multiple subhypergraphs of bounded diameter; then, iterate through all colors and run the core MIS/GMIS algorithm in parallel within subhypergraphs of the same color; finally, combine all partial solutions to obtain a complete MIS/GMIS of the original hypergraph.
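The two defining properties of a $(d,c)$-network-decomposition can be verified mechanically. The following sketch (our own illustrative encoding, using strong diameter via BFS inside each slice) checks a candidate partition against them:

```python
from collections import deque

def check_decomposition(adj, slices, colors, d):
    """Verify a (d, c)-decomposition: every slice's induced subgraph has strong
    diameter <= d, and adjacent same-colored nodes lie in the same slice."""
    def bfs_ecc(S, src):  # eccentricity of src within the induced subgraph on S
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in S and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        # a node of S unreachable inside S means infinite strong diameter
        return max(dist.values()) if len(dist) == len(S) else float("inf")

    for S in slices:
        S = set(S)
        if any(bfs_ecc(S, v) > d for v in S):
            return False                      # property (a) violated
    slice_of = {v: i for i, S in enumerate(slices) for v in S}
    return all(slice_of[u] == slice_of[v]     # property (b)
               for u in adj for v in adj[u]
               if colors[slice_of[u]] == colors[slice_of[v]])
```

For example, on the path $1\!-\!2\!-\!3\!-\!4$, the partition $\{1,2\},\{3,4\}$ with two distinct colors is a valid $(1,2)$-decomposition, but giving both slices the same color violates property (b) at the edge $2\!-\!3$.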
In this part of the paper, we will present the guarantees provided by the decomposition procedure; we will also show that combining the MIS/GMIS found in each decomposed subhypergraph correctly gives a complete MIS/GMIS of the original input hypergraph. We begin with the decomposition procedure. The idea of decomposing the input hypergraph into multiple smaller ones and then computing MIS for these subhypergraphs in parallel has been used by Kutten et al.~\cite{kutten14}. In that paper, the authors utilized the classical $(O(\log{n}),O(\log{n}))$-network-decomposition algorithm developed by Linial and Saks~\cite{linial93}. However, Linial and Saks's algorithm only produces a decomposition with weak diameter $O(\log{n})$, and thus might result in congestion when communication occurs in multiple subhypergraphs simultaneously. To resolve this issue, Kutten et al.\ slightly modified Linial and Saks's algorithm so as to upper bound the potential congestion. Recently, Elkin and Neiman developed a new $(O(\log{n}),O(\log{n}))$-network-decomposition algorithm with strong diameter $O(\log{n})$~\cite{elkin16}. This is a strict improvement when compared with Linial and Saks's algorithm. Therefore, we implement Elkin and Neiman's algorithm in our model in this paper. More specifically, the decomposition procedure---which is described in detail in the proof of the following lemma---guarantees the following properties. \begin{lemma}\label{lemma-decomp-alg} Let $G_{\mathcal{H}}$ be the server graph of an $n$-node hypergraph $\mathcal{H}$.
With high probability, in $O(\log^2{n})$ time slots, for some positive integer $k$, we can partition nodes of $\mathcal{H}$ into $k$ sets $S_{1}, S_{2}, \cdots, S_{k}$, produce $k$ subgraphs of $G_{\mathcal{H}}$ denoted by $G_{1}, G_{2}, \cdots, G_{k}$, and assign a color within a set of $O(\log{n})$ colors to each set, such that: (a) for all $i$, subgraph $G_{i}$ is the induced subgraph of $S_{i}$ and has strong diameter $O(\log{n})$; (b) for any $S_{i}$ and $S_{j}$ that are assigned the same color, there is no hyperedge in $\mathcal{H}$ that contains nodes in both $S_{i}$ and $S_{j}$. \end{lemma} \begin{proof} We first briefly describe Elkin and Neiman's network decomposition algorithm. (More details can be found in the original paper~\cite{elkin16}.) The algorithm contains $\ln{n}$ stages. In the $i$\textsuperscript{th} stage, there are $2(cn/e^i)^{1/m}$ phases; we also fix $\beta_i=\ln{(cn/e^i)}/m$. Here, $c$ and $m$ are parameters that can be adjusted. Let $G'_1=G_{\mathcal{H}}$. In each phase $t$, we carve a block $W_t$ out of the current graph $G'_{t}$, and let $G'_{t+1}=G'_t\backslash W_t$. Notice, all nodes in $W_t$ get a unique color, and each connected component in $W_t$ is a slice of the final partition. In the $t$\textsuperscript{th} phase, each node $v$ in $G'_t$ independently samples a value $r_v$ from the exponential distribution with parameter $\beta_t$, where $\beta_t$ is the value of $\beta$ for the stage containing the $t$\textsuperscript{th} phase. Each node $v$ in $G'_t$ broadcasts $r_v$ to all nodes in $G'_t$ that are within distance $\lfloor r_v\rfloor$ from it. On the other hand, each node $y$ in $G'_t$ also records the values of $r_v$ that have reached it, along with the distances to these nodes. Then, $y$ sorts these nodes $v_1,v_2,\cdots,v_x$ according to $g_i=r_{v_i}-\texttt{dist}_{G'_{t}}(y,v_i)$ in decreasing order. Finally, $y$ is added to $W_t$ iff $g_1-g_2>1$.
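For intuition, the carving rule of a single phase can be sketched in a few lines of centralized Python. This is only an illustration of the rule, not the distributed simulation discussed below; for brevity it computes every $g_i$ globally rather than limiting propagation to $\lfloor r_v\rfloor$ hops, and all names are our own:

```python
import random
from collections import deque

def carve_block(adj, beta, seed=0):
    """One phase of the carving rule: node y joins the block W iff the top two
    values g_i = r_{v_i} - dist(y, v_i) differ by more than 1 (centralized sketch)."""
    rng = random.Random(seed)
    r = {v: rng.expovariate(beta) for v in adj}   # r_v ~ Exp(beta)

    def dists(src):  # BFS distances from src
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    d = {v: dists(v) for v in adj}
    W = set()
    for y in adj:
        # g values from every node that can reach y, sorted decreasingly
        g = sorted((r[v] - dv[y] for v, dv in d.items() if y in dv), reverse=True)
        if len(g) == 1 or g[0] - g[1] > 1:
            W.add(y)
    return W
```

The margin of $1$ between the two largest $g$ values is what guarantees that adjacent nodes of $W_t$ agree on the winning source, so each connected component of $W_t$ forms a low-diameter slice.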
As has been shown in \cite{elkin16}, by choosing proper $c$ and $m$, w.h.p.\ the above algorithm finishes within $O(\log{n})$ phases, and $r_v$ will always be bounded by $O(\log{n})$. Moreover, any connected component in any $W_t$ has strong diameter $O(\log{n})$. I.e., the algorithm can create a decomposition with strong diameter $O(\log{n})$ in $O(\log^2{n})$ time, using $O(\log{n})$ colors. We now describe one simple way to simulate Elkin and Neiman's algorithm in our CONGEST server-client model. To implement the $t$\textsuperscript{th} phase, we use $2j=\Theta(\log{n})$ time slots. More specifically, in the first slot within the phase, each server node $v$ sends $r_v$ to its neighboring client nodes. Then, we repeat the following for $2j-1$ time slots: in each even (resp., odd) slot, each client (resp., server) node sends the two maximum $g_i$ it has seen since the beginning of this phase to its neighbors. To see the correctness of the above simulation, consider a server node $y$. Assume in the original algorithm, in the phase in which $y$ gets a color, the two maximum values it obtained are $g_1=r_{u_1}-\texttt{dist}(y,u_1)$ and $g_2=r_{u_2}-\texttt{dist}(y,u_2)$. Further assume in our simulation, the two maximum values $y$ obtained are $g'_1=r_{u'_1}-\texttt{dist}(y,u'_1)$ and $g'_2=r_{u'_2}-\texttt{dist}(y,u'_2)$. Clearly, $g'_1\leq g_1$. Moreover, in that phase, a value equal to $g_1$ will reach $y$. Otherwise, there must exist a value $g''_1=r_{u''_1}-\texttt{dist}(y,u''_1)>g_1$, such that on the path from $u_1$ to $y$, some node $z$ receives both $g''_1$ and $g_1$, and decides to stop forwarding $g_1$. In that case, in the original execution, $g''_1$ will reach $y$ as well because $\lfloor r_{u''_1}-\texttt{dist}(z,u''_1)\rfloor\geq\lfloor r_{u_1}-\texttt{dist}(z,u_1)\rfloor$ decides the number of remaining hops the message will propagate (from node $z$), contradicting the assumption that the maximum value received by $y$ is $g_1$. Hence, we know $g'_1=g_1$.
Similarly, we can also prove $g_2=g'_2$. Therefore, we know our simulation is correct. Since simulating one phase costs $\Theta(\log{n})$ time slots (as $r_v\in O(\log{n})$), and there are $O(\log{n})$ phases, the decomposition procedure terminates in $O(\log^2{n})$ time in our network model. The properties in the lemma follow by the definition of network decomposition. \end{proof} With a proper decomposition, the core MIS/GMIS algorithm only needs to deal with bounded diameter hypergraphs. In particular, the following lemma---which is inspired by Lemma 3 in \cite{kutten14}---shows that if we can compute MIS/GMIS in low diameter hypergraphs fast, then we can also compute it in general hypergraphs fast. Notice, when compared with the original version, the proof is generalized so that the claim holds for GMIS as well. \begin{lemma}[Decomposition lemma, generalized version of Lemma 3 in \cite{kutten14}]\label{lemma-decomp} Assume we are given a hypergraph $\mathcal{H}$ containing $n$ nodes. If there exists an algorithm $\mathcal{A}$ that computes an MIS (resp., GMIS) for hypergraph $\mathcal{H}'$---which contains $n'\leq n$ nodes and has $O(\log{n})$ diameter---in $T(n')$ time, then there exists an algorithm that computes an MIS (resp., GMIS) for $\mathcal{H}$ within $O(T(n)\cdot\log{n}+\log^2{n})$ time. \end{lemma} \begin{proof} Let $G_{\mathcal{H}}$ be the server graph of $\mathcal{H}$. First, run the network decomposition algorithm on $G_{\mathcal{H}}$ as discussed in the proof of Lemma \ref{lemma-decomp-alg}. This step takes $O(\log^2{n})$ time slots. The next step contains $O(\log{n})$ iterations, and in the $i$\textsuperscript{th} iteration we consider node sets with color $i$. Assume node set $S_t$ has color $i$, and the corresponding subgraph is $G_{t}$. In the $i$\textsuperscript{th} iteration, we need to decide for each node in $S_{t}$ whether it is in the final solution of MIS (resp., GMIS) or not. 
In the following analysis, we assume we have already done so for the node sets with color $1$ to $i-1$. Define $\mathcal{H}_{t}$ to be the following subhypergraph. $\mathcal{H}_{t}$ contains all nodes in $S_{t}$. (Recall that a node in $S_{t}$ represents a node in $\mathcal{H}$.) For each hyperedge $e$ that contains some node in $S_{t}$, count the number of nodes that satisfy either of the following two conditions: (a) the node is in a set of color $j>i$; or (b) the node is in a set of color $j<i$ and has already decided not to be in the MIS (resp., GMIS). If the count is strictly smaller than $|e|-t_e$, where $t_e$ is the threshold of $e$ in $\mathcal{H}$, then we add a hyperedge $e'=e\cap S_{t}$ to $\mathcal{H}_{t}$. The threshold of $e'$ is the remaining threshold that is still available to $e$. Notice, since we use the server-client model to realize the hypergraph, in a synchronized execution, we can construct $\mathcal{H}_{t}$ in a constant number of time slots, even in the CONGEST model. In particular, in the $i$\textsuperscript{th} iteration, a server (i.e., node in $\mathcal{H}$) $u$ can first tell each adjacent client (i.e., hyperedge) $e$ about its color and whether it has decided to be in the MIS (resp., GMIS) or not. This information can be sent within one message. The client can then locally check and decide, for nodes with color $i$, whether $e'=e\cap S_{t}$ should be added to $\mathcal{H}_{t}$ or not. Next, the client can inform each adjacent server with color $i$ about whether $e'$ is constructed or not, and the remaining threshold. (However, this acknowledgment cannot contain the identities of the nodes in $e'$ due to the message size constraint.) Once $\mathcal{H}_t$ is constructed, we compute an MIS (resp., GMIS) of $\mathcal{H}_{t}$. In particular, we run algorithm $\mathcal{A}$ on $\mathcal{H}_t$. Since $G_{t}$ has $O(\log{n})$ diameter, we know $\mathcal{A}$ will finish within $O(T(n))$ time if we only run it on $G_t$.
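Backing up a step, the construction of $\mathcal{H}_t$ described above can be condensed into a centralized sketch. The function name and the `decided` map are ours, chosen for illustration; a hyperedge with threshold $t_e$ allows at most $t_e$ of its nodes to join the set.

```python
def restrict_to_color(edges, thresholds, color, i, decided):
    """Build H_t for color class i. `decided[v]` is 'in', 'out', or absent
    (v is undecided)."""
    sub = []
    for e, t_e in zip(edges, thresholds):
        part = [v for v in e if color[v] == i]   # e ∩ S_t
        if not part:
            continue
        # nodes of later colors, or earlier colors that opted out
        slack = sum(1 for v in e if color[v] > i or decided.get(v) == 'out')
        if slack < len(e) - t_e:                 # e can still be violated
            joined = sum(1 for v in e if decided.get(v) == 'in')
            sub.append((part, t_e - joined))     # residual threshold
    return sub
```

For instance, a hyperedge $\{0,1,2,3\}$ with threshold $2$, where node $3$ (earlier color) already joined and node $2$ has a later color, restricts to $(\{0,1\},1)$ for the current color class.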
However, we need to run $\mathcal{A}$ on all $G_{t_i}$ with color $i$. Nevertheless, due to property (b) in Lemma \ref{lemma-decomp-alg}, we can indeed run $\mathcal{A}$ on all $G_{t_i}$ in parallel without worrying about congestion. Hence, we can still finish executing $\mathcal{A}$ on all such $G_{t_i}$ in $O(T(n))$ time. After running $\mathcal{A}$ for all $\mathcal{H}_t$ of all colors, the combined solution of all $\mathcal{H}_t$ will be a valid solution for the MIS (resp., GMIS) problem on hypergraph $\mathcal{H}$. We now prove the correctness of this claim. Let $M_t$ be the constructed MIS (resp., GMIS) of $\mathcal{H}_t$. Firstly, observe that any node in $M_t$ can be added to the MIS (resp., GMIS) solution of $\mathcal{H}$ without violating the threshold constraints. This is because, when constructing $\mathcal{H}_t$, for each hyperedge $e$ in $\mathcal{H}$ that contains some node in $S_t$, if $e'=e\cap S_t$ is added to $\mathcal{H}_t$, then the threshold of $e$ is inherited and updated. Otherwise, if $e'$ is not added, then even if all nodes in $e$ with color $i$ decide to join the MIS (resp., GMIS), the threshold of $e$ will not be violated, as there are enough nodes in $e$ that have decided not to join the MIS (resp., GMIS), or have not decided yet. Secondly, we claim that if a node $u\in S_t$ is not in $M_t$, then there exists a hyperedge $e'$ in $\mathcal{H}_t$ such that adding $u$ to $M_t$ would violate the threshold constraint of $e'$, and hence of the corresponding hyperedge $e$ in $\mathcal{H}$. Here, $e$ is a hyperedge in $\mathcal{H}$ and $e'=e\cap S_t$. To see this, assume adding $u$ to $M_t$ would violate the threshold constraint of $e'$ in $\mathcal{H}_t$. (We can make this assumption since $\mathcal{A}$ can correctly compute an MIS (resp., GMIS) in $\mathcal{H}_t$.) Further assume there are $y_e$ nodes in $e$ that have already decided to join the MIS (resp., GMIS) when constructing $e'$. This implies the threshold associated with $e'$ is $t_e-y_e$.
Moreover, adding $u$ to $M_t$ would make $t_e-y_e+1$ nodes in $e'$ decide to join the MIS (resp., GMIS). Therefore, adding $u$ to $M_t$ would make $t_e+1$ nodes in $e$ decide to join the MIS (resp., GMIS), which is a violation. To complete the proof of the lemma, notice that we need $O(T(n))$ time for each color, and we have $O(\log{n})$ colors. Therefore, the total time complexity for computing MIS (resp., GMIS) on $\mathcal{H}$ is $O(T(n)\cdot\log{n}+\log^2{n})$. \end{proof} Before proceeding to the next part, we note that Lemma \ref{lemma-decomp} implies we can solve hypergraph GMIS (hence MIS as well) in $O(\log^2{n})$ time, in the distributed LOCAL model. \begin{theorem}\label{thm-gmis-local} A GMIS can be computed in $O(\log^2{n})$ time in the LOCAL model, w.h.p. \end{theorem} \begin{proof} As stated in Lemma \ref{lemma-decomp-alg}, in $O(\log^2{n})$ time, we can decompose the input hypergraph. Then, we proceed as specified in the proof of Lemma \ref{lemma-decomp}. Notice, we are now in the LOCAL model, which means each message can be of arbitrary size. For each subgraph, since the diameter is $O(\log{n})$, by flooding information for $O(\log{n})$ time slots, all nodes in the subgraph will know everything about the constructed hypergraph, and can thus compute (identical) GMIS for the constructed hypergraph locally. This implies $T(n)=O(\log{n})$. As a result, computing GMIS for all colors takes $O(\log^2{n})$ time. \end{proof} \section{Computing an MIS in Linear Hypergraphs} \subsection{The Algorithm} In this section, we introduce a randomized distributed algorithm that solves classical MIS in linear hypergraphs, within poly-logarithmic time. As previously mentioned, it is based on a parallel algorithm originally developed by \L{}uczak and Szyma\'nska~\cite{luczak97}. 
Nonetheless, we have adjusted the algorithm and refined its analysis so as to greatly reduce the number of input parameters the algorithm depends on; this is what makes the distributed variant efficient, even in the CONGEST model. Throughout this section, we restrict our attention to hypergraphs with diameter $O(\log{n})$, as an MIS for hypergraphs with larger diameter can then be computed with only a poly-logarithmic overhead in time complexity, due to Lemma \ref{lemma-decomp}. Before presenting the algorithm, we introduce some relevant notation. For an $n$-node hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$, define $U_{i}=U_{i}(\mathcal{H})=\{e\ |\ e\in\mathcal{E},|e|=i\}$, and $u_{i}=u_{i}(\mathcal{H})=|U_{i}|$. That is, $U_{i}$ is the set of dimension $i$ hyperedges, and $u_{i}$ is the cardinality of set $U_{i}$. For a node $v\in\mathcal{V}$, define $d_i(v)=d_i(v,\mathcal{H})=|\{e\ |\ e\in U_i,v\in e\}|$. That is, $d_i(v)$ is the number of dimension $i$ hyperedges that contain $v$. It is easy to see $\sum_{v\in\mathcal{V}}{d_i(v)}=iu_i$, which in turn implies the average value of $d_i(v)$ is $iu_i/n$. We say hypergraph $\mathcal{H}$ is \emph{equitable} if either $n\leq c_{eq}$ for some sufficiently large constant $c_{eq}$, or for every $i\leq\log{n}$ and every node $v$ we have $d_i(v)\leq(iu_i/n)\cdot\log^5{n}$. (That is, for a sufficiently large hypergraph, it is ``equitable'' iff for every $i\leq\log{n}$, each node's ``dimension $i$ degree'' is not much larger than the ``average dimension $i$ degree''.) The high-level idea of the algorithm is not complicated: we initialize the independent set $\mathcal{I}$ as an empty set, and then gradually add nodes to $\mathcal{I}$; meanwhile, we also remove nodes that would violate the independence requirement if appended to $\mathcal{I}$. More specifically, the algorithm contains multiple iterations, each of which contains three parts.
In the first part, we find a large equitable subhypergraph $\mathcal{H}'$ by continuously removing nodes that deviate a lot from the current average $d_i(v)$ for some $i\leq\log{n'}$, along with all the hyperedges containing any of the removed nodes. (Here, $n'$ is the number of nodes in $\mathcal{H}'$.) In the second part, we add some nodes in $\mathcal{H}'$ into a candidate set $\mathcal{W}$. The detailed rule depends on a parameter $\hat{a}$: in case $\hat{a}$ is small, we only add one special node into $\mathcal{W}$; otherwise, we add each node into $\mathcal{W}$ independently with probability $\min\{\hat{a},e^{-6}\}$. (More details regarding $\hat{a}$ will be given shortly.) Then, we remove from $\mathcal{W}$ all nodes that are contained in some hyperedge of $\mathcal{H}'$ all of whose nodes lie in $\mathcal{W}$. The resulting set $\mathcal{I}'$ is an independent set of $\mathcal{H}'$. In the last part, we add nodes in $\mathcal{I}'$ to $\mathcal{I}$, and remove them from $\mathcal{H}$. We also remove from $\mathcal{H}$ all nodes $v\notin\mathcal{W}$ for which there exists a hyperedge $e\in\mathcal{E}$ such that $e\subseteq\{v\}\cup\mathcal{I}'$, as these nodes surely cannot be added to $\mathcal{I}$. When implementing the above algorithm, there are some details worth clarifying. In \L{}uczak and Szyma\'nska's original algorithm, in each iteration, in the equitable subhypergraph $\mathcal{H}'$, the aforementioned parameter $\hat{a}$ is a real value satisfying $n'/\log^8{n'}\leq\sum_{i\geq 2}{i\cdot u_i(\mathcal{H}')\cdot \hat{a}^{i-1}}\leq 2n'/\log^8{n'}$. According to this definition, to obtain $\hat{a}$, we might have to collect $\Theta(n')$ different $u_i(\mathcal{H}')$ values, resulting in unacceptable time consumption. Instead, in our variant, $\hat{a}$ is defined to be a real value satisfying $n'/\log^8{n'}\leq\sum_{i=2}^{\log{n'}}{i\cdot u_i(\mathcal{H}')\cdot \hat{a}^{i-1}}\leq 2n'/\log^8{n'}$, which can be obtained much more efficiently.
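Concretely, since the left-hand sum is increasing in $\hat{a}$, once $n'$ and the $u_i$ values are known such an $\hat{a}$ can be found locally by bisection. The sketch below is ours (it assumes base-2 logarithms and that a valid $\hat{a}$ exists in $(0,1)$); it is not part of the distributed protocol itself.

```python
import math

def find_a_hat(u, n_prime, iters=200):
    """Bisection for a value a with
    n'/log^8 n' <= sum_{i=2}^{log n'} i * u[i] * a^(i-1) <= 2n'/log^8 n'.
    `u` maps dimension i to u_i(H')."""
    L = math.log2(n_prime)
    lo_target = n_prime / L ** 8

    def f(a):
        return sum(i * ui * a ** (i - 1) for i, ui in u.items() if 2 <= i <= L)

    lo, hi = 0.0, 1.0                 # invariant: f(lo) < lo_target <= f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < lo_target:
            lo = mid
        else:
            hi = mid
    return hi   # f(hi) >= lo_target, with negligible overshoot
```

After enough halvings the overshoot is far below $n'/\log^8{n'}$, so the returned value lands in the required window.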
(We have refined the analysis to ensure that correctness is still guaranteed with this updated definition.) On the other hand, within each iteration, to calculate the value of $\hat{a}$, we need to know $n'$, as well as $u_i(\mathcal{H}')$ for each $2\leq i\leq\log{n'}$. To obtain these values, our strategy is to first elect a leader in the server-client representation of $\mathcal{H}$ (the leader can be either a server or a client), and then build a BFS tree rooted at the leader. Once the tree is built, we use aggregation to allow the root to obtain the needed values. Finally, the root broadcasts these values to all other nodes. Our procedures for accomplishing the above tasks are mostly based on the standard algorithms described in Chapters 3 and 5 of Peleg's book~\cite{peleg00}. (See Appendix \ref{appdix-obtain-para} for more details.) It is also worth noting that we cannot simply build a tree in $\mathcal{H}'$, as it might not be connected at all. Hence, during aggregation, nodes not in $\mathcal{H}'$ simply forward values without updating them; this ensures that the final results are obtained with respect to $\mathcal{H}'$. The detailed algorithm is provided in Figure \ref{fig-protocol-linear-mis-distributed}. For simplicity, we only show the pseudocode for server nodes, and omit the pseudocode for client nodes.
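As a complementary, purely sequential sketch of part one (not the distributed implementation in Figure \ref{fig-protocol-linear-mis-distributed}; the function name, base-2 logarithm, and small-size cutoff are our assumptions), the pruning loop that produces the equitable subhypergraph can be written as:

```python
import math
from collections import Counter, defaultdict

def make_equitable(n_nodes, edges, c=4):
    """Repeatedly drop nodes whose dimension-i degree exceeds
    (i * u_i / n') * log^c n' for some i <= log n', together with every
    hyperedge containing such a node, until no such node remains."""
    active = set(range(n_nodes))
    while len(active) > 2:
        np_ = len(active)
        L = math.log2(np_)
        # hyperedges that survive among the active nodes, of dimension <= log n'
        live = [e for e in edges if set(e) <= active and 2 <= len(e) <= L]
        u = Counter(len(e) for e in live)        # u_i of the surviving part
        d = defaultdict(Counter)                 # d[v][i]
        for e in live:
            for v in e:
                d[v][len(e)] += 1
        bad = {v for v in active
               if any(d[v][i] > (i * u[i] / np_) * L ** c for i in u)}
        if not bad:
            break                                # equitable: nobody deviates
        active -= bad
    return active
```

Using the $\log^4$ removal threshold while the equitable condition only demands $\log^5$ is exactly the slack exploited in the analysis below.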
\begin{figure}[t] \hrule \vspace{1ex}\textbf{Pseudocode executed at a node $v$ in $\mathcal{H}$:}\vspace{1ex} \hrule \begin{small} \begin{algorithmic}[1] \State $state\gets active$ \For {($l_1\gets 1$ to $\Theta(\log^{18}{n})$)} \If {($state\neq included$ \textbf{and} $state\neq excluded$)} \State $state\gets active$ \Comment If $v$ has not decided then join this iteration \EndIf \Statex \hspace{3ex}$\blacktriangleright$ \textsc{Part I}: Create equitable subhypergraph \State $\hat{n}\gets\texttt{CountNode}()$ \Comment Count nodes that are in $active$ state in $O(\log{n})$ time \For {($l_2\gets 1$ to $\Theta(\log^{2}{\hat{n}})$)} \State $n'\gets\texttt{CountNode}()$ \State $\texttt{CountUi}()$ \Comment Count $u_i$ for $2\leq i\leq\log{n'}$ in $O(\log^2{n})$ time \If {($\texttt{CheckEq()}=true$)} \Comment Check if hypergraph is equitable in $O(\log{n})$ time \State \textbf{continue} \ElsIf {($state=active$ \textbf{and} $d_i(v)>\frac{iu_i}{n'}\cdot\log^4{n'}$ for some $i\leq\log{n'}$)} \State $state\gets idle$ \State Inform adjacent client nodes about $v$ becoming $idle$ for this iteration \EndIf \EndFor \If {($state=idle$)} \Comment Ignore part two if $v$ is not in the equitable subhypergraph \State \textbf{goto} \textsc{Part III} \EndIf \Statex \hspace{3ex}$\blacktriangleright$ \textsc{Part II}: Generate an independent set \State Compute $\hat{a}$ such that $\frac{n'}{\log^8{n'}}\leq\sum_{i=2}^{\log{n'}}{i\cdot u_i\cdot \hat{a}^{i-1}}\leq\frac{2n'}{\log^8{n'}}$ \State $p_0\gets\min\{\hat{a},e^{-6}\}$ \If {($p_0\leq\frac{\log^8{n'}}{n'}$)} \State $v'\gets\texttt{MaxD2}()$ \Comment $\texttt{MaxD2}$ finds an active node $v'$ that maximizes $d_2(v')$ in $O(\log{n})$ time \If {($v=v'$)} \State $state\gets elected$ \EndIf \ElsIf {($\texttt{Random(0,1)}\leq p_0$)} \Comment $\texttt{Random}(x,y)$ samples a random real value in $[x,y]$ \State $state\gets elected$ \EndIf \If {($state=elected$ \textbf{and} there is no adjacent client $e$ s.t.\ all active servers connected to $e$ are $elected$)} \State
$state\gets included$ \Comment $v$ decides to join the MIS \EndIf \Statex \hspace{3ex}$\blacktriangleright$ \textsc{Part III}: Update the hypergraph \If {($state=included$)} \State Inform adjacent client nodes about $v$ deciding to join the MIS \EndIf \If {(there is an adjacent client $e$ s.t.\ except $v$ all servers connected to $e$ are $included$)} \State $state\gets excluded$ \Comment $v$ decides not to join the MIS \EndIf \State Inform adjacent client nodes about $v$'s decision if it has decided \EndFor \end{algorithmic} \end{small} \hrule\vspace{1ex} \caption{\textbf{Pseudocode executed at a node in $\mathcal{H}$ for computing MIS.}}\label{fig-protocol-linear-mis-distributed} \vspace{-3ex} \end{figure} \subsection{The Analysis} From the pseudocode it is easy to see that the runtime of the algorithm is poly-logarithmic. Therefore, in this part, we focus on proving the correctness of the algorithm. To begin with, we state two important observations. \begin{fact}\label{fact-subhypergraph-mis} Let $\mathcal{H}'$ be a subhypergraph of $\mathcal{H}$; an independent set of $\mathcal{H}'$ is independent in $\mathcal{H}$ too. \end{fact} \begin{fact}\label{fact-linear-hypergraph-edge-num} A linear hypergraph $\mathcal{H}$ containing $n$ nodes has at most ${n\choose 2}$ hyperedges. \end{fact} \begin{proof} To see this, consider an arbitrary hyperedge $e$ in the hypergraph. If $|e|\geq 3$, then we split $e$ into two hyperedges $e_1$ and $e_2$ such that $||e_1|-|e_2||\leq 1$. If $|e_1|$ (or $|e_2|$) is of size one, then we remove $e_1$ (or $e_2$). (It cannot be the case that both $e_1$ and $e_2$ have size one since we require $|e|\geq 3$.) Notice, this procedure does not decrease the number of hyperedges in the hypergraph. Now, if we apply this procedure on all hyperedges recursively, we will eventually have a simple graph containing $n$ nodes. Since a simple graph with $n$ nodes has at most ${n\choose 2}$ edges, the claim is proved.
\end{proof} The first key technical lemma below shows that within each iteration of the main algorithm, after part one, we have generated a large equitable subhypergraph containing at least half of the undecided nodes. \begin{lemma}[Adapted from Claim 1 in \cite{luczak97}]\label{lemma-linear-mis-large-eq} Assume at the beginning of an iteration there are $\hat{n}$ nodes in $\mathcal{H}$ that still have not decided whether to join the MIS or not. Then, after part one of this iteration, there are at least $\hat{n}/2$ nodes in $active$ state, and they induce an equitable subhypergraph. \end{lemma} \begin{proof} During part one, we have a loop which contains $\Theta(\log^{2}{\hat{n}})$ inner iterations. Assume there are $\hat{n}'$ $active$ nodes at the beginning of an inner iteration. Now, if the $active$ nodes do not form an equitable subhypergraph, then within this inner iteration, each $active$ node $v$ will check whether $d_i(v)>(iu_i/\hat{n}')\cdot\log^4{\hat{n}'}$ for some $i\leq\log{\hat{n}'}$. If such a $d_i(v)$ exists, then $v$ will set itself as $idle$, and inform adjacent client nodes (so that these hyperedges will not be in the equitable subhypergraph). We now argue that if the $\hat{n}'$ $active$ nodes do not form an equitable hypergraph at the beginning of an inner iteration, and the updated hypergraph is still not equitable by the end of this inner iteration, then during this inner iteration, for some $i\leq\log{\hat{n}'}$, the number of active dimension $i$ hyperedges decreases by at least a factor of $\log{\hat{n}'}$. To see this, notice that for such an event to happen, there must exist some node $v$ and some $i\leq\log{\hat{n}'}$ such that $d_i(v)\leq(iu_i/\hat{n}')\cdot\log^4{\hat{n}'}$ prior to this inner iteration, and $d'_i(v)>(iu'_i/\hat{n}'')\cdot\log^5{\hat{n}''}$ after this inner iteration. Notice, according to the definition, we know $d'_i(v)\leq d_i(v)$.
If $u_i$ decreases by a factor less than $\log{\hat{n}'}$, then we know $d'_i(v)>(iu'_i/\hat{n}'')\cdot\log^5{\hat{n}''}>(iu_i/\hat{n}'')\cdot(1/\log{\hat{n}'})\cdot\log^5{\hat{n}''}\geq(iu_i/\hat{n}')\cdot(1/\log{\hat{n}'})\cdot\log^5{\hat{n}'}\geq d_i(v)$, a contradiction. We then argue that $\Theta(\log^{2}{\hat{n}})$ inner iterations are enough to generate an equitable subhypergraph. Assume prior to the first inner iteration, we have $x_i$ dimension $i$ hyperedges, where $1\leq i\leq\log{\hat{n}}$. Due to Fact \ref{fact-linear-hypergraph-edge-num}, we know $\sum_{i=1}^{\log{\hat{n}}}{x_i}\leq{\hat{n}\choose 2}<\hat{n}^2$, implying $x_i<\hat{n}^2$. After each inner iteration, either we have an equitable subhypergraph, or the number of active dimension $i$ hyperedges decreases by at least a factor of $\log{\hat{n}'}$ for some $i$, where $\hat{n}'$ is the number of active nodes prior to this inner iteration. Notice, once the number of active dimension $i$ hyperedges drops below one for all $i\leq\log{\hat{n}}$, the resulting subhypergraph must be equitable. On the other hand, for $x_i$ to drop below one, it is easy to see that we need at most $O(\log{\hat{n}})$ inner iterations. Hence, the total number of inner iterations we need is at most $O(\log^2{\hat{n}})$. Finally, we argue that the equitable subhypergraph generated by part one contains at least $\hat{n}/2$ nodes. To see this, notice that for arbitrary $i$, prior to an inner iteration, if there are $\hat{n}'$ $active$ nodes in total, then there are at most $\hat{n}'/\log^4{\hat{n}'}$ nodes satisfying $d_i(v)>(iu_i/\hat{n}')\cdot\log^4{\hat{n}'}$. Hence, during this inner iteration, we set at most $\hat{n}'/\log^3{\hat{n}'}=O(\hat{n}/\log^3{\hat{n}})$ $active$ nodes to $idle$. Since there are only $O(\log^2{\hat{n}})$ inner iterations, we know that after part one, the generated equitable subhypergraph contains at least $\hat{n}/2$ nodes. \end{proof} In the following discussion, we focus on parts two and three of each iteration.
In particular, we show that if the generated equitable subhypergraph contains $c_1\leq n'\leq n$ nodes, then after parts two and three, with at least some constant probability, at least $n'/\log^{17}{n}$ previously undecided nodes will make up their minds. Here, $c_1$ is a sufficiently large positive constant. To prove the above claim, we consider three cases, depending on the value of $\hat{a}$. The first case focuses on the scenario where $\hat{a}\leq\log^8{n'}/n'$. In this situation, there must exist a node $u$ that is contained in many dimension-two hyperedges. Thus, by letting $u$ join the MIS, the other nodes in these dimension-two hyperedges will decide not to join the MIS. \begin{lemma}[Adapted from Case 1 of Lemma 1 in \cite{luczak97}]\label{lemma-linear-mis-remove-nodes-case1} Assume after part one of an iteration there are $n'$ nodes in the generated equitable subhypergraph. Further assume $\hat{a}\leq\log^8{n'}/n'$. Then, after part three of this iteration, at least $\Theta(n'/\log^{17}{n'})$ previously undecided nodes will decide whether to join the MIS or not. \end{lemma} \begin{proof} First, notice that $\sum_{i\geq 3}{(iu_i\cdot\hat{a}^{i-1})}\leq\hat{a}^2\sum_{i\geq 3}{iu_i}$ when $\hat{a}\leq\log^8{n'}/n'$. We then argue that the value of $\sum_{i\geq3}iu_i$ is at most $3\cdot{n'\choose 2}\leq(3/2)\cdot(n')^2$. To see this, we interpret $\sum_{i\geq3}iu_i$ as the sum of \emph{G3-degrees} of all the $n'$ nodes in the equitable hypergraph. Here, for a node, the G3-degree is defined as its degree when counting only hyperedges of dimension at least three. Now, to count the sum of G3-degrees, consider the following procedure. Take an arbitrary hyperedge $e$ in the hypergraph with dimension at least three; we split $e$ into two hyperedges $e_1$ and $e_2$ such that $|e_1|=2$. If $|e_2|$ is one, then we remove $e_2$.
Notice, if we apply this procedure on all hyperedges with dimension at least three recursively, we will eventually have a simple graph containing $n'$ nodes. Moreover, during the above procedure, the sum of degrees of all nodes always upper bounds the sum of G3-degrees of the original hypergraph. The only exception is that when we have a hyperedge of size one, it is removed, and this decreases the sum by one. Since such bad event can happen at most once for each of the at most ${n'\choose 2}$ hyperedges in the original hypergraph, and since for a simple graph with $n'$ nodes, the sum of all nodes' degree is at most $n'(n'-1)$, we know $\sum_{i\geq3}iu_i\leq n'(n'-1)+{n'\choose 2}= 3\cdot{n'\choose 2}$. With the above fact, we can now conclude: \vspace{-2ex} \begin{align*} \sum_{i\geq 3}{\left(iu_i\cdot\hat{a}^{i-1}\right)}\leq\hat{a}^2\cdot\sum_{i\geq 3}{iu_i}\leq\left(\log^{16}{n'}/(n')^2\right)\cdot(3/2)\cdot(n')^2=(3/2)\cdot\log^{16}{n'} \end{align*} Hence, we know: \vspace{-2ex} \begin{align*} u_2 &= (1/2\hat{a})\cdot\left(\sum_{i=2}^{\log{n'}}{(iu_i\cdot\hat{a}^{i-1})}-\sum_{i=3}^{\log{n'}}{(iu_i\cdot\hat{a}^{i-1})}\right) \\ &\geq (1/2\hat{a})\cdot\left(n'/\log^8{n'}-(3/2)\cdot\log^{16}{n'}\right) \\ &\geq \left(n'/(2\log^8{n'})\right)\cdot\left(n'/\log^8{n'}-(3/2)\cdot\log^{16}{n'}\right) \\ &\geq (n')^2/(3\log^{16}{n'}) \end{align*} Notice, the last inequality holds when $n'$ is sufficiently large. Hence, assuming $v$ maximizes $d_2$, we have $d_2(v)\geq 2u_2/n'\geq 2n'/(3\log^{16}{n'})\geq n'/\log^{17}{n'}$. Now, notice during part three, for each of the $d_2(v)$ dimension two hyperedges that contain node $v$, the other node in the hyperedge will decide to not be in the MIS (since $v$ is already in the MIS). As a result, we remove at least $d_2(v)$ nodes. \end{proof} The second case focuses on the scenario where $\log^8{n'}/n'\leq\hat{a}\leq e^{-6}$. This is the most involved situation. 
Since we have adjusted the definition of $\hat{a}$, a refined and more careful analysis than the original proof provided in \cite{luczak97} is needed to show the correctness of the following lemma. At a high level, the proof is organized in the following way. Let $\mathcal{W}$ be the set of $elected$ nodes. We first show that with at least constant probability, there are many hyperedges $e$ in the equitable subhypergraph satisfying $|e\cap\mathcal{W}|=|e|-1$. Then, we prove that most of these hyperedges are vertex-disjoint and do not intersect with the hyperedges that are entirely contained in $\mathcal{W}$. Therefore, for most of the hyperedges $e$ satisfying $|e\cap\mathcal{W}|=|e|-1$, at least one node in $e$ will decide not to join the MIS. \begin{lemma}\label{lemma-linear-mis-remove-nodes-case2} Assume after part one of an iteration there are $n'$ nodes in the generated equitable subhypergraph. Further assume $\log^8{n'}/n'\leq\hat{a}\leq e^{-6}$. Then, after part three of this iteration, with at least constant probability, at least $\Theta(n'/\log^{8}{n'})$ previously undecided nodes will decide whether to join the MIS or not. \end{lemma} \begin{proof} Before proving the lemma, we briefly recap what parts two and three do. During part two, when $\log^8{n'}/n'\leq\hat{a}\leq e^{-6}$, we sample a set of nodes $\mathcal{W}$ by choosing each node independently with probability $\hat{a}$. We then construct an independent set $\mathcal{I}\subseteq\mathcal{W}$ by removing from $\mathcal{W}$ all nodes that are contained in some hyperedge $e$ of $\mathcal{H}'$ satisfying $e\subseteq\mathcal{W}$. Lastly, in part three, we let a node $v$ decide not to join the MIS if it is in some hyperedge $e$ such that every node except $v$ in $e$ has already decided to join the MIS. To prove the lemma, we rely on two key claims.
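The recap of part two above can be condensed into a short centralized sketch (the function name and seed are ours); the assertion at the end checks that the surviving set is indeed independent, which holds deterministically regardless of the random choices:

```python
import random

def sample_independent(n, edges, p0, seed=7):
    """Elect each node with probability p0, then un-elect every node lying
    in a hyperedge whose nodes were all elected; survivors are independent."""
    rng = random.Random(seed)
    W = {v for v in range(n) if rng.random() <= p0}
    bad = set()
    for e in edges:
        if set(e) <= W:        # fully elected hyperedge
            bad |= set(e)
    I = W - bad
    # sanity check: no hyperedge is fully contained in I
    assert all(not set(e) <= I for e in edges)
    return I
```

If a hyperedge was fully elected, all of its nodes are removed; if it was not, it already misses a node outside $\mathcal{W}$, so no hyperedge survives inside the result.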
The first claim shows that with at least some constant probability there are lots of hyperedges $e$ in the equitable subhypergraph satisfying $|e\cap\mathcal{W}|=|e|-1$. The second claim shows that most of these hyperedges are vertex-disjoint and do not intersect the hyperedges that are entirely contained in $\mathcal{W}$. Let $X$ be a random variable denoting the number of hyperedges in the equitable subhypergraph such that for each such hyperedge all but one of its nodes are in $\mathcal{W}$. The first claim, as mentioned previously, estimates the value of $X$. \begin{claim} With at least constant probability, $X=\Theta(n'/\log^8{n'})$. \end{claim} \begin{proof} For a hyperedge $e$ in the generated equitable subhypergraph $\mathcal{H}'=(\mathcal{V}',\mathcal{E}')$, define $X_e$ to be an indicator random variable taking value one iff $|e\cap\mathcal{W}|=|e|-1$. It is easy to see: \vspace{-2ex} \begin{align*} \mathbb{E}(X) & =\sum_{e\in\mathcal{E}'}{\mathbb{E}(X_e)}=\sum_{e\in\mathcal{E}'}{\left(|e|\cdot(1-\hat{a})\cdot\hat{a}^{|e|-1}\right)} \\ & =(1-\hat{a})\cdot\sum_{e\in\mathcal{E}'}{\left(|e|\cdot\hat{a}^{|e|-1}\right)} = (1-\hat{a})\cdot\sum_{i\geq 2}{\left(iu_i\cdot\hat{a}^{i-1}\right)} \\ & =\Theta(1)\cdot\left(\sum_{i=2}^{\log{n'}}{\left(iu_i\cdot\hat{a}^{i-1}\right)}+\sum_{i>\log{n'}}{\left(iu_i\cdot\hat{a}^{i-1}\right)}\right) \\ & =\Theta(1)\cdot\left(\Theta\left(\frac{n'}{\log^8{n'}}\right)+\sum_{i>\log{n'}}{\left(iu_i\cdot\hat{a}^{i-1}\right)}\right) \end{align*} Notice that: \vspace{-2ex} \begin{align*} \sum_{i>\log{n'}}{\left(iu_i\cdot\hat{a}^{i-1}\right)} & \leq\sum_{i>\log{n'}}{\left(iu_i\cdot \left(e^{-6}\right)^{(i-1)}\right)} \leq \sum_{i>\log{n'}}{\left(iu_i\cdot e^{-6\log{n'}}\right)} \\ & =e^{-6\log{n'}}\cdot\sum_{i>\log{n'}}{iu_i} \leq e^{-6\log{n'}}\cdot n'\cdot{n'\choose 2} \\ & =O\left(\left(n'\right)^{-3}\right) \end{align*} As a result, we know $\mathbb{E}(X)=\Theta(n'/\log^8{n'})$. 
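As a quick numerical sanity check (ours, not part of the proof) of the per-hyperedge expectation $\mathbb{E}(X_e)=|e|\cdot(1-\hat{a})\cdot\hat{a}^{|e|-1}$ used above, one can estimate by simulation the probability that exactly one node of a hyperedge is left out of $\mathcal{W}$:

```python
import random

def estimate_exactly_one_missing(size, a_hat, trials=200_000, seed=42):
    """Empirical probability that |e ∩ W| = |e| - 1 when each of the
    `size` nodes of e joins W independently with probability a_hat."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        marked = sum(rng.random() < a_hat for _ in range(size))
        hits += (marked == size - 1)
    return hits / trials

# closed form used in the expectation computation above
closed_form = lambda k, a: k * (1 - a) * a ** (k - 1)
```

For example, with $|e|=3$ and $\hat{a}=0.3$, both the estimate and the closed form are close to $3\cdot0.7\cdot0.3^2=0.189$.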
To show $X$ is not likely to deviate much from its expectation, we will use Chebyshev's inequality~\cite{mitzenmacher17}, which in turn requires us to calculate the variance of $X$. By the definition of variance, we know: \vspace{-2ex} \begin{align*} \mathrm{Var}(X) =\mathrm{Var}\left(\sum_{e\in\mathcal{E'}}{X_e}\right) = \quad \sum_{e\in\mathcal{E'}}{\mathrm{Var}(X_e)} \quad + \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathrm{Cov}(X_e,X_{e'})} \end{align*} Since $\mathrm{Var}(X_e)=\mathbb{E}(X_e^2)-(\mathbb{E}(X_e))^2\leq\mathbb{E}(X_e^2)=\mathbb{E}(X_e)$, we know $\sum_{e\in\mathcal{E'}}{\mathrm{Var}(X_e)}\leq\mathbb{E}(X)$. On the other hand: \vspace{-2ex} \begin{align*} \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathrm{Cov}(X_{e},X_{e'})} \quad = & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\left(\mathbb{E}(X_{e} X_{e'})-\mathbb{E}(X_{e}) \mathbb{E}(X_{e'})\right)} \\ \leq & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})} \end{align*} Since $\mathcal{H'}$ is a linear hypergraph, we can conclude: \vspace{-2ex} \begin{align*} \phantom{=} & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})}\\ = & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\left( (|e|-1)(|e'|-1)(1-\hat{a})^2\cdot\hat{a}^{|e|+|e'|-3}\right)} +\\ \phantom{+} & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\left((1-\hat{a})\cdot\hat{a}^{|e|+|e'|-2}\right)}\\ \leq & \quad 2\cdot\sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e|\cdot|e'|\cdot\hat{a}^{|e|+|e'|-3})}\\ = & \quad 2\cdot\sum_{e\in\mathcal{E'}}{\left( |e|\cdot\hat{a}^{|e|-1}\cdot\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})} \right)}\\ \end{align*} Notice, in the above, the first equality holds since $X_e X_{e'}=1$ iff $X_e=X_{e'}=1$, which can only happen in one of the two following cases: (a) $e\cap e'=\{u\}$, all nodes in $(e\cup e')-\{u\}$ are marked and $u$ is not marked; or (b) $e\cap
e'=\{u\}$, one node in $e-\{u\}$ is not marked, one node in $e'-\{u\}$ is not marked, and all other nodes in $e\cup e'$ are marked. Our next step is to obtain an upper bound for $\sum_{e,e'\in\mathcal{E'}; e\cap e'\neq\emptyset}{\mathrm{Cov}(X_{e},X_{e'})}$, by bounding $\sum_{e\in\mathcal{E'}}{(|e|\cdot\hat{a}^{|e|-1}\cdot\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})})}$. Define $\Delta_i=\max_{v\in\mathcal{V'}}{d_i(v)}$. Since $\mathcal{H'}$ is equitable, we know $\Delta_i\leq(iu_i/n')\cdot\log^5{n'}$ for $2\leq i\leq\log{n'}$. The analysis in Figure \ref{fig-eqnarray-1} shows $\sum_{e,e'\in\mathcal{E'}; e\cap e'\neq\emptyset}{\mathrm{Cov}(X_{e},X_{e'})}\in O((\mathbb{E}(X))^2/(\log{n'}))$, and some explanations are needed: \begin{itemize} \item To see inequality (\ref{eqn-mis-case2-leq-2}), notice $\sum_{e\in\mathcal{E'}}\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}$ is equal to the sum of $\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}$ and $\sum_{e\in\mathcal{E'};|e|>\log{n'}}\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}$. Let us focus on $\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}$. Fix a hyperedge $e\in\mathcal{E'}$ such that $|e|\leq\log{n'}$, we now bound $\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}$. For each node $v\in e$, for arbitrary $i$, by the definition of $\Delta_i$, we know $d_i(v)\leq\Delta_i$. That is, for each node $v\in e$, for arbitrary $i$, node $v$ is contained within at most $\Delta_i$ dimension $i$ hyperedges; or, put another way, there are at most $\Delta_i-1$ dimension $i$ hyperedges in $\mathcal{E'}$ that intersect with $e$ on node $v$. 
Since $|e|\leq\log{n'}$, we know $\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}\leq\log{n'}\cdot\sum_{i\geq2}{(i\Delta_i \hat{a}^{i-2})}$. Similarly, when $|e|\geq\log{n'}$, we know $\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})}\leq n'\cdot\sum_{i\geq2}{(i\Delta_i \hat{a}^{i-2})}$, as the maximum size for any hyperedge is bounded by $n'$. \item To see inequality (\ref{eqn-mis-case2-leq-4}), notice $\sum_{i>\log{n'}}{(i\Delta_i\hat{a}^{i-2})}\leq n'\cdot(n'\cdot{n'\choose 2}\cdot(e^{-6})^{\log{n'}-1})=O(1/(n')^2)$. In the meantime, $\sum_{i=2}^{\log{n'}}{(i\Delta_i\hat{a}^{i-2})}\leq\sum_{i=2}^{\log{n'}}{((i^2u_i/n')\cdot\log^5{n'}\cdot\hat{a}^{i-2})}\leq(\log^6{n'}/(\hat{a}n'))\cdot\sum_{i=2}^{\log{n'}}{(i\cdot u_i\cdot\hat{a}^{i-1})}\leq (1/\log^2{n'})\cdot(2n'/\log^8{n'})=O(n'/\log^{10}{n'})$. \item To see inequality (\ref{eqn-mis-case2-leq-5}), notice $\sum_{|e|>\log{n'}}{(|e|\cdot\hat{a}^{|e|-1}\cdot(n'(O({n'}/{\log^{10}{n'}})+O({1}/{(n')^2}))))}=\sum_{|e|>\log{n'}}{(|e|\cdot\hat{a}^{|e|-1}\cdot(n'\cdot O({n'}/{\log^{10}{n'}})))}=O((n')^2/\log^{10}{n'})\cdot\sum_{|e|>\log{n'}}{(|e|\cdot\hat{a}^{|e|-1})}$. Moreover, $\sum_{|e|>\log{n'}}{(|e|\cdot\hat{a}^{|e|-1})}\leq{n'\choose 2}\cdot(n'\cdot(e^{-6})^{\log{n'}})\leq 1/(n')^3$. As a result, we know $\sum_{e\in\mathcal{E'};|e|>\log{n'}}{(|e|\cdot\hat{a}^{|e|-1}\cdot(n'(O({n'}/{\log^{10}{n'}})+O({1}/{(n')^2}))))}\leq O(1/(n'\cdot\log^{10}{n'}))$. \item To see inequality (\ref{eqn-mis-case2-leq-6}), notice that $\sum_{|e|\leq\log{n'}}{(|e|\cdot\hat{a}^{|e|-1}\cdot O({1}/{(n')^2})))}=O(\log{n'}/(n')^2)\cdot\sum_{|e|\leq\log{n'}}{(|e|\cdot\hat{a}^{|e|-1})}$. In the meantime, $\sum_{|e|\leq\log{n'}}{(|e|\cdot\hat{a}^{|e|-1})}=\sum_{i=2}^{\log{n'}}{(i\cdot u_i\cdot\hat{a}^{i-1})} \leq 2n'/\log^8{n'}$. Therefore, $\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{(|e|\cdot\hat{a}^{|e|-1}\cdot O({1}/{(n')^2})))}\leq O(1/(n'\cdot\log^{7}{n'}))$. 
\end{itemize} \begin{figure}[t!] \begin{small} \begin{align*} \phantom{\leq} & \quad \sum_{e,e'\in\mathcal{E'}; e\cap e'\neq\emptyset}{\mathrm{Cov}(X_{e},X_{e'})}\\ \leq & \quad 2\cdot\sum_{e\in\mathcal{E'}}{\left( |e|\cdot\hat{a}^{|e|-1}\cdot\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(|e'|\cdot\hat{a}^{|e'|-2})} \right)}\\ \leq & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(\log{n'}\cdot\sum_{i\geq2}{(i\Delta_i \hat{a}^{i-2})}\right)\right)} + \numberthis\label{eqn-mis-case2-leq-2} \\ \phantom{+} & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|>\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(n'\cdot\sum_{i\geq 2}{(i\Delta_i \hat{a}^{i-2})}\right)\right)}\\ = & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(\log{n'}\left(\sum_{i=2}^{\log{n'}}{(i\Delta_i \hat{a}^{i-2})}+\sum_{i>\log{n'}}{(i\Delta_i \hat{a}^{i-2})}\right)\right)\right)} +\\ \phantom{+} & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|>\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(n'\left(\sum_{i=2}^{\log{n'}}{(i\Delta_i \hat{a}^{i-2})}+\sum_{i>\log{n'}}{(i\Delta_i \hat{a}^{i-2})}\right)\right)\right)}\\ \leq & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(\log{n'}\left(\sum_{i=2}^{\log{n'}}{(i\Delta_i \hat{a}^{i-2})}+O\left(\frac{1}{(n')^2}\right)\right)\right)\right)} + \numberthis\label{eqn-mis-case2-leq-4}\\ \phantom{+} & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|>\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(n'\left(O\left(\frac{n'}{\log^{10}{n'}}\right)+O\left(\frac{1}{(n')^2}\right)\right)\right)\right)}\\ \leq & \quad 2\cdot\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\left(\log{n'}\left(\sum_{i=2}^{\log{n'}}{(i\Delta_i \hat{a}^{i-2})}+O\left(\frac{1}{(n')^2}\right)\right)\right)\right)} + \numberthis\label{eqn-mis-case2-leq-5}\\ \phantom{+} & \quad O\left(\frac{1}{n'\cdot\log^{10}{n'}}\right)\\ \leq & \quad 
2\cdot\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{\left(|e|\cdot\hat{a}^{|e|-1}\cdot\log{n'}\cdot\sum_{i=2}^{\log{n'}}{(i\Delta_i \hat{a}^{i-2})}\right)} + O\left(\frac{1}{n'\log^7{n'}}\right) + \numberthis\label{eqn-mis-case2-leq-6}\\ \phantom{+} & \quad O\left(\frac{1}{n'\cdot\log^{10}{n'}}\right)\\ \leq & \quad \frac{2\log^7{n'}}{\hat{a}\cdot n'}\cdot\sum_{e\in\mathcal{E'};|e|\leq\log{n'}}{\left( |e|\cdot\hat{a}^{|e|-1}\cdot\left(\sum_{i=2}^{\log{n'}}{i\cdot u_i\cdot\hat{a}^{i-1}}\right) \right)}+O(1)\\ \leq & \quad \frac{2\log^7{n'}}{\hat{a}\cdot n'\cdot(1-\hat{a})}\cdot\mathbb{E}(X)\cdot\left(\sum_{i=2}^{\log{n'}}{i\cdot u_i\cdot\hat{a}^{i-1}}\right)+O(1)\\ \leq & \quad \frac{2\log^7{n'}}{\hat{a}\cdot n'\cdot(1-\hat{a})}\cdot\mathbb{E}(X)\cdot\frac{2n'}{\log^8{n'}}+O(1)\\ \leq & \quad O\left(\frac{1}{\log{n'}}\right)\cdot\mathbb{E}(X)\cdot\mathbb{E}(X)+O(1)\\ = & \quad O\left(\frac{(\mathbb{E}(X))^2}{\log{n'}}\right)\\ \end{align*} \end{small} \vspace{-6ex} \caption{\textbf{Bounding $\sum_{e,e'\in\mathcal{E'}; e\cap e'\neq\emptyset}{\mathrm{Cov}(X_{e},X_{e'})}$ to $O((\mathbb{E}(X))^2/(\log{n'}))$.}}\label{fig-eqnarray-1} \end{figure} At this point, we can conclude $\mathrm{Var}(X)\leq\mathbb{E}(X)+O((\mathbb{E}(X))^2/\log{n'})=O((\mathbb{E}(X))^2/\log{n'})$. Applying Chebyshev's inequality, our claim follows. \end{proof} We then prove our second claim, which states that there are only a few pairs of intersecting hyperedges in $\mathcal{H'}$ that share a large number of nodes with $\mathcal{W}$. More precisely: \begin{claim} With at least constant probability, equitable hypergraph $\mathcal{H'}$ contains at most $O(n'/\log^9{n'})$ pairs of hyperedges $e,e'$ for which $e\cap e'\neq\emptyset$, $|e\cap\mathcal{W}|\geq|e|-1$, and $e'\backslash e\subseteq\mathcal{W}$.
\end{claim} \begin{proof} Let $Y$ denote the number of such pairs of hyperedges. Since $\mathcal{H'}$ is a linear hypergraph, the analysis in Figure \ref{fig-eqnarray-2} shows that $\mathbb{E}(Y)$ is at most $O(n'/(\log^{10}{n'}))$. As a result, the claim follows by Markov's inequality. \end{proof} \begin{figure}[t!] \begin{align*} \mathbb{E}(Y) \quad \leq & \quad \sum_{e\in\mathcal{E'}}{\left( |e|\cdot\hat{a}^{|e|-1}\cdot\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\hat{a}^{|e'|-1}} \right)}\\ = & \quad \sum_{i=2}^{\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\hat{a}^{|e'|-1}} \right)} + \sum_{i>\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot\sum_{e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\hat{a}^{|e'|-1}} \right)}\\ \leq & \quad \sum_{i=2}^{\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot\log{n'}\cdot\sum_{j\geq 2}{(\Delta_j\cdot\hat{a}^{j-1})} \right)} + \sum_{i>\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot n'\cdot\sum_{j\geq 2}{(\Delta_j\cdot\hat{a}^{j-1})} \right)}\\ \leq & \quad \sum_{i=2}^{\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot\log{n'}\cdot\left(\sum_{j=2}^{\log{n'}}{(\Delta_j\cdot\hat{a}^{j-1})}+\sum_{j>\log{n'}}{(\Delta_j\cdot\hat{a}^{j-1})}\right) \right)} + \\ \phantom{=} & \quad \sum_{i>\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot n'\cdot\left(\sum_{j=2}^{\log{n'}}{(\Delta_j\cdot\hat{a}^{j-1})}+\sum_{j>\log{n'}}{(\Delta_j\cdot\hat{a}^{j-1})}\right) \right)}\\ \leq & \quad \sum_{i=2}^{\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot\log{n'}\cdot\left(O\left(\frac{1}{\log^3{n'}}\right)+O\left(\frac{1}{(n')^3}\right)\right) \right)} + \\ \phantom{=} & \quad \sum_{i>\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot n'\cdot\left(O\left(\frac{1}{\log^3{n'}}\right)+O\left(\frac{1}{(n')^3}\right)\right) \right)}\\ \leq & \quad \sum_{i=2}^{\log{n'}}{\left( i\cdot u_i\cdot\hat{a}^{i-1}\cdot\log{n'}\cdot O\left(\frac{1}{\log^3{n'}}\right) \right)} + \sum_{i>\log{n'}}{\left(
i\cdot u_i\cdot\hat{a}^{i-1}\cdot n'\cdot O\left(\frac{1}{\log^3{n'}}\right) \right)}\\ \leq & \quad O\left(\frac{n'}{\log^{10}{n'}}\right) + O\left(\frac{1}{(n')^2\cdot\log^3{n'}}\right) = O\left(\frac{n'}{\log^{10}{n'}}\right)\\ \end{align*} \vspace{-6ex} \caption{\textbf{Bounding $\mathbb{E}(Y)$ to $O(n'/(\log^{10}{n'}))$.}}\label{fig-eqnarray-2} \end{figure} We now prove the lemma. The above two claims show that in each iteration, with at least constant probability, in $\mathcal{H'}$ there exists a set $\tilde{\mathcal{E}}'$ of hyperedges of cardinality $\Theta(n'/\log^8{n'})$ such that: (a) for $e\in\tilde{\mathcal{E}}'$ we have $e\backslash\mathcal{W}=\{v_e\}$; (b) no $e$ from $\tilde{\mathcal{E}}'$ shares a node with a hyperedge of $\mathcal{H'}$ entirely contained in $\mathcal{W}$; and (c) for $e, e'\in\tilde{\mathcal{E}}'$, $v_{e}\neq v_{e'}$ whenever $e\neq e'$. Now, notice that (a) and (b) imply that after part three of the iteration, for each hyperedge in $\tilde{\mathcal{E}}'$, at least one node has decided to not be in the MIS. Moreover, condition (c) guarantees that these nodes are different. Therefore, we have proved the lemma. \end{proof} The last case focuses on the scenario where $\hat{a}\geq e^{-6}$. In such a situation, $\Theta(n')$ undecided nodes will be marked (i.e., $elected$), yet entirely marked hyperedges will contain at most $O(n'/\log^8{n'})$ nodes. As a result, we know $\Theta(n')$ nodes will decide to join the MIS. \begin{lemma}[Adapted from Case 3 of Lemma 1 in \cite{luczak97}]\label{lemma-linear-mis-remove-nodes-case3} Assume after part one of an iteration there are $n'$ nodes in the generated equitable subhypergraph. Further assume $\hat{a}\geq e^{-6}$. Then, after part three of this iteration, with at least constant probability, at least $\Theta(n')$ previously undecided nodes will decide whether to join the MIS or not.
\end{lemma} \begin{proof} Since $\hat{a}\geq e^{-6}$, we know each node in the generated equitable hypergraph $\mathcal{H'}$ will be selected with probability $e^{-6}$. By a Chernoff bound~\cite{mitzenmacher17}, we know w.h.p.\ w.r.t.\ $n'$, $\Theta(n')$ nodes will be selected into $\mathcal{W}$. I.e., $|\mathcal{W}|=\Theta(n')$ with at least constant probability. On the other hand, in expectation, the number of nodes that belong to some hyperedge that is entirely contained in $\mathcal{W}$ is upper bounded by $\sum_{i\geq 2}{i\cdot u_i\cdot p_0^i}=\sum_{i=2}^{\log{n'}}{i\cdot u_i\cdot p_0^i}+\sum_{i>\log{n'}}{i\cdot u_i\cdot p_0^i}\leq e^{-6}\cdot\sum_{i=2}^{\log{n'}}{i\cdot u_i\cdot \hat{a}^{i-1}}+O(1/(n')^2)=O(n'/\log^8{n'})$. Therefore, by Markov's inequality, we know with at least some constant probability, after part two of the iteration, we can find an independent set of size $\Theta(n')$. Moreover, these nodes will decide to join the MIS by the end of this iteration. \end{proof} Combining the above four lemmas, we can conclude that if prior to an iteration there are $n'$ undecided nodes, then after this iteration, with at least constant probability, at least $\Omega(n'/\log^{17}{n'})$ nodes will decide, provided $n'$ is sufficiently large. Since $n'\leq n$, this means after $O(\log^{18}{n})$ iterations, the number of undecided nodes will be reduced to some sufficiently large constant $c_1$, w.h.p. Once the number of undecided nodes is reduced to $c_1$, during part two of an iteration, one of the two following situations will happen: (a) $p_0\leq\log^8{n'}/n'$, in which case only one node is selected into $\mathcal{W}$; or (b) $e^{-6}\geq p_0>\log^8{n'}/n'$, in which case each of the $n'$ nodes is selected with probability $p_0$. In the first case, the single selected node will decide to join the MIS after this iteration.
In the second case, since $n'\leq c_1$ is a constant, we know with at least some constant probability only one of the $n'$ nodes will be selected into $\mathcal{W}$, and will decide to join the MIS after this iteration. Either way, we know after each iteration, with at least some constant probability, one node will decide to join the MIS. At this point, we can conclude that after at most some poly-logarithmic (w.r.t.\ $n$) iterations, all nodes in $\mathcal{H}$ will decide, w.h.p. Moreover, it is easy to see that the result is indeed an MIS of $\mathcal{H}$. Combining these with Lemma \ref{lemma-decomp}, we immediately obtain the following theorem. \begin{theorem}\label{thm-linear-mis} In the CONGEST model, there exists a distributed algorithm that can solve the MIS problem for linear hypergraphs within poly-logarithmic time, w.h.p. \end{theorem} \section{Computing a GMIS in Constant Dimension Linear Hypergraphs} \subsection{The Algorithm} To compute a generalized maximal independent set (GMIS) for a linear hypergraph, we take an approach similar to the one used in the MIS case: first decompose the input hypergraph; then run the core GMIS algorithm within the generated subhypergraphs in parallel; finally, combine these partial solutions to obtain a complete GMIS for the original input hypergraph. We once again restrict our attention to hypergraphs with $O(\log{n})$ diameter (see Lemma \ref{lemma-decomp}). We also restrict the dimension of the input hypergraph to be some constant $d$. Towards the end of the paper, we will discuss why this restriction is imposed. The core algorithm for computing GMIS in low-diameter hypergraphs is a non-trivial generalization of our previous hypergraph MIS algorithm. Before presenting more details, we introduce some updated notation. The first one is \emph{strict subhypergraph}.
Consider a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$. For a subset $\mathcal{W}$ of $\mathcal{V}$, the induced strict subhypergraph $\mathcal{H}_{\mathcal{W}}=(\mathcal{W},\mathcal{E}_{\mathcal{W}})$ is defined as follows: for each $v\in\mathcal{V}\backslash\mathcal{W}$, delete $v$ from each hyperedge $e\in\mathcal{E}$, while the threshold attached to $e$ remains unchanged; then, for each remaining hyperedge $e'=e\cap\mathcal{W}$, delete $e'$ if $|e'|\leq t_{e'}$. (Notice $t_{e'}=t_e$.) The reason for defining strict subhypergraphs is to maintain a property that is critical to the correctness of our algorithm. More specifically, consider a hypergraph $\mathcal{H}$ and one of its subhypergraphs $\mathcal{H}'$. When dealing with the MIS problem, an independent set of $\mathcal{H}'$ is also independent in $\mathcal{H}$. (See Fact \ref{fact-subhypergraph-mis}.) However, for generalized independent sets, this is no longer the case. By contrast, if $\mathcal{H}''$ is a strict subhypergraph of $\mathcal{H}$, then a generalized independent set of $\mathcal{H}''$ is also a generalized independent set of $\mathcal{H}$. That is, we have: \begin{fact}\label{fact-strict-subhypergraph-gmis} If $\mathcal{H}_{\mathcal{W}}$ is a strict subhypergraph of $\mathcal{H}$, then a generalized independent set of $\mathcal{H}_{\mathcal{W}}$ is also a generalized independent set of $\mathcal{H}$. \end{fact} \begin{proof} We prove the claim by contradiction. Assume $I_{\mathcal{W}}$ is a generalized independent set of $\mathcal{H}_{\mathcal{W}}=(\mathcal{W},\mathcal{E}_{\mathcal{W}})$, but not a generalized independent set of $\mathcal{H}=(\mathcal{V},\mathcal{E})$. Then there must exist a hyperedge $e\in\mathcal{E}$ such that $|e\cap I_{\mathcal{W}}|>t_e$. Notice that all the nodes in $I_{\mathcal{W}}$ are also in $\mathcal{W}$. Therefore, $t_e<|e\cap I_{\mathcal{W}}|\leq|e\cap\mathcal{W}|$.
As a result, during the construction of $\mathcal{H}_{\mathcal{W}}$, a hyperedge $e'=e\cap\mathcal{W}$ is added to $\mathcal{H}_\mathcal{W}$, with threshold $t_{e'}=t_e$. However, recall that $|e'\cap I_{\mathcal{W}}|=|e\cap\mathcal{W}\cap I_{\mathcal{W}}|=|e\cap I_{\mathcal{W}}|>t_e=t_{e'}$, which contradicts the assumption that $I_{\mathcal{W}}$ is a generalized independent set of $\mathcal{H}_{\mathcal{W}}$. Thus our claim is proved. \end{proof} On the other hand, we have also significantly adjusted the definition of equitable hypergraph. In this section, for a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$, define $U_t(\mathcal{H})=U_t$ to be the set of hyperedges with threshold $t$, and $u_t(\mathcal{H})=u_t$ to be the cardinality of $U_t$. Define $d_t(v,\mathcal{H})=d_t(v)$ to be the number of hyperedges that contain $v$ and have threshold $t$. Now, for a hypergraph $\mathcal{H}$, we say it is \emph{equitable} if it contains fewer than $c_{eq}$ nodes (where $c_{eq}$ is a sufficiently large constant), or for each node $v$ and each $i$ where $1\leq i\leq d-1$ we have $d_i(v)\leq(i u_i/n)\cdot\log^5{n}$. We now describe the algorithm for computing GMIS in a low-diameter constant dimension linear hypergraph $\mathcal{H}$. The algorithm contains multiple iterations, each of which has three parts. In the first part, we try to find a large equitable strict subhypergraph $\mathcal{H'}$. In the second part, we add some nodes in $\mathcal{H}'$ into a candidate set $\mathcal{W}$. The detailed rule depends on a parameter $\hat{a}$ satisfying $n'/\log^8{n'}\leq\sum_{i=1}^{d-1}{u_i\cdot \hat{a}^{i}}\leq 2n'/\log^8{n'}$. (Notice that this definition of $\hat{a}$ is quite different from the one we used in our previous linear hypergraph MIS algorithm.) In case $\hat{a}$ is small, we add one node $v$ that maximizes $d_1(v)$ into $\mathcal{W}$; otherwise, we independently add each node into $\mathcal{W}$ with probability $\min\{\hat{a},e^{-6}\}$.
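Since $\sum_{i=1}^{d-1}{u_i\cdot\hat{a}^{i}}$ is continuous and increasing in $\hat{a}$ on $[0,1]$, a value of $\hat{a}$ landing in the prescribed band can be located by simple bisection once the $u_i$ are known. The following Python sketch illustrates one way to do this; the function name, the use of the natural logarithm, and the assumption that a solution exists in $(0,1]$ are ours, not part of the protocol:

```python
import math

def compute_a_hat(u, n, iters=100):
    """Bisection for a_hat with n/log^8(n) <= sum_i u[i]*a_hat**i <= 2n/log^8(n).

    u maps i (1 <= i <= d-1) to the number u_i of threshold-i hyperedges.
    Assumes a solution exists in (0, 1], i.e., sum_i u_i >= n/log^8(n).
    """
    target = n / (math.log(n) ** 8)           # lower end of the band
    g = lambda a: sum(ui * a ** i for i, ui in u.items())
    lo, hi = 0.0, 1.0                         # invariant: g(lo) < target <= g(hi)
    for _ in range(iters):                    # g is increasing, so bisect
        mid = (lo + hi) / 2
        if g(mid) < target:
            lo = mid                          # too small, move right
        else:
            hi = mid                          # large enough, move left
    return hi                                 # by continuity, g(hi) lies in [target, 2*target]
```

After enough halvings the bracketing interval is tiny, so by continuity $g(\hat{a})$ falls inside the band; the marking probability used by the algorithm is then $\min\{\hat{a},e^{-6}\}$.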
Then, we remove from $\mathcal{W}$ all nodes that would violate some hyperedge's threshold constraint in $\mathcal{H}'$. The resulting set $\mathcal{I}'$ is a generalized independent set of $\mathcal{H}'$. In the last part, we add nodes from $\mathcal{I}'$ to $\mathcal{I}$, and remove them from $\mathcal{H}$. We also remove from $\mathcal{H}$ all nodes $v\notin\mathcal{W}$ for which there exists a hyperedge $e\in\mathcal{E}$ such that $|(\{v\}\cup\mathcal{I}')\cap e|>t_e$, as these nodes cannot be added into $\mathcal{I}$. The detailed algorithm is shown in Figure \ref{fig-protocol-linear-gmis-distributed}. For simplicity, we again only include the pseudocode for server nodes. Moreover, to obtain the values of $n'$ and $u_i$ (and some other parameters), we reuse the aggregation procedures described in earlier sections. \begin{figure}[t] \hrule \vspace{1ex}\textbf{Pseudocode executed at a node $v$ in $\mathcal{H}$:}\vspace{1ex} \hrule \begin{small} \begin{algorithmic}[1] \State $state\gets active$ \For {($l_1\gets 1$ to $\Theta(\log^{18}{n})$)} \If {($state\neq included$ and $state\neq excluded$)} \State $state\gets active$ \Comment If $v$ has not decided then join this iteration \EndIf \Statex \hspace{3ex}$\blacktriangleright$ \textsc{Part I}: Create equitable strict subhypergraph \State $\hat{n}\gets\texttt{CountNode}()$ \Comment Count nodes that are still in $active$ state in $O(\log{n})$ time \For {($l_2\gets 1$ to $\Theta(d\cdot\log{\hat{n}})$)} \State $n'\gets\texttt{CountNode}()$ \State $\texttt{CountUi}()$ \Comment Count $u_i$ for $1\leq i\leq d-1$, takes $O(d\cdot\log{n})$ time \If {($\texttt{CheckEq()}=true$)} \Comment Checks if hypergraph is equitable in $O(\log{n})$ time \State \textbf{continue} \ElsIf {($state=active$ \textbf{and} $d_i(v)>\frac{iu_i}{n'}\cdot\log^4{n'}$ for some $1\leq i\leq d-1$)} \State $state\gets idle$ \State Inform adjacent client nodes about $v$ becoming $idle$ for this iteration \EndIf \EndFor \If {($state=idle$)} \Comment Ignore 
part two if $v$ is not in the equitable strict subhypergraph \State \textbf{goto} \textsc{Part III} \EndIf \Statex \hspace{3ex}$\blacktriangleright$ \textsc{Part II}: Generate a generalized independent set \State Compute $\hat{a}$ such that $\frac{n'}{\log^8{n'}}\leq\sum_{i=1}^{d-1}{u_i\cdot \hat{a}^{i}}\leq\frac{2n'}{\log^8{n'}}$ \State $p_0\gets\min\{\hat{a},e^{-6}\}$ \If {($p_0\leq\frac{\log^8{n'}}{n'}$)} \State $v'\gets\texttt{MaxD1}()$ \Comment $\texttt{MaxD1}$ finds an active node $v'$ that maximizes $d_1(v')$ in $O(\log{n})$ time \If {($v=v'$)} $state\gets elected$ \EndIf \ElsIf {($\texttt{Random(0,1)}\leq p_0$)} $state\gets elected$ \EndIf \If {(there is no adjacent client $e$ s.t.\ the number of $elected$ servers connected to $e$ exceeds $t_e$)} \State $state\gets included$ \Comment $v$ decides to join the GMIS \EndIf \Statex \hspace{3ex}$\blacktriangleright$ \textsc{Part III}: Update the hypergraph \If {($state=included$)} \State Inform adjacent client nodes about $v$ deciding to join the GMIS \EndIf \If {(there is an adjacent client $e$ s.t.\ the number of $included$ servers connected to $e$ reaches $t_e$)} \State $state\gets excluded$ \Comment $v$ decides to not join the GMIS \EndIf \State Inform adjacent client nodes about $v$'s decision if it has decided \EndFor \end{algorithmic} \end{small} \hrule\vspace{1ex} \caption{\textbf{Pseudocode executed at a node in $\mathcal{H}$ for computing GMIS.}}\label{fig-protocol-linear-gmis-distributed} \vspace{-3ex} \end{figure} \subsection{The Analysis} The pseudocode clearly indicates that the runtime of the algorithm is poly-logarithmic. In this part, we focus on showing the correctness of the algorithm. To begin with, we show that after part one of each main iteration, an equitable strict subhypergraph containing at least half of the undecided nodes is generated.
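For concreteness, the strict-subhypergraph construction that part one relies on can be sketched in a few lines of Python; the representation of a hypergraph as a list of (node set, threshold) pairs is purely illustrative and not part of the distributed algorithm:

```python
def strict_subhypergraph(hyperedges, W):
    """Induce the strict subhypergraph on node set W.

    hyperedges: list of (frozenset_of_nodes, threshold) pairs, one per
    hyperedge e with threshold t_e.  Each hyperedge is restricted to W
    with its threshold unchanged, and the restriction e' = e & W is kept
    only when |e'| > t_e (hyperedges with |e'| <= t_{e'} are deleted).
    """
    W = frozenset(W)
    return [(e & W, t) for e, t in hyperedges if len(e & W) > t]

# A tiny example: {3, 4} restricts to {3}, whose size does not exceed its
# threshold 1, so it is deleted; {1, 2, 3} survives with threshold 1.
H = [(frozenset({1, 2, 3}), 1), (frozenset({3, 4}), 1)]
HW = strict_subhypergraph(H, {1, 2, 3})
```

Consistent with Fact \ref{fact-strict-subhypergraph-gmis}, any subset of $\mathcal{W}$ that violates no threshold in the returned strict subhypergraph violates no threshold in the original hypergraph either.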
\begin{lemma}\label{lemma-linear-gmis-large-eq} Assume at the beginning of an iteration there are $\hat{n}$ nodes in $\mathcal{H}$ that still have not decided whether to join the GMIS or not. Then, after part one of this iteration, there are at least $\hat{n}/2$ nodes in $active$ state, and they induce an equitable strict subhypergraph. \end{lemma} \begin{proof} During part one, we have an inner loop containing $\Theta(d\cdot\log{\hat{n}})$ iterations. Assume there are $\hat{n}'$ $active$ nodes at the beginning of an inner iteration. If $active$ nodes have not formed an equitable strict subhypergraph yet, then within this inner iteration, each $active$ node $v$ will check whether $d_i(v)>(iu_i/\hat{n}')\cdot\log^4{\hat{n}'}$ for some $1\leq i\leq d-1$. If such an $i$ exists, then $v$ will set itself as $idle$, and inform adjacent client nodes about this. We now argue that if at the beginning of an inner iteration the $\hat{n}'$ $active$ nodes do not form an equitable hypergraph, and by the end of this inner iteration the updated hypergraph is still not equitable, then by the end of this inner iteration, for some $i$ where $1\leq i\leq d-1$, the number of active threshold $i$ hyperedges decreases by at least a factor of $\log{\hat{n}'}$. To see this, notice that for such an event to happen, there must exist some node $v$ and some $1\leq i\leq d-1$ such that, prior to this inner iteration, $d_i(v)\leq(iu_i/\hat{n}')\cdot\log^4{\hat{n}'}$; and after this inner iteration, $d'_i(v)>(iu'_i/\hat{n}'')\cdot\log^5{\hat{n}''}$. Notice that, according to the definition, we know $d'_i(v)\leq d_i(v)$. If $u_i$ decreases by a factor of less than $\log{\hat{n}'}$, then $d'_i(v)>(iu'_i/\hat{n}'')\cdot\log^5{\hat{n}''}>(iu_i/\hat{n}'')\cdot(1/\log{\hat{n}'})\cdot\log^5{\hat{n}''}\geq(iu_i/\hat{n}')\cdot(1/\log{\hat{n}'})\cdot\log^5{\hat{n}'}\geq d_i(v)$, a contradiction.
With the above claim, we argue that $\Theta(d\cdot\log{\hat{n}})$ inner iterations are enough to generate an equitable strict subhypergraph. Assume that prior to the first inner iteration we have $x_i$ hyperedges with threshold $i$, where $1\leq i\leq d-1$. Due to Fact \ref{fact-linear-hypergraph-edge-num}, we know $\sum_{i=1}^{d-1}{x_i}\leq{\hat{n}\choose 2}<\hat{n}^2$, which implies $x_i<\hat{n}^2$. After each inner iteration, either we have an equitable strict subhypergraph, or the number of active threshold $i$ hyperedges is decreased by at least a factor of $\log{\hat{n}'}$ for some $i$, where $\hat{n}'$ is the number of active nodes prior to this inner iteration. Notice that once the number of active threshold $i$ hyperedges drops below one for all $i\leq d-1$, the resulting strict subhypergraph must be equitable. On the other hand, for $x_i$ to drop below one, we need at most $O(\log{\hat{n}})$ inner iterations. Hence, the total number of inner iterations we need is at most $O(d\cdot\log{\hat{n}})$. Lastly, we argue that the equitable strict subhypergraph generated by part one contains at least $\hat{n}/2$ nodes. To see this, notice that for arbitrary $i$, prior to an inner iteration, if there are $\hat{n}'$ $active$ nodes in total, then there are at most $(d/i)\cdot\hat{n}'/\log^4{\hat{n}'}$ nodes satisfying $d_i(v)>(iu_i/\hat{n}')\cdot\log^4{\hat{n}'}$. Hence, during this inner iteration, we set at most $d^2\cdot\hat{n}'/\log^4{\hat{n}'}$ $active$ nodes to $idle$. Since there are only $O(d\cdot\log{\hat{n}})$ iterations, we know after part one, the generated equitable strict subhypergraph contains at least $\hat{n}/2$ nodes. \end{proof} In the following discussion, we consider parts two and three of the main iteration.
In particular, we show that if the generated equitable strict subhypergraph contains sufficiently many nodes, then after parts two and three, with at least constant probability, many previously undecided nodes (in the equitable strict subhypergraph) will decide. We focus on the most involved case in which $\log^8{n'}/n'\leq\hat{a}\leq e^{-6}$. \begin{lemma}\label{lemma-linear-gmis-remove-nodes-case2} Assume after part one of an iteration there are $n'$ nodes in the generated equitable strict subhypergraph. Further assume during part two $\log^8{n'}/n'\leq\hat{a}\leq e^{-6}$. Then, after part three of this iteration, with at least some constant probability, at least $\Theta(n'/\log^{8}{n'})$ previously undecided nodes will decide whether to join the GMIS or not. \end{lemma} \begin{proof} Let $\mathcal{H}'=(\mathcal{V}',\mathcal{E}')$ be the generated equitable hypergraph. Let $\mathcal{W}$ be the set of marked (i.e., $elected$) nodes during part two. Let $X$ be a random variable denoting the number of hyperedges $e$ in $\mathcal{H}'$ satisfying $|e\cap\mathcal{W}|=t_e$. The proof relies on two key claims. \begin{claim}\label{claim-linear-gmis-remove-nodes-case2-claim1} With at least some constant probability, $X=\Theta(n'/\log^8{n'})$. \end{claim} \begin{proof} For a hyperedge $e$ in the generated equitable hypergraph $\mathcal{H}'$, define $X_e$ to be an indicator random variable taking value one iff $|e\cap\mathcal{W}|=t_e$. We now calculate $\mathbb{E}(X)$. By linearity of expectation, we know $\mathbb{E}(X)=\sum_{e\in\mathcal{E}'}{\mathbb{E}(X_e)}=\sum_{e\in\mathcal{E'}}{({|e|\choose t_e}\cdot\hat{a}^{t_e}\cdot(1-\hat{a})^{|e|-t_e})}$. Since $|e|\leq d=\Theta(1)$ and $\log^8{n'}/n'\leq\hat{a}\leq e^{-6}$, we know ${|e|\choose t_e}=\Theta(1)$ and $(1-\hat{a})^{|e|-t_e}=\Theta(1)$.
Therefore, we know $\mathbb{E}(X)=\Theta(1)\cdot\sum_{e\in\mathcal{E'}}{\hat{a}^{t_e}}=\Theta(1)\cdot\sum_{i=1}^{d-1}{(u_i\cdot\hat{a}^i)}=\Theta(1)\cdot\Theta(n'/\log^8{n'})=\Theta(n'/\log^8{n'})$. We will show the concentration of $X$ via Chebyshev's inequality, and hence calculate the variance of $X$: $\mathrm{Var}(X)=\mathrm{Var}(\sum_{e\in\mathcal{E'}}{X_e})=\sum_{e\in\mathcal{E'}}{\mathrm{Var}(X_e)}+\sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathrm{Cov}(X_e,X_{e'})}$. Since $\mathrm{Var}(X_e)=\mathbb{E}(X_e^2)-(\mathbb{E}(X_e))^2\leq\mathbb{E}(X_e^2)=\mathbb{E}(X_e)$, we know $\sum_{e\in\mathcal{E'}}{\mathrm{Var}(X_e)}\leq\mathbb{E}(X)$. On the other hand, notice: \vspace{-2ex} \begin{align*} \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathrm{Cov}(X_{e},X_{e'})} \quad = & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{(\mathbb{E}(X_{e} X_{e'})-\mathbb{E}(X_{e}) \mathbb{E}(X_{e'}))} \\ \leq & \quad \sum_{e,e'\in\mathcal{E'};e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})} \end{align*} Fix two hyperedges $e$ and $e'$ such that $e\cap e'\neq\emptyset$. Since $X_e$ and $X_{e'}$ are indicator random variables, we know $\mathbb{E}(X_eX_{e'})=\mathbb{P}(X_e=1\wedge X_{e'}=1)$. Since $\mathcal{H'}$ is a linear hypergraph, assume $e\cap e'=\{v\}$. (I.e., $e$ and $e'$ overlap on node $v$.) By the definition of $X_e$ and $X_{e'}$, the event ``$X_e=1\wedge X_{e'}=1$'' happens iff one of the two following (disjoint) events happens: (a) $v$ is marked, $t_e-1$ of the $|e|-1$ nodes in $e\backslash\{v\}$ are marked, and $t_{e'}-1$ of the $|e'|-1$ nodes in $e'\backslash\{v\}$ are marked; or (b) $v$ is not marked, $t_e$ of the $|e|-1$ nodes in $e\backslash\{v\}$ are marked, and $t_{e'}$ of the $|e'|-1$ nodes in $e'\backslash\{v\}$ are marked.
Therefore, we can further bound $\sum_{e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})}$: \vspace{-2ex} \begin{align*} \sum_{e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})} = & \sum_{e\cap e'\neq\emptyset}{\left( \hat{a}\cdot{|e|-1\choose t_e-1}\cdot\hat{a}^{t_e-1}\cdot(1-\hat{a})^{|e|-t_e}\cdot{|e'|-1\choose t_{e'}-1}\cdot\hat{a}^{t_{e'}-1}\cdot(1-\hat{a})^{|e'|-t_{e'}} \right)} +\\ \phantom{+} & \sum_{e\cap e'\neq\emptyset}{\left( (1-\hat{a})\cdot{|e|-1\choose t_e}\cdot\hat{a}^{t_e}\cdot(1-\hat{a})^{|e|-t_e-1}\cdot{|e'|-1\choose t_{e'}}\cdot\hat{a}^{t_{e'}}\cdot(1-\hat{a})^{|e'|-t_{e'}-1} \right)}\\ \leq & \frac{2}{\hat{a}}\cdot\sum_{e\cap e'\neq\emptyset}{\left( {|e|\choose t_e}\cdot\hat{a}^{t_e}\cdot{|e'|\choose t_{e'}}\cdot\hat{a}^{t_{e'}} \right)} \leq \frac{2n'}{\log^8{n'}}\cdot\sum_{e\cap e'\neq\emptyset}{\left( \Theta(1)\cdot\hat{a}^{t_e}\cdot\Theta(1)\cdot\hat{a}^{t_{e'}} \right)}\\ \leq & \Theta\left(\frac{n'}{\log^8{n'}}\right)\cdot\sum_{e\in\mathcal{E'}}{\left( \hat{a}^{t_e}\cdot\sum_{e\cap e'\neq\emptyset}{\hat{a}^{t_{e'}}} \right)} \end{align*} Next, we need to estimate $\sum_{e\in\mathcal{E'}}{(\hat{a}^{t_e}\cdot\sum_{e\cap e'\neq\emptyset}{\hat{a}^{t_{e'}}})}$ to upper bound $\sum_{e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})}$. Define $\Delta_i=\max_{v\in\mathcal{V'}}{d_i(v)}$. Since $\mathcal{H'}$ is equitable, we have $\Delta_i\leq(iu_i/n')\cdot\log^5{n'}$ for $1\leq i\leq d-1$. Fix a hyperedge $e\in\mathcal{E'}$; we now give an upper bound on $\sum_{e\cap e'\neq\emptyset}{\hat{a}^{t_{e'}}}$. Consider an arbitrary node $v\in e$. For every $1\leq j\leq d-1$, we know $v$ is contained within $d_j(v)\leq\Delta_j$ hyperedges of threshold $j$. Meanwhile, $|e|\leq d$. Hence, $\sum_{e\cap e'\neq\emptyset}{\hat{a}^{t_{e'}}}\leq d\cdot\sum_{j=1}^{d-1}{(\Delta_j\cdot\hat{a}^j)}$.
As a result, we know: \vspace{-2ex} \begin{align*} \sum_{e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})} \quad \leq & \quad \Theta\left(\frac{n'}{\log^8{n'}}\right)\cdot\sum_{e\in\mathcal{E'}}{\left( \hat{a}^{t_e}\cdot\sum_{e\cap e'\neq\emptyset}{\hat{a}^{t_{e'}}} \right)}\\ \leq & \quad \Theta\left(\frac{n'}{\log^8{n'}}\right)\cdot\sum_{e\in\mathcal{E'}}{\left( \hat{a}^{t_e}\cdot d\cdot\sum_{j=1}^{d-1}{(\Delta_j\cdot\hat{a}^j)} \right)}\\ \leq & \quad \Theta\left(\frac{1}{\log^3{n'}}\right)\cdot\sum_{e\in\mathcal{E'}}{\left( \hat{a}^{t_e}\cdot\sum_{j=1}^{d-1}{(u_j\cdot\hat{a}^j)} \right)}\\ \leq & \quad \Theta\left(\frac{1}{\log^3{n'}}\right)\cdot\sum_{e\in\mathcal{E'}}{\left(\hat{a}^{t_e}\cdot\frac{2n'}{\log^8{n'}}\right)} \end{align*} Recall that we have previously shown $\mathbb{E}(X)=\Theta(1)\cdot\sum_{e\in\mathcal{E'}}{\hat{a}^{t_e}}=\Theta(n'/\log^8{n'})$. Thus, $\sum_{e\cap e'\neq\emptyset}{\mathbb{E}(X_{e}X_{e'})}\leq \Theta(1/\log^3{n'})\cdot\mathbb{E}(X)\cdot\sum_{e\in\mathcal{E}'}{\hat{a}^{t_e}}=\Theta(1/\log^3{n'})\cdot(\mathbb{E}(X))^2$. At this point, we know $\mathrm{Var}(X)\leq\mathbb{E}(X)+\Theta(1/\log^3{n'})\cdot(\mathbb{E}(X))^2$. Recall that $\mathbb{E}(X)=\Theta(n'/\log^8{n'})$; thus $\mathrm{Var}(X)=O((\mathbb{E}(X))^2/\log^3{n'})$. Hence, the claim follows by Chebyshev's inequality. \end{proof} \begin{claim}\label{claim-linear-gmis-remove-nodes-case2-claim2} With at least some constant probability, $\mathcal{H'}$ contains at most $O(n'/\log^{10}{n'})$ pairs of hyperedges $e,e'$ for which $e\cap e'\neq\emptyset$, $|e\cap\mathcal{W}|\geq t_e$, and $|(e'\backslash e)\cap\mathcal{W}|\geq t_{e'}$. \end{claim} \begin{proof} Let $Y$ denote the number of such pairs of hyperedges. Recall that we have defined $\Delta_i=\max_{v\in\mathcal{V'}}{d_i(v)}$, and hence know that $\Delta_i\leq(iu_i/n')\cdot\log^5{n'}$ for $1\leq i\leq d-1$.
As a result, we can bound $\mathbb{E}(Y)$ as follows: \vspace{-2ex} \begin{align*} \mathbb{E}(Y) \leq & \sum_{e\in\mathcal{E'}}{\left( {|e|\choose t_e}\cdot\hat{a}^{t_e}\cdot\sum_{e\cap e'\neq\emptyset}{\left({|e'|-1\choose t_{e'}}\cdot\hat{a}^{t_{e'}}\right)} \right)} \leq \left[{d\choose d/2}\right]^2\cdot\sum_{e\in\mathcal{E}'}{\left(\hat{a}^{t_e}\cdot\sum_{e\cap e'\neq\emptyset}{\hat{a}^{t_{e'}}}\right)}\\ \leq & \Theta(1)\cdot\sum_{e\in\mathcal{E}'}{\left( \hat{a}^{t_e}\cdot d\cdot\sum_{i=1}^{d-1}{(\Delta_i\cdot\hat{a}^i)} \right)} \leq \Theta\left(\frac{\log^5{n'}}{n'}\right)\cdot\sum_{e\in\mathcal{E}'}{\left( \hat{a}^{t_e}\cdot\sum_{i=1}^{d-1}{(u_i\cdot\hat{a}^{i})} \right)}\\ = & \Theta\left(\frac{1}{\log^3{n'}}\right)\cdot\sum_{e\in\mathcal{E}'}{\hat{a}^{t_e}} = \Theta\left(\frac{1}{\log^3{n'}}\right)\cdot\sum_{i=1}^{d-1}{(u_i\cdot\hat{a}^{i})} = \Theta\left(\frac{n'}{\log^{11}{n'}}\right) \end{align*} By Markov's inequality, the claim follows. \end{proof} The above two claims show that in each iteration, with at least some constant probability, in $\mathcal{H'}$ there exists a set $\tilde{\mathcal{E}}'$ of hyperedges of cardinality $\Theta(n'/\log^8{n'})$ such that: (a) for each $e\in\tilde{\mathcal{E}}'$, exactly $t_e$ nodes are in $\mathcal{W}$; (b) for $e\in\tilde{\mathcal{E}}'$ and $e'\in\mathcal{E}'$, if $e\cap e'\neq\emptyset$ then $|(e'\backslash e)\cap\mathcal{W}|< t_{e'}$; and (c) for $e\in\tilde{\mathcal{E}}'$ and $e'\in\tilde{\mathcal{E}}'$, there exist $v\in e$ and $v'\in e'$ such that $v\neq v'$ and neither $v$ nor $v'$ is in $\mathcal{W}$. Now, notice that (a) and (b) imply that after part three of the iteration, for each hyperedge in $\tilde{\mathcal{E}}'$, at least one node has decided to not be in the GMIS. Moreover, condition (c) guarantees that these nodes are different. Therefore, we have proved the lemma.
\end{proof} The remaining two cases (namely, $\hat{a}\leq\log^8{n'}/n'$ and $\hat{a}\geq e^{-6}$) are simpler; interested readers can refer to Lemma \ref{lemma-linear-gmis-remove-nodes-case1} and Lemma \ref{lemma-linear-gmis-remove-nodes-case3} in Appendix \ref{appdix-gmis-lemma} for more details. Finally, we conclude that these lemmas prove the correctness of our algorithm. \begin{theorem}\label{thm-linear-gmis} In the CONGEST model, there exists a distributed algorithm that computes a GMIS for constant dimension linear hypergraphs within poly-logarithmic time, w.h.p. \end{theorem} \begin{proof} Lemmas \ref{lemma-linear-gmis-large-eq}, \ref{lemma-linear-gmis-remove-nodes-case2}, \ref{lemma-linear-gmis-remove-nodes-case1}, and \ref{lemma-linear-gmis-remove-nodes-case3} tell us: if prior to an outer iteration there are $n'$ undecided nodes, then after this iteration, with at least some constant probability, at least $\Theta(n'/\log^{17}{n'})$ of these nodes will decide, provided that $n'$ is sufficiently large. Since $n'\leq n$, this means after at most some poly-logarithmic (w.r.t.\ $n$) outer iterations, the number of undecided nodes will be reduced to a sufficiently large constant $c_1$, w.h.p. Now, once the number of undecided nodes is reduced to $c_1$, during part two of an outer iteration, one of the two following scenarios will happen: (a) $p_0\leq\log^8{n'}/n'$, in which case only one node is selected into $\mathcal{W}$; or (b) $e^{-6}\geq p_0>\log^8{n'}/n'$, in which case each of the $n'$ nodes is selected with probability $p_0$. In the first case, the single selected node will decide after this iteration. In the second case, since $n'\leq c_1$ is a constant, we know with at least constant probability only one of the $n'$ nodes will be selected into $\mathcal{W}$, and will decide after this iteration.
Therefore, we can conclude that when the number of undecided nodes is at most $c_1$, after each iteration, with at least constant probability, at least one node will decide. At this point, we can claim that after at most some poly-logarithmic (w.r.t.\ $n$) number of outer iterations, all nodes in $\mathcal{H}$ will decide, w.h.p. Moreover, it is easy to see that the result is indeed a GMIS of $\mathcal{H}$. Combining this with Lemma \ref{lemma-decomp} proves the theorem. \end{proof} \section{Summary and Discussion} In this paper, we study the problem of efficient computation of MIS and GMIS in linear hypergraphs in the CONGEST model. In particular, we have developed a poly-logarithmic time randomized algorithm for computing an MIS in arbitrary linear hypergraphs. We have then generalized this algorithm and devised a variant that is able to compute a GMIS in constant dimension linear hypergraphs, again in poly-logarithmic time. To the best of our knowledge, this is the first work that defines the GMIS problem and devises non-trivial algorithms for computing it. We believe this problem deserves further investigation. On the one hand, it can potentially model many real-world problems that involve multi-party interactions; on the other hand, it is also a challenging symmetry breaking problem, and solving it efficiently seems to require the development of novel techniques. A natural question to ask is: how can one efficiently compute a GMIS for linear hypergraphs with super-constant dimension in the CONGEST model? (For the LOCAL model, recall that Theorem \ref{thm-gmis-local} already gives the answer.) Why does an algorithm (or, the techniques behind it) that can solve MIS for arbitrary dimension linear hypergraphs stop at constant dimension for GMIS? It turns out there are several difficulties.
To begin with, for the key parameter $\hat{a}$, in the GMIS setting, instead of our current definition, the most natural one would actually be $\sum_{i=2}^{d}\sum_{j=1}^{i-1}{i\choose j}\cdot u_{i,j}\cdot\hat{a}^j\cdot(1-\hat{a})^{i-j}=\Theta(n'/\log^{\Theta(1)}{n'})$, where $u_{i,j}$ is the number of size $i$ hyperedges with threshold $j$. However, this definition would break the proof of Lemma \ref{lemma-linear-gmis-remove-nodes-case2}. In particular, the analysis for Claim \ref{claim-linear-gmis-remove-nodes-case2-claim2} is no longer valid. On the other hand, once we introduce the notion of $u_{i,j}$, the definition of an equitable hypergraph also needs to be adjusted: in the GMIS setting, $\mathcal{H}$ is equitable if it contains not too many nodes, or if for each node $v$, for each $i$ where $2\leq i\leq d$, and for each $j$ where $1\leq j\leq i-1$, it holds that $d_{i,j}(v)\leq(i u_{i,j}/n)\cdot\log^{\Theta(1)}{n}$. Unfortunately, this definition could greatly increase the time complexity of the equitable subhypergraph generation algorithm: for given $i$ and $j$, the value of $d_{i,j}(v)$ is not necessarily monotonically decreasing over multiple iterations. To summarize, we suspect that GMIS might be fundamentally harder than MIS, and that obtaining more general solutions might require non-trivial novel algorithmic techniques. \clearpage \bibliographystyle{plainurl}
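As a side illustration of the "natural" condition on $\hat{a}$ discussed above, the sketch below evaluates its left-hand side for a given threshold profile $u_{i,j}$ and grid-searches for a value of $\hat{a}$ that hits a given target. This is purely our own illustration: the dict-of-dicts encoding of $u_{i,j}$ and the grid search are assumptions, not part of the algorithm.

```python
from math import comb

def expected_marked(u, a):
    """Left-hand side of the 'natural' a-hat condition:
    sum_{i=2}^{d} sum_{j=1}^{i-1} C(i, j) * u[i][j] * a^j * (1 - a)^(i - j),
    where u[i][j] is the number of size-i hyperedges with threshold j
    (the dict-of-dicts encoding is our own choice of representation)."""
    return sum(comb(i, j) * uij * a**j * (1.0 - a)**(i - j)
               for i, row in u.items() for j, uij in row.items())

def find_a_hat(u, target, steps=10**4):
    """Grid search for the a in (0, 1) whose value is closest to the target.
    A grid search is used because the sum need not be monotone in a,
    so plain bisection is not directly applicable."""
    return min((k / steps for k in range(1, steps)),
               key=lambda a: abs(expected_marked(u, a) - target))
```

For instance, with ten size-2 hyperedges of threshold 1, the sum is $20\hat{a}(1-\hat{a})$, which attains the value 5 only at $\hat{a}=1/2$.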
\section{Introduction} The existence of dark matter (DM) is strongly supported by multiple lines of evidence accumulated over the last decades~\cite{zwicky1, zwicky2, vera, bulletcluster1, bulletcluster2, bc3, cmb, cmb1, cmb2}. Despite the world-wide experimental effort, the identity of DM is still unknown and the need for novel ideas is more pressing than ever. Under the assumption that DM is made of beyond Standard Model (BSM) particles, several experiments have been designed. In particular, one of the most promising strategies is to indirectly detect DM from high density regions, e.g. the Galactic Center (GC)~\cite{vanEldik:2015qla}, where it can either decay or annihilate producing radiation that reaches our detectors on Earth. The main complication of this strategy is the fact that it might be non-trivial to distinguish a signal produced by DM from the background produced by other astrophysical sources. A distinctive piece of evidence would come from the observation of peaks in the astrophysical spectrum, such as the ones predicted in~\cite{Bringmann:2012vr, Garny:2013, Kopp:2014, Okada:2014zja, Garny:2015wea, Kumar:2016cum, Bartels:2017dpb} due to the annihilation of fermionic Majorana DM. In this scenario, the 2 to 2 annihilation of DM via a charged mediator is p-wave suppressed, making internal bremsstrahlung more likely to happen. From this process a sharp peak in the spectrum is expected at the DM mass energy scale for a mediator close in mass to the DM particle. While DM coupling to quarks is already tightly constrained for this scenario~\cite{Garny:2015wea}, the leptophilic case still offers a wide window to explain the DM nature. In this context, annihilation is not the only source of a peak in the spectrum: for this class of models, another peak is expected at the DM-mediator mass splitting energy due to the scattering of DM with cosmic ray (CR) electrons, as analysed in~\cite{Gorchtein:2010xa, Profumo:2011jt, Huang:2011dg, Gomez:2013qra}.
Depending on the parameters of the model (mass splitting, coupling, etc.) the number of photons at this energy can be comparable to the one due to annihilation. Nevertheless, what can certainly differentiate the two signals is the fact that only the flux of photons coming from the DM-CR electron scattering can be circularly polarised. A net circular polarisation signal can be observed in the sky when there is an excess of one circular polarisation state over the other. Since photons flip helicity under parity, parity must be violated in at least one of the dominant photon emission processes. But parity (P) violation is not the only condition required: there must be either an asymmetry in the number density of one of the particles in the initial state, or CP must be violated by the interactions at play. Therefore, an interaction where the initial state is a CP-eigenstate, such as the annihilation of a Majorana particle, cannot generate circular polarisation even if it violates parity. Using these arguments, in~\cite{Kumar:2016cum,Elagin:2017cgu,Gorbunov:2016zxf,Boehm:2017nrl, Huang:2019ikw, Balaji:2019fxd, Balaji:2020oig} it is suggested that DM and neutrinos can generate circularly polarised signals in X-rays or gamma-rays through decays and interactions with SM particles, pointing out an unexplored way to look for new physics. In particular, a net circular polarisation signal can be generated by DM interactions with ambient CRs. Motivated by this, in our work we perform for the first time the full computation of the flux of circularly polarised photons coming from the interactions between a Majorana DM fermion and CR electrons in the GC. After finding that the circular polarisation fraction can reach values higher than $90\%$, we argue that this feature and its energy dependence could be used to reveal these BSM interactions as well as to study their nature.
\section{Circular polarisation} In this section we present the formalism used for the photon polarisation states. As previously anticipated, in order to have a source of net polarisation both P and CP symmetries need to be violated by the underlying physics process. Photon helicities flip under P transformations. Therefore, if the physics is invariant under parity, no helicity is preferred over the other, as both couple in the same way. The simplest way to have a parity violating interaction is to have chiral interactions, with right and left-handed fermion components coupled in different ways. However, if P is violated but CP is not, the CP conjugate process will generate the opposite helicity with the same rate, making it effectively impossible to produce a net circular polarisation. In this work we consider a scenario in which CP is violated not at the fundamental level, but by means of an asymmetry in one of the particles of the initial state, i.e. the number density of the particle is not the same as the number density of its antiparticle. In this situation, the CP-conjugate process cannot counter-balance the production of polarised photons. For our calculations, we use the same convention for the photon polarisation vectors that is used in~\cite{Boehm:2017nrl}. We consider a photon with momentum $k^\mu=(k_0, k_x, k_y, k_z)$, whose two possible transverse polarisation vectors are \begin{equation} \epsilon_1^\mu(k)=\frac{1}{k_0 k_T}(0, k_x k_z, k_y k_z, -k_T^2) \end{equation} and \begin{equation} \epsilon_2^\mu(k)=\frac{1}{k_T}(0, -k_y, k_x, 0), \end{equation} where $k_T=\sqrt{k_x^2+k_y^2}$. The positive and negative photon circular polarisation vectors are then defined as \begin{equation} \epsilon_{\pm}^\mu(k)= \frac{1}{2}(\mp\epsilon_1^\mu(k)-i \epsilon_2^\mu(k) ).
\end{equation} These definitions allow us to define the squared helicity amplitudes $\mathcal{A}_-= \sum_{spins} |\epsilon^{\mu}_{-}\mathcal{M}_\mu|^2$ and $\mathcal{A}_+= \sum_{spins} |\epsilon^{\mu}_{+}\mathcal{M}_\mu|^2$ such that their sum is equivalent to the total averaged amplitude. For a P violating interaction $\mathcal{A}_- \neq \mathcal{A}_+$ and a net circularly polarised spectrum is expected. \section{Model} \begin{figure}[t!] \centering \includegraphics[scale=0.23]{figures/exm_exma1.pdf} \qquad \includegraphics[scale=0.23]{figures/exm_exma2.pdf} \qquad \includegraphics[scale=0.23]{figures/exm_exma3.pdf} \caption{ \label{fig:diagrams} Diagrams for the resonantly enhanced 2 to 3 radiative processes of electrons scattering with a Majorana DM particle through a right-handed scalar mediator. } \end{figure} In order to compute the circular polarisation asymmetries from the expected flux of photons coming from DM-CR scattering in the GC, following~\cite{Profumo:2011jt, Kopp:2014, Garny:2015wea}, we consider a t-channel simplified model in which a Majorana DM candidate $\tilde{\chi}$ is coupled to right-handed electrons by means of a charged scalar mediator $\varphi$. The SM Lagrangian is therefore extended with two degrees of freedom and the dark sector interactions are given by \begin{equation} \mathcal{L}_{DM} = i \bar{\psi}_{\tilde{\chi}}(\slashed{D} - m_{\tilde{\chi}})\psi_{\tilde{\chi}} + D_\mu \varphi^\dagger D^\mu \varphi - m_\varphi \varphi^\dagger \varphi + ( a_R \, \bar{e}_R \, \psi_{\tilde{\chi}} \, \varphi + h.c.) \, . \end{equation} This simplified model can be interpreted in a supersymmetry (SUSY) context where $\tilde{\chi}$ is the lightest neutralino and $\varphi$ is the right-handed selectron. The scalar mediator is carrying the same quantum numbers of the right-handed electron, since the DM is a singlet of the SM gauge groups. The model parameter space is three dimensional, i.e. 
is characterised uniquely by the set of parameters $\{m_{\tilde{\chi}}, m_\varphi, a_R \}$. Additionally, in order to ensure DM stability, the mass of the mediator has to be bigger than the mass of the DM candidate ($m_\varphi > m_{\tilde{\chi}}$). By considering a coupling solely to the right-handed component of the electron, we maximise the parity violation of the model and consequently the possibility to produce a net signal of circularly polarised photons. The choice of a t-channel model is motivated by the fact that they can lead to a kinematic enhancement in the DM-CR scattering by means of a resonant contribution when the mass splitting $\Delta M = m_\varphi - m_{\tilde{\chi}}$ is of the order of the CR energies ($\sim$ few GeV). Thanks to this feature, one can exploit the resonance to probe the degenerate region of the parameter space which is difficult to access at colliders and other experiments. The relevant diagrams of the DM-CR interactions which are resonantly enhanced can be seen in Fig.~\ref{fig:diagrams}. \section{Polarised photon flux from cosmic ray scattering} In this section we present results for the photon spectrum generated by DM-CR scattering. In particular, we focus on the GC, due to its high DM density and large electron flux. The leading contribution to the flux comes from the $2$ to $3$ scattering $\tilde{\chi} e^- \rightarrow \tilde{\chi} e^- \gamma$. 
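Before turning to the flux computation, the polarisation conventions of the previous section can be sanity-checked numerically. The sketch below is our own illustration; it assumes the mostly-minus metric signature $(+,-,-,-)$ and an on-shell photon, and verifies transversality $k_\mu\epsilon^\mu=0$ together with the (anti-)normalisation of $\epsilon_{1,2}$.

```python
import numpy as np

def pol_vectors(k):
    """Build eps1, eps2 and the circular combinations eps± = (∓eps1 - i eps2)/2
    for an on-shell photon k = (k0, kx, ky, kz), following the conventions
    quoted in the text."""
    k0, kx, ky, kz = k
    kT = np.hypot(kx, ky)
    e1 = np.array([0.0, kx * kz, ky * kz, -kT**2]) / (k0 * kT)
    e2 = np.array([0.0, -ky, kx, 0.0]) / kT
    ep = 0.5 * (-e1 - 1j * e2)  # positive circular polarisation
    em = 0.5 * (+e1 - 1j * e2)  # negative circular polarisation
    return e1, e2, ep, em

def mink(a, b):
    """Minkowski product a·b* with signature (+,-,-,-) (assumed convention)."""
    return a[0] * np.conj(b[0]) - np.dot(a[1:], np.conj(b[1:]))
```

For any on-shell $k$ (e.g. $k=(3,1,2,2)$) one finds $k\cdot\epsilon_{1,2}=0$, $\epsilon_i\cdot\epsilon_i^*=-1$ and $\epsilon_+\cdot\epsilon_-^*=0$; note that with the $1/2$ prefactor quoted above, the circular vectors carry norm $-1/2$ rather than $-1$.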
We define the flux of circularly polarised photons at a distance $r_\odot$ from the GC as \begin{eqnarray} \frac{d\Phi_{e\tilde{\chi}, pol}}{dE_\gamma}=\bar{J}\frac{1}{m_{\tilde{\chi}}}\int d\Omega_\gamma \int dE_e \frac{d\phi}{dE_e} \left|\frac{d^2\sigma_+}{d\Omega_\gamma dE_\gamma}(E_e,\theta_\gamma,E_\gamma )-\frac{d^2\sigma_-}{d\Omega_\gamma dE_\gamma}(E_e,\theta_\gamma,E_\gamma ) \right|, \label{polflux1} \end{eqnarray} where $\Omega_\gamma$ is the solid angle between the emitted photon and the incoming CR electron (with $\theta_\gamma$ the polar coordinate), and $E_e$ and $E_\gamma$ are the incoming electron and the outgoing photon energies. The $+$ and $-$ signs in the differential cross section indicate the positive and negative circular polarisations. The CR electron energy spectrum, which has a relevant impact on the overall flux and on the degree of net polarisation as well, is described by $\frac{d\phi}{dE_e}$. The spatial dependence of the DM and electron distributions is taken into account in the factor \begin{eqnarray} \bar{J}(\Delta \Omega_{\rm obs})=\frac{1}{\Delta \Omega_{\rm obs}}\int_{\Delta \Omega_{\rm obs}} d\Omega \int_{\rm l.o.s} ds \; \rho(r(s,\theta)) \; f(r(s, \theta)), \label{jbar} \end{eqnarray} which integrates over the line of sight and solid angle of observation of the experiment, $\Omega_{\rm obs}$, the product of the DM density distribution, $\rho(r)$, and the function $f(r)=\frac{e^{-\frac{r}{r_0}}}{e^{-\frac{r_\odot}{r_0}}}$, which takes into account the fact that the CR flux is larger in the GC vicinity~\cite{Profumo:2011jt,Strong:2004td}. Here, $r_\odot= 8.5$ kpc is the distance from the GC to the Earth and $r_0=4 \; \rm kpc$. Since the cross section of the process is dominated by the resonant contributions, especially for small mass splittings, we are in the ideal scenario to employ the narrow width approximation (NWA)~\cite{Berdine:2007uv}.
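As a rough numerical illustration of the $\bar{J}$ factor defined above, the sketch below evaluates the cone-averaged line-of-sight integral of $\rho(r)f(r)$ with simple Riemann sums. Only $r_\odot=8.5$ kpc and $r_0=4$ kpc are taken from the text; the Einasto parameters, the integration cut-offs and the grid sizes are placeholder assumptions, so the output should not be compared to the value quoted later.

```python
import numpy as np

R_SUN, R0 = 8.5, 4.0  # kpc, as quoted in the text

def f_cr(r):
    """CR enhancement factor f(r) = exp(-r/r0) / exp(-r_sun/r0)."""
    return np.exp(-(r - R_SUN) / R0)

def rho_einasto(r, rho_s=0.08, r_s=20.0, alpha=0.17):
    """Einasto profile in GeV/cm^3; these parameters are illustrative
    assumptions, not the ones used in the paper."""
    return rho_s * np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

def j_bar(theta_obs_deg=1.0, n_s=4000, n_t=200, s_max=40.0):
    """Cone-averaged line-of-sight integral of rho(r) * f(r), in GeV/cm^2."""
    kpc_to_cm = 3.086e21
    thetas = np.linspace(1e-4, np.radians(theta_obs_deg), n_t)
    ss = np.linspace(1e-3, s_max, n_s)  # distance along the line of sight, kpc
    ds = ss[1] - ss[0]
    # galactocentric radius for each (theta, s) pair
    r = np.sqrt(R_SUN**2 + ss[None, :]**2
                - 2.0 * R_SUN * ss[None, :] * np.cos(thetas[:, None]))
    los = (rho_einasto(r) * f_cr(r)).sum(axis=1) * ds  # one integral per theta
    # average over the observation cone, weighted by the solid-angle element
    weights = np.sin(thetas)
    return (los * weights).sum() / weights.sum() * kpc_to_cm
```

By construction $f(r_\odot)=1$ and $f(r)>1$ towards the GC, which is the enhancement the factor is meant to capture.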
A more thorough description of the performed calculation can be found in Ref.~\cite{Cermeno:2021rtk}. Thanks to the resonant enhancement, the cross section is not proportional to $a_R^4$ as naively expected, but to $a_R^2$. \subsection{Results} \label{subsec:results} \begin{figure}[t!] \centering \includegraphics[width=.45\columnwidth]{figures/plot_NWA_var_dm_mx_50_v2.pdf} \includegraphics[width=.45\columnwidth]{figures/inj_flux_plot_NWA_var_dm_mx_50_v2.pdf}\\ \includegraphics[width=.45\columnwidth]{figures/plot_NWA_var_DeltaM_mx500_v2.pdf} \includegraphics[width=.45\columnwidth]{figures/inj_flux_plot_NWA_var_dm_mx_500_v2.pdf} \caption{ \label{fig:flux} The flux spectrum of negative and positive circularly polarised photons in dash-dotted and dashed lines, respectively, and the sum of them, in solid lines, from the scattering of CR electrons off DM. In the lower part of each plot the difference between the flux of positive and negative polarised photons over the total one, i.e.\ the circular polarisation asymmetry $A_{pol}$, can be seen.} \end{figure} In the following we present the photon fluxes and the circular polarisation asymmetries expected from the scattering of DM and CR electrons in the GC. Regarding the DM density, we assume the Einasto profile, which yields $\bar{J}_{\rm Ein}=1.67\times 10^{24}\; \rm GeV/cm^2$ for the typical angular resolution of Fermi-LAT~\cite{Fermi:2009} and e-ASTROGAM~\cite{DeAngelis:2017gra}, i.e., $\theta_{\rm obs}= 1^{\circ}$, which corresponds to $\Delta \Omega_{\rm obs} \sim 10^{-3}$. In Fig.~\ref{fig:flux} we show the flux spectrum of negative and positive circularly polarised photons and their sum (dash-dotted, dashed and solid lines respectively). We fix the DM mass to $m_{\tilde{\chi}}=50$ GeV (upper panel) and $m_{\tilde{\chi}}=500$ GeV (bottom panel) and take different values of $\Delta M$, namely $1$, $0.1$ and $0.01$ GeV.
In the lower part of each plot the difference between the flux of positive and negative polarised photons over the total one, i.e. the circular polarisation asymmetry $A_{pol}$, is reported. As naively expected, we observe that the higher the DM mass and $\Delta M$ are, the lower the flux is. With respect to the electron spectrum, in the left panel of Fig.~\ref{fig:flux} we show results considering the local interstellar spectrum, extracted from Fig. 9 of~\cite{Vittino:2019yme}. The right panel of Fig.~\ref{fig:flux} instead shows the results obtained with the electron injected spectrum. In order to be consistent with the local interstellar spectrum interpolated from data of~\cite{Vittino:2019yme}, we use the injected spectrum of the $1$ break model from this reference. We observe that the asymmetry of polarised photons coming from these interactions can reach up to $90\%$. For a fixed value of $\Delta M$, the peak of the flux is obtained at $E_\gamma=\Delta M$, which is also where the polarisation asymmetry displays a maximum. The detection of this kind of signature is complementary to the observation of a peak at $E_\gamma \sim m_{\tilde{\chi}}$ coming from self-annihilation of DM, which cannot produce a net flux of circularly polarised photons (the initial state is CP even). While from the DM-CR scattering one can gain information on the mass splitting $\Delta M$, from the annihilation one can learn about the mass of DM. In this perspective, the circular polarisation could be used as a characterisation feature to pinpoint the nature of DM. However, it must be noticed that, taking into account the background of photons coming from the GC in this energy region predicted by~\cite{Gaggero:2017dpb} (Fig. 2 black solid line), even if we consider the highest flux of photons found for the injected spectrum, we would need an exposure time of $\Delta t \sim 10^{10}$~s for $3\sigma$ evidence.
Nowadays the only efficient techniques to measure photon circular polarisation in this energy band are based on the measurement of the secondary asymmetry of photons via Compton scattering \cite{Elagin:2017cgu}. Therefore, unless a new technique which exploits the polarisation fractions with the objective of increasing sensitivity is developed, these signals will not be detectable in the forthcoming years. \section{Conclusions} In this work we have discussed the possibility of detecting and characterising DM through a signal of circularly polarised photons. We showed that if DM is coupled to the SM by means of a parity violating interaction, a flux of highly polarised photons is expected from the GC. Prospects of detection of this signal in the near future by experiments like Fermi-LAT and e-ASTROGAM are not too optimistic, unless novel techniques are devised to reduce backgrounds and to improve the angular resolution and the sensitivity to the polarisation fraction. However, other sources such as cosmic accelerators could potentially lead to higher fluxes by means of an enhanced CR spectrum. We leave the study of these cases for a future work. \section*{Acknowledgements} The work of C.D. and M.C. was funded by the F.R.S.-FNRS through the MISU convention F.6001.19. The work of L.M. has received funding from the European Union’s Horizon 2020 research and innovation programme as part of the Marie Sklodowska-Curie Innovative Training Network MCnetITN3 (grant agreement no. 722104). Computational resources have been provided by the supercomputing facilities of the Universit\'e Catholique de Louvain (CISM/UCL) and the Consortium des \'Equipements de Calcul Intensif en F\'ed\'eration Wallonie Bruxelles (C\'ECI) funded by the Fond de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region. \bibliographystyle{utphys}
\section{Introduction \& Motivation} Due to the increasing penetration of distributed energy resources (DERs), distribution networks are expected to be operated closer to their capacity limits. Wholesale markets, however, generally do not consider distribution network constraints, which can often lead to limit violations at distribution level (e.g. line limits, voltage limits, etc.). Until recently, a more costly dispatch or grid reinforcement would have been necessary to avoid such violations. This paper explores a continuous local energy market model that can harness the flexibility available in the distribution system and defer costly investments in the distribution grid. Indeed, the recent report \cite{ENTSOE} published by ENTSO-E together with the major European associations of DSOs emphasises the requirement for grid flexibility procurement. Ref.~\cite{Schitte} recognises flexibility markets as the tool needed in Europe to make a more efficient use of the existing distribution grid. Local flexibility markets (LFMs) are introduced to promote flexibility trade in limited areas such as communities or small towns in Ref.~\cite{Jin2020}. The authors analyse the concepts and models proposed so far for LFMs, defining them as platforms that connect actors requiring flexibility with actors offering it. In this context, flexibility is identified as a controlled power variation that can be performed at a localized point in the network, with a given duration and at a specific time. Ref.~\cite{Schitte} analyses four pioneering European projects that implement flexibility markets: Pico Flex, Enera, GOPACS and NODES. While Enera, GOPACS and NODES use continuous trading, Pico Flex, together with the Danish LFM project EcoGrid 2.0 \cite{EcoGrid} consider auction-based trading. In Ref.~\cite{FuturIntraday} the implications of using both mechanisms when designing future electricity intraday markets are discussed. 
With auction-based clearing, the bids accepted in the market are the ones that lead to the highest social welfare, whereas with continuous clearing bids are matched as they enter the market. The authors state that even if the continuous clearing results in a sub-optimal social welfare, it can allow for more trades. The same conclusion is drawn in Ref.~\cite{Comparison}, where continuous and auction-based energy markets are compared analytically, having as benchmark a discrete double auction. The authors also provide an upper bound on the sub-optimality of continuous clearing compared to auction-based clearing. In contrast to what we present in this paper, though, the compared models do not include network constraints or block bids, which would affect the outcome of the comparison. Network constraints have been included in Ref.~\cite{cont}, which presents probably the first formulation of a continuous LFM considering network constraints. This market trades flexibility as reserve capacity following the first-come, first-served principle to match flexibility requests and offers and uses the pay-as-bid pricing rule. Including a network check when clearing the market is critical to ensure that the trades do not lead to any limits violations. When it comes to block bids, both Ref.~\cite{DSOcontract}~and~\cite{CongestionManagement} account for the preferences of the flexibility providers through asymmetric block bids, which implies including integer variables in their models. These bids consist of several single offers linked to each other, submitted at the same time and location but with different direction, quantities and prices, and referred to different time targets. Papers \cite{Thermal-electric,Supermarket,FlexStrategy} present different approaches to provide flexibility from the demand side through this type of bids. 
Thermal electric loads such as building heating and cooling, water heating and refrigeration are ideal candidates for load shifting, as they can vary their energy consumption without compromising their purpose \cite{Thermal-electric}. Ref.~\cite{Thermal-electric} models the rebound effect attached to them, designing block bids that can only be fully accepted or fully rejected, i.e.\ the All-or-Nothing (AoN) condition. Ref.~\cite{Supermarket} builds the block bids needed by a supermarket refrigerator to supply flexibility, analysing its demand response capability. Ref.~\cite{FlexStrategy} develops an offering strategy for a flexibility aggregator to participate in a balancing market using asymmetric block bids. Extending our previous work in Ref.~\cite{cont}, this paper has two main contributions: \begin{itemize} \item We introduce a continuous local flexibility market which explicitly includes both network constraints and block bids. To our knowledge, this is the first formulation of continuous energy markets that considers both network constraints and block bids. \item We introduce a method that can determine both the \emph{upper bound} and the \emph{lower bound} on the suboptimality of such continuous markets compared with their auction-based counterparts. \end{itemize} The rest of this paper is organised as follows. In Section \ref{sec:LFM} we describe the designed local flexibility market (LFM). We first present the market framework in which the two clearing mechanisms are set up and then we explain the formulation and performance of the continuous and auction-based models developed. The case study used to illustrate the operation of the LFM and the corresponding results for different configurations are given in Section \ref{sec:case}. The sub-optimality gap is further studied in Section \ref{sec:subopt}, where we propose a method to obtain the highest and lowest social welfare using continuous clearing.
In Section \ref{sec:discussion}, we discuss the models designed and the clearing mechanisms compared. Finally, Section \ref{sec:ccl} presents the conclusions and directions for future work. \section{Local Flexibility Market Design} \label{sec:LFM} Flexibility can locally be traded either as reserve or as energy. In this paper, we develop Local Flexibility Market (LFM) clearing algorithms in which flexibility is traded in the form of energy. Future work will extend this framework to reserve markets as well. The platform connects actors who require flexibility (i.e. DSO) with actors who offer it (i.e. prosumers and aggregators), matching flexibility requests with flexibility offers. The flexibility market operator is an external agent, whom we assume the DSO provides with perfect information about the network topology and limits and the system setpoint. When clearing the market, a DC power flow algorithm checks whether the power grid is technically able to handle the expected energy transfers without causing or aggravating congestions, similar to the auction-based market counterparts. The LFM is multiperiod, to account for block bids that model the preferences of certain flexibility providers. Block bids consist of several single offers linked to each other, one for each time period covered by the block bid. Only offers can be submitted as block bids since they are mainly introduced to encourage suppliers' participation. While single bids can be partially matched (i.e. the market can accept part of the offered energy), block bids can only be fully accepted or rejected, as proposed in Ref.~\cite{CongestionManagement}. This condition and the asymmetric form of the block bids are both due to the technical requirements associated with the rebound effect. Within this framework, two different models are presented: one with continuous clearing, and the other with auction-based clearing which will be used as a benchmark. 
The main difference is that a continuous market clears as soon as there is a match between bids, whereas an auction-based market clears once every time period considering all the bids submitted for that period. \subsection{Continuous Clearing Model} The market clearing follows the first-come, first-served principle, which means that the older bid sets the clearing price and has priority for the same price. Every time a bid enters the market, we examine all the previous requests/offers standing in the corresponding order book. A match can occur if a request and an offer have the same direction (upward or downward) and time target, and if the request price is higher than or equal to the offer price. The unmatched requests are stored by descending price and the unmatched offers are ordered by ascending price in their respective order books. This ensures that the incoming bid is matched at the best available price, so the social welfare is the highest possible for each match. Once a match is found in terms of direction, time target and price, and if only single bids are matched, the model calculates the \textit{Quantity\_max} to be exchanged through the algorithm introduced in Ref.~\cite{cont}. It performs a DC power flow analysis using PTDF factors that link power injections with line flows. The system setpoint is updated after every match. Similar to the single bids, block bids should be matched with the best available requests. However, due to the AoN condition, we have to ensure that all the single offers included in the block bid can be fully matched before setting a match. To prioritise seniority, older requests should be temporarily assigned to the block bid and stored until its complete match is possible. To be fair across all incoming offers, however, the requests assigned to a block bid should still be available in their order book, in case a new matching offer appears before the match with the block bid becomes effective.
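The single-bid matching rules above (price priority, first-come-first-served tie-breaking, and the older standing bid setting the pay-as-bid price) can be sketched as follows. This is our own minimal illustration; it deliberately omits the \textit{Quantity\_max} network-feasibility check and the block-bid logic.

```python
import itertools
from dataclasses import dataclass, field

_seq = itertools.count()  # arrival order implements first-come, first-served

@dataclass
class Bid:
    price: float
    qty: float
    direction: str          # "up" or "down"
    t: int                  # time target
    arrival: int = field(default_factory=lambda: next(_seq))

def match_offer(offer, request_book):
    """Match an incoming single offer against standing requests with the same
    direction and time target: best (highest) price first, oldest first on
    price ties; the standing (older) request sets the trade price."""
    trades = []
    candidates = sorted(
        (r for r in request_book
         if r.direction == offer.direction and r.t == offer.t
         and r.price >= offer.price),
        key=lambda r: (-r.price, r.arrival))
    for req in candidates:
        if offer.qty <= 0:
            break
        q = min(offer.qty, req.qty)          # single bids may match partially
        trades.append((req, q, req.price))   # older side sets the price
        offer.qty -= q
        req.qty -= q
    request_book[:] = [r for r in request_book if r.qty > 0]
    return trades
```

For example, an upward offer of 5 units at price 7 standing against requests of 3 units at 10 and 4 units at 8 (same time target) trades 3 units at 10 and 2 units at 8, leaving 2 units of the cheaper request in the book.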
Then, if a temporary match with a block bid is partially or completely cancelled, the requests order book should be immediately revisited looking for a new match for the remaining offer. Furthermore, every match that occurs while the block bid is not fully matched changes the line flows and can, thus, technically limit the match of the block bid. Therefore, network constraints should be constantly checked to guarantee that the temporary matches with the block bid are still feasible. To avoid this tedious process, we store all the possible matches with the single offers involved in the block bid as candidates, until there are enough candidate requests to fully match it. After meeting this condition, it may happen that some parts of the block bid have several candidates to match with. To determine which one(s) to choose we run an optimisation problem, considering all the possible matches with the single offer concerned. This way, we can select the request(s) that lead to the highest social welfare respecting network constraints. 
The optimisation problem is a linear program (LP) formally defined as follows: \begin{subequations} \begingroup \allowdisplaybreaks \begin{align} & \underset{\mathbf{x}}{\min} && \lambda _{b,t}^{\text{U}} P_{b,t}^{\text{U}}+\lambda _{b,t}^{\text{D}} P_{b,t}^{\text{D}} -\sum_{r \in \mathcal{R}_{b}} \left ( \lambda _{r,t}^{\text{U}} P_{r,t}^{\text{U}}+\lambda _{r,t}^{\text{D}} P_{r,t}^{\text{D}}\right) \label{eq:01Obj}\\ & \text{s.t.} && P_{n,t}^{\text{S}} + P_{b,t}^{\text{U}}-P_{b,t}^{\text{D}} - \sum_{r \in \mathcal{R}_{n}}\left ( P_{r,t}^{\text{U}} -P_{r,t}^{\text{D}} \right ) \notag\\ & && - \sum_{m \in \Omega_{n}}\left ( b_{n,m}\left ( \delta_{n,t} - \delta_{m,t}\right ) \right ) = 0 \quad n = n_{b} \label{eq:02NodBal1}\\ & && P_{n,t}^{\text{S}} - \sum_{r \in \mathcal{R}_{n}}\left ( P_{r,t}^{\text{U}} -P_{r,t}^{\text{D}} \right ) \notag\\ & && - \sum_{m \in \Omega_{n}}\left ( b_{n,m}\left ( \delta_{n,t} - \delta_{m,t}\right ) \right ) = 0 \quad \forall{n} \in \mathcal{N}, n \neq n_{b} \label{eq:02NodBal2}\\ & && -P_{n,m}^{\text{lim}}\leq b_{n,m} \left ( \delta_{n} - \delta_{m}\right )\leq P_{n,m}^{\text{lim}} \quad \forall n \in \mathcal{N}, m \in \Omega_{n} \label{eq:O3LineLim} \\ & && 0\leq P_{r}^{\text{U}}\leq P_{r}^{\text{Umax}} \quad \forall {r} \in \mathcal{R}\label{eq:O4RUmax} \\ & && 0\leq P_{r}^{\text{D}}\leq P_{r}^{\text{Dmax}} \quad \forall {r} \in \mathcal{R}\label{eq:O4RDmax} \\ & && \delta_{\text{ref}} = 0 \label{eq:O9ref} \end{align} \endgroup \label{eq:problemO.0} \end{subequations} \normalsize where $\mathbf{x}=\{P_{r,t}^{\text{U}},P_{r,t}^{\text{D}},\delta_{n,t}\}$. The objective function \eqref{eq:01Obj} seeks to minimise total costs, given the cost of the offer in question, and the sum of the costs of the requests that constitute the complete match. The decision variables are $P_{r}$, the energy fulfilling request $r$ and $\delta_{n,t}$ the voltage angle at node $n$. 
The price and the quantity bid for the single offer $b$ contained in the block bid $k$ are denoted by $\lambda _{b,t}$ and $P_{b,t}$. The information about the requests is $\lambda _{r,t}$, the price of request $r$, and $\mathcal{R}_{b}$, the set of candidate requests to match with $b$. The superscript $U$ stands for upward and $D$ for downward. The nodal balance at the node where the block bid is located, $n_{b}$, is represented in \eqref{eq:02NodBal1}, whereas \eqref{eq:02NodBal2} applies to the rest of the nodes, contained in the set $\mathcal{N}$. $P_{n,t}^{\text{S}}$ is the initial setpoint of node $n$ at time period $t$ and $\mathcal{R}_{n}$ is the set of requests located at node $n$. The last term of both equations refers to the energy flows from or to the node, with $\Omega _{n}$ being the set of nodes connected to $n$. These flows are calculated using the susceptance of the line, $b_{n,m}$, and the voltage angle difference of the connecting nodes $n$ and $m$. Constraint \eqref{eq:O3LineLim} guarantees that the energy flow through each line respects its capacity limit in both directions, $-P_{n,m}^{\text{lim}}$ and $P_{n,m}^{\text{lim}}$. Constraints \eqref{eq:O4RUmax} and \eqref{eq:O4RDmax} keep the amount of energy traded for each request non-negative and no greater than the quantity bid, $P_{r}^{\text{Umax}}$ or $P_{r}^{\text{Dmax}}$. Finally, constraint \eqref{eq:O9ref} fixes the voltage angle $\delta$ at the reference node to $0$. As the optimisation problem is solved separately for each single offer composing the block bid, subscript $t$ corresponds to the time target of the given single offer $b$. Since $b$ has only one direction, upwards ($U$) or downwards ($D$), the terms in the opposite direction are disregarded throughout the problem. The entire process carried out to match block bids is described in Algorithm~\ref{alg:BBmatching}.
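As an illustration, an LP of this form can be instantiated with an off-the-shelf solver. The sketch below uses hypothetical two-node data (SciPy assumed available): an upward offer of 1.0 must be fully matched against two candidate upward requests, and the solver picks the welfare-maximising split subject to the line limit:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-node instance: an upward offer of 1.0 at node 1, two candidate
# upward requests at node 2 (price 0.25 for up to 0.6, price 0.26 for up
# to 0.6). Variables: x = [P_r1, P_r2, theta_2]; node 1 is the reference.
b_line, p_lim, p_offer = 10.0, 1.5, 1.0

# Objective: offer cost minus request revenue; the offer cost is constant
# here, so only the (negated) request terms remain.
c = np.array([-0.25, -0.26, 0.0])

# Nodal balances: the offer injects p_offer at node 1, the matched requests
# absorb it at node 2, linked by the DC flow b*(theta_1 - theta_2).
A_eq = np.array([
    [0.0, 0.0, b_line],      # node 1: p_offer - b*(0 - theta_2) = 0
    [-1.0, -1.0, -b_line],   # node 2: -(P_r1 + P_r2) - b*(theta_2 - 0) = 0
])
b_eq = np.array([-p_offer, 0.0])

# Line capacity limit in both directions.
A_ub = np.array([[0.0, 0.0, -b_line], [0.0, 0.0, b_line]])
b_ub = np.array([p_lim, p_lim])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 0.6), (0.0, 0.6), (None, None)])
# The pricier request is filled first: P_r2 = 0.6, P_r1 = 0.4.
```

The numbers and network are illustrative only; the case study itself uses a 33-bus grid and a commercial solver.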
In case the possible match is between single bids, or there is only one candidate request for the single offer of a block bid, we use the power transfer distribution factor (PTDF) method to check network constraints, similarly to \cite{cont}, rather than solving the optimisation problem \eqref{eq:problemO.0}, in order to reduce the computational burden. \begin{algorithm} \caption{Block Bids Matching}\label{alg:BBmatching} \begin{algorithmic} \small \For{each \textit{offer} of the \textit{block bid}} \If {there is just one possible match} \State Calculate \textit{Quantity\_max} following Ref.~\cite{cont} \If {\textit{Quantity\_max} = \textit{Quantity\_bid}} \State Save potential match and move to the next \textit{offer} \Else {} \If {\textit{Quantity\_max} $<$ \textit{Quantity\_bid}} \State Store the \textit{request} as candidate match and exit \EndIf \EndIf \Else {} \If {there are multiple possible matches} \State Solve (\ref{eq:problemO.0}) to determine the best match \If {(\ref{eq:problemO.0}) is feasible} \State Save potential match and move to the next \textit{offer} \Else{} \State Store the \textit{requests} as candidate matches and exit \EndIf \EndIf \EndIf \EndFor \If {there is a potential match for each \textit{offer} of the \textit{block bid}} \State Set a match \State Update the Setpoint \EndIf \normalsize \end{algorithmic} \end{algorithm} As a result of this continuous clearing model, single bids are matched with the best available option at the time of their submission, and block bids are matched with the set of best options available at the moment when the full match of the block bid becomes possible. For all matches, the proposed market clearing guarantees that network constraints are satisfied.
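A PTDF-based feasibility check of this kind might look as follows (a minimal sketch; the function names and the three-bus data are illustrative, not the paper's implementation):

```python
import numpy as np

def ptdf_matrix(n_bus, lines, slack=0):
    """Power Transfer Distribution Factors for a DC network model.

    lines: list of (from_bus, to_bus, susceptance). Returns an (L, n_bus-1)
    matrix mapping injections at the non-slack buses (withdrawn at the slack)
    to line flows, oriented from_bus -> to_bus.
    """
    B = np.zeros((n_bus, n_bus))          # bus susceptance matrix
    Bf = np.zeros((len(lines), n_bus))    # line flow matrix
    for l, (f, t, b) in enumerate(lines):
        B[f, f] += b; B[t, t] += b
        B[f, t] -= b; B[t, f] -= b
        Bf[l, f] += b; Bf[l, t] -= b
    keep = [n for n in range(n_bus) if n != slack]
    return Bf[:, keep] @ np.linalg.inv(B[np.ix_(keep, keep)])

def match_is_feasible(flows, ptdf, delta_inj, limits):
    """Check that a tentative match (net injection change at each non-slack
    bus) keeps every line flow within its capacity limit."""
    new_flows = flows + ptdf @ delta_inj
    return bool(np.all(np.abs(new_flows) <= limits))
```

Because the PTDF matrix is computed once per topology, each single-bid match reduces to one matrix-vector product and a comparison, which is far cheaper than re-solving an LP.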
\subsection{Auction-Based Clearing Model} The auction-based clearing, which serves as a benchmark for the proposed continuous clearing algorithm, is built as a mixed-integer linear program (MILP), since it accounts for block bids using binary variables, as proposed in \cite{CongestionManagement}. These variables represent the acceptance ratio of the block bids, with 1 standing for acceptance and 0 for rejection. The aim of the auction-based configuration is to achieve the maximum social welfare over the whole market time horizon. Due to the AoN acceptance condition of the block bids, all the time periods are cleared at once. The problem is formally defined as follows: \begin{subequations} \begingroup \allowdisplaybreaks \begin{align} & \underset{\mathbf{x}}{\min} && \sum_{t \in \mathcal{T}} \Bigg[ \sum_{o \in \mathcal{O}}\left( \lambda _{o,t}^{\text{U}} P_{o,t}^{\text{U}}+\lambda _{o,t}^{\text{D}} P_{o,t}^{\text{D}} \right) \notag\\ & &&+\sum_{k \in \mathcal{K}}\left(AR_{k} \sum_{b \in \mathcal{B}_{k}} \left( \lambda _{b,t}^{\text{U}} P_{b,t}^{\text{U}}+\lambda _{b,t}^{\text{D}} P_{b,t}^{\text{D}} \right)\right) \notag\\ & &&-\sum_{r \in \mathcal{R}} \left ( \lambda _{r,t}^{\text{U}} P_{r,t}^{\text{U}}+\lambda _{r,t}^{\text{D}} P_{r,t}^{\text{D}}\right) \Bigg] \label{eq:O1.1Obj}\\ & \text{s.t.} && P_{n,t}^{\text{S}} - \sum_{r \in \mathcal{R}_{n}}\left ( P_{r,t}^{\text{U}} -P_{r,t}^{\text{D}} \right ) + \sum_{o \in \mathcal{O}_{n}}\left ( P_{o,t}^{\text{U}}-P_{o,t}^{\text{D}} \right ) \notag\\ & && + \sum_{k \in \mathcal{K}_{n}}\left (AR_{k} \sum_{b \in \mathcal{B}_{k}} \left( P_{b,t}^{\text{U}}-P_{b,t}^{\text{D}} \right)\right ) \notag\\ & && - \sum_{m \in \Omega_{n}}\left ( b_{n,m}\left ( \delta_{n,t} - \delta_{m,t}\right ) \right ) = 0, \quad \forall{n} \in \mathcal{N},\forall {t} \in \mathcal{T} \label{eq:02.1NodBal}\\ & && \sum_{o \in \mathcal{O}}\ P_{o,t}^{\text{U}} + \sum_{k \in \mathcal{K}} AR_{k} P_{k,t}^{\text{U}} - \sum_{r
\in\mathcal{R}} P_{r,t}^{\text{U}} = 0, \quad \forall {t} \in \mathcal{T}\label{eq:02.1up}\\ & && \sum_{o \in \mathcal{O}}\ P_{o,t}^{\text{D}} + \sum_{k \in \mathcal{K}} AR_{k} P_{k,t}^{\text{D}} - \sum_{r \in \mathcal{R}} P_{r,t}^{\text{D}} = 0, \quad \forall {t} \in \mathcal{T}\label{eq:02.1down}\\ & && -P_{n,m}^{\text{lim}}\leq b_{n,m} \left ( \delta_{n,t} - \delta_{m,t}\right )\leq P_{n,m}^{\text{lim}} \notag\\ & &&\quad \forall n \in \mathcal{N}, m \in \Omega_{n},\forall {t} \in \mathcal{T}\label{eq:O3.1LineLim}\\ & && 0\leq P_{r,t}^{\text{U}}\leq P_{r,t}^{\text{Umax}} \quad \forall {r} \in \mathcal{R},\forall {t} \in \mathcal{T}\label{eq:O4.1RUmax}\\ & && 0\leq P_{r,t}^{\text{D}}\leq P_{r,t}^{\text{Dmax}} \quad \forall {r} \in \mathcal{R},\forall {t} \in \mathcal{T}\label{eq:O5RDmax}\\ & && 0\leq P_{o,t}^{\text{U}}\leq P_{o,t}^{\text{Umax}} \quad \forall {o} \in \mathcal{O},\forall {t} \in \mathcal{T}\label{eq:O6OUmax}\\ & && 0\leq P_{o,t}^{\text{D}}\leq P_{o,t}^{\text{Dmax}} \quad \forall {o} \in \mathcal{O},\forall {t} \in \mathcal{T}\label{eq:O7ODmax}\\ & && AR_{k} \in \left\{0,1 \right\} \quad \forall {k} \in \mathcal{K}\label{eq:O8Binary}\\ & && \delta_{\text{ref},t} = 0 \quad \forall {t} \in \mathcal{T} \label{eq:O9.1ref} \end{align} \endgroup \label{eq:problemO} \end{subequations} \normalsize The objective function \eqref{eq:O1.1Obj} minimises the cost of trading flexibility with $\mathbf{x}=\{P_{r,t}^{\text{U}},P_{r,t}^{\text{D}},P_{o,t}^{\text{U}},P_{o,t}^{\text{D}},\delta_{n,t}, AR_{k}\}$. It considers all the single offers $\mathcal{O}$, block offers $\mathcal{K}$ and requests $\mathcal{R}$ submitted for all the time periods $\mathcal{T}$. The first and third terms refer to the costs of the single offers and the requests, respectively. The second term is the sum of the costs of all single offers contained in a block bid, $\mathcal{B}_{k}$, multiplied by the acceptance ratio $AR_{k}$ of the block bid $k$, defined in \eqref{eq:O8Binary}.
The first constraint \eqref{eq:02.1NodBal} is the nodal balance, similar to \eqref{eq:02NodBal1} but with an additional term for the block bids, and it considers all the offers located at each node, $\mathcal{O}_{n}$ and $\mathcal{K}_{n}$. Constraints \eqref{eq:O3.1LineLim} and \eqref{eq:O9.1ref} stand for the power flow limits and the reference voltage angle, in the same way as \eqref{eq:O3LineLim} and \eqref{eq:O9ref}. The energy accepted per bid is limited in \eqref{eq:O4.1RUmax}--\eqref{eq:O7ODmax}, where $P^\text{max}$ is the total quantity bid. By solving the optimisation problem, we determine which bids to accept and for which quantity in order to achieve the highest social welfare while respecting network constraints. \section{Case Study} \label{sec:case} \subsection{Case Description} To compare both clearing mechanisms, we design a case study applying the market models presented. We operate the market for 24 hours on the 33-bus radial distribution grid introduced in \cite{Grid}, considering consumption acting as flexible loads, including small-scale renewable generation as DERs, and allowing for flexibility offers. The initial setpoint for the 24 hours is obtained by combining an average daily load profile for a distribution grid with typical daily generation profiles for photovoltaic plants and wind farms. We start from an infeasible setpoint, which violates line limits, and aim to make it feasible by trading flexibility. We assume perfect information from the DSO in terms of topology and technical specifications of the network, as well as the power injection setpoints per node and time period. We assume that the power injection setpoint is defined after the clearing of the wholesale market. Through a DC-OPF analysis, we determine the flexibility requests needed by the DSO to operate the system in a feasible way.
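As a toy illustration of this DC-OPF step (hypothetical two-node data and prices, SciPy assumed available; not the 33-bus setup of the case study), an infeasible setpoint is mapped to the minimum-cost request volumes that restore feasibility:

```python
import numpy as np
from scipy.optimize import linprog

# A 1.5 setpoint must flow over a line limited to 1.0, so the DSO needs
# 0.5 of curtailment (downward request) at the generation node and 0.5 of
# load shedding (upward request) at the load node.
# Variables: x = [curtail, shed, theta_2]; node 1 is the reference node.
b_line, p_lim, setpoint = 10.0, 1.0, 1.5
c = np.array([0.035, 0.25, 0.0])  # assumed request prices in EUR/kWh

A_eq = np.array([
    [-1.0, 0.0, b_line],   # node 1: setpoint - curtail - b*(0 - theta_2) = 0
    [0.0, 1.0, -b_line],   # node 2: -setpoint + shed - b*(theta_2 - 0) = 0
])
b_eq = np.array([-setpoint, setpoint])
A_ub = np.array([[0.0, 0.0, -b_line], [0.0, 0.0, b_line]])  # line limit
b_ub = np.array([p_lim, p_lim])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, setpoint), (0.0, setpoint), (None, None)])
# res.x[0], res.x[1] -> the downward and upward request volumes (0.5 each).
```

The resulting volumes, one per node and time period, are then submitted as the DSO's flexibility requests.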
Upward requests represent the load shedding, and downward requests the curtailment, required in case the DSO does not have access to flexibility trading. Flexibility offers are generated randomly to simulate that they can be submitted anytime and anywhere. The prices of flexibility requests and offers are randomly generated within certain intervals. We consider that the DSO sets them according to the damage caused by capping generation and shedding loads. As the DSO usually wants to avoid load shedding at any cost, the prices of upward requests, [0.250-0.300]€/kWh, are quite high compared to those of downward requests, [0.035-0.045]€/kWh, to guarantee their acceptance. As a starting point, we consider that the price of flexibility offers is close to the wholesale market price in both directions, [0.030-0.040]€/kWh. The price of block bids is assumed to be slightly lower, [0.020-0.035]€/kWh, to promote their acceptance, which is harder to achieve due to their AoN condition. The data used, as well as the implemented models, are available online \cite{github}. Both market configurations are implemented in Pyomo, a Python-based environment for optimisation problems \cite{pyomo}, and solved using Gurobi \cite{gurobi}. \subsection{Simulation and Results} The performance of the models is evaluated focusing on the social welfare and the volume of energy traded, both over the whole market horizon (i.e., 24 hours). To analyse the differences between the clearing mechanisms, we define four test cases containing the same bids but varying their form: \begin{itemize} \item \textit{BB and NC}: Single and block bids (BB) with network constraints (NC). \item \textit{SB and NC}: Only single bids (SB) with network constraints, i.e. block bids are treated as independent single bids. \item \textit{BB}: Single and block bids without network constraints. \item \textit{SB}: Only single bids without network constraints.
\end{itemize} Being aware of the dependence of the continuous clearing on the arrival order of the bids, we generate a set of 100 scenarios with the same bids but random arrival sequences. Fig.~\ref{fig:SW} and Fig.~\ref{fig:Vol} show the total social welfare and the energy volume traded for all the scenarios when operating both models. The auction-based market results are constant because they are not affected by the arrival order of the bids. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{Pictures/Plot_SW.pdf} \caption{Social welfare for the four cases using continuous and auction-based clearing.} \label{fig:SW} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{Pictures/Plot_Vol.pdf} \caption{Energy volume traded for the four cases using continuous and auction-based clearing.} \label{fig:Vol} \end{figure} Comparing the results of the auction-based clearing in Fig.~\ref{fig:SW}, we observe that the integration of network constraints has a larger impact on the social welfare than the introduction of block bids. With the continuous clearing, however, it is generally the other way around, i.e., block bids affect the social welfare more than network constraints. Considering only single bids, the social welfare in the continuous market can be close to the auction-based one, whereas including block bids increases the difference in social welfare. This suggests that block bids are more difficult to handle in the continuous market, which results in larger differences in social welfare between the clearing mechanisms. Regarding the energy volume traded, displayed in Fig.~\ref{fig:Vol}, we notice that with single bids continuous clearing usually leads to more liquidity. Without considering block bids or network constraints, all the scenarios for the continuous clearing model result in the same volume, which is the maximum according to the bids submitted.
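The scenario generation above can be sketched with a toy matcher (a minimal illustration without network constraints; the prices, quantities and matching rule are hypothetical, not the paper's implementation):

```python
import copy
import random

def continuous_sw(offers, requests):
    """Greedy continuous matching without network constraints: each arriving
    offer is matched against the best-priced compatible requests still open.
    Returns the total social welfare of the run."""
    sw = 0.0
    open_reqs = sorted(requests, key=lambda r: -r["price"])
    for off in offers:
        qty = off["qty"]
        for req in open_reqs:
            if req["qty"] <= 0 or req["price"] < off["price"]:
                continue
            traded = min(qty, req["qty"])
            sw += traded * (req["price"] - off["price"])
            req["qty"] -= traded
            qty -= traded
            if qty <= 0:
                break
    return sw

# Same bids, 100 random arrival orders (mirroring the scenario set above).
offers = [{"price": 0.05, "qty": 1.0}, {"price": 0.20, "qty": 1.0}]
requests = [{"price": 0.30, "qty": 1.0}, {"price": 0.10, "qty": 1.0}]
rng = random.Random(42)
results = []
for _ in range(100):
    order = [o.copy() for o in offers]
    rng.shuffle(order)
    results.append(continuous_sw(order, copy.deepcopy(requests)))
# The spread of `results` illustrates the arrival-order dependence.
```

Even in this two-offer example, the welfare depends on whether the cheap or the expensive offer arrives first, which is exactly the effect the 100 scenarios probe.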
The difference we observe in the volume between the continuous and the auction-based clearing in this case lies in the acceptance of some bids that do not add to the social welfare (i.e., a social welfare of zero). Our auction-based model maximises social welfare, so it does not accept bids that bring no social welfare. In contrast, our continuous model does consider matches between bids with the same price, i.e., with zero social welfare. For this reason, there are more transactions and hence market liquidity increases. Table~\ref{tab:comparison} presents the continuous market performance relative to the auction-based market outcome in terms of social welfare and volume. The difference between them is displayed as a percentage of the auction-based result, since it is considered the benchmark. \begin{table}[t] \caption{Continuous vs auction-based clearing (values in \% relative to the auction-based Social Welfare and Volume).} \label{tab:comparison} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{l|ccc|ccc|} \cline{2-7} \multicolumn{1}{c|}{\textbf{}} & \multicolumn{3}{c|}{\textbf{Social welfare}} & \multicolumn{3}{c|}{\textbf{Volume}} \\ \cline{2-7} \multicolumn{1}{c|}{} & Average & Max & Min & Average & Max & Min \\ \hline \multicolumn{1}{|l|}{BB and NC} & 88.6\% & 99.2\% & 78.9\% & 89.5\% & 99.9\% & 78.3\% \\ \multicolumn{1}{|l|}{SB and NC} & 96.7\% & 98.7\% & 93.6\% & 100.9\% & 101.5\% & 99.4\% \\ \multicolumn{1}{|l|}{BB} & 89.9\% & 96.0\% & 81.4\% & 90.6\% & 95.9\% & 81.0\% \\ \multicolumn{1}{|l|}{SB} & 97.6\% & 98.9\% & 96.2\% & 101.0\% & 101.0\% & 101.0\% \\ \hline \end{tabular} } \end{table} The results of this case study show that, when including block bids, the sub-optimality gap in social welfare between continuous and auction-based clearing is on average slightly higher than 10\%, whereas without them it is around 3\%. The difference between considering network constraints or not is generally around 1\%.
With single bids and without network constraints, the volume can increase by 1\% with continuous clearing, and by up to 1.5\% when considering network constraints. However, with block bids it is reduced by around 10\%. In general, we can conclude that with only single bids, the arrival order of the bids has less influence on the continuous market results than when we introduce block bids. With block bids, the sub-optimality gap in terms of social welfare and volume ranges between 1\% and 21\% with network constraints and between 4\% and 19\% without them. We therefore observe that the arrival order of the bids, which is initially unpredictable, has a considerable influence on the outcome of the continuous market, and hence widens the gap between the clearing mechanisms. The next section develops an approach to determine the upper and lower bounds of this sub-optimality gap between the continuous and the auction-based markets, by determining the worst and best arrival sequences of the bids. \section{Study on the Sub-Optimality} \label{sec:subopt} In the previous case study, we generated random scenarios regarding the sequence of submission of the bids. Here, we want to analyse exactly how close to and how far from the auction-based market the continuous market clearing can get in terms of social welfare. In \cite{Comparison}, the best and worst arrival sequences are defined by ordering the offers by ascending and descending price, assuming that all requests are submitted beforehand. However, this does not apply to markets with network constraints and block bids. Indeed, network constraints limit the matching of the bids to prevent line congestion, and block bids disturb the best (and worst) arrival sequence, since all the offers involved are submitted at the same time. This fact, together with their AoN acceptance condition, also influences the matching.
This section proposes an algorithm that can indeed define the best and worst arrival sequences, and thus determine the sub-optimality gap for a given set of bids, when considering both network constraints and block bids. \subsection{Method} Under the assumption that all requests are submitted first and stand in the order book, the worst (and best) sequence can be obtained by solving a multi-level optimisation problem, which reproduces the continuous market clearing, with the objective of minimising (or maximising) the resulting social welfare. It can be written as follows: \small \begin{subequations} \begingroup \allowdisplaybreaks \begin{align} & \underset{\mathbf{x}}{\min} && \sum_{r \in \mathcal{R}} \lambda _{r} P_{r}^{\text{tot}} - \sum_{o \in \mathcal{O}} \lambda _{o} P_{o}^{\text{tot}} - \sum_{k \in \mathcal{K}} AR_{k}^{\text{tot}} \left(\lambda_{k}^\text{U} P_{k}^\text{U,max} + \lambda _{k}^\text{D} P_{k}^\text{D,max} \right) \label{eq:UL_obj}\\ & \text{s.t.} && C_i, \quad \forall{i} \in \mathcal{I} \label{eq:UL_LL}\\ & && P_{r}^{\text{tot}} = \sum_{i} P_{i,r}, \quad \forall r \in \mathcal{R} \label{eq:UL_Pr} \\ & && P_{o}^{\text{tot}} = \sum_{i} P_{i,o}, \quad \forall o \in \mathcal{O} \label{eq:UL_Po} \\ & && AR_{k}^{\text{tot}} = \sum_{i} AR_{i,k}, \quad \forall k \in \mathcal{K} \label{eq:UL_ARsum} \\ & && AR_{k}^{\text{tot}} \in \left\{0,1 \right\} \quad \forall k \in \mathcal{K} \label{eq:UL_AR_bin}\\ & && \sum_{b \in \mathcal{B}} s_{i,b} = 1 \quad \forall i \in \mathcal{I} \label{eq:UL_bin1} \\ & && \sum_{i \in \mathcal{I}} s_{i,b} = 1 \quad \forall {b} \in \mathcal{B} \label{eq:UL_bin2} \\ & && s_{i,b} \in \left\{0,1 \right\} \quad \forall i \in \mathcal{I}, \forall b \in \mathcal{B} \label{eq:UL_bin} \end{align} \endgroup \label{eq:UL} \end{subequations} \normalsize where $\mathbf{x}=\{P_{r}^\text{tot},P_{o}^\text{tot},AR_{k}^\text{tot}, s_{i,b}\}$.
Solving this problem returns one of the sequences in which a given set of offers submitted to the continuous market would result in the lowest social welfare (or the highest, if we maximise instead of minimise). The value of the corresponding social welfare is given by the objective function \eqref{eq:UL_obj}, where $\lambda_r$, $\lambda_o$, $\lambda_k^\text{U}$ and $\lambda_k^\text{D}$ are the submitted prices of the requests, the single offers, and the upward and downward blocks of a block bid, and $P_{r}^\text{tot}$, $P_{o}^\text{tot}$, $P_{k}^{\text{U,max}}$ and $P_{k}^{\text{D,max}}$ are the associated quantities. For the block bids, those quantities are parameters, and the all-or-nothing acceptance is decided through the variable $AR_{k}^\text{tot}$, which is defined as binary in \eqref{eq:UL_AR_bin}. The set $\mathcal{R}$ gathers all requests, $\mathcal{O}$ all the single offers and $\mathcal{K}$ all the block bids. The last two are gathered in $\mathcal{B}=\mathcal{O}\cup \mathcal{K}$. The set $\mathcal{I}$ represents the different clearing rounds and has the same number of elements as $\mathcal{B}$: $|\mathcal{I}| = |\mathcal{B}|$. In \eqref{eq:UL_LL}, $C_i$ represents round $i$ of the continuous clearing; its expression is detailed below. The auxiliary variables $P_{i,r}$, $P_{i,o}$ and $AR_{i,k}$ are defined to describe what happens at each round of the market clearing. They sum up to the total quantities, as given in \eqref{eq:UL_Pr}, \eqref{eq:UL_Po} and \eqref{eq:UL_ARsum}. Finally, $s_{i,b}$ are introduced as binary variables in \eqref{eq:UL_bin} to define the sequence for submitting the offers: when $s_{i,b}$ equals 1, bid $b$ is submitted in round $i$. The last constraints \eqref{eq:UL_bin1} and \eqref{eq:UL_bin2} ensure that exactly one bid is submitted per round and that each bid is submitted only once.
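For very small instances, the bounds targeted by this bilevel program can be cross-checked by exhaustively enumerating arrival sequences. The sketch below uses a simplified greedy matcher without network constraints (all identifiers, prices and quantities are illustrative):

```python
import itertools

def clear_sequence(sequence, requests):
    """Replay one arrival sequence through a simplified greedy continuous
    clearing (no network constraints). Block bids ("block": True) are
    all-or-nothing: they match only if the open requests can absorb their
    full quantity; otherwise they stay unmatched."""
    reqs = [dict(r) for r in requests]
    sw = 0.0
    for off in sequence:
        open_reqs = sorted(
            (r for r in reqs if r["qty"] > 0 and r["price"] >= off["price"]),
            key=lambda r: -r["price"])
        if off.get("block") and sum(r["qty"] for r in open_reqs) < off["qty"]:
            continue  # AoN condition not met, block bid stays unmatched
        qty = off["qty"]
        for r in open_reqs:
            traded = min(qty, r["qty"])
            sw += traded * (r["price"] - off["price"])
            r["qty"] -= traded
            qty -= traded
            if qty == 0:
                break
    return sw

# Illustrative bid set: two single offers and one all-or-nothing block bid.
offers = [
    {"price": 0.05, "qty": 1.0},
    {"price": 0.20, "qty": 1.0},
    {"price": 0.02, "qty": 2.0, "block": True},
]
requests = [{"price": 0.30, "qty": 2.0}, {"price": 0.10, "qty": 1.0}]

welfares = [clear_sequence(p, requests)
            for p in itertools.permutations(offers)]
best, worst = max(welfares), min(welfares)
```

On this toy instance, the sequences in which the block bid is cleared before the expensive single offer attain the highest welfare, which is the kind of best/worst bound the bilevel program computes; enumeration, of course, does not scale beyond a handful of bids.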
The continuous matching $C_i$ can also be described by an optimization problem: \small \begin{subequations} \begingroup \allowdisplaybreaks \begin{align} & \underset{\mathbf{y}}{\max} && \sum_{r \in \mathcal{R}} \lambda _{r} P_{i,r} - \sum_{o \in \mathcal{O}} \lambda _{o} P_{i,o} \notag \\ & && - \sum_{k \in \mathcal{K}} AR_{i,k} \left (\lambda _{k}^\text{U} P_{k}^\text{U,max} + \lambda _{k}^\text{D} P_{k}^\text{D,max}\right) \label{eq:LL_obj}\\ & \text{s.t.} && \sum_{o \in \mathcal{O}_{t}^\text{U}} P_{i,o} + \sum_{k \in \mathcal{K}_{t}} P_{i,k}^\text{U} = \sum_{r \in \mathcal{R}_{t}^\text{U}} P_{i,r}, \quad \forall{t} \in \mathcal{T} \label{eq:LL_sumU} \\ & && \sum_{o \in \mathcal{O}_{t}^{\text{D}}} P_{i,o} + \sum_{k \in \mathcal{K}_{t}} P_{i,k}^\text{D} = \sum_{r \in \mathcal{R}_{t}^\text{D}} P_{i,r}, \quad \forall{t} \in \mathcal{T} \label{eq:LL_sumD}\\ & && P_{n,t}^{\text{S}} + \sum_{j \in \mathcal{I}, j\leq i} \left ( \sum_{o \in \mathcal{O}_{n,t}^{U}} P_{j,o} - \sum_{o \in \mathcal{O}_{n,t}^{D}} P_{j,o} \right. \notag\\ & && \left. 
+ \sum_{k \in \mathcal{K}_{n,t}} (P_{j,k}^\text{U} - P_{j,k}^\text{D}) - \sum_{r \in \mathcal{R}_{n,t}^\text{U}} P_{j,r} + \sum_{r \in \mathcal{R}_{n,t}^\text{D}} P_{j,r} \right ) \notag\\ & && - \sum_{m \in \Omega _{n}}b_{n,m}\left ( \delta_{i,n,t} - \delta_{i,m,t}\right ) = 0, \quad \forall{n} \in \mathcal{N}, \forall{t} \in \mathcal{T} \label{eq:LL_net1}\\ & && \delta_{i,\text{ref},t} = 0 \quad \forall {t} \in \mathcal{T} \label{eq:LL_net2}\\ & && -P_{n,m}^{\text{lim}}\leq b_{n,m} \left ( \delta_{i,n,t} - \delta_{i,m,t}\right )\leq P_{n,m}^{\text{lim}} \notag\\ & && \quad \forall n \in \mathcal{N}, m \in \Omega_{n}, \forall {t} \in \mathcal{T} \label{eq:LL_net3}\\ & && 0 \leq P_{i,r} \leq P_r^\text{max} - \sum_{j \in \mathcal{I}, j < i} P_{j,r}, \quad \forall{r} \in \mathcal{R} \label{eq:LL_prb}\\ & && 0 \leq P_{i,o} \leq \left ( \sum_{j \in \mathcal{I}, j \leq i} s_{j,o} \right) (P_o^\text{max} - \sum_{j \in \mathcal{I}, j < i} P_{j,o}), \notag\\ & && \quad \forall{o} \in \mathcal{O} \label{eq:LL_pob}\\ & && P_{i,k}^{\text{U}} = AR_{i,k} s_{i,k} P_{k}^\text{U,max}, \quad \forall{k} \in \mathcal{K} \label{eq:LL_bbu}\\ & && P_{i,k}^{\text{D}} = AR_{i,k} s_{i,k} P_{k}^\text{D,max}, \quad \forall{k} \in \mathcal{K} \label{eq:LL_bbd}\\ & && AR_{i,k} \in \left\{0,1 \right\} \quad \forall k \in \mathcal{K} \label{eq:LL_AR_bin} \end{align} \endgroup \label{eq:problem2bis} \end{subequations} \normalsize where $\mathbf{y}=\{P_{i,r},P_{i,o},AR_{i,k}, P_{i,k}^\text{U}, P_{i,k}^\text{D}, \delta_{i,n,t}\}$. Whenever a new bid is submitted, the continuous market matches the bids in order to maximise the social welfare for the current match, which is given by the objective function \eqref{eq:LL_obj}.
Equations \eqref{eq:LL_sumU} and \eqref{eq:LL_sumD} ensure that offers can only match with requests, where $\mathcal{O}_{t}^{\text{U}}$, $\mathcal{O}_{t}^{\text{D}}$, $\mathcal{R}_{t}^{\text{U}}$, $\mathcal{R}_{t}^{\text{D}}$ and $\mathcal{K}_{t}$ are the subsets of $\mathcal{O}$, $\mathcal{R}$ and $\mathcal{K}$ containing the upward (U) and downward (D) bids submitted for time period $t$. The network constraints are given in \eqref{eq:LL_net1}, \eqref{eq:LL_net2} and \eqref{eq:LL_net3}, similarly to the previous sections. The main difference here is that the initial setpoint is modified by all the offers and requests accepted in previous rounds of the continuous market clearing. Constraints \eqref{eq:LL_prb} and \eqref{eq:LL_pob} give the bounds for each request and single offer: the maximum is the quantity bid minus what has already been accepted. Offers can only be accepted if they were previously submitted, which is why the sum over the binary variables is needed. Regarding the block bids, the all-or-nothing condition is ensured by \eqref{eq:LL_bbu}, \eqref{eq:LL_bbd} and \eqref{eq:LL_AR_bin}. Note that with this formulation we do not consider that, for two potential matches at the same price, the oldest bid would be prioritised. The formulation could be extended to include this; however, it is not critical, as we are mostly interested in determining the total resulting social welfare, which is not impacted by it. The resulting bilevel problem can be reformulated as a single-level problem in order to be solved. To do this, it is necessary to relax the binary conditions \eqref{eq:LL_AR_bin}, as done in \cite{ye2019}. The binary conditions can be reintroduced in the resulting problem, but the reformulation is then no longer exact. Solving the problem with the relaxed binary constraints gives a lower bound on the worst social welfare. It is exactly equal to the worst social welfare (i.e.
the relaxation is tight) if the corresponding variables only take the values 0 and 1 at the optimal point. Otherwise, it is possible to reintroduce the binary constraints, and the exact worst social welfare will then lie between the two values obtained. \subsection{Results for a Small Test-Case} We solved this problem on a 5-bus test network, with 5 bids submitted over two time periods, including one block bid. The corresponding code and data are available online \cite{github}. To reformulate the problem, the KKT conditions of the relaxed lower level are used, and the big-M method is applied to linearise the complementarity constraints. The solutions found for the relaxed problem are feasible for the original problem, which guarantees their optimality. For this system, we determine that the social welfare will be between 83\% and 100\% of the value obtained with auction-based clearing for the same set of bids. The proposed algorithm is computationally heavy, with a complexity that increases exponentially with the number of bids: the number of binary variables and the number of lower-level problems to be solved are both equal to the square of the number of offer bids considered. Future work will focus on reducing this complexity. \section{Discussion} \label{sec:discussion} \subsection{Models Formulation} Using auction-based or continuous clearing in LFMs has different implications besides the market outcome. The main distinction lies in their formulation. On the one hand, the continuous matching algorithm constantly compares the incoming bids with the unmatched ones previously submitted, performs a network check every time there is a potential match, and updates the network setpoint when a match is feasible. It also identifies and stores all candidate matches for the block bids and verifies the status of all the offers involved whenever one of them is matched. All of these processes increase the algorithm's complexity, but bids are matched as soon as possible.
Flexibility requests and offers can be submitted close to real time, which allows the latest forecasts to be used. On the other hand, the auction-based market is defined as an optimisation problem focused on minimising the total system costs. It is not straightforward to solve, though, as (i) it is a MILP and (ii) it implies clearing all the time periods at once to accommodate the bids that link several periods. Thus, all the bids must be submitted before the market clearing, regardless of their time target. While continuous clearing can be performed in an online fashion, auction-based clearing requires the definition of a market time horizon. \subsection{Block Bids} Block bids are mainly introduced in the LFM to model the rebound effect experienced by certain flexible loads. In order to handle the technical requirements of flexibility providers, they are subject to the AoN acceptance condition. This condition is also attached to block bids in most intraday markets for the sake of simplicity. Block bids could also come with less restrictive conditions, such as allowing partial acceptance or imposing a minimum acceptance ratio. Such block bids could allow more flexibility trades and also model other providers' preferences. However, introducing different types of block bids would significantly increase the computational time of the market clearing, and a short clearing time is a very valuable property for markets operating close to real time. For this reason, we decided to allow only fully accepted or rejected block bids. Moreover, the other types of block bids mentioned can be emulated by submitting several smaller block bids or by combining single bids. \section{Conclusion} \label{sec:ccl} This paper has two main contributions. First, it introduces a continuous market clearing algorithm for local flexibility markets which considers, for the first time to our knowledge, both network constraints and block bids.
Continuous markets can allow a higher market liquidity, as flexibility offers and requests are matched continuously, and can result in larger trading volumes. The algorithm we propose in this paper focuses on energy flexibility markets in the distribution grid. Such markets have the potential to harness the available flexibility in the distribution system and defer costly network reinforcements. During the online matching, our proposed algorithm considers the network constraints to guarantee the technical feasibility of the trades and to avoid causing or aggravating congestions in the distribution grid. At the same time, it enables block bids, which increases the pool of available flexibility, as it promotes the participation of potential flexibility suppliers that suffer a rebound effect when providing the service. The second main contribution of this paper is the design of an algorithm that determines the upper and lower bounds of the sub-optimality of a continuous energy market compared with its auction-based counterpart. Compared to an auction-based clearing, which considers all flexibility offers and requests for the total time horizon at once, continuous markets result by design in a lower social welfare; this is often seen as an acceptable shortcoming by regulators and market operators, given the increased liquidity that continuous markets often achieve and their ability to operate close to real time. This paper formulates an optimisation algorithm that can determine the worst and best arrival sequences of the bids in a continuous market and, through that, the best and worst performance in terms of social welfare compared to an auction-based market. Considering a known set of possible bids, our results on a 5-bus system show that the continuous market (which includes both network constraints and block bids) will result in a social welfare between 83\% and 100\% of the social welfare that an auction-based market can achieve.
Besides the design of an algorithm, we also carried out a series of simulations on a larger system to investigate the difference in social welfare and traded volume between the two market clearing models under a series of different conditions. Running our model for 100 different bid arrival sequences, we find that the average social welfare is 88.6\% of the optimal. Regarding liquidity, we find that with single bids it is very likely to end up with a higher energy volume traded through continuous clearing. This is not the case, however, when we allow the trading of block bids, where the average energy volume traded drops to 89.5\% of that obtained with auction-based clearing. \bibliographystyle{IEEEtran}
\section{DirILP Formulation - Special Case} \label{appx:DirILP_special} The ILP Formulation for DirILP when $\kappa_{ic} = \frac{1}{\sigma^2}$ for every source $(i,c)$ is given below. The objective function we want to minimize is given by \begin{equation} \min \sum_o \bigg(\;\;p^o + \sum_p a_p w_p^o + t^o \bigg) \end{equation} The following constraints restrict $x_{ic}^o, y_{ic, i'c'}^o, z_k^o, w_p^o$ to binary variables and $t^o$ to have non-negative values. \begin{equation} x_{ic}^o, y_{ic, i'c'}^o, z_k^o, w_p^o \in \mathbb{Z} \text{ and } 0 \leq x_{ic}^o, y_{ic, i'c'}^o, z_k^o, w_p^o \leq 1, \; 0 \leq t^o, \; \; \forall (i,c), k, p, o. \end{equation} The next equation ensures that every source $(i,c)$ belongs to exactly one subset: \begin{equation}\label{eq:DirILP_special_l1}\sum_o x_{ic}^o = 1, \;\;\forall (i,c) \end{equation} The following equation imposes that every subset takes no more than $1$ source from each catalog. \begin{equation}\label{eq:DirILP_special_l2}\sum_i x_{ic}^o \leq 1, \;\;\forall o \in \{1, 2, ..., N\},\;\; \forall c \in \{1, \ldots, C\} \end{equation} The following set of constraints on $y_{ic, i'c'}^o$ is an implementation of the definition of $y_{ic, i'c'}^o$ in Section~\ref{sec:DirILP_setup}, which requires $y_{ic, i'c'}^o = 1$ only if $x^o_{ic} = x^o_{i'c'} = 1$: \begin{gather}\label{eq:DirILP_special_l3} y_{ic, i'c'}^o \geq \; x^o_{ic} + x^o_{i'c'} - 1, \\ y_{ic, i'c'}^o \leq x_{ic}^o, \\ y_{ic, i'c'}^o \leq x_{i'c'}^o, \end{gather} for all $(i,c) \neq (i',c') \text{ and } \forall o$. Since the cardinality of any subset from a partition $P$ is between $0$ and $C,$ the following equation states that exactly one of the $z_k^o$ takes the value $1$. \begin{equation}\sum_{k=0}^C z_k^o = 1, \forall o, \label{eq:DirILP_special_l4} \end{equation} The next constraint is the definition of $w_p^o$ as described in Section~\ref{sec:DirILP_setup}.
\begin{equation} w_1^o \geq w_2^o \geq \cdots \geq w^o_C \text{ and } \sum_{p=1}^C w_p^o = \sum_{ic} x_{ic}^o, \;\; \forall o, \label{eq:DirILP_special_l5} \end{equation} With the specific choice of the constant $M$ as defined below, the equation that follows becomes redundant when $z^o_k = 0$, since the RHS is then non-positive and $t^o \geq 0$ becomes the enforcing constraint; when $z^o_k = 1$, the minimization forces $t^o$ to be equal to the first term of the RHS. \begin{equation}t^o \geq \frac{\sum\kappa\psi_{ic,i'c'}^2 y_{ic, i'c'}^o}{4k} - (1 - z_k^o) M,\;\; \forall o \text{ and } k \in \{1, 2, \cdots, C\}, \label{eq:DirILP_special_l6} \end{equation} where $M = \bigg\lceil \sum\limits_{ic, i'c' \in D}\frac{\kappa\psi^2_{ic,i'c'}}{4} \bigg\rceil$. The following set of equations constitutes the definition of $z_k^o.$ \begin{gather} \sum_{ic}x_{ic}^o \leq k z_k^o + C(1 - z_k^o)\label{eq:DirILP_special_l7}\\ \sum_{ic}x_{ic}^o \geq k z_k^o,\label{eq:DirILP_special_l7-II} \end{gather} for all $k \in \{0, 1, 2, \cdots, C\}$ and for all $o$. Finally, the last equation \begin{equation} p^o \geq \ln(2\kappa)(1 - \sum_p w_p^o) - \ln(2\kappa)z_0^o, \;\; \forall o \label{eq:DirILP_special_l8}, \end{equation} ensures that for an empty subset $S_o$, $p^o = 0$, hence contributing nothing to the objective. This is because when $z_0^o = 1$ (nothing is assigned to subset $S_o$), $w_p^o = 0, \forall p$. As we are minimizing the objective function with respect to $p^o$, $p^o$ will be set to $0$. On the other hand, when $z_0^o = 0$, the constraint becomes $p^o \geq \ln(2\kappa)(1 - \sum_p w_p^o)$ and again, since we are minimizing, $p^o$ will equal this value. \section{DirILP Formulation - General Case} \label{appx:DirILP_general} Below, we give the ILP Formulation for DirILP when $\kappa_{ic}$ is different for distinct sources $(i,c)$.
Some of these constraints are similar to the special case so we will only give explanations for the new constraints, which are shown after the ILP formulation. We follow the notation introduced in Section~\ref{sec:DirILP_general}. In particular, recall the constants $c_0, c_1, \ldots, c_Q$ designed to model, up to the nearest 100, the sum of subsets of uncertainties $\kappa_{ic}$, the associated decision variables $u^o_k$ and the decision variables $\chi^o_P$ to model the logarithms of such sums. The objective in the general case is \begin{equation}\label{eq:obj-general} \min \sum_o \bigg(\;\;p^o - \sum_{ic}x^o_{ic}\ln\kappa_{ic} + \chi_1^o b_{\min} + \epsilon\sum_{p=2}^R \chi_p^o + t^o \bigg) \end{equation} As in the special case, the following constraints on the variables restrict $x_{ic}^o, y_{ic, i'c'}^o, z_k^o$ to binary variables and $t^o$ to have non-negative values. Additionally, the new variables $\chi_p^o, u^o_k$ are also restricted to binary values. $$x_{ic}^o, y_{ic, i'c'}^o, z_k^o, \chi_p^o, u^o_k \in \mathbb{Z} \text{ and } 0 \leq x_{ic}^o, y_{ic, i'c'}^o, z_k^o, \chi_p^o, u^o_k \leq 1, \;\; 0 \leq t^o.$$ \medskip Next, constraints~\eqref{eq:DirILP_special_l1}--\eqref{eq:DirILP_special_l4} and~\eqref{eq:DirILP_special_l7}--\eqref{eq:DirILP_special_l7-II} are included verbatim. \medskip The following impose the conditions required on $\chi^o_p$ as described in Section~\ref{sec:DirILP_general}. \begin{equation}\label{eq:DirILP_general_l2} \chi_1^o \geq \chi_2^o \geq \cdots \geq \chi^o_R \text{ and } \chi_1^o \exp(b_1) + \sum_{p=2}^R \chi_p^o (\exp(b_p) - \exp(b_{p-1})) \geq \sum_{ic} \kappa_{ic} x_{ic}^o, \;\; \forall o. 
\end{equation} \medskip The next set of constraints ensures that the value of $\sum_{ic \in S_o} \kappa_{ic}$ will be approximately equal to $c_k$ (up to the nearest 100) for some $k \in \{0, 1, 2, \cdots, Q\}.$ \begin{equation} \sum_{k=0}^Q u_k^o = 1, \;\;\forall o,\label{eq:DirILP_general_l1} \end{equation} \begin{equation}\label{eq:DirILP_general_l3} \begin{array}{l} \sum_{ic}[\kappa_{ic}]_{_{100}}x_{ic}^o \leq c_k u_k^o + M'(1 - u_k^o)\\ \sum_{ic}[\kappa_{ic}]_{_{100}}x_{ic}^o \geq c_k u_k^o \end{array}, \;\;\forall k \in \{0, 1, 2, \cdots, Q\} \text{ and } \forall o, \end{equation} where $M' = C\max_{ic \in D}\kappa_{ic}.$ Finally, the last set of constraints ties everything back into the objective function~\eqref{eq:obj-general}. \begin{gather} t^o \geq \frac{\sum_{ic}\sum_{i'c'}\kappa_{ic}\kappa_{i'c'}\psi_{ic,i'c'}^2 y_{ic, i'c'}^o}{4c_k} - (1 - u_k^o) M,\;\; \forall o \text{ and } k \in \{1, 2, \cdots, Q\}, \\ p^o \geq (1 - \sum_{ic} x_{ic}^o)\ln2 - z_0^o\ln2, \;\; \forall o, \end{gather} where $M = \Bigg\lceil \frac{\max_{ic \in D}\kappa_{ic}^2\sum\limits_{ic,i'c' \in D} \psi_{ic,i'c'}^2}{4\min_{ic \in D} \kappa_{ic}} \Bigg\rceil$. \section{Motivation} Several approaches have been proposed over the years to combine observations across telescopes and epochs. In the era of time-domain astronomy, where dozens or hundreds of observation epochs are available, these problems are more important than ever. In particular, combining catalogs has been a central issue: detections in separate exposures are matched by identifying the ones that correspond to the same celestial object. Several tools were developed to provide solutions to the cross-matching problem, such as TOPCAT \citep{taylor2015} and CDS XMatch \citep{Pineau_2011, boch2012}. However, they do not consider the statistical aspect of the problem.
This cross-identification problem was successfully addressed using Bayesian hypothesis testing by \citet{budavari_szalay_2008}, whose methodology was implemented in the latest version of the SkyQuery service \citep{skyquery}, which is now part of the SciServer Science Platform \citep{sciserver}. The Bayesian formalism and the combinatorial nature of the problem are discussed in a review by \citet{budavari_loredo_2015}. The first solution came from \citet{budavari_basu_2016}, who formulated the matching problem as a search for globally optimal associations using combinatorial optimization, where the marginal likelihood of the entire matched catalog is maximized, and used the Hungarian algorithm \citep{Munkres57hunalg} to solve it. After that proof of concept was developed for two catalogs, \cite{shi2019probabilistic} extended the algorithm to handle multiple catalogs using Integer Linear Programming, or ILP for short. For simplicity, we will refer to this method as {\em{}CanILP}, as it enumerates all possible candidate associations and uses ILP to choose the best valid subset. As we will discuss later, the method suggested in~\cite{shi2019probabilistic} does not scale very well with a large number of catalogs. This scaling problem is also observed in \citet{Pineau_2017}, where the authors try to estimate the probability, for all combinations of sources, that a tuple of sources from different catalogs corresponds to the same object. The exhaustive search results in an exponential growth in the number of possible tuples as the number of catalogs increases. They also note that this approach is not feasible in practice for more than 9 catalogs. In this paper, we improve on the previous studies by introducing a novel formulation, hereafter referred to as {\em DirILP}, where we use ILP to directly assign detections to hypothesized objects.
Section~\ref{sec:theory} describes the new approach, and Section~\ref{sec:result_analysis} illustrates how the new method scales better with the number of input catalogs. Section~\ref{sec:software} discusses a public software tool to solve the catalog matching problem. Section~\ref{sec:future} concludes the study. \section{Our Approach}\label{sec:theory} To quantify the associations among independent detections, a relatively recent approach was developed that uses a hierarchical Bayesian formalism. Suppose there are $C$ catalogs, indexed by $c \in \{1, \ldots, C\}$, with catalog $c$ containing $N_c$ sources. Let $D_{ic}$ denote the measurements for source $i$ in catalog $c$, hereafter denoted by tuple $(i,c)$. Associated with any $(i,c)$ measurement is a likelihood function $\ell_{ic}(\omega) = p(D_{ic}|\omega),$ for the unknown true direction $\omega$, which captures the astrometric uncertainty. While other object properties could also be considered in general, such as their brightness or colors, here we focus on directional matching only. Here we adopt the definition for matching used in previous papers \citep[e.g.,][]{budavari_szalay_2008, budavari_loredo_2015, budavari_basu_2016, shi2019probabilistic} where one tests whether two or more detections correspond to the same physical object. That said, it is possible to define the ``match'' hypothesis such that detections are not of the ``same'' object but instead just ``part of'' another, e.g., an optical galaxy in an X-ray cluster, or a star in a blend of two.
In theory such scenarios can be accommodated (by changing the marginal likelihood calculations, see below), but the computation requirements might increase, and the interpretation of the matched catalog would be more difficult.% \footnote{Further complications emerge when different blends are to be matched, in which case one considers whether the detections ``share'' components, e.g., a common star in two different blends.} Our association approach described below is flexible and will yield matches based on the underlying model. Associations are created by grouping all sources in all catalogs such that each belongs to only one group. Mathematically, a \textit{partition} $P$ is created of the data set $D$, the union of all sources in all catalogs, where each subset corresponds to the same celestial object. The number of subsets in the partition will constitute the number of hypothesized objects $N_{\rm{}obj}$, which is an unknown but bounded quantity that is less or equal to the number of all sources. Typically it is much less as the equality would mean that every source is in fact a separate object altogether. We can index every object by an integer \mbox{$o \in \{1, \ldots, N_{\rm{}obj}\}$}. Let $S_o$ be the set of sources $(i,c)$ associated with object $o$ and $C_o$ be the list of catalogs containing sources associated with object $o$. Following \citet{budavari_loredo_2015}, the likelihood of a partition $P$ of all sources, a collection of the $S_o$ subsets, will be a product of conditionally independent terms, \begin{equation} \mathcal{L}(P) \equiv p(D|P) = \prod_o \mathcal{M}_o, \end{equation} where the marginal likelihood $\mathcal{M}_o$ for the association corresponding to object $o$ is \begin{equation} \mathcal{M}_o = \int d\omega\, \rho_{C_o}(\omega)\!\!\prod\limits_{(i,c) \in S_o}\ell_{ic}(\omega). 
\end{equation} Here $\rho_{C_o}(\omega)$ is the prior probability density function of the object direction producing sources within (the footprint of) every catalog in the set $C_o$. This notation enables the treatment of catalogs with different sky coverage, whose angular selection function would enter the prior on the latent direction $\omega$. Technically, the $\rho_{C_o}(\omega)$ function is not simply the angular selection function of the intersection area of the catalogs, because it also incorporates the astrometric uncertainty: there is a non-zero probability of observing a source within a given footprint even if its true direction is outside the field of view. This effect is negligible when the field of view is large in comparison to its boundary blurred by the astrometric uncertainty, which is the case for typical observations and surveys; it would not apply, however, if a catalog were an aggregation of disjoint sky patches comparable in size to the point-spread function. Alternatively, one can define the marginal likelihood for a non-association hypothesis, which assumes that every source in $S_o$ is a separate object on its own \begin{equation} \mathcal{M}_o^{\rm{}N\!A} = \prod\limits_{(i,c) \in S_o}\int d\omega\, \rho_{c}(\omega)\,\ell_{ic}(\omega), \end{equation} where $\rho_{c}(\omega)$ is the prior probability density function of direction for sources in catalog $c$. This hypothesis serves as a natural comparison, with which it is useful to introduce the Bayes factor as the ratio of the marginal likelihoods of these two cases, \begin{equation} B_o = \frac{\mathcal{M}_o}{\mathcal{M}_o^{\rm{}N\!A}}. \end{equation} The Bayes factor $B_o$ takes values larger than 1 when the association of the sources in $S_o$ is more likely than the alternative, while \mbox{$B_o<1$} favors separate objects. We note that \mbox{$\prod \mathcal{M}_o^{\rm{}N\!A}$} is simply a product over all sources in all catalogs independent of partition $P$ and, hence, is constant.
Consequently, the maximization of the likelihood ${\cal{}L}(P)$ is equivalent to optimizing $\prod B_o$. As customary, we work with a summary of the raw imaging data $D_{ic}$ for each detection $(i,c)$, the measured direction $x_{ic}$, which is essentially the intensity-weighted pixel direction. In order to calculate the Bayes factors $B_o$ with these measurements, we specify a distribution for the member likelihood function $\ell_{ic}(\omega)$, i.e., the astrometric uncertainty. To describe directional uncertainty in the observations, the spherical analog of the Gaussian is often assumed, the \citet{Fisher1953} distribution, \begin{equation} \ell_{ic}(\omega) \coloneqq f(x_{ic};\omega, \kappa_{ic}) = \frac{\kappa_{ic}}{4\pi\sinh{\kappa_{ic}}}\exp\left({\kappa_{ic}\,\omega\cdot x_{ic}}\right), \end{equation} where the $x_{ic}$ observed direction and true $\omega$ direction are both 3D unit vectors. The latter is the mode of the distribution, and $\kappa_{ic}$ is a concentration parameter. When $\kappa_{ic} \gg 1,$ the Fisher distribution approximates a Gaussian distribution with standard deviation (in radians) for each coordinate $\sigma_{ic}$ with \mbox{$\kappa_{ic} = 1/\sigma_{ic}^2$} and the (all-sky) Bayes factor can be calculated analytically as shown in \cite{budavari_szalay_2008} as follows, \begin{equation}\label{Bayes_factor} B_o = 2^{\vert S_o \vert - 1} \frac{\prod_{ic} \kappa_{ic}}{\sum_{ic} \kappa_{ic}} \exp{\left(-\frac{\sum_{ic}\sum_{i'c'}\kappa_{ic}\kappa_{i'c'}\psi_{ic,i'c'}^2}{4\sum_{ic}\kappa_{ic}}\right)}, \end{equation} where $(i,c)$ and $(i',c')$ are all sources in subset $S_o$ and $\psi_{ic,i'c'}$ is the (small) angle between the directions for sources $(i,c)$ and $(i',c')$. The next section discusses how to find the globally optimal associations, i.e., the partition $P$ that maximizes the ${\cal{}L}(P)$ likelihood function by optimizing $\prod B_o$ using integer linear programming. 
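For concreteness, the all-sky Bayes factor above can be evaluated directly from the measured unit vectors and concentration parameters. The following is a minimal sketch (the function name and data layout are ours, not taken from any published code):

```python
import math

def bayes_factor(dirs, kappas):
    """All-sky Bayes factor for a candidate association S_o.

    dirs   : list of 3D unit vectors (measured directions x_ic)
    kappas : list of concentration parameters kappa_ic = 1/sigma_ic^2
    """
    n = len(dirs)
    kappa_sum = sum(kappas)
    # Double sum of kappa_ic * kappa_i'c' * psi^2 over ordered pairs;
    # psi is the (small) angle between the two measured directions.
    quad = 0.0
    for i in range(n):
        for j in range(n):
            dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(dirs[i], dirs[j]))))
            psi = math.acos(dot)
            quad += kappas[i] * kappas[j] * psi**2
    prod_kappa = math.prod(kappas)
    return 2**(n - 1) * (prod_kappa / kappa_sum) * math.exp(-quad / (4 * kappa_sum))
```

As a sanity check, a singleton subset (an orphan) gives $B_o = 1$, and two coincident detections with equal $\kappa$ give $B_o = \kappa$, in agreement with the formula.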
\subsection{CanILP: Optimal Selection of Candidates}\label{sec:CanILP_setup} First we summarize the previous approach introduced by \citet{shi2019probabilistic} and highlight some of the outstanding challenges. Maximizing $\prod B_o$ is equivalent to minimizing \begin{equation} -\sum_{o} \ln B_o. \end{equation} Given a data set $D$ of all $(i, c)$ pairs, over every catalog $c$ and every source $i$ in catalog $c$, we introduce a binary variable $x_T$ taking values in $\{0, 1\}$ for each nonempty subset $T \subseteq D$, with the interpretation that $x_T = 1$ indicates that the subset $T$ is included in the partition. To ensure the validity of the partition, we require \begin{equation} \sum\limits_{T \ni (i,c)} x_T = 1 \end{equation} for every element $(i,c) \in D.$ This forces every source $(i,c)$ to be included in exactly one subset of the partition. However, note that for an orphan $o,$ $B_o = 1.$ Hence, singleton subsets do not contribute to the objective function, and we can simply remove those subsets $T$ that have $\lvert T \rvert = 1.$ Accordingly, we can relax the above constraint to \begin{equation} \sum\limits_{T \ni (i,c)} x_T \leq 1 \end{equation} for every element $(i,c) \in D.$ In the final solution, if a source $(i,c)$ does not appear in any subset $T,$ we treat it as an orphan. For example, in Figure \ref{fig:CanILP_eg}, \texttt{Source (2,1)} is not included in any subset $T$, so, in the solution, we will include it as an orphan. By defining $w_T = -\ln B_T$ for every subset $T$, the final integer linear program can be written as follows, \begin{gather} \min \sum_T w_T x_T \nonumber \\ \text{subject to } x_T \in \mathbb{Z} \text{ and } 0 \leq x_T \leq 1 \text{ for all } T, \nonumber \\ \text{and } \sum_{T \ni (i,c)} x_T \leq 1 \text{ for all } (i,c) \in D.
\end{gather} \iffalse \begin{equation} \min_x \sum_T w_T x_T \end{equation} subject to \begin{equation} x_T \in \mathbf{Z} \text{ and } 0 \leq x_T \leq 1 \text{ for all } T, \end{equation} \begin{equation} \sum_{T \ni (i,c)} x_T \leq 1 \text{ for all } (i,c) \in D \end{equation}. \fi Note that the formulation above can be used to solve the matching problem given any number of catalogs $C$ but requires an enumeration of all possible candidate associations, which can be numerous. This requires calculations of combinatorially many $w_T$ and the introduction of the corresponding $x_T$ binary variables, which quickly becomes prohibitively expensive for many catalogs. \cite{shi2019probabilistic} demonstrated their approach on three catalogs with identical astrometric uncertainty. Here we extend their work and describe a novel approach, which can be efficiently used with many catalogs. \tikzstyle{catalog} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm,text centered, draw=black, fill=red!30] \tikzstyle{source} = [circle, text centered, draw=black, fill=orange!30] \tikzstyle{object} = [circle, text centered, draw=black, fill=blue!30] \tikzstyle{assoc} = [diamond, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=white!30] \tikzstyle{activate} = [diamond, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=green!30] \tikzstyle{arrow} = [thick,->,>=stealth] \begin{figure*} \centering \begin{tikzpicture}[node distance=1.7cm, every node/.style={scale=0.8}] \node (catalog_1) [catalog] {\textsc{\textbf{Catalog 1}}}; \node (source_11) [source, below of=catalog_1, xshift = 1.5cm] {\textsc{Source (1,1)}}; \node (source_21) [source, below of=source_11, yshift = -0.5cm] {\textsc{Source (2,1)}}; \node (catalog_2) [catalog, below of=source_21, xshift = -1.5cm, yshift=-0.2cm] {\textsc{\textbf{Catalog 2}}}; \node (source_12) [source, below of=catalog_2, xshift = 1.5cm] {\textsc{Source (1,2)}}; \node (catalog_3) [catalog, below 
of=source_12, xshift = -1.5cm, yshift=-0.2cm] {\textsc{\textbf{Catalog 3}}}; \node (source_13) [source, below of=catalog_3, xshift = 1.5cm] {\textsc{Source (1,3)}}; \node (source_23) [source, below of=source_13, yshift = -0.5cm] {\textsc{Source (2,3)}}; \node (assoc_1) [assoc, right of=source_11, xshift = 2.5cm] {\{(1,1),(1,2)\}}; \node (assoc_2) [assoc, right of=assoc_1, xshift = 2cm] {\{(2,1),(1,2)\}}; \node (assoc_3) [assoc, right of=assoc_2, xshift = 2cm] {\{(1,1),(1,3)\}}; \node (assoc_4) [assoc, below of=assoc_1, yshift = -1cm] {\{(2,1),(1,3)\}}; \node (assoc_5) [activate, below of=assoc_2, yshift = -1cm] {\{(1,1),(2,3)\}}; \node (assoc_6) [assoc, below of=assoc_3, yshift = -1cm] {\{(2,1),(2,3)\}}; \node (assoc_7) [activate, below of=assoc_4, yshift = -1cm] {\{(1,2),(1,3)\}}; \node (assoc_8) [assoc, below of=assoc_5, yshift = -1cm] {\{(1,2),(2,3)\}}; \node (assoc_9) [assoc, below of=assoc_6, yshift = -1cm] {\{(1,1),(1,2),(1,3)\}}; \node (assoc_10) [assoc, below of=assoc_7, yshift = -1.5cm] {\{(1,1),(1,2),(2,3)\}}; \node (assoc_11) [assoc, below of=assoc_8, yshift = -1.5cm] {\{(2,1),(1,2),(1,3)\}}; \node (assoc_12) [assoc, below of=assoc_9, yshift = -1.5cm] {\{(2,1),(1,2),(2,3)\}}; \draw [dashed] (2.8,0) -- (2.8,-10.5); \end{tikzpicture} \caption{An illustration of CanILP. As can be seen on the left side, we assume there are $2$ detections from Catalog 1 (\texttt{Sources (1,1)} and \texttt{(2,1)}), $1$ detection from Catalog 2 (\texttt{Source (1,2)}) and $2$ detections from Catalog 3 (\texttt{Sources (1,3)} and \texttt{(2,3)}). In CanILP, we list all candidates for possible associations across independent detections, which are shown on the right side. These are the $x_T$ in the formulation. We then find the combinations of subsets that maximize the overall likelihood. Here, the solution given by CanILP indicates that the subsets $\{(1,1),(2,3)\}$ and $\{(1,2),(1,3)\}$ are included in the partition. 
These subsets, which are represented by a green color, correspond to the variables $x_{\{(1,1),(2,3)\}} = x_{\{(1,2),(1,3)\}} = 1$ in the model. On the other hand, all other variables $x_T = 0.$ Notice that because \texttt{Source (2,1)} does not appear in any of these subsets, we treat it as an orphan. As a result, the association output by CanILP is $\{\{(1,1),(2,3)\}, \{(1,2),(1,3)\}, \{(2,1)\}\}$.} \label{fig:CanILP_eg} \end{figure*} \subsection{DirILP: Optimal Direct Associations}\label{sec:DirILP_setup} The key idea is to introduce variables that directly assign the detections to hypothesized objects instead of simply switching on and off some previously enumerated candidate associations in the final matched catalog. The objective $\prod B_o$ is the same, but expressing it with the new variables is significantly more complicated than before. In the process one needs to introduce several additional (sets of) auxiliary variables to linearize the problem. In the case of homoscedasticity, when the astrometric uncertainty is the same for all detections, the linearization is relatively straightforward, but further modeling tricks are required in the general setting. In the following sections these two cases are introduced along with the variables needed to model and solve the global association problem. Further details are provided in the appendix about the derivation of the general heteroscedastic formalism. \subsubsection{Homoscedasticity} For simplicity, we first discuss the special case where the astrometric uncertainty of each detection is the same, i.e., \mbox{$\sigma_{ic}\!=\!\sigma$} for each source $(i,c)$. Given a data set $D$, let $N$ be the total number of detections in all catalogs considered. The number of astronomical objects these represent will be at most $N$, corresponding to the hypothesis that every detection comes from a different object. Our goal is to find a mapping that matches each source to one (and only one) object.
This association between a source and an object means that the source is an observation of that object in the sky. Naturally, multiple sources are expected to be assigned to the same object, which represents the hypothesis that all of these sources are observations of that same object. To capture the matching between a source $(i,c)$ and an object $o$, we introduce binary variables $\{x_{ic}^o\}$, where a given $x_{ic}^o=1$ if the $(i,c)$ detection is associated with object $o$, and 0 otherwise. Figure~\ref{fig:DirILP_eg} illustrates how this approach works. For example, the arrow from \texttt{Source (2,1)} to \texttt{Object 1} representing an association means that $x_{21}^1 = 1$. Similarly, $x_{11}^3 = 0$ means no association, hence there is no arrow between the corresponding entries. A partition $P$ can now be represented as a set \mbox{$\{S_o: o = 1,\dots, N\}$}, where $S_o$ is the subset of sources assigned to $o$, i.e., \begin{equation} S_o \coloneqq \left\{(i,c): x^o_{ic} = 1\right\}. \end{equation} If, for a given index $o$, \mbox{$x^o_{ic} = 0$} for all $(i,c)$ sources, then \mbox{$S_o = \emptyset$} is empty, which means object $o$ is not needed for that particular partition. 
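As a sketch of this bookkeeping, the toy solution of Figure~\ref{fig:DirILP_eg} can be encoded with the $x_{ic}^o$ variables and the subsets $S_o$ read off from them (the dictionary layout below is purely illustrative, not part of the formulation):

```python
# Hypothetical assignment x[(i, c)][o], mirroring the toy solution:
# sources (1,1),(2,3) -> object 4; (1,2),(1,3) -> object 3; (2,1) -> object 1.
x = {
    (1, 1): {4: 1}, (2, 1): {1: 1}, (1, 2): {3: 1},
    (1, 3): {3: 1}, (2, 3): {4: 1},
}
N = 5  # upper bound on the number of objects

def subsets_from_assignment(x, N):
    """Recover the partition {S_o} from the binary variables x_{ic}^o."""
    S = {o: set() for o in range(1, N + 1)}
    for ic, row in x.items():
        assert sum(row.values()) == 1  # each source assigned to exactly one object
        for o, val in row.items():
            if val:
                S[o].add(ic)
    # Empty objects are simply not needed for this partition.
    return {o: s for o, s in S.items() if s}

# e.g. {1: {(2, 1)}, 3: {(1, 2), (1, 3)}, 4: {(1, 1), (2, 3)}}
```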
\begin{figure} \centering \begin{tikzpicture}[node distance=2cm, every node/.style={scale=0.7}] \node (catalog_1) [catalog] {\textsc{\textbf{Catalog 1}}}; \node (source_11) [source, below of=catalog_1, xshift = 2cm] {\textsc{Source (1,1)}}; \node (source_21) [source, below of=source_11, yshift = -0.5cm] {\textsc{Source (2,1)}}; \node (catalog_2) [catalog, below of=source_21, xshift = -2cm] {\textsc{\textbf{Catalog 2}}}; \node (source_12) [source, below of=catalog_2, xshift = 2cm] {\textsc{Source (1,2)}}; \node (catalog_3) [catalog, below of=source_12, xshift = -2cm] {\textsc{\textbf{Catalog 3}}}; \node (source_13) [source, below of=catalog_3, xshift = 2cm] {\textsc{Source (1,3)}}; \node (source_23) [source, below of=source_13, yshift = -0.5cm] {\textsc{Source (2,3)}}; \node (object_1) [object, right of=source_21, xshift = 3cm, yshift = 2cm] {\textsc{Object 1}}; \node (object_2) [object, below of=object_1, yshift = -0.5cm] {\textsc{Object 2}}; \node (object_3) [object, below of=object_2, yshift = -0.5cm] {\textsc{Object 3}}; \node (object_4) [object, below of=object_3, yshift = -0.5cm] {\textsc{Object 4}}; \node (object_5) [object, below of=object_4, yshift = -0.5cm] {\textsc{Object 5}}; \draw [arrow] (source_11) -- (object_4); \draw [arrow] (source_21) -- (object_1); \draw [arrow] (source_12) -- (object_3); \draw [arrow] (source_13) -- (object_3); \draw [arrow] (source_23) -- (object_4); \end{tikzpicture} \caption{An illustration of DirILP. As in Figure \ref{fig:CanILP_eg}, assume there are $2$ detections from Catalog 1 (\texttt{Sources (1,1)} and \texttt{(2,1)}), $1$ detection from Catalog 2 (\texttt{Source (1,2)}) and $2$ detections from Catalog 3 (\texttt{Sources (1,3)} and \texttt{(2,3)}). In this case, the output of DirILP indicates that \texttt{Sources (1,1)} and \texttt{(2,3)} belong to the same object, that \texttt{Sources (1,2)} and \texttt{(1,3)} belong to the same object, and that \texttt{Source (2,1)} is an orphan. 
Notice that it is okay for an object to not have any source associated with it. The solution given by DirILP is $\{\{(1,1),(2,3)\}, \{(1,2),(1,3)\}, \{(2,1)\}\}$, which is the same as the one given by CanILP in Figure \ref{fig:CanILP_eg}.} \label{fig:DirILP_eg} \end{figure} Recall that the goal is to maximize the product of Bayes factors $\prod B_o$ (or to minimize $-\sum \ln B_o$) corresponding to these associations. Given an association $S_o$, assuming $\kappa_{ic} = \kappa$ for every source $(i,c)$, eq.~\eqref{Bayes_factor} gives us \begin{align} B_o &= 2^{\vert S_o \vert - 1} \frac{\prod_{ic} \kappa}{\sum_{ic} \kappa} \exp{\left(-\frac{\sum_{ic}\sum_{i'c'}\kappa^2\psi_{ic,i'c'}^2}{4\sum_{ic}\kappa}\right)} \\ & = 2^{\vert S_o \vert - 1} \frac{\kappa^{\vert S_o \vert}}{\vert S_o \vert \kappa} \exp{\left(-\frac{\kappa\sum_{ic}\sum_{i'c'}\psi_{ic,i'c'}^2}{4\vert S_o \vert}\right)} \end{align} Hence, \iffalse \begin{align} -\ln B_o &= (1 - \vert S_o\vert)\ln2 - \vert S_o \vert \ln\kappa + \ln \vert S_o\vert + \ln\kappa + \frac{\sum \limits_{ic,i'c'}\kappa\psi_{ic,i'c'}^2}{4\vert S_o\vert} \\ & = \ln(2\kappa)(1 - \vert S_o\vert) + \ln \vert S_o\vert + \frac{\sum \limits_{ic,i'c'}\kappa\psi_{ic,i'c'}^2}{4\vert S_o\vert} \end{align} \else \begin{equation} -\ln B_o = \ln(2\kappa) \left(1 - \vert S_o\vert\right) + \ln \vert S_o\vert + \frac{\kappa\sum \psi_{ic,i'c'}^2}{4\vert S_o\vert} \end{equation} \fi We want to find the partition $P$ that minimizes $- \sum \ln B_o.$ Notice that there are still several non-linear terms in $-\ln B_o$, so it is not yet a linear objective. To make use of the ILP method, we will first need to rewrite this as a linear function.
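The algebra behind this decomposition is easy to double-check numerically. A small sketch (function names are ours) compares $-\ln B_o$ computed directly from the homoscedastic Bayes factor with the three-term form above:

```python
import math

def neg_log_bayes_direct(psi_sq, kappa, n):
    # -ln B_o from the homoscedastic Bayes factor:
    # B_o = 2^(n-1) * kappa^n / (n*kappa) * exp(-kappa * sum(psi^2) / (4n))
    b = 2**(n - 1) * kappa**n / (n * kappa) * math.exp(-kappa * sum(psi_sq) / (4 * n))
    return -math.log(b)

def neg_log_bayes_linearizable(psi_sq, kappa, n):
    # The three-term decomposition: ln(2k)(1 - n) + ln(n) + k*sum(psi^2)/(4n)
    return math.log(2 * kappa) * (1 - n) + math.log(n) + kappa * sum(psi_sq) / (4 * n)

# 3 sources -> 6 ordered off-diagonal pairs, psi ~ 2.4e-9 rad (sub-mas scale)
psi_sq = [(2.4e-9) ** 2] * 6
assert abs(neg_log_bayes_direct(psi_sq, 1e12, 3)
           - neg_log_bayes_linearizable(psi_sq, 1e12, 3)) < 1e-9
```

Only the second and third terms depend on the geometry and cardinality of $S_o$; the ILP machinery below is there to express exactly these pieces linearly.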
To do that, we introduce the following variables, defined for each index $k \in \{0, \ldots, C\}$, with $C$ representing the total number of catalogs: \begin{equation} z_k^o = \begin{cases*} 1 & if $\sum\limits_{ic} x_{ic}^o = k$ \\ 0 & otherwise \end{cases*} \end{equation} This variable captures the number of sources getting matched to object $o$, or the cardinality of the subset $S_o.$ When $z_{k'}^{o'} = 1,$ there are $k'$ hypothesized observations of object $o'$ in the data. In addition, notice that at most $1$ of the $z_k^o$, $k =0, \ldots, C$ can be $1.$ We also introduce \begin{equation} y_{ic, i'c'}^o = \begin{cases*} 1 & if $x_{ic}^o = x_{i'c'}^o = 1$ \\ 0 & otherwise \end{cases*} \end{equation} This is an indicator variable that checks whether the sources $(i,c)$ and $(i',c')$ belong to the same object $o$. In particular, $y_{ic, i'c'}^{o'} = 1$ indicates the hypothesis that sources $(i,c)$ and $(i',c')$ are observations of object $o'.$ We also have \begin{equation} t^o = \begin{cases*} \frac{\sum\kappa\psi_{ic,i'c'}^2 y_{ic, i'c'}^o}{4k} & if $z_k^o = 1$ for some $k \in [C]$ \\ 0 & if $z_0^o = 1$ \end{cases*} \end{equation} where $[C]$ represents the set of numbers $\{1, 2, \cdots, C \}$. This variable captures the last term in $-\ln B_o$ for a subset $S_o.$ In particular, when $z^o_k = 1$ for some $k \in [C]$, i.e. $\lvert S_o \rvert = k$ by definition of $z^o_k$, we have \begin{equation} t^o = \frac{\sum \kappa\,\psi_{ic,i'c'}^2}{4\vert S_o\vert} \end{equation} as desired, where the summation goes over all $(i,c)$ and $(i',c')$ in $S_o.$ On the other hand, when $z^o_0 = 1,$ no detection is assigned to object $o$, so this term should contribute nothing to the objective function. Next, we introduce \begin{equation}\label{eq:po} p^o = \begin{cases*} \left(1 - k\right) \ln(2\kappa) & if $z_k^o = 1$ for some $k \in [C]$ \;\; \\ 0 & if $z_0^o = 1$ \end{cases*}. 
\end{equation} This variable captures the first term in $-\ln B_o$ for a subset $S_o.$ It plays a role similar to that of $t^o$: when $z^o_0 = 1,$ no detection is assigned to object $o$, so this term should contribute nothing to the objective function. On the other hand, if some sources are matched to object $o$, $p^o = \ln(2\kappa)(1 - \vert S_o\vert)$ as desired. Finally, we will linearize the term $\ln \lvert S_o \rvert$ by breaking the natural log function into finitely many affine linear pieces. We first introduce constants $a_1, a_2, \cdots, a_C,$ where $a_1 = 0$ and $a_p = \ln(p) - \ln(p-1)$, for \mbox{$p = 2, \cdots, C$}. Then for each object $o$, we define binary variables \mbox{$w_1^o \geq w_2^o \geq \cdots \geq w^o_C$} and impose the constraint that \begin{equation} \sum_{p=1}^C w_p^o = \sum_{ic} x_{ic}^o = \lvert S_o \rvert. \end{equation} Using the new notation, we can now express $\ln \lvert S_o \rvert$ as a linear function of $w_p^{o}$: $\ln \vert S_o\vert = \sum_{p=1}^C a_p w_p^{o}$. To explain why this is the case, it is best to work with an example. Suppose 3 sources are matched with object $o,$ so $\lvert S_o \rvert = 3$ and $\ln \lvert S_o \rvert = \ln{3}.$ Because $\sum_{p=1}^C w_p^o = \vert S_o\vert = 3$ and $w_p^o$ are $0/1$ variables with $w_1^o \geq w_2^o \geq \cdots \geq w^o_C$, we have $w_1^o = w_2^o = w_3^o = 1$ and $w_4^o = w_5^o = \cdots = w_C^o = 0.$ Then, $\sum_{p=1}^C a_p w_p^{o} = a_1 + a_2 + a_3 = (0) + (\ln{2} - \ln{1}) + (\ln{3} - \ln{2}) = \ln{3},$ which is exactly $\ln \lvert S_o \rvert.$ Our objective function now becomes \begin{equation}\min \sum_o \bigg(\;\;p^o + \sum_p a_p w_p^o + t^o \bigg),\end{equation} which is linear in the variables $p^o, w^o_p,$ and $t^o$. As can be seen in the definitions of these variables, there are certain relationships that still need to be modeled using linear constraints.
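The telescoping construction can be sanity-checked in a few lines; the sketch below (the value of $C$ and the helper function are ours) confirms that the first $\lvert S_o \rvert$ coefficients $a_p$ always sum to $\ln \lvert S_o \rvert$:

```python
import math

C = 10  # number of catalogs in this illustration

# a_1 = 0 and a_p = ln(p) - ln(p-1) for p = 2..C
a = [0.0] + [math.log(p) - math.log(p - 1) for p in range(2, C + 1)]

def log_cardinality(size):
    """ln|S_o| via the ordered binaries w_1 >= ... >= w_C with sum_p w_p = |S_o|:
    the first |S_o| of the w_p equal 1, so the sum telescopes to ln|S_o|."""
    w = [1] * size + [0] * (C - size)
    return sum(ap * wp for ap, wp in zip(a, w))

# The telescoping sum a_1 + ... + a_k equals ln(k) for every cardinality k.
for k in range(1, C + 1):
    assert abs(log_cardinality(k) - math.log(k)) < 1e-12
```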
The full ILP formulation is given in Appendix~\ref{appx:DirILP_special} with detailed explanations for how the constraints model the relationships between the variables $x^o_{ic}, y^o_{ic,i'c'}, z^o_k, p^o, w^o_p,$ and $t^o$. \subsubsection{Heteroscedasticity} \label{sec:DirILP_general} We can also remove the assumption that every source has the same measure of uncertainty $\kappa_{ic}$. From eq.~\eqref{Bayes_factor}, we have, \iffalse \begin{equation}\label{eq:ln_bayes_general} -\ln B_o = (1 - \vert S_o \vert) \ln 2 - \sum_{ic} \ln \kappa_{ic} + \ln \sum_{ic} \kappa_{ic} + \frac{\sum_{ic}\sum_{i'c'}\kappa_{ic}\kappa_{i'c'}\psi_{ic,i'c'}^2}{4\sum_{ic}\kappa_{ic}}, \end{equation} \else \begin{eqnarray}\label{eq:ln_bayes_general} -\ln B_o &=& (1 - \vert S_o \vert) \ln 2 - \sum_{ic} \ln \kappa_{ic} + \ln \sum_{ic} \kappa_{ic} + \nonumber \\ &+&\frac{\sum_{ic}\sum_{i'c'}\kappa_{ic}\kappa_{i'c'}\psi_{ic,i'c'}^2}{4\sum_{ic}\kappa_{ic}}, \end{eqnarray} \fi where all the summations run over all $(i,c)$ and $(i',c')$ in $S_o.$ We use $x_{ic}^o, z_k^o,$ and $y_{ic, i'c'}^o$ as defined in the special case of Section~\ref{sec:DirILP_setup}. We also introduce new variables to convert eq.~\eqref{eq:ln_bayes_general} into a linear function. We first linearize the term $\ln\sum \kappa_{ic}$ using the same trick as when we linearized $\ln \lvert S_o \rvert$ in Section \ref{sec:DirILP_setup}. We introduce constants $b_{\min} \equiv b_1, b_2, b_3, \cdots$, where \begin{equation} b_{\min} = \ln\left(\min_{ic \in D} \kappa_{ic}\right) \end{equation} and \begin{equation} b_{\max} = \ln\left(C \, \max_{ic \in D} \kappa_{ic}\right)\,. \end{equation} Now, if we set an error threshold $\epsilon,$ then the \begin{equation} R \equiv \left\lceil \frac{b_{\max} - b_{\min}}{\epsilon} \right\rceil \end{equation} constants $b_p$ are defined as \begin{equation} b_p = b_{\min} + (p-1) \times \epsilon \quad \textrm{for} \quad p = 1, \dots, R. 
\end{equation} Then for each object $o$, we define binary variables \mbox{$\chi_1^o \geq \chi_2^o \geq \cdots \geq \chi^o_R$} and impose the constraint \begin{equation} \chi_1^o \exp(b_1) + \sum_{p=2}^R \chi_p^o \left[\exp(b_p) - \exp(b_{p-1})\right] \geq \sum_{ic} \kappa_{ic} x_{ic}^o \,. \end{equation} Using the new variables, we have \begin{equation} \ln \sum_{ic} \kappa_{ic} \approx \chi_1^o b_1 + \sum_{p=2}^R \chi_p^o (b_p - b_{p-1}) = \chi_1^o\,b_{\min} + \epsilon \sum_{p=2}^R \chi_p^o \end{equation} since $b_p - b_{p-1} = \epsilon$ for all $p \geq 2$. To illustrate how the $\chi_p^o$ variables work, let us assume that after looking at the data, we determine that \mbox{$b_{\min} = 29$} and \mbox{$b_{\max} = 33$}. If we let \mbox{$\epsilon = 0.5$}, then the values of the constants $b_p$ are $\{29, 29.5, \cdots, 32.5, 33\}.$ Now suppose there are $3$ sources that are matched to an object $o$ with associated $\kappa_{ic}$ values of $5\times10^{12}, 8\times10^{12},$ and $10^{13}.$ Then the true value of $\ln \sum_{ic \in S_o} \kappa_{ic}$ is $\ln(2.3\times10^{13}),$ which evaluates to $30.77.$ With the defined variables, the solution given by ILP is $\chi_1^o = \chi_2^o = \cdots = \chi_5^o = 1$ and $\chi_6^o = \cdots = \chi_9^o = 0$ because $\chi_1^o \exp(b_1) + \sum_{p=2}^R \chi_p^o (\exp(b_p) - \exp(b_{p-1})) = \exp(29) + \exp(29.5) - \exp(29) + \exp(30) - \exp(29.5) + \cdots + \exp(31) - \exp(30.5) = \exp(31) > 2.3\times10^{13},$ which satisfies the constraint \mbox{$\chi_1^o \exp(b_1) + \sum_{p=2}^R \chi_p^o (\exp(b_p) - \exp(b_{p-1})) \geq \sum_{ic} \kappa_{ic} x_{ic}^o.$} Notice that setting the variables $\chi_6^o, \cdots, \chi_9^o = 1$ will also satisfy the constraint. However, since we will model our problem with a minimization objective, the optimal solution will force $\chi_1^o b_1 + \sum_{p=2}^R \chi_p^o (b_p - b_{p-1})$ to be as small as possible.
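The arithmetic of this example can be checked with a short computation. Below is a plain-Python sketch (no solver involved); the grid and $\kappa$ values are the ones from the example above:

```python
import math

eps = 0.5
b = [29 + p * eps for p in range(9)]      # the constants b_1 = 29, ..., b_9 = 33
kappa_sum = 5e12 + 8e12 + 1e13            # sum of kappa_ic over S_o = 2.3e13

# Setting chi_1 = ... = chi_m = 1 telescopes the left side of the constraint
# to exp(b_m), so the minimization objective picks the smallest feasible m.
m = next(p + 1 for p in range(len(b)) if math.exp(b[p]) >= kappa_sum)
approx = b[0] + eps * (m - 1)             # chi_1 * b_min + eps * (m - 1)

assert m == 5                             # chi_1, ..., chi_5 are set, as in the text
assert approx == 31.0                     # vs. the true value ln(2.3e13) = 30.77
```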
Finally, notice that in this case the value of $\chi_1^o b_1 + \sum_{p=2}^R \chi_p^o (b_p - b_{p-1})$, which is used to approximate $\ln \sum_{ic \in S_o} \kappa_{ic}$, is $31,$ which is close to the true value of $30.77.$ Next, we will linearize the last term in eq.~\eqref{eq:ln_bayes_general} by first introducing the constants \begin{equation} c_{\min} = \min_{ic \in D} \kappa_{ic} \end{equation} and \begin{equation} c_{\max} = C\, \max_{ic \in D} \kappa_{ic} \,. \end{equation} Then by rounding these two values to the nearest multiple of $100$, we can introduce grid points $0 \equiv c_0, c_1, c_2, \cdots, c_Q,$ where $c_1$ is the nearest multiple of $100$ to $c_{\min}$, $c_Q$ is the nearest multiple of $100$ to $c_{\max}$, and for all $i \geq 2$, $c_i = c_1 + 100(i-1).$ We then introduce \begin{equation} u_k^o = \begin{cases*} 1 & if $\sum\limits_{ic} \left[ \kappa_{ic} \right]_{_{100}}\, x_{ic}^o = c_k $ \\ 0 & otherwise \end{cases*} \end{equation} where $k$ ranges in $\{0, 1, \ldots, Q\}$ and the operator $[\cdot]_{_{100}}$ is defined as rounding to the nearest multiple of $100$. This variable attempts to approximate $\sum_{ic \in S_o} \kappa_{ic}$, which appears in the denominator of the last term of eq.~\eqref{eq:ln_bayes_general}. The variables $p^o$ and $t^o$ are also very similar to the definitions in Section~\ref{sec:DirILP_setup}; however, we need to slightly modify them as follows: \begin{equation}\label{eq:to-def}t^o = \frac{\sum_{ic}\sum_{i'c'}\kappa_{ic}\kappa_{i'c'}\psi_{ic,i'c'}^2 y_{ic, i'c'}^o}{4c_k},\end{equation} if $u_k^o = 1$ for some $k \in \{1, 2, \cdots, Q \},$ and $t^o = 0$ otherwise. The reasoning for defining $t^o$ this way is that if \mbox{$u_0^o = 1$}, \begin{equation} \sum\limits_{(i,c)} [\kappa_{ic}]_{_{100}}x_{ic}^o = c_0 = 0. \end{equation} This happens only when $x_{ic}^o = 0$ for all $(i,c)$, i.e. no sources are matched to object $o.$ Hence, $t^o$ should not contribute to the objective function, and so it takes the value $0$.
On the other hand, if $u_k^o = 1$ for some $k > 0,$ by definition of $u_k,$ $c_k$ is the best approximation to $\sum_{ic \in S_o} \kappa_{ic}$. Thus, eq.~\eqref{eq:to-def} holds. In addition, we modify $p^o$ defined in eq.~\eqref{eq:po} as follows, \begin{equation} p^o = \begin{cases*} \left(1 - k\right) \ln(2) & if $z_k^o = 1$ for some $k \in [C]$ \;\; \\ 0 & if $z_0^o = 1$ \end{cases*}. \end{equation} This variable serves a similar function as in the homoscedastic case, which is to capture the first term in eq.~\eqref{eq:ln_bayes_general}. The objective function can now be written as \begin{equation}\sum_o \left(\;\;p^o - \sum_{ic}x^o_{ic}\ln\kappa_{ic} + \chi_1^o b_{\min} + \epsilon\sum_{p=2}^R \chi_p^o + t^o \right),\end{equation} which is linear in all the variables involved. There are certain relationships between these variables that still need to be modeled, and since an ILP formulation admits only linear constraints, we express them accordingly. The full ILP formulation is given in Appendix \ref{appx:DirILP_general}, with detailed explanations for how the constraints model the relationships between the variables $x^o_{ic}, y^o_{ic,i'c'}, z^o_k, \chi^o_p, u^o_k, p^o,$ and $t^o$. \section{Mock Objects and Simulations}\label{sec:result_analysis} We consider the idealized case where all the catalogs capture the same astronomical properties of objects in the sky, i.e., they detect the same set of objects. As we generate 100 objects and assume there are $C$ distinct catalogs, we expect to see $100 \times C$ sources and $100$ $C-$way association sets. We will now show the catalog matching results using both of our approaches. The ILP programs in both approaches are solved using Gurobi, an optimization solver \cite{gurobi}.
\subsection{Homoscedasticity: identical \mbox{$\kappa_{ic} = 1/\sigma^2$}} \label{subsec:Homoscedasticity} Observe that for the CanILP formulation in Section \ref{sec:CanILP_setup}, we need to list all the possible valid subsets $T \subseteq D.$ We could do this by sequentially adding catalogs one by one and considering sources from the new catalog. However, this evaluates to $101^C - 1$ subsets, which is exponential in terms of the number of catalogs $C.$ Hence, we first try to reduce the number of possible subsets by observing that sources that are far away from each other cannot be from the same object. So we can impose some distance constraints on the sources that are put into the same candidate association set. In doing so, we should be careful not to discard potentially true associations: two sources from the first $2$ catalogs that are far apart might not form a $2$-way match on their own, but if the third catalog contains a source lying in the middle of the path between these $2$ sources, the $3$ sources together might still form a $3$-way match. This suggests dividing the whole region of the sky that is of interest into different islands, where the sources are clustered together, so that instead of solving one big problem, we can break it into smaller problems and make use of parallel computing. Essentially, we first apply a single-linkage clustering algorithm to our dataset, which is done using the DBSCAN algorithm with parameters ``min\_samples'' = $2$ and ``eps'' = $5 \times \max_{ic \in D}\sigma_{ic}.$ With the chosen parameters, we are essentially performing the friends-of-friends algorithm. It turns out that for our simulation, most of these islands consist of only $1$ source from each catalog. Hence, from now on, we will show the result for this scenario of having $1$ source from each catalog.
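To illustrate what this clustering step produces, here is a minimal hand-rolled friends-of-friends pass in plain Python, standing in for the DBSCAN call; the source positions and linking length are made-up illustrations, not data from our simulation:

```python
from itertools import combinations

def islands(points, eps):
    """Single-linkage clustering: sources closer than eps are 'friends',
    and islands are the connected components of the friendship graph."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        d = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
        if d < eps:
            parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two tight clusters of sources and one isolated source (made-up positions).
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (20.0, 0.0)]
print(islands(pts, eps=0.5))   # -> [[0, 1, 2], [3, 4], [5]]
```

Each returned island can then be handed to a separate ILP instance, which is what enables the parallel decomposition described above.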
This situation is not peculiar to our simulation but is, in fact, observed in real data sets from multiple visits of the same part of the sky by the same telescope. Analysis for multiple sources per catalog will be discussed later. As can be seen in Figure \ref{fig:comparison-special-1}, even though we are able to handle more than $3$ catalogs, the maximum number of catalogs we could analyze in a day is $20.$ Note that, similar to \citet{shi2019probabilistic}, we do not include any pruning procedures, such as those described in \citet{kunszt2001}, \citet{Gorski_2005}, \citet{gray2007}, and \citet{lee2017}. These pruning procedures might speed up the matching, but from our experience the complexity of the problem is still exponential in terms of the number of catalogs. The next paragraph discusses how far we could get using the DirILP formulation. \paragraph{DirILP formulation analysis.} The main drawback of the previous approach is that the process of creating potential subsets $T$ is exponential in terms of the number of catalogs. Even if we restrict attention to a single island, the number of nonempty subsets in such an island will still be $2^C - 1,$ so creating the variables for the ILP takes a tremendous amount of time. The DirILP formulation attempts to fix that problem by reducing the time complexity of creating the variables for the ILP to something that is polynomial in the total number of sources. However, since this catalog matching problem is intrinsically difficult, we still have to tackle the exponential complexity of the problem somewhere else: this appears in the time needed to solve the ILP. We believe that with advances in the field of integer linear programming, the Gurobi solver will be able to solve this problem more efficiently. It turns out that using DirILP, we are able to tackle up to $60$ catalogs. The comparison of the total running time between CanILP and DirILP is shown in Figure \ref{fig:comparison-special-1}.
In addition, we also include the set up time and optimization time for each formulation in Figures \ref{fig:comparison-special-2} and \ref{fig:comparison-special-3}. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{total_time_special.png} \caption{Total running time comparison between the two formulations for the special case (Log Scale). Notice that CanILP chokes when there are $20$ catalogs.}\label{fig:comparison-special-1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{setup_time_special.png} \caption{Set up time comparison between the two formulations for the special case (Log Scale)}\label{fig:comparison-special-2} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{optimize_time_special.png} \caption{Optimization time comparison between the two formulations for the special case (Log Scale)}\label{fig:comparison-special-3} \end{figure} Moreover, by including some heuristic constraints, such as imposing a time limit between incumbent solutions, on the Gurobi solver, we are able to push the DirILP further to handle $160$ catalogs. Finally, it is important to note that the associations given by CanILP and DirILP are the same and they match the ground truth perfectly. Hence, there is no difference in the accuracy of the matching between the two approaches. They only differ in their running time. \subsection{General case: different $\kappa_{ic}$ for every detection} For the general case, both approaches still give all correct associations that match the ground truth. However, as in the special case, DirILP is still more efficient at solving the matching problem than CanILP, as shown in Figure \ref{fig:comparison-general-1}. We should point out that even though in this general setting, the optimal value found in DirILP is just an approximation of the Bayes factor associated with the ground-truth matching, the values are still quite close to each other. 
More importantly, the associations obtained from DirILP still match the ground-truth associations. Figures \ref{fig:comparison-general-1} - \ref{fig:comparison-general-3} show the total running time, time to set up the ILP, and time for Gurobi to solve the ILP, for both CanILP and DirILP in this general case. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{total_time_general.png} \caption{Total running time comparison between the two formulations for the general case (Log Scale). Notice that CanILP chokes when there are $18$ catalogs.}\label{fig:comparison-general-1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{setup_time_general.png} \caption{Set up time comparison between the two formulations for the general case (Log Scale)}\label{fig:comparison-general-2} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{optimize_time_general.png} \caption{Optimization time comparison between the two formulations for the general case (Log Scale) }\label{fig:comparison-general-3} \end{figure} \subsection{Multiple sources per catalog in each island} Recall that in the previous sections, we assume that in each island there is only one detection from each catalog, which is a reasonable assumption in many real-life situations. In this section, we would like to discuss scenarios when the uncertainty $\sigma_{ic}$ is large or the source density is very high. These scenarios will result in islands where there might be multiple detections from each catalog in an island. It turns out that in our simulation, CanILP and DirILP still give the correct association under this scenario. However, both methods run much slower than in the previous scenario and are able to handle only about half as many catalogs with the same settings on the algorithms. We give an example of the running time for the $2$ methods when there are $2$ detections from each catalog. 
One can see how much worse it can get when the number of detections from each catalog becomes larger. Figure \ref{fig:2-detection} shows the total running time for both CanILP and DirILP when there are $2$ detections from each catalog in an island. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{2_detection.png} \caption{Total running time comparison between the two formulations when there are $2$ detections from each catalog in an island (Log Scale)}\label{fig:2-detection} \end{figure} \subsection{Discussion of running time complexity}\label{sec:summary} We now give a brief explanation for the shape of the curves in Figures \ref{fig:comparison-special-1}--\ref{fig:comparison-general-3}. For CanILP, since the number of variables is exponential in terms of the number of catalogs, under the log scale as in Figures \ref{fig:comparison-special-2} and \ref{fig:comparison-general-2}, the time to create these variables and set up the ILP as a function of the number of catalogs is represented by a straight line. On the other hand, for DirILP, we have a curve with decreasing gradient instead of a straight line because the number of variables and constraints in this method is polynomial in the number of catalogs. The explanation for the curves in Figures \ref{fig:comparison-special-3} and \ref{fig:comparison-general-3} is similar because the amount of time to solve an ILP generally depends on the number of variables and constraints in the problem. That being said, the curves in these two figures look more jagged because of other complexities involved in the optimization procedure. Finally, as most of the time to solve the catalog matching problem is spent on setting up the ILP, the curves in Figures \ref{fig:comparison-special-1} and \ref{fig:comparison-general-1} are very similar to their counterparts in Figures \ref{fig:comparison-special-2} and \ref{fig:comparison-general-2}, respectively.
\section{Implementation and software} \label{sec:software} The CanILP and DirILP algorithms are implemented in several Jupyter notebooks. They share a common structure: In the first part, we create a simulation with different catalogs and a number of mock objects on each of the catalogs. Next, we perform the DBSCAN algorithm to output different islands, or clusters of detections. Again, as mentioned in Section~\ref{subsec:Homoscedasticity}, with our chosen parameters, this is similar to executing a friends-of-friends algorithm. We pick DBSCAN because of its well-developed library in Python. After running the clustering algorithm, we implement CanILP and DirILP to solve the catalog matching problem in each island. The optimization problems in these modules were solved using the Gurobi software \citep{gurobi}. In addition, we employ several (optional) heuristics in the DirILP algorithm to speed up the catalog matching procedure. First, we force any $2$ sources that are more than $8\sigma$ away from each other to belong to separate objects. Second, we restrict sources that are within $0.1\sigma$ of each other to belong to the same object. Finally, we set an MIP Gap (optimality gap) of $0.5\%$ to prevent Gurobi from taking too long to verify optimality. Through our experiments, we have found that running the algorithm with these heuristics gives the same results but is about $10$ times faster. The notebooks can be found on Github at \url{https://github.com/tunguyen52/Nway-matching}. \section{Summary and Future Work}\label{sec:future} We have shown how the CanILP approach of \citet{shi2019probabilistic} and the new DirILP solve for a globally optimal matched catalog in crowded fields where a greedy approach is not sufficient. The former enumerates the candidate associations and picks the optimal combination of those; the latter introduces variables to directly assign sources to objects.
The new DirILP formulation is superior in the sense that it scales better to a large number of catalogs, i.e., produces results in less time. The method comes at a price in the complexity of the algorithm, especially in the case of heteroscedasticity, when the catalogs have different astrometric uncertainty. In fact, DirILP only outperforms the previous method when many catalogs are to be crossmatched. We recommend the simpler CanILP approach for a small number of catalogs, where the combinatorial explosion is not as severe. In our experiments, this crossover threshold appears to be at around 12 visits or catalogs, beyond which the DirILP method gets faster. Both of these methods optimize the same objective and yield the best possible catalog matching result in a likelihood sense. No prior on the partition is imposed currently in our study and the accompanying software. While placing priors on the partition might seem complicated, certain simple priors can be easily expressed, such as those that depend only on the number of objects in the matched catalog. This is a possible direction to explore in the future. Additional future work includes testing the software and its performance on imaging surveys, such as multiple visits of the Hyper Suprime-Cam Subaru Strategic Program, whose data collection resembles future observations of the LSST. \begin{acknowledgements} This material is based upon work supported by the National Science Foundation under Grant No.~1909709, 1814778, 1452820 and 1934979. T.B. gratefully acknowledges the aforementioned AST/AAG grants from NSF and funding from NASA via awards STScI-49721 and STScI-52333 under NAS5-26555. T.N. and A.B. gratefully acknowledge support from the aforementioned NSF grant and ONR grant N000141812096. A.B. is also grateful for support from the aforementioned NSF grant and AFOSR grant FA95502010341. The authors thank the anonymous referee for the careful review and the thoughtful comments. \end{acknowledgements}
\section{Introduction} A \textit{mouse pair} is a premouse $M$ together with an iteration strategy $\Sigma$ for $M$ with certain condensation properties. The notion is isolated in \cite{nitcis}. That book proves a comparison theorem for mouse pairs, and shows that many of the basic results of inner model theory can be stated in their proper general form by considering mouse pairs instead of just mice. For example, we have the full Dodd-Jensen property for mouse pairs, and thus a wellfounded mouse pair order, whereas these both fail if we consider iterable premice in isolation. These and other results seem to indicate that mouse pairs, and not just their mouse components, are the fundamental objects of study in inner model theory. One important technical device employed in \cite{nitcis} is embedding normalization. Given a stack $s$ of normal trees on $M$ with last model $P$, there is a natural attempt to build a minimal normal tree $W(s)$ such that $P$ embeds into its last model. $W(s)$ is called the {\em embedding normalization} of $s$. An iteration strategy $\Sigma$ for $M$ {\em normalizes well} iff whenever $s$ is by $\Sigma$, then $W(s)$ exists and is by $\Sigma$. It is one of the defining properties of mouse pairs that their strategy components normalize well. \cite{nitcis} shows that full background extender constructions done in an appropriate universe yield mouse pairs.\footnote{ Embedding normalization was first studied systematically by Schlutzenberg and Steel, independently and then partly jointly. The good behavior of nice strategies on infinite stacks is due to Schlutzenberg (see \cite{farmer}). 
Some of this work was later re-cast by Jensen and extended to his $\Sigma^*$-elementary iterations in \cite{jensen}.} Our main result here is that, assuming $\mathsf{AD^+}$, if $(M,\Sigma)$ is a mouse pair, and $s$ is a stack of normal trees on $M$ by $\Sigma$ with last model $P$, then in fact there is a normal tree $X(s)$ by $\Sigma$ whose last model is equal to $P$.\footnote{So in what sense then was $W(s)$ minimal? The answer has to do with how the extenders used in $s$ get associated to extenders used in $W(s)$, which is more direct than the way they are associated to extenders used in $X(s)$. $W(s)$ is minimal, granted that we demand the more direct connection. See \ref{factor lemma} below.} We call $X(s)$ the {\em full normalization} of $s$, and say that $\Sigma$ fully normalizes well. Special cases of this theorem were proved in \S 6.1 of \cite{nitcis}.\footnote{What we actually prove here is Theorem \ref{fullnormalizationeheorem}, which does not cover certain anomalous stacks $s$. The complete proof involves more bookkeeping, but no further ideas.} In contrast to embedding normalization, the proof goes beyond iteration tree combinatorics. The sort of phalanx comparison typical of condensation proofs for ordinary mice comes into play, even in the case $s$ consists of two normal trees, each using only one extender. Our results on full normalization figure heavily in the construction of optimal Suslin representations for mouse pairs given in \cite{mouse.suslin}. As one would expect, such Suslin representations are useful. For example, \cite{mouse.suslin} uses them to characterize the {\em Solovay sequence} $\langle \theta_\alpha \mid \alpha < \Omega \rangle$ in terms of the cutpoint Woodin cardinals of $\text{HOD}$, assuming $\ad_\R$ and a natural mouse capturing hypothesis.\footnote{The mouse capturing hypothesis is $\textsf{HPC}$, or {\em HOD Pair Capturing}. 
It simply asserts that the iteration strategies of mouse pairs are Wadge cofinal in the Suslin-co-Suslin sets. \cite{mouse.suslin} shows that assuming $\ad_\R$ + $\textsf{HPC}$, the following are equivalent: (i) $\delta$ is a cutpoint Woodin cardinal of $\text{HOD}$, and (ii) $\delta = \theta_0$, or $\delta = \theta_{\alpha+1}$ for some $\alpha$.} The papers \cite{mouse.suslin} and \cite{jacksonsargsyansteel} show that under the same hypotheses, the Suslin cardinals are precisely the cardinalities of cutpoints in $\text{HOD}$.\footnote{$\kappa$ is a cutpoint of $\text{HOD}$ iff there is no extender $E$ on the $\text{HOD}$-sequence such that $\text{crit}(E) < \kappa \leq \text{lh}(E)$.} For further applications of our results on full normalization, see \cite{mouse.suslin} and \cite{jacksonsargsyansteel}. The idea of our proof that there is a normal tree $X(s)$ by $\Sigma$ whose last model is equal to $P$ is roughly as follows. It is not too hard to define $X(s)$, by extending the definition of $X(s)$ in the special cases covered by \cite{nitcis} to arbitrary stacks $s$. The main problem is to show that $X(s)$ is by $\Sigma$. Suppose then that $\tree{S} = X(s)\upharpoonright \lambda$ is by $\Sigma$, and $X(s)$ picks $b = [0,\lambda)_X$, while $\Sigma(\tree{S}) = c$. We must show that $b=c$. For this, we compare the phalanxes of the trees $\tree{S}{}^\frown b$ and $\tree{S}{}^\frown c$, using $\Sigma$ to iterate the phalanx $\Phi(\tree{S}{}^\frown c)$. The strategy for iterating the phalanx $\Phi(\tree{S} {}^\frown b)$ comes from pulling back the strategy for $\Phi(W(s))$ induced by $\Sigma$, under a natural embedding $\Psi \colon X(s) \to W(s)$ that comes out of the definition of $X(s)$. Here we face one of our main new problems: unless $X(s) = W(s)$, $\Psi$ is not actually a tree embedding, but something weaker. 
A fair amount of our work is devoted to isolating the properties of $\Psi$ in the notion of a {\em weak tree embedding}, and showing that if $\Gamma \colon \tree{T} \to \tree{U}$ is a weak tree embedding, then we can use $\Gamma$ to pull back strategies for $\Phi(\tree{U})$ to strategies for $\Phi(\tree{T})$. There is a second issue, one that also comes up in the phalanx comparisons of \cite{nitcis}. In order to show that $b=c$, we need to use the full Dodd-Jensen property, and so we must compare the last models of our phalanxes as mouse pairs. This means we must use something like the mouse pair comparison process developed in \cite{nitcis}, comparing both phalanxes with the levels of a common background construction. To ensure that our comparison process doesn't terminate in a trivial way (by applying the same extender to a model common to both trees), at certain stages we lift a phalanx. This much follows the comparison processes in the proofs of solidity, universality, and condensation in \cite{nitcis}, where the resulting systems are called {\em pseudo-iteration trees}.\footnote{ The argument closest to the one we give here is the proof that {\sf UBH} holds in lbr hod mice, Theorem 7.3.2 of \cite{nitcis}. See also \cite{trang}.} However, here our phalanxes are all of the form $\Phi(\tree{T})$, for some iteration tree $\tree{T}$, and this enables us to lift them in a different way. Namely, we can use one step of the embedding normalization process, lifting $\tree{T}$ to $W(\tree{T},F)$. The resulting system is best viewed as a tree of normal iteration trees, something we shall call a \textit{meta-iteration tree}, or \textit{meta-tree}. The meta-tree notion evolved from the work of Jensen, Schlutzenberg, and Steel on embedding normalization. Its full, general form is due to Schlutzenberg. (See \cite{jensen} and \cite{farmer}. Those papers use somewhat different terminology for meta-trees and their associated apparatus.)
Meta-trees provide a very convenient framework for thinking about certain aspects of iteration tree combinatorics, and in particular, they help a lot here. The general results about meta-trees that we shall need are due to Schlutzenberg and Siskind. We shall state those results and outline their proofs here, but the reader should see \cite{farmer} and \cite{associativity} for an in-depth treatment. The general theory of meta-iteration trees has other applications that are described in those papers. In section 1 of the paper we review embedding normalization. The one new result here is a factoring lemma for tree embeddings. Section 2 is devoted to the general theory of meta-trees. Section 3 proves a general comparison theorem for the ``tail normal components" of nice meta-strategies, Theorem \ref{main comparison theorem}. This is where we show that the appropriate phalanx comparisons terminate. One can think of this as a strategy comparison theorem for phalanxes of the form $\Phi(\tree{S})$.\footnote{Since not all phalanxes are of this form, it is not a truly general strategy comparison theorem for phalanxes.} We then use our tree-phalanx comparison theorem to characterize those meta-strategies that are induced by an ordinary iteration strategy.\footnote{An ordinary strategy $\Sigma$ induces a meta-strategy $\Sigma^*$ via: $\langle \mtree{S}_\xi \mid \xi < \lambda \rangle \text{ is by }\Sigma^* \Leftrightarrow\text{$\forall \xi < \lambda$ (every tree occurring in $\mtree{S}_\xi$ is by }\Sigma)$. See \ref{meta-iterability}.} It turns out every sufficiently nice meta-strategy is induced in this way. (That is Theorem \ref{induced strategy theorem}.) The moral one might draw is that meta-strategies are not something fundamentally new, but rather a useful way of organizing constructions and proofs to do with ordinary strategies. 
The main step toward Theorem \ref{induced strategy theorem} is Lemma \ref{induced strategy lemma}, which is a kind of uniqueness theorem for ordinary iteration strategies $\Sigma$ whose induced meta-strategies $\Sigma^*$ behave well. Section 4 contains the definitions of $X(s)$, weak tree embeddings, and the weak tree embedding from $X(s)$ to $W(s)$. We then use the meta-strategy uniqueness results of section 3 to show that if $(M,\Sigma)$ is a mouse pair, then $\Sigma$ condenses to itself under weak tree embeddings. This is Theorem \ref{vshctheorem}, one of the central results of the paper. It follows easily from this {\em very strong hull condensation} property that $\Sigma$ fully normalizes well.\footnote{ The results in section 4 are essentially due to the second author. He outlined proofs of them that made use of pseudo-iteration trees in \cite{localhod}. The conversion of those outlines to full proofs that we present here, making use of meta-iteration trees, is due to the first author. We shall attribute the results of sections 2 and 3 when we get to them.} We assume that the reader is familiar with \cite{nitcis}, especially Chapter 6 (on embedding normalization) and Chapter 9 (on phalanx comparison). We shall review and re-do much of this work, however, so this prerequisite is perhaps less burdensome than it may seem. Our unexplained general inner-model-theoretic notation is laid out in Chapters 2-4 of \cite{nitcis}. In the rest of the paper, by \lq\lq premouse" we mean Jensen-indexed pure extender or least branch pfs premouse. ``pfs" stands for ``projectum-free spaces", a variant on the standard fine structure that is described in Chapter 4 of \cite{nitcis}. \section{Tree embeddings and embedding normalization} We review some material from \cite{nitcis}, while adding a few things to it. 
We shall be using the projectum-free spaces fine structure of \cite{nitcis}, for reasons explained there.\footnote{See the beginning of Chapter 4 of \cite{nitcis}.} Our iteration trees on such premice will be linear stacks of {\em plus trees}. Briefly: for $F$ an extender of the sort that might appear on the sequence of a pfs premouse\footnote{The extenders are Jensen-indexed, so that $E$ is indexed in $M$ at $\text{lh}(E) = \lambda(E)^{+,N}$, where $N= \text{Ult}(M||\text{lh}(E),E)$ and $\lambda(E) = i_E^{M||\text{lh}(E)}(\text{crit}(E))$.}, $E^+$ is the extender of $E$-then-$D$, where $D$ is the order zero measure of $\text{Ult}(M||\text{lh}(E),E)$ on $\lambda(E)$. $F$ has {\em plus type} iff $F=E^+$ for some such $E$. If $F=E^+$, then $F^- = E$. If $F$ is not of plus type, $F^- = F$. In both cases we let $\text{lh}(F) = \text{lh}(F^-)$, and set $\hat{\lambda}(F) = \lambda(F^-)$. The {\em extended $M$-sequence} consists of all $E$ and $E^+$ on the $M$-sequence. A {\em plus tree} on $M$ is a quasi-normal iteration tree $\tree{T}$ on $M$ that is allowed to use an extender from the extended sequence of $M_\alpha^{\tree{T}}$ to form $M_{\alpha+1}^{\tree{T}}$. $\tree{T}$ is {\em $\lambda$-separated} if it always uses extenders of plus type, and {\em $\lambda$-tight} if it never uses such extenders. Roughly, $\tree{T}$ is {\em quasi-normal} iff the extenders it uses are $\hat{\lambda}$-nondecreasing, and it always applies its extenders to the longest possible initial segment of the earliest possible model. $\tree{T}$ is {\em normal} iff it is quasi-normal and the extenders it uses are (strictly) $\text{lh}$-increasing. \subsection{Tree embeddings} Tree embeddings in general were isolated by the second author and Schlutzenberg. They arise naturally in the context of embedding normalization and will feature prominently in the rest of the paper. 
The precise definition must be tailored to the context of interest: we will have to look at tree embeddings between arbitrary \textit{plus trees}. This variation was worked out by the second author in \cite{nitcis}. We first establish some notation for applications of the Shift Lemma (that is, copy maps). As usual, for $E$ an extender on the extended $M$-sequence, the \textit{domain of $E$} is $\text{dom}(E) = M||(\text{crit} (E)^+)^{M|\text{lh}(E)}$. \begin{definition} Let $\varphi \colon M \to N$ and $\pi \colon P \to Q$ be nearly elementary, $E$ an extender on the extended $M$-sequence, and $F$ an extender on the extended $N$-sequence. We say \textit{the Shift Lemma applies to $(\varphi, \pi, E, F)$} iff \begin{enumerate} \item[i.] $\text{dom}(E) = P||({\text{crit} (E)^+})^{P}$,\footnote{We allow $P=\text{dom}(E)$ here.} \item[ii.] $\varphi \upharpoonright (\text{dom}(E) \cup \{\text{dom}(E) \}) = \pi \upharpoonright (\text{dom}(E) \cup \{\text{dom}(E) \})$, and \item[iii.] either \begin{enumerate} \item[(a)] $F=\varphi(E)$ or \item[(b)] $E$ is not of plus type and $F=\varphi(E)^+$. \end{enumerate} \end{enumerate} In this situation, the Shift Lemma (cf. \cite{nitcis}, Corollary 2.5.20) gives that there is a unique nearly elementary map $\sigma: Ult(P, E)\to Ult(Q,F)$ such that \begin{enumerate} \item $\sigma \circ i^{P}_E= i^{Q}_{F} \circ\pi$ and \item $\sigma \upharpoonright \varepsilon(E) = \varphi \upharpoonright \varepsilon(E)$. \end{enumerate} We call this $\sigma$ the \textit{copy map associated to $(\varphi, \pi, E, F)$}.\footnote{If hypothesis (iii)(a) obtains, then the existence of $\sigma$ follows literally by the Shift Lemma. If hypothesis (iii)(b) obtains, then $\sigma=i^{\text{Ult}(M, F^-)}_\mu\circ \sigma^-$ where $\sigma^-$ is the ordinary copy map associated to $(\varphi, \pi, E, F^-)$ and $\mu$ is the Mitchell-order zero normal measure on $\lambda(F^-)$. 
Since $\text{crit}(i^{\text{Ult}(M, F^-)}_\mu)=\lambda(F^-)$ and $\sigma^-$ satisfies (2) and the appropriate version of (1), this $\sigma$ does actually satisfy (1) and (2). Uniqueness of this map is guaranteed as usual, since $\text{Ult}(P, E)$ is the hull of points in the range of $i_E^P$ together with $\varepsilon(E)=\lambda(E)$. Also note that in this case, if $\sigma^-$ is elementary (for example when $\langle \varphi, \pi\rangle: (P, E)\to^* (Q, F^-)$), then so is $\sigma$.} \end{definition} It will be convenient to use all this terminology even when $\text{dom}(E)\trianglelefteq P$ but clause (i), above, fails. In this case, letting $\bar{P}\trianglelefteq P$ be the least initial segment $R$ of $P$ such that $\text{dom}(E)\trianglelefteq R$ and $\rho(R)\leq \text{crit}(E)$, we have that $\pi(\bar{P})$ is the least initial segment $S$ of $Q$ such that $\text{dom}(\varphi(E))\trianglelefteq S$ and $\rho(S)\leq \text{crit}(\varphi(E))$, by near elementarity of $\pi$. We'll say that the \textit{Shift Lemma applies to }$(\varphi, \pi, E, F)$ when it applies to $(\varphi, \pi\upharpoonright\bar{P}, E, F)$ and let the copy map associated to $(\varphi, \pi, E, F)$ be that associated to $(\varphi, \pi\upharpoonright\bar{P}, E, F)$. Our definition of a tree embedding between two plus trees is actually an ostensible weakening of that in \cite{nitcis}. The advantage of this definition is its relative ease of verification. We shall verify shortly that it is equivalent to the definition from \cite{nitcis}. \begin{definition}\label{treembdef} Let $\mathcal{S}$ and $\mathcal{T}$ be plus trees on a premouse $M$.
A \textit{tree embedding} $\Phi:\tree{S}\to \tree{T}$ is a system $\langle v,u, \{s_\xi\}_{\xi<\text{lh}\tree{S}}, \{t_\zeta\}_{\zeta+1<\text{lh}(\tree{S})}\rangle$ such that \begin{enumerate} \item $v:\text{lh}(\tree{S})\to \text{lh}(\tree{T})$ is tree-order preserving, $u:\{\eta\,|\,\eta+1<\text{lh} (\tree{S})\}\to \text{lh} (\tree{T})$, $v(\xi)=\sup\{u(\eta)+1\,|\, \eta<\xi\}$, and for all $\xi+1<\text{lh}(\tree{S})$, $v(\xi)\leq_\tree{T}u(\xi)$; \item For all $\xi$ and $\eta\leq_\tree{S}\xi$, \begin{enumerate} \item $s_\xi: M^\tree{S}_\xi\to M^\tree{T}_{v(\xi)}$ is nearly elementary and $s_0 = id_{M^\tree{S}_0}$, \item $\hat\imath^\tree{T}_{v(\eta),v(\xi)}\circ s_\eta= s_\xi\circ \hat\imath^\tree{S}_{\eta,\xi}$,\footnote{We are using $\hat i^\tree{T}$ to denote the possibly partial branch embeddings of $\tree{T}$, as in \cite{nitcis}.} and \item if $\xi+1<\text{lh}(\tree{S})$, then $t_\xi= \hat\imath^\tree{T}_{v(\xi),u(\xi)}\circ s_\xi$ with $M_\xi^\tree{S}|\text{lh}(E_\xi^\tree{S})\trianglelefteq \text{dom}(t_\xi)$; \end{enumerate} \item for all $\xi+1 < \text{lh}(\mathcal{S})$, letting $\eta=\tree{S}\text{-pred}(\xi+1)$ and $\eta^*=\tree{T}\text{-pred}(u(\xi)+1)$, \begin{enumerate} \item either $E^\tree{ T}_{u(\xi)}=t_\xi(E_\xi^\tree{S})$ or else $E^\tree{S}_\xi$ is not of plus type and $E^\tree{ T}_{u(\xi)}=t_\xi(E_\xi^\tree{S})^+$, \item $\eta^*\in[v(\eta),u(\eta)]_\tree{T}$, \item $s_{\xi+1}\upharpoonright \varepsilon(E_\xi^\tree{S})=t_\xi\upharpoonright\varepsilon(E_\xi^\tree{S})$. \end{enumerate} \end{enumerate} \end{definition} Clause (1) implies that $v(0)=0$, and also that $v$ is continuous at limit ordinals $\lambda$, so that clause (2)(b) gives that $s_\lambda : M_\lambda^{\tree{S}} \to M_{v(\lambda)}^{\tree{T}}$ is the direct limit of the $s_\eta$ for $\eta <_S \lambda$ sufficiently large.
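The commutativity requirements in clause (2) can be pictured as follows; the diagram below merely restates clauses (2)(b) and (2)(c), for $\eta \leq_\tree{S} \xi$ and $\xi+1<\text{lh}(\tree{S})$, and is not an additional condition.

```latex
% Clauses (2)(b) and (2)(c) of Definition \ref{treembdef} as a commutative diagram:
% the s-maps commute with the (possibly partial) branch embeddings, and t_\xi
% factors as a branch embedding of \tree{T} composed with s_\xi.
\begin{center}
\begin{tikzcd}
M^\tree{S}_\eta \arrow[r, "s_\eta"] \arrow[d, "\hat\imath^\tree{S}_{\eta,\xi}"'] & M^\tree{T}_{v(\eta)} \arrow[d, "\hat\imath^\tree{T}_{v(\eta),v(\xi)}"]\\
M^\tree{S}_\xi \arrow[r, "s_\xi"'] & M^\tree{T}_{v(\xi)} \arrow[r, "\hat\imath^\tree{T}_{v(\xi),u(\xi)}"'] & M^\tree{T}_{u(\xi)}
\end{tikzcd}
\end{center}
```

Here $t_\xi = \hat\imath^\tree{T}_{v(\xi),u(\xi)}\circ s_\xi$ is the composite along the bottom row.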
Clause (3) together with the commutativity clause (2)(b) guarantee that all of the $s$-maps are actually copy maps and, in fact, elementary, not just nearly elementary. To see this, we just need to see that we've maintained enough agreement so that the Shift Lemma applies at successors. Then we can repeat the proof of the Copy Lemma of \cite{nitcis} (Lemma 4.5.17) to get that all the $s$-maps are elementary, by induction. \begin{proposition} Let $\Phi:\tree{S}\to \tree{T}$ be a tree embedding. Then for all $\xi<\text{lh}(\tree{S})$, \begin{enumerate} \item for all $\eta<\xi$, $s_\xi\upharpoonright\varepsilon(E_\eta^\tree{S})=s_\eta\upharpoonright\varepsilon(E_\eta^\tree{S})$; \item if $\xi+1 < \text{lh}(\mathcal{S})$, letting $\eta=\tree{S}\text{-pred}(\xi+1)$ and $\eta^*=\tree{T}\text{-pred}(u(\xi)+1)$, \begin{enumerate} \item $\eta^*$ is the least $\zeta\in [v(\eta), u(\eta)]_\tree{T}$ such that $\zeta=u(\eta)$ or else $\text{crit}(\hat\imath^\tree{T}_{\zeta, u(\eta)})>\text{crit}(E_{u(\xi)}^\tree{T})$, \item the Shift Lemma applies to $(t_\xi, \hat\imath^\tree{T}_{v(\eta), \eta^*}\circ s_\eta, E_\xi^\tree{S}, E_{u(\xi)}^\tree{T})$ and $s_{\xi+1}$ is the associated copy map; and \end{enumerate} \item $s_\xi$ is elementary. \end{enumerate} \end{proposition} \begin{proof}[Proof sketch.] (1) and (2) can be verified simultaneously, by induction. (3) can be proved by a separate induction using that (1) and (2) hold for all $\xi<\text{lh}(\tree{S})$, using the analysis from \cite{nitcis} of when the extenders of an iteration tree are very close to the models to which they are applied. As mentioned above, this is essentially the proof of the Copy Lemma of \cite{nitcis}. We omit further detail. \end{proof} This proposition implies that our current definition is equivalent to that of \cite{nitcis}.
\begin{definition} If $\tree{T}$ is a plus tree, and $\alpha < \text{lh}(\tree{T})$, then \begin{align*} \lambda_\alpha^\tree{T} & = \sup \{ \lambda(E_\beta^\tree{T})\,|\,\beta < \alpha \}\\ & = \sup \{ \lambda(E_\beta^\tree{T}) \,|\, \beta <_T \alpha\} \end{align*} is the sup of the Jensen generators of $M_\alpha^\tree{T}$. \end{definition} The agreement of the $s$ and $t$ maps in a tree embedding is given by \begin{lemma}\label{pshullagree} Let $\langle v, u, \langle s_\beta \mid \beta < \text{lh} \mathcal{T}\rangle, \langle t_\beta \mid \beta+1 < \text{lh} \mathcal{T}\rangle\rangle$ be a tree embedding of $\mathcal{T}$ into $\mathcal{U}$; then \begin{itemize} \item[(a)] if $\alpha +1 < \text{lh}(\mathcal{T})$, then $t_\alpha$ agrees with $s_\alpha$ on $\lambda_\alpha^{\mathcal{T}}$, \item[(b)] if $\beta < \alpha < \text{lh}(\mathcal{T})$, then $s_\alpha$ agrees with $t_\beta$ on $\text{lh}(E_\beta^{\mathcal{T}}) +1$, and \item[(c)] if $\beta < \alpha < \text{lh}(\mathcal{T})$, then $s_\alpha$ agrees with $s_\beta$ on $\lambda_\beta^{\mathcal{T}}$ \end{itemize} \end{lemma} We also have a formula for the point of application $\eta^* = T\text{-pred}(u(\xi)+1)$ occurring in clause (3) of Definition \ref{treembdef}, namely \[ T\text{-pred}(u(\xi)+1) = \text{least $\gamma \in [v(\eta), u(\eta)]_T$ such that $\text{crit} \hat{\imath}^\tree{T}_{\gamma,u(\eta)} > \hat{\imath}^\mathcal{T}_{v(\eta),\gamma} \circ s_\eta(\mu)$,} \] where \[ \eta = \tree{S}\text{-pred}(\xi +1) \mbox{ and } \mu = \text{crit}(E_\xi^{\mathcal{S}}). \] These facts are proved in \cite{nitcis}. The proof uses the following elementary fact about iteration trees.\footnote{See Proposition 8.2.1 of \cite{nitcis}. In reading it, recall that the convention of \cite{nitcis} is that when $P$ is active, then $P||o(P) \triangleleft P$.
} \begin{proposition} \label{unravelonbranch} Let $\mathcal{S}$ be a normal tree, let $\delta \le_S \eta$, and suppose that $P \unlhd M_\eta^{\mathcal{S}}$, but $P \not\trianglelefteq M_\sigma^{\mathcal{S}}$ whenever $\sigma <_S \delta$. Suppose also that $P \in \text{ran}(\hat{\imath}^{\mathcal{S}}_{\delta,\eta})$. Let \begin{align*} \alpha & = \text{ least $\gamma$ such that $P \unlhd M_\gamma^{\mathcal{S}}$}\\ &= \text{ least $\gamma$ such that $o(P) < \text{lh}(E_\gamma^{\mathcal{S}})$ or $\gamma = \eta$,} \end{align*} and \[ \beta = \text{least $\gamma \in [0,\eta]_S$ such that } o(P) < \text{crit}(\hat{\imath}^\mathcal{S}_{\gamma,\eta}) \mbox{ or } \gamma = \eta. \] Then $\beta \in [\delta,\eta]_S$, and \begin{itemize} \item[(a)] either $\beta = \alpha$, or $\beta = \alpha +1$, and $\lambda(E_\alpha^{\mathcal{S}}) \le o(P) < \text{lh}(E_\alpha^{\mathcal{S}})$; \item[(b)] if $P = \text{dom}(E_\xi^{\mathcal{S}})$, then $S\text{-pred}(\xi+1) = \alpha = \beta$. \end{itemize} (We allow $\delta = \eta$, with the understanding $\hat{\imath}_{\delta,\delta}$ is the identity.) \end{proposition} This proposition or its (very short) proof show up in many arguments to do with extending tree embeddings. Commonly, one has $\eta = u(\xi)$, $\delta = v(\xi)$, and $P = t_\xi(\bar{P})$. \begin{definition}\label{extendedtreeemb} Let $\tree{S}$ be of successor length $\gamma+1$ and $\Phi:\tree{S}\to \tree{T}$ a tree embedding where $\tree{T}$ has successor length $\delta+1$. If $v(\gamma)\leq_\tree{T} \delta$, we let $u(\gamma)=\delta$ and $t_\gamma=\hat\imath^\tree{T}_{v(\gamma),\delta}\circ s_\gamma$ and call the resulting system an \textit{extended tree embedding}. An extended tree embedding $\Phi$ is \textit{non-dropping} if $(v(\gamma),\delta]_\tree{T}$ doesn't drop. 
\end{definition} Note that if $\Phi:\tree{S}\to \tree{T}$ is a tree embedding and $\tree{S}$ has successor length $\delta+1$, then we can always view $\Phi$ as a non-dropping extended tree embedding $\Phi:\tree{S}\to \tree{T}\upharpoonright v(\delta)+1$. \begin{remark} On the other hand, it is not always possible to extend tree embeddings defined on trees of limit length, even if our trees $\mathcal{S}$ and $\mathcal{T}$ are by the same nice iteration strategy. For example, assume $M_1^\#$ exists and let $\Lambda$ be the iteration strategy for $M_1$. Toward a contradiction, suppose that for all $\tree{S},\tree{T}$ of limit length by $\Lambda$, if there is a tree embedding $\Phi: \tree{S}\to \tree{T}$ with $\text{ran}(v^\Phi)$ cofinal in $\text{lh}(\tree{T})$, then $v^\Phi ``\Lambda(\tree{S})\subseteq \Lambda(\tree{T})$. Now let $\tree{T}\in M_1$ be a tree by $\Lambda$ of height ${\delta^+}^{M_1}$ which has no branch in $M_1$, where $\delta$ is the Woodin cardinal of $M_1$ (that such a tree exists is due to Woodin, see Lemma 1.1 of \cite{schindler steel}). Let $g$ be $Col(\omega, \delta)$-generic over $M_1$ and $h$ generic for Namba forcing over $M_1[g]$. The restriction of $\Lambda$ to countable trees which are in $M_1[g][h]$ is in $M_1[g][h]$ since Namba forcing adds no reals and $M_1$ contains the restriction of $\Lambda$ to trees of length $\delta$ which are in $M_1$. Now, in $M_1[g][h]$, $\text{lh} \tree{T}$ has countable cofinality. We can take a Skolem hull to get $\tree{S}$ countable in $M_1[g][h]$ and $\Phi: \tree{S}\to \tree{T}$ with $\text{ran}(v^\Phi)$ cofinal in $\text{lh} \tree{T}$. Since $\tree{S}$ is countable and by $\Lambda$, $\Lambda(\tree{S})\in M_1[g][h]$. So, by assumption, $v^\Phi`` \Lambda(\tree{S})\subseteq \Lambda(\tree{T})$; so $\Lambda(\tree{T})$ is just the downwards closure of $v^\Phi``\Lambda(\tree{S})$ (in $M_1[g][h]$).
This identification of $\Lambda(\tree{T})$ was independent of our choice of $g,h, \tree{S}$, so we get $\Lambda(\tree{T})\in M_1$, a contradiction. \end{remark} The tree embeddings that appear naturally in embedding normalization can be viewed as extended tree embeddings. We deal almost exclusively with extended tree embeddings in the rest of the paper. \begin{definition}\label{treeembagreedef} For tree embeddings or extended tree embeddings $\Phi:\tree{S}\to \tree{T},\Gamma:\tree{U}\to \mathcal{V}$, we put \[ \Phi\upharpoonright\xi+1\approx \Gamma \upharpoonright\xi+1 \] iff $\tree{S}\upharpoonright\xi+1=\tree{U}\upharpoonright\xi+1$, $v^\Phi\upharpoonright\xi+1 = v^{\Gamma}\upharpoonright\xi+1$, and $\tree{T}\upharpoonright v^\Phi(\xi)+1=\tree{V}\upharpoonright v^{\Gamma}(\xi)+1$. \end{definition} If $\Phi\upharpoonright\xi+1\approx \Gamma \upharpoonright\xi+1$, then $s_\eta^\Phi= s_\eta^{\Gamma}$ for $\eta\leq \xi$, $u^\Phi\upharpoonright\xi=u^{\Gamma}\upharpoonright\xi$, and $t_\eta^\Phi=t_\eta^\Gamma$ for $\eta<\xi$. It does \textit{not} imply that $u^\Phi(\xi)= u^{\Gamma}(\xi)$ (even when $\Phi, \Gamma$ are extended tree embeddings). Intuitively, $u^\Phi(\xi)$ and $t^\Phi_\xi$ are telling us how to inflate $E_\xi^\tree{S}$, while $\Phi \upharpoonright \xi +1$ is the part of $\Phi$ that acts on $\tree{S} \upharpoonright \xi+1$, which does not include $E_\xi^\tree{S}$. We will sometimes write $u_\alpha$ for $u(\alpha)$ and $v_\alpha$ for $v(\alpha)$.\footnote{This is a notational convention introduced by Schlutzenberg which cleans up some type-setting issues, as the reader may notice.} \subsection{Direct limits under tree embeddings} If $\Phi \colon \mathcal{S} \to \mathcal{T}$ and $\Psi \colon \mathcal{T} \to \mathcal{U}$ are (extended) tree embeddings, then $\Psi \circ \Phi \colon \mathcal{S} \to \mathcal{U}$ is the tree embedding obtained by component-wise composition, in the obvious way. \begin{definition}\label{directedsystemoftrees} Let $M$ be a premouse.
A \textit{directed system of plus trees} on $M$ is a system $\mathcal{D}=\langle\{\tree{T}_a\}_{a\in A},\{\Psi_{a,b}\}_{a\preceq b}\rangle$, where $\preceq$ is a directed partial order on some set $A$ and \begin{enumerate} \item[(a)] for any $a\in A$, $\tree{T}_a$ is a plus tree on $M$ of successor length, \item[(b)] for any $a,b\in A$ with $a\prec b$, $\Psi_{a,b}: \tree{T}_a\to \tree{T}_b$ is an extended tree embedding, \item[(c)] for any $a,b,c\in A$ such that $a\preceq b\preceq c$, $\Psi_{a,c}= \Psi_{b,c}\circ \Psi_{a,b}$. \end{enumerate} \end{definition} It follows from (c) that $\Psi_{a,a}$ is the identity extended tree embedding on $\tree{T}_a$. Let $\mathcal{D}=\langle\{\tree{T}_a\}_{a\in A},\{\Psi_{a,b}\}_{a\preceq b}\rangle$ be a directed system of plus trees on $M$, where $\preceq$ is a directed partial order on $A$. Assuming the $t$-maps of our tree embeddings behave properly, we shall define the direct limit of $\mathcal{D}$, which we denote $\lim\mathcal{D}$, as an algebraic structure. We then show that if all models in $\lim\mathcal{D}$ are wellfounded, then it is (isomorphic to) a plus tree on $M$. In components, we shall have \[ \lim\mathcal{D} = \langle D, \leq, \leq^*, \{M_x\}_{x\in D}, \{E_x\}_{x\in D}, \{\Gamma_a\}_{a\in A}\rangle. \] Let us define these components. Let \[\Psi_{a, b}= \langle v_{a,b},u_{a,b}, \{s^{a,b}_\gamma\}_{\gamma<\text{lh}(\tree{T}_a)}, \{t^{a,b}_\gamma\}_{\gamma<\text{lh}( \tree{T}_a)}\rangle. \] A \textit{$u$-thread} is a partial function $x:A\rightharpoonup Ord$ such that whenever $a \in \text{dom}(x)$ and $a \prec b$, then $b \in \text{dom}(x)$ and $u_{a,b}(x(a)) = x(b)$. Since $\text{dom}(x)$ is upward closed, it must be $\prec$-cofinal. If $x$ and $y$ are $u$-threads, then $x \sim y$ iff $\exists a (x(a) = y(a))$ iff $\forall a \in \text{dom}(x) \cap \text{dom}(y) (x(a) = y(a))$. Every $u$-thread is equivalent to a unique {\em maximal} $u$-thread. 
For any $a\in A$ and $\gamma<\text{lh}(\tree{T}_a)$, there is exactly one maximal $u$-thread $x$ such that $x(a)=\gamma$, and we write $x=[a,\gamma]_\mathcal{D}$ for it. $D$ is the set of all maximal $u$-threads. $\leq$ and $\leq^*$ will be certain partial orders with field $D$. Going forward, we write $\forall^*a \varphi(a)$ to abbreviate $\exists b \forall a\succeq b \,\varphi(a)$. For $u$-threads $x,y$ we put \begin{align*}x\leq y &\Leftrightarrow \forall^*a \, x(a)\leq y(a)\\ x\leq^* y &\Leftrightarrow \forall^*a\, x(a)\leq_{\tree{T}_a} y(a). \end{align*} Since $\leq_{\tree{T}_a}$ is a refinement of the order $\leq$ on ordinals, we get that $\leq^*$ is a refinement of $\leq$. $u$-maps preserve $\leq$ on ordinals, so $x \leq y$ iff for some (all) $a \in \text{dom}(x) \cap \text{dom}(y)$, $x(a) \leq y(a)$. But $u$-maps don't preserve tree-order everywhere, so the $\forall^*a$ quantifier is needed in the definition of $x \leq^* y$. It's easy to see that $\leq$ is a linear order on $D$, but it could fail to be a wellorder. If it is a wellorder, we identify it with its order-type $\delta$. In any case, we will think of $\langle D,\leq \rangle$ as the length of the direct limit. In the case that the direct limit produces an iteration tree, $\delta$ will really be its length. We now define $\Gamma_a=\langle u_a, v_a, \{s^a_\gamma\}_{\gamma<\text{lh}(\tree{T}_a)}, \{t^a_\gamma\}_{\gamma<\text{lh}(\tree{T}_a)}\rangle$ along with $M_x$ and $E_x$ for $x$ such that $a\in \text{dom}(x)$. We will actually only define the $u$-map and $t$-maps of the $\Gamma_a$; these determine the whole tree embedding in the case that the direct limit produces an iteration tree. So fix $a$ and $\gamma<\text{lh}(\tree{T}_a)$. Let $x=[a,\gamma]_\mathcal{D}$. We set $u_a(\gamma)=x$. We'll actually leave $M_x$, $E_x$ and $t^a_\gamma$ undefined unless the $t$-maps along $x$ are eventually total. So suppose we're in this case, i.e. $\forall^*b\, \forall c\succeq b\, (t^{b,c}_{x(b)}$ is total$)$.
We define \[ M_x = \lim \langle M^{\tree{T}_b}_{x(b)}, t^{b,c}_{x(b)}\,|\, b\text{ such that for all }c\succeq b, \,t^{b,c}_{x(b)}\text{ is total}\rangle. \] For any $b$ such that for all $c\succeq b$, $t^{b,c}_{x(b)}$ is total, we let $t^b_{x(b)}$ be the direct limit map and we put $t^a_\gamma = t^b_{x(b)}\circ t^{a,b}_\gamma$ for any such $b$ (this is independent of the choice of $b$). Our assignment of the $E_x$ requires slightly more care. Notice that for any $u$-thread $x$, \[\forall^*b\,\forall c\succeq b\; E_{x(c)}^{\tree{T}_c}=t^{b,c}_{x(b)}(E_{x(b)}^{\tree{T}_b}).\] This is because the only way to have $E_{x(c)}^{\tree{T}_c}\neq t^{b,c}_{x(b)}(E_{x(b)}^{\tree{T}_b})$ is for $E_{x(b)}^{\tree{T}_b}$ to be \textit{not} of plus type and $E_{x(c)}^{\tree{T}_c}= t^{b,c}_{x(b)}(E_{x(b)}^{\tree{T}_b})^+$. But then since $E_{x(c)}^{\tree{T}_c}$ is now of plus type, for any $d\succeq c$, $E_{x(d)}^{\tree{T}_d}= t^{c,d}_{x(c)}(E_{x(c)}^{\tree{T}_c})$. So we may let \[ E_x = t^b_{x(b)}(E^{\tree{T}_b}_{x(b)}), \] for any $b$ such that $t^{b,c}_{x(b)}$ is total and $E_{x(c)}^{\tree{T}_c}=t^{b,c}_{x(b)}(E_{x(b)}^{\tree{T}_b})$ for all $c\succeq b$. Again, this is independent of the choice of $b$. This finishes the definition of $\lim \mathcal{D}$. We say that $\lim \mathcal{D}$ is \textit{wellfounded} iff \begin{enumerate} \item for all $x\in D$, the model $M_x$ is defined and wellfounded, \item $\leq$ is wellfounded, \item $\tree{U}=\langle M_x,E_x,\leq^*\rangle$ is a plus tree (i.e. with models $M_x$, exit extenders $E_x$, and tree-order $\leq^*$).
\end{enumerate} If $\lim \mathcal{D}$ is wellfounded, one can show that letting $v_a(x)=\sup\{u_a(y)+1\mid y<x\}$, we can define $s^a_\gamma$ to be the required copy maps so that $\Gamma_a=\langle u_a, v_a, \{s^a_\gamma\}_{\gamma<\text{lh}(\tree{T}_a)}, \{t^a_\gamma\}_{\gamma<\text{lh}(\tree{T}_a)}\rangle$ is an extended tree embedding from $\tree{T}_a$ into $\tree{U}$ and $\Gamma_b\circ \Psi_{a,b}= \Gamma_a$ for every $a\preceq b$. Part of this is the analysis of successors in the $\leq^*$-order, below. Perhaps surprisingly, we can drop conditions (2) and (3) in the definition of the wellfoundedness of the direct limit. \begin{proposition}\label{direct limit characterization} Let $\mathcal{D}$ be a directed system of plus trees. Then $\lim \mathcal{D}$ is wellfounded iff for every $u$-thread $x$, the models $M_x$ are defined and wellfounded. \end{proposition} Before we give a proof, we need the following observations about iterated applications of the Shift Lemma. \begin{lemma}\label{shift composition} Let $\pi_0: M_0\to M_1,$ $\pi_1: M_1\to M_2$ and $\sigma_0: N_0\to N_1$, $\sigma_1: N_1\to N_2$ be nearly elementary and let $E$ be on the extended $M_0$-sequence, $F$ on the extended $M_1$-sequence, and $G$ on the extended $M_2$-sequence. Suppose that the Shift Lemma applies to $(\pi_0, \sigma_0, E, F)$ and to $(\pi_1, \sigma_1, F, G)$. Let $\tau_0$ be the copy map associated to $(\pi_0, \sigma_0, E, F)$ and $\tau_1$ the copy map associated to $(\pi_1, \sigma_1, F, G)$. Then the Shift Lemma applies to $(\pi_1\circ \pi_0, \sigma_1\circ \sigma_0, E, G)$ and $\tau_1\circ\tau_0$ is the associated copy map. \end{lemma} Next we record how the copying interacts with direct limits. This is implicit in \cite{nitcis}. \begin{lemma}\label{shift direct limits} Let $\preceq$ be a directed partial order on a set $A$. 
Suppose we have directed systems of premice $\mathcal{M}=\langle \{M_a\}_{a\in A}, \{\pi_{a,b}\}_{a\preceq b}\rangle$ and $\mathcal{N}=\langle \{N_a\}_{a\in A}, \{\sigma_{a,b}\}_{a\preceq b}\rangle$ and extenders $\{E_a\}_{a\in A}$ such that \begin{enumerate} \item[(a)] for all $a\in A$, $E_a$ is on the extended $M_a$-sequence, \item[(b)] for all $a,b\in A$ such that $a\preceq b$, $\pi_{a,b}$ and $\sigma_{a,b}$ are nearly elementary, and \item[(c)] for all $a,b\in A$ such that $a\preceq b$, the Shift Lemma applies to $(\pi_{a,b}, \sigma_{a,b}, E_a, E_b)$. \end{enumerate} For $a,b\in A$ such that $a\preceq b$, let $\tau_{a,b}$ be the copy map associated to $(\pi_{a,b}, \sigma_{a,b}, E_a, E_b)$. Let $M = \lim\mathcal{M}$, $N=\lim \mathcal{N}$, $\pi_a:M_a\to M$ and $\sigma_a:N_a\to N$ be the direct limit maps, and $E$ the eventual common value of $\pi_a(E_a)$.\footnote{As in the discussion of the direct limit of a system of plus trees, we must have $\pi_{a,b}(E_a)=E_b$ on a $\preceq$-cofinal subset of $A$, so this makes sense.} Let $\mathcal{P} = \langle \{\text{Ult}(N_a, E_a)\}_{a\in A}, \{\tau_{a,b}\}_{a\preceq b}\rangle$, $P = \lim\mathcal{P}$, $\tau_a:Ult(N_a, E_a)\to P$ the direct limit maps, and $j: N\to P$ the unique map such that for every $a\in A$, the following diagram commutes \begin{center} \begin{tikzcd} N_a \arrow[r, "\sigma_{a}"] \arrow[d, "E_a"'] & N \arrow[d,"j"]\\ Ult(N_a, E_a) \arrow[r, "\tau_{a}"'] & P. \end{tikzcd} \end{center} Then $P= Ult(N,E)$, $j=i^N_E$, and for all $a\in A$, $\tau_a$ is the copy map associated to $(\pi_a, \sigma_a, E_a, E)$. Moreover, if $\tau_{a,b}$ is elementary for every $a,b\in A$ such that $a\preceq b$, then for every $a\in A$, $\tau_a$ is elementary.\footnote{For example, if $\langle \pi_{a,b}, \sigma_{a,b}\rangle:(N_a, E_a)\to^*(N_b, E_b)$ for every $a,b\in A$, then $\tau_a$ is elementary for every $a\in A$.} \end{lemma} The following diagram illustrates the situation along chains of $\preceq$.
\begin{center} \begin{tikzcd} M_a \arrow[r, "\pi_{a,b}"] & M_b \arrow[r] \arrow[rr, bend left, "\pi_{b}"]& {} \arrow[r, "\cdots" ,phantom]& M\\ E_a \arrow[r, "\mapsto", phantom] \arrow[u, "\scriptsize \mathbin{\rotatebox[origin=c]{90}{$\in$}}" ,phantom]& E_b\arrow[r, "\mapsto", phantom] \arrow[rr,mapsto, bend left]& {} \arrow[r, "\cdots" ,phantom]& E\arrow[u, "\scriptsize\mathbin{\rotatebox[origin=c]{90}{$\in$}}" ,phantom]\\ N_a \arrow[r, "\sigma_{a,b}"] \arrow[d, "E_a"] & N_b \arrow[d,"E_b"] \arrow[r]\arrow[rr, bend left, "\sigma_{b}"] & {} \arrow[r, "\cdots" ,phantom]& N \arrow[d,"j"]\\ Ult(N_a, E_a) \arrow[r, "\tau_{a,b}"'] & Ult(N_b, E_b) \arrow[r] \arrow[rr, bend left, "\tau_{b}"]& {} \arrow[r, "\cdots" ,phantom]& P \end{tikzcd} \end{center} The main relevant fact here is that $\pi_a$ and $\sigma_a$ agree on $\text{dom}(E_a) \cup \lbrace \text{dom}(E_a) \rbrace$. This follows at once from the fact that $\pi_{b,c}$ agrees with $\sigma_{b,c}$ on $\text{dom}(E_b) \cup \lbrace \text{dom}(E_b) \rbrace$, for all $b \prec c$. We leave the calculations that show everything else fits together properly to the reader. \begin{proof}[Proof of Proposition \ref{direct limit characterization}.] Again, this proposition amounts to saying (1) implies (2) and (3) in the above definition of when the direct limit is wellfounded. We first show (1) implies (2). \begin{claim}\label{direct limit claim 1} Let $x\in D$. Suppose $M_x$ is defined and that $\leq$ is illfounded below $x$. Then $M_{x}$ is illfounded. \end{claim} \begin{proof} We define an order preserving embedding $f$ from $\leq\upharpoonright x$ into the ordinals of $M_x$. Let $x=[a,\alpha]$ and fix $y=[b,\beta]<x$. 
Without loss of generality, we may assume $a=b$, so that $\beta<\alpha$ and $\beta+1<\text{lh}(\tree{T}_a)$ (this is just because we can move to $c\succeq a,b$, where we have $[c, u_{a,c}(\alpha)]=[a,\alpha]$ and $[c,u_{b,c}(\beta)]=[b,\beta]$, so that $u_{b,c}(\beta)+1<u_{a,c}(\alpha)+1\leq \text{lh} (\tree{T}_c)$, since $[b,\beta]<[a,\alpha]$). We let \begin{equation*} f(y) = t^{a}_{\alpha}(\text{lh}( E^{\tree{T}_a}_\beta)). \end{equation*} Clearly $f$ maps $y$ to an ordinal of $M_{x}$. It's easy to check that it is (strictly) order preserving, so $M_{x}$ is illfounded. \hfill{$\qed$ Claim \ref{direct limit claim 1}} \end{proof} So now suppose that $(D,\leq)$ is a wellorder and that the models $M_{x}$ exist and are wellfounded (i.e. (1) and (2)). We show (3) by induction on $(D,\leq)$. More specifically, for $x\in D$, we let $\mathcal{D}^{\leq x} = \langle \tree{T}_a\upharpoonright (x(a)+1), \Psi_{a,b}\upharpoonright(\tree{T}_a\upharpoonright (x(a)+1))\,|\, a\preceq b \wedge a,b\in \text{dom} (x)\rangle$. It's easy to see that $\lim \mathcal{D}^{\leq x} = (\lim \mathcal{D})\upharpoonright x+1$, where for the $\Gamma$ systems we mean that for any $a\in \text{dom} (x)$, $(u_a)^{\mathcal{D}^{\leq x}}= u_a\upharpoonright x(a)+1$ and the $t$-maps are the same. \begin{claim}\label{direct limit induction} For all u-threads $x\in D$, \begin{enumerate} \item[(i)] $\lim \mathcal D^{\leq x}$ is wellfounded. \item[(ii)] for all $a\in \text{dom} (x)$, $\Gamma_a\upharpoonright (\tree{T}_a\upharpoonright x(a)+1)$ is an extended tree embedding from $\tree{T}_a\upharpoonright x(a)+1$ into $\lim \mathcal{D}\upharpoonright x+1$, and \item[(iii)] for all $a\in \text{dom} (x)$ and all $b\succeq a$, \[(\Gamma_b\circ\Psi_{a,b})\upharpoonright x(a)+1\approx \Gamma_a\upharpoonright x(a)+1.\] \end{enumerate} \end{claim} We first show that the $u$-maps of our system preserve tree-predecessors of successor $u$-threads on a tail, in the following sense.
\begin{claim}\label{direct limit claim 2} For any $x\in D$ which has a $\leq$-successor $z$ in $D$, there is $y\in D$ such that \[y=\leq^*\text{-pred} (z).\] Moreover, \[y=\leq^*\text{-pred} (z)\, \Leftrightarrow\forall^* a\, y(a)=\tree{T}_a\text{-pred} (x(a)+1).\] \end{claim} \begin{proof} Fix $x$ and let $z$ be its $\leq$-successor. First note that for any $a\in \text{dom} (z)\cap \text{dom} (x)$, $z(a)=x(a)+1$, since $z(a)\leq x(a)+1$ (as $[a, x(a)+1]$ is a $u$-thread $>x$) but $z(a)\neq x(a)$ (since $z\neq x$). We'll show that there is a $u$-thread $y$ such that $\forall^* a\, y(a)= \tree{T}_a\text{-pred}(x(a)+1)$. Suppose that there is no immediate $\leq^*$-predecessor of $z$. We'll show that $\leq$ is illfounded, a contradiction. We define sequences $\langle a_n\,|\,n\in \omega\rangle$, $\langle \beta_n\,|\,n\in \omega\rangle$ such that $a_n\prec a_{n+1}$ and $\beta_n=\tree{T}_{a_n}\text{-pred} (x(a_n)+1)$ but $\beta_{n+1}<u_{a_n,a_{n+1}}(\beta_n)$. Then, taking $y_n=[a_n, \beta_n]$ gives a witness to the illfoundedness of $\leq$. We start with any $a_0\in \text{dom} (z)$ and take $\beta_0 = \tree{T}_{a_0}\text{-pred} (x(a_0)+1)$, as we must. Given $a_n$ and $\beta_n=\tree{T}_{a_n}\text{-pred}(x(a_n)+1)$, let \[a_{n+1}\succeq a_n \text{ be such that } u_{a_n,a_{n+1}} (\beta_n)\neq \tree{T}_{a_{n+1}}\text{-pred}(x(a_{n+1})+1).\] Such an $a_{n+1}$ exists, since otherwise $y_n=[a_n, \beta_n]$ is the immediate predecessor of $z$ in $\leq^*$. Now let $\beta_{n+1}= \tree{T}_{a_{n+1}}\text{-pred} (x(a_{n+1})+1)$. Since $\Psi_{a_n, a_{n+1}}$ is a tree embedding, we must have that $\beta_{n+1}\in [v_{a_n, a_{n+1}}(\beta_n),u_{a_n,a_{n+1}}(\beta_n)]_{\tree{T}_{{a_{n+1}}}}$. So since $\beta_{n+1}\neq u_{a_n,a_{n+1}}(\beta_n)$, we have $\beta_{n+1}<u_{a_n, a_{n+1}}(\beta_n)$, as desired. \hfill{$\qed$ Claim \ref{direct limit claim 2}} \end{proof} Let $z$ be a $u$-thread of successor rank, say $z$ is the $\leq$-successor of $x$ (i.e.
the rank of $z$ is the rank of $x$ plus one). The observation made at the start of the previous proof shows $z(a)=x(a)+1$ for almost all $a$. Fix such an $a$, so that for all $b\succeq a$, $z(b)=x(b)+1$. So we have for all $b\succeq a$, \begin{align*} z(b) &= x(b)+1\\ &=u_{a,b}(x(a))+1\\ &= v_{a,b}(x(a)+1)\\&= v_{a,b}(z(a)). \end{align*} This shows that all successor $u$-threads are actually $v$-threads (defined in the obvious way) when $\leq$ is wellfounded. Even when $\leq$ is wellfounded, there may be $u$-threads which are not $v$-threads, so it was important to use $u$-threads in defining the direct limit. Going forward, if $x$ is a $u$-thread which is not the $\leq$-largest $u$-thread, we'll let $x+1$ be the $\leq$-successor of $x$. \begin{proof}[Proof of Claim \ref{direct limit induction}.] \setcounter{claim}{2} We proceed by induction. We already know all the $M_y$ are defined and wellfounded and $\leq$ is wellfounded, so to show (i) we just need to see that $\langle M_y,E_y, \leq^*\upharpoonright x\rangle$ is a plus iteration tree. In the base case, where $x$ is the minimum $u$-thread, this is the unique tree of length one on the base model and (ii) and (iii) hold trivially. For the successor case, suppose we have (i)-(iii) for all $z\leq x$ and suppose that $x$ is not the last $u$-thread. So $x$ has a $\leq$-successor $x+1$ and, appealing to Claim \ref{direct limit claim 2}, we can take $y=\leq^*\text{-pred}(x+1)$. To show (i) we need to see that we're applying $E_x$ following the rules of quasi-normality, i.e. \begin{subclaim}\label{direct limit subclaim1} $y$ is $\leq$-least such that $\text{crit} (E_{x})<\lambda(E_{y})$ and for $P\trianglelefteq M_{y}$ least such that $\text{dom}(E_{x})\trianglelefteq P$ and $\rho(P)\leq \text{crit} (E_{x})$, \[M_{x+1}=\text{Ult}(P,E_{x}).\] \end{subclaim} \begin{proof} By Claim \ref{direct limit claim 2}, we may take $a$ such that for all $b\succeq a$, $y(b)=\tree{T}_b \text{-pred} (x(b)+1)$.
For the appropriate choice of maps and models, we are now exactly in the situation of Lemma \ref{shift direct limits}. The rest of the Subclaim follows from the normality of each of the trees $\tree{T}_b$. We leave the details to the reader. \hfill{$\qed$ Claim \ref{direct limit subclaim1}} \end{proof} $(ii)$ and $(iii)$ at $x+1$ easily follow. Suppose now $x$ has limit rank and $(i)$-$(iii)$ hold for all $z<x$. We first need to see \begin{subclaim}\label{direct limit subclaim2} For $b=[0,x)_{\leq^*}=_\text{def} \{y\,|\, y<^*x\}$, $b$ is $<$-cofinal in $x$, there are finitely many drops along $b$, and $M_{x}$ is the direct limit along $b$. \end{subclaim} \begin{proof} To see that $b$ is cofinal, let $y<x$. Since $x$ has limit rank, $y+1<x$. Let $a$ be sufficiently large such that $y+1=[a,y(a)+1]$. Let $\gamma+1$ be least such that $y(a)+1\leq\gamma+1\leq_{\tree{T}_a}x(a)$. We have that for all $b\succeq a$, \[y(b)<u_{a,b}(\gamma)+1=v_{a,b}(\gamma+1)\leq_{\tree{T}_b}v_{a,b} (x(a))\leq_{\tree{T}_b} x(b),\] using here that $\Phi_{a,b}$ is a tree embedding (and the $v$-maps of tree embeddings are tree-order preserving). So letting $z=[a,\gamma]+1$, we have that $y<z\leq^* x$. Since $x$ is not a successor, we actually have $y<z<^*x$, as desired. Since the model $M_x$ is defined, there is an $a$ such that for all $b\succeq a$, $t^{a,b}_{x(a)}$ is total. Suppose first that there is some successor $\eta<_{\tree{T}_a} x(a)$ such that $(\eta,x(a)]_{\tree{T}_a}$ doesn't drop. Then for all $b\succeq a$, we have that $[v_{a,b}(\eta), x(b))_{\tree{T}_{b}}$ doesn't drop. There is some $u$-thread $z$ such that $z=[b,v_{a,b}(\eta)]$ for all sufficiently large $b$. Now, any drop from $z$ to $x$ in the direct limit corresponds to a drop in $[z(b), x(b))_{\tree{T}_b}$ for all sufficiently large $b$, so there are no such drops. In the remaining case, $x(a)$ is a successor ordinal and a drop in $\tree{T}_a$. Let $\beta=\tree{T}_a\text{-pred} (x(a))$.
Letting $z$ be the $u$-thread such that $z(b)=v_{a,b}(x(a))$ for all sufficiently large $b$, we have $z<^* x$ and there can be no drops between $z$ and $x$ in the direct limit tree, just as before (since for all sufficiently large $b$, there are no drops in $(z(b), x(b)]_{\tree{T}_{b}}$, as $t^{a,b}_{x(a)}$ is total). By induction, this means there are only finitely many drops. Using our induction hypotheses $(ii)$ and $(iii)$ for $z<x$, it is straightforward to check that $M_x$ is the direct limit along $b$, so we leave it to the reader. \hfill{$\qed$ Subclaim \ref{direct limit subclaim2}} \end{proof} This gives us (i), and it is now easy to verify $(ii)$ and $(iii)$. \hfill{$\qed$ Claim \ref{direct limit induction}} \end{proof} \hfill{$\qed$ Proposition \ref{direct limit characterization}} \end{proof} Notice that in the proof of Proposition \ref{direct limit characterization}, we also verified (or rather left it to the reader to verify) that when the direct limit $\lim\mathcal{D}$ is wellfounded, then $\Gamma_a = \langle v_a, u_a, \{s^a_\xi\},\{t^a_\xi\}\rangle$ is an extended tree embedding from $\tree{T}_a$ into $\tree{U}$, the direct limit tree, and $\Gamma_b\circ \Psi_{a,b}= \Gamma_a$ when $a\preceq b$. According to the following proposition, the direct limit we just defined is indeed the direct limit in the category of plus iteration trees of successor lengths and extended tree embeddings. \begin{proposition}\label{direct limit prop} Let $\mathcal{D}= \langle \tree{T}_a, \Psi_{a,b}, \preceq\rangle$ be a directed system of trees, where $\preceq$ has field $A$. Suppose there is a normal tree $\tree{S}$, and for each $a\in A$ an extended tree embedding $\Pi_a:\tree{T}_a\to \tree{S}$ such that whenever $a\preceq b$, $\Pi_a=\Pi_b\circ \Psi_{a,b}$; then the direct limit $\lim\mathcal{D}$ is wellfounded, and there is a unique tree embedding $\Pi:\lim \mathcal{D}\to \tree{S}$ such that $\Pi_a= \Pi\circ \Gamma_a$ for all $a\in A$.
\end{proposition} We omit the straightforward proof. \begin{remark}\label{direct limit remark} We can define the direct limit of a commuting system of trees under ordinary tree embeddings in the obvious way and verify the versions of Propositions \ref{direct limit characterization} and \ref{direct limit prop}. It is easy to see that the direct limit of a system of trees under extended tree embeddings is either the same as the corresponding direct limit under ordinary tree embeddings, or else is some tree $\tree{U}$ with length $\gamma+1$ for a limit ordinal $\gamma$, and the corresponding direct limit under ordinary tree embeddings is just $\tree{U}\upharpoonright\gamma$. \end{remark} \subsection{Embedding normalization and quasi-normalization} We begin with one-step embedding normalization. Let $\tree{S}$ and $\tree{T}$ be normal trees of successor length on some common base premouse $M$. Let $F$ be on the sequence of the last model of $\tree{T}$. Put \[ \alpha=\alpha(F,\tree{T}) = \text{least $\gamma$ such that $F$ is on the sequence of $M^\tree{T}_\gamma$}, \] noting that $\alpha<\text{lh} (\tree{T})$, and \[ \beta=\beta(F,\tree{T}) = \text{least $\gamma$ such that $\gamma=\alpha$ or $\lambda(E^\tree{T}_\gamma)>\text{crit} (F)$.} \] Suppose that $\tree{S}\upharpoonright\beta+1=\tree{T}\upharpoonright\beta+1$ and $\text{dom} (F)\leq \lambda( E^\tree{S}_\beta)$, if $\beta+1<\text{lh} (\tree{S})$. In this case, we define a tree \[ \tree{W}=W(\tree{S},\tree{T},F), \] a partial extended tree embedding \[ \Phi^{W(\tree{S},\tree{T},F)} = \Phi \colon \tree{S} \to \tree{W}, \] and a nearly elementary map \[ \sigma^{W(\tree{S},\tree{T},F)} = \sigma\colon\text{Ult}(P, F) \to M_\infty^{\tree{W}}, \] where $P$ is the largest initial segment of the last model of $\tree{S}$ to which we can apply $F$, and $M_\infty^{\tree{W}}$ is the last model of $\tree{W}$. In general, we may reach illfounded models in forming $\tree{W}$, and we stop if so.
We say that $\tree{W}$ is wellfounded if we never reach illfounded models. If $\tree{S}$ and $\tree{T}$ are by a strategy $\Sigma$ which has strong hull condensation, then $\tree{W}$ will be wellfounded. We let $\tree{W}\upharpoonright \alpha+1 =\tree{T}\upharpoonright \alpha+1$ and $E^\tree{W}_\alpha=F$. For the rest of $\tree{W}$, we consider cases. Let $Q$ be the initial segment of $M_\beta^\mathcal{S}$ to which $F$ must be applied in a normal tree. \paragraph{The dropping case.} Either $\beta+1 = \text{lh} (\mathcal{S})$ and $Q\triangleleft M^\tree{S}_\beta$, or $\beta + 1 < \text{lh} (\mathcal{S})$ and $Q\triangleleft M^\tree{S}_\beta|\text{lh} (E^\tree{S}_\beta)$. In this case we have described all of $\tree{W}$ already: \[ \tree{W}=\tree{T}\upharpoonright\alpha+1 {}^\frown \langle F\rangle, \] and $\Phi \restriction \beta+1$ is just the identity on $\tree{S}\upharpoonright\beta+1$. $\Phi$ is the associated extended tree embedding: $u(\beta)=\alpha+1$ and $t_\beta = \hat{\imath}_{v(\beta),u(\beta)}^\mathcal{W} \circ s_\beta = i_F^Q \circ \mathrm{id} = i_F^Q$. In this case $P=Q$, and $\text{Ult}(P,F) = M_{\alpha +1}^\mathcal{W}$ is the last model of $\tree{W}$, so we set $\sigma=\mathrm{id}$. In the dropping case, $\Phi$ is total exactly when $\beta+1=\text{lh} (\tree{S})$. \paragraph{The non-dropping case.} Otherwise. We define the $u$ and $v$ maps of $\Phi$ by \begin{equation*} u(\xi) = \begin{cases*} \xi & if $\xi<\beta$, \\ \alpha+1+(\xi-\beta) & if $\xi\geq \beta$, \end{cases*} \end{equation*} and \begin{equation*} v(\xi) = \begin{cases*} \xi & if $\xi\le \beta$, \\ \alpha+1+(\xi-\beta) & if $\xi > \beta$. \end{cases*} \end{equation*} So $u = v$, except that $v(\beta) = \beta <_W u(\beta)=\alpha+1$. The remainder of $\mathcal{W}$ and $\Phi$ are determined by our having set $E_\alpha^\mathcal{W} = F$ and the rules for tree embeddings.
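To make the index arithmetic concrete, here is a small worked instance; the values $\beta=2$ and $\alpha=4$ are chosen purely for illustration and are not part of the construction. With these values, $F=E^\mathcal{W}_4$ and $u(\beta)=\alpha+1=5$, and the formulas above give
\[
\begin{array}{c|cccc}
\xi & 0 & 1 & 2 & 3 \\ \hline
v(\xi) & 0 & 1 & 2 & 6 \\
u(\xi) & 0 & 1 & 5 & 6
\end{array}
\]
so $u$ and $v$ agree everywhere except at $\beta$ itself, where $v(\beta)=2 <_W u(\beta)=5$.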
For example, if $\xi+1 < \text{lh} (\mathcal{S})$, then $E_{u(\xi)}^\mathcal{W}$ is defined to be $t_\xi(E_\xi^\mathcal{S})$, which then determines $M_{u(\xi)+1}^\mathcal{W}$ by normality. Letting $\eta = S\text{-pred}(\xi+1)$, and letting $\eta^*$ be what normality dictates for $W\text{-pred}(u(\xi)+1)$, we must see that $\eta^*$ is properly located in $\mathcal{W}$, namely that $v(\eta) \le_{W} \eta^* \le_{W} u(\eta)$. But it is easy to check that if $\eta \neq \beta$, then $v(\eta) = \eta^* = u(\eta)$, while if $\eta = \beta$, then $\eta^* = v(\eta) = \beta$ when $\text{crit}(E_\eta^\mathcal{S}) < \text{crit}(F)$, and $\eta^* = u(\eta) = \alpha+1$ when $\text{crit}(F) \le \text{crit}(E_\eta^\mathcal{S})$. In either case, $t_\xi$ agrees with $\hat{\imath}_{v(\eta),\eta^*}^\mathcal{W} \circ s_\eta$ on $\text{dom}(E_\xi^\mathcal{S})$. Letting $Q$ be what $E_\xi^\mathcal{S}$ applies to in $\mathcal{S}$, and $Q^*$ what $E_{u(\xi)}^\mathcal{W}$ applies to in $\mathcal{W}$, and \[ \pi = \hat{\imath}^\mathcal{W}_{v(\eta),\eta^*} \circ s_\eta \restriction Q, \] we get that $\pi$ agrees with $t_\xi$ on $\text{dom}(E_\xi^\mathcal{S})$, so the Shift Lemma applies to $(t_\xi,\pi, E_\xi^\mathcal{S})$, and we can let \[ s_{\xi+1} \colon \text{Ult}(Q,E_\xi^\mathcal{S}) \to \text{Ult}(Q^*,E_{u(\xi)}^\mathcal{W}) \] be the associated copy map. \begin{remark} The argument just sketched is a special case of a general lemma on extending tree embeddings given in \S8.2 of \cite{nitcis}. \end{remark} In the non-dropping case, $\Phi$ is a total, non-dropping extended tree embedding from $\mathcal{S}$ to $\mathcal{W}$. We have $\text{lh} (\tree{W})= \alpha+1+(\text{lh} (\tree{S})-\beta)$, and $\text{ran} (u)=[0,\beta)\cup [\alpha+1,\text{lh} (\tree{W}))$. Since $t_\beta=i^P_F$ and all the $t_\xi$ for $\xi\geq \beta$ agree with $t_\beta$ beyond $\text{dom} (F)$, we have that $F$ is an initial factor of the extender of these $t_\xi$.
We let $\sigma$ be the unique map such that the last $t$ map factors as $\sigma\circ i^N_F$, where $N$ is the last model of $\tree{S}$ (using here that $F$ is total on the last model by our case hypothesis). Next we review the one-step \textit{quasi-normalization}, which applies to plus trees. Let $\tree{T}$ be a plus tree of successor length and $F$ an extender such that $F^-$ is on the sequence of the last model of $\tree{T}$. Put \begin{enumerate} \item[] $\alpha_0=\alpha_0(F,\tree{T})=$ least $\gamma$ such that \begin{enumerate}\item[i.] $\alpha(F^-,\tree{T})\leq \gamma$ and \item[ii.] $\text{lh}(F)<\hat\lambda(E_\gamma^\tree{T})$, or $E_\gamma^\tree{T}$ is of plus type, or $\gamma+1=\text{lh}(\tree{T})$. \end{enumerate} and \item[] $\beta=\beta(F,\tree{T}) =$ least $\gamma$ such that $\text{crit} (F)< \text{lh}(E^\tree{T}_\gamma)$. \end{enumerate} It's easy to see that $\beta\leq\alpha_0$ and if $\beta<\alpha_0$, then $\beta$ is least such that $\text{crit}(F)<\hat\lambda(E_\beta^\tree{T})$. Now suppose $\tree{S}$ and $\tree{T}$ are plus trees of successor length on a premouse $M$ such that $\tree{S}\upharpoonright\beta+1=\tree{T}\upharpoonright\beta+1$ and if $\beta+1<\text{lh}(\tree{S})$, then $\text{dom}(F)\triangleleft M_\beta^\tree{S}|\text{lh}(E_\beta^\tree{T})$. We define the quasi-normalization \[ \tree{V}=V(\tree{S},\tree{T},F), \] a partial extended tree embedding \[\Phi=\Phi^{V(\tree{S}, \tree{T}, F)}\] from $\tree{S}$ into $\tree{V}$, and a nearly elementary map \[\sigma=\sigma^{V(\tree{S},\tree{T}, F)}:\text{Ult}(P,F)\to M_\infty^\tree{V},\] where $P$ is the longest initial segment of the last model of $\tree{S}$ to which we can apply $F$. As before, we may reach illfounded models in forming $\tree{V}$ and stop if we do. Otherwise, we say $\tree{V}$ is wellfounded. We set \[\tree{V}\upharpoonright\alpha_0+1=\tree{T}\upharpoonright\alpha_0+1\] and put \[ E_{\alpha_0}^\tree{V}=F. \] For the rest of the definitions, we split into cases, as before.
Let $Q$ be the initial segment of $M_\beta^\tree{S}$ to which $F$ must be applied in a quasi-normal tree. \paragraph{The dropping case.} Either $\beta+1 = \text{lh} (\mathcal{S})$ and $Q\triangleleft M^\tree{S}_\beta$, or $\beta + 1 < \text{lh} (\mathcal{S})$ and $Q\triangleleft M^\tree{S}_\beta|\text{lh} (E^\tree{S}_\beta)$. In this case we have described all of $\tree{V}$ already: \[ \tree{V}=\tree{T}\upharpoonright\alpha_0+1 {}^\frown \langle F\rangle. \] We also set $\Phi \restriction \beta+1$ to be the identity on $\tree{S}\upharpoonright\beta+1$. $\Phi$ is the associated extended tree embedding: $u(\beta)=\alpha_0+1$ and $t_\beta = \hat{\imath}_{v(\beta),u(\beta)}^\mathcal{V} \circ s_\beta = i_F^Q \circ \mathrm{id} = i_F^Q$. In this case $P=Q$ and $\text{Ult}(P,F) = M_{\alpha_0 +1}^\mathcal{V}$ is the last model of $\tree{V}$, and so we set $\sigma=\mathrm{id}$. Note that in the dropping case, $\Phi$ is total exactly when $\beta+1=\text{lh} (\tree{S})$. \paragraph{The non-dropping case.} Otherwise. We define the $u$ and $v$ maps of $\Phi$ by \begin{equation*} u(\xi) = \begin{cases*} \xi & if $\xi<\beta$, \\ \alpha_0+1+(\xi-\beta) & if $\xi\geq \beta$, \end{cases*} \end{equation*} and \begin{equation*} v(\xi) = \begin{cases*} \xi & if $\xi\le \beta$, \\ \alpha_0+1+(\xi-\beta) & if $\xi > \beta$. \end{cases*} \end{equation*} So $u = v$, except that $v(\beta) = \beta <_V u(\beta)=\alpha_0+1$. The remainder of $\mathcal{V}$ and $\Phi$ are determined by our having set $E_{\alpha_0}^\mathcal{V} = F$ and the rules for tree embeddings. By induction on $\xi\geq \beta$, one shows that there are nearly elementary embeddings $\tilde{\sigma}_\xi:\text{Ult}(M_\xi^\tree{S}, F)\to M_{u(\xi)}^\tree{V}$, and we let $\sigma$ be the last of these embeddings. In the non-dropping case, $\Phi$ is always a total extended tree embedding (assuming that $\tree{V}$ is wellfounded). This finishes our description of the quasi-normalization.
The following lemma, due to Siskind, connects the one-step quasi-normalization to an analysis of particularly well-behaved tree embeddings. \begin{lemma} [Factor Lemma]\label{factor lemma} Let $\Psi: \tree{S}\to \tree{T}$ be an (extended) tree embedding such that $\Psi\neq Id$. Let $\beta=\text{crit} (u^\Psi)$ and $\alpha_0+1$ be the successor of $v^\Psi(\beta)=\beta$ in $(v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$. Suppose that $\text{dom}(E^\tree{T}_{\alpha_0})\triangleleft M_\beta^\tree{S}|\text{lh}(E_\beta^\tree{S})$. Then $V(\tree{S},\tree{T}\upharpoonright\alpha_0+1, E^\tree{T}_{\alpha_0})$ is defined and wellfounded and there is a unique (extended) tree embedding $\Gamma: V(\tree{S},\tree{T}\upharpoonright\alpha_0+1, E^\tree{T}_{\alpha_0})\to \tree{T}$ such that $u^\Gamma\upharpoonright\alpha_0+1=\text{id}$ and $\Psi= \Gamma\circ \Phi^{V(\tree{S},\tree{T}\upharpoonright\alpha_0+1, E^\tree{T}_{\alpha_0})}$. \end{lemma} \begin{proof}[Proof sketch.] First notice that our hypotheses guarantee that $V(\tree{S},\tree{T}\upharpoonright\alpha_0+1, E^\tree{T}_{\alpha_0})$ is defined. Now let $\tree{V}=V(\tree{S},\tree{T}\upharpoonright\alpha_0+1, E^\tree{T}_{\alpha_0})$ and $\Phi= \Phi^{V(\tree{S},\tree{T}\upharpoonright\alpha_0+1, E^\tree{T}_{\alpha_0})}$. The commutativity condition, together with the demand that $u^\Gamma\upharpoonright\alpha_0+1=\text{id}$, totally determines the $u$-map of $\Gamma$: \begin{equation*} u^\Gamma(\xi)=\begin{cases} \xi &\text{ if } \xi<\alpha_0+1\\ u^\Psi\circ(u^\Phi)^{-1}(\xi)&\text{ if } \xi\geq \alpha_0+1, \end{cases} \end{equation*} using in the second case that $[\alpha_0+1, \text{lh} (\tree{V}))\subseteq \text{ran} (u^\Phi)$. One must check by induction on $\xi$ that $u^\Gamma\upharpoonright(\xi+1)$ is the $u$-map of a tree embedding from $\tree{V}\upharpoonright (\xi+1)$ into $\tree{T}$. For this, one uses the result on extending tree embeddings, Proposition 8.2.1 of \cite{nitcis}.
\qed \end{proof} The Factor Lemma gives us a sense in which the one-step quasi-normalization $V(\mathcal{S},\mathcal{T},F)$ is a minimal plus tree that uses $F$ and tree-embeds $\mathcal{S}$.\footnote{$\mathcal{T}$ is relevant too, but if we are dealing with trees by a fixed iteration strategy, then $F$ determines $\mathcal{T}$.} It is parallel to the fact that if $j \colon M \to N$ is elementary, and $E$ is an initial segment of the extender of $j$, then $\text{Ult}(M,E)$ embeds into $N$ in a way that makes the diagram commute. Indeed, one can think of $V(\mathcal{S},\mathcal{T},F)$ as an $F$-ultrapower of $\mathcal{S}$. A version of the Factor Lemma also holds for embedding normalization; this appears in \cite{associativity}. We can use iterated applications of the Factor Lemma to factor an appropriate non-identity extended tree embedding $\Psi$ as \[\cdots \circ\Phi_{F_{\xi+1}}\circ\Phi_{F_\xi}\circ\cdots\circ \Phi_{F_1}\circ \Phi_{F_0},\] where the $F_\xi$ are a (non-overlapping) sequence of exit extenders of $\tree{T}$ and the $\Phi_{F_\xi}$ are certain one-step quasi-normalizations by $F_\xi$. To do this, we need that the additional hypothesis on the domain of the relevant extender in the Factor Lemma statement obtains at every step. Rather than state this directly, we identify a simpler, stronger property which suffices for our applications. \begin{definition} A tree embedding $\Psi:\tree{S}\to \tree{T}$ is \textit{inflationary} iff for any $\xi+1<\text{lh}(\tree{S})$ and $\gamma+1\in (v^\Psi(\xi), u^\Psi(\xi)]_\tree{T}$, letting $\eta=\tree{T}\text{-pred}(\gamma+1)$, \[\text{dom}(E_\gamma^\tree{T})\triangleleft M_\eta^\tree{T}|\text{lh}(s_{\xi, \eta}^\Psi(E_\xi^\tree{S})).\] \end{definition} It is easy to see that the quasi-normalization tree embeddings $\Phi^{V(\tree{S}, \tree{T}, F)}$ are inflationary, but tree embeddings (even hull embeddings) may fail to be inflationary, in general.
Still, one can prove the following useful closure properties of the class of inflationary tree embeddings. \begin{proposition} Let $\Phi:\tree{S}\to \tree{T}$ and $\Psi:\tree{T}\to \tree{U}$ be inflationary tree embeddings. Then $\Psi\circ \Phi$ is inflationary. \end{proposition} \begin{proposition} Let $\Phi:\tree{S}\to \tree{T}$ and $\Psi:\tree{T}\to \tree{U}$ be tree embeddings such that $\Phi$ and $\Psi\circ \Phi$ are inflationary and $[\text{crit}(u^\Psi), \text{lh}(\tree{T}))\subseteq \text{ran}(u^\Phi)$. Then $\Psi$ is inflationary. \end{proposition} \begin{proposition} Let $\mathcal{D}=\langle \{\tree{T}_a\}_{a\in A}, \{\Psi_{a,b}\}_{a\preceq b}\rangle$ be a directed system of plus trees such that all tree embeddings $\Psi_{a,b}$ are inflationary. Suppose $\lim \mathcal{D}$ is wellfounded and let $\{\Gamma_a\}_{a\in A}$ be the direct limit tree embeddings from $\tree{T}_a$ into the direct limit tree. Then for all $a\in A$, $\Gamma_a$ is inflationary. \end{proposition} We omit the somewhat tedious but straightforward proofs. The reader can find proofs of each proposition for the representative case that all trees are $\lambda$-tight and normal in \cite{associativity}. Now we return to our iterative factorization result. \begin{theorem} Let $\Psi:\tree{S}\to \tree{T}$ be an inflationary extended tree embedding. 
Then there is a unique sequence of extenders $\langle F_\xi\mid \xi<\lambda\rangle$ such that there is a directed system of plus trees $\mathcal{D}=\langle \{\tree{S}_\xi\}_{\xi\leq\lambda}, \{\Psi_{\eta,\xi}\}_{\eta\leq \xi\leq \lambda}\rangle$ such that \begin{enumerate} \item $\tree{S}_0=\tree{S}$, $\tree{S}_\lambda= \tree{T}$, and $\Psi_{0,\lambda}=\Psi$; \item for $\xi+1\leq \lambda$, letting $\beta_\xi=\text{crit}(u^{\Psi_{\xi,\lambda}})$ and $\alpha_\xi+1$ the successor of $\beta_\xi$ in $(\beta_\xi, u^{\Psi_{\xi, \lambda}}(\beta_\xi)]_\tree{T}$, \begin{enumerate} \item $F_\xi=E_{\alpha_\xi}^\tree{T}$, \item $\tree{S}_{\xi+1}=V(\tree{S}_\xi, \tree{T}\upharpoonright\alpha_\xi+1, F_\xi)$, and \item $\Psi_{\xi, \xi+1}=\Phi^{V(\tree{S}_\xi, \tree{T}\upharpoonright\alpha_\xi+1, F_\xi)}$;\end{enumerate} \item for $\gamma\leq \lambda$ a limit ordinal, \begin{enumerate} \item $\tree{S}_\gamma= \lim \langle \{\tree{S}_\xi\}_{\xi<\gamma}, \{\Psi_{\eta, \xi}\}_{\eta\leq \xi<\gamma}\rangle$, and \item for $\xi<\gamma$, $\Psi_{\xi, \gamma}$ is the direct limit tree embedding; \end{enumerate} \item for $\xi\leq \lambda$ and $\eta<\xi$, $u^{\Psi_{\xi, \lambda}}\upharpoonright \alpha_\eta+1 =\text{id}$. \end{enumerate} \end{theorem} \begin{proof}[Proof sketch.] We define $\tree{S}_\xi$, $\Psi_{\eta, \xi}$, and $F_\xi$ by induction, maintaining that we always have additional inflationary extended tree embeddings $\Gamma_\xi: \tree{S}_\xi\to \tree{T}$ which commute appropriately with the rest of our system. Assuming that the tree embeddings produced so far are inflationary, one continues by taking direct limits at limit stages and, at successors, by applying the Factor Lemma to the last $\Gamma_\xi$ (as long as this is not the identity). The preceding propositions guarantee that all the resulting tree embeddings produced in this way are actually inflationary, so that this process never breaks down.
Since we are pulling all the extenders $F_\xi$ from $\tree{T}$, this process must terminate at some stage $\leq \text{lh}(\tree{T})$, and so we must end up with a stage $\Gamma_\xi= Id$ (or else we could continue the construction). A detailed proof in the case that all the trees are normal appears in \cite{associativity}. \qed \end{proof} For $\Psi:\tree{S}\to \tree{T}$ an inflationary extended tree embedding, we call the sequence $\vec{F}=\langle F_\xi\, |\, \xi<\lambda\rangle$ as in the previous theorem the \textit{factorization of $\Psi$}. One might ask whether $\vec{F}$ and $\mathcal{S}$ determine $\Psi$ and $\mathcal{T}$. It is easy to see that this is not the case; for example, consider the one-step normalization embeddings $\Phi \colon \mathcal{S} \to W(\mathcal{S},\mathcal{T},F)$ and $\Psi \colon \mathcal{S} \to W(\mathcal{S},\mathcal{U},F)$, with $\mathcal{T}$ and $\mathcal{U}$ being distinct normal trees on $M$ that diverge before each of them reaches a model with $F$ on its sequence. Then $\Phi \neq \Psi$, but they have the common factorization $\langle F \rangle$. If we are dealing with normal trees on $M$ by a fixed iteration strategy, this kind of thing can't happen, and in fact it is easy to see that then $\mathcal{S}$ and the factorization $\vec{F}$ of $\Phi \colon \mathcal{S} \to \mathcal{T}$ determine $\Phi$. In general, $\mathcal{S}$ and $\vec{F}$ determine $\Phi$ up to ``similarity''. First, some notation from \cite{nitcis}: \begin{definition} For $\tree{T}$ a plus tree and $\eta\leq_\tree{T}\xi$, we let $e^\tree{T}_{\eta,\xi}$ be the sequence of extenders used along $(\eta,\xi)_\tree{T}$, i.e. $e^\tree{T}_{\eta,\xi}= \langle E_\alpha^\tree{T}\,|\,\alpha+1\in (\eta,\xi]_\tree{T}\rangle$.
\end{definition} The next proposition says that if $\tree{T}$ and $\tree{U}$ are plus trees having two common models $M,N$ which are tree-related in both trees, and have the same branch embeddings between them in both trees, then the extenders used to get from $M$ to $N$ are the same in both trees. We omit the well known proof. \begin{proposition}\label{branch extender prop} Suppose $\tree{T}$, $\tree{U}$ are plus trees, $\eta\leq_\tree{T} \xi$, $\eta^*\leq_\tree{U} \xi^*$, $M_{\xi}^\tree{T}=M_{\xi^*}^\tree{U}$, and either \begin{enumerate} \item $\eta$-to-$\xi$ and $\eta^*$-to-$\xi^*$ don't drop, $M_{\eta}^\tree{T}=M_{\eta^*}^\tree{U}$, and $i^\tree{T}_{\eta,\xi}= i^\tree{U}_{\eta^*,\xi^*}$, or \item $\eta$ and $\eta^*$ are the locations of the last drops along $\eta$-to-$\xi$ and $\eta^*$-to-$\xi^*$, respectively. \end{enumerate} Then $e^\tree{T}_{\eta,\xi}=e^\tree{U}_{\eta^*,\xi^*}$. \end{proposition} \begin{definition}\label{similar tree embeddings} Let $\Phi:\tree{S}\to \tree{T}$ and $\Psi:\tree{S}\to \tree{U}$ be extended tree embeddings. We say $\Phi$ and $\Psi$ are \textit{similar}, and write $\Phi\equiv \Psi$, iff for all $\xi<\text{lh} (\tree{S})$, $e^\tree{T}_{0,u^\Phi(\xi)}=e^\tree{U}_{0,u^\Psi(\xi)}$. \end{definition} We can characterize similarity using the Factor Lemma.\footnote{The definitions and results on factoring tree embeddings are due to Siskind.} \begin{theorem}\label{tree embedding uniqueness} Let $\Phi:\tree{S}\to \tree{T}$ and $\Psi:\tree{S}\to \tree{U}$ be inflationary extended tree embeddings. Then $\Phi\equiv \Psi$ iff $\Phi$ and $\Psi$ have the same factorization. \end{theorem} We omit the proof; the special but representative case that all the trees are $\lambda$-tight and normal appears in \cite{associativity}. We now briefly describe the quasi-normalization $V(\tree{T},\tree{U})$ of a length 2, maximal $M$-stack $\langle \tree{T},\tree{U}\rangle$ with $\tree{U}$ normal.
This is the last tree of the system \[\langle \tree{V}_\xi, \sigma_\xi,F_\zeta,\Phi^{\eta,\xi}\,|\, \eta,\xi,\zeta+1 <\text{lh} (\tree{U}), \, \eta\leq_\tree{U} \xi\rangle, \] defined by induction on $\text{lh} (\tree{U})$ by: \begin{enumerate} \item $\tree{V}_\xi = V(\tree{T},\tree{U}\upharpoonright \xi+1)$; \item $\sigma_\xi:M^\tree{U}_\xi\to R_\xi$ is nearly elementary, where $R_\xi$ is the last model of $\tree{V}_\xi$, and if $\xi+1<\text{lh} (\tree{U})$, $F_\xi=\sigma_\xi(E^\tree{U}_\xi)$; \item For $\zeta\leq_\tree{U}\eta\leq_\tree{U}\xi$, \begin{enumerate} \item $\Phi^{\eta,\xi}:\tree{V}_\eta\to \tree{V}_\xi$ is a partial extended tree embedding, and \item $\Phi^{\zeta,\xi}=\Phi^{\eta,\xi}\circ \Phi^{\zeta,\eta}$; \end{enumerate} \item For $\eta=\tree{U}\text{-pred}(\xi+1)$, \begin{enumerate} \item $\tree{V}_{\xi+1}= V(\tree{V}_\eta,\tree{V}_\xi,F_\xi)$, \item $\Phi^{\eta,\xi+1}=\Phi^{V(\tree{V}_\eta,\tree{V}_\xi,F_\xi)}$ and $\sigma_{\xi+1}= \sigma^{V(\tree{V}_\eta,\tree{V}_\xi,F_\xi)}$; \end{enumerate} \item For $\lambda <\text{lh} (\tree{U})$ a limit and $b=[0,\lambda)_\tree{U}$, \[\tree{V}_\lambda = \lim \langle\tree{V}_\xi,\Phi^{\eta,\xi}\,|\, \eta\leq_\tree{U}\xi\in b\rangle\] with $\Phi^{\xi, \lambda}$ and $\sigma_\lambda$ the direct limit tree embeddings and the direct limit map. \end{enumerate} \noindent If $\tree{U}$ has successor length, we let $\sigma^{V(\tree{T},\tree{U})}$ be the last of the $\sigma_\xi$. In general, we define $V(s)$ for a stack of plus trees $s$ of length $n+1$ and $\sigma_s$ from the last model of $s$ to the last model of $V(s)$ by induction on $n$.
For $s=\langle \tree{S}_1,\ldots, \tree{S}_n\rangle$ we put \[ V(s)=V( V(s\upharpoonright n), \sigma_{s\upharpoonright n} \tree{S}_n), \] and, if $\tree{S}_n$ has successor length, we let \[ \sigma_s = \sigma^{V(V(s\upharpoonright n), \sigma_{s\upharpoonright n}\tree{S}_n)}\circ\sigma^*, \] where $\sigma^*$ is the last copy map from the last model of $\tree{S}_n$ to the last model of $\sigma_{s\upharpoonright n}\tree{S}_n$. \begin{remark} {\rm The normalizations we are considering here are ``bottom-up'', in the sense of \cite{nitcis}. One could normalize in other orders, for example, by setting $V^*(\langle \mathcal{U}_0,\mathcal{U}_1,\mathcal{U}_2\rangle) = V(\mathcal{U}_0,V(\mathcal{U}_1,\mathcal{U}_2))$. We expect that all such quasi-normalizations are equivalent; in the case of embedding normalization for $\lambda$-tight normal trees, this has been proven by Siskind (see \cite{associativity}). The same proof should carry over to quasi-normalizations, but this has not been checked.}\end{remark} \subsection{Mouse pairs} We recall some notions from \cite{nitcis}. \begin{definition} \label{normalizeswell} Let $\Sigma$ be a complete $(\lambda,\theta)$-iteration strategy for $M$, where $\lambda > 1$. We say that $\Sigma$ {\em quasi-normalizes well} iff whenever $s$ is an $M$-stack by $\Sigma$ and $\langle \tree{T}, \tree{U}\rangle$ is a maximal 2-stack by $\Sigma_s$ such that $\tree{U}$ is normal, then \begin{itemize} \item[(a)] $V(\tree{T}, \tree{U})$ is by $\Sigma_s$, and \item[(b)] letting $\tree{V}=V(\tree{T}, \tree{U})$ and $\sigma=\sigma^{V(\tree{T}, \tree{U})}$, $\Sigma_{s{}^\frown\langle \tree{T}, \tree{U}\rangle} = (\Sigma_{s{}^\frown\tree{V}})^\sigma$. \end{itemize} \end{definition} Here $M$ is a premouse of some kind, coarse, pure extender, or least branch. It is easy to see that if $\Sigma$ quasi-normalizes well, then so do all its tails $\Sigma_s$.
Recall that $\mathcal{T}$ is a {\em pseudo-hull} of $\mathcal{U}$ iff there is a tree embedding of $\mathcal{T}$ into $\mathcal{U}$. \begin{definition} \label{stronghull} Let $\Sigma$ be a complete $(\lambda,\theta)$-iteration strategy for a premouse $M$; then $\Sigma$ has \textit{strong hull condensation} iff whenever $s$ is a stack of plus trees by $\Sigma$ and $N\trianglelefteq M_\infty(s)$, and $\mathcal{U}$ is a plus tree on $N$ by $\Sigma_{s,N}$, then for any plus tree $\mathcal{T}$ on $N$, \begin{itemize} \item[(a)] if $\mathcal{T}$ is a pseudo-hull of $\mathcal{U}$, then $\mathcal{T}$ is by $\Sigma_{s,N}$, and \item[(b)] if $\Phi \colon \mathcal{T} \to \mathcal{U}$ is an extended tree embedding, with last $t$-map $\pi$, and $Q \unlhd \text{dom}(\pi)$, then $\Sigma_{s{}^\frown \langle\mathcal{T}\rangle,Q} = (\Sigma_{s{}^\frown \langle\mathcal{U}\rangle,\pi(Q)})^\pi$. \end{itemize} \end{definition} The most important part of \ref{stronghull} is clause (a) in the case $s=\emptyset$; that is, that every pseudo-hull of a plus tree by $\Sigma$ is also by $\Sigma$. \begin{definition}\label{pureextenderpair} Let $\delta$ be regular. $(M,\Omega)$ is a {\em pure extender pair} with scope $H_\delta$ iff \begin{itemize} \item[(1)]$M$ is a pure extender premouse, and $M \in H_\delta$, \item[(2)] $\Omega$ is a complete $(\delta,\delta)$-iteration strategy for $M$, and \item[(3)] $\Omega$ quasi-normalizes well, is internally lift consistent\footnote{Roughly, for all $N \lhd M$, $\Omega_N$ is consistent with $\Omega$ in the natural sense. See \cite[Def. 5.4.4]{nitcis}.} and has strong hull condensation. \end{itemize} \end{definition} \begin{definition}\label{lbrhodpair} Let $\delta$ be regular.
$(M,\Omega)$ is a {\em least branch hod pair} (lbr hod pair) with scope $H_\delta$ iff \begin{itemize} \item[(1)] $M$ is a least branch premouse, and $M \in H_\delta$, \item[(2)] $\Omega$ is a complete $(\delta,\delta)$-iteration strategy for $M$, \item[(3)] $\Omega$ quasi-normalizes well, is internally lift consistent, and has strong hull condensation, and \item[(4)] $(M,\Omega)$ is {\em pushforward consistent}, in that if $s$ is by $\Omega$ and has last model $N$, then $N$ is an lpm, and $\hat\Sigma^N \subseteq \Omega_s$. \end{itemize} \end{definition} \begin{definition}\label{mousepair} $(M,\Omega)$ is a {\em mouse pair} iff $(M,\Omega)$ is either a pure extender pair, or an lbr hod pair. \end{definition} According to these definitions, the strategy in a mouse pair with scope $H_\delta$ must be a $(\delta,\delta)$-strategy. \cite{nitcis} demanded only an $(\omega,\delta)$-strategy, so as to avoid having to talk about normalizing infinite stacks. Here we are not avoiding that at all. It is more natural for a strategy with scope $H_\delta$ to act on all stacks in $H_\delta$, that is, to be a $(\delta,\delta)$-strategy. The paradigm case here is a mouse pair $(M,\Omega)$ with scope $\textrm{HC}$, considered in a model of $\mathsf{AD^+}$. One important way to obtain such pairs, assuming $\mathsf{AD^+}$, is to start with a coarse $\Gamma$-Woodin pair $(N^*,\Sigma^*)$. Let $\mathbb{C}$ be a full background construction of one of the two types done in $N^*$, and let $(M,\Omega) = (M_{\nu,k}^\mathbb{C},\Omega_{\nu,k}^\mathbb{C})$ be one of its levels. From the point of view of $N^*$, $(M,\Omega)$ is a mouse pair with scope $H_\delta$, where $\delta$ is the $\Gamma$-Woodin cardinal of $N^*$. But $\Omega$ can be extended to act on all stacks $s \in \textrm{HC}$, because it is induced by $\Sigma^*$, which acts on all stacks in $\textrm{HC}$.
So from the point of view of $V$, $(M,\Omega)$ is a mouse pair with scope $\textrm{HC}$.\footnote{These facts are proved in \cite{nitcis}; see especially Theorems 7.6.2 and 10.4.1. Those proofs deal only with $(\omega,\delta)$-iteration strategies, but they adapt in a routine way to $(\delta,\delta)$-strategies.} In sum: \begin{theorem}(\cite{nitcis}) Assume $\mathsf{AD^+}$, let $(N^*,\delta,S,T,\lhd,\Sigma^*)$ be a coarse $\Gamma$-Woodin tuple, and let $\mathbb{C}$ be a $\lhd$-construction (either pure extender or least branch) in $L[N^*,S,T,\lhd]$ with all $F_\nu^{\mathbb{C}} \in N^*$; then $\mathbb{C}$ does not break down. Moreover, letting $M = M_{\nu,k}^\mathbb{C}$, and letting $\Omega$ be the canonical extension of $\Omega_{\nu,k}^{\mathbb{C}}$ to all $M$-stacks in $\textrm{HC}$, $(M,\Omega)$ is a mouse pair, with scope $\textrm{HC}$. \end{theorem} The strategy in a mouse pair is actually determined by its action on countable normal trees: \begin{theorem}\label{uniqueextension} Let $\delta$ be regular, and let $(M,\Sigma)$ and $(M,\Omega)$ be mouse pairs with scope $H_\delta$. Suppose that $\Sigma$ and $\Omega$ agree on all countable $\lambda$-separated trees; then $\Sigma = \Omega$. \end{theorem} This is proved as Lemma 7.6.5 in \cite{nitcis}, for the slightly weaker notion of mouse pair used in that book. The same proof yields \ref{uniqueextension}. In a similar vein, one can show that a strategy for a pure extender premouse that behaves well on normal trees can be extended to the strategy component of a pure extender pair.\footnote{There seems to be no way to show that the extended strategy is pushforward consistent, so these strategy extension results do not seem to help one to construct least branch hod pairs.} One need only assume that every pseudo-hull of a normal tree by $\Sigma$ is by $\Sigma$. The following theorem, which combines work of Schlutzenberg and Siskind, captures this fact.
\begin{theorem} (\cite{farmer}, \cite{associativity})\label{strategyextension} Let $M$ be a countable premouse, and $\Sigma$ an $\omega_1 +1$-iteration strategy for $M$. Suppose that every pseudo-hull of a countable tree by $\Sigma$ is by $\Sigma$; then there is a unique $(\omega_1,\omega_1)$-iteration strategy $\Sigma^*$ for $M$ such that \begin{itemize} \item[(1)] $\Sigma \restriction \textrm{HC} \subseteq \Sigma^*$, \item[(2)] $\Sigma^*$ quasi-normalizes well, and \item[(3)] $\Sigma^*$ has strong hull condensation. \end{itemize} \end{theorem} That such a strategy $\Sigma$ extends uniquely to a strategy for finite stacks of normal trees that normalizes well was proved independently by Schlutzenberg and Steel. (See \cite{nitcis}, Theorem 7.3.11.) The extension to countable stacks requires significantly more work, and is due to Schlutzenberg. (See \cite{farmer}.) We shall outline the proof in the next section. That the resulting strategy $\Sigma^*$ has strong hull condensation is due to Siskind. (See \cite{associativity}.) \section{Meta-iteration trees} The system $\langle \tree{V}_\xi, F_\eta, \leq_\tree{U}\,|\, \xi,\eta+1<\text{lh} (\tree{U})\rangle$ that arises in the construction of $V(\mathcal{T},\mathcal{U})$ is a kind of tree of iteration trees, with tree-embeddings between tree-order related nodes. This perspective is used in \cite{nitcis} and abstracted in \cite{farmer}, \cite{jensen} to their notions of \lq\lq factor trees of inflations" and \lq\lq insertion iterations", respectively. Here, we isolate a closely related abstraction, and call the resulting system a \textit{meta-iteration tree}, or \textit{meta-tree}. \subsection{The definition} \begin{definition}\label{meta-trees} Let $\mathcal{S}$ be a plus tree on a premouse $M$.
A \textit{meta-iteration tree on $\mathcal{S}$} is a system \[ \mtree{S}=\langle \text{lh} (\mtree{S}), \leq_\mtree{S}, \{\tree{S}_\xi\}_{\xi<\text{lh} (\mtree{S})}, \{F_\xi\}_{\xi+1<\text{lh} (\mtree{S})}, \{\Phi^{\eta,\xi}\}_{\eta\leq_\mtree{S} \xi}\rangle \] such that $\mathcal{S}_0 = \mathcal{S}$, and \begin{enumerate} \item $\text{lh}(\mtree{S})$ is an ordinal and $\leq_\mtree{S}$ is a tree-order on $\text{lh}(\mtree{S})$; \item for all $\zeta\leq_\mtree{S}\eta\leq_\mtree{S}\xi<\text{lh}(\mtree{S})$, \begin{enumerate} \item $\tree{S}_\xi$ is a plus tree on $M$ of successor length, \item if $\xi+1<\text{lh} (\mtree{S})$, $F_\xi^-$ is an extender on the sequence of $M_\infty^{\tree{S}_\xi}$, \item $\Phi^{\eta,\xi}$ is a partial extended tree embedding from $\tree{S}_\eta$ into $\tree{S}_\xi$, \item $\Phi^{\zeta,\xi}=\Phi^{\eta,\xi}\circ\Phi^{\zeta,\eta}$; \end{enumerate} \item (Normality) for $\xi+1<\text{lh}(\mtree{S})$, letting $\alpha_\xi=\alpha_0(\tree{S}_\xi, F_\xi)$, \begin{enumerate} \item for all $\eta<\xi$, $\text{lh} (F_\eta)<\text{lh} (F_\xi)$, \item for $\eta=\mtree{S}\text{-pred}(\xi+1)$, \begin{enumerate} \item $\eta$ is least such that $\text{crit} (F_\xi)<\lambda(F_\eta)$, \item $V(\tree{S}_\eta,\tree{S}_\xi,F_\xi)\upharpoonright \alpha_\xi+2\subseteq\tree{S}_{\xi+1}\subseteq V(\tree{S}_\eta,\tree{S}_\xi,F_\xi)$, \item $\Phi^{\eta,\xi+1}= \Phi^{V(\tree{S}_\eta,\tree{S}_\xi,F_\xi)}$ (as a partial extended tree embedding from $\tree{S}_\eta$ into $\tree{S}_{\xi+1}$); \end{enumerate} \end{enumerate} \item for limit $\lambda<\text{lh}(\mtree{S})$, $b=[0,\lambda)_\mtree{S}$ is a cofinal subset of $\lambda$ and there is a tail $c\subseteq b$ such that \begin{enumerate} \item for all $\eta,\xi\in c$ with $\eta\leq \xi$, $\Phi^{\eta,\xi}$ is total, \item $\lim\langle \tree{S}_\xi,\Phi^{\eta,\xi}\,|\, \eta\leq_\mtree{S}\xi\in c\rangle$ is wellfounded, \item $\lim\langle \tree{S}_\xi,\Phi^{\eta,\xi}\,|\, \eta\leq_\mtree{S}\xi\in c\rangle\upharpoonright \sup \{
\alpha_\xi\,|\, \xi<\lambda\}\subseteq\tree{S}_\lambda \subseteq \lim\langle \tree{S}_\xi, \Phi^{\eta,\xi}\,|\,\eta\leq_\mtree{S}\xi\in c\rangle,$ and \item for all $\eta\in c$, $\Phi^{\eta,\lambda}$ is the direct limit extended tree embedding and for $\eta\in b\setminus c$, $\Phi^{\eta,\lambda} =\Phi^{\xi, \lambda}\circ \Phi^{\eta,\xi},$ where $\xi=\min c$. \end{enumerate} \end{enumerate} \end{definition} For a meta-tree $\mtree{S}$ and a branch $b$ of $\mtree{S}\upharpoonright\gamma$, we let \[ \lim_b(\mtree{S}\upharpoonright\gamma) = \lim\langle \tree{S}_\xi,\Phi^{\eta,\xi}\,|\,\eta\leq_\mtree{S}\xi\in c\rangle, \] where $c$ is any tail of $b$ where the $\Phi^{\eta,\xi}$ are total (if there is such a $c$). We also let \[ \tree{S}_\gamma^+= V(\tree{S}_\eta,\tree{S}_\xi, F_\xi)\text{ if } \gamma\text{ is a successor }\xi+1, \] where $\eta=\mtree{S}\text{-pred}(\xi+1)$, and \[ \tree{S}_\gamma^+ = \lim_b(\mtree{S}\upharpoonright\gamma)\text{ if }\gamma \text{ is a limit}, \] where $b=[0,\gamma)_\mtree{S}$. We also put $\alpha_\xi^\mtree{S}= \alpha_0(\tree{S}_\xi, F_\xi)$ and $\beta^\mtree{S}_\xi= \beta(\tree{S}_\xi, F_\xi)$. A meta-tree $\mtree{S}$ \textit{drops along} $(\eta,\xi]_\mtree{S}$ iff for some $\gamma\in (\eta,\xi]_\mtree{S}$, either $\gamma$ is a successor, $\lambda+1$, and for $\zeta=\mtree{S}\text{-pred}(\lambda+1)$, $V(\tree{S}_\zeta,\tree{S}_{\lambda},F_{\lambda})$ is in the dropping case, or $\tree{S}_\gamma\subsetneq \tree{S}_\gamma^+$. We call the former way of dropping a \textit{necessary drop} and the latter way of dropping a \textit{gratuitous drop}. When $\eta=\mtree{S}\text{-pred} (\xi+1)$ and $V(\tree{S}_\eta, \tree{S}_\xi, F_\xi)$ is in the dropping case we'll say that $[\eta,\xi+1)_\mtree{S}$ \textit{is a necessary drop} or that $\eta$-to-$\xi+1$ \textit{is a necessary drop in} $\mtree{S}$. We shall use other natural variants of this terminology with their obvious meaning (e.g. \lq\lq$\eta$-to-$\xi$ drops in $\mtree{S}$").
A meta-tree $\mtree{S}$ is \textit{simple} iff it has no gratuitous drops. We write $\Phi^{\mtree{S}}_{\eta,\xi}$ for the tree embedding from $\mathcal{S}_\eta$ to $\mathcal{S}_\xi$ given by $\mtree{S}$, when it exists, that is, when $\mtree{S}$ does not drop along $(\eta,\xi]_\mtree{S}$. We write $\Phi^{\mtree{S}}$ for $\Phi^{\mtree{S}}_{0,\xi}$, where $\text{lh}(\mtree{S})=\xi+1$, if this exists. Here are a few examples of meta-trees. The first is familiar: normal plus trees can be viewed as simple meta-trees with the same tree-order and exit extenders. \begin{example}\label{inseration1} Let $\tree{T}$ be a normal plus tree. Let $\tree{T}_\xi= \tree{T}\upharpoonright \xi+1$ and $F_\xi=E^\tree{T}_\xi$; then \[ \mtree{T}=\langle \tree{T}_\xi, \Phi^{\xi,\eta}, F_\zeta\,|\, \xi,\eta,\zeta+1<\text{lh} (\tree{T}), \, \xi\leq_\tree{T}\eta\rangle \] is a meta-tree with underlying tree structure $\tree{T}$ (i.e. $\text{lh} (\mtree{T})=\text{lh} (\tree{T})$ and $\leq_\mtree{T}=\leq_\tree{T}$). This is a meta-tree since for $\eta=\tree{T}\text{-pred}(\xi+1)$, we have \begin{align*} \tree{T}_{\xi+1} & =\tree{T}_\xi{}^\frown\langle E^\tree{T}_{\xi}\rangle\\ & =V(\tree{T}_\eta,\tree{T}_\xi,F_\xi). \end{align*} Notice that we haven't explicitly defined the tree embeddings of $\mtree{T}$. As remarked above, they are uniquely determined by the extenders $F_\zeta$ and the tree order $\leq_{\tree{T}}$. One can check that for $\eta=\tree{T}\text{-pred} (\xi+1)$, the (extended) tree embedding $\Phi^{\eta,{\xi+1}}$ is just the unique extended tree embedding with associated $u$-map given by \begin{equation*} u(\zeta) = \begin{cases} \zeta & \text{if\,\,} \zeta<\eta, \\ \xi+1 & \text{if\,\,} \zeta=\eta.
\end{cases} \end{equation*} \end{example} \begin{definition}\label{meattreeonM} Let $M$ be a premouse; then a {\em meta-iteration tree on $M$} is a meta-iteration tree on $\mathcal{S}$, where $\mathcal{S}$ is the trivial iteration tree of length 1 on $M$. \end{definition} Because we have allowed gratuitous dropping, we have another familiar example. \begin{example} Let $\tree{S}$ and $\tree{T}$ be normal trees of successor length with $\tree{T}$ a normal extension of $\tree{S}$; say $\tree{T}= \langle M_\xi, E_\xi,\leq_\tree{T}\rangle$ and $\tree{S}=\tree{T}\upharpoonright\gamma+1$. We obtain a gratuitously dropping meta-tree $\mtree{S}$ on $\tree{S}$ with trees $\tree{S}_\xi = \tree{T}\upharpoonright\gamma+1+\xi$ and extenders $F_\xi= E_{\gamma+\xi}$ (in particular, $\tree{S}_0= \tree{S}$, and the last tree of $\mtree{S}$ is $\tree{T}$). Note that at every step here we are letting $\tree{S}_{\xi+1}= \tree{S}_\xi{}^\frown \langle F_\xi\rangle$, which will often be a gratuitous drop. \end{example} The more important example of meta-trees comes from quasi-normalization. \begin{example} Using our notation above for the quasi-normalization of a maximal $M$-stack $\langle \tree{T},\tree{U}\rangle$, with $\tree{U}$ normal, \[ \mtree{V}(\tree{T},\tree{U}) = \langle \tree{V}_\xi, \Phi^{\xi,\eta}, F_\zeta\,|\,\xi,\eta,\zeta+1<\text{lh} (\tree{U}), \, \xi\leq_\tree{U} \eta\rangle \] is a meta-tree with underlying tree structure $\tree{U}$. \end{example} We call a meta-tree \textit{countable} if it has countable length and all of its component trees are countable. Clause (3) of Definition \ref{meta-trees} requires that meta-trees be weakly normal, in the sense that the exit extenders $F_\xi$ increase in length, and are used to inflate the earliest possible tree. That $\mtree{V}(\tree{T},\tree{U})$ is a meta-tree, then, makes use of the normality of $\tree{U}$. We shall also need to consider stacks of meta-trees.
\begin{definition}\label{stack of meta-trees} Let $\mathcal{T}$ be a plus tree on $M$. A \textit{meta-stack on $\mathcal{T}$} is a sequence $s = \langle \mtree{S}^\xi\,|\,\xi<\delta\rangle$ of meta-trees $\mtree{S}^\xi$ of successor lengths, with associated normal trees $\mathcal{T}_\xi$ on $M$, such that $\mathcal{T}_0 = \mathcal{T}$, and for all $\xi < \delta$ \begin{itemize} \item[(a)] $\mtree{S}^\xi$ is on $\mathcal{T}_\xi$, \item[(b)] if $\xi+1<\delta$, then $\mathcal{T}_{\xi+1}$ is the last tree of $\mtree{S}^\xi$, and \item[(c)] if $\xi$ is a limit ordinal, then $\mathcal{T}_\xi$ is the direct limit of the $\mathcal{T}_\alpha$ for $\alpha < \xi$ sufficiently large, under the tree embeddings $\Phi^{\mtree{S}^\alpha} \colon \mathcal{T}_\alpha \to \mathcal{T}_{\alpha+1}$. \end{itemize} If $\mathcal{T}$ is the trivial tree of length 1, then we say that $s$ is a meta-stack on $M$. \end{definition} Of course, if $s$ is a meta-stack on $\mathcal{T}$, and $\mathcal{T}$ is on $M$, then $\langle \mathcal{T} \rangle^\frown s$ is a meta-stack on $M$. We emphasize that meta-trees themselves must be weakly normal: stacks of meta-trees are \textit{not} meta-trees, in general. \subsection{Meta-iteration strategies} Let $\mathcal{S}$ be a normal tree on $M$, and $\theta \in \textrm{OR}$. In the {\em meta-iteration game} $G(\mathcal{S},\theta)$, players {\rm I} and {\rm II} cooperate to produce a meta-iteration tree $\mtree{S}=\langle \text{lh} (\mtree{S}), \leq_\mtree{S}, \{\tree{S}_\xi\}_{\xi<\text{lh} \mtree{S}}, \{F_\xi\}_{\xi+1<\text{lh} \mtree{S}}, \{\Phi^{\eta,\xi}\}_{\eta\leq_\mtree{S} \xi}\rangle$ on $\mathcal{S}$. Player {\rm I} plays the extenders $F_\xi$, and decides whether to drop gratuitously, and if so, how far. Player {\rm II} must play cofinal wellfounded branches at limit ordinals, which are then used to extend $\mtree{S}$. If {\rm II} fails to do this, or if some $\mathcal{S}_{\xi+1}$ has an illfounded model, then {\rm I} wins.
If {\rm I} does not win in the first $\theta$ moves, then {\rm II} wins. Clearly, if $\text{lh} (\mathcal{S}) =1$, then $G(\mathcal{S},\theta)$ is equivalent to the usual normal iteration game $G(M,\theta)$. For $\lambda > 1$, $G(\mathcal{S},\lambda,\theta)$ is the corresponding variant of $G(M,\lambda,\theta)$. In a play of $G(\mathcal{S},\lambda,\theta)$ in which {\rm II} has not lost, the output is a stack of meta-trees $\langle \mtree{S}_\xi \mid \xi < \lambda \rangle$, with $\mtree{S}_0$ being on $\mathcal{S}$, and each meta-tree $\mtree{S}_\xi$ having length $< \theta$. \begin{definition} Let $\tree{S}$ be a normal tree on $M$. A \textit{$\theta$-iteration strategy for $\tree{S}$} is a winning strategy for {\rm II} in $G(\mathcal{S},\theta)$. A \textit{$(\lambda,\theta)$-iteration strategy} is a winning strategy for {\rm II} in $G(\mathcal{S},\lambda,\theta)$. \end{definition} We shall sometimes call such strategies {\em meta-strategies}. Clearly, if $\mathcal{S}$ is the trivial tree of length 1 on $M$, then a meta-strategy for $\mathcal{S}$ of whatever type is equivalent to an ordinary iteration strategy for $M$ of the corresponding type. So it makes sense to speak of meta-strategies for $M$. The following is the main theorem on the existence of meta-strategies. The theorem is essentially a generalization, due to Jensen, of Schlutzenberg's theorem that normal iterability implies stack iterability. (See \ref{strategyextension} above.) The main ideas occur in Schlutzenberg's work. \begin{theorem}\label{meta-iterability} Suppose that $M$ is a countable premouse and $\Sigma$ is an $\omega_1+1$ strategy for $M$ with strong hull condensation.
Then there is a unique $(\omega_1,\omega_1)$-meta-strategy $\Sigma^*$ for $M$ such that for any stack $\langle \mtree{S}_\xi \mid \xi < \lambda \rangle$, \[ \langle \mtree{S}_\xi \mid \xi < \lambda \rangle \text{ is by }\Sigma^* \Leftrightarrow\text{$\forall \xi < \lambda$ (every tree occurring in $\mtree{S}_\xi$ is by }\Sigma). \] \end{theorem} We call $\Sigma^*$ the {\em meta-strategy induced by $\Sigma$}. Note that the restriction of $\Sigma$ to countable normal trees is essentially the same as the restriction of $\Sigma^*$ to meta-trees on $M$. It follows from Theorem \ref{meta-iterability} that, assuming $\mathsf{AD^+}$, whenever $(M,\Sigma)$ is a mouse pair with scope $\textrm{HC}$, then $\Sigma$ induces an $(\omega_1,\omega_1)$-meta-strategy $\Sigma^*$ for $M$. Moreover, whenever $\mathcal{S}$ is a plus tree by $\Sigma$, the tail strategy $\Sigma^*_{\mathcal{S}}$ is an $(\omega_1,\omega_1)$-meta-strategy for $\mathcal{S}$. We shall outline a proof of Theorem \ref{meta-iterability} now. A full proof for the normal tree context appears in \cite{associativity}. \begin{lemma}\label{meta-strategy lemma} Let $M$ be a countable premouse and $\Sigma$ an $\omega_1+1$ strategy for $M$ with strong hull condensation. Suppose $\tree{T}$ is a countable plus tree by $\Sigma$ of successor length, and $\mtree{S}=\langle \tree{S}_\xi,F_\xi,\ldots\rangle$ is a meta-tree on $\tree{T}$ of limit length $\leq \omega_1$ such that for all $\xi<\text{lh} (\mtree{S})$, $\tree{S}_\xi$ is by $\Sigma$. Let $\delta(\mtree{S})= \sup \{\alpha_\xi+1\,|\,\xi<\text{lh} (\mtree{S})\}$. Then there is a unique branch $b$ of $\mtree{S}$ such that $(\lim _b\mtree{S})\upharpoonright\delta(\mtree{S})+1$ is by $\Sigma$. Moreover, for this $b$, $\lim_b \mtree{S}$ is wellfounded, and is by $\Sigma$ whenever $\text{lh} (\mtree{S})<\omega_1$. \end{lemma} \begin{proof}[Proof sketch.]
The proof resembles the analysis of branches of $W(\tree{T},\tree{U})$ when $\tree{U}$ has limit length from section 6.6 of \cite{nitcis}. Let $\mathcal{W}$ be the plus tree of length $\delta = \delta(\mtree{S})$ such that for all $\xi < \delta$, $\mathcal{W} \upharpoonright \xi = \mathcal{S}_\eta \upharpoonright \xi$ for all sufficiently large $\eta$. Let $a = \Sigma(\mathcal{W})$. As in \S6.6 of \cite{nitcis}, we can decode from $a$ a branch $b$ of $\mtree{S}$\footnote{This can also be found in \cite{jensen} for the corresponding notion of meta-tree.}. The condensation property of $\Sigma$ implies that if $\mtree{S}$ is countable, then $\mathcal{W}_b = \lim_b \mtree{S}$ is by $\Sigma$. We are left to show that if $\text{lh} (\mtree{S})=\omega_1$, then all models in $\mathcal{W}_b$ are wellfounded. This follows by a simple Skolem hull argument, using again the condensation property of $\Sigma$. \qed \end{proof} Notice that if we started with $\Sigma$ a strategy for countable trees and $\mtree{S}$ of countable length with the properties in the hypothesis of the lemma, the conclusion still holds. Lemma \ref{meta-strategy lemma} tells us that, under its hypotheses on $(M,\Sigma)$, for any plus tree $\mathcal{T}$ by $\Sigma$, the meta-strategy $\Sigma^*_{\mathcal{T}}$ induced by $\Sigma$ is well-defined on meta-trees on $\mathcal{T}$ of length $\le \omega_1$. \begin{definition}\label{sigma*0} Suppose that $M$ is a countable premouse, and $\Sigma$ is an $\omega_1+1$ strategy for $M$ with strong hull condensation. Let $\mathcal{T}$ be a countable tree of successor length that is by $\Sigma$. 
Then $\Sigma^{*,0}_{\mathcal{T}}$ is the $(\omega_1+1)$-meta-strategy for $\mathcal{T}$ given by: if $\mtree{S}$ is a meta-tree on $\tree{T}$ of limit length $\leq \omega_1$ by $\Sigma^{*,0}_{\mathcal{T}}$, then \[ \Sigma^{*,0}_{\mathcal{T}}(\mtree{S}) = b \text{ iff } (\lim _b\mtree{S})\upharpoonright\delta(\mtree{S})+1 \text{ is by $\Sigma$.} \] \end{definition} Lemma \ref{meta-strategy lemma} shows that the putative $(\omega_1,\omega_1)$-meta-strategy $\Sigma^*$ for $M$ defined in the statement of Theorem \ref{meta-iterability} does not break down at successor rounds. For if $\mathcal{T}$ is the last tree of $\mtree{S}^\xi$, then the part of $\Sigma^*$ that is needed to form $\mtree{S}^{\xi+1}$ in the next round is just $\Sigma^{*,0}_{\mathcal{T}}$. So what is left is to show that $\Sigma^*$ does not break down at a limit of rounds. One ingredient here is a theorem on normalizing stacks of meta-trees, due to Schlutzenberg and Siskind.\footnote{The theorem is a variant of Schlutzenberg's theorem on the commutativity of inflation from \cite{farmer}. As stated, it was discovered later but independently by Siskind.} \begin{theorem} [Schlutzenberg, Siskind] \label{meta-tree full norm} Let $M$ be a countable premouse and $\Sigma$ an $\omega_1$-iteration strategy for $M$ with strong hull condensation. Let $\tree{S}$ be a plus tree by $\Sigma$, and $\langle\mtree{S}, \mtree{T}\rangle$ be a stack of (simple) meta-trees on $\tree{S}$ by $\Sigma^*_\tree{S}$ with last tree $\tree{U}$; then there is a (simple) meta-tree $\mtree{U}$ on $\tree{S}$ by $\Sigma^*_\tree{S}$ with last tree $\tree{U}$ such that $\Phi^\mtree{U}=\Phi^\mtree{T}\circ\Phi^\mtree{S}$. Moreover, $\tree{S}$-to-$\tree{U}$ drops in $\mtree{U}$ iff $\tree{S}$-to-$\tree{U}$ drops in $\langle \mtree{S},\mtree{T}\rangle$. \end{theorem} For a proof of the theorem in the context of $\lambda$-tight normal trees, see \cite{associativity}.
The theorem says that $\Sigma^*$ fully normalizes well for meta-stacks of length 2, but \cite{associativity} shows that in fact it does so for arbitrary countable meta-stacks. It is worth noting that the proof is entirely combinatorial; no phalanx comparisons like those we shall use later to prove that the ordinary strategies in mouse pairs fully normalize well are needed. \begin{lemma} \label{meta-tree uniqueness} Suppose $(M,\Sigma)$ is a mouse pair, $\tree{S}$ is a plus tree by $\Sigma$, and $\mtree{S}$, $\mtree{T}$ are simple meta-trees on $\tree{S}$ by $\Sigma^*$ with the same last tree $\tree{T}$; then $\mtree{S}=\mtree{T}$. \end{lemma} \begin{proof} The proof is really the same as the proof that there is a unique normal tree by $\Sigma$ giving rise to any normal $\Sigma$-iterate of $M$. Let $\mtree{S} = \langle \tree{S}_\xi, F_\xi, \ldots\rangle$ and $\mtree{T}=\langle \tree{T}_\xi, G_\xi, \ldots\rangle$ (so $\tree{T}_0=\tree{S}_0=\tree{S}$ and $\tree{T}_{\infty}=\tree{S}_{\infty}=\tree{T}$). We'll verify by induction that $\tree{T}_\xi=\tree{S}_\xi$ and $F_\xi=G_\xi$ for all $\xi<\text{lh}(\mtree{S})=\text{lh}(\mtree{T})$. We've assumed the base case, so now suppose $\tree{S}_\xi=\tree{T}_\xi$ and $F_\eta=G_\eta$ for all $\eta<\xi$. Towards a contradiction, suppose $\text{lh} (F_\xi)< \text{lh} (G_\xi)$. Then $\text{lh}(F_\xi)< \text{lh} (G_\eta)$ for all $\eta\geq\xi$. It follows that $F_\xi^-$ is on the sequence of the last model of $\tree{T}_\eta$ for all $\eta\geq \xi$. In particular, $F_\xi^-$ is on the sequence of the last model of $\tree{T}$. But $F_\xi$ is used in $\tree{T}$ since it is used in $\tree{S}_{\eta}$ for \textit{every} $\eta>\xi$. A contradiction. The same argument shows that we can't have $\text{lh} (G_\xi) < \text{lh} (F_\xi)$, either, so $F_\xi=G_\xi$. Since $\mtree{S}$ and $\mtree{T}$ are simple, we get $\tree{S}_{\xi+1}=\tree{T}_{\xi+1}$.
$\mtree{S}$ and $\mtree{T}$ cannot diverge at a limit stage, because both are by $\Sigma^*$, and are both simple. The argument in the successor case also shows we cannot have $\text{lh}(\mtree{S})<\text{lh}(\mtree{T})$ or vice-versa, so $\mtree{S}=\mtree{T}$, as desired. \qed \end{proof} \begin{remark} If we drop the assumption that the meta-trees are simple, uniqueness can fail. However, the above argument shows that the extender sequences still must be the same. In particular, we get that the partial tree embeddings from $\tree{S}$ into $\tree{T}$ determined by $\mtree{S}$ and $\mtree{T}$ are the same. \end{remark} The remaining ingredient is a comparison theorem for plus trees. It is a variant of a theorem of Schlutzenberg (see \cite{farmer}). For $\lambda$-tight normal trees, it is proved in \cite{associativity}. We shall generalize this theorem in a later section. \begin{theorem} [Tree comparison; Schlutzenberg, Siskind] \label{meta-tree comparison v1} Suppose that $M$ is a countable premouse, and $\Sigma$ is an $\omega_1+1$ strategy for $M$ such that every pseudo-hull of a countable tree by $\Sigma$ is by $\Sigma$. Suppose $\{\tree{S}_i\,|\, i\in \omega\}$ is a set of countable plus trees of successor lengths which are by $\Sigma$; then there is a countable tree $\tree{T}$ by $\Sigma$ and countable simple meta-trees $\mtree{S}^i$ on $\tree{S}_i$ by $\Sigma^*$, each with last tree $\tree{T}$, such that for some $i$, the branch $\tree{S}_i$-to-$\tree{T}$ of $\mtree{S}^i$ does not drop. \end{theorem} The proof is a straightforward modification of comparison by least extender disagreement, the disagreements in question being those between the extender sequences of the last models of the last trees in our current approximation to the $\mtree{S}^i$'s. It is important here that the last trees of those approximations cannot diverge at a limit step, because they are all by $\Sigma$.
Theorem \ref{meta-tree comparison v1} is no longer true if we allow the $\mathcal{S}_i$ to be by different strategies, however nice they are. For example, let $\Sigma$ and $\Omega$ be such that $(M,\Sigma)$ and $(M,\Omega)$ are mouse pairs, and let $\mathcal{S}^\frown b$ be by $\Sigma$, $\mathcal{S}^\frown c$ by $\Omega$, and $b \neq c$. There can be no tree $\mathcal{T}$ that is by both $\Sigma$ and $\Omega$ such that both $\mathcal{S}^\frown b$ and $\mathcal{S}^\frown c$ are pseudo-hulls of $\mathcal{T}$, by strong hull condensation. We can now show that the putative meta-strategy $\Sigma^*$ described in Theorem \ref{meta-iterability} does not break down at limit rounds. \begin{lemma}\label{limitcase} Suppose that $M$ is a countable premouse, and $\Sigma$ is an $\omega_1+1$ strategy for $M$ such that every pseudo-hull of a countable normal tree by $\Sigma$ is also by $\Sigma$. Let $\lambda < \omega_1$ be a limit ordinal, and $\mtree{S} = \langle \mtree{S}^\xi\,|\,\xi< \lambda\rangle$ be a meta-stack on $M$ such that for all $\xi < \lambda$, $\mtree{S}^\xi$ is countable, and all trees occurring in $\mtree{S}^\xi$ are by $\Sigma$. Let \[ \mathcal{T}_\xi = \text{ first tree in } \mtree{S}^\xi. \] Then \begin{itemize} \item[(a)] for all sufficiently large $\xi<\eta < \lambda$, $\Phi^{\mtree{S}}_{\xi,\eta}$ exists, and \item[(b)] letting $\mathcal{T}$ be the direct limit of the $\mathcal{T}_\xi$, for $\xi < \lambda$ sufficiently large, under the $\Phi^{\mtree{S}}_{\xi,\eta}$, we have that $\mathcal{T}$ is by $\Sigma$. \end{itemize} \end{lemma} \begin{proof} Applying the tree comparison theorem (Theorem \ref{meta-tree comparison v1}) to $\{\tree{T}_\xi\,|\,\xi<\lambda\}$, we get a countable tree $\mathcal{V}$ which is by $\Sigma$, and for each $\xi<\lambda$ a simple countable meta-tree $\mtree{T}^\xi$ on $\tree{T}_\xi$ by $\Sigma^*$ with last tree $\mathcal{V}$. Moreover, for some $\xi<\lambda$, $\mtree{T}^\xi$ doesn't drop along $\tree{T}_\xi$-to-$ \mathcal{V}$. 
For every $\eta<\lambda$, $\langle \mtree{S}^\eta, \mtree{T}^{\eta+1}\rangle$ is a meta-stack on $\mathcal{T}_\eta$ with last tree $\mathcal{V}$. Applying Theorem \ref{meta-tree full norm} to this stack, we get a meta-tree \[ \mtree{U}^\eta = \mtree{V}(\mtree{S}^\eta, \mtree{T}^{\eta+1}) \] by $\Sigma^*$ on $\tree{T}_\eta$ with last tree $\mathcal{V}$. By the remark following Lemma \ref{meta-tree uniqueness}, $\mtree{U}^\eta$ and $\mtree{T}^\eta$ determine the same partial tree embedding from $\tree{T}_\eta$ into $\mathcal{V}$. (If $\mtree{S}^\eta$ were simple, then we would have $\mtree{U}^\eta=\mtree{T}^\eta$.) Further, if $\mtree{T}^\eta$ does not drop along $\tree{T}_\eta$-to-$\mathcal{V}$, then $\Phi^{\mtree{T}^\eta}$ is total. Since $\Phi^{\mtree{T}^\eta} = \Phi^{\mtree{U}^\eta} = \Phi^{\mtree{T}^{\eta+1}}\circ \Phi^{\mtree{S}^\eta}$, it follows that $\Phi^{\mtree{S}^\eta}$ and $\Phi^{\mtree{T}^{\eta+1}}$ are total as well. So, we get that $\Phi^{\mtree{T}^\eta}=\Phi^{\mtree{T}^\xi}\circ\Phi^{\mtree{S}}_{\eta,\xi}$ for all $\eta\leq\xi<\lambda$ and for all sufficiently large $\eta,\xi$, these are total extended tree embeddings. Fixing $\zeta$ above which these are total, we have that $\lim \langle \tree{T}_\xi, \Phi^{\mtree{S}}_{\eta,\xi}\,|\, \zeta\leq\eta\leq\xi<\lambda\rangle$ exists, and is a pseudo-hull of $\mathcal{V}$. (See Proposition \ref{direct limit prop}.) Since $\mathcal{V}$ is by $\Sigma$, so is $\lim \langle \tree{T}_\xi, \Phi^{\mtree{S}}_{\eta,\xi}\,|\, \zeta\leq\eta\leq\xi<\lambda\rangle$. \qed \end{proof} Lemmas \ref{meta-strategy lemma} and \ref{limitcase} clearly combine to give our variant of Schlutzenberg's meta-iterability theorem, Theorem \ref{meta-iterability}. \begin{remark} We shall show in Theorem \ref{induced strategy theorem} that, assuming $\mathsf{AD^+}$, if $\Lambda$ is a meta-strategy for a normal tree $\mathcal{T}$ on $M$, and $\Lambda$ has a certain Dodd-Jensen property, then $\Lambda$ is the meta-strategy induced by some ordinary strategy for $M$.
So meta-trees and strategies are not something fundamentally new. They are rather a useful way of organizing the construction of ordinary iteration trees by an ordinary iteration strategy.\end{remark} As an immediate corollary to \ref{meta-iterability}, we get Schlutzenberg's part of Theorem \ref{strategyextension}. \begin{theorem}[Schlutzenberg] \label{stack iterability ii} Let $M$ be a premouse and $\Sigma$ an $\omega_1+1$ strategy for $M$ such that every pseudo-hull of a countable plus tree by $\Sigma$ is by $\Sigma$; then there is a unique extension of $\Sigma$ to a strategy for countable stacks of countable trees which quasi-normalizes well. \end{theorem} \begin{proof} Let $\Sigma^*$ be the $(\omega_1,\omega_1)$-meta-strategy for $M$ induced by $\Sigma$. We define an ordinary $(\omega_1,\omega_1)$-strategy for $M$, which we call $\Omega$, as follows. We associate inductively to any stack \[ t = \langle \mathcal{T}_\xi \mid \xi < \alpha \rangle \] by $\Omega$ a meta-stack \[ \mtree{S} = \langle \mtree{S}^\xi \mid \xi < \alpha \rangle \] that is by $\Sigma^*$. For $\xi < \alpha$, let $P_\xi$ be the base model of $\mathcal{T}_\xi$. Let $\mtree{S}^0$ be $\mathcal{T}_0$, considered as a meta-tree on $P_0 = M$. For $\xi > 0$, let \[ \mathcal{U}_{\xi} = \text{ last tree in $\mtree{S}^\xi$}, \] and for $\lambda$ a limit, \[ \mathcal{U}^0_\lambda = \text{ last tree in $\lim_{\xi<\lambda}\mtree{S}^\xi$}, \] where the limit is taken under the tree embeddings of $\mtree{S}$. So $\mathcal{U}_0 = \mathcal{T}_0$. Let $Q_0=P_0$, and \[ Q_{\xi+1} = \text{last model of $\mathcal{U}_\xi$,} \] and if $\lambda$ is a limit ordinal, \[ Q_\lambda = \text{last model of $\mathcal{U}^0_\lambda$}. \] So $Q_1= P_1$. We maintain inductively that we have nearly elementary maps \[ \sigma_\xi \colon P_\xi \to Q_\xi \] that commute with the maps of $t$ and $\mtree{S}$. 
(That is, if $i \colon P_\xi \to P_\eta$ is a map of $t$, then $\Phi^{\mtree{S}}_{\xi,\eta}$ yields a map $j \colon Q_\xi \to Q_\eta$, and $\sigma_\eta \circ i = j \circ \sigma_\xi$.) $\sigma_0 = \sigma_1 = \mathrm{id}$. Let $\xi > 0$, and suppose that the stack $t \restriction \xi$ is by $\Omega$, and that the associated meta-stack $\mtree{S} \restriction \xi$ has been constructed with the properties above. If $\xi$ is a limit ordinal, then because $\mtree{S} \restriction \xi$ is by $\Sigma^*$, $\mathcal{U}^0_\xi$ is by $\Sigma$, and $Q_\xi$ exists. Moreover we have $\sigma_\xi \colon P_\xi \to Q_\xi$ by our commutativity hypothesis. We define the tail strategy $\Omega_{t \restriction \xi, P_\xi}$ for normal trees by \[ \mathcal{V} \text{ is by }\Omega_{t \restriction \xi, P_\xi} \Leftrightarrow \mtree{V}(\mathcal{U}^0_\xi, \sigma_\xi\mathcal{V}) \text{ is by }\Sigma^*. \] Equivalently, $\Omega$ plays round $\xi$ by pulling back under $\sigma_\xi$ the strategy $\Psi_{\mathcal{U}^0_\xi,Q_\xi}$, where $\Psi$ is the unique extension of $\Sigma$ to stacks of length two. Playing round $\xi$ this way, $\Omega$ does not lose, and we have $\mathcal{T}_\xi$ as the result of our play. Set \[ \mtree{S}^\xi = \mtree{V}(\mathcal{U}^0_\xi,\sigma_\xi \mathcal{T}_\xi). \] $\mtree{S}^\xi$ is a meta-tree on $\mathcal{U}^0_\xi$, and $\mtree{S} \restriction \xi+1$ is by $\Sigma^*$. Moreover, the last tree in $\mtree{S}^\xi$ is $\mathcal{U}_{\xi} = V(\mathcal{U}^0_\xi,\sigma_\xi\mathcal{T}_\xi)$, and so we have a quasi-normalization map $\tau$ from the last model $R$ of $\sigma_\xi\mathcal{T}_\xi$ to $Q_{\xi+1}$. We also have a copy map $\pi$ from $P_{\xi+1}$, which is the last model of $\mathcal{T}_\xi$, to $R$. We then set $\sigma_{\xi+1} = \tau \circ \pi$. Our induction hypotheses still hold. The case $\xi = \gamma+1$ is completely parallel.
We have $t(\gamma)$ with last model $P_{\gamma+1}$, $\mtree{S}^\gamma$ with last tree $\mathcal{U}_\gamma$, whose last model is $Q_{\gamma+1}$, and $\sigma_{\gamma+1} \colon P_{\gamma +1} \to Q_{\gamma+1}$ already. We put \[ \mathcal{V} \text{ is by }\Omega_{t \restriction \gamma +1, P_{\gamma+1}} \Leftrightarrow \mtree{V}(\mathcal{U}_\gamma, \sigma_{\gamma+1}\mathcal{V}) \text{ is by }\Sigma^*, \] and \[ \mtree{S}^{\gamma+1} = \mtree{V}(\mathcal{U}_\gamma,\sigma_{\gamma+1}\mathcal{T}_{\gamma+1}), \] and so on. \qed \end{proof} \subsection{Copying meta-trees} Some of the basic facts about meta-trees can be understood through the analogy: \begin{align*} \text{premice} &\leftrightsquigarrow \text{ iteration trees}\\ \text{iteration trees} &\leftrightsquigarrow \text{meta-trees}\\ \text{strategies} & \leftrightsquigarrow \text{meta-strategies}\\ \text{elementary embeddings} &\leftrightsquigarrow \text{tree embeddings} \end{align*} Notice that one might have found the tree comparison theorem (Theorem \ref{meta-tree comparison v1}) this way: the theorem says any two iteration trees have a common meta-iterate, modulo the condition that we started with trees by the same strategy. Except for this (necessary) restriction, this is an analogue of the usual comparison theorem for premice. In this section we prove a copying theorem that conforms to this analogy. The usual copying construction is about lifting an iteration tree via an elementary embedding. Following the analogy, our result will be about lifting a meta-tree via a tree embedding. The copying construction results from repeated applications of one-step copying, as codified in the Shift Lemma. The Shift Lemma for meta-iteration says that, under the right conditions, we can complete the following diagram. 
\begin{center} \begin{tikzcd} \tree{S} \arrow[r, "\Psi"] & \tree{T} \\ F \arrow[r, "\mapsto", phantom] \arrow[u, "\scriptsize \mathbin{\rotatebox[origin=c]{90}{$\in$}}" ,phantom]& G\arrow[u, "\scriptsize\mathbin{\rotatebox[origin=c]{90}{$\in$}}" ,phantom]\\ \mathcal{U} \arrow[r, "\Pi"] \arrow[d, "F"] & \mathcal{V} \arrow[d,"G"]\\ V(\tree{S},\mathcal{T},F) \arrow[r, "\Gamma", dashed] & V(\mathcal{U},\mathcal{V},G) \end{tikzcd} \end{center} We need enough agreement between $\Psi$ and $\Pi$ that we can indeed complete the diagram. In the case of ordinary premice, the lifting maps must agree on $\text{dom}(F)$. The requirement here is similar, but more complicated. Before we state and prove the Shift Lemma, we isolate a preliminary fact which is essential to the proof. This fact captures a way in which quasi-normalization is better behaved than embedding normalization: if one replaces the $\alpha_0$'s by $\alpha$'s in its statement, it becomes false (in general). In \cite{associativity}, a variant of the Shift Lemma is proved where one uses embedding normalization instead of quasi-normalization, but it is considerably more complicated than our present version because of possible failures of the next lemma for $\alpha$'s. \begin{lemma}\label{key lemma 1} Let $\Psi:\tree{S}\to \tree{T}$ be an extended tree embedding and $F$ an extender such that $F^-$ is on the $M_\infty^\tree{S}$-sequence and $M_\infty^\tree{S}|\text{lh}(F)\trianglelefteq\text{dom}( t_\infty^\Psi)$. Let $G=t_\infty^\Psi(F)$, $\alpha_0=\alpha_0(\tree{S}, F)$, $\alpha_0^*=\alpha_0(\tree{T}, G)$, $\beta=\beta(\tree{S}, F)$, and $\beta^*=\beta(\tree{T},G)$. Then \begin{enumerate} \item[(a)] $\alpha_0^*\in [v^\Psi(\alpha_0), u^\Psi(\alpha_0)]_\tree{T}$ and \item[(b)] $s^\Psi_{\alpha_0, \alpha_0^*}\upharpoonright\text{lh}(F)+1=t_{\alpha_0}^\Psi\upharpoonright\text{lh}(F)+1=t^\Psi_\infty\upharpoonright\text{lh}(F)+1$.
\item[(c)] $\beta^*\in [v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$, and \item[(d)] $s^\Psi_{\beta, \beta^*}\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}=s^\Psi_{\alpha_0, \alpha^*_0}\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}$. \end{enumerate} \end{lemma} \begin{proof} We first work towards verifying (a) and (b). Let $\alpha=\alpha(\tree{S}, F)$ and $\alpha^*=\alpha(\tree{T}, G)$. Since $[\alpha, \alpha_0)$ is contained in the delay interval which begins at $\alpha$ (cf. \cite{nitcis} Lemma 6.7.2), $\alpha_0=\alpha+n$ for some $n\in \omega$ and, by the definition of $\alpha_0$, for every $i<n$, $E_{\alpha+i}^\tree{S}$ is not of plus type and $\hat\lambda(E_{\alpha+i}^\tree{S})<\text{lh}(F)$. The agreement hypotheses between the maps of a tree embedding give that $t_{\alpha+n}^\Psi\upharpoonright\text{lh}(F)+1=t_\infty^\Psi\upharpoonright\text{lh}(F)+1$. In particular, $t_{\alpha+n}^\Psi(F)=G$. So let $j\leq n$ be least such that $t_{\alpha+j}^\Psi(F)=G$. \paragraph{Case 1.} $j<n$. \\ First we'll check that for any $k<n-j$, \[E_{u(\alpha+j+k)}^\tree{T}=t_{\alpha+j+k}^\Psi(E_{\alpha+j+k}^\tree{S}) \text{ and if $k\geq 1$, then } v^\Psi(\alpha+j+k)=u^\Psi(\alpha+j+k).\] Note that since $E_{\alpha+j+k}^\tree{S}$ is not of plus type, this implies that $E_{u^\Psi(\alpha+j+k)}^\tree{T}$ is not of plus type either. For any $i<n$, since $\hat\lambda(E_{\alpha+i}^\tree{S})<\text{lh}(F)$, $u^\Psi(\alpha+i)$ is the least $\xi\in[v^\Psi(\alpha+i), u^\Psi(\alpha+i)]_\tree{T}$ such that $t^\Psi_{\alpha+i}(F)^-$ is on the $M_\xi^\tree{T}$-sequence. (This is because all critical points used along $[v^\Psi(\alpha+i), u^\Psi(\alpha+i)]_\tree{T}$ must be below the current image of $\text{lh}(F)$, since they are below the current image of $\text{lh}(E_{\alpha+i}^\tree{S})$, since we are blowing up this extender along this partial branch.)
For any $i<n$, since $\Psi$ is an extended tree embedding, $v^\Psi(\alpha+i+1)=u^\Psi(\alpha+i)+1$, and $E_{\alpha+i}^\tree{S}$ is not of plus type, either $E_{u(\alpha+i)}^\tree{T}=t_{\alpha+i}^\Psi(E_{\alpha+i}^\tree{S})$ (and so is \textit{not} of plus type) and $s_{\alpha+i+1}^\Psi\upharpoonright\text{lh}(E_{\alpha+i}^\tree{S})+1=t_{\alpha+i}^\Psi\upharpoonright\text{lh}(E_{\alpha+i}^\tree{S})+1$ or else $E_{u(\alpha+i)}^\tree{T}=t_{\alpha+i}^\Psi(E_{\alpha+i}^\tree{S})^+$ (and so is of plus type), $s_{\alpha+i+1}^\Psi\upharpoonright\hat\lambda(E_{\alpha+i}^\tree{S})=t_{\alpha+i}^\Psi\upharpoonright\hat\lambda(E_{\alpha+i}^\tree{S})$, but $s_{\alpha+i+1}^\Psi(\hat\lambda(E_{\alpha+i}^\tree{S}))>t_{\alpha+i}^\Psi(\hat\lambda(E_{\alpha+i}^\tree{S}))$. It follows that for any $i<n$, $\text{lh}(t_{\alpha+i}^\Psi(F))\leq \text{lh}( t_{\alpha+i+1}^\Psi(F))$, with equality obtaining only when $E_{u(\alpha+i)}^\tree{T}=t_{\alpha+i}^\Psi(E_{\alpha+i}^\tree{S})$ (which is \textit{not} of plus type) and if $i<n-1$, $v^\Psi(\alpha+i+1)=u^\Psi(\alpha+i+1)$.\footnote{In the case $i=n-1$, we may still have equality obtaining when $E_{u(\alpha+i)}^\tree{T}=t_{\alpha+i}^\Psi(E_{\alpha+i}^\tree{S})$ but $v^\Psi(\alpha+i+1)<u^\Psi(\alpha+i+1)$.} Since we reach our final image $G$ of $F$ already by $u^\Psi(\alpha+j)$, we get that for all $k$ such that $k<n-j$, $E_{u(\alpha+j+k)}^\tree{T}=t_{\alpha+j+k}^\Psi(E_{\alpha+j+k}^\tree{S})$ and if $k\geq 1$, then $v^\Psi(\alpha+j+k)=u^\Psi(\alpha+j+k)$, as desired. In particular, $u^\Psi(\alpha+j+k)=u^\Psi(\alpha+j)+k$ for all $k< n-j$. This will let us verify that $\{u^\Psi(\alpha+j)+k\mid k<n-j\}$ is contained in a delay interval of $\tree{T}$ and that for all $k<n-j$, $\hat\lambda(E_{u^\Psi(\alpha+j)+k}^\tree{T})<\text{lh}(G)$.
Since $\{\alpha+j+k\mid k<n-j\}$ is contained in a single delay interval in $\tree{S}$ and $\hat\lambda(E_{\alpha+j+k}^\tree{S})<\text{lh}(F)$, this is immediate from the agreement of the $t$-maps using that $E_{u(\alpha+j)+k}^\tree{T}=t_{\alpha+j+k}^\Psi(E_{\alpha+j+k}^\tree{S})$. We have $\alpha^*\leq u^\Psi(\alpha+j)$ by our choice of $j$. The considerations above showed that $u^\Psi(\alpha+j)$ is the least $\xi\in [v^\Psi(\alpha+j), u^\Psi(\alpha+j)]_\tree{T}$ such that $s_{\alpha+j, \xi}^\Psi(F)=G$. It is easy to see that $u^\Psi(\alpha+j)$ is also the least $\xi\in [v^\Psi(\alpha+j), u^\Psi(\alpha+j)]_\tree{T}$ such that $G^-$ is on the $M_\xi^\tree{T}$-sequence and also that $G^-$ is not on the sequence of any $M^\tree{T}_{u^\Psi(\alpha+i)}$ for $i<j$. It follows that $\alpha^*\geq v^\Psi(\alpha+j)$ (as this is trivial if $j=0$). \paragraph{Subcase A.} $\alpha^*=u^\Psi(\alpha+j)$.\\ In this case, our preceding observations give that $\alpha_0^*\geq \sup\{u^\Psi(\alpha+j)+k\mid k<n-j\}=v^\Psi(\alpha+n)$. Since $G$ isn't moved along $[v^\Psi(\alpha+n), u^\Psi(\alpha+n)]_\tree{T}$ we must have that either $v^\Psi(\alpha+n)=u^\Psi(\alpha+n)$ or else $\hat\lambda(E^\tree{T}_{v^\Psi(\alpha+n)})>\text{lh}(G)$ ($\hat\lambda(E^\tree{T}_{v^\Psi(\alpha+n)})<\text{lh}(G)$ would imply $G$ is blown up along $[v^\Psi(\alpha+n), u^\Psi(\alpha+n)]_\tree{T}$, as before, contradicting that $t_{\alpha+n}^\Psi(F)=G$). In the latter case (i.e. that $\hat\lambda(E^\tree{T}_{v^\Psi(\alpha+n)})>\text{lh}(G)$), we get $\alpha^*_0=v^\Psi(\alpha_0)$, giving (a). (b) easily follows, so suppose $v^\Psi(\alpha+n)=u^\Psi(\alpha+n)$. Since $\alpha+n=\alpha_0$, we have that $\alpha+n+1=\text{lh}(\tree{S})$ or else either $\hat\lambda(E_{\alpha+n}^\tree{S})>\text{lh}(F)$ or $E_{\alpha+n}^\tree{S}$ is of plus type.
If $\alpha+n+1=\text{lh}(\tree{S})$ then since $\Psi$ is an extended tree embedding we must have $v^\Psi(\alpha+n)+1=u^\Psi(\alpha+n)+1=\text{lh}(\tree{T})$, so that $\alpha_0^*=v^\Psi(\alpha+n)=u^\Psi(\alpha+n)$ which immediately gives (a) and easily gives (b). In the remaining cases, $E_{u^\Psi(\alpha+n)}^\tree{T}$ must either be of plus type (either $t_{\alpha+n}^\Psi(E_{\alpha+n}^\tree{S})$ if $E_{\alpha+n}^\tree{S}$ is of plus type or possibly $t_{\alpha+n}^\Psi(E_{\alpha+n}^\tree{S})^+$ even if it isn't) or have $\hat\lambda(E_{u^\Psi(\alpha+n)}^\tree{T})>\text{lh}(G)$. So in these cases, too, we get $\alpha_0^*=v^\Psi(\alpha+n)=u^\Psi(\alpha+n)$, giving (a) and (b). \paragraph{Subcase B.} $\alpha^*<u^\Psi(\alpha+j)$.\\ We can run the argument of Subcase A once we know that $E_\xi^\tree{T}$, for every $\xi\in [\alpha^*, u^\Psi(\alpha+j)]$, is not of plus type and has $\hat\lambda(E_\xi^\tree{T})<\text{lh}(G)$. But we already verified that $E_{u(\alpha+j)}^\tree{T}$ cannot be of plus type (or else we'd violate quasi-normality at the next extender) and has $\hat\lambda( E_{u(\alpha+j)}^\tree{T})<\text{lh}(G)$. Since $\tree{T}$ is quasi-normal, every $\xi\in [\alpha^*, u^\Psi(\alpha+j)]$ must have $\hat\lambda(E_\xi^\tree{T})\leq \hat\lambda( E_{u(\alpha+j)}^\tree{T})<\text{lh}(G)$, too. Moreover, as $G^-$ is on the sequence of $M_{\alpha^*}^\tree{T}$ and $M_\infty^\tree{T}$, all of these extenders must have length $>\text{lh}(G)$. But then if any of these $E_\xi^\tree{T}$ is of plus type, the delay interval which begins at $\alpha^*$ would have to end strictly before $E_{u^\Psi(\alpha+j)}^\tree{T}$, forcing $\hat\lambda(E_{u^\Psi(\alpha+j)}^\tree{T})>\text{lh}(G)$, a contradiction. This finishes Case 1. \paragraph{Case 2.} $j=n$.\\ Recall that this means that for all $i<n$, $t^\Psi_{\alpha+i}(F)\neq G$. Let $\xi\in [v^\Psi(\alpha_0), u^\Psi(\alpha_0)]_\tree{T}$ be least such that $s_{\alpha_0, \xi}^\Psi(F)= G$. First suppose $\xi=v^\Psi(\alpha_0)$. Then we must have $n=0$, i.e.
$\alpha_0=\alpha$, or else we'd have $t^\Psi_{\alpha+i}(F)= G$ for some $i<n$ (by considerations similar to those at the start of Case 1). It follows that $\alpha^*=\xi$. But then, since $G$ is in the range of $t_{\alpha_0}^\Psi$, either $u^\Psi(\alpha_0)=v^\Psi(\alpha_0)=\xi$, in which case the argument at the end of Case 1 Subcase A gives $\alpha_0^*=\xi$, or else $\hat\lambda(E_\xi^\tree{T})>\text{lh}(G)$, in which case, again, $\alpha_0^*=\xi$. So if $\xi=v^\Psi(\alpha_0)$, we're done. So for the remainder of the proof, suppose $\xi>v^\Psi(\alpha_0)$. \paragraph{Subcase A.} $\xi$ is a limit ordinal.\\ In this case, we can again show that $\xi=\alpha^*=\alpha_0^*$. Towards a contradiction, suppose $\alpha^*<\xi$. Then the delay interval beginning at $\alpha^*$ must end strictly before $\xi$, so that quasi-normality guarantees that for some $\eta+1\in [v^\Psi(\alpha_0), \xi)_\tree{T}$, $\hat\lambda(E_\eta^\tree{T})>\text{lh}(G)$. But then we must use an extender with critical point $>\text{lh}(G)$ along $[v^\Psi(\alpha_0), \xi)_\tree{T}$, contradicting our choice of $\xi$. So $\xi=\alpha^*$. If $\xi=u^\Psi(\alpha_0)$, then we get $\xi=\alpha_0^*$ by the argument at the end of Case 1 Subcase A. If $\xi<u^\Psi(\alpha_0)$, we must have $\hat\lambda(E_\xi^\tree{T})>\text{lh}(G)$, so that, again, $\xi=\alpha_0^*$. \paragraph{Subcase B.} $\xi$ is a successor ordinal.\\ Let $\xi=\gamma+1$ and $\eta=\tree{T}\text{-pred}(\gamma+1)$. Since $\xi>v^\Psi(\alpha_0)$, $\eta\geq v^\Psi(\alpha_0)$. Since we haven't reached the final image $G$ of $F$ along $[v^\Psi(\alpha_0), u^\Psi(\alpha_0)]_\tree{T}$ by $\eta$, we must have $\text{crit}(E_\gamma^\tree{T})<\text{lh}(G)$ which implies $\hat\lambda(E_\gamma^\tree{T})<\text{lh}(G)$. It follows that $\gamma<\alpha_0^*$, i.e. $\gamma+1\leq\alpha_0^*$. If $\gamma+1=u^\Psi(\alpha_0)$, one can show $\gamma+1=\alpha_0^*$ (by the Case 1 Subcase A argument, again). So suppose $\gamma+1<u^\Psi(\alpha_0)$.
In this case we must again have $\gamma+1=\alpha_0^*$ because $\gamma+1<\alpha_0^*$ is impossible, since it implies that we must next use an extender with critical point $<\text{lh}(G)$, contradicting that we've already reached $G$ as the final image of $F$ at $\gamma+1$. This finishes our verification of (a) and (b).\\ Verifying (c) and (d) is actually much easier and essentially appears in \cite{nitcis}. We just handle the case that $\beta+1<\text{lh}(\tree{S})$ (the remaining case being that $\beta+1=\text{lh}(\tree{S})$, so that actually $\alpha_0=\alpha=\beta$; the argument in this case is basically the same). By definition, $\beta$ is the least $\xi$ such that $\text{dom}(F)\triangleleft M_\xi^\tree{S}|\hat\lambda(E_\xi^\tree{S})$. It is easy to see $\beta\leq \alpha_0$ and that the agreement between the maps of $\Psi$ gives $t_\beta^\Psi\upharpoonright\text{dom}(F)+1=t_{\alpha_0}^\Psi\upharpoonright\text{dom}(F)+1$. It follows that $t_\beta^\Psi(\text{dom}(F))=\text{dom}(G)$. Now let $\gamma\in [v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$ be least such that $s^\Psi_{\beta, \gamma}(\text{dom}(F))=\text{dom}(G)$. We'll show that $\gamma=\beta^*$. We must have that $\text{dom}(G)\triangleleft M_\gamma^\tree{T}|\hat\lambda(E_\gamma^\tree{T})$, since if this were to fail then we must have $\gamma<u^\Psi(\beta)$ and that the extender applied to $M_\gamma^\tree{T}$ along $[v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$ has critical point $\leq\text{crit}(G)$, contradicting that we've reached $\text{dom}(G)$ as the image of $\text{dom}(F)$ at $\gamma$. So $\beta^*\leq \gamma$. It's also easy to verify that $\beta^*\geq v^\Psi(\beta)$ and that $\gamma$ is the least $\xi\in[v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$ such that $\text{dom}(G)\triangleleft M_\xi^\tree{T}|\hat\lambda(E_\xi^\tree{T})$. So if $\beta^*\neq \gamma$, then $\beta^*<\gamma$, so $\gamma$ is a successor ordinal $\xi+1$ with $\xi\geq\beta^*$, and $\tree{T}\text{-pred}(\xi+1)=\eta\geq v^\Psi(\beta)$.
Since $\eta<\gamma$, we must have $\text{crit}(E_\xi^\tree{T})<\text{crit}(s_{\beta, \eta}^\Psi(F))$ (i.e. we are still moving up $\text{dom}(F)$, and so actually $\text{crit}(F)$, along $[v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$). But then $\hat\lambda(E_\xi^\tree{T})<\text{crit}(s_{\beta, \gamma}^\Psi(F))=\text{crit}(G)$, contradicting that $\xi\geq \beta^*$, since then $\text{dom}(G)\triangleleft M_\xi^\tree{T}|\hat\lambda(E_\xi^\tree{T})$ is impossible. So $\gamma=\beta^*$, giving (c) and (d). \qed \end{proof} We now isolate the agreement hypotheses needed for the Shift Lemma in a definition. \begin{definition}\label{metashiftapplies} Let $\Psi:\tree{S}\to \tree{T}$ and $\Pi:\mathcal{U} \to \mathcal{V}$ be extended tree embeddings, $F$ an extender such that $F^-$ is on the $M_\infty^\tree{S}$-sequence, and $G$ an extender such that $G^-$ is on the $M_\infty^\tree{T}$-sequence. We say that \textit{the Shift Lemma applies to $(\Psi,\Pi, F, G)$} iff letting $\beta = \beta(\mathcal{S},F)$ and $\beta^*=\beta(\mathcal{T}, G)$, \begin{enumerate} \item $M_\infty^\tree{S}|\text{lh}(F)\trianglelefteq\text{dom}( t_\infty^\Psi)$ and $G=t_\infty^\Psi(F)$, \item $\Psi\upharpoonright\beta+1\approx \Pi\upharpoonright\beta+1$, \item $\tree{T}\upharpoonright \beta^*+1=\tree{V}\upharpoonright\beta^*+1$, \item $\beta^*\in [v^\Pi(\beta), u^\Pi(\beta)]_\tree{V}$ and $t_{\beta}^\Pi\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}=s_{\beta, \beta^*}^\Pi\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}$, \item if $\beta+1<\text{lh}(\tree{U})$, then $\text{dom}(F) \triangleleft M_\beta^\tree{U}|\text{lh}(E^\mathcal{U}_\beta)$, and \item if $\beta^*+1<\text{lh}(\tree{V})$, then $\text{dom}(G) \triangleleft M_{\beta^*}^\tree{V}|\text{lh}(E^\mathcal{V}_{\beta^*})$.
\end{enumerate} \end{definition} Note that in clause (4) we do not need to say that $\beta^*\in[v^\Psi(\beta), u^\Psi(\beta)]_\tree{T}$ (though this is needed for $s^\Psi_{\beta,\beta^*}$ to make sense), since this is a consequence of Lemma \ref{key lemma 1}. In the ordinary Shift Lemma, the upstairs models must agree up to the common image of $\text{dom}(F)$. Clauses (2)-(4) play an analogous role: they say that the upstairs trees and embeddings agree up to the place in $\mathcal{T}$ where $t_\infty^\Psi(\text{dom}(F))$ has been created. (5) and (6) additionally ensure that $V(\tree{U}, \tree{S}, F)$ and $V(\tree{V}, \tree{T}, G)$ are defined. \begin{lemma}[Shift Lemma]\label{Shift Lemma} Let $\Psi:\tree{S}\to \tree{T}$ and $\Pi:\mathcal{U} \to \mathcal{V}$ be extended tree embeddings, and let $F$ be an extender such that $F^-$ is on the extender sequence of the last model of $\tree{S}$ and $G$ be an extender such that $G^-$ is on the extender sequence of the last model of $\tree{T}$. Let $\alpha_0=\alpha_0(\tree{S}, F)$ and $\alpha^*_0=\alpha_0(\tree{T},G)$. Suppose that the Shift Lemma applies to $(\Psi,\Pi, F, G)$. Then $V(\tree{U},\tree{S},F)$ and $V(\tree{V},\tree{T},G)$ are defined and, letting $\mu$ be the greatest ordinal such that $V(\mathcal{U},\mathcal{S},F)\upharpoonright \mu$ is wellfounded and $\mu^*$ be the greatest ordinal such that $V(\mathcal{V},\tree{T}, G)\upharpoonright\mu^*$ is wellfounded, there is a unique partial tree embedding $\Gamma: V(\mathcal{U},\mathcal{S},F)\upharpoonright\mu \to V(\mathcal{V},\tree{T}, G)\upharpoonright\mu^*$ with maximal domain such that \begin{enumerate} \item $\Gamma\upharpoonright \alpha_0+1\approx \Psi \upharpoonright \alpha_0+1$, \item $u^\Gamma(\alpha_0)=\alpha^*_0$, and \item $\Gamma\circ \Phi^{V(\mathcal{U},\mathcal{S},F)} =\Phi^{V(\mathcal{V},\tree{T}, G)}\circ \Pi$ (on their common domain).
\end{enumerate} Moreover, if $V(\mathcal{V},\tree{T}, G)$ is wellfounded, then $V(\mathcal{U},\mathcal{S}, F)$ is wellfounded and $\Gamma$ is a (total) extended tree embedding from $V(\mathcal{U}, \mathcal{S}, F)$ into $V(\mathcal{V},\mathcal{T}, G)$. If $V(\mathcal{V},\tree{T}, G)$ is wellfounded and also $\Pi$ is non-dropping, then $\Gamma$ is a non-dropping extended tree embedding. \end{lemma} \begin{remark}\label{shift remark} If we assume that $\tree{S},\mathcal{T},\mathcal{U},$ and $\mathcal{V}$ are all by some strategy $\Sigma$ for $M$ with strong hull condensation that quasi-normalizes well, then $V(\mathcal{U},\tree{S}, F)$ and $V(\tree{V},\tree{T}, G)$ are by $\Sigma$, and $\Gamma$ is a total extended tree embedding. \end{remark} \begin{proof} That $V(\tree{U}, {\tree{S}}, F)$ is defined follows from hypotheses (2) and (5) in the definition of when the Shift Lemma applies; that $V(\tree{V},\tree{T},G)$ is defined follows from hypotheses (3) and (6). So all of the work is in identifying $\Gamma$ and proving it is as desired, inductively. At bottom, we are able to do this because the $s$-maps of tree embeddings are given by the ordinary Shift Lemma at successors. Let $\tree{W}=V(\tree{U}, {\tree{S}}, F)\upharpoonright\mu+1$, $\tree{W}^*=V(\tree{V},\tree{T},G)\upharpoonright\mu^*+1$, $\Phi=\Phi^{V(\tree{U}, {\tree{S}}, F)}$, and $\Phi^*=\Phi^{V(\tree{V},\tree{T},G)}$. Notice that $\mu\geq\alpha_0+1$ and $\mu^*\geq \alpha^*_0+1$ since $V({\tree{U}},{\tree{S}}, F)\upharpoonright\alpha_0+1= \tree{S}\upharpoonright\alpha_0+1$ and $V(\tree{V},\tree{T}, G)\upharpoonright\alpha_0^*+1= \tree{T}\upharpoonright\alpha_0^*+1$, so the first models which are possibly illfounded are the new models $M^{V({\tree{U}},{\tree{S}}, F)}_{\alpha_0+1}$ and $M^{V(\tree{V},\tree{T}, G)}_{\alpha_0^*+1}$, which are obtained as ultrapowers by $F$ and $G$, respectively. Now, the $u$-map of $\Gamma$ is totally determined by what we have demanded in (1)-(3).
We must have \begin{equation*} u^\Gamma(\zeta)=\begin{cases} u^{\Psi}(\zeta) & \text{ if } \zeta<\alpha_0 \\ \alpha_0^* & \text{ if } \zeta=\alpha_0\\ u^{\Phi^*\circ \Pi}(\xi) & \text{ if } \zeta>\alpha_0, \text{ where $\xi$ is such that $u^{\Phi}(\xi)=\zeta$}. \end{cases} \end{equation*} Recall that this third case makes sense since $u^{\Phi}$ maps $[\beta,\text{lh}(\tree{U}))$ onto $[\alpha_0+1, \text{lh} (\tree{W}))$ and $u^{\Phi^*\circ\Pi}(\xi)>\alpha_0^*$ for all $\xi$ such that $u^{\Phi}(\xi)>{\alpha_0}$ (since if $u^{\Phi}(\xi)>{\alpha_0}$, then $\xi\geq \beta$, so $u^{\Phi^*\circ \Pi}(\xi)\geq u^{\Phi^*}(\beta)\geq \alpha_0^*+1$). The definition of $u^\Gamma(\zeta)$ just given makes sense for any ordinal $\zeta$, but the actual $u$-map of $\Gamma$ has domain $\{\zeta\mid \zeta< \mu $ and $u^\Gamma(\zeta)< \mu^*\}$ (since the domain can't include any more than this and we want the domain of $\Gamma$ to be maximal). In the course of the proof, we'll show that we can drop the condition \lq\lq$\zeta<\mu$'' from the description of the domain of $u^\Gamma$. Since we had to define $u^\Gamma$ as above and tree embeddings are totally determined by their $u$-map, uniqueness of $\Gamma$ is guaranteed. We just need to check that we can actually find a tree embedding with this $u$-map. This amounts to identifying $s$-maps and $t$-maps that make the relevant diagrams commute. We've stipulated $\Gamma\upharpoonright\alpha_0+1\approx \Psi\upharpoonright\alpha_0 +1$ and $u^\Gamma(\alpha_0)=\alpha_0^*$. By Lemma \ref{key lemma 1} this determines a legitimate extended tree embedding from $\tree{S}\upharpoonright\alpha_0+1$ into $\tree{T}\upharpoonright\alpha_0^*+1$ with last $t$-map $t_{\alpha_0}^\Gamma=s_{\alpha_0, \alpha_0^*}^\Psi$. Since $u^{\Phi}$ maps $[\beta,\text{lh} ({\tree{U}}))$ onto $[\alpha_0+1,\text{lh} ({\tree{W}}))$, we just need to find appropriate $s^\Gamma_{u^{\Phi}_\xi}$ and $t^\Gamma_{u^{\Phi}_\xi}$ by induction on $\xi\in[\beta,\text{lh}({\tree{U}}))$.
We also show that if $u^{\Phi^*\circ \Pi}(\xi)< \mu^*$, then $u^{\Phi}(\xi)< \mu$. We start with the base case.\\ \noindent \textbf{Base case.} $\xi=\beta$.\\ We first want to define $s_{\alpha_0+1}^\Gamma$. For $\Gamma$ to be a tree embedding, $s_{\alpha_0+1}^\Gamma$ must be the copy map associated to $(t_{\alpha_0}^\Gamma, s_{\beta, \beta^*}^\Gamma, E_{\alpha_0}^\tree{W}, E_{\alpha_0^*}^{\tree{W}^*})$. Note that we have $F=E_{\alpha_0}^\tree{W}$, $G=E_{\alpha_0^*}^{\tree{W}^*}$, and by Lemma \ref{key lemma 1}, $t^\Gamma_{\alpha_0}(F)=G$. Since $s^\Gamma_\beta=s^\Psi_\beta$ and $\beta^*\leq \alpha_0^*$, we have $s^\Gamma_{\beta, \beta^*}=s^\Psi_{\beta,\beta^*}$. Lemma \ref{key lemma 1}, again, gives $t^\Gamma_{\alpha_0}\upharpoonright\text{dom}(E_{\alpha_0}^\tree{W})\cup\{\text{dom}(E_{\alpha_0}^\tree{W})\}=s_{\beta,\beta^*}^\Gamma\upharpoonright\text{dom}(E_{\alpha_0}^\tree{W})\cup\{\text{dom}(E_{\alpha_0}^\tree{W})\}$. So the ordinary Shift Lemma does apply to $(t_{\alpha_0}^\Gamma, s_{\beta, \beta^*}^\Gamma, E_{\alpha_0}^\tree{W}, E_{\alpha_0^*}^{\tree{W}^*})$. So we can actually let $s_{\alpha_0+1}^\Gamma$ be the copy map associated to $(t_{\alpha_0}^\Gamma, s_{\beta, \beta^*}^\Gamma, E_{\alpha_0}^\tree{W}, E_{\alpha_0^*}^{\tree{W}^*})$. Here is a diagram for this application of the ordinary Shift Lemma.
\[\begin{tikzcd}[column sep = huge] M^{{\tree{W}}}_{\alpha_0}=M^{{\tree{S}}}_{\alpha_0} \arrow{r}{ s_{\alpha_0,\alpha_0^*}^{\Psi}} & M^{\tree{T}}_{\alpha_0^*}=M^{\tree{W}^*}_{\alpha_0^*}\\ F\arrow[mapsto]{r}{}& G\\ M^{{\tree{W}}}_{\beta}=M^{{\tree{S}}}_{\beta} \arrow{r}{ s_{\beta,\beta^*}^{\Psi}}\arrow{d}{ F} & M^{\tree{T}}_{\beta^*}=M^{\tree{W}^*}_{\beta^*}\arrow{d}{G}\\ M^{{\tree{W}}}_{\alpha_0+1}\arrow[dashed]{r}{s_{\alpha_0+1}^\Gamma} & M^{\tree{W}^*}_{\alpha_0^*+1} \end{tikzcd}\] If $\alpha^*_0+1<\mu^*$, then since $s^\Gamma_{\alpha_0+1}$ is a total elementary embedding from $M_{\alpha_0+1}^{\tree{W}}$ into $M_{\alpha_0^*+1}^{\tree{W}^*}$, $\alpha_0+1<\mu$, too. If $V(\tree{U},\tree{S},F)$ is in the dropping case, we've already defined all of $\Gamma$. So for the remainder of the proof, assume that $V(\tree{U},\tree{S},F)$ is \textit{not} in the dropping case. We need to see that $V(\tree{V},\tree{T},G)$ is not in the dropping case either, or else we may not be able to successfully find $t^\Gamma_{\alpha_0+1}$ (or later maps). First, suppose that $\beta+1=\text{lh}(\tree{U})$. Then $F$ is total over $M_\beta^\tree{U}=M_\beta^\tree{S}=M_\beta^\tree{W}$, i.e. no proper initial segment of $M_\beta^\tree{U}$ beyond $\text{dom}(F)$ projects to or below $\text{crit}(F)$. It follows that for every $\eta\in [v^\Pi(\beta), u^\Pi(\beta)]_\tree{V}$, no level of $M_\eta^\tree{V}$ beyond $s_{\beta, \eta}^\Pi(\text{dom}(F))$ projects to or below $\text{crit}(s_{\beta, \eta}^\Pi(F))$. By hypothesis (4), $\beta^*$ is such an $\eta$ and $s_{\beta, \beta^*}^\Pi(\text{dom}(F))=\text{dom}(G)$. So $V(\tree{V},\tree{T}, G)$ is not in the dropping case either. Now suppose that $\beta+1<\text{lh}(\tree{U})$. Then there is no level $P\triangleleft M_\beta^\tree{U}|\text{lh}(E_\beta^\tree{U})$ beyond $\text{dom}(F)$ which projects across $\text{crit}(F)$.
It follows that there is no level $Q\triangleleft M_{\beta^*}^{\tree{V}}|\text{lh}(s_{\beta,\beta^*}^\Pi(E_\beta^\tree{U}))$ beyond $\text{dom}(G)$ which projects across $\text{crit}(G)$. So the only potential problem would be that some level $Q\triangleleft M_{\beta^*}^{\tree{V}}| \text{lh}(E_{\beta^*}^\tree{V})$ beyond $\text{lh}(s_{\beta,\beta^*}^\Pi(E_\beta^\tree{U}))$ projects across $\text{crit}(G)$. But in this case we must have $\beta^*<u^\Pi(\beta)$ (or else $E_{\beta^*}^\tree{V}=s_{\beta,\beta^*}^\Pi(E_\beta^\tree{U})$) and so, letting $\eta+1$ be the successor of $\beta^*$ in $[v^\Pi(\beta), u^\Pi(\beta)]_\tree{V}$, $\text{crit}(E_\eta^\tree{V})$ cannot be in the interval $[\text{crit}(G), \text{lh}(s_{\beta,\beta^*}^\Pi(E_\beta^\tree{U}))]$ (since nothing here is a cardinal of $M_{\beta^*}^{\tree{V}}| \text{lh}(E_{\beta^*}^\tree{V})$). Since we haven't reached the final image of $E_\beta^\tree{U}$ along $[v^\Pi(\beta), u^\Pi(\beta)]_\tree{V}$, we must have $\text{crit}(E_\eta^\tree{V})<\text{lh}(s_{\beta,\beta^*}^\Pi(E_\beta^\tree{U}))$. So actually $\text{crit}(E_\eta^\tree{V})<\text{crit}(G)$, contradicting hypothesis (4) that $t_\beta^\Pi\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}=s_{\beta,\beta^*}^\Pi\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}$. So we've shown $V(\tree{V},\tree{T}, G)$ is not in the dropping case either. Now, if $\mu^* \leq u^{\Phi^*\circ\Pi}(\beta)$, we stop, so suppose $\mu^*> u^{\Phi^*\circ\Pi}(\beta)$. We must now put \begin{align*} t_{\alpha_0+1}^\Gamma &= \hat\imath^{\tree{W}^*}_{\alpha_0^*+1, u^{\Gamma}_{\beta}}\circ s_{\alpha_0+1}^\Gamma. \end{align*} For this to make sense, we need that $\alpha_0^*+1\leq_{\tree{W}^*}u^\Gamma(\beta)=u^{\Phi^*\circ\Pi}(\beta)$. Hypothesis (4) implies that any extender used along $(\beta^*,u^\Pi(\beta)]_\tree{V}$ has critical point $>\text{crit}(G)$. So since $\Phi^*=\Phi^{V(\tree{V},\tree{T},G)}$, $u^{\Phi^*}$ is tree-order preserving on $[\beta^*,u^\Pi(\beta)]_\tree{V}$.
So $\alpha_0^*+1=u^{\Phi^*}(\beta^*)\leq_{\tree{W}^*}u^{\Phi^*\circ\Pi}(\beta)=u^\Gamma(\beta)$, as desired. We have the following picture. \[\begin{tikzcd}[column sep = huge] M^{{\tree{W}}}_{\beta}=M^{{\tree{U}}}_{\beta} \arrow{r}{ s_{\beta,\beta^*}^{\Pi}}\arrow{d}{ F} & M^{\tree{V}}_{\beta^*}=M^{\tree{W}^*}_{\beta^*}\arrow{d}{G} \arrow {r}{\tree{V}}& M^{\tree{V}}_{u^\Pi_{\beta}}\arrow{d}{t^{\Phi^*}_{u^\Pi_{\beta}}}\\ M^{{\tree{W}}}_{\alpha_0+1}\arrow[dashed]{r}{s_{\alpha_0+1}^\Gamma} & M^{\tree{W}^*}_{\alpha_0^*+1}\arrow{r}{\tree{W}^*} & M^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_{\beta}} \end{tikzcd}\] As part of conclusion (3), we need to see that this diagram commutes. We already have that the left square commutes, by our choice of $s^\Gamma_{\alpha_0+1}$, so we just need to see that the right one does. We may assume $\beta^*<u^\Pi(\beta)$, as otherwise this is trivial. Now since $\Phi^*=\Phi^{V(\tree{V},\tree{T}, G)}$, for every $\zeta>\beta^*$, $u^{\Phi^*}(\zeta)=v^{\Phi^*}(\zeta)$ and so $t_\zeta^{\Phi^*}=s_\zeta^{\Phi^*}$. Letting $\zeta+1$ be the least element of $(\beta^*,u^\Pi(\beta)]_{\tree{V}}$, we can expand the right square as follows. \[\begin{tikzcd}[column sep = huge] M^{\tree{V}}_{\beta^*}\arrow{d}{G} \arrow {r}{E^\tree{V}_\zeta}& M^\tree{V}_{\zeta} \arrow{r}{\tree{V}} \arrow{d}{s_\zeta^{\Phi^*}}& M^\tree{V}_{u^\Pi_{\beta}}\arrow{d}{s_{u^\Pi_{\beta}}^{\Phi^*}}\\ M^{\tree{W}^*}_{\alpha_0^*+1}\arrow{r}{E^{\tree{W}^*}_ {u^{\Phi^*}_{\zeta}}} & M^{\tree{W}^*}_{v^{\Phi^*}_{\zeta}} \arrow{r}{\tree{W}^*}& M^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_{\beta}} \end{tikzcd}\] But these squares commute since $\Phi^*$ is a tree embedding: the left square commutes since $s^{\Phi^*}_\zeta$ is the appropriate copy map and the right square commutes since the $s$-maps of a tree embedding commute with branch embeddings, by definition.
This finishes the base case.\\ \noindent \textbf{Successor case.} $\beta<\xi+1$ and $u^{\Phi^*\circ \Pi}(\xi+1)< \mu^*$.\\ Since $v^{\Phi^*\circ \Pi}(\xi)<\mu^*$, we have $M^{\tree{W}^*}_{v^{\Phi^*\circ \Pi}(\xi)}$ is wellfounded and so $M^{{\tree{W}}}_{v^{\Phi}(\xi)}$ is as well (since $s^\Gamma_{u^{\Phi}_\xi}:M^{{\tree{W}}}_{v^{\Phi}(\xi)}\to M^{\tree{W}^*}_{v^{\Phi^*\circ \Pi}(\xi)}$ is a total elementary embedding). If $\xi>\beta$, then $v^{\Phi}(\xi)= u^{\Phi}(\xi)$, so we have $u^{\Phi}(\xi)<\mu$. So $u^{\Phi}(\xi+1)=v^{\Phi}(\xi+1)=u^{\Phi}(\xi)+1\leq \mu$. If $\xi=\beta$, then we have $u^{\Phi}(\xi)=\alpha_0+1$, so we already had $u^{\Phi}(\xi)\leq \mu$. Let $\eta = {\tree{U}}\text{-pred} (\xi+1)$ and $\eta^*=\tree{V}\text{-pred} (u^\Pi(\xi)+1)$. We have $\eta^*\in[v^\Pi(\eta),u^\Pi(\eta)]_\tree{V}$ since $\Pi$ is a tree embedding. We consider two subcases depending on the critical point of $E^{{\tree{U}}}_{\xi}$. The arguments in each case are basically the same. \paragraph{Subcase A.} $\text{crit} (E^{{\tree{U}}}_{\xi})<\text{crit}(F)$. \\ In this case, $\eta\leq \beta$ and $\eta={\tree{W}}\text{-pred}(u^{\Phi}(\xi)+1)$. We also have that $\text{crit} (E^\tree{V}_{u^\Pi(\xi)})<\text{crit} (G)$, so that $\eta^*\leq \beta^*$ and $\eta^*=\tree{W}^*\text{-pred}(u^{\Phi^*\circ \Pi}(\xi)+1)$, as well. We have the following picture, in the case that we don't drop.
\[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=1em,column sep=1em,minimum width=1em] { M^{{\tree{U}}}_\xi & {} & {} & M^\tree{V}_{u^\Pi_\xi} \\ {} & E^{{\tree{U}}}_\xi & E^\tree{V}_{u^\Pi_\xi}\\ {} & E^{{\tree{W}}}_{u^{\Phi}_\xi} & E^{\tree{W}^*}_{u^{\Phi^*\circ\Pi}_\xi} \\ M^{{\tree{W}}}_{u^{\Phi}_\xi} & {} & {} & M^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_\xi}\\ }; \path[-stealth] (m-1-1) edge node [above] {$t_\xi^\Pi$} (m-1-4) edge node [left] {$t_\xi^{\Phi}$} (m-4-1) (m-1-4) edge node [right] {$t_{u^\Pi_\xi}^{\Phi^*}$} (m-4-4) (m-4-1) edge node [below] {$t_{u^{\Phi}_\xi}^\Gamma$} (m-4-4) (m-2-2) edge[draw=none] node [sloped, auto=false, allow upside down] {$\in$} (m-1-1) edge[draw=none] node [sloped, auto=false, allow upside down] {$\mapsto$} (m-2-3) edge[draw=none] node [sloped, auto=false, allow upside down] {$\mapsto$} (m-3-2) (m-2-3) edge[draw=none] node [sloped, auto=false, allow upside down] {$\in$} (m-1-4) edge[draw=none] node [sloped, auto=false, allow upside down] {$\mapsto$} (m-3-3) (m-3-2) edge[draw=none] node [sloped, auto=false, allow upside down] {$\mapsto$} (m-3-3) edge[draw=none] node [sloped, auto=false, allow upside down] {$\in$} (m-4-1) (m-3-3) edge[draw=none] node [sloped, auto=false, allow upside down] {$\in$} (m-4-4) ; \end{tikzpicture}\] \[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { M^{{\tree{U}}}_{\xi+1} & {} & {} & M^\tree{V}_{u^\Pi_\xi+1} \\ {} & M^{{\tree{U}}}_{\eta} & M^\tree{V}_{\eta^*}\\ {} & M^{{\tree{W}}}_{\eta} & M^{\tree{W}^*}_{\eta^*} \\ M^{{\tree{W}}}_{u^{\Phi}_{\xi+1}} & {} & {} & M^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_\xi+1}\\ }; \path[-stealth] (m-1-1) edge node [above] {$s_{\xi+1}^\Pi$} (m-1-4) edge node [left] {$s_{\xi+1}^{\Phi}$} (m-4-1) (m-1-4) edge node [right] {$s_{u^\Pi_\xi+1}^{\Phi^*}$} (m-4-4) (m-4-1) edge[dashed,->] node [below] {$s_{u^{\Phi}_{\xi+1}}^\Gamma$} (m-4-4) (m-2-2) edge node [below] {$E^{{\tree{U}}}_\xi$} (m-1-1) edge node [above] {$ 
s_{\eta,\eta^*}^\Pi$} (m-2-3) edge node [left] {$id$} (m-3-2) (m-2-3) edge node [below] {$E^\tree{V}_{u^\Pi_\xi}$} (m-1-4) edge node [right] {$id$} (m-3-3) (m-3-2) edge node [below] {$ s_{\eta,\eta^*}^\Gamma$} (m-3-3) edge node [above] {$E^{{\tree{W}}}_{u^{\Phi}_\xi}$} (m-4-1) (m-3-3) edge node [above] {$E^{\tree{W}^*}_{u^{\Phi^*\circ\Pi}_\xi}$} (m-4-4) ; \end{tikzpicture}\] Each map along the outer square of the bottom diagram is the copy map associated to the maps along the corresponding side of the top square, the inner square, and the appropriate extenders. In particular, we let $s_{u^{\Phi}_{\xi+1}}^\Gamma$ be the copy map associated to ($t^\Gamma_{u^{\Phi}_\xi}$, $ s^\Gamma_{\eta,\eta^*}$, $E^{{\tree{W}}}_{u^{\Phi}(\xi)}, E^{{\tree{W}^*}}_{u^{\Phi^*\circ \Pi}(\xi)}$), as we must. Note that the ordinary Shift Lemma applies in this case because we have assumed that, so far, $\Gamma$ is a tree embedding. In particular, since $\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\trianglelefteq M^{{\tree{W}}}_{\eta}|\hat\lambda(E_{\eta}^{{\tree{W}}})$, the agreement properties of $t$-maps give that \begin{align*} t^\Gamma_{u^{\Phi}(\xi)}\upharpoonright\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\cup \{\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\} & = t^\Gamma_{\eta}\upharpoonright\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\cup \{\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\} \\ & = s^\Gamma_{\eta,\eta^*}\upharpoonright\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\cup \{\text{dom}(E^{{\tree{W}}}_{u^{\Phi}(\xi)})\}, \end{align*} using for this second equality that $\text{crit}(\hat\imath^{\tree{W}^*}_{\eta^*,u^{\Gamma}(\eta)})>\text{crit}(E^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}(\xi)})$, since otherwise we would have used an extender $E_\gamma^{\tree{W}^*}$ with $\text{crit}(E_\gamma^{\tree{W}^*})\leq \text{crit}(E^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}(\xi)})<\hat\lambda(E_\gamma^{\tree{W}^*})$, but then $\text{crit}(E^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}(\xi)})$ can't be in
$\text{ran}(\hat\imath^{\tree{W}^*}_{\eta^*,u^{\Gamma}(\eta)})\supseteq \text{ran}(t^\Gamma_{\eta})$, a contradiction. In the lower diagram, the inner square commutes by our induction hypothesis and all the trapezoids commute since the outer maps are copy maps associated to the relevant objects. We now want to see that the full outer square commutes. Let's look at the two ways of going around this outer square, $s^{\Phi^*}_{u^\Pi_\xi +1}\circ s^\Pi_{\xi+1}$ and $s^\Gamma_{u^{\Phi}_{\xi+1}}\circ s^{\Phi}_{\xi+1}$. Since $s^\Pi_{\xi+1}$ and $s^{\Phi^*}_{u^\Pi_\xi +1}$ are copy maps associated to the appropriate objects, Lemma \ref{shift composition} gives that $s^{\Phi^*}_{u^\Pi_\xi +1}\circ s^\Pi_{\xi+1}$ is the copy map associated to ($t^{\Phi^*}_{u^\Pi_\xi}\circ t^\Pi_\xi$, $ s_{\eta,\eta^*}^\Pi$, $E^{{\tree{U}}}_\xi$, $E^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_\xi}$). Similarly, $s^\Gamma_{u^{\Phi}_{\xi+1}}\circ s^{\Phi}_{\xi+1}$ is the copy map associated to ($t^\Gamma_{u^{\Phi}_\xi}\circ t^{\Phi}_\xi$, $ s^\Gamma_{\eta,\eta^*}$, $E^{{\tree{U}}}_\xi$, $E^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_\xi}$). But $t^{\Phi^*}_{u^\Pi_\xi}\circ t^\Pi_\xi= t^\Gamma_{u^{\Phi}_\xi}\circ t^{\Phi}_\xi$ and $s_{\eta,\eta^*}^\Pi= s^\Gamma_{\eta,\eta^*}$, so the two ways of going around the outer square are both the copy map associated to the same objects, i.e. $s^{\Phi^*}_{u^\Pi_\xi +1}\circ s^\Pi_{\xi+1}=s^\Gamma_{u^{\Phi}_{\xi+1}}\circ s^{\Phi}_{\xi+1}$. Note that $u^{\Phi^*\circ \Pi}_\xi+1= v^{\Phi^*\circ\Pi}_{\xi+1}\leq_{\tree{W}^*} u^{\Phi^*\circ\Pi}_{\xi+1}$. We now define \[t_{u^{\Phi}_{\xi+1}}^\Gamma= \hat\imath^{\tree{W}^*}_{u^{\Phi^*\circ \Pi}_\xi+1, u^{\Phi^*\circ \Pi}_{\xi+1}}\circ s_{u^{\Phi}_{\xi+1}}^\Gamma,\] as we must. Finally, we check that this assignment gives us a commuting square of the $t$-maps. We get the following diagram.
\[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=6em,column sep=6em,minimum width=6em] { M^{{\tree{U}}}_{\xi+1} & M^\tree{V}_{v^\Pi_{\xi+1}} & M^\tree{V}_{u^\Pi_{\xi+1}} \\ M^{{\tree{W}}}_{u^{\Phi}_{\xi+1}} & M^{\tree{W}^*}_{v^{\Phi^*}( u^\Pi_\xi+1)} & M^{\tree{W}^*}_{u^{\Phi^*\circ\Pi}_{\xi+1}}\\ }; \path[-stealth] (m-1-1) edge node [above] {$s_{\xi+1}^\Pi$} (m-1-2) edge node [left] {$t_{\xi+1}^{\Phi}$} (m-2-1) (m-1-2) edge node [above] {$\tree{V}$} (m-1-3) edge node [left] {$t_{v^\Pi_{\xi+1}}^{\Phi^*}$} (m-2-2) (m-1-3) edge node [right] {$t_{u^\Pi_{\xi+1}}^{\Phi^*}$} (m-2-3) (m-2-1) edge node [below] {$s_{u^{\Phi}_{\xi+1}}^{\Gamma}$} (m-2-2) (m-2-2) edge node [below] {$\tree{W}^*$} (m-2-3); \end{tikzpicture}\] We just need to see that this diagram commutes, since $t_{\xi+1}^\Pi$ is just the map going across the top and $t_{u^{\Phi}(\xi+1)}^\Gamma$ is the map going across the bottom (so this really is the relevant square of $t$-maps). The left square is just the outer square of the lower commuting diagram, above (though we used $u^\Pi(\xi)+1= v^\Pi(\xi+1)$ and $u^{\Phi^*}\circ u^\Pi(\xi) +1= v^{\Phi^*}(u^\Pi(\xi)+1)$ to change the labeled indices of the models in the middle column to emphasize how we knew they were tree-related to the appropriate models all the way on the right; we get these equivalences since $\xi+1>\beta$ and $u^\Pi(\xi+1), u^\Pi(\xi)+1>\beta^*$). We've also used that all the vertical $t$-maps are the same as the corresponding $s$-maps (by the equivalence of the indices just mentioned). This last fact (that the vertical $t$-maps are the same as the corresponding $s$-maps) also gives us that the square on the right commutes, since $\Phi$ is a tree embedding. If we drop when applying any of the $E^{{\tree{U}}}_\xi, E^\tree{V}_{u^\Pi_\xi}$, etc., then we drop when applying all of them, and the initial segments to which we apply these extenders are all mapped to each other by the relevant maps.
In this case, everything remains the same except that we must use the initial segments to which we apply the extenders instead of the models displayed in the above diagrams, e.g. some $P\trianglelefteq M^{{\tree{U}}}_{{\eta}}$ instead of $M^{{\tree{U}}}_{{\eta}}$. \paragraph{Subcase B.} $\text{crit} (E^{{\tree{U}}}_{\xi})\geq \text{crit} ( F)$. \\ In this case, $\eta\geq \beta$ and $u^{\Phi}(\eta)={\tree{W}}\text{-pred}(u^{\Phi}(\xi)+1)$. We also get $\text{crit} (E^\tree{V}_{u^\Pi(\xi)})\geq \text{crit} (G)$, so $\eta^*\geq \beta^*$ and $u^{\Phi^*}(\eta^*)=\tree{W}^*\text{-pred}(u^{\Phi^*\circ \Pi}(\xi)+1)$. We now have that the model to which $E^{{\tree{U}}}_\xi$ is applied is related to the model to which $E^{{\tree{W}}}_{u^{\Phi}(\xi)}$ is applied by a $t$-map of $\Phi$, whereas they were just the same model in the previous case. Similarly on the $\tree{V}$-$\tree{W}^*$ side. The only thing this changes is that we replace the identity maps in the previous diagram with these $t$-maps. This is the diagram for the non-dropping case (as before, dropping makes little difference).
\[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { M^{{\tree{U}}}_{\xi+1} & {} & {} & M^\tree{V}_{u^\Pi_\xi+1} \\ {} & M^{{\tree{U}}}_{\eta} & M^\tree{V}_{\eta^*}\\ {} & M^{{\tree{W}}}_{u^{\Phi}_{\eta}} & M^{\tree{W}^*}_{u^{\Phi^*}_{\eta^*}} \\ M^{{\tree{W}}}_{u^{\Phi}_{\xi+1}} & {} & {} & M^{\tree{W}^*}_{u^{\Phi^*\circ\Pi}_\xi+1}\\ }; \path[-stealth] (m-1-1) edge node [above] {$s_{\xi+1}^\Pi$} (m-1-4) edge node [left] {$s_{\xi+1}^{\Phi}$} (m-4-1) (m-1-4) edge node [right] {$s_{u^\Pi_\xi+1}^{\Phi^*}$} (m-4-4) (m-4-1) edge[dashed,->] node [below] {$s_{u^{\Phi}_{\xi+1}}^\Gamma$} (m-4-4) (m-2-2) edge node [below] {$E^{{\tree{U}}}_\xi$} (m-1-1) edge node [above] {$ s_{\eta,\eta^*}^\Pi$} (m-2-3) edge node [left] {$t_{\eta}^{\Phi}$} (m-3-2) (m-2-3) edge node [below] {$E^\tree{V}_{u^\Pi_\xi}$} (m-1-4) edge node [right] {$t_{\eta^*}^{\Phi^*}$} (m-3-3) (m-3-2) edge node [below] {$ s_{u^\Phi_\eta,u^{\Phi^*}_{\eta^*}}^\Gamma$} (m-3-3) edge node [above] {$E^{{\tree{W}}}_{u^{\Phi}_\xi}$} (m-4-1) (m-3-3) edge node [above] {$E^{\tree{W}^*}_{u^{\Phi^*\circ\Pi}_\xi}$} (m-4-4) ; \end{tikzpicture}\] The rest of the diagrams and arguments are essentially the same as before. This finishes the successor case. \paragraph{Limit case.} $\lambda>\beta^*$ is a limit and $u^{\Phi^*\circ \Pi}(\lambda)\leq\mu^*$. We have $u^{\Phi^*\circ \Pi}(\xi)<\mu^*$ for all $\xi<\lambda$, so that by our induction hypothesis, $u^{\Phi}(\xi)<\mu$. So, $u^{\Phi}(\lambda)=v^{\Phi}(\lambda)=\sup\{u^{\Phi}(\xi)\mid \xi<\lambda\}\leq \mu$. Let $c=[0,u^{\Phi}(\lambda))_{{\tree{W}}}$ and $c^*=[0,u^{\Phi^*\circ \Pi}(\lambda))_{\tree{W}^*}$. We need to see that $c^*$ is the $\leq_{\tree{W}^*}$-downward closure of $v^\Gamma[c]$. To do this, we just trace $c$, $c^*$ back to the branch $ b=[0,\lambda)_{{\tree{U}}}$.
We have \[c=\{\xi\mid \exists \eta\in b \,(\xi\leq_{{\tree{W}}}v^{\Phi}(\eta))\},\] and \[c^*=\{\xi\mid \exists \eta\in b \,(\xi\leq_{\tree{W}^*} v^{\Phi^*\circ \Pi}(\eta))\}.\] We also have that $v^\Gamma(v^{\Phi}(\eta))=v^{\Phi^*\circ \Pi}(\eta)$ for every $\eta$, so that $v^\Gamma[c] \subseteq c^*$; moreover, every $\xi\in c^*$ satisfies $\xi\leq_{\tree{W}^*} v^{\Phi^*\circ \Pi}(\eta)=v^\Gamma(v^{\Phi}(\eta))$ for some $\eta\in b$, and $v^{\Phi}(\eta)\in c$. This implies that $c^*$ is the downward closure of $v^\Gamma[c]$, as desired. So, we get our map $s^\Gamma_{u^{\Phi}_\lambda}$ commuting with the maps $s^\Gamma_{u^{\Phi}(\xi)}$ since we are taking the direct limits along $c$ and $c^*$ on both sides. From here, we get $t^\Gamma_{u^{\Phi}_\lambda}$ as in the successor case. This finishes our construction of $\Gamma$. For the \lq\lq moreover" clause, we've already shown that if $u^{\Phi^*\circ \Pi}(\xi)\leq\mu^*$, then $u^{\Phi}(\xi)\leq\mu$. So, if the full $V(\tree{V},\tree{T}, G)$ is wellfounded, then for all $\xi< \text{lh}({\tree{U}})$, $u^{\Phi}(\xi)<\mu$. But then $\mu+1=\text{lh} (V({\tree{U}},{\tree{S}},F))$, so $\Gamma$ is a total extended tree embedding from $V({\tree{U}},{\tree{S}},F)$ into $V(\tree{V},\tree{T},G)$. \qed \end{proof} We will carry over our notation for the ordinary Shift Lemma. \begin{definition} Let $\Psi:\tree{S}\to \tree{T}$ and $\Pi:\tree{U} \to \tree{V}$ be extended tree embeddings, $F$ an extender such that $F^-$ is an extender on the $M_\infty^\tree{S}$-sequence, and $G$ an extender such that $G^-$ is on the $M_\infty^\tree{T}$-sequence. Suppose that the Shift Lemma applies to $(\Psi,\Pi, F, G)$. We'll say that an extended tree embedding $\Gamma: V(\tree{U},\tree{S}, F)\to V(\tree{V},\tree{T}, G)$ is \textit{the copy tree embedding associated to $(\Psi, \Pi, F, G)$} iff it is the unique tree embedding as in the conclusion of the Shift Lemma. \end{definition} We can now carry out the copying construction. \begin{theorem}[Copying]\label{copying} Let $\Gamma:\tree{S}\to \tree{T}$ be a non-dropping extended tree embedding.
Let $\mtree{S}=\langle \tree{S}_\xi, \Phi^{\eta,\xi},F_\zeta\,|\, \xi,\zeta+1<\text{lh} (\mtree{S})\rangle$ be a meta-tree on $\tree{S}$. Then there is some largest $\mu\leq \text{lh} (\mtree{S})$ such that there is a meta-tree $\Gamma\mtree{S}=\langle \tree{T}_\xi, \Psi^{\eta,\xi}, G_\zeta\,|\,\xi,\zeta+1<\mu\rangle$ on $\tree{T}$ with tree-order $\leq_\mtree{S}\upharpoonright \mu$ and for $\xi<\mu$, non-dropping extended tree embeddings $\Gamma^\xi: \tree{S}_\xi\to \tree{T}_\xi$ with (total) last $t$-map $t_\infty^\xi$ such that \begin{enumerate} \item $\Gamma=\Gamma^0$, \item$G_\xi=t_\infty^\xi(F_\xi)$, \item and for all $\eta\leq_\mtree{S}\xi$, $\Gamma^\xi\circ \Phi^{\eta,\xi}=\Psi^{\eta,\xi}\circ \Gamma^\eta$. \end{enumerate} Moreover, \begin{enumerate} \item if $\mtree{S}$ is simple, so is $\Gamma\mtree{S}$, \item if $\mtree{S}=\mtree{V}(\tree{S}, \tree{U})$ for some tree $\tree{U}$, then $\Gamma\mtree{S}= \mtree{V}(\tree{T},t^\Gamma_\infty \tree{U}\upharpoonright\mu)$.\footnote{We can copy $\tree{U}$ by $t^\Gamma_\infty$ since this is a total elementary embedding from the last model of $\tree{S}$ to the last model of $\tree{T}$.} \item if $\tree{S},\tree{T}$ are by some strategy $\Sigma$ with strong hull condensation and $\mtree{S}$ is by $\Sigma^*$, then $\mu=\text{lh} (\mtree{S})$ and $\Gamma\mtree{S}$ is by $\Sigma^*$. \end{enumerate} \end{theorem} \begin{proof} We define $\Gamma \mtree{S}$ by induction, using the Shift Lemma at successors. $\mu$ will just be the least ordinal such that this process breaks down or the full $\text{lh} (\mtree{S})$ if it doesn't break down. We just do the case that $\mtree{S}$ is simple. To deal with gratuitous drops, we just make the corresponding drops on the $\Gamma\mtree{S}$ side as well. 
That is, if at stage $\xi$, $\tree{S}_\xi= \tree{S}^+_\xi\upharpoonright\eta+1$ with $\eta+1< \text{lh} (\tree{S}^+_\xi)$, we just put $\tree{T}_\xi = \tree{T}^+_\xi\upharpoonright v^{\Gamma^\xi}(\eta)+1$.\footnote{This is somewhat arbitrary. We only need to drop to some level $\tree{T}_\xi^+\upharpoonright \eta^*+1$ such that $v^{\Gamma^\xi}(\eta)\leq_{\tree{T}^+_\xi}\eta^*$ and $(v^{\Gamma^\xi}(\eta),\eta^*)_{\tree{T}^+_\xi}$ doesn't drop, so that we have an extended tree embedding with a total last $t$-map. (Note that here we technically mean the extension of $\Gamma^\xi$ to an extended tree embedding $\tree{S}^+_\xi\to \tree{T}^+_\xi$.)} Let $\alpha_\xi=\alpha_0(F_\xi, \tree{S}_\xi)$ and $\beta_\xi=\beta(F_\xi, \tree{S}_\xi)$. Supposing we've defined $\Gamma\mtree{S}\upharpoonright\xi+1$, let $\alpha_\xi^*=\alpha_0(G_\xi, \tree{T}_\xi)$ and $\beta_\xi^*=\beta( G_\xi,\tree{T}_\xi)$. We'll maintain the following by induction. For $\eta\leq\xi<\mu$, (1)-(3) hold as well as \begin{enumerate} \item[(4)] $\Gamma^\eta\upharpoonright\alpha_\eta+1\approx\Gamma^\xi\upharpoonright\alpha_\eta+1$, \item[(5)] $u^{\Gamma^\xi}(\alpha_\eta)\leq_{\tree{T}_\eta} u^{\Gamma^\eta}(\alpha_\eta)$ and $\tree{T}_\xi \upharpoonright u^{\Gamma^\xi}(\alpha_\eta)+1=\tree{T}_\eta \upharpoonright u^{\Gamma^\xi}(\alpha_\eta)+1$, \item[(6)] $t^\xi_\infty$ is total and $t^\eta_\infty\upharpoonright\text{lh} (F_\eta)+1=t^\xi_\infty\upharpoonright\text{lh}( F_\eta)+1$. \end{enumerate} This will allow us to verify that $\Gamma\mtree{S}$ has the same tree order as $\mtree{S}$ and show that we satisfy the hypotheses of the Shift Lemma at successor stages. Suppose we've defined $\Gamma(\mtree{S}\upharpoonright \xi+1)$, so we have $\Gamma^\xi:\tree{S}_\xi\to \tree{T}_\xi$ an extended tree embedding with total last $t$-map $t^\xi_\infty$, by (6). Let $\eta=\mtree{S}\text{-pred} (\xi+1)$.
We need to check that $\eta$ is least such that $\text{crit} (G_\xi)<\hat\lambda(G_\eta)$, so that $\eta=\Gamma\mtree{S}\text{-pred}(\xi+1)$, according to normality. We have $\text{crit} (G_\xi) =t^\xi_\infty(\text{crit} (F_\xi))= t_\infty^\eta(\text{crit} (F_\xi))$, since $\text{crit} (F_\xi)<\hat\lambda(F_\eta)$ and $t^\eta_\infty$ agrees with $t_\infty^\xi$ up to $\text{lh} (F_\eta) +1$ by (6). So $\text{crit} (G_\xi)=t^\eta_\infty(\text{crit} (F_\xi))<t^\eta_\infty(\hat\lambda(F_\eta))=\hat\lambda(G_\eta)$. Now suppose $\zeta$ is such that $\text{crit} (G_\xi)<\hat\lambda(G_\zeta)$. Then $t^\zeta_\infty(\text{crit} (F_\xi))= t^\xi_\infty(\text{crit} (F_\xi)) = \text{crit} (G_\xi)< \hat\lambda(G_\zeta)=t^\zeta_\infty(\hat\lambda(F_\zeta))$. So $\text{crit} (F_\xi)<\hat\lambda(F_\zeta)$, so $\zeta\geq \eta$, as desired. We now want to apply our Shift Lemma to, in the notation of that lemma, $\Psi= \Gamma^\xi$, $\Pi=\Gamma^\eta$, $F=F_\xi$ and $G=G_\xi$, and then let $\Gamma^{\xi+1}$ be the resulting copy tree embedding $\Gamma$, assuming $V(\tree{T}_\eta,\tree{T}_\xi, G_\xi)$ is wellfounded. \begin{claim}\label{copy claim 1} The Shift Lemma applies to $(\Gamma^\xi, \Gamma^\eta, F_\xi, G_\xi)$, i.e.
\begin{enumerate} \item[(i)] $M_\infty^{\tree{S}_\xi}|\text{lh}(F_\xi)\trianglelefteq\text{dom}( t_\infty^\xi)$ and $G_\xi=t_\infty^\xi(F_\xi)$, \item[(ii)] $\Gamma^\xi\upharpoonright\beta_\xi+1\approx \Gamma^\eta\upharpoonright\beta_\xi+1$, \item[(iii)] $\tree{T}_\xi\upharpoonright \beta^*_\xi+1=\tree{T}_\eta\upharpoonright\beta^*_\xi+1$, \item[(iv)] $\beta^*_\xi\in [v^\eta(\beta_\xi), u^\eta(\beta_\xi)]_{\tree{T}_\eta}$ and $t_{\beta_\xi}^\eta\upharpoonright\text{dom}(F_\xi)\cup\{\text{dom}(F_\xi)\}=s_{\beta_\xi, \beta^*_\xi}^\eta\upharpoonright\text{dom}(F_\xi)\cup\{\text{dom}(F_\xi)\}$, \item[(v)] if $\beta_\xi+1<\text{lh}(\tree{S}_\eta)$, then $\text{dom}(F_\xi) \triangleleft M_{\beta_\xi}^{\tree{S}_\eta}|\text{lh}(E^{\tree{S}_\eta}_{\beta_\xi})$, and \item[(vi)] if $\beta^*_\xi+1<\text{lh}(\tree{T}_\eta)$, then $\text{dom}(G_\xi) \triangleleft M_{\beta^*_\xi}^{\tree{T}_\eta}|\text{lh}(E^{\tree{T}_\eta}_{\beta^*_\xi})$. \end{enumerate} \end{claim} \begin{proof} First notice that (i) is trivial as $t^\xi_\infty$ is a total elementary embedding. We also have $\beta_\xi\leq\alpha_\eta$, since either $\eta=\xi$ and this is trivial, or else $\eta<\xi$ and $F_\eta=E_{\alpha_\eta}^{\tree{S}_\xi}$, so that this follows by our choice of $\eta$. Similarly, $\beta_\xi^*\leq\alpha^*_\eta$. These observations and hypothesis (4) imply (ii) and (iii). For (iv) we split into cases depending on whether $\beta_\xi=\alpha_\eta$ or $\beta_\xi<\alpha_\eta$. If $\beta_\xi=\alpha_\eta$, then Lemma \ref{key lemma 1} and hypothesis (5) give (iv). If $\beta_\xi<\alpha_\eta$, then this just follows from Lemma \ref{key lemma 1} and (4). (v) and (vi) follow by considering similar cases. \hfill{$\qed$ Claim \ref{copy claim 1}} \end{proof} So we may let $\Gamma^{\xi+1}$ be the copy tree embedding associated to $(\Gamma^\xi, \Gamma^\eta, F_\xi, G_\xi)$. We assume that $V(\tree{T}_\eta, \tree{T}_\xi, G_\xi)=\tree{T}_{\xi+1}$ is wellfounded; otherwise we stop.
It follows from the Shift Lemma that $\tree{S}_{\xi+1}$ is wellfounded and that $\Gamma^{\xi+1}: \tree{S}_{\xi+1}\to \tree{T}_{\xi+1}$ is a non-dropping extended tree embedding. It is easy to see that, since $\Gamma^{\xi+1}$ is the appropriate copy tree embedding, (1)-(6) still hold at $\xi+1$. This finishes the successor step. Now let $\lambda<\text{lh} (\mtree{S})$ be a limit ordinal and let $b=[0,\lambda)_\mtree{S}$. We put $\tree{T}_\lambda= \lim\langle \tree{T}_\xi,\Psi^{\eta,\xi}\,|\,\eta\leq_\mtree{S}\xi\in b\rangle$ if this direct limit is wellfounded. Otherwise we put $\mu=\lambda$ and stop. If this direct limit is wellfounded, we let $\Gamma^\lambda$ be the extended tree embedding guaranteed by Proposition \ref{direct limit prop}. We leave it to the reader to check that our induction hypotheses go through. We now turn to the \lq\lq moreover" clauses. We've already shown (1). For (2), suppose $\mtree{S}=\mtree{V}(\tree{S},\tree{U})$. We verify by induction that $\Gamma\mtree{S}= \mtree{V}(\tree{T},t^\Gamma_\infty\tree{U})$. Let $\tau_\xi: M^\tree{U}_\xi\to M^{t^\Gamma_\infty\tree{U}}_\xi$ be the copy maps (so $\tau_0=t^\Gamma_\infty$) and $\sigma_\xi:M^\tree{U}_\xi\to M^{\tree{S}_\xi}_{\infty}$, $\sigma_\xi^*:M^{\tau_0\tree{U}}_\xi\to M^{\tree{T}_\xi}_\infty$, the quasi-normalization maps. We check by induction that for all $\xi<\mu$, \begin{enumerate} \item $\tau_0 \tree{U}\upharpoonright\xi+1$ is a putative plus tree, \item $\Gamma\mtree{S}\upharpoonright \xi+1 = \mtree{V}(\tree{T},\tau_0 \tree{U}\upharpoonright\xi+1)$, \item $\sigma^*_\xi\circ\tau_\xi = t^\xi_\infty \circ \sigma_\xi$. \end{enumerate} (1) follows from (2) by induction, since $M^{\tau_0\tree{U}}_\xi$ embeds into the last model of $V(\tree{T}, \tau_0 \tree{U}\upharpoonright\xi+1)$. These hypotheses go through at limits $\lambda<\mu$ since we're taking direct limits everywhere. So we just deal with the successor case.
Let $\xi+1<\mu$ and \[\eta=\tree{U}\text{-pred} (\xi+1)= \mtree{S}\text{-pred}(\xi+1)=\Gamma\mtree{S}\text{-pred}(\xi+1).\] We have \[E^{\tau_0\tree{U}}_{\xi}=\tau_\xi(E^\tree{U}_\xi)\] so applying $\sigma^*_\xi$ and using (3), we get \[\sigma^*_\xi(E^{\tau_0\tree{U}}_{\xi})=t^\xi_\infty\circ \sigma_\xi(E^\tree{U}_\xi).\] But $\sigma_\xi(E^\tree{U}_\xi)= F_\xi$, so we get \[ \sigma^*_\xi(E^{\tau_0\tree{U}}_{\xi}) = G_\xi.\] So \[\tree{T}_{\xi+1} = V(\tree{T}_\eta, \tree{T}_\xi, G_\xi) = V(\tree{T}, \tau_0\tree{U}\upharpoonright\xi+1),\] giving us (2) and (1) at $\xi+1$. All that remains is to verify $\sigma^*_{\xi+1}\circ \tau_{\xi+1}=t^{\xi+1}_\infty\circ \sigma_{\xi+1}$. So we just need to see that the outermost square of the following diagram commutes. \[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { M^\tree{U}_{\xi+1} & {} & {} & Ult(M^{\tree{S}_\eta}_\infty, F_\xi) & M^{\tree{S}_{\xi+1}}_\infty \\ {} & M^\tree{U}_{\eta} & M_\infty^{\tree{S}_\eta}\\ {} & M^{\tau_0 \tree{U}}_\eta & M_\infty^{\tree{T}_\eta} \\ M^{\tau_0 \tree{U}}_{\xi+1} & {} & {} & Ult(M_\infty^{\tree{T}_\eta}, G_\xi) & M_\infty^{\tree{T}_{\xi+1}}\\ }; \path[-stealth] (m-1-1) edge node [above] {shift} (m-1-4) edge node [left] {$\tau_{\xi+1}$} (m-4-1) edge [bend left] node [above] {$\sigma_{\xi+1}$} (m-1-5) (m-1-4) edge node [right] {} (m-1-5) (m-1-5) edge node [right] {$t^{\xi+1}_\infty$} (m-4-5) (m-4-1) edge node [below] {shift} (m-4-4) edge [bend right] node [below] {$\sigma^*_{\xi+1}$} (m-4-5) (m-2-2) edge node [below] {$E^\tree{U}_\xi$} (m-1-1) edge node [above] {$\sigma_\eta$} (m-2-3) edge node [left] {$\tau_\eta$} (m-3-2) (m-2-3) edge node [below] {$F_\xi$} (m-1-4) edge node [right] {$t^\eta_\infty$} (m-3-3) edge node [below] {$t^{\Phi^{\eta,\xi+1}}_{\infty}$} (m-1-5) (m-3-2) edge node [below] {$\sigma^*_\eta$} (m-3-3) edge node [above] {$E^{\tau_0\tree{U}}_\xi$} (m-4-1) (m-3-3) edge node [above] {$G_\xi$} (m-4-4) edge node [above]
{$t^{\Psi^{\eta,\xi+1}}_{\infty}$} (m-4-5) (m-4-4) edge node {} (m-4-5); \end{tikzpicture}\] We have that every region except the rightmost trapezoid commutes by facts about quasi-normalization or the ordinary Shift Lemma. So we just need to see that the rightmost trapezoid commutes, i.e. \[t^{\Psi^{\eta,\xi+1}}_{\infty}\circ t^\eta_\infty =t^{\xi+1}_\infty \circ t^{\Phi^{\eta,\xi+1}}_{\infty}.\] But this follows immediately from the fact that $\Psi^{\eta,\xi+1}\circ\Gamma^{\eta}= \Gamma^{\xi+1}\circ \Phi^{\eta,\xi+1}$. This finishes the successor step of the induction and so establishes part (2) of the \lq\lq moreover" clause. To finish, we check part (3) of the \lq\lq moreover" clause, i.e. suppose $\tree{S},\tree{T}$ are by $\Sigma$ and $\mtree{S}$ is by $\Sigma^*$, where $\Sigma$ is some strategy for $M$ with strong hull condensation. We show that $\mu=\text{lh} (\mtree{S})$ and that $\Gamma\mtree{S}$ is by $\Sigma^*$ simultaneously by induction. As long as $\Gamma\mtree{S}\upharpoonright\xi+1$ is by $\Sigma^*$, we know $\xi<\mu$ since the process hasn't broken down. Successors cause no trouble by Remark \ref{shift remark}, so we deal with limits. So we have $\Gamma\mtree{S}\upharpoonright\lambda$ is by $\Sigma^*$ and we need to see that for $b=\Sigma^*(\Gamma\mtree{S}\upharpoonright\lambda)$, $b=[0,\lambda)_\mtree{S}$. Since we take direct limits on both sides, by Proposition \ref{direct limit prop}, we get a direct limit tree embedding from the last tree of $(\mtree{S}\upharpoonright\lambda){}^\frown b$ to the last tree of $(\Gamma\mtree{S}\upharpoonright\lambda){}^\frown b$, which is by $\Sigma$. So since $\Sigma$ has strong hull condensation, the last tree of $(\mtree{S}\upharpoonright\lambda){}^\frown b$ is by $\Sigma$, hence $b=\Sigma^*(\mtree{S}\upharpoonright\lambda)=[0,\lambda)_\mtree{S}$ by the definition of $\Sigma^*$. Of course, because $b=\Sigma^*(\Gamma\mtree{S}\upharpoonright \lambda)$, $\lambda<\mu$, as well.
\qed \end{proof} We will also need the analogue of Lemma \ref{shift direct limits}, whose proof we omit. \begin{lemma}\label{m-shift direct limits} Let $\preceq$ be a directed partial order on a set $A$. Suppose we have directed systems of plus trees $\mathcal{C}=\langle \{\tree{S}_a\}_{a\in A}, \{\Psi^{a, b}\}_{a\preceq b}\rangle$ and $\mathcal{D}=\langle \{\tree{T}_a\}_{a\in A}, \{\Pi^{a, b}\}_{a\preceq b}\rangle$ and extenders $\{F_a\}_{a\in A}$ such that \begin{enumerate} \item for all $a\in A$, $F_a^-$ is on the $M^{\tree{S}_a}_\infty$-sequence and $M^{\tree{S}_a}_\infty|\text{lh}(F_a)\trianglelefteq \text{dom}(t^{\Psi^{a,b}}_\infty)$, \item for all $a,b\in A$ such that $a\preceq b$, the Shift Lemma applies to $(\Psi^{a,b}, \Pi^{a,b}, F_a, F_b)$. \end{enumerate} For $a,b\in A$ such that $a\preceq b$, let $\Gamma^{a,b}$ be the copy tree embedding associated to $(\Psi^{a,b}, \Pi^{a,b}, F_a, F_b)$. Let $\tree{S}_\infty = \lim\mathcal{C}$, $\tree{T}_\infty=\lim\mathcal{D}$, $\Psi^{a,\infty}:\tree{S}_a\to \tree{S}_\infty$ and $\Pi^{a, \infty}:\tree{T}_a\to \tree{T}_\infty$ the direct limit tree embeddings, and $F_\infty$ the common value of $t^{\Psi^{a,\infty}}_\infty(F_a)$.
Let $\tree{V}_a=V(\tree{T}_a, \tree{S}_a, F_a)$, $\Phi^a=\Phi^{V(\tree{T}_a, \tree{S}_a, F_a)}$, $\tree{V}_\infty=\lim \langle \{\tree{V}_a\}_{a\in A}, \{\Gamma^{a,b}\}_{a\preceq b}\rangle$, $\Gamma^{a,\infty}: \tree{V}_a\to \tree{V}_\infty$ the direct limit tree embedding, and $\Phi^\infty: \tree{T}_\infty\to \tree{V}_\infty$ the unique extended tree embedding such that for every $a\in A$, the following diagram commutes.\footnote{Such an extended tree embedding is guaranteed by Proposition \ref{direct limit prop}.} \begin{center} \begin{tikzcd} \tree{T}_a \arrow[r, "\Psi^{a,\infty}"] \arrow[d, "\Phi^a"'] & \tree{T}_\infty \arrow[d,"\Phi^\infty"]\\ \tree{V}_a \arrow[r, "\Gamma^{a,\infty}"'] & \tree{V}_\infty \end{tikzcd} \end{center} Then $\tree{V}_\infty=V(\tree{T}_\infty,\tree{S}_\infty,F_\infty)$, $\Phi^\infty= \Phi^{V(\tree{T}_\infty,\tree{S}_\infty, F_\infty)}$, and $\Gamma^{a,\infty}$ is the copy tree embedding associated to $(\Psi^{a,\infty}, \Pi^{a,\infty}, F_a, F_\infty)$. \end{lemma} We end this section with an application of copying which won't be used in the remainder of the paper. It shows that there is some redundancy in the pullback clause (i.e. clause (b)) in the definition of strong hull condensation. Ultimately, nice strategies for plus trees are generated by their action on normal trees and so it seems plausible that this pullback clause is actually redundant. \begin{proposition} [Siskind] \label{elementarity prop} Let $M$ be a premouse and $\Sigma$ an iteration strategy for $M$ which quasi-normalizes well such that every pseudo-hull of a plus tree on $M$ by $\Sigma$ is by $\Sigma$. Let $\tree{S},\tree{T}$ be plus trees on $M$ by $\Sigma$, and $\Phi:\tree{S}\to \tree{T}$ a non-dropping extended tree embedding.
Then for any normal tree $\tree{U}$ on $M_\infty^\tree{S}$ by $\Sigma_\tree{S}$, $t^\Phi_\infty \tree{U}$ is by $\Sigma_\tree{T}$.\end{proposition} \begin{proof} Suppose $\tree{U}$ is a normal tree on $M_\infty^\tree{S}$ of limit length $\lambda$ which is by $\Sigma_{\tree{S}}$ and such that $t^\Phi_\infty \tree{U}$ is by $\Sigma_{\tree{T}}$. We need to see that $\Sigma_{\tree{S}}(\tree{U})=\Sigma_\tree{T}(t^\Phi_\infty \tree{U})$. Let $b=\Sigma_\tree{T}(t^\Phi_\infty \tree{U})$. Then, by our copying result, there is an extended tree embedding $V(\tree{S},\tree{U}{}^\frown b)\to V(\tree{T},t^\Phi_\infty\tree{U}{}^\frown b)$ (as these are the last trees of $\mtree{V}(\tree{S},\tree{U}{}^\frown b)$ and $\mtree{V}(\tree{T},t^\Phi_\infty(\tree{U}{}^\frown b))$). Since $\Sigma$ quasi-normalizes well, $V(\tree{T},t^\Phi_\infty\tree{U}{}^\frown b)$ is by $\Sigma$; and since $V(\tree{S},\tree{U}{}^\frown b)$ is a pseudo-hull of a plus tree by $\Sigma$, it is by $\Sigma$ as well. Hence $b=\Sigma_{\tree{S}}(\tree{U})$. \qed \end{proof} \section{Nice meta-strategies and phalanx comparison} So far we have worked exclusively with meta-strategies generated by an ordinary strategy, that is, with meta-strategies of the form $\Sigma^*$, where $\Sigma$ is a strategy acting on plus trees on some premouse $M$. In this section we shall start with a plus tree $\tree{S}$ of successor length on a premouse $M$ and an arbitrary meta-strategy $\Sigma$ for meta-trees or stacks of meta-trees on $\tree{S}$. We shall identify regularity properties of $\Sigma$ which are the natural analogues of strong hull condensation and quasi-normalizing well. We then prove a general comparison theorem for pairs of the form $(\tree{S},\Sigma)$ such that $\Sigma$ has these properties. (See Theorem \ref{main comparison theorem}.) One can think of this as a strategy comparison theorem for phalanxes of the form $\Phi(\tree{S})$. Not all phalanxes are of this form, so it is not a truly general strategy comparison theorem for phalanxes.
We then use our tree-phalanx comparison theorem to characterize those meta-strategies that are of the form $\Sigma^*$ for some ordinary strategy $\Sigma$. It turns out that every sufficiently nice meta-strategy is of this form. (That is Theorem \ref{induced strategy theorem}.) The moral one might draw is that meta-strategies are not something fundamentally new, but rather a useful way of organizing constructions and proofs to do with ordinary strategies. The main step toward Theorem \ref{induced strategy theorem} is Lemma \ref{induced strategy lemma}, which is a kind of uniqueness theorem for ordinary iteration strategies $\Sigma$ whose induced meta-strategies $\Sigma^*$ behave well. \subsection{Regularity properties of meta-strategies} We first isolate the natural notion of tree embedding for meta-trees. \begin{definition} Let $\mtree{S}=\langle \tree{S}_\xi, F_\xi, \Phi^{\eta,\xi}\rangle$, $\mtree{T}=\langle \tree{T}_\xi, G_\xi, \Psi^{\eta,\xi}\rangle$, and $\alpha_\xi=\alpha_0(F_\xi,\tree{S}_\xi)$, $\beta_\xi=\beta(F_\xi, \tree{S}_\xi)$ and $\alpha^*_\xi=\alpha_0(G_\xi, \tree{T}_\xi)$, $\beta^*_\xi=\beta(G_\xi, \tree{T}_\xi)$.
A \textit{meta-tree embedding} from $\mtree{S}$ to $\mtree{T}$ is a system $\vec{\Delta}=\langle v, u, \{\Gamma_\xi\}_{\xi<\text{lh}\tree{S}}, \{\Delta_\zeta\}_{\zeta+1<\text{lh}(\mtree{S})} \rangle$ such that \begin{enumerate} \item $v:\text{lh} (\mtree{S})\to \text{lh}(\mtree{T})$ is tree-order preserving, $u:\{\eta\mid\eta+1<\text{lh} (\mtree{S})\}\to \text{lh} (\mtree{T})$, $v(\xi)=\sup\{u(\eta)+1\mid \eta<\xi\}$, and for all $\xi+1<\text{lh}(\mtree{S})$, $v(\xi)\leq_{\mtree{T}}u(\xi)$; \item For all $\xi$ and $\eta\leq_\mtree{S}\xi$, \begin{enumerate} \item $\Gamma_\xi: \tree{S}_\xi\to\tree{T}_{v(\xi)}$ is an extended tree embedding and $\Gamma_0= Id_{\tree{S}_0}$; \item $\Psi^{v(\eta),v(\xi)}\circ \Gamma_\eta = \Gamma_\xi\circ \Phi^{\eta,\xi}$, \item if $\xi+1<\text{lh}(\mtree{S})$, then $\Delta_\xi= \Psi^{v(\xi),u(\xi)}\circ \Gamma_\xi$ with $M_\infty^{\tree{S}_\xi}|\text{lh}(F_\xi)\trianglelefteq \text{dom}(t_\infty^{\Delta_\xi})$; \end{enumerate} \item for $\xi+1<\text{lh} (\mtree{S})$, $\eta=\mtree{S}\text{-pred}(\xi+1)$, and $\eta^*=\mtree{T}\text{-pred}(v(\xi)+1)$, \begin{enumerate} \item $G_{u(\xi)}=t^{\Delta_\xi}_\infty(F_\xi)$,\footnote{Notice that we haven't built in the option of sending a non-plus-type extender to a plus-type extender; this is just because we have no use for such embeddings. Probably one can develop the basics while accommodating this possibility.} \item $\eta^*\in[v(\eta),u(\eta)]_\mtree{T}$, \item $\Gamma_{\xi+1}\upharpoonright \alpha_\xi +1 \approx \Delta_\xi\upharpoonright \alpha_\xi+1$, and \item $u^{\Gamma_{\xi+1}}(\alpha_\xi)=\alpha^*_{u(\xi)}$. \end{enumerate} \end{enumerate} \end{definition} One can show, letting $\eta,\xi,\eta^*$ be as in (3), that condition (3) and the commutativity condition (2)(b) imply that the Shift Lemma applies to ($\Delta_\xi$, $\Psi^{v(\eta),\eta^*}\circ \Gamma_\eta$, $F_\xi$, $G_{u(\xi)}$) and that $\Gamma_{\xi+1}$ is the copy tree embedding associated to these objects.
At limit ordinals $\lambda$, $\tree{S}_\lambda$ and $\tree{T}_{v(\lambda)}$ are given by direct limits and so (2)(b) implies that $\Gamma_\lambda$ must be the extended tree embedding guaranteed by Proposition \ref{direct limit prop}. Suppose $\text{lh} (\mtree{S})=\gamma+1$, $\text{lh} (\mtree{T})=\delta+1$, and $v(\gamma)\leq_\mtree{T} \delta$. Then we define the associated \textit{extended meta-tree embedding} by putting $u(\gamma)=\delta$ and $\Delta_\gamma = \Psi^{v(\gamma),\delta}\circ \Gamma_\gamma$. \begin{proposition}\label{meta-tree agreement} Let $\vec{\Delta}:\mtree{S}\to \mtree{T}$ be a meta-tree embedding. Then for all $\eta<\xi<\text{lh} (\mtree{S})$, \begin{enumerate} \item $\Delta_\xi\upharpoonright\alpha_\eta+2=\Gamma_\xi\upharpoonright\alpha_\eta+2$ and \item $\Delta_\xi\upharpoonright\alpha_\eta+1=\Gamma_\xi\upharpoonright\alpha_\eta+1= \Delta_\eta\upharpoonright\alpha_\eta+1$. \end{enumerate} \end{proposition} \begin{proof} First, we show for all $\eta<\xi$, \[\Gamma_\xi\upharpoonright\alpha_\eta+1 = \Delta_\eta\upharpoonright\alpha_\eta+1\] by induction on $\xi$. This easily passes through limits, so we check at successors $\xi+1$. But this follows from condition (3) (c) in the definition of meta-tree embedding, since the $\alpha_\eta$'s are increasing. Now we just need to check that for all $\eta<\xi$, $v^{\Psi^{v(\xi),u(\xi)}}$ is the identity on $v^{\Gamma_\xi}(\alpha_\eta)+1$. But this is immediate from the normality condition of meta-trees: letting $\chi+1$ be the successor of $v(\xi)$ along $[v(\xi),u(\xi)]_\mtree{T}$, we have that $\alpha^*_{u(\eta)}<\beta_\chi\leq \alpha^*_\xi$ (using here that $G_{u(\eta)}^*= E^{\tree{T}_{u(\xi)}}_{\alpha^*_{u(\eta)}}$), so $u^{\Psi^{v(\xi),u(\xi)}}$ is the identity on $u^{\Gamma_\xi}(\alpha_\eta)+1$, i.e. $v^{\Psi^{v(\xi),u(\xi)}}$ is actually the identity on $v^{\Gamma_\xi}(\alpha_\eta)+2$. 
\qed \end{proof} \begin{definition} A meta-strategy $\Sigma$ for $\tree{S}$ has \textit{meta-hull condensation} iff whenever $\mtree{S}$ is by $\Sigma$ and $\vec{\Delta}: \mtree{T}\to \mtree{S}$ is a meta-tree embedding, $\mtree{T}$ is by $\Sigma$. \end{definition} \begin{remark}\label{meta-strategy remark} In the case that $\tree{S}=\{M\}$ (i.e. the trivial tree of length $1$ on a premouse $M$), a meta-strategy is just an iteration strategy for single normal trees. Moreover, meta-tree embeddings are tree embeddings (though not every tree embedding between trees on $M$ is a meta-tree embedding, because we only allow mapping non-plus-type extenders to non-plus-type extenders). \end{remark} \begin{proposition}\label{nice pulls back} Let $\Pi:\tree{S}\to \tree{T}$ be a non-dropping extended tree embedding. Let $\Sigma$ be a $(\lambda, \theta)$-strategy for $\tree{T}$. Define $\Sigma^\Pi$ by \[\mtree{S}\text{ is by }\Sigma^\Pi\text{ iff }\Pi\mtree{S}\text{ is by }\Sigma.\] Then $\Sigma^\Pi$ is a $(\lambda,\theta)$-strategy for $\tree{S}$. Moreover, if $\Sigma$ has meta-hull condensation, so does $\Sigma^\Pi$. \end{proposition} \begin{proof} It's straightforward to use the copying construction (Theorem \ref{copying}) to get that $\Sigma^\Pi$ is a $(\lambda,\theta)$-strategy for countable meta-trees on $\tree{S}$, so we just handle the \lq\lq moreover" part. Suppose that $\Sigma$ has meta-hull condensation. Let $\mtree{S}$ be by $\Sigma^\Pi$ and let $\vec{\Delta}=\langle u, v, \Gamma_\xi, \Delta_\xi\rangle:\mtree{T}\to \mtree{S}$ be a meta-tree embedding. By induction on $\text{lh} (\mtree{T})$ we define a meta-tree embedding $\vec{\Delta}^*:\Pi\mtree{T}\to \Pi\mtree{S}$. This shows that the definition of $\Pi \mtree{T}$ doesn't break down and, since $\Sigma$ has meta-hull condensation, that $\Pi\mtree{T}$ is by $\Sigma$. So $\mtree{T}$ is by $\Sigma^\Pi$, as desired.
Let $\mtree{S}=\langle \tree{S}_\xi, F_\xi, \Phi^{\eta,\xi}\rangle$, $\Pi \mtree{S}= \langle \tree{S}^*_\xi, F^*_\xi, {\Phi^*}^{\eta,\xi}\rangle$, and $\Pi^\mtree{S}_\xi: \tree{S}_\xi\to \tree{S}^*_\xi$ be the copy tree embeddings. Let $\mtree{T}= \langle \tree{T}_\xi,G_\xi, \Psi^{\eta,\xi}\rangle$, $\Pi\mtree{T} = \langle \tree{T}^*_\xi, G^*_\xi, {\Psi^*}^{\eta,\xi}\rangle$, and $\Pi^\mtree{T}_\xi:\tree{T}_\xi\to \tree{T}^*_\xi$ be the copy tree embeddings. We define $\vec{\Delta}^*=\langle u,v, \Gamma^*_\xi, \Delta^*_\xi\rangle$ by induction on $\text{lh} (\mtree{T})$ maintaining for all $\xi<\text{lh} (\mtree{T})$, \begin{enumerate} \item $\vec{\Delta}^*\upharpoonright(\Pi\mtree{T}\upharpoonright\xi+1)$ is an extended meta-tree embedding $\Pi\mtree{T}\upharpoonright\xi+1\to \Pi\mtree{S}\upharpoonright u(\xi)+1$ (in particular, $\Pi\mtree{T}\upharpoonright\xi+1$ hasn't broken down) and \item $\Pi^\mtree{S}_{v(\xi)}\circ\Gamma_\xi = \Gamma^*_\xi\circ \Pi^\mtree{T}_\xi$. \end{enumerate} Notice that we've demanded $\vec{\Delta}^*$ has the same $u$ and $v$ maps as $\vec{\Delta}$, so we just need to check that we can find the tree embeddings $\Gamma^*_\xi$ (we'll get $\Delta^*_\xi$ for free). Limits cause no trouble, so we'll just handle the successor case. Suppose $\eta=\mtree{T}\text{-pred} (\xi+1)$. Let $\eta^*=\mtree{S}\text{-pred} (u(\xi)+1)$ (so $\eta^*\in [v(\eta),u(\eta)]_\mtree{S}$ since $\vec{\Delta}$ is a meta-tree embedding). We need to see that the process of defining $\tree{T}^*_{\xi+1}$ doesn't break down and that there is a tree embedding $\Gamma^*_{\xi+1}:\tree{T}^*_{\xi+1}\to \tree{S}^*_{u(\xi)+1}$ completing the following diagram.
\[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { \tree{T}_{\xi+1} & {} & {} & \tree{S}_{u(\xi)+1} \\ {} & \tree{T}_{\eta} &\tree{S}_{\eta^*}\\ {} & \tree{T}^*_\eta & \tree{S}^*_{\eta^*} \\ \tree{T}^*_{\xi+1} & {} & {} & \tree{S}^*_{u(\xi)+1}\\ }; \path[-stealth] (m-1-1) edge node [above] {$\Gamma_{\xi+1}$} (m-1-4) edge node [left] {$\Pi^\mtree{T}_{\xi+1}$} (m-4-1) (m-1-4) edge node [right] {$\Pi^\mtree{S}_{u(\xi)+1}$} (m-4-4) (m-4-1) edge[dashed,->] node [below] {$\Gamma^*_{\xi+1}$} (m-4-4) (m-2-2) edge node [below] {$G_\xi$} (m-1-1) edge node [above] {$\Phi^{v(\eta),\eta^*}\circ \Gamma_\eta$} (m-2-3) edge node [left] {$\Pi^\mtree{T}_\eta$} (m-3-2) (m-2-3) edge node [below] {$F_{u(\xi)}$} (m-1-4) edge node [right] {$\Pi^\mtree{S}_{\eta^*}$} (m-3-3) (m-3-2) edge node [below] {${\Phi^*}^{v(\eta),\eta^*}\circ\Gamma^*_\eta$} (m-3-3) edge node [above] {$G^*_\xi$} (m-4-1) (m-3-3) edge node [above] {$F^*_{u(\xi)}$} (m-4-4) ; \end{tikzpicture}\] Of course, $\Gamma^*_{\xi+1}$ will be the copy tree embedding associated to ($\Delta^*_\xi$, ${\Phi^*}^{v(\eta),\eta^*}\circ \Gamma^*_\eta$, $G_\xi^*$, $F_{u(\xi)}^*$). Then, since all the trapezoids commute and the outer maps are given by the Shift Lemma, we get that the whole outer square commutes (as in the proof of the Shift Lemma, but now with tree embeddings). We leave it to the reader to check that the Shift Lemma applies. We then set $\Delta_{\xi+1}^*={\Phi^*}^{v(\xi+1), u(\xi+1)}\circ \Gamma_{\xi+1}^*$, as we must. At limit $\lambda$, let $b^*= [0,v(\lambda))_\mtree{S}= [0,v(\lambda))_{\Pi\mtree{S}}$. Let $b=[0,\lambda)_\mtree{T}$. We have $v^{-1}[b^*] = b$ since $\vec{\Delta}$ is a meta-tree embedding, so we get a tree embedding $\Gamma^*_\lambda$ from $\tree{T}^*_\lambda=\lim_b (\Pi\mtree{T}\upharpoonright\lambda)$ to $\lim_{b^*} (\Pi\mtree{S}\upharpoonright v(\lambda)) = \tree{S}^*_{v(\lambda)}$, as desired. We then continue as in the successor case.
\qed \end{proof} We also have the following easy proposition about the existence of meta-strategies with meta-hull condensation. \begin{proposition}\label{nice meta-strategy existence} Suppose that $\Sigma$ is a ($\lambda$, $\theta$)-strategy for $M$ with strong hull condensation and $\tree{S}$ is by $\Sigma$. Then the induced meta-strategy $\Sigma^*_\tree{S}$ has meta-hull condensation. \end{proposition} \begin{proof} Let $\vec\Delta= \langle u, v, \Gamma_\xi, \Delta_\xi\rangle :\mtree{T}\to \mtree{S}$ where $\mtree{S}$ is a meta-tree by $\Sigma^*_\tree{S}$. We just want to see that $\mtree{T}$ is by $\Sigma^*_\tree{S}$, so it's enough to show that every tree $\tree{T}_\xi$ is by $\Sigma$ (by the definition of $\Sigma^*$). For every $\xi<\text{lh}(\mtree{T})$, $\Gamma_\xi$ is a (total) extended tree embedding $\tree{T}_\xi\to\tree{S}_{v(\xi)}$. Since $\tree{S}_{v(\xi)}$ is by $\Sigma$ and $\Sigma$ has strong hull condensation, $\tree{T}_\xi$ is by $\Sigma$. So $\mtree{T}$ is by $\Sigma^*_\tree{S}$, as desired. \qed \end{proof} We'll now see some examples of meta-tree embeddings. \begin{proposition}\label{meta-tree embedding example prop} Let $\tree{S}$ be a plus tree of successor length, $\tree{T}$ and $\tree{T}^*$ normal trees on the last model of $\tree{S}$, and $\Psi: \tree{T}\to \tree{T}^*$ an extended tree embedding. Let $\mu$ be greatest such that $\mtree{V}(\tree{S},\tree{T}\upharpoonright \mu+1)$ is wellfounded and $\mu^*$ be greatest such that $\mtree{V}(\tree{S},\tree{T}^*\upharpoonright\mu^*+1)$ is wellfounded. Then $u^\Psi(\mu)\geq \mu^*$ and there is a unique partial meta-tree embedding with maximal domain $\vec \Delta: \mtree{V}(\tree{S},\tree{T}\upharpoonright\mu+1)\to \mtree{V}(\tree{S},\tree{T}^*\upharpoonright\mu^*+1)$ with $u$-map $u^\Psi$.
Moreover, for $\xi\leq\mu$, letting $R_\xi$ be the last model of $V(\tree{S}, \tree{T}\upharpoonright\xi+1)$ and $\sigma_\xi: M^\tree{T}_\xi\to R_\xi$ the quasi-normalization map, and for $\xi\leq \mu^*$, letting $R_\xi^*$ be the last model of $V(\tree{S},\tree{T}^*\upharpoonright\xi+1)$ and $\sigma^*_\xi:M^{\tree{T}^*}_\xi\to R_\xi^*$ the quasi-normalization map, the following diagram commutes. \begin{center} \begin{tikzcd} M^{\tree{T}}_\xi \arrow[r, "t^\Psi_\xi"] \arrow[d, "\sigma_\xi"'] & M^{\tree{T}^*}_{u(\xi)} \arrow[d,"\sigma^*_{u(\xi)}"]\\ R_\xi \arrow[r, "t^{\Delta_\xi}_\infty"'] & R^*_{u(\xi)} \end{tikzcd} \end{center} \end{proposition} \begin{remark} In particular, if $\Sigma$ is a meta-strategy for $\tree{S}$ with meta-hull condensation and $\mtree{V}(\tree{S},\tree{T}^*)$ is by $\Sigma$, then $\mu^*+1=\text{lh} (\tree{T}^*)$, $\mu+1=\text{lh}(\tree{T})$, $\vec \Delta: \mtree{V}(\tree{S},\tree{T})\to \mtree{V}(\tree{S},\tree{T}^*)$ is a total extended meta-tree embedding, and $\mtree{V}(\tree{S},\tree{T})$ is by $\Sigma$. \end{remark} \begin{proof} Let $\mtree{V}= \langle \tree{V}_\xi, F_\xi, \Phi^{\xi,\eta}\rangle = \mtree{V}(\tree{S},\tree{T})$ and $\mtree{V}^*= \langle \tree{V}^*_\xi, F^*_\xi, {\Phi^*}^{\xi,\eta}\rangle = \mtree{V}(\tree{S},\tree{T}^*)$. We have that $F_\xi= \sigma_\xi(E^\tree{T}_\xi)$, $F^*_\xi= \sigma^*_\xi(E^{\tree{T}^*}_\xi)$. Our meta-tree embedding $\vec\Delta$ will have $v=v^\Psi$ and $u=u^\Psi$. We just need to see that this works, by induction. Using the notation of the \lq\lq moreover" clause, we have that $R_\xi$ is the last model of $\tree{V}_\xi$ and $R^*_\xi$ is the last model of $\tree{V}^*_\xi$. Also let $t_{\eta,\xi}$ be the last $t$-map of $\Phi^{\eta,\xi}$ when $\eta\leq_\mtree{V}\xi$ and $t^*_{\eta,\xi}$ the last $t$-map of ${\Phi^*}^{\eta,\xi}$ when $\eta\leq_{\mtree{V}^*}\xi$.
We maintain by induction on $\xi$ that \begin{enumerate} \item $\vec\Delta\upharpoonright\xi$ is an extended meta-tree embedding from $\mtree{V}\upharpoonright\xi+1\to \mtree{V}^*\upharpoonright u(\xi)+1$, and \item the following diagram commutes. \begin{center} \begin{tikzcd} M^{\tree{T}}_\xi \arrow[r, "t^\Psi_\xi"] \arrow[d, "\sigma_\xi"'] & M^{\tree{T}^*}_{u(\xi)} \arrow[d,"\sigma^*_{u(\xi)}"]\\ R_\xi \arrow[r, "t^{\Delta_\xi}_\infty"'] & R^*_{u(\xi)} \end{tikzcd} \end{center} \end{enumerate} Note that the maps in (2) may be partial, so we mean that they have the same domain and commute. We start with the successor case. Let $\eta=\mtree{V}\text{-pred}(\xi+1)=\tree{T}\text{-pred}(\xi+1)$ and $\eta^*=\mtree{V}^*\text{-pred}(u(\xi)+1)=\tree{T}^*\text{-pred}(u(\xi)+1)$. Since $\Psi$ is a tree embedding and $\tree{T}^*$ has the same tree order as $\mtree{V}^*$, we get that $\eta^*\in[v(\eta), u(\eta)]_{\mtree{V}^*}$. By condition (2) of our induction hypothesis, we get that $t^{\Delta_\xi}_\infty(F_\xi)=F^*_{u(\xi)}$. The agreement properties of meta-tree embeddings (Proposition \ref{meta-tree agreement}) imply that the Shift Lemma applies to ($\Delta_\xi$, ${\Phi^*}^{v(\eta),\eta^*}\circ \Gamma_\eta$, $F_\xi$, $F^*_{u(\xi)}$), so that we may let $\Gamma_{\xi+1}$ be given by the Shift Lemma, as desired. We have that $v(\xi+1)=u(\xi)+1\leq_{\mtree{V}^*}u(\xi+1)$, using again that $\tree{T}^*$ has the same tree order as $\mtree{V}^*$ and $\Psi$ is a tree embedding, so we may let $\Delta_{\xi+1}={\Phi^*}^{v(\xi+1),u(\xi+1)}\circ \Gamma_{\xi+1}$, as we must. This assignment clearly maintains (1), so we just need to see that (2) holds as well. We have the following diagram.
\[\begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=4em,column sep=4em,minimum width=4em] { M^{\tree{T}}_{\xi+1} & {} & {} & M^{\tree{T}^*}_{v(\xi+1)} & M^{\tree{T}^*}_{u(\xi+1)} \\ {} & M^{\tree{T}}_{\eta} & M^{\tree{T}^*}_{\eta^*}\\ {} & R_\eta & R^*_{\eta^*} \\ R_{\xi+1}& {} & {} & R^*_{v(\xi+1)} & R^*_{u(\xi+1)}\\ }; \path[-stealth] (m-1-1) edge node [above] {$s_{\xi+1}^\Psi$} (m-1-4) edge node [left] {$\sigma_{\xi+1}$} (m-4-1) (m-1-4) edge node [above] {$\hat\imath^{\tree{T}^*}_{v(\xi+1),u(\xi+1)}$} (m-1-5) edge node [right] {$\sigma^*_{v(\xi+1)}$} (m-4-4) (m-1-5) edge node [right] {$\sigma^*_{u(\xi+1)}$} (m-4-5) (m-4-1) edge node [below] {$\bar{\tau}_{\xi+1}$} (m-4-4) (m-4-4) edge node [below] {$t^*_{v(\xi+1),u(\xi+1)}$} (m-4-5) (m-2-2) edge node [below] {$E^{\tree{T}}_\xi$} (m-1-1) edge node [above] {$\hat\imath^{\tree{T}^*}_{v(\eta),\eta^*}\circ s^\Psi_\eta$} (m-2-3) edge node [left] {$\sigma_\eta$} (m-3-2) (m-2-3) edge node [below] {$E^{\tree{T}^*}_{u(\xi)}$} (m-1-4) edge node [right] {$\sigma^*_{\eta^*}$} (m-3-3) (m-3-2) edge node [below] {$t^*_{v(\eta),\eta^*}\circ t^{\Gamma_\eta}_\infty$} (m-3-3) edge node [above] {$t_{\eta,\xi+1}$} (m-4-1) (m-3-3) edge node [above] {$t^*_{\eta^*,v(\xi+1)}$} (m-4-4) ; \end{tikzpicture}\] We leave it to the reader to check that everything commutes. This relies quite heavily on properties of the quasi-normalization maps, which can be found in \cite{nitcis}. For example, the leftmost trapezoid commutes for free because of how we define the map $\sigma_{\xi+1}$. \qed \end{proof} We now turn to another source of meta-tree embeddings: the analogue of embedding normalization for meta-trees. We start with the one-step case.
Given meta-trees $\mtree{S}$ and $\mtree{T}$ of successor lengths, $H$ an extender such that $H^-$ is on the last model of the last tree of $\mtree{T}$, we want to define a meta-tree $\mtree{V}=\mtree{V}(\mtree{S}, \mtree{T}, H)$ and an extended meta-tree embedding $\vec{\Delta}=\vec{\Delta}^{\mtree{V}(\mtree{S},\mtree{T},H)}$ from $\mtree{S}$ into $\mtree{V}$. Moreover, we will have that the last tree of $\mtree{V}$ is just $V(\tree{S}_\infty, \tree{T}_{\infty}, H)$ and $\Delta_\infty$, the last $\Delta$-map of $\vec{\Delta}$, is $\Phi^{V(\tree{S}_\infty, \tree{T}_\infty, H)}$, so that, in particular, this is really producing an analogue of full normalization. For a meta-tree $\mtree{S}=\langle \tree{S}_\xi, F_\xi\rangle$ of successor length and $H$ on the sequence of the last model of $\mtree{S}$, we define \begin{align*} a(\mtree{S}, H)&= \text{least $\xi$ such that $H^-$ is on the sequence of the last model of $\tree{S}_\xi$}\\ b(\mtree{S}, H) &= \text{ least $\xi$ such that $\xi+1=\text{lh}(\mtree{S})$ or $\xi+1<\text{lh}(\mtree{S})$ and $\text{crit} (H)<\hat\lambda(F_\xi).$} \end{align*} \noindent Note that $a(\mtree{S},H)$ is also the least $\xi$ such that $\xi+1=\text{lh}(\mtree{S})$ or $\xi+1<\text{lh}(\mtree{S})$ and $\alpha(\tree{S}_\xi, F_\xi)\leq \alpha(\tree{S}_\infty, H)$. Let $\mtree{S} = \langle \tree{S}_\xi, F_\xi, \Phi_{\eta,\xi}\rangle$, $\mtree{T}= \langle \tree{T}_\xi, G_\xi,\Psi_{\eta,\xi}\rangle$, and $H$ on the sequence of the last model of $\mtree{T}$. Let $a=a(\mtree{T}, H)$ and $b=b(\mtree{T}, H)$. Suppose that $\mtree{S}\upharpoonright b+1= \mtree{T}\upharpoonright b+1$ and if $b+1<\text{lh}(\mtree{S})$, then $\text{dom}(H)\triangleleft M_\infty^\mtree{S}|\text{lh}(F_b)$. We define $\mtree{V} = \langle \tree{V}_\xi, H_\xi,\ldots\rangle$, ordinal-valued maps $u,v$, extended tree embeddings $\Gamma_\xi: \tree{S}_\xi \to \tree{V}_{v(\xi)}$, and partial extended tree embeddings $\Delta_\xi: \tree{S}_\xi \to \tree{V}_{u(\xi)}$ inductively as follows.
First, we let $\mtree{V}\upharpoonright a+1= \mtree{T}\upharpoonright a+1$. We then put \[u(\xi)=\begin{cases} \xi &\text{if } \xi<b\\ a+1+ (\xi-b) &\text{if }\xi\geq b \end{cases},\] and $v(\xi)= \sup\{u(\eta)+1\,|\,\eta<\xi\}$. We maintain by induction on $\xi\geq b$ that \begin{enumerate} \item $\vec{\Delta}\upharpoonright\xi+1=\langle u\upharpoonright\xi+1, v\upharpoonright\xi+1, \{\Gamma_\eta\}_{\eta\leq \xi}, \{\Delta_\eta\}_{\eta\leq\xi}\rangle$ is an extended meta-tree embedding from $\mtree{S}\upharpoonright\xi+1$ into $\mtree{V}\upharpoonright u(\xi)+1$, \item $\tree{V}_{u(\xi)} = V(\tree{S}_\xi, \tree{T}_a, H)$ and $\Delta_\xi = \Phi^{V(\tree{S}_\xi, \tree{T}_a, H)}$. \end{enumerate} There is a lot built into (1); for example, we must have that $\Gamma_\xi = \Delta_\xi$ for $\xi>b$ by our choice of $u$. We just need to see that this works. It's easy to see that we get the base case $\xi=b$ for free by setting $H_a=H$, so we just need to deal with successors and limits $\xi>b$. We start with successors. \paragraph{Successor case.} Suppose our induction hypotheses hold up to $\xi\geq b$. \\ Let $\eta=\mtree{S}\text{-pred} (\xi+1)$. There are two subcases depending on the critical point of $F_\xi$. \paragraph{Subcase 1.} $\text{crit} (F_\xi)<\text{crit} (H)$.\\ In this case $\eta\leq b$, $\tree{V}_\eta=\tree{S}_\eta$, $\Gamma_\eta=Id$, and $\text{crit} (H_{u(\xi)})=\text{crit} (F_\xi)$. Note that $t_\infty^{\Delta_\xi}$ has critical point $\text{crit} (H)$, by our induction hypothesis (2), so it fixes $\text{crit}(F_\xi)$. Since $\mtree{V}\upharpoonright b+1=\mtree{S}\upharpoonright b+1$, we must put $\eta= \mtree{V}\text{-pred} (u(\xi)+1)$, as dictated by normality. We let $\tree{V}_{u(\xi)+1}= V(\tree{V}_\eta, \tree{V}_{u(\xi)}, H_{u(\xi)})$ and, following the definition of meta-tree embedding, we let $\Gamma_{\xi+1}$ be the copy tree embedding associated to ($\Delta_\xi$, $\Gamma_\eta = Id$, $F_\xi$, $H_{u(\xi)}$).
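For orientation, the displacement behavior of the maps $u$ and $v$ defined at the start of this construction can be seen in a toy finite configuration; the values $b=2$ and $a=4$ below are purely illustrative. The definition gives \begin{align*} u&\colon 0,1,2,3,4 \mapsto 0,1,5,6,7,\\ v&\colon 0,1,2,3,4 \mapsto 0,1,2,6,7, \end{align*} so that $\text{crit}(u)=b$, $v(b)=b$, $u(b)=a+1$, and $u(\xi)=v(\xi)$ for all $\xi>b$, in accordance with the observation that $\Gamma_\xi=\Delta_\xi$ for $\xi>b$.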
We leave it to the reader to check that our induction hypothesis (1) guarantees that the Shift Lemma applies. To see (2), we show that $\tree{V}_{u(\xi)+1}= V(\tree{S}_{\xi+1}, \tree{T}_a, H)$ and $\Gamma_{\xi+1}= \Phi^{V(\tree{S}_{\xi+1}, \tree{T}_a, H)}$ simultaneously. Let $\bar{\tree{V}}= V(\tree{S}_{\xi+1}, \tree{T}_a, H)$ and $\bar \Gamma = \Phi^{V(\tree{S}_{\xi+1}, \tree{T}_a, H)}$. Let $\bar\alpha = \alpha_0(F_\xi, \tree{S}_\xi)$ and $\alpha = \alpha_0(H_{u(\xi)}, \tree{V}_{u(\xi)})$. Let $\bar\Phi = \Phi^{V(\tree{S}_\eta, \tree{S}_\xi, F_\xi)}$ and $\Phi = \Phi^{V(\tree{V}_\eta,\tree{V}_{u(\xi)}, H_{u(\xi)})}$. We show $\bar\Gamma= \Gamma_{\xi+1}$ by showing it satisfies the conditions which uniquely determine $\Gamma_{\xi+1}$ in the conclusion of the Shift Lemma. First we show that $\bar\Gamma\upharpoonright \bar\alpha +1\approx\Gamma_\xi\upharpoonright\bar\alpha+1$ and $u^{\bar\Gamma}(\bar\alpha)=\alpha$, which guarantees that $\bar\Gamma\upharpoonright\bar\alpha+2 \approx \Gamma_{\xi+1}\upharpoonright\bar\alpha+2$. From here, we'll show simultaneously by induction on $\zeta<\text{lh} (\tree{S}_\eta)$ that $\bar\Gamma \circ \bar\Phi\upharpoonright \zeta+1\approx \Phi\upharpoonright\zeta+1$ and $\bar{\tree{V}}\upharpoonright u^{\Phi}(\zeta)+1 = \tree{V}_{u(\xi)+1}\upharpoonright u^{\Phi}(\zeta)+1$, which establishes $\bar\Gamma =\Gamma_{\xi+1}$. Note that we're using in several places that $\Gamma_\eta= Id$. We have that $\tree{S}_{\xi+1}\upharpoonright\bar\alpha+1=\tree{S}_\xi\upharpoonright\bar\alpha+1$, so that $\bar {\tree{V}}\upharpoonright \alpha+1= \tree{V}_{u(\xi)}\upharpoonright\alpha+1$ and $\bar\Gamma\upharpoonright\bar\alpha+1\approx\Gamma_\xi\upharpoonright\bar\alpha+1$, as both are just given by the one-step quasi-normalization $V(\tree{S}_\xi\upharpoonright\bar\alpha+1, \tree{T}_a, H)$. Moreover, $u^{\bar\Gamma}(\bar\alpha)= u^{\Gamma_\xi}(\bar\alpha)$, since this is just the $u$-map of embedding normalization by $H$.
We have that $F_\xi=E^{\tree{S}_{\xi+1}}_{\bar\alpha}$ and so $t^{\Gamma_\xi}_{\bar\alpha}=t^{\bar\Gamma}_{\bar\alpha}$ agrees with $t^{\Delta_\xi}_\infty$ on $F_\xi$. It follows that $H_{u(\xi)}= E^{\bar{\tree{V}}}_{u^{\bar\Gamma}(\bar\alpha)}$, so that $u^{\bar\Gamma}(\bar\alpha)=\alpha$. This establishes $\bar\Gamma\upharpoonright\bar\alpha+2=\Gamma_{\xi+1}\upharpoonright\bar\alpha+2$. For the rest, we show by induction on $\zeta<\text{lh} (\tree{S}_\eta)$ that $\bar\Gamma\circ \bar\Phi\upharpoonright\zeta+1 \approx \Phi\upharpoonright\zeta+1$, $u^{\bar\Gamma\circ \bar\Phi}(\zeta)=u^\Phi(\zeta)$, and $\bar{\tree{V}}\upharpoonright u^\Phi(\zeta)+1=\tree{V}_{u(\xi)+1}\upharpoonright u^\Phi(\zeta)+1$. Let $\bar\beta= \beta(F_\xi,\tree{S}_\xi)$ and $\beta=\beta(H_{u(\xi)}, \tree{V}_{u(\xi)})$. We have that $\bar\beta=\beta$ by our case hypothesis and so $v^{\bar\Phi}\upharpoonright \beta +1= v^{\Phi}\upharpoonright\beta+1=id$. Also by our case hypothesis we have $v^{\bar\Gamma}\upharpoonright\beta+1=id$, so $\bar\Gamma \circ \bar\Phi\upharpoonright\beta+1 = \Phi\upharpoonright\beta+1= Id_{\tree{S}_\eta\upharpoonright\beta+1}$. We have $u^{\bar\Phi}(\beta)=\bar\alpha+1$ so $u^{\bar\Gamma\circ \bar\Phi}(\beta)= u^{\bar\Gamma}(\bar\alpha+1)=\alpha+1$ (since $u^{\bar\Gamma}$ agrees with $v^{\bar\Gamma}$ on $\bar\alpha+1$ and $u^{\bar\Gamma}(\bar\alpha)= \alpha$, as already established). We also clearly have $u^\Phi(\beta) = \alpha+1$ (since $\Phi$ is just one-step normalization by $H_{u(\xi)}$), so $u^{\bar\Gamma\circ \bar\Phi}(\beta)= u^\Phi(\beta)$. Moreover, we already established $\bar{\tree{V}}\upharpoonright\alpha+1 = \tree{V}_{u(\xi)}\upharpoonright\alpha+1 = \tree{V}_{u(\xi)+1}\upharpoonright\alpha+1$, as desired. Suppose now $\zeta\geq\beta$ and our induction hypothesis holds up to $\zeta$.
We have that the exit extenders $E^{\bar{\tree{V}}}_{u^\Phi(\zeta)}$ and $E^{\tree{V}_{u(\xi)+1}}_{u^\Phi(\zeta)}$ are equal since they are both images of $E^{\tree{S}_\eta}_\zeta$ under the same $t$-map (our induction hypothesis implies the $\zeta$th $t$-maps of $\bar\Gamma\circ\bar\Phi$ and $\Phi$ are the same). It follows that $\bar{\tree{V}}$ and $\tree{V}_{u(\xi)+1}$ agree up to $v^\Phi(\zeta+1)=u^\Phi(\zeta)+1$ and $\bar\Gamma\circ\bar\Phi$ and $\Phi$ agree up to $\zeta+2$. Since $\zeta+1\geq \beta$, we get that $u^{\bar\Phi}$ and $u^{\Phi}$ agree with their corresponding $v$-maps on $\zeta+1$. Moreover, since $u^{\Phi}(\zeta)>\alpha>\alpha_0(H,\tree{T}_a)$ (using here that $\xi\geq b$, so $u(\xi)>a$), we have that $u^{\bar\Gamma}$ agrees with $v^{\bar\Gamma}$ above $u^{\bar\Phi}(\zeta)$. So $u^{\bar\Gamma\circ \bar\Phi}(\zeta+1)= v^{\bar\Gamma\circ \bar \Phi}(\zeta+1)= v^\Phi(\zeta+1)=u^\Phi(\zeta+1)$ and the trees agree this far, too. Both trees must pick the same branches at limit stages: at limits $\lambda= v^\Phi(\bar\lambda)= v^{\bar\Gamma\circ\bar\Phi}(\bar\lambda)$, both trees must pick the image of $[0,\bar\lambda)_{\tree{S}_\eta}$ under the same map. So as long as we don't reach illfounded models, this agreement continues through limits. This finishes Subcase 1. \paragraph{Subcase 2.} $\text{crit} (F_\xi)\geq \text{crit} (H)$.\\ In this case $\eta\geq b$ and $\text{crit} (H_{u(\xi)}) \geq \hat\lambda (H)$. It follows that $a+1\leq u(\eta)= \mtree{V}\text{-pred}(u(\xi)+1)$. Where we used $\Gamma_\eta=Id$ above, we must now use $\Delta_\eta$, which is, in particular, \textit{not} the identity. We let $\tree{V}_{u(\xi)+1}= V(\tree{V}_{u(\eta)}, \tree{V}_{u(\xi)}, H_{u(\xi)})$ and $\Gamma_{\xi+1}$ the copy tree embedding associated to ($\Delta_\xi$, $\Delta_\eta$, $F_\xi$, $H_{u(\xi)}$). As in the previous case we let $\bar{\tree{V}} = V(\tree{S}_{\xi+1},\tree{T}_a, H)$ and $\bar\Gamma=\Phi^{V(\tree{S}_{\xi+1},\tree{T}_a,H)}$.
Again we must show that $\bar\Gamma$ satisfies the properties in the conclusion of the Shift Lemma which uniquely determine $\Gamma_{\xi+1}$. Getting $\bar\Gamma\upharpoonright\bar\alpha+2=\Gamma_{\xi+1}\upharpoonright\bar\alpha+2$ is the same as before. For the rest, we now want to see that $\bar\Gamma\circ \bar\Phi = \Phi\circ \Delta_\eta$, where $\bar\Phi$ is as before and $\Phi=\Phi^{V(\tree{V}_{u(\eta)},\tree{V}_{u(\xi)},H_{u(\xi)})}$. The argument here is pretty much the same as in the previous case: we get agreement up to $\bar\beta+2$ for free and then use that the remainder of our trees and tree embeddings are given by images of $\tree{S}_\eta$ under the same maps. We leave the details to the reader. This finishes Subcase 2 and the successor case. \paragraph{Limit case.} Suppose our induction holds below $\lambda>b$.\\ We must let $\Gamma_\lambda$ be the unique extended tree embedding from $\tree{S}_\lambda=\lim_{[0,\lambda)_\mtree{S}} (\mtree{S}\upharpoonright\lambda)$ to $\tree{V}_\lambda= \lim_{[0, v(\lambda))_{\mtree{V}}} (\mtree{V}\upharpoonright v(\lambda))$ which commutes with the rest of our embeddings. By Lemma \ref{m-shift direct limits}, we must have that $\Gamma_\lambda = \Phi^{V(\tree{S}_\lambda, \tree{T}_a, H)}$. This finishes the one-step case.\\ In our main comparison result, we will use analogues of the Factor Lemma and Shift Lemma for meta-tree embeddings, though we don't need to go on to develop analogues of the factorization analysis or copying construction. \begin{lemma}[Factor Lemma] Let $\vec{\Psi}: \mtree{S}\to \mtree{T}$ be an (extended) meta-tree embedding such that $\vec{\Psi}\neq Id$. Let $b=\text{crit}(u^{\vec{\Psi}})$ and $a+1$ be the successor of $v^{\vec{\Psi}}(b)=b$ in $(v^{\vec{\Psi}}(b), u^{\vec{\Psi}}(b)]_\mtree{T}$. Suppose that $\text{dom}(F_a^\mtree{T})\triangleleft M_\infty^{\tree{S}_b}|\text{lh}(F_b^\mtree{S})$.
Then $\mtree{V}(\mtree{S}, \mtree{T}\upharpoonright a+1, F_a^\mtree{T})$ is defined and wellfounded and there is a unique (extended) meta-tree embedding $\vec{\Pi}:\mtree{V}(\mtree{S},\mtree{T}\upharpoonright a+1, F_a^\mtree{T})\to \mtree{T}$ such that $u^{\vec{\Pi}}\upharpoonright a+1=id$ and $\vec{\Psi}=\vec{\Pi}\circ \vec{\Delta}^{\mtree{V}(\mtree{S},\mtree{T}\upharpoonright a+1, F_a^\mtree{T})}$. \end{lemma} \begin{proof} Let $\vec{\Psi} = \langle u^{\vec{\Psi}}, v^{\vec{\Psi}}, \{\Phi_\xi\}, \{\Psi_\xi\}\rangle$. We'll define $\vec{\Pi} = \langle u^{\vec{\Pi}}, v^{\vec{\Pi}}, \{ \Lambda_\xi\}, \{\Pi_\xi\}\rangle$, our desired meta-tree embedding, by induction. Note that our hypotheses immediately give that $\mtree{V}(\mtree{S},\mtree{T}\upharpoonright a+1, F_a^\mtree{T})$ is defined. Let $\mtree{V}=\mtree{V}(\mtree{S},\mtree{T}\upharpoonright a+1, F_a^\mtree{T})$ and $\vec{\Delta} = \vec{\Delta}^{\mtree{V}(\mtree{S},\mtree{T}\upharpoonright a+1, F_a^\mtree{T})}=\langle u^{\vec{\Delta}},v^{\vec{\Delta}}, \{\Gamma_\xi\}, \{\Delta_\xi\}\rangle$. We can now define our meta-tree embedding $\vec{\Pi}$. To start, we define the $u$-map of $\vec{\Pi}$ like we did in the tree embedding Factor Lemma: \begin{equation*} u^{\vec{\Pi}}(\xi)=\begin{cases} \xi &\text{ if } \xi<a+1\\ u^{\vec{\Psi}}\circ(u^{\vec{\Delta}})^{-1}(\xi)&\text{ if } \xi\geq a+1. \end{cases} \end{equation*} So we put $\Pi_\xi = \Lambda_\xi = Id$ for $\xi<a+1$ and $\Lambda_{a+1} = Id$. All $\xi\geq a+1$ are in the range of $u^{\vec{\Delta}}$, so we find the remaining $\Lambda_\xi$ and $\Pi_\xi$ as follows. First let $\alpha_0=\alpha_0(\tree{T}_a, F_a^\mtree{T})$ and $\beta=\beta(\tree{T}_a, F_a^\mtree{T})$. We have that $\tree{S}_b=\tree{T}_b$, $\Phi_b= Id$, and $\Psi_{b}=\Phi_{b, u^{\vec{\Psi}}(b)}^\mtree{T}$. So $\Psi_b$ is an inflationary tree embedding with $\text{crit}(u^{\Psi_b})=\beta$, $u^{\Psi_b}(\beta)=\alpha_0+1$, and first factor $F_a^\mtree{T}$.
In particular, either $\beta+1=\text{lh}(\tree{S}_b)$, or $\beta+1<\text{lh}(\tree{S}_b)=\text{lh}(\tree{T}_b)$, $E_\beta^{\tree{S}_b}=E_\beta^{\tree{T}_b}$, and $\text{dom}(F_a^\mtree{T})\triangleleft M_\beta^{\tree{S}_b}|\text{lh}(E_\beta^{\tree{S}_b})$. Now let $\xi>b$. The agreement properties of meta-tree embeddings give that $\text{crit} (u^{\Phi_\xi})=\beta$ and $\alpha_0+1$ is the successor of $\beta$ in $(\beta, u^{\Phi_\xi}(\beta)]_{\tree{T}_{v^{\vec{\Psi}}(\xi)}}$. Moreover, $E_{\alpha_0}^{\tree{T}_{v^{\vec{\Psi}}(\xi)}}=F_a^\mtree{T}$ and either $E_\beta^{\tree{S}_\xi}=E_\beta^{\tree{S}_b}$ or else $\beta=\alpha_0(\tree{S}_b, F_b^\mtree{S})$ and $E_\beta^{\tree{S}_\xi}=F_b^\mtree{S}$. In either case, $\text{dom}(F_a^\mtree{T})\triangleleft M_\beta^{\tree{S}_\xi}|\text{lh}(E_\beta^{\tree{S}_\xi})$, so that the tree embedding Factor Lemma applies to $\Phi_\xi$, i.e. there is an extended tree embedding $\Lambda_{u^{\vec{\Delta}}(\xi)}:\tree{V}_{u^{\vec{\Delta}}(\xi)}\to \tree{T}_{u^{\vec{\Psi}}(\xi)}$ such that $\Lambda_{u^{\vec{\Delta}}(\xi)}\circ \Delta_\xi = \Phi_\xi$. Finally, for $\xi\geq b$ we define $\Pi_{u^{\vec{\Delta}}(\xi)} = \Phi^\mtree{T}_{v^{\vec{\Pi}}\circ u^{\vec{\Delta}}(\xi), u^{\vec{\Psi}}(\xi)}\circ \Lambda_{u^{\vec{\Delta}}(\xi)}$, as we must. To finish, one needs to check that this really is a meta-tree embedding. This is straightforward, by induction. We leave the details to the reader. \qed \end{proof} Because we demanded that meta-trees are normal, we have used an analogue of embedding normalization instead of the quasi-normalization in the meta-tree normalization procedure just introduced. We mentioned above that one \textit{cannot} prove that $\alpha(\tree{T},G)\in [v(\alpha(\tree{S},F)), u(\alpha(\tree{S},F))]_\tree{T}$ for an arbitrary extended tree embedding $\Phi:\tree{S}\to \tree{T}$ and extenders $F$, $G$ such that $F^-$ is on the $M^\tree{S}_\infty$-sequence and $G=t_\infty(F)$.
We needed to move to the $\alpha_0$'s to have this property. Similarly, we cannot prove $a(\mtree{T},G)\in [v(a(\mtree{S},F)), u(a(\mtree{S},F))]_\mtree{T}$ for an arbitrary meta-tree embedding $\vec{\Delta}:\mtree{S}\to \mtree{T}$ and extenders $F$, $G$ such that $F^-$ is on the $M^\mtree{S}_\infty$-sequence and $G=t^{\Delta_\infty}_\infty(F)$. Still, this condition will be met in our application of the meta-tree embedding Shift Lemma, so we just add it as an additional hypothesis, (2), below. \begin{lemma}[Shift Lemma] Let $\vec{\Psi}=\langle u^{\vec{\Psi}}, v^{\vec{\Psi}}, \{\Phi_\xi\}_{\xi<\text{lh}(\mtree{S})},\{\Psi_\xi\}_{\xi<\text{lh}(\mtree{S})}\rangle:\mtree{S}\to \mtree{T}$ and $\vec{\Pi}=\langle u^{\vec{\Pi}}, v^{\vec{\Pi}}, \{\Lambda_\xi\}_{\xi<\text{lh}(\mtree{U})},\{\Pi_\xi\}_{\xi<\text{lh}(\mtree{U})}\rangle:\mtree{U}\to \mtree{V}$ be extended meta-tree embeddings and let $F,G$ be extenders such that $F^-$ is on the $M_\infty^\mtree{S}$-sequence and $G^-$ is on the $M_\infty^\mtree{T}$-sequence. Let $a=a(\mtree{S}, F)$, $b=b(\mtree{S},F)$, $a^*=a(\mtree{T},G)$, and $b^*=b(\mtree{T},G)$.
Suppose \begin{enumerate} \item $M_\infty^\mtree{S}|\text{lh}(F)\trianglelefteq\text{dom}(t^{\Psi_\infty}_\infty)$ and $G=t_\infty^{\Psi_\infty}(F)$, \item $a^*\in [v^{\vec{\Psi}}(a), u^{\vec{\Psi}}(a)]_\mtree{T}$ and $G=t_\infty^{{\Phi^\mtree{T}_{v^{\vec{\Psi}}(a),a^*}}\circ \Phi_a}(F)$, \item $\vec{\Psi}\upharpoonright b+1\approx \vec{\Pi}\upharpoonright b+1$, \item $\mtree{T}\upharpoonright b^*+1=\mtree{V}\upharpoonright b^*+1$, \item $b^*\in [v^{\vec{\Pi}}(b), u^{\vec{\Pi}}(b)]_\mtree{V}$ and $t^{\Pi_\infty}_\infty\upharpoonright\text{dom}(F)\cup\{F\}=t^{\Phi^\mtree{V}_{v^{\vec{\Pi}}(b), b^*}\circ \Lambda_b}_\infty \upharpoonright\text{dom}(F)\cup\{F\}$, \item if $b+1<\text{lh}(\mtree{U})$, then $\text{dom}(F)\triangleleft M_\infty^\mtree{U}|\text{lh}(F_b^\mtree{U})$, and \item if $b^*+1<\text{lh}(\mtree{V})$, then $\text{dom}(G)\triangleleft M_\infty^\mtree{V}|\text{lh}(F_{b^*}^\mtree{V})$. \end{enumerate} Then $\mtree{V}(\mtree{U},\mtree{S},F)$ and $\mtree{V}(\mtree{V},\mtree{T},G)$ are defined and, letting $\mu$ be the greatest ordinal such that $\mtree{V}(\mtree{U},\mtree{S},F)\upharpoonright \mu$ is wellfounded and $\mu^*$ be the greatest ordinal such that $\mtree{V}(\mtree{V},\mtree{T}, G)\upharpoonright\mu^*$ is wellfounded, there is a unique partial meta-tree embedding $\vec{\Delta}=\langle v^{\vec{\Delta}}, u^{\vec{\Delta}}, \{\Gamma_\xi\}, \{\Delta_\xi\}\rangle: \mtree{V}(\mtree{U},\mtree{S},F)\upharpoonright \mu \to \mtree{V}(\mtree{V},\mtree{T}, G)\upharpoonright\mu^*$ with maximal domain such that \begin{enumerate} \item $\vec{\Delta}\upharpoonright a+1\approx \vec{\Psi} \upharpoonright a+1$, \item $u^{\vec{\Delta}}(a)=a^*$, and \item $\vec{\Delta}\circ \vec{\Delta}^{\mtree{V}(\mtree{U},\mtree{S},F)} =\vec{\Delta}^{\mtree{V}(\mtree{V},\mtree{T},G)}\circ \vec{\Pi}$ (on their common domain).
\end{enumerate} Moreover, if $\mtree{V}(\mtree{V},\mtree{T},G)$ is wellfounded, then $\mtree{V}(\mtree{U},\mtree{S},F)$ is wellfounded and $\vec{\Delta}$ is a (total) extended meta-tree embedding from $\mtree{V}(\mtree{U},\mtree{S},F)$ into $\mtree{V}(\mtree{V},\mtree{T},G)$. If $\mtree{V}(\mtree{V},\mtree{T},G)$ is wellfounded and also $\vec{\Pi}$ is non-dropping, then $\vec{\Delta}$ is a non-dropping extended meta-tree embedding. \end{lemma} We omit the proof because one can essentially copy the proof of the tree embedding Shift Lemma, above, using that lemma everywhere we used the ordinary Shift Lemma in that proof. Alternatively, one can give a somewhat simpler proof by using that the meta-tree embedding normalization coincides with full normalization, in the sense discussed above.\\ For normalizing a stack of meta-trees $\langle \mtree{S},\mtree{T}\rangle$, we'll need to talk about direct limits of systems of meta-trees under meta-tree embeddings. Our analysis of direct limits of trees under extended tree embeddings from \S1.2 carries over to meta-trees under meta-tree embeddings in the obvious way. \begin{definition} A \textit{directed system of meta-trees} is a system $\mathbb{D}=\langle\{\mtree{T}_a\}_{a\in A},\{\vec{\Delta}^{a,b}\}_{a\preceq b}\rangle$, where $\preceq$ is a directed partial order on some set $A$ and \begin{enumerate} \item[(a)] for any $a\in A$, $\mtree{T}_a$ is a simple meta-tree of successor length, \item[(b)] for any $a,b\in A$ with $a\prec b$, $\vec{\Delta}^{a,b}: \mtree{T}_a\to \mtree{T}_b$ is an extended meta-tree embedding, \item[(c)] for any $a,b,c\in A$ such that $a\preceq b\preceq c$, $\vec{\Delta}^{a,c}= \vec{\Delta}^{b,c}\circ \vec{\Delta}^{a,b}$. \end{enumerate} \end{definition} We define $\lim \mathbb{D}$ just as before, except we replace the parts of the tree embeddings with the corresponding parts of our meta-tree embeddings, e.g.
we form $u$-threads $x$ using the $u_{a,b}$ and form trees $\tree{T}_x$ by taking direct limits along the $\Delta^{a,b}_{x(a)}$ instead of the $t$-maps, provided that enough of these are total. We also define systems $\vec{\Pi}^a$ which, when the direct limit is wellfounded, are extended meta-tree embeddings from $\mtree{T}_a$ into $\lim \mathbb{D}$. We say $\lim \mathbb{D}$ is \textit{wellfounded} if all the $\tree{T}_x$ are defined and are actually plus trees, the order on $u$-threads is wellfounded, and the direct limit object is a meta-tree. As in the case of direct limits of trees under extended tree embeddings, the last two conditions follow from the first. We get that this construction really identifies the direct limit in the category of meta-trees of successor lengths and extended meta-tree embeddings between them, i.e. we have \begin{proposition}\label{meta-tree direct limit prop} Let $\mathbb{D}= \langle \mtree{T}_a, \vec{\Delta}^{a,b}, \preceq\rangle$ be a directed system of meta-trees, where $\preceq$ has field $A$. Suppose there is a meta-tree $\mtree{S}$ and, for all $a\in A$, meta-tree embeddings $\vec{\Psi}^a:\mtree{T}_a\to \mtree{S}$ such that whenever $a\preceq b$, $\vec{\Psi}^a=\vec{\Psi}^b\circ \vec{\Delta}^{a,b}$. Then the direct limit $\lim\mathbb{D}$ is wellfounded and there is a unique meta-tree embedding $\vec{\Psi}:\lim \mathbb{D}\to \mtree{S}$ such that $\vec{\Psi}^a= \vec{\Psi}\circ \vec{\Pi}^a$ for all $a\in A$. \end{proposition} Now, given a stack of meta-trees $\langle \mtree{S},\mtree{T}\rangle$, with $\mtree{S}= \langle \tree{S}_\xi, F_\xi, \Phi_{\eta,\xi}\rangle$, $\mtree{T}=\langle \tree{T}_\xi, G_\xi, \Psi_{\eta,\xi}\rangle$, we define $\mtree{V}(\mtree{S},\mtree{T})$ as the last meta-tree in a sequence of meta-trees $\mtree{V}^\xi=\langle \tree{V}^\xi_\zeta, F^\xi_\zeta\rangle$ (each of successor length).
We also define (partial) extended meta-tree embeddings $\vec{\Delta}^{\eta,\xi}:\mtree{V}^\eta\to \mtree{V}^\xi$ for $\eta\leq_\mtree{T} \xi$. Of course, our construction only makes sense as long as we never reach illfounded models, in which case we'll say that $\mtree{V}(\mtree{S},\mtree{T})$ is \textit{wellfounded}. We maintain the following by induction. \begin{enumerate} \item $\tree{V}^\xi_{\infty}=\tree{T}_\xi$, \item for $\eta\leq \xi$, $\mtree{V}^\eta\upharpoonright a(\mtree{V}^\eta, G_\eta)+1 = \mtree{V}^\xi\upharpoonright a(\mtree{V}^\eta, G_\eta)+1$, \item for $\eta<\xi$, $G_\eta= F^\xi_{a(\mtree{V}^\eta, G_\eta)}$, \item for $\zeta\leq_\mtree{T} \eta\leq_\mtree{T} \xi$, $\vec{\Delta}^{\zeta,\xi}= \vec{\Delta}^{\eta, \xi}\circ \vec{\Delta}^{\zeta,\eta}$. \item for $\eta\leq_\mtree{T} \xi$, $\Delta^{\eta,\xi}_{\infty}= \Psi_{\eta,\xi}$. \end{enumerate} To start, $\mtree{V}^0=\mtree{S}$. Given everything up to $\mtree{V}^\xi$, let $\eta=\mtree{T}\text{-pred} (\xi+1)$. We want to set $\mtree{V}^{\xi+1}= \mtree{V}(\mtree{V}^\eta, \mtree{V}^\xi, G_\xi)$, so we need to see that the agreement hypotheses of the one-step case are met. If $\eta=\xi$, we're good; so assume $\eta<\xi$. By our induction hypothesis (3), we have that $G_\eta = F^\xi_{a(\mtree{V}^\eta,G_\eta)}$. By the normality of $\mtree{T}$, we have that $\text{crit} (G_\xi)<\hat\lambda( G_\eta)$, so $b(\mtree{V}^\xi, G_\xi)\leq a(\mtree{V}^\eta,G_\eta)$. If $b(\mtree{V}^\xi, G_\xi)< a(\mtree{V}^\eta,G_\eta)$ we're done by our induction hypothesis (2). If $b(\mtree{V}^\xi, G_\xi)+1= a(\mtree{V}^\eta,G_\eta)+1= \text{lh} (\mtree{V}^\eta)$, we're also done, since $F^\eta_{b(\mtree{V}^\xi, G_\xi)}$ is undefined. So assume $b(\mtree{V}^\xi, G_\xi)+1= a(\mtree{V}^\eta,G_\eta)+1< \text{lh} (\mtree{V}^\eta)$. 
Then $F^\eta_{a(\mtree{V}^\eta,G_\eta)}$ is defined but as $G_\eta^-$ is on the sequence of the last models of both $\tree{V}^\eta_{a(\mtree{V}^\eta, G_\eta)}$ and $\tree{T}_\eta$, the last tree of $\mtree{V}^{\eta}$, we must have that $\text{lh} (F^\eta_{a(\mtree{V}^\eta,G_\eta)})\geq \text{lh}(G_\eta)$. So the hypotheses of the one-step case still apply. We also put $\vec{\Delta}^{\eta,\xi+1} = \vec{\Delta}^{\mtree{V}(\mtree{V}^\eta, \mtree{V}^\xi, G_\xi)}$ and $\vec{\Delta}^{\zeta, \xi+1} =\vec{\Delta}^{\eta,\xi+1}\circ \vec{\Delta}^{\zeta,\eta}$ whenever $\zeta\leq_\mtree{T}\eta$. By our work in the one-step case and our induction hypothesis at $\eta$ and $\xi$, it's easy to see all our induction hypotheses still hold at $\xi+1$. At limit $\lambda$ we take the extended meta-tree embedding direct limit along the branch chosen by $\tree{T}$. That is, letting $\mathbb{D}_\lambda=\langle \mtree{V}^\eta, \vec{\Delta}^{\eta,\xi}\,|\,\eta\leq_\mtree{T}\xi<_\mtree{T}\lambda\rangle$, we let $\mtree{V}^\lambda = \lim \mathbb{D}_\lambda$, if this is wellfounded. The last tree of $\mtree{V}^\lambda$ is the direct limit of the $\tree{T}_\eta$ under $\Psi_{\eta,\xi}$ for $\eta\leq_\mtree{T}\xi<_\mtree{T}\lambda$ by our induction hypotheses (1) and (5), which is just $\tree{T}_\lambda$, since $\mtree{T}$ is a meta-tree. We also let $\vec{\Delta}^{\eta, \lambda}$ be the direct limit meta-tree embeddings. It's easy to see that this maintains the rest of our induction hypotheses. This finishes the definition of $\mtree{V}(\mtree{S},\mtree{T})$.\\ For a finite stack of meta-trees $\vec{\mtree{S}}=\langle \mtree{S}^0,\ldots, \mtree{S}^n\rangle$, we also define, by induction, $\mtree{V}(\vec{\mtree{S}})=\mtree{V}(\mtree{V}(\vec{\mtree{S}}\upharpoonright n), \mtree{S}_n)$. 
Notice that this makes sense since, by induction, $\mtree{S}_{n-1}$ and $\mtree{V}(\vec{\mtree{S}}\upharpoonright n)$ have the same last tree, so $\langle \mtree{V}(\vec{\mtree{S}}\upharpoonright n), \mtree{S}_n\rangle$ is really a stack of meta-trees. \begin{definition} A $(\lambda,\theta)$-strategy $\Sigma$ for $\tree{S}$ \textit{normalizes well} iff for any finite stack of meta-trees $\vec{\mtree{S}}$ by $\Sigma$, $\mtree{V}(\vec{\mtree{S}})$ is by $\Sigma$. \end{definition} \begin{remark} As with the case of a strategy on a premouse $M$ with strong hull condensation, one can show that a meta-strategy on a tree $\tree{S}$ with meta-hull condensation extends uniquely to a strategy for finite stacks which normalizes well. We have no use for this here and it would take some space to write out, so we'll just assume we are given strategies for finite stacks of meta-trees. \end{remark} \begin{proposition}\label{nice meta-strategy existence 2} Suppose $\Sigma$ is a strategy for $M$ with strong hull condensation and $\tree{S}$ is by $\Sigma$. Then $\Sigma^*_\tree{S}$ normalizes well. \end{proposition} \begin{proof}[Proof sketch.] By induction, we just need to verify this for stacks of length $2$. By our characterization of $\Sigma^*_\tree{S}$, we just need to see that all the trees in all the $\mtree{V}^\xi = \mtree{V}(\mtree{S},\mtree{T}\upharpoonright\xi+1)$ are by $\Sigma$. We do this by induction. At successors, all our new trees are of the form $V(\tree{U}, \tree{V}, G)$ for trees $\tree{U},\tree{V}$ which are by $\Sigma$, so $V(\tree{U},\tree{V}, G)$ is by $\Sigma$, by strong hull condensation and quasi-normalizing well. At limit $\lambda$, we have that all the $\mtree{V}^\xi$ for $\xi<\lambda$ are by $\Sigma^*$ and we want to see that the direct limit along $[0,\lambda)_\mtree{T}$ is by $\Sigma^*$.
The trees of the direct limit are either trees of $\mtree{V}^\xi$ or else are (non-trivial) direct limits along one-step embedding normalization tree embeddings by the extenders of $[0,\lambda)_\mtree{T}$. All these non-trivial direct limit trees agree with $\tree{T}_\lambda$ up to $\delta_\lambda +1= \sup_{\xi<\lambda}\{\alpha_0(G_\xi, \tree{T}_\xi)\}$, which is by $\Sigma$. At limit ordinals $\gamma>\delta_\lambda$ in these direct limit trees, the branches are images under the $v$-maps of branches of earlier trees which are by $\Sigma$, and so must be by $\Sigma$ by strong hull condensation. \qed \end{proof} We can now easily prove Theorem \ref{meta-tree full norm}. \begin{proof}[Proof of Theorem \ref{meta-tree full norm}.] Let $\tree{S}$ be a countable plus tree by $\Sigma$ and $\langle \mtree{S},\mtree{T}\rangle$ be a stack by $\Sigma^*_\tree{S}$ with last tree $\tree{U}$. Since $\Sigma^*_\tree{S}$ normalizes well, $\mtree{U}=\mtree{V}(\mtree{S},\mtree{T})$ is by $\Sigma^*_\tree{S}$, and is a meta-tree with last tree $\tree{U}$. We get $\Phi^\mtree{U}=\Phi^\mtree{T}\circ \Phi^\mtree{S}$ easily by induction (using the commutativity condition in the definition of meta-tree embeddings). \qed \end{proof} We end this section with a few definitions. Let $M$ be a premouse and $\tree{S}$ a plus tree on $M$ of successor length. Suppose $\Sigma$ is a strategy for finite stacks of meta-trees on $\tree{S}$ which has meta-hull condensation and normalizes well. \begin{definition}\label{tail strategies} Let $\mtree{S}$ be a meta-tree by $\Sigma$ of successor length with last tree $\tree{T}$. We define the \textit{tail normal tree strategy} $\Sigma_{\mtree{S}}$ by \[\tree{U} \text{ is by }\Sigma_{\mtree{S}} \Leftrightarrow \langle \mtree{S},\mtree{V}(\tree{T}, \tree{U})\rangle\text{ is by }\Sigma.\] It is easy to see that this is a strategy for single normal trees on $M_\infty^\mtree{S}$. We can naturally extend this to an internally lift consistent strategy for $M_\infty^\mtree{S}$.
We do this by putting, for $Q\trianglelefteq M_\infty^\mtree{S}$, \[\tree{U} \text{ is by }\Sigma_{\mtree{S},Q} \Leftrightarrow \langle \mtree{S}, \mtree{V}(\tree{T},\tree{U}^+)\rangle \text{ is by }\Sigma,\] where $\tree{U}^+$ is the copy of $\tree{U}$ on $Q$ to a normal tree on the full $M_\infty^\mtree{S}$. \end{definition} It is easy to see the following. \begin{proposition}\label{tail strategy prop} Let $(M,\Sigma)$ be a mouse pair, $\tree{S}$ a plus tree by $\Sigma$ of successor length, and $\mtree{S}$ a meta-tree by $\Sigma^*_\tree{S}$ of successor length with last tree $\tree{T}$. Then for all $Q\trianglelefteq M_\infty^\mtree{S}$, $(\Sigma^*_\tree{S})_{\mtree{S}, Q}=\Sigma_{\tree{T},Q}$. \end{proposition} \begin{definition} If $M$ is a least branch premouse, $\tree{S}$ is a tree on $M$, and $\Sigma$ is a strategy for finite stacks of meta-trees on $\tree{S}$ which has meta-hull condensation and normalizes well, we say that $\Sigma$ is \textit{pushforward consistent} iff for every $\mtree{S}$ by $\Sigma$ and $Q\trianglelefteq M_\infty^\mtree{S}$, $\dot\Sigma^Q\subseteq \Sigma_{\mtree{S},Q}$. \end{definition} By Proposition \ref{tail strategy prop} and pushforward consistency for lbr hod pairs, we immediately get \begin{proposition}\label{pushforward} Let $(M,\Sigma)$ be an lbr hod pair, $\tree{S}$ a plus tree by $\Sigma$ of successor length. Then $\Sigma^*_\tree{S}$ is pushforward consistent. \end{proposition} A property in the same vein needed for our meta-strategy comparison result is a version of strategy coherence for $\lambda$-separated meta-trees, which follows from normalizing well and meta-hull condensation (essentially by the proof of Corollary 7.6.9 from \cite{nitcis}). \begin{proposition}\label{strategy coherence prop} Let $M$ be a premouse, $\tree{S}$ a countable plus tree on $M$ of successor length, and $\Sigma$ a meta-strategy for finite stacks of countable meta-trees on $\tree{S}$ which normalizes well and has meta-hull condensation.
Let $\mtree{S}, \mtree{T}$ be $\lambda$-separated meta-trees on $\tree{S}$ of successor lengths and suppose $Q\trianglelefteq M_\infty^\mtree{S}, M_\infty^\mtree{T}$. Then $\Sigma_{\mtree{S}, Q}=\Sigma_{\mtree{T}, Q}$. \end{proposition} We'll consider just one additional property for a meta-strategy, relating it back to an ordinary iteration strategy for the base model. \begin{definition}\label{doddjensendef} Let $M$ be a premouse, $\Omega$ an iteration strategy for $M$, $\tree{S}$ a plus tree on $M$ of successor length, and $\Sigma$ a meta-strategy for finite stacks of meta-trees on $\tree{S}$. $\Sigma$ has \textit{the Dodd-Jensen property relative to $\Omega$} iff for any meta-tree $\mtree{S}$ on $\tree{S}$ by $\Sigma$ of successor length with last tree $\tree{T}$ and last model $P$, \begin{enumerate} \item if $\tree{T}$ doesn't drop along its main branch, then $(\Sigma_{\mtree{S}})^{i^\tree{T}}\subseteq \Omega$ and \item for any $Q\trianglelefteq P$ and nearly elementary map $\pi:M\to Q$ such that $(\Sigma_{\mtree{S},Q})^\pi\subseteq \Omega$, \begin{enumerate} \item $M$-to-$P$ doesn't drop in $\tree{T}$ and $Q=P$, \item $i^\tree{T}(\xi)\leq \pi(\xi)$ for any $\xi<o(M)$. \end{enumerate} \end{enumerate} \end{definition} \begin{proposition}\label{dodd-jensen} Let $(M,\Sigma)$ be an lbr hod pair, $\tree{S}$ a plus tree by $\Sigma$ of successor length. Then $\Sigma^*_\tree{S}$ has the Dodd-Jensen property relative to $\Sigma$. \end{proposition} \begin{proof} Let $\mtree{S}$ be a meta-tree on $\tree{S}$ by $\Sigma^*_\tree{S}$ of successor length, $\tree{T}$ its last tree, and $P$ its last model. Since $\mtree{S}$ is by $\Sigma^*_\tree{S}$, $\tree{T}$ is by $\Sigma$. If $\tree{T}$ doesn't drop along its main branch, then since $\Sigma$ is pullback consistent, $(\Sigma_\tree{T})^{i^\tree{T}}=\Sigma$. But $(\Sigma^*_\tree{S})_{\mtree{S}}\subseteq \Sigma_\tree{T}$, by Proposition \ref{tail strategy prop}, so $((\Sigma^*_\tree{S})_{\mtree{S}})^{i^\tree{T}}\subseteq \Sigma$, giving (1).
Now suppose we have $Q\trianglelefteq P$ and $\pi:M\to Q$ such that $((\Sigma^*_\tree{S})_{\mtree{S},Q})^\pi\subseteq \Sigma$. By Proposition \ref{tail strategy prop}, $(\Sigma^*_\tree{S})_{\mtree{S},Q}\subseteq\Sigma_{\tree{T},Q}$. It follows that $\Sigma$ and $(\Sigma_{\tree{T}, Q})^\pi$ agree on normal trees, and so by Theorem \ref{uniqueextension}, $\Sigma=(\Sigma_{\tree{T}, Q})^\pi$. That is, $\pi$ is nearly elementary as a map from $(M, \Sigma)$ into $(Q, \Sigma_{\tree{T},Q})$. So the Dodd-Jensen Lemma for mouse pairs (Theorem 9.3.4 from \cite{nitcis}) immediately gives (2). \qed \end{proof} To review what we've established above, let $(M,\Sigma)$ be a pfs mouse pair or lbr hod pair with scope $H_\delta$, $\tree{S}$ a plus tree of successor length by $\Sigma$, and let $\Sigma^*_\tree{S}$ be the induced meta-strategy; then the tails $(\Sigma^*_\tree{S})_{\mtree{S},P}$ are contained in the appropriate tails of $\Sigma$. Moreover, $\Sigma^*_\tree{S}$ \begin{itemize} \item[(i)] has meta-hull condensation (Proposition \ref{nice meta-strategy existence}), \item[(ii)] normalizes well (Proposition \ref{nice meta-strategy existence 2}), \item[(iii)] has the Dodd-Jensen property relative to $\Sigma$ (Proposition \ref{dodd-jensen}), and \item[(iv)] is pushforward consistent, if $(M,\Sigma)$ is an lbr hod pair (Proposition \ref{pushforward}). \end{itemize} We shall see in the next section that, in the right context, these properties uniquely determine $\Sigma^*_\tree{S}$. \subsection{Comparison} We need to generalize the tree comparison theorem we proved earlier. In that theorem (Theorem \ref{meta-tree comparison v1}), we showed that, in particular, any two trees by $\Sigma$ had a common meta-iterate (via meta-trees which were by $\Sigma^*$). We now want to compare pairs of the form $(\tree{S}, \Sigma)$, $(\tree{T}, \Lambda)$ where $\Sigma, \Lambda$ are meta-strategies for $\tree{S},\tree{T}$.
To do this, we will have to weaken the conclusion that we tree embed both into a common tree. Recall that we reached this conclusion by first arranging that we reach trees with a common last model. This is what we'll try to arrange in our generalization. As in \cite{nitcis}, at stages where we reach extender disagreements, we will hit the corresponding plus extender, rather than the disagreement extender itself. A meta-tree $\mtree{S}$ on $\tree{S}$ is \textit{$\lambda$-separated} if for all $\xi+1<\text{lh}(\mtree{S})$, $F_\xi^\mtree{S}$ is of plus type. Notice that if $\tree{T}$ is a plus tree of successor length and $\tree{U}$ is a $\lambda$-separated plus tree on $M_\infty^\tree{T}$, then $\tree{U}$ is actually normal, and so $\mtree{V}(\tree{T}, \tree{U})$ is defined and is also a $\lambda$-separated meta-tree (assuming it is wellfounded). \begin{theorem}[Meta-strategy comparison]\label{tree comparison v2} Assume $\mathsf{AD^+}$. Let $M, N$ be countable, strongly stable premice of the same type, $\tree{S},\tree{T}$ countable plus trees on $M,N$ of successor lengths, and $\Sigma$, $\Lambda$ meta-strategies for finite stacks of countable meta-trees on $\tree{S}, \tree{T}$ which have meta-hull condensation and normalize well. Then there are simple $\lambda$-separated meta-trees $\mtree{S}$ by $\Sigma$ and $\mtree{T}$ by $\Lambda$ with last models $P,Q$ such that either \begin{enumerate} \item $(P,\Sigma_{\mtree{S},P}) \trianglelefteq (Q,\Lambda_{\mtree{T},Q})$ and $\mtree{S}$ doesn't drop along its main branch, or \item $(Q,\Lambda_{\mtree{T},Q}) \trianglelefteq (P,\Sigma_{\mtree{S},P})$ and $\mtree{T}$ doesn't drop along its main branch. \end{enumerate} \end{theorem} \begin{remark} By Remark \ref{meta-strategy remark} and the fact that agreement on normal trees (in fact, $\lambda$-separated trees) suffices for full strategy agreement (see \cite{nitcis}), this is a generalization of the main comparison theorem for mouse pairs in \cite{nitcis}.
\end{remark} This theorem, and a variant we'll need later, follow from a general theorem about comparison with the levels of a background construction. The statement is almost what you would expect, except that we have added the option of gratuitously dropping however we want, in a sense to follow. To get Theorem \ref{tree comparison v2}, we just need the case $A=\emptyset$, but we will need other $A$ in the variant we need for full normalization. \begin{definition}\label{parameter set} Let $\Sigma$ be a meta-strategy for $\tree{S}$ and $A$ any set. A meta-tree $\mtree{S}$ is by $\Sigma, A$ iff it is by $\Sigma$ and for any $\xi<\text{lh} (\mtree{S})$, \begin{enumerate} \item if there is a unique $\gamma$ such that $\langle \tree{S}_\xi^+,\gamma\rangle\in A$, $\gamma+1<\text{lh} (\tree{S}_\xi^+)$, and $\gamma\geq \sup\{\alpha^\mtree{S}_\eta+1\,|\,\eta<\xi\}$, then $\tree{S}_\xi = \tree{S}_\xi^+\upharpoonright\gamma+1$; \item otherwise, $\tree{S}_\xi = \tree{S}_\xi^+$. \end{enumerate} \end{definition} One can put it this way: whereas $\Sigma$ is a winning strategy for player II in the game where player I must say if and how to gratuitously drop, $(\Sigma, A)$ determines a winning strategy for player II in the game where this information is decided by player II. We need one more definition. \begin{definition} Let $(P, \Omega)$ be a mouse pair and $M$ a premouse of the same type as $P$. Let $\tree{S}$ be a plus tree on $M$ of successor length, $\Sigma$ a $(\lambda, \theta)$-strategy for $\tree{S}$, and $A$ a set. Then $(\tree{S}, \Sigma, A)$ \textit{iterates past} $(P, \Omega)$ iff there is a $\lambda$-separated meta-tree $\mtree{S}$ by $\Sigma$ such that $P\trianglelefteq M_\infty^{\mtree{S}}$ and $\Sigma_{\mtree{S}, P}\subseteq \Omega$.\footnote{This inclusion says that $\Sigma_{\mtree{S}, P}$ is the restriction of $\Omega$ to single normal trees of length $<\theta$ on $P$.
Recall that $\Omega$ is defined on all plus trees, so includes more information than $\Sigma_{\mtree{S}, P}$; but this extra information is redundant, since $\Omega$ is totally determined by its action on normal trees (in fact, $\lambda$-separated trees).} $(\tree{S}, \Sigma, A)$ \textit{iterates strictly past} $(P, \Omega)$ if there is such an $\mtree{S}$ such that, also, either $P\triangleleft M_\infty^\mtree{S}$ or $\mtree{S}$ has a necessary drop along its main branch. Finally, $(\tree{S}, \Sigma, A)$ \textit{iterates to} $(P, \Omega)$ if it iterates past $(P,\Omega)$ via an $\mtree{S}$ such that $P=M_\infty^\mtree{S}$ and $\mtree{S}$ doesn't have a necessary drop along its main branch. \end{definition} \begin{theorem}\label{main comparison theorem} Suppose $\delta$ is an inaccessible cardinal, $M$ is a strongly stable pfs premouse (or lbr hod mouse), $\tree{S}$ is a plus tree on $M$ of successor length, $\Sigma$ is an $(\omega, \delta)$-strategy for $\tree{S}$ which has meta-hull condensation and normalizes well, and $A$ is a set. Let $\mathbb{C}$ be a PFS (or least branch) construction of length $\leq\delta$ such that $\mathcal{F}^\mathbb{C}\subseteq H_\delta$ and for all $E\in \mathcal{F}^\mathbb{C}$, $\text{crit}(E)>o(M), \text{lh}(\tree{S})$, $i_E(\Sigma)\subseteq \Sigma$, and $i_E(A)\subseteq A$. Let $\langle \nu, k\rangle<\text{lh}(\mathbb{C})$ and suppose that $(\tree{S},\Sigma, A)$ iterates strictly past $(M_{\eta,j}^\mathbb{C}, \Omega_{\eta,j}^\mathbb{C})$ for all $\langle\eta, j\rangle<_{\text{lex}}\langle \nu, k\rangle$. Then $(\tree{S},\Sigma, A)$ iterates past $(M_{\nu,k}^\mathbb{C}, \Omega_{\nu,k}^\mathbb{C})$. \end{theorem} That Theorem \ref{main comparison theorem} implies Theorem \ref{tree comparison v2} is a simple variant of the analogous argument from \cite{nitcis} (i.e. the proof that Theorem 8.3.5 follows from Theorem 8.3.4).
Our proof of Theorem \ref{main comparison theorem} also closely follows the proof of the analogous result, Theorem 8.3.4 from \cite{nitcis}: we compare against levels of the background by least extender disagreement, using plus extenders at every stage, and show by induction on $\langle \nu,k\rangle$ that the background doesn't move and no strategy disagreements show up. The difference is that our comparison process is now producing meta-trees $\mtree{S}$ by $\Sigma, A$ on our base plus tree $\tree{S}$ rather than plus trees by a strategy for a base model $M$. Let $\mtree{S}_{\nu,k}^*$ be the $\lambda$-separated meta-tree by $\Sigma, A$ obtained by comparing the last model of $\tree{S}$ against $M^\mathbb{C}_{\nu,k}$ until we reach an extender disagreement coming from the $M^\mathbb{C}_{\nu,k}$-side \textit{or} until we reach a strategy disagreement. That is, $\mtree{S}_{\nu,k}^*=\langle \tree{S}_\xi, F_\xi, \Phi_{\eta,\xi}\rangle$ is the unique meta-tree by $\Sigma,A$ such that for all $\xi<\text{lh} (\mtree{S}^*_{\nu,k})$, letting $P_\xi$ be the last model of $\tree{S}_\xi$, \begin{itemize} \item[(i)] if $(M_{\nu,k}, \Omega_{\nu,k})\parallel (P_\xi, \Sigma_{\mtree{S}^*_{\nu,k}\upharpoonright\xi+1, P_\xi})$, then $\xi+1=\text{lh} (\mtree{S}^*_{\nu,k})$, and \item[(ii)]
If $(M_{\nu,k}, \Omega_{\nu,k})\not\parallel (P_\xi, \Sigma_{\mtree{S}^*_{\nu,k}\upharpoonright\xi+1, P_\xi})$, then for $\langle \eta, l\rangle<\ell(M_{\nu,k})$ least such that \[(M_{\nu,k}, \Omega_{\nu,k})|\langle \eta,l\rangle\not\parallel (P_\xi, \Sigma_{\mtree{S}^*_{\nu,k}\upharpoonright\xi+1, P_\xi})|\langle \eta,l\rangle,\] either \begin{itemize} \item[(a)] $\xi+1<\text{lh} (\mtree{S}^*_{\nu,k})$, $l=0$, and $\eta$ is the index of an extender of the $P_\xi$-sequence but not the index of an extender on the $M_{\nu,k}$-sequence, \item[(b)] $\xi+1=\text{lh} (\mtree{S}^*_{\nu,k})$ and $l=0$, but $\eta$ is the index of an extender on the $M_{\nu,k}$-sequence which is \textit{not} on the $P_{\nu,k}^*$-sequence (where $P^*_{\nu,k}$ denotes the last model of $\mtree{S}^*_{\nu,k}$), or \item[(c)] $\xi+1=\text{lh}(\mtree{S}^*_{\nu,k})$, $M_{\nu,k}|\langle\eta,l\rangle=P_{\nu,k}^*|\langle \eta,l\rangle$, but $\Omega_{\nu,k}|\langle \eta,l\rangle$ disagrees with $\Sigma_{\mtree{S}^*_{\nu,k}, P^*_{\nu,k}}|\langle \eta,l\rangle$ on a normal tree. \end{itemize} \end{itemize} Here (ii)(a) just says that we are building $\mtree{S}^*_{\nu,k}$ by comparing via least extender disagreements arising from just the $\tree{S}$-side for as long as possible; (ii)(b) says we had to stop because we'd need to hit an extender on the $M_{\nu,k}$-side to continue comparing by least disagreement; and (ii)(c) says that we had to stop because we reached a strategy disagreement. We show by induction on $\langle \nu, k\rangle$ that (i) holds at least until we reach a $\langle \nu, k\rangle$ such that $(\tree{S}, \Sigma, A)$ iterates to $(M_{\nu,k}, \Omega_{\nu,k})$ via $\mtree{S}^*_{\nu,k}$, i.e. a $\langle \nu,k\rangle$ such that $P^*_{\nu,k}=M_{\nu,k}$, $\Sigma_{\mtree{S}^*_{\nu,k}, P^*_{\nu,k}}\subseteq \Omega_{\nu,k}$, and $\mtree{S}^*_{\nu,k}$ doesn't have a necessary drop along its main branch. So suppose (i) holds for all $\langle \eta, l\rangle\leq_{\text{lex}}\langle \nu,k\rangle$, but that we haven't yet reached this situation.
We have the following lemma, which is the appropriate generalization of Sublemma 8.3.1.1 of \cite{nitcis}. \begin{lemma}\label{res map lemma 1} Suppose $M_{\nu,k}$ is \textit{not} sound. Then $(i)$ holds for $\mtree{S}_{\nu,k+1}^*$ and \begin{enumerate} \item $M_{\nu,k}$ is the last model of $\mtree{S}^*_{\nu,k}$ and $\Sigma_{\mtree{S}^*_{\nu,k}, M_{\nu,k}}\subseteq\Omega_{\nu,k}$, \item $\mtree{S}^*_{\nu,k+1}\trianglelefteq \mtree{S}^*_{\nu,k}$, \item letting $\eta$ and $\delta_{\nu,k}$ be such that $\mtree{S}^*_{\nu, k+1}=\mtree{S}^*_{\nu,k}\upharpoonright\eta+1$ and $\delta_{\nu,k}+1 = \text{lh} (\mtree{S}^*_{\nu,k})$, we have $\eta\leq_{\mtree{S}^*_{\nu,k}} \delta_{\nu,k}$, \item the last $t$-map of the tree embedding $\Phi^{\mtree{S}^*_{\nu,k}}_{\eta,\delta_{\nu,k}}$ is the uncoring map $\pi: M^-_{\nu,k+1}\to M_{\nu,k}$. \end{enumerate} \end{lemma} The proof of this lemma relies on our analysis of drops in meta-trees along with the following easy fact about ordinary normal trees. \begin{proposition}\label{no projectum overlap} Suppose $\tree{T}$ is a normal tree with last model $N$ and $P\trianglelefteq N$ is sound. Then for all $\xi+1<\text{lh}( \tree{T})$, \[\text{lh} (E^\tree{T}_\xi)\not\in(\rho(P), o(P)).\] \end{proposition} \begin{proof} We may assume that $\tree{T}$ uses no extenders with lengths $>o(P)$, in which case we just need to show that $\tree{T}$ uses no extenders with lengths $>\rho(P)$. If $o(P)<o(N)$, then no ordinal $\alpha\in (\rho(P), o(P)]$ is a cardinal of $N$ (since ${|\rho(P)|^+}^N\geq o(J_1(P))$), so $\text{lh} (E_\xi^\tree{T})\not\in (\rho(P), o(P)]$ as it is a cardinal of $N$. Now suppose $o(P)=o(N)$, i.e. $P=N|\langle o(N),k(P)\rangle$. Towards a contradiction, assume $\tree{T}$ uses an extender with length $>\rho(P)$. Then there is an extender used along the main branch of $\tree{T}$ with length $>\rho(P)$.
Let $\delta+1=\text{lh} (\tree{T})$ and let $\eta,\xi$ be such that $\eta=\tree{T}\text{-pred}(\xi+1)$, $\text{lh} (E_\xi^\tree{T})>\rho(P)$, and $(\xi+1, \delta]_\tree{T}$ doesn't drop. Then we have that $\pi=_\text{def}\hat\imath^\tree{T}_{\eta, \delta}$ is elementary and cofinal on its domain $Q\trianglelefteq M_\eta^\tree{T}$. Now we claim that $\rho_{k(P)+1}(Q)\leq \text{crit} (E^\tree{T}_\xi)$. To see this, note that if $k(P)<k(N)$, $\pi(\rho_{k(P)+1}(Q))= \rho(P)$ and if $k(P)=k(N)$, $\sup\pi" \rho_{k(P)+1}(Q)=\rho(P)$, so in either case $\rho_{k(P)+1}(Q)>\text{crit} (E^\tree{T}_\xi)$ implies $\rho(P)\geq \text{lh} (E_\xi^\tree{T})$, a contradiction. But then since we apply $E^\tree{T}_\xi$ to $Q$ along the main branch of $\tree{T}$, the last model of $\tree{T}$, $N$, is not $k(P)+1$-sound. Hence $P=N|\langle o(N), k(P)\rangle$ is not sound, a contradiction. \qed \end{proof} \begin{proof}[Proof of Lemma \ref{res map lemma 1}.] Let $\mtree{S}^*_{\nu,k}= \langle\tree{S}_\xi, F_\xi, \Phi_{\eta,\xi}\rangle$. Since $M_{\nu,k}$ is an initial segment of the last model of $\mtree{S}^*_{\nu,k}$ but isn't sound, it must be equal to the last model; this gives (1). Since we assumed that $(\tree{S},\Sigma,A)$ does not iterate to $(M_{\nu,k},\Omega_{\nu,k})$ via $\mtree{S}^*_{\nu,k}$, $\mtree{S}^*_{\nu,k}$ must have a necessary drop along its main branch. Let $\eta$-to-$\xi+1$ be the last necessary drop along this branch. By our analysis of dropping in a meta-tree, we have that the last $t$-map of $\Phi_{\eta,\delta_{\nu,k}}$ is the uncoring map $\pi:M_{\nu,k+1}^-\to M_{\nu,k}$ and $M_{\nu,k+1}$ is an initial segment of the last model of $\tree{S}_\eta$. To finish, we just need to see that $\mtree{S}_{\nu,k+1}^*= \mtree{S}^*_{\nu,k}\upharpoonright\eta+1$, since then (2) clearly holds and we've seen that (4) and the rest of (3) hold for $\eta$. Let $\rho=\rho(M_{\nu,k+1})$.
We have that $M_{\nu,k+1}|{\rho^+}^{M_{\nu,k+1}}=M_{\nu,k}|{\rho^+}^{M_{\nu,k}}$, so that $\mtree{S}^*_{\nu,k}$ and $\mtree{S}^*_{\nu,k+1}$ use the same extenders with lengths $\leq \rho$. So we just need to see that $\mtree{S}^*_{\nu,k}\upharpoonright\eta+1$ uses no extender with length $>\rho$. All of the meta-tree exit extenders $F_\zeta$ for $\zeta< \eta$ are ordinary exit extenders of the normal tree $\tree{S}_\eta$. Since $M_{\nu,k+1}^-$ is a sound initial segment of the last model of $\tree{S}_\eta$, by Proposition \ref{no projectum overlap}, $\tree{S}_\eta$ uses no extenders with lengths in the interval $(\rho, o(M_{\nu,k+1}))$. So, we need to show that $\text{lh} (F_\zeta)<o(M_{\nu,k+1})$ for all $\zeta<\eta$, since then all these $F_\zeta$ must have $\text{lh} (F_\zeta)\leq \rho$. This just follows by the normality of $\mtree{S}^*_{\nu,k}$ and the quasi-normality of $\tree{S}_{\xi+1}$. Fix $\zeta<\eta$. Since $\eta=\mtree{S}_{\nu,k}^*\text{-pred}(\xi+1)$, $\text{crit} (F_\xi)>\hat\lambda (F_\zeta)$. In $\tree{S}_{\xi+1}$, $F_\xi=E^{\tree{S}_{\xi+1}}_{\alpha_\xi}$ and $F_\zeta=E^{\tree{S}_{\xi+1}}_{\alpha_\zeta}$, so $\alpha_\zeta<\beta_\xi= \tree{S}_{\xi+1}\text{-pred}(\alpha_\xi+1)$. So we have that $\text{lh} (F_\zeta)<\text{lh} (E_{\beta_\xi}^{\tree{S}_{\xi+1}})$. Also, $F_{\xi}$ is applied to $M_{\nu,k+1}^-\trianglelefteq M^{\tree{S}_{\xi+1}}_{\beta_\xi}$, so $M_{\beta_\xi}^{\tree{S}_{\xi+1}}|\text{lh} (E^{\tree{S}_{\xi+1}}_{\beta_\xi})\trianglelefteq M_{\nu,k+1}$. So $\text{lh} (F_\zeta)<o(M_{\nu,k+1})$, as desired. \qed \end{proof} Using Lemma \ref{res map lemma 1}, we get that resurrection embeddings are realized as the last $t$-maps of appropriate tree embeddings (which is the appropriate generalization of Lemma 8.3.1 of \cite{nitcis}). We will also use Lemma \ref{res map lemma 1} below to rule out that (ii) (b) ever occurs, i.e. we never stop building $\mtree{S}_{\nu,k}^*$ because we reach an extender disagreement coming from the background. 
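To make the final step of the proof above explicit, the estimates combine into a single chain (a restatement in display form, writing $\mathcal{S}_{\xi+1}$ for the tree $\tree{S}_{\xi+1}$ and keeping the notation of the proof):

```latex
% For each \zeta < \eta: normality of the meta-tree gives the first
% inequality, and the agreement
%   M^{S_{\xi+1}}_{\beta_\xi} | lh(E^{S_{\xi+1}}_{\beta_\xi}) \trianglelefteq M_{\nu,k+1}
% gives the second.
\[
\mathrm{lh}(F_\zeta)
  \;<\; \mathrm{lh}\bigl(E^{\mathcal{S}_{\xi+1}}_{\beta_\xi}\bigr)
  \;\leq\; o(M_{\nu,k+1}).
\]
```

Together with Proposition \ref{no projectum overlap}, which rules out extender lengths in the interval $(\rho, o(M_{\nu,k+1}))$, this yields $\text{lh}(F_\zeta)\leq\rho$ for every $\zeta<\eta$, as required.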
\begin{lemma}\label{res map lemma 2} Let $\langle \theta, j\rangle<_{\text{lex}} \langle \nu, k\rangle$ and $P\trianglelefteq M_{\theta, j}$. Let $\langle \theta_0, j_0\rangle = Res_{\theta,j} [P]$ and $\tau=\sigma_{\theta, j}[P]$ (so $\tau:P\to M_{\theta_0, j_0}$). Let $\xi$ be least such that $P\trianglelefteq M^{\mtree{S}^*_{\theta, j}}_{z_{\theta,j}(\xi)}$. Then \begin{enumerate} \item $\mtree{S}^*_{\theta_0,j_0}\upharpoonright \xi+1 = \mtree{S}^*_{\theta,j}\upharpoonright \xi+1$, \item $\xi \leq_{\mtree{S}^*_{\theta_0, j_0}} \delta =\text{lh} (\mtree{S}^*_{\theta_0, j_0})-1$, \item for $t^{\xi, \delta}$ the last $t$-map along $[\xi, \delta]_{\mtree{S}^*_{\theta_0,j_0}}$, $t^{\xi, \delta}\upharpoonright P= \tau$. \end{enumerate} \end{lemma} This follows from Lemma \ref{res map lemma 1} by, essentially, the argument for Lemma 8.3.1 of \cite{nitcis}. We omit further details. It will be useful to observe that nice trees on $V$ naturally give rise to tree embeddings in the following way. \begin{proposition}\label{hull embeddings prop} Let $\tree{T}$ be a nice, normal tree on $V$ with last model $Q$. Let $i_\tree{T}: V\to Q$ be the iteration map of $\tree{T}$. Let $M$ be a premouse and $\tree{S}$ a normal tree on $M$ such that $\tree{T}$ is above $|M|^+$, i.e. all the critical points of extenders used in $\tree{T}$ are $>|M|^+$. Then there is a (unique) extended tree embedding $I^\tree{S}_\tree{T}: \tree{S}\to i_\tree{T}(\tree{S})$ with $u$-map $i_\tree{T}\upharpoonright \text{lh}(\tree{S})$ and $t$-maps $t_\xi=i_\tree{T}\upharpoonright M^\tree{S}_\xi$. \end{proposition} We'll prove this by induction on the length of $\tree{T}$, using the following lemma at successor stages. \begin{lemma}\label{hull embeddings lemma} Let $\tree{T}$ be a nice, normal tree on $V$ and let $Q_\xi = M^\tree{T}_\xi$. Suppose $\nu=\tree{T}\text{-pred}(\gamma+1)$ and $\tree{S}\in Q_\nu$ is a normal tree on a premouse $M\in Q_\nu$ such that the extender $F= E^\tree{T}_\gamma$ has critical point $>|M|^+$. Let $\pi=i^{Q_\nu}_F$.
Then there is a (unique) extended tree embedding $I^{\tree{T};\nu,\gamma+1}_\tree{S}: \tree{S}\to \pi(\tree{S})$ with $u$-map $\pi\upharpoonright \text{lh} (\tree{S})$ and $t$-maps $t_\xi=\pi\upharpoonright M^\tree{S}_\xi$. \end{lemma} \begin{proof} We'll just check the simplest case $\tree{T}=\langle G\rangle$ and leave the general case to the reader. So let $G$ be an extender on $V$ and suppose $M$ is a premouse such that $|M|^+<\text{crit} (G)$ and $\tree{S}$ is a normal tree on $M$. Let $\pi= i^V_G$ and $\kappa=\text{crit} (G)$. We check that there is an extended tree embedding $I=\langle u,v,\{s_\xi\}, \{t_\xi\}\rangle: \tree{S}\to \pi(\tree{S})$ with $u$-map $\pi\upharpoonright \text{lh} (\tree{S})$ and $t$-maps $t_\xi = \pi\upharpoonright M_\xi^\tree{S}$. Since $u\upharpoonright\kappa=id$, we have that $I\upharpoonright(\tree{S}\upharpoonright\kappa)$ is just the identity tree embedding on $\tree{S}\upharpoonright\kappa$. At $\kappa$, we must let $v(\kappa)=\kappa$ and $s_\kappa = id$. To see we can make our desired assignments for $u(\kappa)$ and $t_\kappa$, we check \begin{claim} $\kappa\leq_{\pi(\tree{S})} \pi(\kappa)$ and $\hat \imath^{\pi(\tree{S})}_{\kappa, \pi(\kappa)} = \pi\upharpoonright M^\tree{S}_\kappa$. \end{claim} \begin{proof} This is routine. First, we have $\xi<_{\pi(\tree{S})} \pi(\kappa)$ for every $\xi<_{\tree{S}}\kappa$ since $\pi\upharpoonright\kappa=id$, so $\kappa\leq_{\pi(\tree{S})} \pi(\kappa)$ since branches of iteration trees are closed below their sup. Also, $\tree{S}\upharpoonright\kappa+1 \trianglelefteq \pi(\tree{S})$ since $\pi$ is tree-order preserving (and the identity below $\kappa$), fixes $M_\xi^\tree{S}$ for $\xi<\kappa$ (since $M$ is small), and $M_\kappa^\tree{S}=M_\kappa^{\pi(\tree{S})}$ since they are both given by the same direct limit. Now let $x\in M_\kappa^\tree{S}$. Let $\xi<_\tree{S}\kappa$ and $\bar x\in M_\xi^\tree{S}$ be such that $\hat\imath^\tree{S}_{\xi, \kappa}(\bar x)= x$.
Now \begin{align*} \pi(x)&= \pi( \hat\imath^\tree{S}_{\xi, \kappa}(\bar x))\\ &= \hat\imath^{\pi(\tree{S})}_{\xi, \pi(\kappa)}(\bar x)\\ &=\hat\imath^{\pi(\tree{S})}_{\kappa, \pi(\kappa)}\circ \hat\imath^\tree{S}_{\xi, \kappa}(\bar x)\\ &=\hat\imath^{\pi(\tree{S})}_{\kappa, \pi(\kappa)}(x). \end{align*} The second equality is given by the elementarity of $\pi$ and using that $\pi(\bar x)= \bar x$ since $\bar x\in M_\xi^\tree{S}$ and $\xi<\kappa$. The third equality uses that $\kappa\in (\xi, \pi(\kappa))_{\pi(\tree{S})}$ and $\hat\imath^\tree{S}_{\xi, \kappa} = \hat\imath^{\pi(\tree{S})}_{\xi, \kappa}$. The final equality is immediate from our choice of $\xi, \bar x$. \hfill{$\qed$} \end{proof} So letting $u(\kappa)= \pi(\kappa)$ and $t_\kappa = \hat\imath^{\pi(\tree{S})}_{\kappa, \pi(\kappa)}$ (as we must), we have that $I\upharpoonright(\tree{S}\upharpoonright\kappa+1): \tree{S}\upharpoonright\kappa+1\to \pi(\tree{S})\upharpoonright\pi(\kappa)+1$ is an extended tree embedding meeting the desired assignments. We check the rest by induction. The general limit case is basically the same as the case at $\kappa$. Suppose that $I\upharpoonright (\tree{S}\upharpoonright\lambda):\tree{S}\upharpoonright\lambda\to \pi(\tree{S})\upharpoonright \pi(\lambda)$ is a tree embedding\footnote{Here we are \textit{not} assuming it is an extended tree embedding, since that doesn't make sense as $\tree{S}\upharpoonright\lambda$ has limit length.} with $u(\xi)=\pi(\xi)$ and $t_\xi = \pi\upharpoonright M^\tree{S}_\xi$ for all $\xi<\lambda$. To extend $I\upharpoonright(\tree{S}\upharpoonright\lambda)$, we must let $v(\lambda)=\sup \pi" \lambda$. We first check \begin{claim} $v(\lambda)\leq_{\pi(\tree{S})} \pi(\lambda)$ and $\pi"[0,\lambda)_\tree{S}$ is a cofinal subset of $[0,v(\lambda))_{\pi(\tree{S})}$. \end{claim} \begin{proof} For $\xi<_\tree{S}\lambda$, $\pi(\xi)<_{\pi(\tree{S})} \pi(\lambda)$, so $\pi"[0,\lambda)_\tree{S}\subseteq[0,\pi(\lambda))_{\pi(\tree{S})}$.
Since branches of iteration trees are closed below their sup and $[0,\lambda)_\tree{S}$ is cofinal in $\lambda$, $v(\lambda)=\sup \pi"[0,\lambda)_\tree{S}\leq_{\pi(\tree{S})} \pi(\lambda)$. \hfill{$\qed$} \end{proof} Now $M_\lambda^\tree{S}=\lim_{\xi\in [0,\lambda)_\tree{S}} M_\xi^\tree{S}$ and $M_{v(\lambda)}^{\pi(\tree{S})}=\lim_{\xi\in [0,\lambda)_\tree{S}} M_{\pi(\xi)}^{\pi(\tree{S})}$, so $s_\lambda$ must be the unique map from $M^\tree{S}_\lambda$ into $M^{\pi(\tree{S})}_{v(\lambda)}$ such that for all $\xi<_\tree{S} \lambda$, $s_\lambda \circ \hat\imath^{\tree{S}}_{\xi, \lambda}=\hat\imath^{\pi(\tree{S})}_{\pi(\xi), v(\lambda)}\circ t_\xi$. Since we've stipulated $u(\lambda)=\pi(\lambda)$, we must put $t_\lambda = \hat\imath^{\pi(\tree{S})}_{v(\lambda),\pi(\lambda)}\circ s_\lambda$. So to finish the limit case we just need to check \begin{claim} $\hat\imath^{\pi(\tree{S})}_{v(\lambda),\pi(\lambda)}\circ s_\lambda= \pi\upharpoonright M^\tree{S}_\lambda$. \end{claim} \begin{proof} Let $x\in M_\lambda^\tree{S}$. Let $\xi<_\tree{S}\lambda$ and $\bar x\in M^\tree{S}_\xi$ be such that $\hat\imath^\tree{S}_{\xi, \lambda}(\bar x)= x$. Then \begin{align*} \pi(x)&= \pi(\hat\imath^\tree{S}_{\xi, \lambda}(\bar x))\\ &= \hat\imath^{\pi(\tree{S})}_{\pi(\xi), \pi(\lambda)}(\pi(\bar x))\\ &=\hat\imath^{\pi(\tree{S})}_{v(\lambda), \pi(\lambda)}\circ\hat\imath^{\pi(\tree{S})}_{\pi(\xi), v(\lambda)}\circ t_\xi(\bar x)\\ &= \hat\imath^{\pi(\tree{S})}_{v(\lambda), \pi(\lambda)}\circ s_\lambda \circ \hat\imath^{\tree{S}}_{\xi, \lambda}(\bar x)\\ &= \hat\imath^{\pi(\tree{S})}_{v(\lambda), \pi(\lambda)}\circ s_\lambda(x).
\end{align*} The second equality is just using the elementarity of $\pi$, the third splits up the branch embedding and uses our induction hypothesis that $\pi(\bar x)=t_\xi(\bar x)$ (since $\bar x\in M^\tree{S}_\xi$), the fourth uses $s_\lambda \circ \hat\imath^{\tree{S}}_{\xi, \lambda}=\hat\imath^{\pi(\tree{S})}_{\pi(\xi), v(\lambda)}\circ t_\xi$, as observed above, and the final equality just uses that $x=\hat\imath^\tree{S}_{\xi, \lambda}(\bar x)$. \hfill{$\qed$} \end{proof} For the successor case, suppose $I\upharpoonright (\tree{S}\upharpoonright\xi+1)$ is an extended tree embedding from $\tree{S}\upharpoonright\xi+1$ to $\pi(\tree{S})\upharpoonright \pi(\xi)+1$ and $t_\eta = \pi\upharpoonright M^\tree{S}_\eta$ for all $\eta\leq \xi$. Notice that $\pi(\xi+1)=\pi(\xi)+1$, so we have $u(\xi+1)=v(\xi+1)$, and so we only need to define $s_{\xi+1}=t_{\xi+1}$. We also have that $E^{\pi(\tree{S})}_{\pi(\xi)} = \pi(E^\tree{S}_\xi)=t_\xi(E^\tree{S}_\xi)$ and, for $\eta=\tree{S}\text{-pred}(\xi+1)$, $\pi(\eta)=\pi(\tree{S})\text{-pred} (\pi(\xi)+1)$, so we know that $s_{\xi+1}$ is given by the Shift Lemma applied to $t_\xi, t_\eta, E^\tree{S}_\xi$. To finish, we just need to check that \begin{claim} $s_{\xi+1}=\pi\upharpoonright M_{\xi+1}^\tree{S}$. \end{claim} \begin{proof} We'll just deal with the case in which we take 0-ultrapowers on both sides. Let $P\trianglelefteq M^\tree{S}_\eta$ be the level to which we apply $E^\tree{S}_\xi$. By the elementarity of $\pi$, $t_\eta(P)= \pi(P)$ is the level of $M^{\pi(\tree{S})}_{\pi(\eta)}$ to which we apply $E^{\pi(\tree{S})}_{\pi(\xi)}$. Let $a\in [\hat\lambda(E^\tree{S}_\xi)]^{<\omega}$ and $f:[\text{crit} (E^\tree{S}_\xi)]^{|a|}\to P$ with $f\in P$. We have that \begin{align*} \pi([a,f]^P_{E^\tree{S}_\xi})&=[\pi(a),\pi(f)]^{\pi(P)}_{E^{\pi(\tree{S})}_{\pi(\xi)}}\\ &= [t_\xi(a), t_\eta(f)]^{\pi(P)}_{E^{\pi(\tree{S})}_{\pi(\xi)}}\\ &=s_{\xi+1}([a,f]^P_{E^\tree{S}_\xi}), \end{align*} as desired.
\hfill{$\qed$} \end{proof} \qed \end{proof} This proposition naturally extends to meta-tree embeddings. The proof is straightforward using Proposition \ref{hull embeddings prop}, so we omit it. \begin{proposition}\label{hull embeddings prop 2} Let $\tree{T}$ be a nice, normal tree on $V$ and let $Q_\xi = M^\tree{T}_\xi$. Suppose $\nu\leq_\tree{T}\gamma$ and $\mtree{S}\in M^\tree{T}_\nu$ is a meta-tree on a normal tree on a premouse $M\in Q_\nu$ such that the extenders in $[\nu, \gamma)_\tree{T}$ have critical points $>|M|^+$. Then there is a (unique) extended meta-tree embedding from $\mtree{S}$ into $i^\tree{T}_{\nu,\gamma}(\mtree{S})$ with $u$-map $i^\tree{T}_{\nu, \gamma}\upharpoonright \text{lh} (\mtree{S})$ and $\Delta$-maps $\Delta_\xi=I^{\tree{T};\nu, \gamma}_{\tree{S}_\xi}$. \end{proposition} We now verify that we never stop building $\mtree{S}^*_{\nu,k}$ because we reach an extender disagreement coming from the background, i.e. case (ii)(b) doesn't occur. So suppose we've built $\mtree{S}^*_{\nu,k}\upharpoonright\xi+1$ with last model $P_\xi$ and let $\langle \eta,l\rangle$ be least such that \[ (M_{\nu,k}, \Omega_{\nu,k})|\langle \eta,l\rangle\not\parallel (P_\xi, \Sigma_{\mtree{S}^*_{\nu,k}\upharpoonright\xi+1, P_\xi})|\langle \eta,l\rangle. \] Then we have the following. \begin{lemma}\label{background doesn't move} If $l=0$, then $E^{M_{\nu,k}}_\eta=\emptyset$. \end{lemma} \begin{proof} Suppose not and let $G= E^{M_{\nu,k}}_\eta$. If $k=m+1$ for some $m$, then by our induction hypothesis and Lemma \ref{res map lemma 1}, $M_{\nu,k+1}$ is an initial segment of the last model of $\mtree{S}^*_{\nu,k+1}$, so $G$ must have been on the $P_\xi$-sequence after all. So $k=0$ and $G$ is the top extender of $M_{\nu,0}$. Let $\mtree{S}=\mtree{S}^*_{\nu,0}$. Let $G^*$ be the background for $G$ in $\mathbb{C}$. Let $\kappa=\text{crit} (G)=\text{crit} (G^*)$ and let $\pi: V\to Ult(V,G^*)$ be the ultrapower embedding.
Let $\vec{\Psi}=\langle \pi, \Psi_\xi\rangle$ be the extended meta-tree embedding from $\mtree{S}$ into $\pi(\mtree{S})=(\mtree{S}^*_{\pi(\nu), 0})^{\pi(\mathbb{C})}$ given by Proposition \ref{hull embeddings prop 2}. By hypothesis, $M, \tree{S}$ are fixed by $\pi$ and $\pi(\Sigma)\subseteq \Sigma$ and $\pi(A)\subseteq A$. So $\pi(\mtree{S})$ is by $\Sigma, A$. Since $\nu= \text{lh} (G)< \text{lh} (G^*)<\pi(\kappa)$, and $G^*$ is strong to its length, we have that $M_{\nu,0}^{\pi(\mathbb{C})}||\text{lh} (G)=M_{\nu,0}||\text{lh} (G)$. Since background certificates must be Mitchell-order minimal, we get that $M_{\nu,0}^{\pi(\mathbb{C})}$ is passive, i.e. $M_{\nu,0}^{\pi(\mathbb{C})}= M_{\nu,0}||\text{lh} (G)$. Since $G$ is the trivial completion of the $(\kappa, \hat\lambda(G))$-extender derived from $\pi\upharpoonright M_{\nu,0}$, we get that \[M_{\pi(\nu),0}^{\pi(\mathbb{C})}|\text{lh} (G) = \pi(M_{\nu,0})|\text{lh} (G) = M_{\nu,0}||\text{lh} (G)=M_{\nu,0}^{\pi(\mathbb{C})}.\] Since $\mtree{S}$ is obtained by comparing $\tree{S}$ and $M_{\nu,0}$ by least extender disagreements and below $\text{lh} (\mtree{S})$ cases (ii)(b) and (ii)(c) never occur, by the elementarity of $\pi$, $\pi(\mtree{S})$ is obtained by comparing $\tree{S}$ and $M_{\pi(\nu),0}^{\pi(\mathbb{C})}$ by least extender disagreements and below $\text{lh} (\pi(\mtree{S}))$ cases (ii)(b) and (ii)(c) never occur. By the agreement between $M_{\nu,0}$ and $M_{\pi(\nu),0}^{\pi(\mathbb{C})}$ observed above, and since both meta-trees are by $\Sigma, A$, we get that $\mtree{S}\trianglelefteq \pi(\mtree{S})$. Since $\kappa=\text{crit} (\pi)$, we get $\kappa\leq_{\pi(\mtree{S})}\pi(\kappa)$ and there is no dropping (of any kind) between $\kappa$ and $\pi(\kappa)$. So $\Phi=\Phi_{\kappa,\pi(\kappa)}^{\pi(\mtree{S})}$ is a total extended tree embedding from $\tree{S}_\kappa$ into $\pi(\tree{S}_\kappa)$.
Moreover, it is easy to see that $\Phi=\Psi_\kappa$, since both have $u$-map and $t$-maps the appropriate restrictions of $\pi$.\footnote{$\Psi_\kappa$ is the unique tree embedding with this property; to see that $\Phi$ has this property, we just use that $\kappa$ is the critical point of $\pi$ and the directed system of trees whose limit is $\tree{S}_{\pi(\kappa)}$ is the image under $\pi$ of the directed system of trees whose limit is $\tree{S}_\kappa$. We leave the details to the reader.} Now, $\text{lh} (\mtree{S})\leq \nu+1< \pi(\kappa)$ and $M_{\nu,0}^{\pi(\mathbb{C})}$ is an initial segment of the last model of $\mtree{S}\trianglelefteq \pi(\mtree{S})$, so since all meta-tree exit extenders of $\pi(\mtree{S})$ used after $\mtree{S}$ have length strictly greater than $\text{lh} (G)$ (as $G$ is not on the sequence of the last model of $\mtree{S}$), we have that $M_{\nu, 0}^{\pi(\mathbb{C})}$ is an initial segment of the last model of $\tree{S}_\xi^{\pi(\mtree{S})}$ for any $\xi\geq \text{lh} (\mtree{S})$. Let $\lambda+1$ be the successor of $\kappa$ in $(\kappa,\pi(\kappa)]_{\pi(\mtree{S})}$. $F_\lambda^{\pi(\mtree{S})}$ is compatible with $G$ since both are initial segments of the extender of $t^\Phi_\kappa$.\footnote{For $F_\lambda^{\pi(\mtree{S})}$, this is just because $t^{\Phi_{\kappa,\lambda+1}}_\kappa $ is just the ultrapower map by $F_\lambda^{\pi(\mtree{S})}$; for $G$, this is because $t^\Phi_\kappa$ is just $\pi\upharpoonright M_\kappa^{\tree{S}_\kappa}$ and $G$ is total on $M_{\pi(\kappa)}^{\tree{S}_{\pi(\kappa)}^{\pi(\mtree{S})}}||\text{lh} (G)$, which is either an initial segment of $M_\kappa^{\tree{S}_\kappa}$ or has the same $P(\kappa)$ as it.} Now if $\lambda<\text{lh} (\mtree{S})$, then $\text{lh} (F_\lambda^{\pi(\mtree{S})})<\text{lh} (G)$, so that $F_\lambda^{\pi(\mtree{S})}$ is on the $M_{\nu,0}$-sequence by the Jensen initial segment condition. But then $F_\lambda^{\pi(\mtree{S})}$ isn't an extender disagreement, a contradiction.
So $\lambda\geq \text{lh} (\mtree{S})$. But then $\text{lh} (F_\lambda^{\pi(\mtree{S})})\geq \text{lh} (G)$, so, since $F_\lambda^{\pi(\mtree{S})}$ and $G$ are compatible, we get that $G$ is on the sequence of the last model of $\tree{S}_\lambda^{\pi(\mtree{S})}$. But $M_{\pi(\nu),0}^{\pi(\mathbb{C})}$ agrees with the last model of $\tree{S}_\lambda^{\pi(\mtree{S})}$ through $F_\lambda^{\pi(\mtree{S})}$, so $G$ is on the sequence of $M_{\pi(\nu),0}^{\pi(\mathbb{C})}$, a contradiction (as observed above, $M_{\pi(\nu),0}^{\pi(\mathbb{C})}|\langle\text{lh} (G),0\rangle$ is passive). \qed \end{proof} All that remains is to verify that no strategy disagreements show up, i.e. (ii)(c) never occurs. To do this, we generalize the proof of Theorem 8.4.3 of \cite{nitcis}. This generalization is straightforward, but quite involved, so we give a sketch of it here. Before we start, we note that we can quote that result to prove the present theorem in the case when $\Sigma=\Lambda^*_\tree{S}$ for some strategy $\Lambda$ for the base model $M$ such that $(M,\Lambda)$ is a mouse pair. Suppose we're in this case. Let $\delta+1=\text{lh} (\mtree{S}^*_{\nu,k})$ and let $\langle \eta, l\rangle$ be such that $P_\delta|\langle \eta, l\rangle = M_{\nu,k}|\langle \eta, l\rangle$. Let $\gamma<\text{lh} (\tree{S}_\delta)$ be least such that $M_{\nu,k}|\langle \eta, l\rangle \trianglelefteq M^{\tree{S}_\delta}_\gamma$. Then $\tree{S}_\delta\upharpoonright\gamma+1$ is the unique shortest tree which is by $\Lambda$ and has $M_{\nu,k}|\langle \eta, l\rangle$ as an initial segment of its last model. So then $\tree{S}_\delta\upharpoonright\gamma+1\trianglelefteq \tree{W}^*_{\nu,k}$, the comparison of $M$ with $M_{\nu,k}$ via $\Lambda$.
By (the proof of) Theorem 8.4.3 of \cite{nitcis}, we have that \[( P_\delta|\langle \eta, l\rangle, \Lambda_{\tree{S}_\delta\upharpoonright\gamma+1, P_\delta|\langle \eta,l\rangle}) = (M_{\nu,k}, \Omega_{\nu, k})|\langle \eta,l\rangle.\] But $(M,\Lambda)$ is internally lift consistent, so \[\Lambda_{\tree{S}_\delta, P_\delta}|\langle \eta, l \rangle =\Lambda_{\tree{S}_\delta\upharpoonright\gamma+1, P_\delta|\langle \eta,l\rangle}.\] By the definition of $\Lambda^*$, $\Sigma_{\mtree{S}_{\nu,k}^*, P_\delta}= \Lambda_{\tree{S}_\delta, P_\delta}$, so \[( P_\delta, \Sigma_{\mtree{S}_{\nu,k}^*, P_\delta})| \langle\eta, l\rangle = (M_{\nu,k}, \Omega_{\nu, k})|\langle \eta, l\rangle,\] as desired. We now turn to the general case of establishing that no strategy disagreements show up. Suppose now that we have built $\mtree{S} = \mtree{S}^*_{\nu,k}\upharpoonright\xi+1$ and lined its last model $P$ up with $M_{\nu,k}$ up to $\langle \eta, l\rangle$. It suffices to show the iteration strategies agree on $\lambda$-separated trees, so let $\tree{U}$ be a $\lambda$-separated plus tree of limit length on $P|\langle \eta, l\rangle= M_{\nu,k} |\langle \eta, l\rangle$ which is by both $\Sigma_{\mtree{S}, P}$ and $\Omega_{\nu,k}$. We show that $\Sigma_{\mtree{S}, P}(\tree{U})=\Omega_{\nu,k}(\tree{U})$. Let \[c=\langle M_{\nu, k}, id, M_{\nu,k},\mathbb{C}, V\rangle,\] \[\text{lift}(\tree{U},c)=\langle \tree{U}^*,\langle c_\alpha\mid\alpha<\text{lh}(\tree{U})\rangle\rangle,\] and \[c_\alpha=\langle M_\alpha^\tree{U}, \psi_\alpha, Q_\alpha, \mathbb{C}_\alpha, S_\alpha\rangle.\] A reflection argument, as in \cite{nitcis} Lemma 8.4.27, gives that $\tree{U}^*$ has a cofinal wellfounded branch, so it suffices to show that for any cofinal wellfounded branch $b$ of $\tree{U}^*$, $b=\Sigma_{\mtree{S}, P}(\tree{U})$.
Then it follows that $\Sigma_{\mtree{S}, P}(\tree{U})=\Omega_{\nu,k}(\tree{U})$, since the definition of $\Omega_{\nu,k}$ as a (partial) iteration strategy gives $\Omega_{\nu,k}(\tree{U})=b$ iff $b$ is the unique cofinal branch of $\tree{U}^*$. So this establishes that there are no strategy disagreements, finishing the proof. So we just need to prove the following lemma. \begin{lemma}\label{main comparison lemma} If $b$ is a cofinal wellfounded branch of $\tree{U}^*$, then $\Sigma_{\mtree{S}, P}(\tree{U})=b$. \end{lemma} \begin{proof}[Proof.] For $\langle \eta,j\rangle\leq_{\text{lex}} i_{0,\gamma}^{\tree{U}^*}(\langle \nu, k\rangle)$, we write $(\mtree{S}^*_{\eta,j})^{S_\gamma}$ to denote $i^{\tree{U}^*}_{0,\gamma}(\langle \zeta, l\rangle \mapsto \mtree{S}^*_{\zeta, l})(\eta, j)$. It is easy to see that our hypotheses give that $M$, $\tree{S}$ are fixed by $i_{0,\gamma}^{\tree{U}^*}$ and that $i^{\tree{U}^*}_{0,\gamma}(\Sigma)\subseteq \Sigma$ and $i^{\tree{U}^*}_{0,\gamma}(A)\subseteq A$, so $(\mtree{S}^*_{\nu,k})^{S_\gamma}$ is by $\Sigma, A$. As $M_b^{\tree{U}^*}$ is wellfounded, in addition to the $c_\alpha$, we also have a last conversion stage \[c_b=\langle M_b^\tree{U}, \psi_b, Q_b, \mathbb{C}_b, S_b\rangle\] in $\text{lift}(\tree{U}{}^\frown b, c)$. For $\gamma<\text{lh}(\tree{U})$ or $\gamma=b$, let \begin{align*} \langle \eta_\gamma, l_\gamma\rangle&=\text{ the unique }\langle \eta, l\rangle\text{ such that }Q_\gamma=M_{\eta, l}^{\mathbb{C}_\gamma},\\ \mtree{S}^*_\gamma&=(\mtree{S}^*_{\eta_\gamma, l_\gamma})^{S_\gamma},\\ N_\gamma&=M_\infty^{\mtree{S}^*_\gamma}. \end{align*} So $Q_\gamma\trianglelefteq N_\gamma$, the last model of $\mtree{S}^*_\gamma$; moreover, $\mtree{S}^*_\gamma$ is the unique $\lambda$-separated meta-tree on $\tree{S}$ by $\Sigma,A$ which iterates $\tree{S}$ past $Q_\gamma$, and if $\nu<_\tree{U}\gamma$ and $(\nu, \gamma]_\tree{U}$ doesn't drop, then $i^{\tree{U}^*}_{\nu, \gamma}(\mtree{S}^*_\nu)=\mtree{S}^*_\gamma$.
Note that $\mtree{S}^*_0$ is our initial meta-tree $\mtree{S}$. For simplicity, we will assume that $Q_0=N_0$, so that $\tree{U}=\tree{U}^+$ (this makes little difference). Let $\tree{T}$ be the last tree of $\mtree{S}=\mtree{S}^*_0$. Let $\mtree{U}=\mtree{V}(\tree{T}, \tree{U}{}^\frown b)$. By the definition of $\Sigma_{\mtree{S}, P}$, to show $b=\Sigma_{\mtree{S}, P}(\tree{U})$, it suffices to show $\langle \mtree{S}, \mtree{U}\rangle$ is by $\Sigma$. To do this, it suffices to show $\mtree{V}(\mtree{S}, \mtree{U})$ is by $\Sigma$, since $\Sigma$ normalizes well. Since $\Sigma$ has meta-hull condensation, this follows immediately from the following claim. \begin{sublemma}\label{main sublemma} There is an extended meta-tree embedding from $\mtree{V}(\mtree{S}, \mtree{U})$ into $\mtree{S}^*_b$. \end{sublemma} We'll have to consider stages of the meta-tree normalization $\mtree{V}(\mtree{S},\mtree{U})$. Set \begin{align*} \mtree{V}_\gamma &= \mtree{V}(\mtree{S}, \mtree{U}\upharpoonright (\gamma+1)) \text{ for $\gamma < \text{lh} (\mtree{U})$, and}\\ \mtree{V}_b &= \mtree{V}(\mtree{S}, \mtree{U}). \end{align*} For $\gamma<\text{lh}(\tree{U})$ or $\gamma=b$, let $R_\gamma$ be the last model of $\mtree{V}_\gamma$. For $\gamma<\text{lh}(\tree{U})$, we have that the last tree of $\mtree{V}_\gamma$ is $V(\tree{T}, \tree{U}\upharpoonright \gamma+1)$ and the last tree of $\mtree{V}_b$ is $V(\tree{T}, \tree{U}{}^\frown b)$. In particular, for $\gamma<\text{lh}(\tree{U})$ or $\gamma=b$, the models $R_\gamma$ are the last models of these quasi-normalization trees, so that the associated quasi-normalization maps $\sigma_\gamma$ are nearly elementary maps from $M_\gamma^\tree{U}$ into $R_\gamma$. For $\gamma<\text{lh} (\tree{U})$, let $F_\gamma=\sigma_\gamma(E^\tree{U}_\gamma)$, $a_\gamma = a(F_\gamma, \mtree{V}_\gamma)$, and $b_\gamma=b(F_\gamma, \mtree{V}_\gamma)$.
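To keep this setup in view, the definitions just given can be summarized in one display (for $\gamma<\text{lh}(\tree{U})$; the case $\gamma=b$ is analogous, with $V(\tree{T}, \tree{U}{}^\frown b)$):
\[
\sigma_\gamma: M^\tree{U}_\gamma \longrightarrow R_\gamma = M_\infty^{V(\tree{T},\, \tree{U}\upharpoonright \gamma+1)},\qquad F_\gamma=\sigma_\gamma(E^\tree{U}_\gamma).
\]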
If $\nu\leq_\tree{U} \gamma$, let $\vec{\Phi}^{\nu, \gamma}$ be the partial extended meta-tree embedding from $\mtree{V}_\nu$ into $\mtree{V}_\gamma$ and let $\vec{I}^{\nu, \gamma}$ be the extended meta-tree embedding from $\mtree{S}_\nu^*$ into $\mtree{S}_\gamma^*$ coming from Proposition \ref{hull embeddings prop 2}. Let $H_\gamma= \psi_\gamma (E^\tree{U}_\gamma)$, $res_\gamma = (\sigma_{\eta_\gamma, l_\gamma}[M_{\eta_\gamma,l_\gamma}| \text{lh} (H_\gamma)])^{S_\gamma}$, $G_\gamma = res_\gamma(H_\gamma)$, and $G^*_\gamma = E^{\tree{U}^*}_\gamma$. So $G_\gamma$ comes from resurrecting $P= N_\gamma|\text{lh} (H_\gamma)$ inside $S_\gamma$ and $G_\gamma^*$ is the corresponding background extender. Let $\tau_\gamma$ be least such that $P$ is an initial segment of the last model of $\tree{S}_{\tau_\gamma}^{\mtree{S}^*_\gamma}$. By induction on $\gamma<\text{lh} (\tree{U})$ or $\gamma=b$, we build extended meta-tree embeddings $\vec \Delta^\gamma=\langle u^\gamma, v^\gamma, \{\Psi^\gamma_\xi\}, \{\Delta^\gamma_\xi\}\rangle: \mtree{V}_\gamma \to \mtree{S}_\gamma^*$, maintaining that for all $\nu<\eta\leq \gamma$, \begin{enumerate} \item $\vec \Delta^\nu \upharpoonright a_\nu+1 \approx \vec \Delta^\eta \upharpoonright a_\nu +1$; \item if $\nu\leq_\tree{U} \eta$ and $\nu$-to-$\eta$ doesn't drop, then $\vec{\Delta}^\eta\circ \vec{\Phi}^{\nu, \eta}= \vec{I}^{\nu, \eta}\circ \vec{\Delta}^\nu$; \item for $s^\eta, t^\eta$ the last $s$-map and $t$-map of $\Delta^\eta_{\infty}$, respectively, \begin{itemize} \item[(a)]$s^\eta \upharpoonright\text{lh} (F_\nu) +1 = res_\nu \circ t^\nu \upharpoonright \text{lh} (F_\nu) +1$, and \item[(b)]$\psi_\eta = t^\eta\circ \sigma_\eta$; \end{itemize} \item $G_\nu$ is a meta-tree exit extender of $\mtree{S}^*_\eta$ and, for $\xi_\nu$ such that $G_\nu = F_{\xi_\nu}^{\mtree{S}^*_\eta}$, \begin{itemize} \item [(a)]$\tau_\nu\leq_{\mtree{S}^*_\eta}{\xi_\nu}\leq_{\mtree{S}^*_\eta}v^{\eta}(a_\eta)$, \item [(b)] for $t^{\tau_\nu, \xi_\nu}$ the last $t$-map along
$(\tau_\nu,{\xi_\nu}]_{\mtree{S}^*_\eta}$, $t^{\tau_\nu, \xi_\nu}\upharpoonright \text{lh} (H_\nu) +1 = res_\nu \upharpoonright \text{lh} (H_\nu) +1$. \end{itemize} \end{enumerate} The bulk of the work is seeing that these hypotheses pass through the successor case. In our sketch, we'll just handle the non-dropping successor case, omitting discussion of the dropping successor case and the limit case. Let $\nu=\tree{U}\text{-pred}(\gamma+1)$ and suppose (1)-(4) hold up to $\gamma$. Applying Lemma \ref{res map lemma 2} to $P=N_\gamma|\text{lh} (H_\gamma)$ in $S_\gamma$ with $\langle \theta, j\rangle = \langle \eta_\gamma, l_\gamma\rangle$, we get a meta-tree $\mtree{T}_\gamma = (\mtree{S}^*_{\theta_0,j_0})^{S_\gamma}$ such that $\mtree{T}_\gamma\upharpoonright\tau_\gamma+1= \mtree{S}^*_\gamma\upharpoonright \tau_\gamma+1$\footnote{Here we are relying on the definition $\mtree{S}^*_\gamma= (\mtree{S}^*_{\eta_\gamma, l_\gamma})^{S_\gamma}$.}, $\tau_\gamma \leq_{\mtree{T}_\gamma} \xi_\gamma =_\text{def} \text{lh} (\mtree{T}_\gamma)-1$, and, for $t^{\tau_\gamma, \xi_\gamma}_\infty$ the last $t$-map of $\Phi^{\mtree{T}_\gamma}_{\tau_\gamma,\xi_\gamma}$, $t^{\tau_\gamma, \xi_\gamma}_\infty\upharpoonright P = res_\gamma$. Note that since $G_\gamma= res_\gamma(H_\gamma)$, $G_\gamma^-$ is on the sequence of the last model of $\mtree{T}_\gamma$. Let $N^*_\gamma =M_{\theta_0,j_0}^{S_\gamma}$. Now suppose $(\nu, \gamma+1]_\tree{U}$ doesn't drop. As mentioned above, this is the only case we'll consider. It contains most of the ideas needed for the dropping case but is somewhat simpler. \begin{claim} $\mtree{T}_\gamma\trianglelefteq \mtree{S}^*_{\gamma+1}$ and $G_\gamma=F^{\mtree{S}^*_{\gamma+1}}_{\xi_\gamma}$ is the first factor of $\vec{I}^{\nu,\gamma+1}$. \end{claim} \begin{proof} We first show that $N_{\gamma+1} ||\text{lh} (G_\gamma) = N^*_\gamma|| \text{lh} (G_\gamma)$. Let $\mu =\text{crit} (F_\gamma)$ and $\bar\mu =\text{crit} (E^\tree{U}_\gamma)$.
By our case hypothesis, $E^\tree{U}_\gamma$ is total on $M^\tree{U}_\nu$, so no level of $M^\tree{U}_\nu$ beyond $\text{lh} (E^\tree{U}_\nu)$ projects to or below $\bar\mu$. So, applying $\sigma_\nu$, no level of $R_\nu$ beyond $\text{lh} (F_\nu)$ projects to or below $\mu = \sigma_\nu(\bar\mu)$ (using here that $\sigma_\nu$ agrees with $\sigma_\gamma$ up to $\text{lh} (F_\nu)+1> \mu$). Applying $t^\nu$ and using our induction hypothesis (3)(b) at $\nu$, no level of $N_\nu$ beyond $\text{lh} (H_\nu)$ projects to or below $t^\nu(\mu)$, and so $res_\nu$ is the identity on ${t^\nu(\mu)^+}^{N_\nu}$. Now since $G_\nu = res_\nu(H_\nu)$, $\text{crit} (G_\nu) =res_\nu(t^\nu(\mu))=t^\nu(\mu)$ and so ${t^\nu(\mu)^+}^{N_\nu^*}<\hat\lambda(G_\nu)$. So we get \[ N_\nu|{t^\nu(\mu)^+}^{N_\nu}=N^*_\nu|{t^\nu(\mu)^+}^{N_\nu^*}=N_\gamma|{t^\nu(\mu)^+}^{N_\gamma}.\] Let $\lambda$ be this common value of $t^\nu(\mu)^+$. If $\nu=\gamma$, the above shows that $N_\gamma|\lambda= N^*_\gamma|\lambda$. If $\nu<\gamma$, then no proper initial segment of $M_\gamma^\tree{U}$ projects to or below $\text{lh} (E_\nu^\tree{U})$, so no proper initial segment of $N_\gamma$ projects to or below $\text{lh} (H_\nu)$, so $res_\gamma$ is the identity on all of $\text{lh} (H_\nu)$. So even when $\nu<\gamma$, we get $N_\gamma|\lambda= N^*_\gamma|\lambda$. Applying $\pi=i^{S_\nu}_{G^*_\gamma}$, we get \[\pi(N_\gamma|\lambda)= \pi(N^*_\gamma|\lambda).\] Now by our choice of background extender $G^*_\gamma$, $\pi(N^*_\gamma)|\text{lh} (G_\gamma)+1 = Ult(N^*_\gamma, G_\gamma)|\text{lh} (G_\gamma)+1$. By our case hypothesis, $N_{\gamma+1}= \pi(N^*_\nu)$, so the agreement between the models identified above gives \[N_{\gamma+1}||\text{lh} (G_ \gamma) = N^*_\gamma||\text{lh} (G_\gamma),\] as desired. It follows that $\mtree{S}^*_{\gamma+1}$ and $\mtree{T}_\gamma$ use the same meta-tree exit extenders of length $<\text{lh} (G_\gamma)$.
But $G_\gamma^-$ is the top extender of $N^*_\gamma$, so $\mtree{T}_\gamma$ uses no extenders of length $\geq \text{lh}(G_\gamma)$, hence $\mtree{T}_\gamma\trianglelefteq \mtree{S}^*_{\gamma+1}$. As $G_\gamma^-$ is on the sequence of the last model of $\mtree{T}_\gamma$ but not on the $N_{\gamma+1}|\text{lh} (G_\gamma)+1=Ult(N^*_\gamma, G_\gamma)|\text{lh} (G_\gamma)+1$-sequence, the first new exit extender of $\mtree{S}^*_{\gamma+1}$, $F^{\mtree{S}^*_{\gamma+1}}_{\xi_\gamma}$, must have length $\leq \text{lh} (G_\gamma)$. But then we must have $F^{\mtree{S}^*_{\gamma+1}}_{\xi_\gamma} = G_\gamma$, since $\mtree{S}^*_{\gamma+1}$ and $\mtree{T}_\gamma$ use all the same extenders of shorter lengths. To finish the claim, we just need to see that the meta-tree embedding Factor Lemma applies to $\vec{I}^{\nu, \gamma+1}$ and that $G_\gamma$ is the corresponding factor. Using that the $u$-map of $\vec{I}^{\nu, \gamma+1}$ and the $u$-maps and $t$-maps of the $\Delta$-maps of $\vec{I}^{\nu, \gamma+1}$ are restrictions of $\pi$, one can show that the first meta-tree extender used along $(\kappa, \pi(\kappa)]_{\mtree{T}_\gamma}$ is $G_\gamma$ (since it is compatible with $G_\gamma$, and distinct compatible extenders can't be used in a meta-tree, since they can't be used in a plus tree). But then the agreement between $\mtree{T}_\gamma$ and $\mtree{S}_\nu^*$ guarantees that the Factor Lemma applies and so $G_\gamma$ is the desired factor. \qed \end{proof} Since $G_\gamma$ is the first factor of $\vec{I}^{\nu, \gamma+1}$, we let $\vec{\Pi}^{\gamma+1}$ be the extended meta-tree embedding from $\mtree{V}(\mtree{S}^*_\nu, \mtree{T}_\gamma, G_\gamma)$ into $\mtree{S}^*_{\gamma+1}$ such that $\vec{I}^{\nu,\gamma+1}= \vec{\Pi}^{\gamma+1}\circ \vec{\Delta}^{\mtree{V}(\mtree{S}^*_\nu, \mtree{T}_\gamma, G_\gamma)}$ and $u^{\vec{\Pi}^{\gamma+1}}\upharpoonright \xi_\gamma+1=id$, guaranteed by the meta-tree embedding Factor Lemma (using here that $a(\mtree{S}^*_{\gamma+1}, G_\gamma)=\xi_\gamma$).
By our induction hypothesis $(3)(b)$ at $\gamma$, $H_\gamma=t^\gamma(F_\gamma)$, so $\vec{\Delta}^\gamma\upharpoonright a_\gamma+1: \mtree{V}_\gamma\upharpoonright a_\gamma+1\to \mtree{S}^*_\gamma\upharpoonright{\tau_\gamma}+1\trianglelefteq \mtree{T}_\gamma$ is a meta-tree embedding. Since $\tau_\gamma\leq_{\mtree{T}_\gamma} \xi_\gamma=\text{lh} (\mtree{T}_\gamma)-1$ and $\tau_\gamma\in [v^\gamma(a_\gamma), u^{\gamma}(a_\gamma)]_{\mtree{S}^*_\gamma}$ (using here that these meta-trees are $\lambda$-separated), we can view $\vec{\Delta}^\gamma\upharpoonright a_\gamma+1$ as an extended meta-tree embedding from $\mtree{V}_\gamma\upharpoonright a_\gamma+1$ into $\mtree{T}_\gamma$, which we'll denote $(\vec{\Delta}^\gamma)^*$. That is, we let the last $\Delta$-tree embedding of $(\vec{\Delta}^\gamma)^*$ be $\Phi^{\mtree{T}_\gamma}_{v^\gamma(a_\gamma), \xi_\gamma}\circ \Gamma^\gamma_{a_\gamma}$. Now, one can show that the last $t$-map of $\Phi^{\mtree{T}_\gamma}_{v^\gamma(a_\gamma), \tau_\gamma}\circ \Gamma^\gamma_{a_\gamma}$ maps $F_\gamma$ to $H_\gamma$. Since the last $t$-map of $\Phi^{\mtree{T}_\gamma}_{\tau_\gamma, \xi_\gamma}$ agrees with $\text{res}_\gamma$ on $N_\gamma|\text{lh} (H_\gamma)$ (and $\text{res}_\gamma(H_\gamma)=G_\gamma$), we have that $G_\gamma$ is the image of $F_\gamma$ under the last $t$-map of $(\vec{\Delta}^\gamma)^*$. By our induction hypotheses and the observations we've already made about $(\vec{\Delta}^\gamma)^*$, it is straightforward to check the following. \begin{claim} The meta-tree Shift Lemma applies to $((\vec{\Delta}^\gamma)^*, \vec{\Delta}^\nu, F_\gamma, G_\gamma)$. \end{claim} So, letting $\vec{\Psi}^{\gamma+1}:\mtree{V}_{\gamma+1}\to\mtree{V}(\mtree{S}^*_\nu, \mtree{T}_\gamma, G_\gamma)$ be the associated copy meta-tree embedding, we finally set $\vec{\Delta}^{\gamma+1}= \vec{\Pi}^{\gamma+1}\circ \vec{\Psi}^{\gamma+1}$.
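In diagram form, the definition of $\vec{\Delta}^{\gamma+1}$ just composes the two embeddings obtained above:
\[
\mtree{V}_{\gamma+1}\ \xrightarrow{\ \vec{\Psi}^{\gamma+1}\ }\ \mtree{V}(\mtree{S}^*_\nu, \mtree{T}_\gamma, G_\gamma)\ \xrightarrow{\ \vec{\Pi}^{\gamma+1}\ }\ \mtree{S}^*_{\gamma+1},
\]
where $\vec{\Psi}^{\gamma+1}$ comes from the meta-tree Shift Lemma and $\vec{\Pi}^{\gamma+1}$ from the meta-tree embedding Factor Lemma.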
This is clearly an extended meta-tree embedding from $\mtree{V}_{\gamma+1}$ into $\mtree{S}^*_{\gamma+1}$ and it is straightforward to verify that our induction hypotheses are maintained. This finishes the non-dropping successor case and our sketch of the proof of Sublemma \ref{main sublemma}, thereby finishing the proof of Lemma \ref{main comparison lemma} and Theorem \ref{main comparison theorem}. \qed\end{proof} The next theorem is our main application of Theorem \ref{main comparison theorem}: a characterization of meta-strategies of the form $\Sigma^*$. \begin{theorem}\label{induced strategy theorem} Assume $\mathsf{AD^+}$. Let $(M,\Sigma)$ be a strongly stable mouse pair with scope $\textrm{HC}$. Let $\tree{S}$ be a countable plus tree of successor length by $\Sigma$. Let $\Lambda$ be a meta-strategy for finite stacks of meta-trees on $\tree{S}$ which has meta-hull condensation, normalizes well, has the Dodd-Jensen property relative to $\Sigma$, and is pushforward consistent, if $(M,\Sigma)$ is an lbr hod pair. Then $\Lambda=\Sigma^*_\tree{S}$. \end{theorem} This is a consequence of the following lemma. \begin{lemma}\label{induced strategy lemma} Assume $\mathsf{AD^+}$. Let $(M,\Sigma)$ be a strongly stable mouse pair with scope $\textrm{HC}$. Let $\tree{S}$ be a countable plus tree of limit length by $\Sigma$. Suppose $c$ is a cofinal, wellfounded branch of $\tree{S}$ and $\Lambda$ is a meta-strategy for finite stacks of meta-trees on $\tree{S}{}^\frown c$ which has meta-hull condensation, normalizes well, has the Dodd-Jensen property relative to $\Sigma$, and is pushforward consistent, if $(M,\Sigma)$ is an lbr hod pair. Then $\Sigma(\tree{S})=c$.\end{lemma} In the course of the proof, we shall make use of the following fact about meta-trees, which is a variant of Lemma 6.6.14 of \cite{nitcis} with essentially the same proof.
\begin{proposition}\label{incompatibility prop} Let $\mtree{T}$ be a meta-tree on a plus tree $\tree{T}$ and $\gamma,\delta<\text{lh}(\mtree{T})$ $\leq_\mtree{T}$-incomparable. Let $\eta=\sup([0,\gamma)_\mtree{T}\cap [0,\delta)_\mtree{T})$, $\bar\alpha\geq \text{crit}(u^{\Phi_{\eta,\gamma}^\mtree{T}})$, $\bar\epsilon\geq\text{crit}(u^{\Phi_{\eta, \delta}^\mtree{T}})$, $\alpha=u^{\Phi_{\eta,\gamma}^\mtree{T}}(\bar\alpha)$, and $\epsilon=u^{\Phi_{\eta,\delta}^\mtree{T}}(\bar\epsilon)$. Then $e_{0,\alpha}^{\tree{T}_\gamma} \bot e_{0,\epsilon}^{\tree{T}_\delta}$. \end{proposition} \begin{proof}[Proof of Lemma \ref{induced strategy lemma}.] By the Basis Theorem we may assume $\Sigma$, $\Lambda$ are Suslin-co-Suslin (in the codes) and work in an appropriate coarse $\Gamma$-Woodin tuple such that $\Sigma$ and $\Lambda$ are coded by sets of reals in $\Gamma$.\\ Fix $\tree{S}$, $c$, and $\Lambda$ as above. Let $b=\Sigma(\tree{S})$. We'll apply Theorem \ref{main comparison theorem} twice: once to $\tree{S}{}^\frown b$ with the induced meta-strategy $\Sigma^*_{\tree{S}{}^\frown b}$ and once to $\tree{S}{}^\frown c$ with $\Lambda$, in each case using some sets $A_b$ and $A_c$ which use information coming from the other meta-strategy. The idea behind the choices of these sets is to capture the following rules for simultaneously comparing $\tree{S}{}^\frown b$ and $\tree{S}{}^\frown c$ against levels of the background. Fix a level $\langle \nu, k\rangle$. We build meta-trees $\mtree{S}^b$ and $\mtree{S}^c$ by $\Sigma^*$ and $\Lambda$, respectively, in stages. At every stage we'll drop to the shortest tree which witnesses a disagreement between $\Sigma^*$ and $\Lambda$, in the following sense. 
At successor stages, given $\mtree{S}^b\upharpoonright \xi+1$ and $\mtree{S}^c\upharpoonright \zeta +1$, we extend the meta-trees by hitting the least disagreements $F^b_\xi$ and $F^c_\zeta$ between the current last models and $M_{\nu,k}$ (using here that least disagreements are extender disagreements which don't come from the background). If the new main branches $[0,\xi+1]_{\mtree{S}^b}$ and $[0,\zeta+1]_{\mtree{S}^c}$ have the same exit extenders (in particular, $F^b_\xi=F^c_\zeta$), then we extend $\mtree{S}^b\upharpoonright \xi+1$ and $\mtree{S}^c\upharpoonright \zeta +1$ by these extenders without gratuitously dropping. Otherwise, we drop to $\alpha_0(\tree{S}^b_\xi, F^b_\xi)+2$ and $\alpha_0(\tree{S}^c_\zeta, F^c_\zeta)+2$, respectively (this may be a gratuitous drop or just a necessary drop). Similarly, if we're at a stage where we've defined $\mtree{S}^b\upharpoonright \xi$ and $\mtree{S}^c\upharpoonright\zeta$ for limit ordinals $\xi$ and $\zeta$, we extend these meta-trees by choosing $\Sigma^*(\mtree{S}^b\upharpoonright \xi)$ and $\Lambda(\mtree{S}^c\upharpoonright \zeta)$ and gratuitously dropping to the supremum of the respective $\alpha_0$'s whenever these branches use different exit extenders. We can capture this simultaneous comparison by considering two applications of Theorem \ref{main comparison theorem} using appropriately chosen sets $A_b$ and $A_c$.
First, we apply Theorem \ref{main comparison theorem} to $\tree{S}{}^\frown b$, $\Sigma^*$, and $A_b$ the set of pairs of trees and ordinals $\langle \tree{T}, \gamma\rangle$ such that there is a meta-tree $\mtree{T}=\langle \{\tree{T}_\eta\}, \{F_\eta\}\rangle$ by $\Sigma^*$ on $\tree{S}{}^\frown b$ of length $\xi+1$ such that \begin{enumerate} \item[(i)] $\tree{T}=\tree{T}_\xi=\tree{T}_\xi^+$, \item[(ii)]$\gamma=\sup\{\alpha_\eta^\mtree{T}+1\,|\,\eta<\xi\}<\text{lh} (\tree{T})$, and \item[(iii)] for $P$ the last model of $\tree{T}$ and for any meta-tree $\mtree{U}$ by $\Lambda$ with some last model $Q$ such that $P|\delta(\mtree{T}) = Q|\delta(\mtree{T})$, the exit extenders used along the main branch of $\mtree{T}$ are different from the exit extenders used along the main branch of $\mtree{U}$. \end{enumerate} The closure properties of Suslin sets guarantee that $A_b$ is also Suslin-co-Suslin (in the codes), since $\Sigma$ and $\Lambda$ are. Moreover, we can choose our pointclass $\Gamma$ sufficiently large at the outset so that $A_b$ (and also $A_c$, which is defined similarly) are coded by sets of reals in $\Gamma$. \\ Now let $\mtree{S}^b_{\nu,k}$ be the resulting meta-trees by $\Sigma^*,A_b$ and, via the universality argument, let $\langle\nu_b,k_b\rangle$ be least such that $(\tree{S}{}^\frown b,\Sigma^*,A_b)$ iterates to $(M_{\nu_b,k_b}, \Omega_{\nu_b, k_b})$. Similarly, applying Theorem \ref{main comparison theorem} to $\tree{S}{}^\frown c$, $\Lambda$, and the set $A_c$ which is defined just like $A_b$ but with the roles of $(\tree{S}{}^\frown b, \Sigma^*)$ and $(\tree{S}{}^\frown c, \Lambda)$ switched, we let $\mtree{S}^c_{\nu,k}$ be the resulting meta-trees by $\Lambda,A_c$ and let $\langle \nu_c, k_c\rangle$ be least such that $(\tree{S}{}^\frown c, \Lambda, A_c)$ iterates to $(M_{\nu_c,k_c}, \Omega_{\nu_c, k_c})$. Let $\langle \nu,k\rangle$ be the lexicographic minimum of $\langle \nu_b, k_b\rangle$ and $\langle \nu_c, k_c\rangle$.
Let $\mtree{S}^b= \mtree{S}^b_{\nu,k}$ and $\mtree{S}^c=\mtree{S}^c_{\nu,k}$. Without loss of generality we assume that $\langle \nu,k\rangle=\langle \nu_b, k_b\rangle$. We then have that $\mtree{S}^b$ has no necessary drops along its main branch and the last model of $\mtree{S}^b$ is $M_{\nu,k}$. Let $\tree{T}_b$ and $\tree{T}_c$ be the last trees of $\mtree{S}^b$ and $\mtree{S}^c$. Let $\Phi_{b}$ and $\Phi_{c}$ be the possibly partial tree embeddings from $\tree{S}{}^\frown b$ into $\tree{T}_b$ and $\tree{S}{}^\frown c$ into $\tree{T}_c$. Finally, let $b^*$ and $c^*$ be the main branches of $\tree{T}_b$ and $\tree{T}_c$. Ultimately, we'll show the main branches of $\mtree{S}^b$ and $\mtree{S}^c$ use the same meta-tree exit extenders, and use this to show $b=c$. So let $\langle F_\eta\mid \eta<\gamma\rangle$ and $\langle G_\eta\mid \eta<\lambda\rangle$ enumerate the meta-tree exit extenders used along the main branches of $\mtree{S}^b$ and $\mtree{S}^c$, respectively, in increasing order. We show by induction that $F_\eta=G_\eta$ (and ultimately that $\gamma=\lambda$). Let $\xi_\eta$ be such that $F_\eta=F_{\xi_\eta}^{\mtree{S}^b}$ and $\zeta_\eta$ be such that $G_\eta=F_{\zeta_\eta}^{\mtree{S}^c}$. Suppose we've shown that $F_\eta=G_\eta$ for all $\eta<\chi$. Let $\xi=\sup\{\xi_\eta+1\mid \eta<\chi\}$ and $\zeta=\sup\{\zeta_\eta+1\mid \eta<\chi\}$, so that $\xi$ is on the main branch of $\mtree{S}^b$ and $\zeta$ on that of $\mtree{S}^c$. Since at most one side of our comparison has a necessary drop along its main branch, it follows that there can be no necessary drops at all coming from the $F_\eta=G_\eta$ for $\eta<\chi$, i.e. $0$-to-$\xi$ doesn't have a necessary drop in $\mtree{S}^b$ and $0$-to-$\zeta$ doesn't have a necessary drop in $\mtree{S}^c$. By our choice of $A_b$ and $A_c$, we get that there are no gratuitous drops along these branches so far, either. Now we want to show $F_\chi=G_\chi$. We consider cases. 
\paragraph{Case 1.} $\xi$-to-$\xi_\chi+1$ drops in $\mtree{S}^b$. \paragraph{Subcase 1A.} $\zeta$-to-$\zeta_\chi+1$ drops in $\mtree{S}^c$.\\ First suppose that $c^*$ doesn't drop. Using the Dodd-Jensen property relative to $\Sigma$, we get that $b^*$ cannot drop either, $M_{\infty}^{\tree{T}_b}=M_\infty^{\tree{T}_c}$ and $i_{b^*}^{\tree{T}_b}=i_{c^*}^{\tree{T}_c}$. Since the exit extender sequences were the same below $\chi$ and both sides dropped (perhaps gratuitously), we must actually have that $F_\chi$ and $G_\chi$ are applied to a common model along $b^*$ and $c^*$, from which it is easy to see that $F_\chi, G_\chi$ are compatible, and so equal by the Jensen ISC. Now suppose that $c^*$ does drop. Then since its last model is not sound, we must have $M_\infty^{\tree{T}_b}=M_\infty^{\tree{T}_c}$ and $b^*$ also drops. Without loss of generality, assume that $\mtree{S}^c$ is the side which doesn't have any necessary drop along its main branch. It follows that the last drop in $c^*$ is \textit{before} $\beta^{\mtree{S}^c}_{\zeta_\chi}$ in $c^*$ and is, moreover, in the range of $u^{\Phi_{0,\zeta}^{\mtree{S}^c}}$, say $\eta$ is the location of the drop in $\tree{S}$ such that $u^{\Phi_{0,\zeta}^{\mtree{S}^c}}(\eta)$ is the last drop in $c^*$. Since the exit extenders are the same so far, Theorem \ref{tree embedding uniqueness} implies that $M_{u^{\Phi_{0,\xi}^{\mtree{S}^b}}(\eta)}^{\tree{S}^b_\xi}=M_{u^{\Phi_{0,\zeta}^{\mtree{S}^c}}(\eta)}^{\tree{S}^c_\zeta}$ and the common core of $M_\infty^{\tree{T}_b}=M_\infty^{\tree{T}_c}$ is an initial segment of this model. One can then show that the models and exit extenders used along $b^*$ and $c^*$ are the same up to the minimum of where $F_\chi$ and $G_\chi$ are applied. As before, we can show that $F_\chi$ and $G_\chi$ are actually applied to the same model, so compatible, and the Jensen ISC gives $F_\chi=G_\chi$. 
\paragraph{Subcase 1B.} $\zeta$-to-$\zeta_\chi+1$ \textit{doesn't} drop in $\mtree{S}^c$.\\ By our definition of $A_c$, it follows that $G_\chi$ must be a meta-tree exit extender of $\mtree{S}^b$, too. By considering cases about drops in $b^*$ and $c^*$, one gets, again, that the resulting branch embeddings, or a tail of them, are the same, using the Dodd-Jensen property relative to $\Sigma$. But then we can reach a contradiction using Proposition \ref{incompatibility prop}. That proposition gives that the first extender used along the main branch of $\tree{S}^b_{\xi_\chi+1}$ which is not already in $\tree{S}^b_{\xi}$ and the first extender used along the main branch of $\tree{S}^c_{\zeta_\chi+1}$ which is not already in $\tree{S}^c_{\zeta}$ must be incompatible. If we drop at the next stage, this incompatibility must persist, so we do not drop, which gives that $G_{\chi+1}$ is used in $\mtree{S}^b$ (otherwise we must gratuitously drop in $\mtree{S}^c$ at this next stage). But then Proposition \ref{incompatibility prop} implies that the incompatibility persists anyway, letting us get that $G_{\chi+2}$ is used in $\mtree{S}^b$, and so on. In the end, we get that the full sequence $\langle G_\eta\mid \eta<\lambda\rangle$ corresponds to a maximal branch of $\mtree{S}^b$ and can show that, because we chose a different branch, the incompatibility must persist till the end, contradicting that the appropriate branch embeddings are the same. \paragraph{Case 2.} $\xi$-to-$\xi_\chi+1$ \textit{doesn't} drop in $\mtree{S}^b$.\\ Here we just use the argument from Subcase 1B.\\ Now, without loss of generality assume $\gamma\leq \lambda$. So we've shown $F_\eta=G_\eta$ for all $\eta<\gamma$. Suppose $\gamma<\lambda$ and let $\zeta=\sup\{\zeta_\eta\mid \eta<\gamma\}$. Using Theorem \ref{tree embedding uniqueness} and the Dodd-Jensen property, a tail (or all) of the extenders used in $b^*$ already appear in $\tree{S}^c_\zeta$. 
But applying the remaining $G_\eta$'s must add new extenders to a tail of $c^*$ which don't already appear in $\tree{S}^c_\zeta$. But the Dodd-Jensen property gives that these branches $b^*$ and $c^*$ use common extenders on a tail, a contradiction. So we have $\gamma=\lambda$. By Theorem \ref{tree embedding uniqueness}, $\Phi_b\upharpoonright \tree{S}\equiv \Phi_c\upharpoonright\tree{S}$. It follows that $v^{\Phi_b}[c]$ generates a cofinal wellfounded branch $d$ of $\tree{T}_b\upharpoonright\sup v^{\Phi_b}[c]$ and $\tree{S}{}^\frown c$ tree embeds into $(\tree{T}_b\upharpoonright\sup v^{\Phi_b}[c]){}^\frown d$. One can show, using the Dodd-Jensen property, that actually $d$ is an initial segment of $b$, so that $(\tree{T}_b\upharpoonright\sup v^{\Phi_b}[c]){}^\frown d\trianglelefteq \tree{T}_b$. In particular, since $\tree{T}_b$ is by $\Sigma$, strong hull condensation gives that $\tree{S}{}^\frown c$ is by $\Sigma$, i.e. $c=\Sigma(\tree{S})=b$. \qed \end{proof} \begin{proof}[Proof of Theorem \ref{induced strategy theorem}.] This is easy from the previous lemma. Let $\tree{S}$, $\Lambda$ be as in the theorem statement. Suppose $\Sigma^*\neq \Lambda$. Let $\mtree{S}$ be a meta-tree of limit length on $\tree{S}$ by both $\Sigma^*$ and $\Lambda$. Let $b=\Sigma^*(\mtree{S})$ and $c=\Lambda(\mtree{S})$. Let $\mtree{S}_b$ and $\mtree{S}_c$ be the gratuitously dropping meta-trees where we drop to the common sup of the $\alpha_0$'s of the meta-tree exit extenders of $\mtree{S}$ on both sides. Then $\mtree{S}_b$ and $\mtree{S}_c$ have last trees some $\tree{T}{}^\frown b^*$ and $\tree{T}{}^\frown c^*$, respectively. By the definition of $\Sigma^*$, $b^*=\Sigma(\tree{T})$. But then applying Lemma \ref{induced strategy lemma} to $\tree{T}{}^\frown c^*$ gives that $\Sigma(\tree{T})=c^*$. So $b^*=c^*$. It follows that $b=c$ by Lemma \ref{meta-strategy lemma}. \qed \end{proof} We finish this section with some applications of these comparison results. 
\begin{theorem}\label{normalizeswellthm} Assume $\mathsf{AD^+}$. Let $(M,\Sigma)$ be a strongly stable mouse pair with scope $\textrm{HC}$. Then $\Sigma$ normalizes well. \end{theorem} Since we've assumed $\Sigma$ quasi-normalizes well and tails of mouse pairs are mouse pairs, this follows immediately from the following theorem. \begin{theorem}\label{normalizeswellthm2} Assume $\mathsf{AD^+}$. Let $(M,\Sigma)$ be a strongly stable mouse pair with scope $\textrm{HC}$. Let $\tree{T}$ be a plus tree by $\Sigma$ of successor length and $\tree{S}$ be the normal companion of $\tree{T}$. Then \begin{enumerate} \item $\tree{S}$ is by $\Sigma$ and \item $\Sigma_\tree{S}=\Sigma_\tree{T}$. \end{enumerate} \end{theorem} \begin{proof}[Proof sketch.] We'll get this by applying Lemma \ref{induced strategy lemma} and Theorem \ref{induced strategy theorem}. So fix $\tree{T}$ a plus tree by $\Sigma$ of successor length and $\tree{S}$ the normal companion of $\tree{T}$. Meta-trees on $\tree{S}$ correspond in a simple way to those on $\tree{T}$; for example one can show that if we take $F$ from the sequence of the common last model of $\tree{S}$ and $\tree{T}$, then $V(\tree{S}, F)=V(\tree{T},F)$. One can use this correspondence to define a meta-strategy $\Lambda$ for meta-trees on $\tree{S}$ determined in a simple way by the meta-strategy $\Sigma^*_\tree{T}$. Using the corresponding properties of $\Sigma^*_\tree{T}$, one can show that $\Lambda$ has meta-hull condensation, normalizes well, has the Dodd-Jensen property relative to $\Sigma$, and, in the case that $(M,\Sigma)$ is an lbr hod pair, is pushforward consistent. Moreover, one gets that if $\tree{U}$ is a normal tree on $M_\infty^\tree{T}=M_\infty^\tree{S}$, then $\tree{U}$ is by $\Sigma_\tree{T}$ iff $\mtree{V}(\tree{S}, \tree{U})$ is by $\Lambda$. We can then use Lemma \ref{induced strategy lemma} to conclude that the normal companion of any plus tree by $\Sigma$ is by $\Sigma$, as follows. 
Suppose $\tree{T}$ is a plus tree of limit length by $\Sigma$ such that its normal companion $\tree{S}$ is also by $\Sigma$. Let $b=\Sigma(\tree{T})$ and $c$ be the unique cofinal wellfounded branch of $\tree{S}$ such that $\tree{S}{}^\frown c$ is the normal companion of $\tree{T}{}^\frown b$. Then we can apply Lemma \ref{induced strategy lemma} to $\tree{S}{}^\frown c$ together with the meta-strategy $\Lambda$ to get $\Sigma(\tree{S})=c$. This gives conclusion (1). Now assuming conclusion (1) we can easily get conclusion (2) by Theorem \ref{induced strategy theorem}. Let $\tree{T}$ be a plus tree of successor length by $\Sigma$ and $\tree{S}$ its normal companion. Since $\tree{S}$ is by $\Sigma$, from (1), Theorem \ref{induced strategy theorem} applies to $\tree{S}$ together with $\Lambda$, so that $\Lambda=\Sigma^*_\tree{S}$. Now we want to show that $\Sigma_\tree{S}=\Sigma_\tree{T}$. For this, it suffices to show the two strategies agree on normal trees. So let $\tree{U}$ be a normal tree on $M_\infty^\tree{S}=M_\infty^\tree{T}$. Then, since $\Sigma$ quasi-normalizes well, $\tree{U}$ is by $\Sigma_\tree{S}$ iff $\mtree{V}(\tree{S},\tree{U})$ is by $\Sigma^*_\tree{S}=\Lambda$. But one property of $\Lambda$ we promised to verify is that $\tree{U}$ is by $\Sigma_\tree{T}$ iff $\mtree{V}(\tree{S},\tree{U})$ is by $\Lambda$, so we get that $\tree{U}$ is by $\Sigma_\tree{S}$ iff $\tree{U}$ is by $\Sigma_\tree{T}$, as desired. \qed \end{proof} Next we show that, in the $\mathsf{AD^+}$ context, iteration strategies for mouse pairs are totally determined by their action on $\lambda$-tight normal trees. Let $\tree{T}$ be a normal tree on a premouse $M$. One can define the \textit{$\lambda$-tight companion} $\tree{S}$ of $\tree{T}$, a $\lambda$-tight tree on $M$ with the same last model and branch embedding as $\tree{T}$. We will include a formal definition in a subsequent draft but here is the basic idea. 
One takes $\tree{T}$ and considers the (possibly non-quasi-normal) iteration tree $\hat{\tree{T}}$ on $M$ which splits up each application of a plus extender in $\tree{T}$ into two steps: first using the minus extender, and then using the order zero measure on the image of the critical point (so the difference between $\tree{T}$ and $\hat{\tree{T}}$ is that one considers these both as genuine exit extenders in $\hat{\tree{T}}$). One gets $\tree{S}$ as $W(\langle M\rangle, \hat{\tree{T}})$, the result of a meta-tree-like process where we follow the tree-order of $\hat{\tree{T}}$ and use as meta-tree exit extenders the extenders of $\hat{\tree{T}}$. One can show that the resulting tree is $\lambda$-tight and that the steps of embedding normalization coincide with full normalization, so $\tree{S}$ has the same last model and branch embedding as $\tree{T}$ (because it has the same last model and branch embedding as $\hat{\tree{T}}$). Meta-trees on $\tree{T}$ and its $\lambda$-tight companion $\tree{S}$ correspond in a sufficiently nice way that a meta-strategy for $\tree{S}$ determines a meta-strategy for $\tree{T}$. Moreover, if $\tree{S}$ is by $\Sigma$, a nice strategy for the base model $M$, and we start with the meta-strategy $\Sigma^*_{\tree{S}}$, then the resulting meta-strategy for $\tree{T}$ has all of the nice properties needed to run the argument of Theorem \ref{normalizeswellthm2} to obtain the following. \begin{theorem}\label{tight trees} Assume $\mathsf{AD^+}$. Let $(M,\Sigma)$ be a strongly stable mouse pair with scope $\textrm{HC}$. Let $\tree{T}$ be a normal tree by $\Sigma$ of successor length and $\tree{S}$ be the $\lambda$-tight companion of $\tree{T}$. Then \begin{enumerate} \item $\tree{S}$ is by $\Sigma$ and \item $\Sigma_\tree{S}=\Sigma_\tree{T}$. \end{enumerate} \end{theorem} \noindent Details will be added in a later draft. 
\section{Full normalization} In this section we generalize the notion of a tree embedding $\Phi:\tree{S}\to \tree{T}$ by relaxing the requirement that the images of exit extenders of $\tree{S}$ under the $t$-maps of $\Phi$ are exit extenders of $\tree{T}$. We call the resulting systems {\em weak tree embeddings}. We also define the full normalization $X(\tree{T},\tree{U})$ of a stack of normal trees $\langle\tree{T},\tree{U}\rangle$, and show that there are weak tree embeddings from $\tree{T}$ to $X(\tree{T},\tree{U})$ and from $X(\tree{T},\tree{U})$ to $W(\tree{T},\tree{U})$. Finally we prove our main theorem: the iteration strategy in a mouse pair condenses to itself under weak tree embeddings. This implies that the strategies in mouse pairs fully normalize well, and are therefore positional. Condensation under tree embeddings $\Phi:\tree{S}\to \tree{T}$ has to do with the structure of the iteration process that produced $\tree{S}$ and $\tree{T}$. Condensation within the hierarchies of the individual models of $\tree{S}$ and $\tree{T}$ is not relevant; indeed, condensation under tree embeddings makes sense for iteration strategies for coarse structures. In contrast, condensation under weak tree embeddings does involve the condensation properties of the individual models. We shall need the following theorem in this direction. \begin{theorem}[\cite{trang}]\label{condensationtheorem} Assume $\mathsf{AD^+}$, and let $(M,\Lambda)$ be a mouse pair with scope $\textrm{HC}$. Let $H$ be a sound premouse of type 1, $\pi \colon H \to M$ be nearly elementary, and suppose that \begin{itemize} \item[(1)] $\rho(H) \le \text{crit}(\pi)$, and \item[(2)] $H \in M$. \end{itemize} Then either \begin{itemize} \item[(a)] $H \lhd M$, or \item[(b)] $H \lhd \text{Ult}(M,E_\alpha^M)$, where $\alpha = \text{crit}(\pi)$. 
\end{itemize} \end{theorem} \cite{trang} proves a slightly more general result, but Theorem \ref{condensationtheorem} will suffice here.\footnote{ One does not need that $H$ is sound, just that it is $\text{crit}(\pi)$-sound in an appropriate sense. If $\text{crit}(\pi) < \rho^-(H)$, then the external strategy also condenses properly, in that $(H,\Lambda^\pi) \lhd (M,\Lambda)$ or $(H,\Lambda^\pi) \lhd \text{Ult}((M,\Lambda),E_{\text{crit}(\pi)}^M)$.} \subsection{Dropdown sequences} The connection between exit extenders in a weak tree embedding is mediated by a dropdown sequence, so we shall need some elementary facts about such sequences. \begin{definition}\label{dropdownsequencedef} Let $Q$ be a pfs premouse and $N \lhd Q$. The {\em $N$-dropdown sequence of $Q$} is given by \begin{itemize} \item[(1)] $A_0 = N$, \item[(2)] $A_{i+1}$ is the least $B \unlhd Q$ such that $A_i \lhd B$ and $\rho^-(B) < \rho^-(A_i)$. \end{itemize} We write $A_i = A_i(Q,N)$, and let $n(Q,N)$ be the largest $i$ such that $A_i$ is defined. Let also $\kappa_i(Q,N) = \rho^-(A_i(Q,N))$. We also let $A_i(Q,\eta) = A_i(Q,Q|\eta)$, $\kappa_i(Q,\eta) = \kappa_i(Q,Q|\eta)$, and $n(Q,\eta) = n(Q,Q|\eta)$. \end{definition} One place that dropdown sequences come up is the following. Suppose that $Q=M_\xi^\tree{T}$ and $N= Q|\text{lh}(E_\xi^\tree{T})$ for some plus tree $\tree{T}$; then the levels of the $N$-dropdown sequence of $Q$ correspond to levels of $Q$ to which we might apply an extender $E_\delta^\tree{T}$ with $\xi=\tree{T}\text{-pred}(\delta+1)$. More precisely, \[ M_{\delta+1}^{*,\mathcal{T}} = A_i(Q,N)^-, \] where $i$ is least such that $\text{crit}(E_\delta^\mathcal{T}) \le \kappa_i(Q,N)$.\footnote{Because of this, it might be more natural to let the dropdown sequence consist of the $A_i^-$, rather than the $A_i$.} Maps that are nearly elementary and exact preserve dropdown sequences. Unfortunately, we must deal with maps that are not exact, and this adds a small mess. 
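For orientation: unwinding Definition \ref{dropdownsequencedef}, the dropdown sequence is an increasing chain of initial segments of $Q$ whose $\rho^-$'s strictly decrease, that is,
\[
N = A_0 \lhd A_1 \lhd \dots \lhd A_n \unlhd Q
\quad\text{and}\quad
\kappa_0(Q,N) > \kappa_1(Q,N) > \dots > \kappa_n(Q,N),
\]
where $n = n(Q,N)$. It is this interleaving of levels and projecta that the maps below must preserve.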
\begin{lemma}\label{preserverhominus} Let $\pi \colon M\to N$ be nearly elementary and $\nu \le o(M)$; then \begin{itemize} \item[(a)] if $\nu < \rho^-(M)$, then $\pi(\nu) < \rho^-(N)$, \item[(b)] if $P \lhd M$ and $\nu \le \rho^-(P)$, then $\pi(\nu) \le \rho^-(\pi(P))$, and \item[(c)] if $Q \lhd R \unlhd M$ and $\forall P\,((Q \unlhd P \lhd R) \Rightarrow \nu \le \rho^-(P))$, then $\forall P\,((\pi(Q) \unlhd P \lhd \pi(R)) \Rightarrow \pi(\nu) \le \rho^-(P))$. \end{itemize} \end{lemma} \begin{proof} The standard conventions that $\pi(M)=N$, $\pi(M|\langle\hat{o}(M),k\rangle)=N|\langle\hat{o}(N),k\rangle$, and $\pi(o(M)) = o(N)$ are in force here. (a) and (b) are immediate from the definition of near elementarity. For part (c), if $R \in M$ the statement is clearly preserved. Otherwise we have $R=M|\langle\hat{o}(M),n\rangle$ for some $n <k(M)$. The statement $\forall P \in M\,((Q \unlhd P \lhd R) \Rightarrow \nu \le \rho^-(P))$ is $\Pi_1$, hence preserved. But $\rho_0(M),...,\rho_{n-1}(M)$ are also preserved because $\pi$ is nearly elementary, and this covers the remaining $P$. \end{proof} If one replaces $<$ by $\le$ in Lemma \ref{preserverhominus}(a), then it becomes false. Similarly, (c) fails if the quantifier is over $P \unlhd R$, rather than $P \lhd R$. \begin{lemma}\label{preservedropdown} Let $\pi \colon M \to X$ be nearly elementary. Let $N \lhd M$ and $n = n(M,N)$; then \begin{itemize} \item[(a)] $n(X,\pi(N)) \in \lbrace n-1, n, n+1 \rbrace$, \item[(b)] for all $i \le n-1$, $A_i(X,\pi(N)) = \pi(A_i(M,N))$, \item[(c)] if $n(X,\pi(N))=n-1$, then $A_n(M,N) = M$ and $\pi(\rho^-(M)) < \rho^-(X)$, \item[(d)] if $n \le n(X,\pi(N))$, then $\pi(A_n(M,N)) = A_n(X,\pi(N))$, and \item[(e)] if $ n+1 = n(X,\pi(N))$, then \begin{itemize} \item[(i)]$A_n(M,N) \lhd M$, \item[(ii)]$\rho^-(M)=\rho^-(A_n(M,N))$, \item[(iii)] $ \sup \pi``\rho^-(M) \le \rho^-(X) < \pi(\rho^-(M))$, and \item[(iv)]$A_{n+1}(X,\pi(N)) = X$. 
\end{itemize} \end{itemize} \end{lemma} \begin{proof} Let $A_i = A_i(M,N)$, $\kappa_i = \rho^-(A_i)$, $B_i = A_i(X,\pi(N))$, and $\mu_i = \rho^-(B_i)$. From Lemma \ref{preserverhominus}, we get \[ A_i \lhd M \Rightarrow (i \le n(X,\pi(N)) \wedge \pi(A_i)=B_i \wedge \pi(\kappa_i)=\mu_i). \] Now suppose that $A_{n} \lhd M$. The preservation just noted shows that $n \le n(X,\pi(N))$, as well as (b), (c) (vacuously), and (d). We are done if $n(X,\pi(N)) =n$, so suppose not. We then have $\rho^-(B_{n+1}) < \mu_n$. Since $\forall P\,((A_n \unlhd P \lhd M) \Rightarrow \kappa_n \le \rho^-(P))$, Lemma \ref{preserverhominus} implies that $\forall P\,((B_n \unlhd P \lhd X) \Rightarrow \mu_n \le \rho^-(P))$. It follows that $B_{n+1}=X$, and $n(X,\pi(N)) = n+1$. This gives us the rest of (a). But also, if $\kappa_n < \rho^-(M)$ then $ \mu_n < \rho^-(X)$, so we must have $\kappa_n = \rho^-(M)$, and then $\rho^-(X) < \mu_n = \pi(\rho^-(M))$. This proves (e). So we have proved the lemma when $A_n \lhd M$. Suppose now that $A_{n} = M.$ Since $\pi(A_{i})=B_{i}$ and $\pi(\kappa_{i}) = \mu_{i}$ for $i \le n-1$, we have (b), and that $n-1 \le n(X,\pi(N))$. If $n(X,\pi(N))=n-1$, then $\rho^-(X) \ge \pi(\kappa_{n-1}) > \pi(\kappa_n) = \pi(\rho^-(M))$, so we have (c), and we are done. So assume $n(X,\pi(N)) \ge n$. Since $A_n =M$, $\forall P\,((A_{n-1} \unlhd P \lhd M) \Rightarrow \kappa_{n-1} \le \rho^-(P))$. By Lemma \ref{preserverhominus}, $\forall P\,((B_{n-1} \unlhd P \lhd X) \Rightarrow \mu_{n-1} \le \rho^-(P))$. It follows that $B_n = X$, and $n(X,\pi(N))=n$. So we have (a). We have (d) because $A_n = M$ and $B_n =X$. Clause (e) is vacuously true. \end{proof} The mess gets smaller if $\pi$ is almost exact, and disappears if $\pi$ is exact. Recall here that elementary maps, like the branch embeddings of an iteration tree or the resurrection maps of a PFS construction, are almost exact. 
Resurrection maps are actually exact, as are the branch embeddings of an iteration tree whose base model is strongly stable.\footnote{See \cite[\S 4.3]{nitcis}. $M$ is strongly stable iff $\eta_{k(M)}^M$ is not measurable by the $M$-sequence. In particular, if $k(M)=0$, then $M$ is strongly stable.} Factor embeddings, such as the lifting maps of a conversion system, may fail to be almost exact. \begin{lemma}\label{preservedropdownelem} Let $\pi \colon M \to X$ be nearly elementary. Let $N \lhd M$ and $n = n(M,N)$; then \begin{itemize} \item[(a)] If $\pi$ is almost exact, then $n(X,\pi(N)) \in \lbrace n, n+1 \rbrace$, and for all $i \le n$, $A_i(X,\pi(N)) = \pi(A_i(M,N))$. \item[(b)] If $\pi$ is exact, then $n(X,\pi(N)) = n$. \end{itemize} \end{lemma} The proof of Lemma \ref{preservedropdownelem} is implicit in that of Lemma \ref{preservedropdown}. \subsection{Maps that respect drops} The connection between exit extenders required by a weak tree embedding is an abstraction from the following situation. Let $M$ be a premouse, $\eta<o(M)$, and $E=E^M_\eta\neq \emptyset$. Let $F$ be close to $M$, $\text{crit}(F)=\mu<\eta$, and suppose $N=\text{Ult}(M,F)$ makes sense. Let $\lambda=i^M_F(\eta)$. There is a natural factor embedding \[\sigma:\text{Ult}(M|\eta, F)\to i^M_F(M|\eta)=N|\lambda,\] given by \[ \sigma([a,g]^{M|\eta}_F)=[a,g]^M_F \] in the case $k(M)=0$. One can use $\sigma$ to show that $i_F^{M|\eta}(E)$ is on the $N$-sequence. It is the connection between $i_F^{M|\eta}(E)$ and $i_F^M(E)$ provided by $\sigma$ that we wish to abstract. One shows that $i_F^{M|\eta}(E)$ is on the $N$-sequence by factoring $\sigma$, using the $\text{Ult}(P,F)$ for $P^+$ in the $(M,M|\eta)$ dropdown sequence as factors. We shall apply Theorem \ref{condensationtheorem} to the natural embedding from $\text{Ult}(P,F)$ into $i^{Q}_F(P)$, where $P^+$ and $Q^+$ are successive elements of the dropdown sequence. The next two lemmas capture the uses of the condensation theorem here. 
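Before stating them, note that in the case $k(M)=0$, the defining formula for $\sigma$ immediately gives the commutativity
\[
\sigma \circ i^{M|\eta}_F = i^M_F\upharpoonright (M|\eta),
\]
since for $x\in M|\eta$ both sides send $x$ to the class of the constant function with value $x$.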
The lemmas are complicated by the fact that $\text{Ult}(P,F)$ might have type 2. In that case $\text{Ult}(P,F)$ cannot be an initial segment of $\text{Ult}(Q,F)$, and it is $\mathfrak{C}_k(\text{Ult}(P,F))$ that is an initial segment of $\text{Ult}(Q,F)$, where $k = k(P)$. \begin{lemma}\label{notes lemma 1} ($\mathsf{AD^+}$) Let $P$ and $Q$ be countable premice such that $P\triangleleft Q$ and $k(P)>0$. Suppose that $\rho^-(P)$ is a cardinal of $Q$ and that $\rho^-(P) \le \rho^-(Q)$. Suppose that $F$ is an extender that is close to $P$ with $\text{crit} (F)=\mu<\rho^-(P)$. Let \[ \sigma:\text{Ult}(P,F)\to i^Q_F(P) \] be the natural map, and suppose there is a $\Lambda$ such that $(i^Q_F(P), \Lambda)$ is a mouse pair; then either \begin{itemize} \item[(i)] $\text{Ult}(P,F)\triangleleft i^Q_F(P)$, or \item[(ii)] $\text{Ult}(P,F)$ has type 2, and for $k=k(P)$, $\mathfrak{C}_k(\text{Ult}(P,F)) \triangleleft i^Q_F(P)$. \end{itemize} \end{lemma} \begin{proof} Let $k=k(P)$ and $n=k(Q)$, and \begin{align*} i \colon P \to \text{Ult}_k(P,F) = R\\ \intertext{ and } j\colon Q \to \text{Ult}_n(Q,F) \end{align*} be the canonical embeddings. We have the factor map \[ \sigma \colon R \to j(P) \] given by $\sigma([a,f]_F^{P}) = [a,f]_F^{Q}.$ The function $f$ here is $r\Sigma_k^P$, hence $r\Sigma_n^Q$, so the formula makes sense.\footnote{ If $o(P)=o(Q)$, so that $P=Q|\langle o(Q), k\rangle$ where $k<n$, then by $j(P)$ we mean $\text{Ult}(Q,F) \downarrow k$.} Let \begin{align*} \nu& = \rho_k(P),\\ \mu&=\text{crit}(F). \end{align*} By hypothesis, $\nu$ is a cardinal of $Q$ and $\nu \le \rho_n(Q)$, so every $r\Sigma_n^Q$ function $f$ with domain $\mu$ and range bounded in $\nu$ belongs to $P$. Thus \[ \rho_k(R) = \sup i``\nu \le \text{crit}(\sigma). \] We may assume that $j(P)^k$ is not contained in $\text{ran}(\sigma)$, as otherwise $R=j(P)$, so conclusion (i) holds. 
But then $\sigma$ witnesses that the reduct $R^k$ is a proper initial segment of $j(P)^k$, so $R^k \in j(P)^k$, so $R \in j(P)$. Let \begin{align*} \gamma&=\sup i``\nu =\rho_k(R),\\ \intertext{and } \delta&=j(\nu) =\rho_k(j(P)). \end{align*} That $\delta=\rho_k(j(P))$ comes from the fact that $j\upharpoonright P$ is sufficiently elementary. \begin{remark} These things work out if $\nu=o(P)$, but we don't use that case in our applications of Lemma \ref{notes lemma 1}, so we might as well assume that $\nu<o(P)$. \end{remark} \begin{claim}\label{nearelementarityclaim} $\sigma \colon R\to j(P)$ is nearly elementary. \end{claim} \begin{proof} The $k$-th reduct of $P$ is $P^{k}=(P||\nu, A)$, where $A$ is (essentially) $Th^P_k(\nu\cup \{w_k(P)\})$.\footnote{ $w_k(P) = \langle p_k(P), \nu, \eta_k^P \rangle$, where $\eta_k^P$ is the $r\Sigma_k^P$ cofinality of $\rho_k(P)$.} Let \[B=\bigcup_{\alpha<\nu}i(A\cap \alpha)\] and \[C=j(A),\] where $j(A)$ is understood properly if $o(Q)=o(P)$.\footnote{In that case, $Th^P_k(\nu\cup\{w_k(P)\})$ is coded into $Th^Q_n(\nu\cup \{p_n(Q)\}\cup \{ w_n(Q)\})$ since $o(Q)=o(P)$, $k<n$, and $\rho_n(Q)=\nu$. We take then $C=Th^{\text{Ult}(Q,F)}_k\big(j(\nu)\cup\{j(w_k(P))\}\big)$.} We have for $\alpha <\nu$, \begin{align*}\sigma\big(i(A\cap \alpha)\big)&=j(A\cap \alpha),\\ B\cap i(\alpha)&=C\cap j(\alpha).\end{align*} So $R^{k}\prec_{\Sigma_0} j(P)^k$. Since $k>0$, $\sigma$ is cardinal preserving. Thus $\sigma$, which is the $k$-completion of $\sigma\upharpoonright R^{k}=id$, is nearly elementary from $R$ into $j(P)$. \qed \end{proof} \paragraph{Case 1.} $\gamma =\delta$. Then $R^{k}=j(P)^{k}$, so $R=j(P)$ and conclusion (i) of the lemma holds. \paragraph{Case 2.} $\gamma<\delta$. $R^{k}=(j(P)^{k}|\gamma, C\cap \gamma)$, so $R^{k}\in j(P)|\delta$, so $R\in j(P)|\delta$. On the other hand, $\sigma(i(\nu))=\delta$. It follows that $\text{crit}(\sigma)\leq i(\nu)$. Set \[\alpha=\text{crit}(\sigma);\] then \[ \gamma\leq \alpha\leq i(\nu). 
\] We want to apply the Condensation Theorem \ref{condensationtheorem}. For this, let $m$ be largest such that $\alpha<\rho_m(R)$. $\gamma=\rho_k(R)$, so $m<k$. Let \begin{align*} X&=R \downarrow m,\\ Y &= j(P) \downarrow m. \end{align*} Then $\sigma:X\to Y$ is nearly elementary (in fact, elementary) and $\gamma = \rho(X)\leq \alpha<\rho^-(X)$. $X$ is a premouse and $X \in Y$. Let us suppose first that $X$ is sound. (If not, then $R$ has type 2, and $X=R^-$ is only almost sound.) Letting $\Omega=\Lambda_{Y}$, \[\sigma:(X, \Omega^\sigma)\to (Y, \Omega)\] is nearly elementary in the category of mouse pairs. We get from Theorem \ref{condensationtheorem} that either \begin{itemize} \item[(a)] $X \triangleleft Y$, or \item[(b)] $X \triangleleft \text{Ult}_0(Y, E^{Y}_\alpha)$. \end{itemize} We will be done if we can rule out (b). \paragraph{Subcase 2.1} $\gamma<\alpha$. Then $(\gamma^+)^{X}\leq \alpha$. But $X \in Y$ and $\rho(X)=\gamma$, so $(\gamma^+)^{X}<(\gamma^+)^{Y}$, so $\alpha=(\gamma^+)^{X}$. But then $(b)$ cannot hold, because $\alpha$ is a cardinal of $\text{Ult}_0(Y, E^{Y}_\alpha)$ and $X$ defines a map from $\alpha$ onto $\gamma$. \paragraph{Subcase 2.2} $\gamma=\alpha$. Suppose $\nu$ is a limit cardinal of $P$. It follows that $\gamma$ is a limit cardinal in $X$ and $Y$. But then $E^{Y}_\gamma=\emptyset$, as desired. Next suppose $\nu=(\kappa^+)^P$, where $\kappa$ is a cardinal of $P$. It follows that $j$ is continuous at $\nu$, for otherwise we have a $\mathbf{\Sigma}^Q_n$ function $f:\mu\to \nu$ with cofinal range. We then have a $\mathbf{\Sigma}_n^Q$ function $g$ such that for all $\xi < \mu$, $g(\xi)$ is a wellorder of $\kappa$ of ordertype $f(\xi)$, and $\{(\xi, \eta, \lambda)\,|\, \eta<_{g(\xi)}\lambda\}$ witnesses that $\rho_n(Q)\leq \kappa$, contradiction. So $j$ is continuous at $\nu$. On the other hand, our Case 2 hypothesis is that $j$ is discontinuous at $\nu$, contradiction. This finishes Case 2 and the Lemma under the assumption that $X$ is sound. 
We have (i) of the Lemma in this case. Suppose now that $X$ is not sound. It follows that $P$ has type 1B, $i$ is discontinuous at $\nu$, and $R$ has type 2. Thus $\rho_k(R) < \rho_{k-1}(R)$, so $m=k-1$. Let \[ Z = \mathfrak{C}_m(X), \] so that \[ R^- = \text{Ult}(Z,D), \] where $D$ is the order zero measure of $Z$ on $\hat{\rho}_k(R) = i(\nu)$. We have that $\sigma \circ i_D$ is a nearly elementary (in fact elementary) map from $Z$ to $Y$, and $\text{crit}(\sigma \circ i_D) = \text{crit}(\sigma)=\alpha$. By the condensation theorem \ref{condensationtheorem}, either \begin{itemize} \item[(a)] $Z \triangleleft Y$ or \item[(b)] $Z \triangleleft \text{Ult}_0(Y, E^{Y}_\alpha)$. \end{itemize} One can rule out (b) by the same argument that we applied to $X$. This gives us (a), and hence conclusion (ii) of the Lemma.\footnote{ A stronger possible conclusion for Lemma \ref{notes lemma 1} would demand condensation for the external part of $\Lambda$. Namely, either \begin{itemize} \item[(i)] $(\text{Ult}(P,F), \Lambda^\sigma)\triangleleft(i^Q_F(P), \Lambda)$, or \item[(ii)] $\text{Ult}(P,F)$ has type 2, and for $k=k(P)$ and $i_D \colon \mathfrak{C}_k(\text{Ult}(P,F)) \to \text{Ult}(P,F)$ the anticore map, $(\mathfrak{C}_k(\text{Ult}(P,F)), \Lambda^{\sigma\circ i_D}) \triangleleft (i^Q_F(P),\Lambda)$. \end{itemize} Using \cite{trang}, we get the stronger conclusion when $\text{crit}(\sigma) < \rho_k(\text{Ult}(P,F))$.} \hfill{$\qed$ Lemma \ref{notes lemma 1}} \end{proof} Lemma \ref{notes lemma 1} concerns the step from $P$ to $Q=R^-$, where $P$ and $R$ are successive elements of a dropdown sequence. The next lemma concerns the step from $R^-$ to $R$. \begin{lemma}\label{notes lemma 2} ($\mathsf{AD^+}$) Let $Q$ be a countable premouse with $k(Q)>0$ and $F$ close to $Q$. Suppose there is $\Lambda$ such that $(\text{Ult}(Q,F), \Lambda)$ is a mouse pair. 
Let $P=Q^-$, and $\sigma:\text{Ult}(P,F)\to \text{Ult}(Q,F)^-$ be the natural embedding; then either \begin{itemize} \item[(i)] $\text{Ult}(P,F) \triangleleft i^Q_F(P)$, or \item[(ii)] $\text{Ult}(P,F)$ has type 2, and for $k=k(P)$, $\mathfrak{C}_k(\text{Ult}(P,F)) \triangleleft i^Q_F(P)$. \end{itemize} \end{lemma} \begin{proof} If $\rho^-(P)\leq \rho^-(Q)$, then this follows from Lemma \ref{notes lemma 1}. So let us assume $\rho^-(Q)<\rho^-(P)$. Let $\nu=\rho^-(Q)=\rho_{k+1}(Q)$, where $k+1=k(Q)$. (Recall $k(Q)>0$.) So letting $\hat{Q}$ be the bare premouse associated to $Q$, $\rho_{k+1}(\hat{Q}) < \rho_k(\hat{Q})$ and $\sigma$ is the natural embedding from $\text{Ult}_k(\hat{Q},F)$ to $\text{Ult}_{k+1}(\hat{Q},F)$. Let $R=\text{Ult}(P,F)=\text{Ult}_k(Q^-,F)$ and $r=p_k(Q)$. Let $S=\text{Ult}_{k+1}(Q,F)$ and $s=p_{k+1}(Q)$. The elements of $R$ are $[a, f_{\tau, r}^{Q^-}]^{Q^-}_F$ for $\tau$ a $r\Sigma_k$ Skolem term. The elements of $S$ are $[a,f^Q_{\tau,s}]^Q_F$ for $\tau$ a $r\Sigma_{k+1}$ term. Each $f^{Q^-}_{\tau,r}$ is also of the form $f^Q_{\tau', s}$ for some $r\Sigma_{k+1}$ term $\tau'$. So $\sigma$ is just given by \[\sigma([a,g]^{Q^-}_F)=[a,g]^Q_F,\] where the superscripts $Q^-$ and $Q$ indicate the two classes of functions used. Letting $i=i_F^{Q^-}$ and $j=i_F^Q$, we have the diagram \[ \begin{tikzcd} Q \arrow{r}{i} \arrow[swap]{rd}{j}& R\arrow{d}{\sigma} \\ & S \end{tikzcd} \] Set \begin{align*} \gamma&=\sup i ``\nu\\ &=\sup j``\nu. \end{align*} The equality holds because the functions bounded in $\nu$ used in $\text{Ult}(Q^-,F)$ and $\text{Ult}(Q,F)$ are the same. We have \[\gamma=\rho_{k+1}(S),\] and \[j(s)=p_{k+1}(S)\] by the general theory of $\text{Ult}_{k+1}$. \begin{claim} $\gamma=\rho_{k+1}(R)$ and $p_{k+1}(R)=i(s)$. \end{claim} \begin{proof} That $\gamma \le \rho_{k+1}(R)$ and $i(s) \le_{\text{lex}} p_{k+1}(R)$ follows from the usual proof that solidity witnesses are mapped by $i$ to generalized solidity witnesses. 
The reverse inequalities follow from the fact that $R = \mbox{Hull}_{k+1}^R(\gamma \cup i(s))$. \end{proof} \begin{claim} $\sigma$ is nearly elementary. \end{claim} \begin{proof} The proof of Claim \ref{nearelementarityclaim} shows that $\sigma$ is $\Sigma_0$ elementary as a map from $R^k$ to $S^k$. We must see that $\sigma$ is cardinal preserving. This follows from elementarity if $k>0$. If $k=0$, then for $f \in Q$, \begin{align*} R \models [a,f] \text{ is a cardinal} &\text{ iff for $F_a$ a.e. $u$, $Q \models f(u)$ is a cardinal}\\ &\text{ iff $S \models [a,f]$ is a cardinal}. \end{align*} The first line holds because the function $g(u) =$ first map from $|f(u)|$ onto $f(u)$ belongs to $Q$. This shows that $\sigma$ is cardinal preserving when $k=0$. \end{proof} \begin{claim}\label{ransigmabounded} If $\sigma``\rho_k(R)$ is unbounded in $\rho_k(S)$, then $R=S^-$. \end{claim} \begin{proof} Let $g$ and $h$ be the $\Sigma_1$ Skolem functions of $R^k$ and $S^k$. If $\alpha < \rho_k(S)$, then we have $\alpha = h(\beta,s)$ for some $\beta < \rho_{k+1}(S)$. By hypothesis, $\alpha = h^{S^k|\sigma(\delta)}(\beta,s)$ for some $\delta < \rho_k(R)$. (Here the superscript $S^k|\sigma(\delta)$ indicates a bound on the witness that $h(\beta,s)=\alpha$.) But then $\sigma(g(\delta,r)) = \alpha$, so $\alpha \in \text{ran}(\sigma)$. So $\rho_k(S) \subseteq \text{ran}(\sigma)$, so $R^k = S^k$, so $R = S^-$. \end{proof} By \ref{ransigmabounded} we may assume that $\text{ran}(\sigma)$ is bounded in $\rho_k(S)$. From this it follows easily that $R \in S$. If $R$ is type 1 (that is, $k$-sound), then since $R= \text{Hull}_{k+1}^R(\text{crit}(\sigma) \cup p_{k+1}(R))$, Theorem \ref{condensationtheorem} applies, and we get that $R \lhd S$. (The case $\text{crit}(\sigma)$ is an index of an extender in $S$ is ruled out as in Lemma \ref{notes lemma 1}.) If $R$ has type 2, then $R=\text{Ult}(\mathfrak{C}_k(R),D)$, where $D$ is the order zero measure of $\mathfrak{C}_k(R)$ on $i(\rho_k(Q))$.
We then get $\mathfrak{C}_k(R) \lhd S$ by applying Theorem \ref{condensationtheorem} to $\sigma \circ i_D$. (Once again, we rule out $\text{crit}(\sigma)$ being an index of an extender in $S$ as in the proof of \ref{notes lemma 1}.) This proves Lemma \ref{notes lemma 2}. \end{proof} We isolate now, in the definition of ``resolution of $\sigma$'', a fairly extensive list of the properties of sequences of factor embeddings such as those in Lemmas \ref{notes lemma 1} and \ref{notes lemma 2}. Lemma \ref{main full normalization lemma} then shows that for certain $M, F,$ and $\xi$, the natural $\sigma \colon \text{Ult}(M|\xi,F) \to i_F^M(M|\xi)$ admits a resolution. This example is the entire motivation for the definition. We say that $\sigma$ ``respects drops'' iff it admits a resolution. \begin{definition}\label{corepair} Let $B$ be a pfs premouse and $k=k(B)$; then $C(B) = B$ if $B$ has type 1, and $C(B) = \mathfrak{C}_k(B)$ if $B$ has type 2. If $B$ has type 2, $D(B)$ is the order zero measure $D$ on $\hat{\rho}_k(B)$ such that $B= \text{Ult}(C(B),D)$. If $B$ has type 1, $D(B)$ is a principal measure. In both cases, $i_D \colon C(B) \to B$ is the ultrapower (i.e. anticore) map. \end{definition} \begin{definition}\label{respectsdropsdef} Let $N$ be a premouse, and $\sigma \colon N|\eta \to N|\lambda$ be nearly elementary, where $\eta < \lambda \le o(N)$. We say that $\sigma$ \textit{respects drops over $(N,\eta,\lambda)$} iff letting $n=n(N,\eta)$, we have $\eta_i$ for $1 \le i \le n+1$ such that \[ \eta = \eta_1 \le \eta_2 \le ... \le \eta_{n+1} = \lambda \] and $n = n(N,\eta_i)$ for all $i$, together with pfs premice $B^i_k$ for $1 \le k \le n$ such that \[ C(B^i_k) = A_k(N,\eta_i)^- \] for all $i,k$, together with nearly elementary maps \[ \sigma_i \colon B^i_i \to B^{i+1}_i \] defined for $i \le n$ such that \[ \sigma = \sigma_n \circ ... \circ \sigma_1, \] and the following hold.
\begin{itemize} \item[(a)] $k(B^1_k) = k(B^i_k)$ for all $i$; let $m_k$ be the common value, and set \[ \gamma^i_k = \rho(B^i_k) = \rho_{m_k+1}(B^i_k). \] \item[(b)] $C(B^i_k)$ is sound, that is, $m_k+1$-sound. \item[(c)] For all $i<k$, $B^i_k = C(B^k_k)$. \item[(d)] $\sigma_i \restriction \gamma^i_i = \text{ id}$, $\sigma_i(p_{m_i+1}(B^i_i)) = p_{m_i+1}(B^{i+1}_i)$, and $\sigma_i(\eta_i) = \eta_{i+1}$. \item[(e)] Either \begin{itemize} \item[(i)] $\eta_i = \eta_{i+1}$, $B^i_k = B_k^{i+1}$ for all $k$, and $\sigma_i = \text{ id}$, or \item[(ii)] $\eta_i < \eta_{i+1}$, $C(B^i_i) \lhd B^{i+1}_i$. \end{itemize} \item[(f)] For $i < n$, $\gamma^{i+1}_{i+1} < \gamma^i_i$, and either \begin{itemize} \item[(i)] $\gamma^i_i = \gamma^{i+1}_i$ and $\sigma_i \restriction \gamma^i_i +1 = \text{ id}$, or \item[(ii)] $\gamma^i_i$ is a limit cardinal of $B^{i+1}_i$. \end{itemize} \end{itemize} We call $(\vec{\eta}, \vec{B}, \vec{\sigma})$ a {\em resolution} of $\sigma$. \end{definition} We are allowing $\eta_i = \eta_{i+1}$ here simply for its bookkeeping value. Since $C(B^i_i)$ is $m_i+1$-sound, $\sigma_i$ is determined by $B^i_i$ and $B^{i+1}_i$. The $B^i_k$ are in turn determined by $\eta_i$. Thus the whole of the resolution is determined by the sequence of $\eta$'s. \begin{lemma}\label{main full normalization lemma} Let $M$ be a premouse, $\bar{\eta} \le o(M)$, and $E= E^M_{\bar{\eta}}\neq \emptyset$. Let $F$ be close to $M$, $\text{crit}(F)=\mu<\hat{\lambda}(E)$, and $N=\text{Ult}(M,F)$. Let \[ \sigma:\text{Ult}(M|\bar{\eta}, F)\to i^M_F(M|\bar{\eta}) \] be the natural factor map, $\lambda=i^M_F(\bar{\eta})$, and $\eta = o(\text{Ult}(M|\bar{\eta}, F))$; then \begin{itemize} \item[(a)] $\text{Ult}(M|\bar{\eta}, F) = N|\eta$, and \item[(b)] $\sigma$ respects drops over $(N,\eta,\lambda)$. \end{itemize} \end{lemma} \begin{proof} Let $n=n(M,\bar{\eta})$ and $A_i = A_i(M,\bar{\eta})$ for $i \le n$.
Since $i_F^M$ is elementary, it is almost exact, so $n(N,\lambda) \in \lbrace n,n+1 \rbrace$. We assume that $n(N,\lambda) = n$. The other case is quite similar. We must define a resolution $(\vec{\eta},\vec{B},\vec{\sigma})$ of $\sigma$. We start by setting $\eta_{n+1} = \lambda$, and then define the $\eta_i$ and associated objects for $i \le n$ by reverse induction on $i$. In the end we shall have $\eta_1 = \eta$. As we go we shall verify the properties of a resolution. For $i \le n$, let \begin{align*} m_i+1 &= k(A_i),\\ \gamma_i &= \rho^-(A_i) = \rho_{m_i+1}(A_i),\\ B^{n+1}_i &= A_i(N,\lambda)^-,\\ \intertext{ and} \gamma^{n+1}_i &= \rho^-(A_i(N,\lambda)) = \rho(B^{n+1}_i). \end{align*} Let $i_{n+1} = i^M_F$. By our preservation lemmas \ref{preservedropdown} and \ref{preservedropdownelem}, $i_{n+1}(A_i) = A_i(N,\lambda)$ for all $i \le n$, so $m_i = k(B^{n+1}_i)$ for all $i \le n$. Also, \[ \gamma^{n+1}_i = \begin{cases} i_{n+1}(\gamma_i) & \text{ if $i < n$}\\ \sup i_{n+1}``\gamma_i & \text{ if $i=n$.} \end{cases} \] Now let us define $\eta_n$ and $\sigma_n$. Set \begin{align*} B^n_n &= \text{Ult}(A_n^-,F) = \text{Ult}_{m_n}(A_n^-,F),\\ \intertext{ and } \eta_n &= i_n(\bar{\eta}), \end{align*} where $i_n \colon A_n^- \to B^n_n$ is the canonical embedding. We have that $k(A_n^-) = k(B^n_n) = m_n$ and $i_n$ is elementary, that is, $r\Sigma_{m_n+1}$ elementary. It is possible that $B^n_n$ has type 2.\footnote{ This happens iff $A_n^-$ has type 1B and $\eta_{m_n}(A_n^-) = \mu$.} Let \[ \sigma_n \colon B^n_n \to i_{n+1}(A_n^-) = B^{n+1}_n \] be the natural factor map. $\sigma_n$ is nearly elementary as a map on premice of degree $m_n$, and since $\sigma_n \circ i_n = i_{n+1} \upharpoonright A_n^-$, $\eta_{n+1} = \sigma_n(\eta_n)$. \bigskip \noindent {\em Claim 1.} \begin{itemize} \item[(a)] $\rho_{m_n+1}(B^n_n) = \sup i_n ``\gamma_n = \sup i_{n+1}``\gamma_n \le \rho_{m_n+1}(B^{n+1}_n)$. \item[(b)] $\sigma_n \restriction \gamma^n_n = \text{ id}$.
\item[(c)] $\sigma_n(p_{m_n+1}(B^n_n))=p_{m_n+1}(B^{n+1}_n)$. \end{itemize} \medskip \noindent {\em Proof.} For (a): $\rho_{m_n+1}(B^n_n) = \sup i_n ``\rho_{m_n+1}(A_n^-) = \sup i_n``\gamma_n$ by the general properties of $\text{Ult}_{m_n}$; see the proof of Lemma \ref{notes lemma 2}. Every $r\Sigma_{k(M)}^M$ function from $\mu$ into $\gamma_n$ with range bounded in $\gamma_n$ belongs to $M$ because $n=n(M,\bar{\eta})$. Thus $\sup i_n``\gamma_n = \sup i_{n+1}``\gamma_n$. Finally, $\rho_{m_n+1}(B^{n+1}_n) = \sup i_{n+1}`` \gamma_n$ if $A_n = M$, and $\rho_{m_n+1}(B^{n+1}_n) = i_{n+1}(\gamma_n)$ if $A_n \lhd M$, so $\sup i_{n+1}`` \gamma_n \le \rho_{m_n+1}(B^{n+1}_n)$ in either case. (b) follows from the fact that both ultrapowers use the same functions with range bounded in $\gamma_n$, namely just those functions belonging to $M$. For (c) we use the solidity and universality (over $C(B^n_n)$) of $p_{m_n+1}(B^n_n)$. See the proof of \ref{notes lemma 2}. \hfill $\square$ \bigskip \noindent {\em Claim 2.} If $A_n$ has type 1B, then $i_n$ is continuous at $\gamma_n$. \medskip \noindent {\em Proof.} If $A_n$ has type 1B, then $\rho^-(A_n) = \rho_{m_n+1}(A_n) = \gamma_n$ is $\Sigma_0$ regular in $A_n$. If $i_n$ is discontinuous at $\gamma_n$, then $\gamma_n$ is $r\Sigma_{m_n}$ singular over $A_n$. But $\gamma_n < \rho_{m_n}(A_n)$ because $A_n$ was the first level of $M$ with $\rho^- \le \gamma_n$. Thus $\gamma_n$ is $\Sigma_0$ singular in $A_n$, contradiction. \hfill $\square$ \bigskip \noindent {\em Claim 3.} $C(B^n_n)$ is $m_n+1$-sound; moreover, $p_{m_n+1}(C(B^n_n)) = i_n(p_{m_n+1}(A_n))$. \medskip \noindent {\em Proof.} If $A_n$ has type 1A, this follows from the usual properties of $\text{Ult}_{m_n}$; see the proof of \ref{notes lemma 2}. The proof also works if $A_n$ has type 1B, except for the problem that $C(B^n_n)^+$ may have type 2. But if $A_n$ has type 1B, then $\rho_{m_n+1}(B^n_n) = i_n(\rho_{m_n+1}(A_n))$ by Claim 2.
This implies that $C(B^n_n)^+$ has type 1B, rather than type 2. \hfill $\square$ The next claim is the main step in our proof, the place where we integrate the condensation arguments in Lemmas \ref{notes lemma 1} and \ref{notes lemma 2}. \bigskip \noindent {\em Claim 4.} Either $C(B^n_n) = B^n_n = B^{n+1}_n$, $\eta_n = \eta_{n+1}$, and $\sigma_n = \text{ id}$, or $C(B^n_n) \lhd B^{n+1}_n$. \medskip \noindent {\em Proof.} Let $D=D(B^n_n)$, $C=C(B^n_n)$, and $m=m_n$. Let \[ \pi = \sigma_n \circ i_D, \] so that $\pi \colon C \to B^{n+1}_n$ is nearly elementary at degree $m_n$, and $\text{crit}(\pi) \ge \rho_{m+1}(C)$. (Note $\rho_{m+1}(C) = \rho_{m+1}(B^n_n) = \gamma^n_n$.) If $\pi = \text{ id}$, then $C = B^n_n = B^{n+1}_n$, $\eta_n = \eta_{n+1}$, and $\sigma_n = \text{ id}$, and we are done. So suppose that $\pi \neq \text{ id}$; we shall show that $C \lhd B^{n+1}_n$. We show first that $C \in B^{n+1}_n$. For that, suppose first that $\rho_{m +1}(C) < \rho_{m+1}(B^{n+1}_n)$; then since $C$ is coded by $\mbox{Th}_{m+1}^C(\rho_{m+1}(C) \cup \lbrace r \rbrace)$ for some parameter $r$, and \[ \mbox{Th}_{m+1}^C(\rho_{m+1}(C) \cup \lbrace r \rbrace) = \mbox{Th}_{m+1}^{B^{n+1}_n}(\rho_{m+1}(C) \cup \lbrace \pi(r) \rbrace), \] we get that \[ \mbox{Th}_{m+1}^C(\rho_{m+1}(C) \cup \lbrace r \rbrace) \in B^{n+1}_n, \] so $C \in B^{n+1}_n$. Suppose next that $\rho_{m+1}(C) = \rho_{m+1}(B^{n+1}_n)$. Let $X=C^m$ and $Y=(B^{n+1}_n)^m$ be the two reducts. $\pi \restriction X = \sigma_n \restriction X$ is $\Sigma_0$ elementary from $X$ to $Y$, $\rho_1(X) \le \rho_1(Y)$, $\pi \restriction \rho_1(X) = \text{ id}$, and $\pi(p_1(X))=p_1(Y)$. Since $\rho_{m+1}(A_n) < \rho_m(A_n)$, $\rho_1(X) < o(X)$ and $\rho_1(Y) < o(Y)$. It will be enough to show that $\text{ran}(\pi)$ is bounded in $o(Y)$, for then $\mbox{Th}_1^X(\rho_1(X) \cup \lbrace \rho_1(X), p_1(X) \rbrace) \in Y$, so $X \in Y$, so $C \in B^{n+1}_n$. So suppose $\text{ran}(\pi)$ is unbounded in $o(Y)$.
It follows that $\pi$ is $\Sigma_1$ elementary from $X$ to $Y$. If $A_n$ has type 1A, then $Y = \mbox{Hull}_1^Y(\rho_1(Y) \cup p_1(Y))$, so $Y$ is $\Sigma_1$-generated from points in $\text{ran}(\pi)$, so $X=Y$ and $\pi \restriction X = \text{ id}$. Thus $\pi = \text{ id}$, contradiction. If $A_n$ has type 1B, then $Y = \mbox{Hull}_1^Y(\rho_1(Y) \cup \lbrace \rho_1(Y),p_1(Y)\rbrace)$, so we just need to see that $\rho_1(Y) \in \text{ran}(\pi)$ for our contradiction. But $\rho_1(Y) = \rho_{m+1}(B^{n+1}_n) = i_{n+1}(\rho_{m+1}(A_n))$, since if $\rho_{m+1}(B^{n+1}_n) < i_{n+1}(\rho_{m+1}(A_n))$ then we must have $A_n = M$ and $\rho_{m+1}(B^{n+1}_n) = \sup i_{n+1}``\rho_{m+1}(A_n)$. This implies $\rho_{m+1}(B^{n+1}_n) < \hat{\rho}_{m+1}(B^{n+1}_n) = i_{n+1}(\rho_{m+1}(A_n))$, so $B^{n+1}_n$ has type 2. But $B^{n+1}_n \lhd N$, so it has type 1. Thus $C \in B^{n+1}_n$. By the Condensation Theorem \ref{condensationtheorem}, either $C \lhd B^{n+1}_n$ or $C \lhd \text{Ult}(B^{n+1}_n,G)$, where $G$ is on the $B^{n+1}_n$ sequence and $\text{lh}(G) = \text{crit}(\pi) = \text{crit}(\sigma_n)$. One can rule out the latter possibility just as in the proofs of Lemmas \ref{notes lemma 1} and \ref{notes lemma 2}. Thus $C \lhd B^{n+1}_n$, as desired. \hfill $\square$ Before we proceed to the general inductive step, we need some claims that deal with the possible difference between $B^n_n$ and $C(B^n_n)$. \bigskip \noindent {\em Claim 5.} If $C(B^n_n) \neq B^n_n$, then there is a limit cardinal $\xi$ of $B^n_n$ such that $\xi > \rho_{m_n}(B^n_n)$ and $B^n_n|\xi = C(B^n_n)|\xi$. \medskip \noindent {\em Proof.} If $C(B^n_n) \neq B^n_n$, then $B^n_n$ has type 2, and we can take $\xi = \hat{\rho}_{m_n}(B^n_n)$. \hfill $\square$ \bigskip \noindent {\em Claim 6.} $\gamma^n_{n-1} \le \rho_{m_n}(B^n_n)$. \medskip \noindent {\em Proof.} Let $m=m_n$. Note \[ \gamma_{n-1} \le \rho_{m}(A_n), \] since $m +1 = k(A_n)$ is by definition the least $i$ such that $\rho_i(A_n) < \gamma_{n-1}$. 
If $\rho_m(A_n) = \gamma_{n-1}$, then $A_n^- = A_{n-1}$, and \[ \rho_m(B^n_n) = \sup i_n``\gamma_{n-1} < i_n(\gamma_i) = \gamma_i^n, \] for all $i < n-1$, so $\rho_m(B^n_n) = \gamma^n_{n-1}$ and we are done. Suppose next that \[ \gamma_{n-1} < \rho_m(A_n). \] For all $i < m$, $\gamma_{n-1} \le \rho_i(A_n)$, so for all $i < m$, $\gamma_{n-1} < \rho_i(A_n)$. It follows that $A_{n-1} \in A_n$, so \[ \gamma_{n-1}^n = i_n(\gamma_{n-1}) < \sup i_n``\rho_m(A_n) = \rho_m(B^n_n), \] as desired. \hfill $\square$ Note also \bigskip \noindent {\em Claim 7.} \begin{itemize} \item[(a)] $n(N,\eta_n)=n$. \item[(b)] $C(B^n_n)^+ = A_n(N,\eta_n)$. \item[(c)] For $k < n$, $B^n_k = \sigma_n^{-1}(B^{n+1}_k)$ and $\gamma^n_k = \sigma_n^{-1}(\gamma^{n+1}_k)$. \item[(d)] $\gamma^n_n \le \gamma^{n+1}_n$. \end{itemize} \medskip \noindent {\em Proof.} These are all immediate consequences of the claims above. \hfill $\square$ We proceed to the inductive step. Suppose $1 \le e < n$, and suppose that for all $k \ge e+1$, $B^k_k$, $B^{k+1}_k$ and $\sigma_k \colon B^k_k \to B^{k+1}_k$ satisfy Claims 1--7, with $k$ and $k+1$ replacing $n$ and $n+1$. We define \[ B^e_e = \text{Ult}(A_e^-,F) = \text{Ult}_{m_e}(A_e^-,F), \] and let \[ i_e \colon A_e^- \to B^e_e \] be the canonical embedding. Let \[ \sigma_e \colon B^e_e \to i_{e+1}(A_e^-) = B^{e+1}_e \] be the factor map. $\sigma_e$ is nearly elementary as a map on premice of degree $m_e$. Let \begin{align*} \eta_e &= i_e(\bar{\eta}) = \sigma_e^{-1}(\eta_{e+1}). \end{align*} Claims 1--3 hold with $e$ and $e+1$ replacing $n$ and $n+1$, with the same proofs. We need a stronger version of Claim 4 now. \bigskip \noindent {\em Claim 8.} Either $C(B^e_e) = B^e_e = B^{e+1}_e$ and $\sigma_e = \text{ id}$, or $C(B^e_e) \lhd B^{e+1}_e \lhd C(B^{e+1}_{e+1})$. \medskip \noindent {\em Proof.} Suppose the first alternative does not hold.
The proof of Claim 4 yields that $C(B^e_e) \lhd B^{e+1}_{e}$, and $B^{e+1}_e \lhd B^{e+1}_{e+1}$, so we may assume that $C(B^{e+1}_{e+1}) \neq B^{e+1}_{e+1}$. By Claims 5 and 6 at $e+1$, we may fix a limit cardinal $\xi$ of $B^{e+1}_{e+1}$ such that \[ B^{e+1}_{e+1}|\xi = C(B^{e+1}_{e+1})|\xi \] and $\gamma^{e+1}_e < \xi$. But $C(B^e_e)$ has cardinality $\gamma^e_e$ in $B^{e+1}_{e+1}$, and $\gamma^e_e \le \gamma_e^{e+1}$. Thus \[ C(B^e_e) \lhd B^{e+1}_{e+1}|(\gamma^{e+1}_e)^{+,B^{e+1}_{e+1}} \lhd C(B^{e+1}_{e+1}), \] as desired. \hfill $\square$ It is easy to check that Claims 5 and 6 hold with $e$ and $e+1$ replacing $n$ and $n+1$. Let us address item (f) in the definition of resolutions. ( Item (f) did not apply when $e=n$.) \bigskip \noindent {\em Claim 9.} $\gamma^{e+1}_{e+1} < \gamma^e_e$, and either \begin{itemize} \item[(i)] $\gamma^e_e = \gamma^{e+1}_e$ and $\sigma_e \restriction \gamma^e_e +1 = \text{ id}$, or \item[(ii)] $\gamma^e_e$ is a limit cardinal of $B^{e+1}_e$. \end{itemize} \medskip \noindent {\em Proof.} This is trivial if $C(B^e_e)=B^e_e = B^{e+1}_e$ and $\sigma_e = \text{ id}$, so assume otherwise. By Claim 8, $C(B^e_e) \lhd B^{e+1}_e$, so $B^e_e \in B^{e+1}_e$ and has cardinality $\gamma^e_e$ in $B^{e+1}_e$. For the first part, note \begin{align*} \gamma^{e+1}_{e+1} & = \sup i_{e+1}``\gamma_{e+1}\\ & = \sup i_{e} ``\gamma_{e+1} < \sup i_e ``\gamma_e = \gamma^e_e. \end{align*} The second equality holds because the two ultrapowers use the same functions with range bounded in $\gamma_{e+1}$, namely those belonging to $M$. Suppose that $\gamma_e$ is a limit cardinal of $A_e$. It follows that $\gamma^e_e = \sup i_e``\gamma_e$ is a limit cardinal of $B^e_e$, and since $\gamma^e_e \le \text{crit}(\sigma_e)$, $\gamma^e_e$ is a limit cardinal of $B^{e+1}_e$. Thus we have (ii). Suppose that $\gamma_e = \kappa^{+,A_e}$. By the definition of $A_{e+1}$, letting $m=m_{e+1}$ we have that $\gamma_e = \kappa^{+,A_{e+1}}$ and $\rho_m(A_{e+1}) \ge \gamma_e$. 
If $\gamma_e$ has $r\Sigma_{m}$ cofinality $\mu$ in $A_{e+1}$, then one easily gets that $\rho_{m}(A_{e+1}) \le \kappa$, contradiction. Thus \begin{align*} \gamma^{e+1}_e &= \sup i_{e+1}``\gamma_e = i_{e+1}(\gamma_e)\\ &= \sup i_e``\gamma_e = i_e(\gamma_e). \end{align*} This shows that (i) holds. \hfill $\square$ Finally, the analog of Claim 7 takes more work when $e<n$. \bigskip \noindent {\em Claim 10.} \begin{itemize} \item[(a)] For $k < e$, $A_k(N,\eta_e)^- = \sigma_e^{-1}(A_k(N,\eta_{e+1})^-) = \sigma_e^{-1}(B^{e+1}_k)$. \item[(b)] $A_e(N,\eta_e) = C(B^e_e)^+$. \item[(c)] For $k>e$, $A_k(N,\eta_e) = C(B^k_k)^+$. \item[(d)] $n(N,\eta_e) = n$. \end{itemize} \medskip \noindent {\em Proof.} Part (a) follows from Lemma \ref{preservedropdown}, and this implies that $\sigma_e(\gamma^e_k) = \gamma^{e+1}_k$ for $k<e$. Part (b) holds because $\gamma^e_e = \rho(C(B^e_e)) = \rho(B^e_e) = \sup i_e``\gamma_e < i_e(\gamma_{e-1}) = \gamma^e_{e-1}$, and $i_e$ preserves the fact that all $P$ such that $A_e \lhd P \unlhd A_{e-1}$ satisfy $\gamma_{e-1} \le \rho^-(P)$. (See Lemma \ref{preserverhominus}(c).) For (c), we show first that $A_{e+1}(N,\eta_e)^- = C(B^{e+1}_{e+1})$. We have that $C(B^e_e) \lhd B^{e+1}_e \lhd C(B^{e+1}_{e+1})$ by Claim 8, and \[ \rho(C(B^{e+1}_{e+1})) = \gamma_{e+1}^{e+1} < \gamma^e_e = \rho(C(B^e_e)) \] by Claim 9. So it will suffice to show that whenever $C(B^e_e) \lhd Q \lhd C(B^{e+1}_{e+1})$, then $\gamma^e_e \le \rho(Q)$. If $B^{e+1}_e \unlhd Q \lhd C(B^{e+1}_{e+1})$ then \[ \gamma^e_e \le \gamma^{e+1}_e < \rho(Q), \] as desired. So suppose $C(B^e_e) \lhd Q \lhd B^{e+1}_e$ and $\rho(Q)<\gamma^e_e$. If $Q \in B^{e+1}_e$, then $\gamma^e_e$ is not a cardinal of $B^{e+1}_e$, contrary to both alternatives (i) and (ii) in Claim 9. Thus $Q = B^{e+1}_e \downarrow k$ for some $k < m_e = k(B^{e+1}_e)$. This means \[ \rho_{k+1}(B^{e+1}_e) < \gamma^e_e \le \gamma^{e+1}_e = \rho_{m_e+1}(B^{e+1}_e), \] a contradiction.
Thus $A_{e+1}(N,\eta_e) = C(B^{e+1}_{e+1})^+$. It is then easy to see that \[ A_{e+2}(N,\eta_e) = A_{e+1}(N,\eta_{e+1}) = C(B^{e+2}_{e+2})^+, \] and so on until we reach $A_n(N,\eta_e) = C(B^n_n)^+$. Here the value of $\rho^-$ is $\gamma^n_n$, and no higher levels of $N$ project strictly below that. Thus $n(N,\eta_e) = n$. This finishes the proofs of (c) and (d). \hfill $\square$ In view of Claim 10, we may set $B^e_k = A_k(N,\eta_e)^-$ and $\gamma^e_k = \rho(B^e_k)$ for all $k \le n$, and we have the properties of a resolution that apply to the $\eta_i$, $\sigma_i$, and $B^i_k$ for $i \ge e$. Eventually we reach $e=1$. Since $A_0 = M|\bar{\eta}$ is active and $\gamma_0 = o(M|\bar{\eta})$, \begin{align*} \gamma_1 &= \rho_1(M|\bar{\eta}),\\ m_1 &= 0,\\ \intertext{ and } B^1_1 &= \text{Ult}_0(M|\bar{\eta},F). \end{align*} $B^1_0 = C(B^1_1)$ because $m_1 = 0$. Thus $B^1_1 \unlhd N$. It is clear that $\sigma = \sigma_n \circ ... \circ \sigma_1$. \end{proof} \subsection{Weak tree embeddings} For most purposes, we need only consider weak tree embeddings that act on $\lambda$-tight, normal plus trees. This is because we have already shown (assuming $\mathsf{AD^+}$)\footnote{See Theorem \ref{tight trees}.} that if $(P,\Sigma)$ is a mouse pair with scope HC, and $(N,\Lambda)$ is an iterate of $(P,\Sigma)$ via the plus tree $\tree{T}$, then $(N,\Lambda)$ is an iterate of $(P,\Sigma)$ via a $\lambda$-tight, normal plus tree $\tree{U}$.\footnote{$\tree{U}$ is the normal companion of $\tree{T}$, re-arranged so that each plus extender $E^+$ used in the normal companion corresponds to two extenders $E, D$ used in $\tree{U}$.} It is convenient to restrict attention to $\lambda$-tight, normal trees; in particular, the fact that they are length-increasing means the agreement properties between their models are simpler.
So we shall do this.\footnote{As we shall see below, in one case the natural embedding from the full normalization $X(s)$ of a stack of plus trees to its embedding normalization $W(s)$ is not quite a weak tree embedding in the sense of \ref{weaktreeembeddingdef}. We shall describe the slightly more general notion of weak tree embedding $\Phi \colon \tree{S} \to \tree{T}$ required in this case in a future draft of this paper. It amounts to allowing the $M_\alpha^{\tree{T}}$ to be of type 2.} \begin{definition}\label{weaktreeembeddingdef} Let $\tree{S}$ and $\tree{T}$ be plus trees with the same base model. A \textit{weak tree embedding} $\Phi \colon \tree{S}\to \tree{T}$ is a system $\langle v,u, \{s_\xi\}_{\xi<\text{lh}\tree{S}}, \{t_\zeta\}_{\zeta+1<\text{lh}\tree{S}},\{\sigma_\zeta\}_{\zeta+1<\text{lh} \tree{S}}\rangle$ such that \begin{enumerate} \item $v:\text{lh} \tree{S}\to \text{lh}\tree{T}$ is tree-order preserving, $u:\{\eta\,|\,\eta+1<\text{lh} \tree{S}\}\to \text{lh} \tree{T}$, $v(\xi)=\sup\{u(\eta)+1\,|\, \eta<\xi\}$, and $v(\xi)\leq_\tree{T}u(\xi)$; \item For $\eta\leq_\tree{S}\xi$, \begin{enumerate} \item $s_\xi: M^\tree{S}_\xi\to M^\tree{T}_{v(\xi)}$ is elementary and $s_0 = id_{M^\tree{S}_0}$; \item $\hat\imath^\tree{T}_{v(\eta),v(\xi)}\circ s_\eta=s_\xi\circ \hat\imath^\tree{S}_{\eta,\xi}$, \item $t_\xi= \hat\imath^\tree{T}_{v(\xi),u(\xi)}\circ s_\xi$ (so $t_\xi$ is a partial elementary map $M^\tree{S}_\xi\to M^\tree{T}_{u(\xi)}$); \end{enumerate} \item for $\xi+1<\text{lh} \tree{S}$, $\eta=\tree{S}\text{-pred}(\xi+1)$, and $\eta^*=\tree{T}\text{-pred}(u(\xi)+1)$, \begin{enumerate} \item Either \begin{itemize} \item [(i)] (X-case) $\sigma_\xi:M^\tree{T}_{u(\xi)}|\text{lh} E^\tree{T}_{u(\xi)}\to M^\tree{T}_{u(\xi)}|\text{lh} t_\xi(E^\tree{S}_{\xi})$ respects drops, $\sigma_\xi(E^\tree{T}_{u(\xi)})=t_\xi(E^\tree{S}_\xi)$, and $\text{ran}(t_\xi)\subseteq \text{ran}(\sigma_\xi)$, or \item[(ii)] (W-case) $\sigma_\xi:M^\tree{T}_{u(\xi)}|\text{lh} 
t_\xi(E^\tree{S}_{\xi}) \to M^\tree{T}_{u(\xi)}|\text{lh} E^\tree{T}_{u(\xi)}$ respects drops and $E^\tree{T}_{u(\xi)}=\sigma_\xi\circ t_\xi(E^\tree{S}_\xi)$; \end{itemize} \item $\eta^*\in[v(\eta),u(\eta)]_\tree{T}$, \item $s_{\xi+1}\upharpoonright \text{lh} E^\tree{S}_\xi+1 = \begin{cases} \sigma_\xi\circ t_\xi\upharpoonright \text{lh} E^\tree{S}_\xi+1 & \text{in the W-case}\\ \sigma_\xi^{-1}\circ t_\xi\upharpoonright \text{lh} E^\tree{S}_\xi+1 & \text{in the X-case.} \end{cases}$ \end{enumerate} \end{enumerate} \end{definition} Note that a tree embedding is just a weak tree embedding in which $\sigma_\xi=id$ for all $\xi+1<\text{lh} \tree{S}$. In particular, a tree embedding is in both the X-case and the W-case at $\xi$ for every $\xi+1<\text{lh}\tree{S}$. We now extend our Shift Lemma and Copying Construction for meta-trees to the case where we have a weak tree embedding rather than a tree embedding. We shall just state the relevant results. The calculations involved in their proofs are quite similar to those we have done in the tree embedding case, so we defer them to a later draft of this paper. We first extend our notation to weak tree embeddings. \begin{definition}\label{metashiftapplies2} Let $\Psi:\tree{S}\to \tree{T}$ and $\Pi:\mathcal{U} \to \mathcal{V}$ be extended weak tree embeddings, $F$ an extender such that $F^-$ is an extender on the $M_\infty^\tree{S}$-sequence, and $G$ an extender such that $G^-$ is on the $M_\infty^\tree{T}$-sequence.
We say that \textit{the Shift Lemma applies to $(\Psi,\Pi, F, G)$} iff letting $\beta = \beta(\mathcal{S},F)$ and $\beta^*=\beta(\mathcal{T}, G)$, \begin{enumerate} \item $M_\infty^\tree{S}|\text{lh}(F)\trianglelefteq\text{dom}( t_\infty^\Psi)$ and $G=t_\infty^\Psi(F)$, \item $\Psi\upharpoonright\beta+1\approx \Pi\upharpoonright\beta+1$, \item $\tree{T}\upharpoonright \beta^*+1=\tree{V}\upharpoonright\beta^*+1$, \item $\beta^*\in [v^\Pi(\beta), u^\Pi(\beta)]_\tree{V}$ and if $\beta^*<u^\Pi(\beta)$, then either \begin{enumerate} \item if $\Pi$ is in the $W$-case at $\beta$, \[\sigma^\Pi_\beta\circ t_\beta^\Pi\upharpoonright \text{dom}(F)\cup\{\text{dom}(F)\}=s_{\beta, \beta^*}^\Pi\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\}, \text{ or}\] \item if $\Pi$ is in the $X$-case at $\beta$, \[(\sigma^\Pi_\beta)^{-1}\circ t_\beta^\Pi\upharpoonright \text{dom}(F)\cup\{\text{dom}(F)\}=s_{\beta, \beta^*}^\Pi\upharpoonright\text{dom}(F)\cup\{\text{dom}(F)\},\] \end{enumerate} \item if $\beta+1<\text{lh}(\tree{U})$, then $\text{dom}(F) \triangleleft M_\beta^\tree{U}|\text{lh}(E^\mathcal{U}_\beta)$, and \item if $\beta^*+1<\text{lh}(\tree{V})$, $\text{dom}(G) \triangleleft M_{\beta^*}^\tree{V}|\text{lh}(E^\mathcal{V}_{\beta^*})$. \end{enumerate} \end{definition} \begin{lemma}[Shift Lemma for weak tree embeddings] Let $\Psi:\tree{S}\to \tree{T}$ and $\Pi:\mathcal{U} \to \mathcal{V}$ be extended weak tree embeddings, and let $F$ be an extender such that $F^-$ is an extender on the sequence of the last model of $\tree{S}$ and $G$ be an extender such that $G^-$ is on the extender sequence of the last model of $\tree{T}$. Let $\alpha_0=\alpha_0(\tree{S}, F)$ and $\alpha^*_0=\alpha_0(\tree{T},G)$. Suppose that the Shift Lemma applies to $(\Psi,\Pi, F, G)$.
Then $V(\tree{U},\tree{S},F)$ and $V(\tree{V},\tree{T},G)$ are defined and, letting $\mu$ be the greatest ordinal such that $V(\mathcal{U},\mathcal{S},F)\upharpoonright \mu$ is wellfounded and $\mu^*$ be the greatest ordinal such that $V(\mathcal{V},\tree{T}, G)\upharpoonright\mu^*$ is wellfounded, there is a unique partial weak tree embedding $\Gamma: V(\mathcal{U},\mathcal{S},F)\upharpoonright\mu \to V(\mathcal{V},\tree{T}, G)\upharpoonright\mu^*$ with maximal domain such that \begin{enumerate} \item $\Gamma\upharpoonright \alpha_0+1\approx \Psi \upharpoonright \alpha_0+1$, \item $u^\Gamma(\alpha_0)=\alpha^*_0$, \item either \begin{enumerate} \item $\sigma^\Gamma_{\alpha_0}=\text{id}$, or \item $\alpha_0+1<\text{lh}(\tree{S})$, $\alpha_0^*=u(\alpha_0)$, and $\sigma^\Gamma_{\alpha_0}=\sigma^\Psi_{\alpha_0}$; and \end{enumerate} \item $\Gamma\circ \Phi^{V(\mathcal{U},\mathcal{S},F)} =\Phi^{V(\mathcal{V},\tree{T}, G)}\circ \Pi$ (on their common domain). \end{enumerate} Moreover, if $\Psi$ and $\Pi$ are in the $X$-case everywhere, so is $\Gamma$; if $\Psi$ and $\Pi$ are in the $W$-case everywhere, so is $\Gamma$. If $V(\mathcal{V},\tree{T}, G)$ is wellfounded, then $V(\mathcal{U},\mathcal{S}, F)$ is wellfounded and $\Gamma$ is a (total) extended weak tree embedding from $V(\mathcal{U}, \mathcal{S}, F)$ into $V(\mathcal{V},\mathcal{T}, G)$. If $V(\mathcal{V},\tree{T}, G)$ is wellfounded and also $\Pi$ is non-dropping, then $\Gamma$ is a non-dropping extended weak tree embedding. \end{lemma} Note that this Shift Lemma implies the Shift Lemma for tree embeddings, as if $\Psi$, $\Pi$ are tree embeddings, they are in \textit{both} the $X$-case and $W$-case everywhere, so $\Gamma$ is as well, which implies $\Gamma$ is a tree embedding. \begin{theorem}[Copying]\label{copying2} Let $\Gamma:\tree{S}\to \tree{T}$ be a non-dropping extended weak tree embedding.
Let $\mtree{S}=\langle \tree{S}_\xi, \Phi^{\eta,\xi},F_\zeta\,|\, \xi,\zeta+1<\text{lh} (\mtree{S})\rangle$ be a meta-tree on $\tree{S}$. Then there is some largest $\mu\leq \text{lh} (\mtree{S})$ such that there is a meta-tree $\Gamma\mtree{S}=\langle \tree{T}_\xi, \Psi^{\eta,\xi}, G_\zeta\,|\,\xi,\zeta+1<\mu\rangle$ on $\tree{T}$ with tree-order $\leq_\mtree{S}\upharpoonright \mu$ and for $\xi<\mu$, non-dropping extended weak tree embeddings $\Gamma^\xi: \tree{S}_\xi\to \tree{T}_\xi$ with (total) last $t$-map $t_\infty^\xi$ such that \begin{enumerate} \item $\Gamma=\Gamma^0$, \item$G_\xi=t_\infty^\xi(F_\xi)$, \item and for all $\eta\leq_\mtree{S}\xi$, $\Gamma^\xi\circ \Phi^{\eta,\xi}=\Psi^{\eta,\xi}\circ \Gamma^\eta$. \end{enumerate} \end{theorem} \begin{proof} Similar to copying meta-trees via ordinary tree embeddings. \end{proof} As usual, the copying construction guarantees that we have pullback strategies. \begin{definition} For $\tree{S}$, $\tree{T}$ countable plus trees of successor length on a premouse $M$, $\Phi:\tree{S}\to \tree{T}$ a non-dropping extended weak tree embedding, and $\Sigma$ a strategy for finite stacks of countable meta-trees on $\tree{T}$, we define the pullback strategy $\Sigma^\Phi$ for finite stacks of countable meta-trees on $\tree{S}$ by \[\mtree{S} \text{ is by }\Sigma^\Phi \Leftrightarrow \Phi\mtree{S} \text{ is by } \Sigma.\] \end{definition} \begin{lemma}\label{goodmetastrategy} Suppose $(M,\Sigma)$ is a mouse pair, $\tree{S}$ and $\tree{T}$ are countable plus trees on $M$, $\tree{T}$ is by $\Sigma$, and $\Phi:\tree{S}\to \tree{T}$ is a non-dropping extended weak tree embedding. Let $\Lambda= \Sigma^*_\tree{T}$ be the induced meta-strategy for $\tree{T}$; then $\Lambda$ has meta-hull condensation, normalizes well, has the Dodd-Jensen property relative to $\Sigma$, and if $M$ is a lbr premouse, $\Lambda$ is pushforward consistent. 
\end{lemma} As a corollary to Lemma \ref{goodmetastrategy} and our main comparison theorem, we get the main theorem of this section. (More properly, it is Theorem \ref{induced strategy theorem} and its Lemma \ref{induced strategy lemma} that are relevant.) \begin{theorem}\label{vshctheorem} Assume $\mathsf{AD^+}$. Suppose $(M,\Sigma)$ is a mouse pair such that $\Sigma$ is coded by a Suslin-co-Suslin set of reals. Let $\tree{S}$ and $\tree{T}$ be plus trees on $M$. Suppose that $\tree{T}$ is by $\Sigma$ and there is a weak tree embedding from $\tree{S}$ into $\tree{T}$; then $\tree{S}$ is by $\Sigma$. \end{theorem} \subsection{Full normalization} Finally, we describe the full normalization construction. We shall make some minor simplifying assumptions as we do that, to the effect that certain ultrapowers are not type 2 premice. These assumptions can be removed in a way that is conceptually simple but notationally complicated, so we save the general case for a later draft of this paper. Suppose we are given a maximal stack of $\lambda$-tight, normal trees $s=\langle \tree{S}_0, \ldots, \tree{S}_n\rangle$. The first of our simplifying assumptions is that each $\tree{S}_i$ has a strongly stable base model. That guarantees that all models of all $\tree{S}_i$ are of type 1.\footnote{Strong stability for the base model of $\tree{S}_0$ does not imply strong stability for its last model, the base model of $\tree{S}_1$, because the main branch of $\tree{S}_0$ might drop.} We shall define, subject to further such simplifying assumptions to come, a putative $\lambda$-tight, normal tree $X(s)$ and possibly partial weak tree embeddings $\Psi:\tree{S}_0\to X(s)$ and $\Gamma: X(s)\to W(s)$. By Theorem \ref{vshctheorem}, if $s$ is by the strategy of some mouse pair $(P,\Sigma)$, then since $X(s)$ weakly embeds into $W(s)$, $X(s)$ is also by $\Sigma$.
Our construction guarantees that $s$ and $X(s)$ have the same last model.\footnote{We must assume that $s$ is maximal, because normal trees are maximal. For example, if $s = \langle E, N, F \rangle$ is a non-maximal stack on $M$ with $\text{lh}(E) < \text{crit}(F)$, then there is no normal tree on $M$ with the same last model as $s$. Thought of as a single tree instead of a stack, $s$ is non-overlapping, but has a gratuitous drop, and so it is not normal.} As with embedding normalization, we first handle the one-step case. Let $\tree{S},\tree{T}$ be $\lambda$-tight, normal trees of successor length on $M$, where $M$ is strongly stable. Let $F$ be on the extender sequence of the last model of $\tree{T}$. Let $\alpha=\alpha(F,\tree{T})$ and $\beta=\beta(F,\tree{T})$. Suppose that $\tree{S}\upharpoonright\beta+1= \tree{T}\upharpoonright\beta+1$ and $\text{dom} F\leq \lambda( E^\tree{S}_\beta)$, if $\beta+1<\text{lh} \tree{S}$. Granted our simplifying assumptions, we shall define $\tree{X}=X(\tree{S},\tree{T},F)$, and a partial extended weak tree embedding from $\tree{S}$ into $\tree{X}$, \begin{align*} \Psi&=\Psi^{X(\tree{S},\tree{T},F)}\\ &= \langle v,u, \{s_\xi\}_{\xi<\text{lh}\tree{S}}, \{t_\zeta\}_{\zeta+1<\text{lh}\tree{S}},\{\sigma_\zeta\}_{\zeta+1<\text{lh} \tree{S}}\rangle. \end{align*} We also define a partial extended weak tree embedding \[ \Gamma=\Gamma^{X(\tree{S},\tree{T},F)} \] of $\tree{X}$ into $\tree{W}=W(\tree{S}, \tree{T},F)$ such that \[ \Gamma\circ \Psi=\Phi=\Phi^{W(\tree{S},\tree{T},F)}. \] The component maps of $\Phi$ and $\Gamma$ we shall indicate by adding superscripts. We shall have $u^\Gamma = \text{ id}$. In $\Psi$ the $X$-case will occur everywhere, and in $\Gamma$ the $W$-case will occur everywhere. As with embedding normalization, we may reach illfounded models in forming $\tree{X}$ and stop when we do. We say that $\tree{X}$ is wellfounded if we never reach illfounded models.
When $\tree{S}$ and $\tree{T}$ are by a strategy $\Sigma$ which has strong hull condensation, $\tree{X}$ will be wellfounded. In this case, $\tree{X}$ will have last model $\text{Ult}(P, F)$, where $P$ is the longest initial segment of the last model of $\tree{S}$ to which $F$ applies, and moreover, the embedding normalization map of $W(\tree{S},\tree{T},F)$ will be the last $t$-map of $\Gamma$: \[ \sigma^{W(\tree{S},\tree{T},F)} = t_\infty^\Gamma. \] We let $\tree{X}\upharpoonright \alpha+1 =\tree{T}\upharpoonright \alpha+1$ and $E^\tree{X}_\alpha=F$. For the rest of $\tree{X}$, we consider cases. \paragraph{The dropping case.}$F$ is applied to a proper initial segment $P\triangleleft M^\tree{S}_\beta|\text{lh} E^\tree{S}_\beta$, if $\beta+1<\text{lh} \tree{S}$, or $P\triangleleft M^\tree{S}_\beta$ if $\beta+1=\text{lh} \tree{S}$. In this case we've described all of $\tree{X}$ already: \[ \tree{X}=\tree{T}\upharpoonright\alpha+1 {}^\frown \langle F\rangle. \] Notice here that $\text{Ult}(P,F)$ has type 1. For otherwise, letting $k=k(P)$, we have that $\text{crit}(F) = \eta_k^P$. Since $P$ is stable, that implies $\eta_k^P < \rho_{k+1}(P)$, so that $\text{Ult}(P^+,F)$ makes sense. Our case hypothesis is that $P\triangleleft M^\tree{S}_\beta|\text{lh} E^\tree{S}_\beta$ or $P\triangleleft M^\tree{S}_\beta$, so in either case, $F$ should have been applied to $P^+$ rather than $P$. Recall that in this case we also let $W(\tree{S},\tree{T},F)= \tree{T}\upharpoonright\alpha+1 {}^\frown \langle F\rangle$, so $\tree{W}=\tree{X}$. We let $\Psi$ be the identity on $\tree{S}\upharpoonright\beta+1$ except we set $u(\beta)=\alpha+1$ and $t_\beta= i^P_F$. We also let $\Gamma$ be the identity (on $\tree{X}=\tree{W}$). Recalling how we defined $\Phi^{W(\tree{S},\tree{T},F)}$ in the dropping case, we have $\Phi=\Gamma\circ \Psi$.
Note that $\text{Ult}(P,F)$ is the last model of $\tree{X}$, and $\sigma^{W(\tree{S},\tree{T},F)} = t_{\beta+1}^\Gamma = \text{id}$, as desired. \paragraph{The non-dropping case.} Suppose $F$ is applied to an initial segment $P\trianglelefteq M^\tree{S}_\beta$ with $M^\tree{S}_\beta|\text{lh} E^\tree{S}_\beta\trianglelefteq P$, if $\beta+1<\text{lh} \tree{S}$, or $F$ is total on $M^\tree{S}_\beta$ if $\beta+1=\text{lh} \tree{S}$. We define $\Psi, \Gamma$ and $\tree{X}$ as follows. We define $u=u^\Psi$ as we did in embedding normalization: \begin{equation*} u(\xi) = \begin{cases*} \xi & if $\xi<\beta$, \\ \alpha+1+(\xi-\beta) & if $\xi\geq \beta$. \end{cases*} \end{equation*} The models of $\tree{X}$ are given by setting $M_\theta^{\tree{X}} = M_\theta^{\tree{T}}$ if $\theta \le \alpha$, and \[ M_{u(\xi)}^{\tree{X}} = \begin{cases} \text{Ult}(P,F) & \text{ if $\xi = \beta$, and }\\ \text{Ult}(M_\xi^{\tree{S}},F) & \text{ if $\xi > \beta$.} \end{cases} \] Note that for all $\xi>\beta$, $F$ is total on $M_\xi^\tree{S}$ and $\text{crit}(F) < \rho^-(M_\xi^\tree{S})$. This follows easily from \begin{proposition}\label{local hod prop 1} Let $\tree{U}$ be a normal tree, $\xi+1<\text{lh} \tree{U}$, and $\mu=\text{lh} E^\tree{U}_\xi$. If $\xi<\theta<\text{lh}\tree{U}$, then $\mu$ is a successor cardinal of $M_\theta^\tree{U}$ and for $k=\deg(M_\theta^\tree{U})$, $\mu<\rho_k(M_\theta^\tree{U})$. \end{proposition} \begin{proof} Well known and routine. \end{proof} However, it can now happen that some of the $\text{Ult}(M_\xi^{\tree{S}},F)$ for $\xi > \beta$ have type 2. Similarly, when $P=M_\beta^{\tree{S}}$ it can happen that $\text{Ult}(P,F)$ has type 2. (The branch to the relevant model must have dropped to a premouse that is not strongly stable in this case.) We wish to avoid this possibility, because it adds a notational mess without requiring any important new ideas.
\bigskip \noindent {\em Simplifying assumption:} $\text{Ult}(P,F)$ and all $\text{Ult}(M_\xi^{\tree{S}},F)$ for $\xi > \beta$ have type 1. \bigskip \noindent We shall describe how to remove this assumption in a future draft.\footnote{The construction we are doing produces a system $Y(\tree{S},\tree{T},F)$ that has premice of type 2 on it, together with a (generalized) weak tree embedding $\Phi$ from $Y(\tree{S},\tree{T},F)$ to $W(\tree{S},\tree{T},F)$ whose $u$-map is the identity. $\tree{Y}$ is not literally an iteration tree, but we can convert it to one that reaches the same models, but using additional steps. This involves inserting ultrapowers by order zero measures into the full normalization $\tree{X}$, so that it can reach premice of the form $\text{Ult}(Q,F)^-$, where $\text{Ult}(Q,F)$ has type 2. Note $\text{Ult}(Q,F)^-$ has type 1, so it could be a model in a normal tree on $M$. Adding these order zero ultrapowers means that $X(\tree{S},\tree{T},F)$ and $W(\tree{S},\tree{T},F)$ may no longer have exactly the same tree order. Nevertheless, the existence of $\Phi$ implies that $\tree{X}$ is ``good", granted that $\tree{W}$ is good.} Under our simplifying assumption, $\tree{X}$ will have the same tree order and length as $\tree{W}$, and we set $u^\Gamma=v^\Gamma=id$. We can also now specify the remaining $t$-maps of $\Psi$: \[ t^\Psi_\xi=i^{M_\xi^\tree{S}}_F. \] What is left is to find the extenders $E_\xi^{\tree{X}}$ that make $\tree{X}$ into an iteration tree, and to finish defining $\Psi$ and $\Gamma$. Proposition \ref{local hod prop 1} implies that for $\mu = \text{lh} E_\xi^{\tree{S}}$ and $\xi+1 \le \eta < \text{lh} \tree{S}$, $t_{\xi+1} \upharpoonright (\mu +1) = t_\eta \upharpoonright (\mu+1)$. In general, we do not have that $t_\xi\upharpoonright \mu =t_{\xi+1}\upharpoonright \mu$.
What we have is the following diagram \[ \begin{tikzcd} M^\tree{S}_\xi \arrow{r}{t_\xi} & M^\tree{X}_{u(\xi)}\arrow[Eq]{r}& \text{Ult}(M_\xi^\tree{S}, F)\\ M^\tree{S}_\xi||\mu \arrow[Is]{u}{} \arrow{r}{t_\xi} \arrow[swap]{rd}{t_{\xi+1}}& t_\xi(M^\tree{S}_\xi||\mu)\arrow[Is]{u}{} \\ & t_{\xi+1}(M^\tree{S}_\xi||\mu)\arrow[swap]{u}{\text{rs}_\xi}\arrow[Eq]{r}& \text{Ult}(M_\xi^\tree{S}||\mu, F) \end{tikzcd} \] $t_{\xi+1}(M_\xi^\tree{S}||\mu)$ is the ultrapower computed using functions in $M_\xi^\tree{S}||\mu$ and $t_{\xi}(M_\xi^\tree{S}||\mu)$ is the ultrapower computed using all functions in $M_\xi^\tree{S}$. $\text{rs}_\xi$ is the natural factor map. (\lq\lq rs" is meant to suggest \lq\lq resurrection".) Having defined the $E_\gamma^{\tree{X}}$ for $\gamma < u(\xi)$, we get at once from Proposition \ref{local hod prop 1}: \begin{claim}\label{local hod claim 1} For any $\gamma<u(\xi)$, $\text{rs}_\xi\upharpoonright\text{lh} E^\tree{X}_{\gamma}+1 = id$. Also, $\text{rs}_\xi \upharpoonright\text{lh} F +1 = id$. \end{claim} So for any $\theta\geq \xi+1$, $t_\theta\upharpoonright\text{lh} E_\xi^\tree{S}= t_{\xi+1}\upharpoonright\text{lh} E^\tree{S}_\xi$. We define now extenders $E_\gamma^\tree{X}$ which make $\tree{X}$ into a normal iteration tree. To start, we let \[ E_\gamma^\tree{X}=\begin{cases} E_\gamma^\tree{T}\text{ if } \gamma<\alpha\\ F \text{ if } \gamma=\alpha\end{cases}.\] Now let $\gamma>\alpha$, so $\gamma=u(\xi)$ for some $\xi\geq \beta$. Assume that $\xi>\beta$; the argument when $\xi=\beta$ is similar, but $M_\beta^\tree{S}$ gets replaced by the initial segment $P\trianglelefteq M_\beta^\tree{S}$ indicated above. Let $\mu=\text{lh} E_\xi^\tree{S}$. We have the following diagram. 
\[ \begin{tikzcd} M^\tree{S}_\xi \arrow{r}{t_\xi} & M^\tree{X}_{u(\xi)}\arrow[Eq]{r}& \text{Ult}(M_\xi^\tree{S}, F)\\ M^\tree{S}_\xi|\mu \arrow[Is]{u}{} \arrow{r}{t_\xi} \arrow[swap]{rd}{t_{\xi+1}}& t_\xi(M^\tree{S}_\xi|\mu)\arrow[Is]{u}{} \\ & t_{\xi+1}(M^\tree{S}_\xi|\mu)\arrow[swap]{u}{\text{rs}_\xi}\arrow[Eq]{r}& \text{Ult}(M_\xi^\tree{S}|\mu, F) \end{tikzcd} \] The difference from the preceding diagram is just that $M_\xi^\tree{S}|\mu$ has a predicate symbol $\dot F$ for $E^\tree{S}_\xi$, while $M_\xi^\tree{S}||\mu$ is passive. The maps remain elementary (i.e. $\Sigma_1$ elementary) even with this added predicate. Applying Lemma \ref{main full normalization lemma}, we have \begin{claim}\label{local hod claim 2} $\text{Ult}(M_\xi^\tree{S}|\mu, F)\trianglelefteq M_{u(\xi)}^\tree{X}$ and $\text{rs}_\xi$ respects drops over $(M_{u(\xi)}^\tree{X}, t_{\xi+1}(\mu), t_\xi(\mu))$. \end{claim} We set \begin{align*} E_{u(\xi)}^\tree{X} &= \dot F ^{\text{Ult}(M_\xi^\tree{S}|\mu, F)}\\ &= \text{the top extender of }\text{Ult}(M_\xi^\tree{S}|\mu, F)\\ &=\bigcup_{\alpha<\mu} t_{\xi+1}\big(E_\xi^\tree{S}\cap M_\xi^\tree{S}|\alpha\big). \end{align*} We may sometimes write \[E_{u(\xi)}^\tree{X} = t_{\xi+1}(E_\xi^\tree{S})\] though literally $E_\xi^\tree{S}\not\in \text{dom} t_{\xi+1}$. We do always literally have $\text{lh} E_\xi^\tree{S}\in \text{dom} t_{\xi+1}$ and $t_{\xi+1}(\text{lh} E_\xi^\tree{S})=\text{lh} E^\tree{X}_{u(\xi)}$. We now let \begin{align*} G&=E_\xi^\tree{S}\\ H&= t_\xi(G)\\ \bar H &= t_{\xi+1}(G)=E_{u(\xi)}^\tree{X}.
\end{align*} \begin{claim}\label{local hod claim 3} \begin{enumerate} \item[(a)] For any $\delta<\xi$, $\text{lh} E^\tree{X}_{u(\delta)}< \text{lh} \bar H$ \item[(b)] $\text{lh} F<\lambda (\bar H)$ \item[(c)] For any $\delta<\xi$, $\text{crit} G<\lambda(E_\delta^\tree{S})\Leftrightarrow \text{crit} H< \lambda(E_{u(\delta)}^\tree{X})\Leftrightarrow \text{crit} \bar H< \lambda(E_{u(\delta)}^\tree{X})$ \item[(d)] If $\text{crit} G<\lambda(E_\delta^\tree{S})$, then $\text{crit} H= \text{crit} \bar H$. In fact, $H\upharpoonright\text{lh} E^\tree{X}_{u(\delta)}=\bar H\upharpoonright \text{lh} E^\tree{X}_{u(\delta)}$. \end{enumerate} \end{claim} \begin{proof} For (a), let $\delta<\xi$. Then, using Claim \ref{local hod claim 1}, $\text{lh} E_\delta^\tree{S}<\text{lh} E^\tree{S}_\xi$, so \begin{align*} \text{lh} E^\tree{X}_{u(\delta)}&=t_{\delta+1}(\text{lh} E^\tree{S}_\delta)=t_{\xi+1}(\text{lh} E^\tree{S}_\delta)\\&< t_{\xi+1}(\text{lh} E_\xi^\tree{S})=\text{lh} E^\tree{X}_{u(\xi)}, \end{align*} as desired. For (b), $\text{crit} F^+<\lambda(E_\xi^\tree{S})$, so \[i_F^{M_\xi^\tree{S}|\mu}(\text{crit} F^+)=\text{lh} F<i_F^{M_\xi^\tree{S}|\mu}(\lambda(E_\xi^\tree{S}))=\lambda(E^\tree{X}_{u(\xi)}).\] For (c), let $\kappa=\text{crit} G=\text{crit} E^\tree{S}_\xi$. So $t_\xi(\kappa)=\text{crit} H$ and $t_{\xi+1}(\kappa)=\text{crit} \bar H.$ Then for $\delta<\xi$, \begin{align*} \kappa<\lambda(E^\tree{S}_\delta) &\text{\quad iff \quad} t_{\delta+1}(\kappa)<\lambda(E^\tree{X}_{u(\delta)})\\ &\text{\quad iff \quad}t_\xi(\kappa)<\lambda(E^\tree{X}_{u(\delta)})\\ &\text{\quad iff \quad}t_{\xi+1}(\kappa)<\lambda(E^\tree{X}_{u(\delta)}), \end{align*} using on the second and third lines that $t_{\delta+1}, t_\xi$, and $t_{\xi+1}$ all agree on $\text{lh} E^\tree{S}_\delta+1$. (d) is clear. \qed \end{proof} By Claim \ref{local hod claim 3}, setting $E_{u(\xi)}^\tree{X}=\bar H$ preserves the length-increasing clause in the definition of normality.
We now need to check that this choice of extender gives rise to the appropriate next model when we apply it where we must, following the Jensen normality rules. Let $\delta=\tree{S}\text{-pred}(\xi+1)$. We now break into cases. \paragraph{Case 1.} $\text{crit} G<\text{crit} F$. In this case, since $\text{crit} F<\lambda(E_\beta^\tree{S})$, $\delta\leq \beta$. If $\delta<\beta$ (so that $u(\delta)=\delta$), then Claim \ref{local hod claim 3} (c) and (d) tell us that $\bar H$ must be applied in $\tree{X}$ to the same $Q\trianglelefteq M_\delta^\tree{S}$ that $G$ is applied to in $\tree{S}$. In fact, $\text{crit} \bar H=\text{crit} G=\text{crit} H$. We then have the commutative diagram \[ \begin{tikzcd} M^\tree{S}_{\xi+1} \arrow{r}{t_{\xi+1}} & M^\tree{X}_{u(\xi+1)}\arrow[Eq]{r}& \text{Ult}(M_{\xi+1}^\tree{S}, F)\arrow[Eq]{r}& \text{Ult}(Q, \bar H)\\ Q\arrow{u}{G}\arrow[swap]{ur}{\bar H}\arrow[Is]{d}\\ M_\delta^\tree{S}\arrow[Eq]{r} &M_\delta^\tree{X}. \end{tikzcd} \] It is shown in \cite[\S 6.1]{nitcis} that $\text{Ult}(M_{\xi+1}^\tree{S}, F)= \text{Ult}(Q, \bar H)$ and that this diagram commutes. (See the proof of Claim \ref{local hod claim 5} in Case 2, below, for a similar calculation.) The situation when $\delta=\beta$ is the same: $\tree{X}\text{-pred}(u(\xi+1))=\beta$ and $\bar H$ is applied to the same $Q$ that $G$ was. Note that $u(\beta)\neq \beta$, so $u$ does not preserve tree order, just as in embedding normalization. \paragraph{Case 2.} $\text{crit} F\leq \text{crit} G$. In this case, $\delta\geq \beta$. Also, $\lambda(F)\leq \text{crit} \bar H$, so $\bar H$ is applied in $\tree{X}$ to some $Q\trianglelefteq M_\tau^\tree{X}$, where $\tau\geq \alpha+1$. Thus $\tau\in \text{ran}(u)$ and, by Claim \ref{local hod claim 3}, $\tau=u(\delta)$. That is, $\tree{X}\text{-pred}(u(\xi+1))=u(\delta)$. Let $\kappa=\text{crit} G$ and $P\trianglelefteq M_\delta^\tree{S}$ be such that $M_{\xi+1}^\tree{S}=\text{Ult}(P,G)$.
\begin{claim}\label{local hod claim 4} $\bar H$ is applied to $\text{Ult}(P,F)$ in $\tree{X}$. \end{claim} \begin{proof} We have the following diagram. \[ \begin{tikzcd} M^\tree{S}_\delta \arrow{r}{t_\delta} & M^\tree{X}_{u(\delta)}\\ P \arrow[Is]{u}{} \arrow{r}{t_\delta} \arrow[swap]{rd}{i^P_F}& t_\delta(P)\arrow[Is]{u}{} \\ & \text{Ult}(P,F)\arrow[swap]{u}{l}\\ M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta \arrow[Is]{uu}{} \arrow{r}{i^P_F} \arrow[swap]{rd}{t_{\delta+1}}& i^P_F(M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta)\arrow[Is]{u}{} \\ & \text{Ult}(M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta,F)\arrow[swap]{u}{k}\arrow[bend right=60, swap]{uuu}{\text{rs}_\delta} \end{tikzcd} \] $k$ and $l$ are the natural factor maps and \[ \text{rs}_\delta=l\circ k. \] By Lemma \ref{main full normalization lemma}, $\text{Ult}(M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta, F)\trianglelefteq i^P_F(M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta)$ and $\text{rs}_\delta$ respects drops over $(M_{u(\delta)}^{\tree{X}}, t_{\delta+1}(\text{lh} E^{\tree{S}}_\delta), t_\delta(\text{lh} E^{\tree{S}}_\delta))$. Note that \[ k\upharpoonright t_{\delta+1}((\text{crit} G^+)^{M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta})= \text{ id} \] because $\rho_{\deg(P)}(P)>\text{crit} G$. So $t_{\delta+1}(\text{dom} G)=i_F^P(\text{dom} G)$ and $t_{\delta+1}\upharpoonright(\text{crit} G^+)^P=i^P_F\upharpoonright (\text{crit} G^+)^P$. We have that $P = A_e(M_\delta^\tree{S}, M_\delta^{\tree{S}}|\text{lh} E^\tree{S}_\delta)^-$ for some $e \le n$, where $n = n( M_\delta^\tree{S}, M_\delta^{\tree{S}}|\text{lh} E^\tree{S}_\delta)$.
By the proof of Lemma \ref{main full normalization lemma}, \begin{align*} \text{Ult}(P,F) &= B^e_e \trianglelefteq t_\delta(P),\\ k &= \sigma_{e-1} \circ ...\circ \sigma_1,\\ \intertext{ and } l &= \sigma_n \circ ...\circ \sigma_e, \end{align*} where $\text{rs}_\delta = \sigma_n \circ ...\circ \sigma_1$ is the factoring of $\text{rs}_\delta$ induced by the dropdown sequence of $(M_\delta^\tree{S}, M_\delta^{\tree{S}}|\text{lh} E^\tree{S}_\delta)$. Note here that we are using our simplifying assumption to conclude that $B^e_e \lhd t_\delta(P)$. Note that for $\kappa=\text{crit} G$, $\rho(\text{Ult}(P,F))\leq i^P_F(\kappa)$, because $\text{Ult}(P,F)$ is generated by $i^P_F(\rho(P))\cup i^P_F"\kappa \cup \text{lh} F$, and $\text{lh} F<i^P_F(\kappa)$. But for $m=\deg(P)$, $\rho_m(P)\geq (\kappa^+)^P$, so $\rho_m(\text{Ult}(P,F))\geq i^P_F((\kappa^+)^P)$. It follows that $\bar H$, whose domain is $t_{\xi+1}(\text{dom} G)= t_{\delta+1}(\text{dom} G)=i^P_F(\text{dom} G)$, is applied to $\text{Ult}(P,F)$ in $\tree{X}$. \qed \end{proof} \begin{claim}\label{local hod claim 5} $\text{Ult}(\text{Ult}(P,F), \bar H)=M^\tree{X}_{u(\xi+1)}= \text{Ult}(\text{Ult}(P,G), F).$ \end{claim} \begin{proof} This is shown in \cite[\S 6.1]{nitcis}, but we repeat the calculations here. Set $N=\text{Ult}(P,G)$ and $Q=\text{Ult}(P,F)$. We have the diagram \[ \begin{tikzcd} N \arrow{r}{i^N_F} & \text{Ult}(N,F)& \text{Ult}(Q, \bar H)\\P\arrow{u}{i^P_G}\arrow{r}{i^P_F}& Q\arrow{ur} \end{tikzcd} \] Let $E$ be the extender of $i^N_F\circ i^P_G$; then $\nu(E)\leq \sup i^N_F"\lambda(G)$ and for $a\in [\nu(E)]^{<\omega}$, $E_a$ concentrates on $\text{crit} G^{|a|}$. Let $K$ be the extender of $i^Q_{\bar H}\circ i^P_F$, so $\nu(K)\leq \text{lh} \bar H=\sup i^N_F"\lambda(G)$, and each $K_a$ concentrates on $\text{crit} G^{|a|}$.
Let $a=[b,g]^N_F$ be a typical element of $[\sup i^N_F"\lambda(G)]^{<\omega}$, where $g\in N|\lambda(G)=M_\xi^\tree{S}|\lambda(G)$, and let $A\subseteq \text{crit} G^{|a|}$; then \begin{align*} (a,A)\in E &\text{\quad iff \quad} [b,g]^N_F\in i^N_F\circ i^P_G(A)\\ &\text{\quad iff \quad} \text{for $F_b$-a.e. $u$, $g(u)\in i^P_G(A)$}\\ &\text{\quad iff \quad} \text{for $F_b$-a.e. $u$, } (g(u),A)\in G\\ &\text{\quad iff \quad} \big([b,g]^{M_\xi^\tree{S}|\text{lh} G}_F, i^{M_\xi^\tree{S}|\text{lh} G}_F(A)\big)\in \bar H\\ &\text{\quad iff \quad} \big([b,g]^N_F, i^P_F(A)\big)\in \bar H\\ &\text{\quad iff \quad} [b,g]^N_F\in i^Q_{\bar H}\circ i^P_F(A)\\ &\text{\quad iff \quad} (a,A)\in K, \end{align*} using in the fifth line that $[b,g]^N_F=[b,g]^{M_\xi^\tree{S}|\text{lh} G}_F$ and $i^{M_\xi^\tree{S}|\text{lh} G}_F(A)=i^P_F(A)$. So $E=K$ and $\text{Ult}(N,F)= \text{Ult}(Q,\bar H)$, as desired. \end{proof} The proof of the previous claim also shows that $i^N_F\circ i^P_G=i^Q_{\bar H}\circ i^P_F$. So we have verified that as long as all the models $\text{Ult}(M_\xi^\tree{S},F)$ are wellfounded, $\tree{X}$ is a normal tree. Putting together our work above, we have the following.
\begin{proposition} \begin{enumerate} \item $\tree{X}\upharpoonright\alpha+1=\tree{T}\upharpoonright\alpha+1$, \item $M_{\alpha+1}^\tree{X}=\text{Ult}(P,F)$ and $t_\beta=i^P_F$, for $P\trianglelefteq M_\beta^\tree{S}$, \item for $\xi>\beta$, $M_{u(\xi)}^\tree{X}=\text{Ult}(M_\xi^\tree{S}, F)$ and $t_\xi=i^{M_\xi^\tree{S}}_F$, \item $t_\xi=\text{rs}_\xi\circ t_{\xi+1}$, $\text{rs}_\xi$ respects drops, and $E^\tree{X}_{u(\xi)}=t_{\xi+1}(E^\tree{S}_\xi)$, \item for $\delta\leq_\tree{S}\xi$ either \begin{enumerate} \item $u(\delta)\leq_\tree{X} u(\xi)$ and $t_\xi\circ \hat\imath^\tree{S}_{\delta, \xi}=\hat\imath^\tree{X}_{u(\delta), u(\xi)}\circ t_\delta$, or \item $\delta=\beta$ and for $\zeta+1$ the successor of $\beta$ in $[\beta, \xi]_\tree{S}$, $\text{crit} E_\zeta^\tree{S}<\text{crit} F$, $\beta\leq_\tree{X} u(\xi)$ and $t_\xi\circ \hat\imath^\tree{S}_{\beta, \xi}=\hat\imath^\tree{X}_{\beta, u(\xi)}$. \end{enumerate} \end{enumerate} \end{proposition} For $\xi<\beta$, we let $t_\xi=id$ and $\text{rs}_\xi=id$. We then have that $\Psi= \langle u, \{t_\xi\}, \{\text{rs}_\xi\}\rangle$ is an extended weak tree embedding from $\tree{S}$ into $\tree{X}$ which is in the $X$-case at every ordinal $\xi+1$. It just remains to describe the weak tree embedding $\Gamma:\tree{X}\to \tree{W}$ and verify that $\Gamma\circ \Psi=\Phi$. Recalling the one-step embedding normalization, our work above shows that $\tree{X}$ and $\tree{W}$ have the same length and tree order and $u=u^\Psi=u^\Phi$.\footnote{Without our simplifying assumption, this might not be true.} We let $u^\Gamma=id$. We just need to define the $t$-maps of $\Gamma$, which we call $\gamma_\xi$, and the $\sigma$-maps $ \text{rs}^*_\xi$. \[ \Gamma = \langle \text{ id}, \langle \gamma_\xi \mid \xi < \text{lh} \tree{X} \rangle, \langle \text{rs}^*_\xi \mid \xi+1 < \text{lh} \tree{X} \rangle \rangle. \] $\Gamma$ will be in the $W$-case at every $\xi$. Let $\pi_\eta=t^\Phi_\eta$.
Since $\tree{W}\upharpoonright\alpha+2=\tree{T}\upharpoonright\alpha+1{}^\frown\langle F\rangle =\tree{X}\upharpoonright\alpha+2$, we'll have $\gamma_\xi=id$ and $\text{rs}^*_\xi=id$ for all $\xi\leq \alpha+1$. Now suppose for all $\eta\leq \xi$ we've defined partial elementary maps $\gamma_{u(\eta)}:M_{u(\eta)}^\tree{X}\to M^\tree{W}_{u(\eta)}$ and maps $\text{rs}^*_{u(\eta)}: M^\tree{W}_{u(\eta)}|\gamma_{u(\eta)} (\text{lh} E^\tree{X}_{u(\eta)})\to M^\tree{W}_{u(\eta)}|\text{lh} E^\tree{W}_{u(\eta)}$ which respect drops such that \begin{enumerate} \item $\pi_\eta = \gamma_{u(\eta)}\circ t_\eta$, \item $\gamma_{u(\eta)}\upharpoonright\text{lh} F=id$, \item if $\beta\leq \eta<\zeta$, then $\gamma_{u(\zeta)}\upharpoonright \text{lh} E^\tree{X}_{u(\eta)}= \gamma_{u(\eta)}\circ \text{rs}_\eta \upharpoonright\text{lh} E^\tree{X}_{u(\eta)}$, \item $\text{rs}^*_{u(\eta)}\circ\gamma_{u(\eta)}=\gamma_{u(\eta)}\circ \text{rs}_\eta$. \end{enumerate} Let $G, H, \bar H$ be as before. So (1) gives us that $E^\tree{W}_{u(\xi)}=\pi_\xi(E^\tree{S}_\xi)=\gamma_{u(\xi)}(H)$. Let $\delta=\tree{S}\text{-pred}(\xi+1)$. \paragraph{Case 1.} $\text{crit} G<\text{crit} F$. In this case, $\text{dom} \bar H =\text{dom} H=\text{dom} G$, $\delta\leq \beta$, and $\delta=\tree{X}\text{-pred}(u(\xi+1))=\tree{W}\text{-pred}(u(\xi+1))$. Suppose $\bar H$ is applied to $P\trianglelefteq M_\delta^\tree{X}=M_\delta^\tree{S}$ in $\tree{X}$. Then $H$ would also be applied to $P$ if we had set $E^\tree{X}_{u(\xi)}=H$. Now, $\bar H$ is a subextender of $H$ under $\text{rs}_\xi$: \[(a, A)\in \bar H \quad \text{iff} \quad (\text{rs}_\xi(a), A)\in H.\] So we let $\epsilon$ be the natural map from $M_{u(\xi+1)}^\tree{X}= \text{Ult}(P, \bar H)$ into $\text{Ult}(P, H)$, i.e. \[\epsilon\big([a,f]^P_{\bar H}\big)=[\text{rs}_\xi(a), f]^P_H.\] So $\epsilon\upharpoonright\text{lh} \bar H=\text{rs}_\xi\upharpoonright\text{lh} \bar H$.
We have the diagram \[ \begin{tikzcd} M^\tree{S}_{\xi+1} \arrow{r}{t_{\xi+1}} & M^\tree{X}_{u(\xi+1)}\arrow{r}{\epsilon}& \text{Ult}(P, H)\\ P\arrow{u}{G}\arrow{ur}{\bar H}\arrow[swap]{urr}{H}\arrow[Is]{d}\\ M_\delta^\tree{S}\arrow[Eq]{r} &M_\delta^\tree{X}. \end{tikzcd} \] Since $\delta\leq \beta$, we have $\gamma_{u(\xi)}\upharpoonright\text{dom} H=id$, so $\gamma_{u(\xi)}(H)$ is applied to $P$ in $\tree{W}$, and we have the diagram \[ \begin{tikzcd} M^\tree{S}_{\xi+1} \arrow{r}{t_{\xi+1}} & M^\tree{X}_{u(\xi+1)}\arrow{r}{\epsilon}& \text{Ult}(P, H)\arrow{r}{\theta}&M^\tree{W}_{u(\xi+1)}\\ P\arrow{u}{G}\arrow{ur}{\bar H}\arrow[swap]{urr}{H}\arrow[bend right=10, swap]{urrr}{\gamma_{u(\xi)}(H)}\arrow[Is]{d}\\ M_\delta^\tree{S}\arrow[Eq]{r} &M_\delta^\tree{X}\arrow[Eq]{r} &M_\delta^\tree{W}. \end{tikzcd} \] where $\theta$ is given by \[ \theta([a,f]^P_H)=[\gamma_{u(\xi)}(a), f]^P_{\gamma_{u(\xi)}(H)}. \] So $\theta$ agrees with $\gamma_{u(\xi)}$ on $\text{lh} H$. We set \[ \gamma_{u(\xi+1)}=\theta\circ \epsilon. \] So by the agreement of $\theta$ with $\gamma_{u(\xi)}$ and $\epsilon$ with $\text{rs}_\xi$, \[ \gamma_{u(\xi+1)}\upharpoonright\text{lh} \bar H=\gamma_{u(\xi)} \circ \text{rs}_\xi\upharpoonright\text{lh} \bar H, \] which gives us (3) at $\xi+1$. It is easy to see that \[ \pi_{\xi+1}=\gamma_{u(\xi+1)}\circ t_{\xi+1}, \] so we have (1) at $\xi+1$. We let \[ \text{rs}_{u(\xi+1)}^*=\gamma_{u(\xi+1)}(\text{rs}_{\xi+1}). \] (It can happen that $\text{rs}_{\xi+1}$ does not belong to $M_{u(\xi+1)}^{\tree{X}}$, but in any case $\text{rs}_{\xi+1}$ is definable from parameters over $M_{u(\xi+1)}^{\tree{X}}$ in a way that is simple enough that we can move it by $\gamma_{u(\xi+1)}$.) This gives us (4). \paragraph{Case 2.} $\text{crit} F\leq \text{crit} G$. Suppose first that $\delta<\xi$.
This yields that $t_\xi\upharpoonright \text{lh} E^\tree{S}_\delta=t_{\xi+1}\upharpoonright\text{lh} E^\tree{S}_{\delta}$, so $t_\xi$ and $t_{\xi+1}$ agree on $\text{dom} G$, so $\text{dom} \bar H=\text{dom} H$. Moreover, $\text{rs}_\xi\upharpoonright \text{dom} \bar H=id$. Let $P\trianglelefteq M^\tree{S}_\delta$ be what $G$ is applied to in $\tree{S}$. We have: \[ i^P_F\upharpoonright\text{dom} G=t_{\delta+1}\upharpoonright\text{dom} G=t_\xi\upharpoonright\text{dom} G=t_{\xi+1}\upharpoonright\text{dom} G. \] As in the previous case, $\bar H$ is a subextender of $H$ via $\text{rs}_\xi$, so we let $\epsilon$ be the copy map from $M_{u(\xi+1)}^\tree{X}=\text{Ult}(\text{Ult}(P,F), \bar H)$ into $\text{Ult}(P, H)$, as in Case 1. We again have \[ \epsilon\upharpoonright\text{lh} \bar H=\text{rs}_\xi\upharpoonright\text{lh} \bar H. \] By (1) at $\xi$, we have that \[ E^\tree{W}_{u(\xi)}= \pi_{\xi}(E^\tree{S}_\xi)=\gamma_{u(\xi)}(t_\xi(E_\xi^\tree{S}))= \gamma_{u(\xi)}(H). \] By our case hypothesis, we have that $E^\tree{W}_{u(\xi)}$ is applied to $\pi_\delta(P)\trianglelefteq M_{u(\delta)}^\tree{W}$ in $\tree{W}$. Let \begin{align*} P &= A_e(M_\delta^{\tree{S}}, M_\delta^{\tree{S}}|\text{lh} E_\delta^{\tree{S}})^-,\\ k &=\sigma_{e-1} \circ ...\circ \sigma_1,\\ l &= \sigma_n \circ ...\circ \sigma_e,\\ \intertext{ where} \text{rs}_\delta &= \sigma_n \circ ...\circ \sigma_1 \end{align*} is the resolution of the factor map $\text{rs}_\delta$ given by the fact that it respects drops over $(M_{u(\delta)}^{\tree{X}}, t_{\delta+1}(\text{lh} E_\delta^{\tree{S}}), t_\delta(\text{lh} E_\delta^{\tree{S}}))$. We have that $k:\text{Ult}(M_\delta^\tree{S}|\text{lh} E^\tree{S}_\delta, F)\to \text{Ult}(P,F)$ and $l:\text{Ult}(P,F)\to t_\delta(P)$, moreover \begin{align*} k\upharpoonright\text{dom} G & = \text{ id},\\ \intertext{ so} l\upharpoonright\text{dom} G & =\text{rs}_\delta\upharpoonright\text{dom} G.
\end{align*} So we have \[ \gamma_{u(\xi)}\upharpoonright\text{dom} G=\gamma_{u(\delta)}\circ \text{rs}_\delta\upharpoonright\text{dom} G=\gamma_{u(\delta)}\circ l\upharpoonright\text{dom} G. \] So we may let $\theta$ be given by the Shift Lemma applied to $\gamma_{u(\delta)}\circ l$, $\gamma_{u(\xi)}$, and $H$, i.e. \[ \theta\big([a, f]^{\text{Ult}(P, F)}_H\big)=[\gamma_{u(\xi)}(a), \gamma_{u(\delta)}\circ l(f)]^{\pi_\delta(P)}_{\gamma_{u(\xi)}(H)}. \] We have the following commutative diagram. \[ \begin{tikzcd} M^\tree{S}_{\xi+1} \arrow{r}{t_{\xi+1}} & M^\tree{X}_{u(\xi+1)}\arrow{r}{\epsilon} & \text{Ult}(P, H)\arrow{r}{\theta}& M^\tree{W}_{u(\xi+1)}\\ P\arrow[Is]{dd}\arrow{u}{G}\arrow{r}{i^P_F}\arrow{dr}{t_\delta} &\text{Ult}(P,F)\arrow{u}{\bar H}\arrow[swap]{ur}{H}\arrow{d}{l}\\ & t_\delta(P) \arrow[Is]{d}\arrow{r}{\gamma_{u(\delta)}}& \pi_\delta(P)\arrow[ swap]{uur}{\gamma_{u(\xi)}(H)}\arrow[Is]{d}\\ M_\delta^\tree{S}\arrow{r}{t_\delta} & M_{u(\delta)}^\tree{X}\arrow{r}{\gamma_{u(\delta)}} & M_{u(\delta)}^\tree{W} \end{tikzcd} \] We now let \[\gamma_{u(\xi+1)}=\theta\circ \epsilon.\] Since $\epsilon \upharpoonright\text{lh} \bar H=\text{rs}_\xi\upharpoonright\text{lh} \bar H$ and $\theta\upharpoonright\text{lh} H=\gamma_{u(\xi)}\upharpoonright\text{lh} H$, we get \[ \gamma_{u(\xi+1)}\upharpoonright\text{lh} \bar H = \gamma_{u(\xi)}\circ \text{rs}_\xi\upharpoonright\text{lh} \bar H, \] giving (3) at $\xi+1$. As in the previous case, we let \[ \text{rs}_{u(\xi+1)}^*=\gamma_{u(\xi+1)}(\text{rs}_{\xi+1}), \] which guarantees (4) at $\xi+1$. It remains to show (1) at $\xi+1$, i.e. that $\pi_{\xi+1} = \gamma_{u(\xi+1)}\circ t_{\xi+1}$.
Note first that the two sides agree on $\text{ran} i^P_G$, for letting $j=\hat\imath^\tree{W}_{u(\delta), u(\xi+1)} \colon \pi_\delta(P)\to M^\tree{W}_{u(\xi+1)}$, we have \begin{align*} \theta\circ \epsilon\circ t_{\xi+1}\circ i^P_G&=j\circ \gamma_{u(\delta)}\circ t_\delta\\ &=j\circ \pi_\delta\\ &=\pi_{\xi+1}\circ i^P_G, \end{align*} using the commutativity properties of the maps in embedding normalization. But $M^\tree{S}_{\xi+1}$ is generated by $\text{ran} i^P_G \cup \lambda(G)$, so it is enough to see that $\theta\circ \epsilon\circ t_{\xi+1}$ and $\pi_{\xi+1}$ agree on $\lambda(G)$. Since $\pi_{\xi+1}$ agrees with $\pi_\xi$ on $\lambda(G)$, we get \begin{align*} \pi_{\xi+1}\upharpoonright\lambda(G)&=\pi_\xi\upharpoonright\lambda(G)\\ &=\gamma_{u(\xi)}\circ t_\xi\upharpoonright\lambda(G)\\ &=\gamma_{u(\xi)}\circ \big(\text{rs}_\xi\circ t_{\xi+1}\big)\upharpoonright\lambda(G)\\ &= \big(\gamma_{u(\xi)}\circ \text{rs}_\xi\big)\circ t_{\xi+1}\upharpoonright\lambda(G)\\ &=\gamma_{u(\xi+1)}\circ t_{\xi+1}\upharpoonright\lambda(G), \end{align*} as desired. This finishes Case 2 under the additional hypothesis that $\delta<\xi$. The case $\delta=\xi$ is not different in any important way. In that case, we may have $\text{crit}\bar H<\text{crit} H$, but the relevant diagram is the same. We leave it to the reader to confirm this. For $\lambda$ a limit ordinal, we define $\gamma_\lambda = t_\lambda^\Gamma$ by setting $\gamma_\lambda(i_{\eta,\lambda}^{\tree{X}}(a)) = i_{\eta,\lambda}^{\tree{W}}(\gamma_\eta(a))$ for all sufficiently large $\eta$. Again, $\text{rs}^*_\lambda = \gamma_\lambda(\text{rs}_\lambda)$. We leave it to the reader to check (1)-(4). If $\tree{W}$ is wellfounded (for example if $\tree{S}$ and $\tree{T}$ are by a strategy with strong hull condensation), our induction hypotheses imply that $\Gamma$ is an extended weak tree embedding from $\tree{X}$ into $\tree{W}$ which is in the $W$-case at every $\xi$, and $\Phi=\Gamma\circ \Psi$.
In this case, let $\delta+1=\text{lh} \tree{S}$, so $u(\delta)+1=\text{lh}\tree{W}=\text{lh}\tree{X}$. Recall that $\sigma=\sigma^{W(\tree{S},\tree{T},F)}$ is the natural factor map witnessing that $F$ is an initial segment of the extender of $\pi_\delta$, which is just to say that $\sigma$ is the unique nearly elementary map from $\text{Ult}(M_\delta^\tree{S}, F) =M^\tree{X}_{u(\delta)}$ into $M^\tree{W}_{u(\delta)}$ such that $\sigma\upharpoonright\text{lh} F=id$ and $\pi_\delta=\sigma\circ i^{M_\delta^\tree{S}}_F=\sigma\circ t_{\delta}$. But $\gamma_{u(\delta)}$ has both of these properties, so $\gamma_{u(\delta)}=\sigma$, as desired. This finishes our work in the non-dropping case. We define now the full normalization $X(\tree{T}, \tree{U})$ of a maximal stack $\langle \tree{T},\tree{U}\rangle$. Formally, this will be another variety of meta-tree, where we do one-step full normalization at every step instead of one-step embedding normalization. We will not formally define this kind of meta-tree, however.
We define \[\mtree{X}(\tree{T},\tree{U})=\langle \tree{X}_\xi, E^\tree{U}_\zeta,\Psi^{\eta,\xi}\,|\, \eta,\xi,\zeta+1<\text{lh} \tree{U}, \, \eta\leq_\tree{U} \xi\rangle,\] where \begin{enumerate} \item $\tree{X}_0 = \tree{T}$ and for all $\xi<\text{lh} \tree{U}$, $\tree{X}_\xi$ is a normal tree with last model $M^\tree{U}_\xi$, \item for $\zeta\leq_\tree{U}\eta\leq_\tree{U}\xi$, \begin{enumerate} \item $\Psi^{\eta,\xi}:\tree{X}_\eta\to \tree{X}_\xi$ is an $X$-type partial extended weak tree embedding, and \item $\Psi^{\zeta,\xi}=\Psi^{\eta,\xi}\circ \Psi^{\zeta,\eta}$; \end{enumerate} \item For $\eta=\tree{U}\text{-pred}(\xi+1)$, \begin{enumerate} \item $\tree{X}_{\xi+1}= X(\tree{X}_\eta,\tree{X}_\xi,E^\tree{U}_\xi)$, \item $\Psi^{\eta,\xi+1}=\Psi^{X(\tree{X}_\eta,\tree{X}_\xi,E^\tree{U}_\xi)}$ \end{enumerate} \item For $\lambda <\text{lh} \tree{U}$ a limit and $b=[0,\lambda)_\tree{U}$, \[\tree{X}_\lambda = \lim \langle\tree{X}_\xi,\Psi^{\eta,\xi}\,|\, \eta\leq_\tree{U}\xi\in b\rangle\] and $\Psi^{\xi, \lambda}$ is the direct limit weak tree embedding. \end{enumerate} It is easy to verify by induction that we have the necessary agreement hypothesis so that clause (3)(a) makes sense. Clause (4) relies on taking direct limits of systems of $X$-type weak tree embeddings, which we have not yet discussed. This exactly parallels direct limits of tree embeddings, up to the point at which we choose the exit extenders of the direct limit. That is, we must define $E_x$ for a $u$-thread $x$. Recall that we only do this when $M_x$ is wellfounded and there is an $a\in \text{dom} x$ such that for all $b\succeq a$, $t^{a,b}_{x(a)}$ (which is defined to be $t_{x(a)}^{\Psi^{a,b}}$) is total. In this case, we may actually assume that $a$ is such that for all $b\preceq a$, $t^{a,b}_{x(a)}(E^a_{x(a)})=E^b_{x(b)}$, that is, that $\text{rs}^{a,b}_{x(a)}=id$. 
This is because if $\text{rs}^{a,b}_{x(a)} \neq id$, then, since we are in the $X$-case at every step, $\text{lh} E^b_{x(b)}< t^{a,b}_{x(a)}(E^a_{x(a)})$, so if for all $a$ there is a $b\preceq a$ such that $\text{rs}^{a,b}_{x(a)}\neq id$, then the images of the lengths of these exit extenders form an infinite decreasing sequence in the ordinals of $M_x$, contradicting that it is wellfounded. So we let $E_x$ be the image of the stabilized value of $E^a_{x(a)}$ under the direct limit $t$-map $t^a_x$. Finally, for $c\in \text{dom} x$, we also define the $\sigma$-map of the direct limit weak tree embedding $\Psi^c$ as the common value $\text{rs}^c_x=t^b_x(\text{rs}^{c,b}_{x(c)})$ for any $b\preceq a,c$. Note that in dealing with direct limits of arbitrary weak tree embeddings abstractly, we must simply assume that there is an $a\in \text{dom} x$ such that $t^{a,b}_{x(a)}(E^a_{x(a)})=E^b_{x(b)}$ in order to define the direct limit. With this additional hypothesis in the definition of a wellfounded direct limit, we get the obvious analogue of Proposition \ref{direct limit prop}. Let $\mtree{W}(\tree{T},\tree{U})=\langle \tree{W}_\xi, F_\zeta,\Phi^{\eta,\xi}\,|\,\eta,\xi,\zeta+1<\text{lh} \tree{U}, \eta\leq_\tree{U} \xi\rangle$. Let $\sigma_\xi$ be the embedding normalization map from $M_\xi^\tree{U}$ into the last model of $\tree{W}_\xi$. We define $W$-type tree embeddings $\Gamma^\xi:\tree{X}_\xi\to \tree{W}_\xi=W(\tree{T},\tree{U}\upharpoonright\xi+1)$, by induction, such that \begin{enumerate} \item for all $\eta\leq_\tree{U}\xi$, $\Phi^{\eta,\xi}=\Gamma^\xi\circ \Psi^{\eta, \xi}$, \item $\sigma_\xi$ is the last $t$-map of $\Gamma^\xi$. \end{enumerate} Note that (2) implies that $F_\xi$ is the image of $E_\xi^\tree{U}$ under the last $t$-map of $\Gamma^\xi$. To start, we let $\Gamma^0=Id_\tree{T}$. At limits, $\Gamma^\lambda$ exists and is uniquely determined by the commutativity condition and the $\Gamma^\xi$ for $\xi\leq_\tree{U}\lambda$.
It remains to handle the successor stages. So suppose $\eta=\tree{U}\text{-pred}(\xi+1)$ and we have $\Gamma^\eta:\tree{X}_\eta\to \tree{W}_\eta$ and $\Gamma^\xi:\tree{X}_\xi\to \tree{W}_\xi$. \begin{claim} The weak tree embedding Shift Lemma applies to $(\Gamma^\eta, \Gamma^\xi, E^\tree{U}_\xi, F_\xi)$. \end{claim} \begin{proof} This is a routine induction. \end{proof} Now let $\Delta:W(\tree{X}_\eta,\tree{X}_\xi, E^\tree{U}_\xi)\to W(\tree{W}_\eta, \tree{W}_\xi, F_\xi)$ be the copy $W$-type weak tree embedding associated to $(\Gamma^\eta, \Gamma^\xi, E^\tree{U}_\xi, F_\xi)$. Also let $\Xi=\Gamma^{X(\tree{X}_\eta,\tree{X}_\xi,E^\tree{U}_\xi)}$. We have the following commutative diagram. \begin{center} \begin{tikzcd} & \tree{X}_\eta \arrow{r}{\Gamma^\eta}\arrow[swap]{dl}{E_\xi^\tree{U}}\arrow{d}{\Phi^{W(\tree{X}_\eta, \tree{X}_\xi, E^\tree{U}_\xi)}} & \tree{W}_\eta \arrow{d}{F_\xi}\\ \tree{X}_{\xi+1}\arrow[swap]{r}{\Xi} & W(\tree{X}_\eta, \tree{X}_\xi, E^\tree{U}_\xi) \arrow[swap]{r}{\Delta} &\tree{W}_{\xi+1} \end{tikzcd} \end{center} We let $\Gamma^{\xi+1}= \Delta\circ \Xi$, which is as desired since the above diagram commutes. This finishes the definition of the $\Gamma^\xi$. Note that the maps $\Gamma^\xi$ witness that $\mtree{X}(\tree{T},\tree{U})$ is a kind of weak meta-hull of $\mtree{W}(\tree{T}, \tree{U})$. This finishes our discussion of $X(\tree{T},\tree{U})$. It is a simple matter to extend the definition so as to define $X(s)$ for finite stacks $s$. Let us call a stack $s$ of plus trees {\em simple} iff its component plus trees are all $\lambda$-tight and normal, and the construction of $X(s)$ defined above never leads to type 2 premice; that is, our simplifying assumption always applies. What we have shown is: \begin{theorem}\label{fullnormalizationeheorem} Assume $\mathsf{AD^+}$, and let $(M,\Sigma)$ be a strongly stable mouse pair.
Let $s$ be a simple stack on $(M,\Sigma)$ with last pair $(N,\Sigma_s)$; then there is a unique normal, $\lambda$-tight tree $\tree{X}$ on $(M,\Sigma)$ with last pair $(N,\Sigma_s)$. \end{theorem} \begin{proof} Let $\tree{W} = W(s)$ and $(R,\Lambda)$ be the last pair of $\tree{W}$. Let $\sigma \colon N \to R$ be the last $\sigma$-map of the embedding normalization. Since $(M,\Sigma)$ embedding normalizes well, \[ \Sigma_s = \Lambda^\sigma. \] Let $\tree{X} = X(s)$ be the tree constructed above. $\tree{X}$ has last model $N$. Moreover, we produced a weak tree embedding $\Gamma$ from $\tree{X}$ into $\tree{W}$ whose last $t$-map is \[ t^\Gamma = \sigma. \] Since $\Sigma$ has very strong hull condensation (by Theorem \ref{vshctheorem}), \[ \Lambda^{t^\Gamma} = \Sigma_{\tree{X},N}. \] Putting things together, we get $\Sigma_s = \Sigma_{\tree{X},N}$, as desired. \end{proof} As we indicated in the course of constructing $X(s)$, the hypothesis that $s$ is simple can be dropped from Theorem \ref{fullnormalizationeheorem}, and the details of that will appear in a future draft of this paper. From this stronger version of the theorem, we get at once \begin{corollary} \label{positionalitytheorem} Assume $\mathsf{AD^+}$, and let $(M,\Sigma)$ be a strongly stable mouse pair. Let $s$ and $t$ be finite stacks of plus trees on $M$ such that $s$ and $t$ are by $\Sigma$ and have a common last model $N$; then $\Sigma_{s,N} = \Sigma_{t,N}$. \end{corollary} That is, the strategy component of a mouse pair is positional.
\section{Introduction} In 1976 't Hooft [1] observed that the standard model does not absolutely conserve baryon and lepton number due to the Adler-Bell-Jackiw anomaly. The process 't Hooft considered was spontaneous fermion number violation due to instanton induced transitions. Fermion number violating tunnelling transitions between topologically distinct vacua might indeed be observable at high energies at future accelerators [2,3]. The possibility of fermion number violation in the standard model was considered from another point of view by Manton [4]. Investigating the topological structure of the configuration space of the Weinberg-Salam theory, Manton showed that there are noncontractible loops in configuration space, and predicted the existence of a static, unstable solution of the field equations, a sphaleron [5], representing the top of the energy barrier between topologically distinct vacua. At finite temperature this energy barrier between topologically distinct vacua can be overcome due to thermal fluctuations of the fields, and fermion number violating vacuum to vacuum transitions involving changes of baryon and lepton number can occur. The rate for such baryon number violating processes is largely determined by a Boltzmann factor, containing the height of the barrier at a given temperature and thus the energy of the sphaleron. Baryon number violation in the standard model due to such transitions over the barrier may be relevant for the generation of the baryon asymmetry of the universe [6-10]. How can baryon and lepton number change when the barrier between topologically distinct vacua is traversed? The answer is seen in the level crossing picture. Let us consider a process which starts in the vacuum sector labelled by the Chern-Simons number $N_{\rm CS}$. During the process the barrier is traversed. The Chern-Simons number changes continuously, ending in the vacuum sector $N_{\rm CS}-1$. Let us assume a filled Dirac sea in the first vacuum. 
While the gauge and Higgs field configurations slowly change and with them the Chern-Simons charge, the fermion levels also change. When the bosonic configurations reach the top of the barrier, the sphaleron with Chern-Simons charge $1/2$, one fermion level of the sea has precisely reached zero energy, and when the bosonic fields reach the next vacuum configuration, this occupied energy level has dived out of the Dirac sea. In this letter we demonstrate the level crossing phenomenon for fermions in the background field of the sphaleron barrier, by numerically determining the fermion eigenvalues along the minimal energy path from one vacuum to another [11,12]. We assume that the fermions of a doublet are degenerate in mass. This assumption, violated in the standard model, allows for spherically symmetric ans\"atze for all of the fields, when the mixing angle dependence is neglected (which is an excellent approximation [13,14]). At the top of the barrier, in the background field of the sphaleron, the fermions reach a zero mode [15-17]. We briefly review in section 2 the Weinberg-Salam lagrangian with the approximations employed. In section 3 we present the sphaleron energy barrier, providing the background field for the fermions. In section 4 we derive the radial equations for the fermions, and we present our results in section 5. \section{\bf Weinberg-Salam Lagrangian} Let us consider the bosonic sector of the Weinberg-Salam theory in the limit of vanishing mixing angle. 
In this limit the U(1) field decouples and can consistently be set to zero \begin{equation} {\cal L}_{\rm b} = -\frac{1}{4} F_{\mu\nu}^a F^{\mu\nu,a} + (D_\mu \Phi)^{\dagger} (D^\mu \Phi) - \lambda (\Phi^{\dagger} \Phi - \frac{v^2}{2} )^2 \ \end{equation} with the SU(2)$_{\rm L}$ field strength tensor \begin{equation} F_{\mu\nu}^a=\partial_\mu V_\nu^a-\partial_\nu V_\mu^a + g \epsilon^{abc} V_\mu^b V_\nu^c \ , \end{equation} and the covariant derivative for the Higgs field \begin{equation} D_{\mu} \Phi = \Bigl(\partial_{\mu} -\frac{i}{2}g \tau^a V_{\mu}^a \Bigr)\Phi \ . \end{equation} The ${\rm SU(2)_L}$ gauge symmetry is spontaneously broken due to the non-vanishing vacuum expectation value $v$ of the Higgs field \begin{equation} \langle \Phi \rangle = \frac{v}{\sqrt2} \left( \begin{array}{c} 0\\1 \end{array} \right) \ , \end{equation} leading to the boson masses \begin{equation} M_W = M_Z =\frac{1}{2} g v \ , \ \ \ \ \ \ M_H = v \sqrt{2 \lambda} \ . \end{equation} We employ the values $M_W=80 {\rm GeV}$, $g=0.67$. For vanishing mixing angle, considering only fermion doublets degenerate in mass, the fermion lagrangian reads \begin{eqnarray} {\cal L}_{\rm f} & = & \bar q_{\rm L} i \gamma^\mu D_\mu q_{\rm L} + \bar q_{\rm R} i \gamma^\mu \partial_\mu q_{\rm R} \nonumber \\ & - & f^{(q)} \bar q_{\rm L} (\tilde \Phi u_{\rm R} + \Phi d_{\rm R}) - f^{(q)} (\bar d_{\rm R} \Phi^\dagger +\bar u_{\rm R} \tilde \Phi^\dagger) q_{\rm L} \ , \end{eqnarray} where $q_{\rm L}$ denotes the lefthanded doublet $(u_{\rm L},d_{\rm L})$, while $q_{\rm R}$ abbreviates the righthanded singlets $(u_{\rm R},d_{\rm R})$, with covariant derivative \begin{equation} D_\mu q_{\rm L} = \Bigl(\partial_{\mu} -\frac{i}{2}g \tau^a V_{\mu}^a \Bigr) q_{\rm L} \ , \end{equation} and with $\tilde \Phi = i \tau_2 \Phi^*$. The fermion mass is given by \begin{equation} M_F=\frac{1}{\sqrt{2}}f^{(q)} v \ . \end{equation} All gauge field configurations can be classified by a charge, the Chern-Simons charge. 
The Chern-Simons current \begin{equation} K_\mu=\frac{g^2}{16\pi^2}\varepsilon_{\mu\nu\rho\sigma} {\rm Tr}( {\cal F}^{\nu\rho} {\cal V}^\sigma + \frac{2}{3} i g {\cal V}^\nu {\cal V}^\rho {\cal V}^\sigma ) \ \end{equation} (${\cal F}_{\nu\rho} = 1/2 \tau^i F^i_{\nu\rho}$, ${\cal V}_\sigma = 1/2 \tau^i V^i_\sigma$) is not conserved; its divergence $\partial^\mu K_\mu$ represents the U(1) anomaly. The Chern-Simons charge of a configuration is given by \begin{equation} N_{\rm CS} = \int d^3r K^0 \ . \end{equation} For the vacua the Chern-Simons charge is identical to the integer winding number, while the barriers are characterized by a half integer Chern-Simons charge. \section{\bf Sphaleron Energy Barrier} The height of the barrier can be obtained by constructing families of field configurations for the gauge and Higgs fields, which interpolate smoothly from one vacuum to another as a function of the Chern-Simons charge. Each of these families of configurations has a maximal energy along such a path. By finding the minimal value of these maximal energies one obtains the height of the barrier, the sphaleron [4,5]. In the limit of vanishing mixing angle the general static, spherically symmetric ansatz for the gauge and Higgs fields is given by [18] \begin{eqnarray} \Phi & = & \frac{v}{\sqrt {2}} \Bigl(H(r) + i \vec \tau \cdot \hat r K(r)\Bigr) \left( \begin{array}{c} 0\\1 \end{array} \right) \ , \\ V_i^a & = & \frac{1-f_A(r)}{gr} \epsilon_{aij}\hat r_j + \frac{f_B(r)}{gr} (\delta_{ia}-\hat r_i \hat r_a) + \frac{f_C(r)}{gr} \hat r_i \hat r_a \ , \\ V_0^a & = & 0 \ , \end{eqnarray} and involves the five radial functions $H(r)$, $K(r)$, $f_A(r)$, $f_B(r)$ and $f_C(r)$.
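Before writing down the energy functional it is useful to fix the numerical scales set by the parameter values of Section 2: lengths are measured in $1/M_W$ and energies in units of $4\pi M_W/g^2$. A minimal sketch, assuming a barrier-height coefficient $B\approx 1.9$ for $M_H=M_W$ (a value taken from the sphaleron literature, not from this paper):

```python
import math

# Scales implied by the parameter values quoted in Section 2.
# The coefficient B ~ 1.9 for M_H = M_W is an assumed external input.
M_W, g = 80.0, 0.67                  # GeV; values used in the text

v = 2.0 * M_W / g                    # Higgs vev from M_W = g*v/2, ~ 239 GeV
E_unit = 4.0 * math.pi * M_W / g**2  # energy unit of the functional, ~ 2.24 TeV
E_sph = 1.9 * 4.0 * math.pi * v / g  # B * 4*pi*v/g with B ~ 1.9, ~ 8.5 TeV
```

With these parameters the energy unit is about 2.24 TeV, so the barrier height comes out near 8--9 TeV.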
This ansatz leads to the energy functional \begin{eqnarray} E & = & \frac{4\pi M_W}{g^2} \int^{\infty}_0 dx \Bigl[ \frac{1}{2x^2} (f^2_A + f^2_B -1)^2 + (f'_A + \frac{f_Bf_C}{x})^2 + (f'_B - \frac{f_Af_C}{x})^2 \nonumber \\ & + & (K^2+H^2) (1+f_A^2+f^2_B + \frac{f_C^2}{2})+2f_A (K^2-H^2) - 4f_B H K \nonumber \\ & + & 2x^2(H'^2+K'^2) - 2xf_C (K'H - KH') + \frac{4\lambda}{g^2} x^2 (H^2+K^2 -1)^2 \Bigr] \ , \end{eqnarray} where $x=M_Wr$, and to the Chern-Simons number \begin{equation} N_{\rm CS} = \frac{1}{2\pi} \int^{\infty}_0 \ dx \Bigl[ (f_A^2+f^2_B) (\frac{f_C}{x} - \varphi') - (\frac{f_C}{x} - \Theta') - \Bigl(\sqrt {(f_A^2+f^2_B)}\ \sin (\varphi - \Theta)\Bigr)'\ \Bigr] \ \end{equation} with $\varphi = {\rm arctan}({f_B}/{f_A})$. The function $\Theta(x)$ is an arbitrary radial function, associated with the residual gauge invariance of the ansatz (11)-(13). This gauge freedom can be used to eliminate one of the functions. Here we choose the radial gauge with the gauge condition $f_C(x)=0$. Let us now consider families of configurations which connect one vacuum ($N_{\rm CS}=0$) with another vacuum ($N_{\rm CS}=1$), passing the sphaleron ($N_{\rm CS}=1/2$). Note that the Chern-Simons number of the sphaleron is independent of the Higgs mass, $N_{\rm CS}=1/2$ [5]. For this purpose we extremize the functional [11,12] \begin{equation} W = E + \frac{8 \pi M_W}{g^2} \ \xi \ N_{\rm CS} \ , \end{equation} where $\xi$ is a Lagrange multiplier. The minimal energy path constructed accordingly for $M_H=M_W$ is shown in Fig.~1. This path is symmetric with respect to its top, the sphaleron. For large values of the Higgs mass additional, less symmetric sphaleron solutions, bisphalerons, appear [19,20]. The first bisphaleron takes over the role of the sphaleron. It represents the top of an asymmetric barrier [12,21], having a Chern-Simons number different from $1/2$. \section{\bf Fermion Equations} Let us now consider the fermions in the background of the sphaleron barrier.
To retain spherical symmetry we consider only fermion doublets degenerate in mass. {}From the fermion lagrangian (6) we obtain the eigenvalue equations for the lefthanded doublet \begin{equation} i D_0 q_{\rm L} + i \sigma^i D_i q_{\rm L} -f^{(q)} (\tilde \Phi u_{\rm R} + \Phi d_{\rm R} )=0 \ , \end{equation} and for the righthanded singlets \begin{equation} i \partial_0 \left( \begin{array}{c} u_{\rm R}\\d_{\rm R} \end{array} \right) -i \sigma^i \partial_i \left( \begin{array}{c} u_{\rm R}\\d_{\rm R} \end{array} \right) -f^{(q)} \left( \begin{array}{c} \tilde \Phi^\dagger q_{\rm L}\\ \Phi^\dagger q_{\rm L} \end{array} \right) =0 \ . \end{equation} Employing the spherically symmetric ansatz for the fermion eigenstates, the hedgehog ansatz, \begin{equation} q_{\rm L}(\vec r\,,t) = e^{-i\omega t} \bigl( G_{\rm L}(r) + i \vec \sigma \cdot \hat r F_{\rm L}(r) \bigr) \chi_{\rm h} \ , \end{equation} \begin{equation} q_{\rm R}(\vec r\,,t) = e^{-i\omega t} \bigl( G_{\rm R}(r) + i \vec \sigma \cdot \hat r F_{\rm R}(r) \bigr) \chi_{\rm h} \ , \end{equation} with the hedgehog spinor satisfying the spin-isospin relation $\vec \sigma \chi_{\rm h} + \vec \tau \chi_{\rm h} = 0 $, we obtain the following set of four coupled first order differential equations \begin{equation} \tilde \omega G_{\rm L} - F'_{\rm L} - \frac{2}{x}F_{\rm L} +\frac{1-f_A}{x} F_{\rm L} -\frac{f_B}{x} G_{\rm L} -\frac{f_C}{2x}G_{\rm L} -\tilde M_F(H G_{\rm R} + K F_{\rm R}) = 0 \ , \end{equation} \begin{equation} \tilde \omega F_{\rm L} + G'_{\rm L} +\frac{1-f_A}{x} G_{\rm L} +\frac{f_B}{x} F_{\rm L} -\frac{f_C}{2x}F_{\rm L} -\tilde M_F(H F_{\rm R} - K G_{\rm R}) = 0 \ , \end{equation} \begin{equation} \tilde \omega G_{\rm R} + F'_{\rm R} + \frac{2}{x}F_{\rm R} -\tilde M_F(H G_{\rm L} - K F_{\rm L}) = 0 \ , \end{equation} \begin{equation} \tilde \omega F_{\rm R} - G'_{\rm R} -\tilde M_F(H F_{\rm L} + K G_{\rm L}) = 0 \ , \end{equation} where $x$ is the dimensionless coordinate, $\tilde \omega$ is the
dimensionless eigenvalue $\tilde \omega = \omega /M_W$ and $\tilde M_F$ is the dimensionless fermion mass $\tilde M_F= M_F/M_W$. (Remember the gauge choice $f_C=0$.) The eigenvalue problem (21)-(24) for the fermions in a sphaleron-like background field requires certain boundary conditions for the fermion functions. At the origin $G_{\rm L}(x)$ and $G_{\rm R}(x)$ are finite, while $F_{\rm L}(x)$ and $F_{\rm R}(x)$ vanish; at spatial infinity all functions vanish. \section{\bf Results} In the background field of the sphaleron the fermions have a zero mode, i.~e.~a normalizable eigenstate with zero eigenvalue. In this case, the two functions $F_{\rm L}(x)$ and $F_{\rm R}(x)$ decouple and are identically equal to zero. When the fermion mass vanishes, $G_{\rm R}(x)$ also decouples and the zero mode can be given analytically [15-17]. The normalized eigenfunctions are shown in Fig.~2 for fermion masses of $M_f=80$ GeV, $8$ GeV and $0.8$ GeV. In the background field of the bisphalerons the fermions do not have a zero mode; in fact, the fermion eigenvalue depends on the Higgs mass [22]. When the fermion eigenvalue equations are solved in the background field of the sphaleron barrier, given by the minimal energy path discussed above, the level crossing phenomenon is observed. Since the barrier is symmetric about the sphaleron, the fermion eigenvalue $\omega$ is antisymmetric with respect to the sphaleron configuration. In Fig.~3 we show the fermion eigenvalue along the barrier for fermion masses of $M_f=80$ GeV, $8$ GeV and $0.8$ GeV. We observe that light fermions are bound only close to the top of the barrier, the sphaleron configuration, while heavy fermions remain bound along almost the full path over the barrier.
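The analytic structure of the zero mode can be checked directly from the radial equations: setting $\tilde\omega=0$, $\tilde M_F=0$ and $F_{\rm L}\equiv F_{\rm R}\equiv 0$ in eq. (22), with $f_C=0$ (the gauge chosen above) and assuming $f_B=0$ for the sphaleron, leaves $G'_{\rm L}=-[(1-f_A)/x]\,G_{\rm L}$, i.e. $G_{\rm L}(x)=\exp[-\int_0^x (1-f_A(x'))/x'\,dx']$. The sketch below integrates this with a purely illustrative profile $f_A=(1-x^2)/(1+x^2)$; the true sphaleron profile has no closed form and must be computed numerically:

```python
import math

# Assumed illustrative profile with f_A(0) = 1 and f_A -> -1 at infinity;
# NOT the actual sphaleron profile.
def f_A(x):
    return (1.0 - x * x) / (1.0 + x * x)

def G_L(x, n=20000):
    """Zero-mode profile exp(-int_0^x (1 - f_A)/x' dx'), by trapezoid rule."""
    h = x / n
    s = 0.0
    g_prev = 0.0                 # integrand -> 0 as x' -> 0 since f_A(0) = 1
    for i in range(1, n + 1):
        t = i * h
        g = (1.0 - f_A(t)) / t   # = 2t/(1+t^2) for this profile
        s += 0.5 * (g_prev + g) * h
        g_prev = g
    return math.exp(-s)

# For this profile the integral is ln(1+x^2), so G_L(x) = 1/(1+x^2) exactly.
for x in (0.5, 1.0, 3.0):
    assert abs(G_L(x) - 1.0 / (1.0 + x * x)) < 1e-4
```

The $x^{-2}$ falloff at large $x$ keeps $\int |G_{\rm L}|^2 x^2\,dx$ finite, as required for a normalizable bound state.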
Denoting by $M_f^{\rm cr}$ the fermion mass at which, for a given Chern-Simons number, the fermion bound state enters the continuum, this observation is also illustrated in Fig.~4, where the critical fermion mass $M_f^{\rm cr}$ is shown as a function of the Chern-Simons number $N_{\rm CS}$. At zero mass, fermions are bound only by the sphaleron.
\section{Introduction} We present a short overview of the extragalactic background radiation from radio to X-rays, with an eye to the relation to galaxy formation. The emphasis is on astrophysical (as opposed to cosmological) backgrounds. In particular, we will only cursorily deal with the cosmic microwave background (CMB), and only with reference to spectral distortions and anisotropies due to gas processes at the formation of collapsed structures. \section{The radio background} The extragalactic radio background has received little attention in the last 25 years, while the interest of radio astronomers has shifted on one side towards accurate measurements of the CMB and, on the other side, towards deeper and deeper surveys. In fact, it may well be that most of the extragalactic radio background has already been resolved into sources. Nevertheless, improved determinations of the intensity of the radio background are still of considerable interest, for at least three reasons: i) a significant, or even large, fraction of it may not come from discrete sources at all but from emission of diffuse intergalactic or pre-galactic material; ii) even if all the background is actually attributable to discrete sources, an indefinite fraction of them may be too faint to be detected even in the deepest surveys; iii) the integrated emission of extragalactic sources and of a pre- or inter-galactic medium constitutes a ``foreground'' signal which needs to be determined and subtracted out to infer the CMB spectrum. Unfortunately, its intensity is difficult to measure because it is swamped by the non-thermal radiation of our own Galaxy and/or by the CMB (the minimum sky brightness at 178 MHz is $\simeq 80\,$K, while that of the isotropic component is estimated to be in the range 15--37 K: Longair \& Sunyaev 1971). However, improving such estimates is possible even with the classic ``T-T plot'' method.
The wealth of data on source counts and related statistics at many frequencies (notably at 1.4 and 5 GHz where counts have been determined down to $\sim 10\mu$Jy) makes possible direct estimates of the contributions from extragalactic sources. At 178 MHz the result is $\simeq 22\,$K; $\simeq 60\%$ of the flux comes from radiogalaxies and quasars, the remaining part from sub-mJy radio sources, a large fraction of which is likely to be made up by active star-forming galaxies. At high frequencies, the radio background is dominated by compact sources, which are generally QSO's. Preliminary identifications of sources detected by the EGRET instrument on the Compton Observatory seem to indicate that compact radio quasars also dominate the extragalactic $\gamma$-ray sky (Bignami 1992). Their $\gamma$-ray to radio flux ratios are generally well above the corresponding ratio of background intensities [$(\nu I_\nu)_{{\rm bkg,}\, 100\,{\rm MeV}}/(\nu I_\nu)_{{\rm bkg,}\, 10\, {\rm GHz}}\simeq 5$] suggesting that this kind of sources may be the dominant contributors to the $\gamma$-ray background. \section{The far-IR/sub--mm background} \subsection{Comptonization distortions of the CMB} A significant sub-mm excess can be produced by comptonization of microwave photons off hot electrons. An order of magnitude estimate of CMB spectral distortions expected in different scenarios can be easily obtained by scaling the well known result that an intergalactic medium (IGM) having a kinetic energy density, referred to the present time, $\epsilon_{\rm IGM}(z=0) \approx 10^{-13}\, \hbox{erg}\,\hbox{cm}^{-3}$ ($H_0 = 50$), sufficient to produce the XRB, yields $y = \int(kT_e/mc^2)n_e \sigma_Tc\,dt \approx 10^{-2}$ (Field \& Perrenod 1977). Well known energy sources include: \begin{itemize} \item {\it Stellar nucleosynthesis}. The present average density of metals in observed galaxies is estimated to be (Songaila et al. 1990): $6\times 10^{-5} < (H_0/50)^2 \Omega < 2\times 10^{-4}$. 
If the average density of helium synthesized in stars is $\Omega_{He} \simeq 3\Omega_Z$, the energy density produced by stellar nucleosynthesis is $\epsilon_\star(z=0) \approx 2\hbox{--}6 \times 10^{-15}\, \hbox{erg}\,\hbox{cm}^{-3}$ ($z_\star \simeq 2$). Only a fraction of this contributes to heating of the IGM. \item {\it Binding energy}. The binding energy of baryons in galaxies is $\epsilon_{b}(z=0) \simeq \frac{1}{2} \rho_b v^2(1+z_{\rm coll})^{-1} \approx 2\times 10^{-18} [10/(1+z_{\rm coll})]\, \hbox{erg}\,\hbox{cm}^{-3}$. The energy density associated with larger scale structure is probably of the same order. \item {\it Nuclear activity}. The energy density produced by AGNs, estimated from B counts assuming a ratio of bolometric to B flux $\kappa = 30$ and an effective redshift $z_{\rm AGN} = 1.5$ is (Padovani et al. 1990): ${\epsilon_{\rm AGN}(z=0) \approx 5\times 10^{-16}\, \hbox{erg}\,\hbox{cm}^{-3}}$, about $1/6$ of which is in the form of ionizing photons. \end{itemize} Although all the above values are highly uncertain, it appears likely, also in view of the primordial nucleosynthesis constraint ($\Omega_b \lsim 0.1$), that the amount of energy that can have been released by well established energy sources to heat up the IGM is bounded by $\epsilon_{\rm IGM}(z=0) \lsim {\rm few} \times 10^{-15}\,\hbox{erg}\,\hbox{cm}^{-3}$, so that $y$ is expected to be smaller (and possibly much smaller) than ${\rm few} \times 10^{-4}$. Detailed calculations of comptonization distortions from gas heated during the formation of large scale structures have been carried out by Cen \& Ostriker (1992), Cavaliere et al. (1991), Markevitch et al. (1992), and others. Inhomogeneities of the IGM will translate into CMB {\it fluctuations} $\Delta T/T \sim y/\sqrt{N}$, $N$ being the effective number of blobs per beam. Detailed maps of fluctuations expected in the framework of self similar evolution of clusters of galaxies have been produced by Markevitch et al. (1992).
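The bound on $y$ quoted above follows from a one-line linear scaling of the Field \& Perrenod benchmark; the sketch below is only this order-of-magnitude bookkeeping, not a radiative-transfer calculation:

```python
# Benchmark from Field & Perrenod (1977), as quoted in the text: an IGM
# energy density of 1e-13 erg/cm^3 (enough to make the XRB) gives y ~ 1e-2,
# and y scales linearly with the injected energy density.
Y_BENCH, EPS_BENCH = 1e-2, 1e-13      # (y, erg/cm^3) benchmark pair

def y_compton(eps_igm):
    """Rough y for a given present-day IGM energy density in erg/cm^3."""
    return Y_BENCH * eps_igm / EPS_BENCH

# Energy budget of well-established sources: eps_IGM(z=0) <~ few x 1e-15
y_bound = y_compton(3e-15)            # ~ 3e-4, i.e. "few x 1e-4"
```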
Fluctuations $\delta T/T \sim 10^{-5}$ on scales $\theta \sim 2'$ may be expected from correlated motions of plasma concentrated in young galaxies or pre-galactic star clusters; anisotropies of similar or somewhat larger amplitude but on smaller scales ($\theta \sim 3\Omega\,$arc sec) can be produced by scattering of microwave photons by moving plasma in young galaxies (Peebles 1990). \subsection{Astrophysical backgrounds} A particularly intense far-IR/sub--mm background is predicted if a large fraction of the hard XRB comes from starburst galaxies (Griffiths \& Padovani 1990), owing to the relatively low efficiency of these sources in producing high energy photons. {\it Einstein Observatory} data indicate that they generally have $(\nu I_\nu)_{\rm 2 keV}/(\nu I_\nu)_{60 \mu{\rm m}} \lsim 10^{-4}$ (Fabbiano 1990). Thus models implying that they make up a large fraction of the hard XRB might face problems with the upper limits on the far-IR isotropic flux as well as with energy constraints following from estimates of the density of metals in galaxies (cf. De Zotti et al. 1991). Things turn out to be more or less right if the active starforming phases make up $\simeq 10\%$ of the XRB at 2 keV. The contribution at lower energies might well be significantly higher if these sources have the steep soft X-ray spectra indicated by ROSAT data (Boller et al. 1992). \medskip\noindent A substantial far-IR/sub-mm background is, however, expected also from galaxies directly detected in very deep optical and near-IR surveys. The integrated flux corresponding to direct counts in the wavelength range $\lambda = 3200$--10000 {\AA} is (Tyson 1990) $\nu I_\nu \simeq 10^{-9}\,\hbox{erg}\,\hbox{cm}^{-2}\, \hbox{s}^{-1}\,\hbox{deg}^{-2}$ i.e. $\simeq 5\times 10^{-3}(\nu I_\nu)_{{\rm CMBpeak}}$. The observed flattening of the counts suggests that the total contribution of galaxies is not much larger than that.
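The quoted ratio to the CMB peak can be checked numerically: the sketch below converts Tyson's integrated flux from per deg$^2$ to per steradian and evaluates $\nu B_\nu$ for a 2.73 K blackbody at its maximum, $x=h\nu/kT\approx 3.92$ (all constants in cgs):

```python
import math

# Compare Tyson's integrated optical flux with the peak of nu*I_nu for the CMB.
h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # Planck, light speed, Boltzmann (cgs)
T = 2.73                                    # CMB temperature, K

deg2_to_sr = (math.pi / 180.0) ** 2
I_gal = 1e-9 / deg2_to_sr                   # erg cm^-2 s^-1 sr^-1

# nu*B_nu(T) = (2 k^4 T^4 / h^3 c^2) * x^4/(e^x - 1), maximal near x ~ 3.92
x = 3.92
I_cmb_peak = (2 * k**4 * T**4 / (h**3 * c**2)) * x**4 / math.expm1(x)

ratio = I_gal / I_cmb_peak                  # ~ 4e-3, i.e. "a few x 1e-3"
```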
The local far-IR luminosity density of galaxies is about $1/3$ of the optical luminosity density (Saunders et al. 1990); substantial cosmological evolution in the far-IR is suggested by $60\,\mu$m IRAS counts (Hacking et al. 1987; Danese et al. 1987). All that boils down to an expected far-IR/sub-mm background due to directly observed galaxies $(\nu I_\nu)_g \simeq \hbox{few}\times 10^{-3}(\nu I_\nu)_{{\rm CMBpeak}}$, peaking at $\lambda \simeq 100(1+z_{\rm eff})\,\mu$m (Franceschini et al. 1991). \medskip\noindent The estimated energy density of known AGNs, $\epsilon_{\rm AGN} \approx 5\times 10^{-16}\,\hbox{erg}\,\hbox{cm}^{-3}$, is $\approx 10^{-3} \epsilon_{\rm CMB}$ and corresponds to a mass density of collapsed nuclei of $(H_0/50)^2\Omega_{\rm AGN} \simeq 3\times 10^{-6} (\kappa/30)(\eta/0.1)^{-1}$, where $\eta$ is the mass-energy conversion efficiency (Padovani et al. 1990). A similar mass density of dust-enshrouded AGNs accreting with the normally adopted efficiency $\eta \simeq 0.1$ would yield a far-IR background potentially detectable by COBE. Already available data on diffuse backgrounds rule out the possibility that the dark matter consists of black holes built up by accretion with such efficiency (see Bond et al. 1991).
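The quoted mass density of collapsed nuclei follows from the numbers above; a sketch assuming the energy was radiated at $z_{\rm eff}\simeq 1.5$ (so the present-day energy density is diluted by $1+z_{\rm eff}$) and $H_0=50\,$km s$^{-1}$ Mpc$^{-1}$:

```python
import math

# Reproduce Omega_AGN ~ 3e-6 (kappa/30)(eta/0.1)^-1 for H0 = 50 from
# eps_AGN(z=0) ~ 5e-16 erg/cm^3, following the scaling quoted in the text.
G, c = 6.674e-8, 2.998e10           # gravitational constant, light speed (cgs)
eps_0, z_eff, eta = 5e-16, 1.5, 0.1

H0 = 50 * 1e5 / 3.086e24            # 50 km/s/Mpc in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)

# Photon energies were (1+z) higher at emission; accreted mass = E/(eta c^2)
rho_acc = eps_0 * (1 + z_eff) / (eta * c**2)
Omega_AGN = rho_acc / rho_crit      # ~ 3e-6
```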
\medskip\noindent Measurements of the far-IR background spectrum would be informative about the birth of galaxies, their chemical and photometric evolution, the evolution of interstellar dust, and the average density of metals in the universe. On the other hand, spectral measurements alone will not identify the sources of the background; measurements of small scale anisotropies would provide complementary information (Bond et al. 1991; Peebles 1990). For example, surface brightness fluctuations would be suppressed if there is a large contribution from star clusters not (yet) bound in galaxies; also, if the far-IR background is dominated by thermal radiation from dust in galaxies, the autocorrelation function of the far-IR intensity fluctuations directly reflects the galaxy correlation function. \section{The near-IR/optical background} Measurements are difficult because of the intense foreground emissions. In fact, the observational situation is still unclear. The tightest upper limits are already only a factor of several above the expected contribution from galaxies and about a factor of 10 above the integrated light from direct counts. Claims of a substantially more intense background in some bands are not confirmed by more recent experiments (Noda et al. 1992). The flattening of the QSO counts at $B\simeq 19.5$ suggests that their contribution to the optical/near-IR background is small in comparison to that of galaxies. Predictions for a radiating highly ionized IGM at $T\sim 10^4\,$K are at least two orders of magnitude below the emission from galaxies (Paresce et al. 1980). Measurements of the autocorrelation function of optical and near-IR fluctuations have placed important constraints on the background intensity. A determination of the fluctuation spectrum would provide a test for the presence of sources other than galaxies, and measurements at different angular scales would probe the space distribution of sources (see Peebles 1990 and references therein).
\section{The UV background} Its magnitude has been the subject of dispute for many years (see Bowyer 1991; Henry 1991). The first piece of evidence for an intense diffuse ionizing flux, and still one of the strongest, is the Gunn-Peterson effect. The lack of continuous absorption shortward of the Ly$\alpha$ line in distant QSOs implies that the IGM must be highly ionized up to $z\simeq 4.9$ (Schneider et al. 1991). The ``proximity'' or ``inverse'' effect (the decrease in the counted number of intergalactic Ly$\alpha$-absorbing clouds in the vicinity of luminous, UV-emitting quasars) gives an independent estimate of the ionizing background intensity, $J_{912} \sim 10^{-21\pm 0.5}\,\hbox{erg}\,\hbox{cm}^{-2}\,\hbox{s}^{-1} \,\hbox{Hz}^{-1}\,\hbox{sr}^{-1}$ at the hydrogen Lyman edge (912 \AA), approximately independent of redshift for $1.7 < z < 3.8$ (Lu et al. 1991). Measurements of the autocorrelation function of intensity fluctuations in the 1400--1900 {\AA} band (Martin \& Bowyer 1989) have made it possible to derive tight upper limits on the background flux. {\it Young galaxies} could easily produce the UV flux up to 4 Ryd, if enough radiation can escape from the galaxy (a galaxy undergoing a burst of star formation is obviously expected to be gas-rich) and a significant fraction of the metal abundance is produced before $z\simeq 3$ (Miralda-Escud\'e \& Ostriker 1990 and references therein). Also, Steidel \& Sargent (1989) have argued that a harder UV spectrum than that produced by hot main sequence stars is required to account for the observed ionization state of heavy element absorption systems; on the other hand, this ionization state may also be affected by local sources of radiation or by collisional ionization (Miralda-Escud\'e \& Ostriker 1990). The integrated UV background from {\it observed QSOs} as a function of redshift has been discussed by many authors (Madau 1992, and references therein).
The general conclusion is that optically selected QSOs probably fail to emit sufficient ionizing flux to account for the ionization level implied by the Gunn-Peterson test and for the proximity effect, particularly at $z\gsim 2.5$ where a decline in the comoving space density is suggested. AGNs could nevertheless be the sources of the UV background. The missing contribution may come either from quasars not seen because of dust obscuration by intervening galaxies (Miralda-Escud\'e \& Ostriker 1990) or from a new class of AGNs, such as the reflection dominated ones proposed to be the source of the hard XRB (Fabian et al. 1990). The model discussed by Fabian et al. (1990) predicts $J_{912} \sim 10^{-21}\,\hbox{erg}\,\hbox{cm}^{-2}\,\hbox{s}^{-1} \,\hbox{Hz}^{-1}\,\hbox{sr}^{-1}$ at $z\simeq 2$--3. {\it Protogalactic shock radiation\/} appears to be insufficient by a large factor to account for the required photoionization (Shapiro \& Giroux 1989). {\it Decaying 'inos\/} as possible sources of the UV background are discussed e.g. by Field \& Walker (1989). \section{The X-ray background (XRB)} \subsection{XRB constraints on evolution of the AGN population} About 50\% of the 1--2 keV XRB has been resolved into discrete sources in the deepest ROSAT field (Hasinger 1992), implying that X-ray surveys are directly seeing almost the entire evolution history of X-ray sources and in particular of AGNs. However, the situation is still somewhat confusing, mostly because data from different X-ray bands yield apparently contradictory results (see Franceschini et al. 1992 for additional details and references). \begin{itemize} \item HEAO-1 and {\it Ginga} fluctuation analyses indicate a normalization of source counts a factor of about 3 above that derived from the {\it Einstein Observatory} EMSS, a result confirmed by direct {\it Ginga} counts.
\item The energy spectrum of fluctuations in the {\it Ginga} band (4--12 keV) is consistent with the ``canonical'' AGN slope $\alpha \simeq 0.7$ (but beware of the effect of clusters!) and substantial photoelectric absorption ($N_H \sim 10^{22}\,{\rm cm}^{-2}$). \item On the other hand, soft X-ray selected AGNs rather show steep spectral indices ($\alpha \simeq 1$--1.3) and no significant absorption down to faint flux limits. Also, there is a distinct absence, at faint X-ray fluxes, of the low luminosity AGNs which dominate the HEAO-1 A2 local luminosity function (LLF) by Piccinotti et al. (1982). \end{itemize} Allowing for the presence of different components (a self absorbed power law plus a soft excess plus a high energy bump) with a broad distribution of spectral parameters is not enough to account for all the data (the most critical being the mean spectral index of AGNs detected in deep ROSAT surveys), as long as we assume: i) continuous distributions and ii) no spectral evolution. A consistent picture is obtained by considering {\it two} AGN populations (see Franceschini et al. 1992 for details). \begin{itemize} \item A {\it soft X-ray spectrum population}, contributing $\lsim 30\%$ of local hard X-ray selected AGNs (Piccinotti et al. 1982), of relatively high luminosity, strongly evolving, as indicated by soft X-ray counts, with X-ray spectra similar to those of optically selected quasars. \item A {\it hard X-ray spectrum population}, contributing most of the AGNs in the Piccinotti survey, having a ``canonical'' spectral index, strong self absorption ($N_H \sim 10^{22}\,{\rm cm}^{-2}$) plus a high energy bump, and relatively low mean luminosity. Since no more than $\simeq 10$--15\% of EMSS AGNs and only a few ROSAT AGNs can belong to this class, they should either be evolving very slowly (if at all) or be characterized by a spectral evolution essentially counterbalancing, in soft X-ray bands, their luminosity/density evolution.
The latter possibility is favoured by fluctuation analyses; evolution is required if these sources have to account for the XRB intensity above 3 keV. \end{itemize} {\it Soft}-spectrum AGNs should give a minor contribution to the XRB above 3 keV (the model discussed by Franceschini et al. 1992 yields 23\% at 3 keV) and might also not saturate the XRB intensity in soft bands. In fact, at least in the framework of luminosity evolution models, the decline of their local luminosity function at low luminosities, indicated by {\it Einstein Observatory} Medium Sensitivity Survey data, entails a correspondingly fast convergence of the counts and an integrated flux at 1 keV $\simeq 30$--50\% of the XRB intensity estimated from ROSAT data (cf. Franceschini et al. 1992). Additional contributions from {\it soft} X-ray sources (IGM in groups and clusters, ASF galaxies...) may be required. {\it Hard}-spectrum AGNs are essentially invisible below a few keV. They could account for the XRB at higher energies if both their luminosity and their absorbing columns evolve. A third AGN population characterized by extreme absorbing columns ($N_H \sim 10^{24}\hbox{--}10^{25}\,{\rm cm}^{-2}$; Setti \& Woltjer 1989) or by a reflection spectrum (Fabian et al. 1990) could be the dominant contributor to the XRB above 3 keV. Optically selected Seyfert 2 galaxies may have the necessary properties. This population, however, is not represented in the Piccinotti LLF and apparently cannot give a large contribution to fluctuations measured by {\it Ginga} [Tanaka (1992) quotes an upper limit of $\sim 20\%$ to this contribution down to $S_{\rm 2-10 keV} = 2\times 10^{-13}\,\hbox{erg}\,\hbox{cm}^{-2}\, \hbox{s}^{-1}$], implying a very low local volume emissivity, so that extreme evolution is needed to produce a significant fraction of the XRB.
\subsection{XRB constraints on large scale structure} If the minimum angle, $\theta_{min}(r_0)$, subtended by the maximum scale of significant clustering is larger than the angular scale of observations, the autocorrelation function $W(\theta)$ scales as (De Zotti et al. 1990) $W(\theta) \propto \theta^{1-\gamma} r_0^\gamma f^2$, where $r_0$ is the ``clustering radius'' at the present time ($\xi(r) = (r/r_0)^{-\gamma}$) and $f$ is the fraction of the residual background (after subtraction of the resolved source contribution) produced by the considered population. If, on the other hand, $\theta_{min}(r_0) \ll \theta$, most of the contribution to the observed ACF comes from local sources, which provide only a minor fraction of the XRB; the above equation still holds but $f$ has to be replaced by the fraction of the XRB volume emissivity made up by the considered sources: $j_{\rm sources}/j_{\rm XRB}$. Particularly tight constraints are obtained for AGNs, whose local volume emissivity is $\simeq 20\%$ of $j_{\rm XRB}$ (Barcons \& Fabian 1989; Mart{\'\i}n-Mirones et al. 1991; Carrera \& Barcons 1992). Several analyses of the clustering properties of optically selected quasars consistently indicate a 2-point correlation function consistent with the $-1.8$ power-law form derived for galaxies, with a scale length $r_0 = (14 \pm 3)(50/H_0)\,$Mpc at $z \simeq 1.4$ (Andreani et al. 1991, and references therein). The availability of large quasar samples has recently made it possible to investigate the cosmological evolution of their clustering. Such evolution is usually modelled as $\xi(r,z) = (r/r_0)^{-\gamma}(1+z)^{-(3+\epsilon)}$, where $\epsilon = 0$ corresponds to ``stable clustering'', $\epsilon = \gamma -3$ or $\epsilon = -3$ to a clustering radius constant in comoving or in physical coordinates, respectively, and $\epsilon = \gamma -1$ to linear growth for $q_0=0.5$.
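The special values of $\epsilon$ listed above can be made concrete with a small numerical sketch (ours, not taken from the cited papers; the convention $r_{\rm com} = r(1+z)$ relating proper and comoving separations is an assumption of the sketch): with $\epsilon = \gamma - 3$ the correlation function is frozen at fixed comoving separation, while $\epsilon = -3$ freezes it at fixed proper separation.

```python
# Sketch of the clustering-evolution model xi(r, z) = (r/r0)^(-gamma) * (1+z)^(-(3+epsilon)).
# Conventions assumed here (not spelled out in the text): r is a proper separation,
# and the comoving separation is r_com = r * (1 + z).

GAMMA = 1.8   # slope of the two-point correlation function
R0 = 14.0     # present-day clustering radius in Mpc (Andreani et al. 1991 value)

def xi(r_proper, z, epsilon, gamma=GAMMA, r0=R0):
    """Evolving two-point correlation function at proper separation r_proper (Mpc)."""
    return (r_proper / r0) ** (-gamma) * (1.0 + z) ** (-(3.0 + epsilon))

def xi_comoving(r_com, z, epsilon):
    """The same xi evaluated at fixed comoving separation r_com (Mpc)."""
    return xi(r_com / (1.0 + z), z, epsilon)

# epsilon = gamma - 3 (= -1.2 for gamma = 1.8): clustering constant in comoving coordinates.
eps_comoving = GAMMA - 3.0
# epsilon = -3: clustering constant in physical (proper) coordinates.
eps_physical = -3.0
```

For instance, `xi_comoving(10.0, z, eps_comoving)` is independent of $z$, and `xi(5.0, z, eps_physical)` likewise, which is exactly the sense in which these two models keep the clustering radius constant.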
Recent studies (Andreani \& Cristiani 1992) support the ``comoving'' model ($\epsilon = \gamma -3 \simeq - 1.2$), although stable clustering, favoured by some earlier analyses, cannot yet be ruled out. But sources with $r_0 > 10\,$Mpc and $\epsilon = -1.2$ would produce an X-ray ACF exceeding the small scale ROSAT limits (Hasinger 1992), if their contribution to the XRB exceeds $\simeq 30\%$. Sources with $r_0 \geq 10\,$Mpc can account for $\geq 50\%$ of the XRB only if $\epsilon \geq 0$. Similar constraints apply to Active Star-forming Galaxies. If their clustering radius is equal to that of normal galaxies ($r_0 \simeq 10\,$Mpc), their contribution to the soft XRB cannot exceed $\simeq 30\%$ if $\epsilon \leq -1.2$ and is $\lsim 45\%$ if $\epsilon = 0$. Mushotzky \& Jahoda (1992) report the detection (99\% confidence) of a positive ACF at scales $\sim 6^\circ$--$20^\circ$ with $W(\theta) \sim 3\times 10^{-5}$. Based on the LLFs by Piccinotti et al. (1982), Danese et al. (1992) find that rich clusters with $r_0 = 50\,$Mpc yield $W(6^\circ) \simeq 10^{-5}$, while AGNs with $r_0 = 20\,$Mpc give $W(6^\circ) \simeq 3\times 10^{-5}$. Any class of sources distributed like normal galaxies ($r_0\simeq 10\,$Mpc) must have a local volume emissivity $j_{\rm gal} \lsim 0.4 j_{\rm XRB}$. \medskip\noindent {\it Acknowledgements.} GDZ wishes to thank the organizers for having allowed him to attend this very successful workshop and the Pontifical Academy of Sciences for the very warm hospitality. Work supported in part by ASI and CNR. \medskip \centerline{\bf References} \def\noindent\hangindent=20pt\hangafter=1{\noindent\hangindent=20pt\hangafter=1} \noindent\hangindent=20pt\hangafter=1 Andreani, P., Cristiani, S., \& La Franca, F. 1991, MNRAS, 253, 527 \noindent\hangindent=20pt\hangafter=1 Andreani, P., \& Cristiani, S. 1992, ApJ, 398, L13 \noindent\hangindent=20pt\hangafter=1 Barcons, X., \& Fabian, A.C.
1989, MNRAS, 237, 119 \noindent\hangindent=20pt\hangafter=1 Bignami, G.F. 1992, Nature, 360, 416 \noindent\hangindent=20pt\hangafter=1 Boller, Th., Meurs, E.J.A., Brinkmann, W., Fink, H., Zimmermann, U., \& Adorf, H.-M. 1992, A\& A, 261, 57 \noindent\hangindent=20pt\hangafter=1 Bond, J.R., Carr, B.J., \& Hogan, C.J. 1991, ApJ, 367, 420 \noindent\hangindent=20pt\hangafter=1 Bowyer, S. 1991, ARAA, 29, 59 \noindent\hangindent=20pt\hangafter=1 Carrera, F.J., \& Barcons, X. 1992, MNRAS, 257, 507 \noindent\hangindent=20pt\hangafter=1 Cavaliere, A., Menci, N., \& Setti, G. 1991, A\& A, 245, 59 \noindent\hangindent=20pt\hangafter=1 Cen, R., \& Ostriker, J. 1992, ApJ, 393, 22 \noindent\hangindent=20pt\hangafter=1 Danese, L., De Zotti, G., Franceschini, A., \& Toffolatti, L. 1987, ApJ, 318, L15 \noindent\hangindent=20pt\hangafter=1 Danese, L., De Zotti, G., \& Andreani, P. 1992, in The X-ray Background, Cambridge Univ. Press, p. 61 \noindent\hangindent=20pt\hangafter=1 De Zotti, G., Persic, M., Franceschini, A., Danese, L., Palumbo, G.G.C., Boldt, E.A., \& Marshall, F.E. 1990, ApJ, 351, 22 \noindent\hangindent=20pt\hangafter=1 De Zotti, G., Mart{\`\i}n-Mirones, J.M., Franceschini, A., \& Danese, L. 1991, in Proc. XI Moriond Astrophysics Meeting ``The Early Observable Universe from Diffuse Backgrounds'', p. 31 \noindent\hangindent=20pt\hangafter=1 Fabbiano, G. 1990, ARAA, 27, 87 \noindent\hangindent=20pt\hangafter=1 Fabian, A.C., George, I.M., Miyoshi, S., \& Rees, M. 1990, MNRAS, 242, 14P \noindent\hangindent=20pt\hangafter=1 Field, G.B., \& Perrenod, S.C. 1977, ApJ, 215, 717 \noindent\hangindent=20pt\hangafter=1 Field, G.B., \& Walker, T.P. 1989, Phys. Rev. Lett. 63, 117 \noindent\hangindent=20pt\hangafter=1 Franceschini, A., Mart{\'\i}n-Mirones, J.M., Danese, L., \& De~Zotti, G. 1992, MNRAS, submitted \noindent\hangindent=20pt\hangafter=1 Franceschini, A., Toffolatti, L., Mazzei, P., Danese, L., \& De~Zotti, G. 1991, A\& A Suppl. 
89, 285 \noindent\hangindent=20pt\hangafter=1 Griffiths, R.E., \& Padovani, P. 1990, ApJ, 360, 483 \noindent\hangindent=20pt\hangafter=1 Hacking, P.B., Condon, J.J., \& Houck, J.R. 1987, ApJ, 316, L15 \noindent\hangindent=20pt\hangafter=1 Hasinger, G. 1992, in X-ray Emission from Active Galactic Nuclei and the Cosmic X-ray Background, MPE Report 235, p. 321 \noindent\hangindent=20pt\hangafter=1 Henry, R.C. 1991, ARAA, 29, 89 \noindent\hangindent=20pt\hangafter=1 Longair, M.S., \& Sunyaev, R.A. 1971, Usp. Fiz. Nauk, 105, 41 (Sov. Phys. Usp. 14, 569, 1972) \noindent\hangindent=20pt\hangafter=1 Lu, L., Wolfe, A.M., \& Turnshek, D.A. 1991, ApJ, 367, 19 \noindent\hangindent=20pt\hangafter=1 Madau, P. 1992, ApJ, 389, L1 \noindent\hangindent=20pt\hangafter=1 Markevitch, M., Blumenthal, G.R., Forman, W., Jones, C., \& Sunyaev, R.A. 1992, ApJ, 395, 326 \noindent\hangindent=20pt\hangafter=1 Martin, C., \& Bowyer, S. 1989, ApJ, 338, 677 \noindent\hangindent=20pt\hangafter=1 Mart{\'\i}n-Mirones, J.M., De~Zotti, G., Boldt, E.A., Marshall, F.E., Danese, L., Franceschini, A., \& Persic, M. 1991, ApJ, 379, 507 \noindent\hangindent=20pt\hangafter=1 Miralda-Escud\'e, J., \& Ostriker, J.P. 1990, ApJ, 350, 1 \noindent\hangindent=20pt\hangafter=1 Mushotzky, R., \& Jahoda, K. 1992, in The X-ray Background, Cambridge Univ. Press, p. 80 \noindent\hangindent=20pt\hangafter=1 Noda, M., et al. 1992, ApJ, 391, 456 \noindent\hangindent=20pt\hangafter=1 Padovani, P., Burg, R., \& Edelson, R.A. 1990, ApJ, 353, 438 \noindent\hangindent=20pt\hangafter=1 Paresce, F., McKee, C., \& Bowyer, S. 1980, ApJ, 240, 387 \noindent\hangindent=20pt\hangafter=1 Peebles, P.J.E. 1990, in Proc. IAU Symp. No. 139 ``The Galactic and Extragalactic Background Radiation'', p. 295 \noindent\hangindent=20pt\hangafter=1 Piccinotti, G., Mushotzky, R.F., Boldt, E.A., Holt, S.S., Marshall, F.E., Serlemitsos, P.J., \& Shafer, R.A. 
1982, ApJ, 253, 485 \noindent\hangindent=20pt\hangafter=1 Saunders, W., Rowan-Robinson, M., Lawrence, A., Efstathiou, G., Kaiser, N., Ellis, R.S., \& Frenk, C.S. 1990, MNRAS, 242, 318 \noindent\hangindent=20pt\hangafter=1 Schneider, D.P., Schmidt, M., \& Gunn, J.E. 1991, ApJ, 306, 411 \noindent\hangindent=20pt\hangafter=1 Setti, G., \& Woltjer, L. 1989, A\& A, 224, L21 \noindent\hangindent=20pt\hangafter=1 Shapiro, P.R., \& Giroux, M.L. 1989, in The Epoch of Galaxy Formation, Kluwer, p. 153 \noindent\hangindent=20pt\hangafter=1 Songaila, A., Cowie, L.L., \& Lillie, S.J. 1990, ApJ, 348, 371 \noindent\hangindent=20pt\hangafter=1 Steidel, C.C., \& Sargent, W.L.W. 1989, ApJ, 343, L33 \noindent\hangindent=20pt\hangafter=1 Tanaka, Y. 1992, in X-ray Emission from Active Galactic Nuclei and the Cosmic X-ray Background, MPE Report 235, p. 303 \noindent\hangindent=20pt\hangafter=1 Tyson, J.A. 1990, in Proc. IAU Symp. No. 139 ``The Galactic and Extragalactic Background Radiation'', p. 245 \end{document}
\section{Introduction and summary} The discovery of the top quark has been anticipated for many years at accelerators of increasing energy. Present hopes are based on analyses of high precision data and the standard theory, see \cite{Rolandi}. The top is the first heavy quark whose mass can be measured to better than 1\% precision at a future $e^+e^-$ collider. Therefore, measurements of its width will not only test the standard model at the Born level, but also the QCD radiative corrections, which are of order 10\% \cite{JK1}~. This is in contrast to $b$ and $c$ quarks, where uncertainties in the masses and non-perturbative effects preclude this possibility. Recently, the complete one loop electroweak corrections to the total rate have also been calculated \cite{DS,Gad}~, and turned out to be rather small (1-2\%)~. Nevertheless, it has been claimed \cite{DS,Gad} that a precise measurement of the top width may serve as a consistency check for the electroweak sector of the standard model. In fact a number of calculations have been performed studying electroweak effects on the top width in theories extending the standard model \cite{GH}~. In particular it has been found that the additional corrections from the extended Higgs sector of the minimal supersymmetric standard model are significantly smaller than 1\%. In this article we give the standard model predictions for the top quark width. Our results are different from those in \cite{DS,Gad} because we include the effect of the $W$ boson width considered in \cite{JK1} and neglected in later works. This effect is comparable in size to the electroweak corrections. A number of intrinsic uncertainties remains. The present uncertainty in $\alpha_s$ and the ignorance concerning the QCD correction of order ${\cal O}({\alpha_s}^2)$ limit the accuracy of the prediction to about 1-2\%. One also has to take into account the errors, both experimental and theoretical, in the determination of the top mass.
At present the best place for a precise determination of $\Gamma_t$ is believed to be the threshold region for $t\bar t$ production in $e^+e^-$ annihilation. The most optimistic current estimate of the relative precision is 5\% \cite{Fujii}~. Therefore, it is mandatory to provide a theoretical prediction which, like the one presented in this article, is accurate to about 1\%. \section{QCD corrected decay rate} We assume throughout three families of quarks. Thus the effects of CKM mixing are negligible. The QCD corrected width of the top quark is given by the following formula \cite{JK1}: \begin{eqnarray} \Gamma^{(1)} = {{{\rm G}_F}^2 {m_t}^5\over 192\pi^3} \left( 9 + 6{\alpha_s\over\pi}\right) \int^{(1-\epsilon)^2}_0 {{\rm d}y\over (1-y/\bar y)^2+\gamma^2} \left[ {\rm F}_0(y,\epsilon) - {2\alpha_s\over 3\pi} {\rm F}_1(y,\epsilon) \right] \label{eq:1} \end{eqnarray} where $$\bar y= \left( M_W/m_t\right)^2\ ,\qquad\epsilon= m_b/m_t\ , \qquad\gamma=\Gamma_{W}/M_W$$ and \begin{equation} \Gamma_{W}= {{\rm G}_F {M_W}^3\over 6\sqrt{2}\pi} \left( 9 + 6 {\alpha_s\over\pi}\right) \label{eq:2} \end{equation} The functions ${\rm F}_0(y,\epsilon)$ and ${\rm F}_1(y,\epsilon)$ read \footnote{We slightly simplify an original formula from \cite{JK1} using relations between dilogarithms.} \def\Alambd{\lambda(1,y,\epsilon^2)} \def\UQ{u_q} \begin{equation} {\rm F}_0(y,\epsilon) = {1\over 2}\sqrt{\Alambd}\,{\cal C}_0(y,\epsilon) \label{eq:3} \end{equation} where \begin{equation} \lambda(u,v,w) = u^2+v^2+w^2- 2(uv+vw+wu) \label{eq:4} \end{equation} \begin{equation} {\cal C}_0(y,\epsilon) = 4[(1-\epsilon^2)^2+y(1+\epsilon^2)-2y^2] \quad, \label{eq:5} \end{equation} and \begin{eqnarray} {\rm F}_1(y,\epsilon)= \frac{1}{2}{\cal C}_0(y,\epsilon)(1+\epsilon^2-y) \left[ 2\pi^2/3 +4{\rm Li_2}\,(u_w) -4{\rm Li_2}\,(u_q) \right. \nonumber\\ \left.
-4{\rm Li_2}\,(\UQu_w) -4\lnu_q\ln(1-u_q)-2\lnu_w\lnu_q+\ln{y}\lnu_q +2\ln\epsilon\lnu_w \right] \nonumber\\ -2{\rm F}_0(y,\epsilon) \left[ \ln{y}+3\ln\epsilon-2\ln\lambda(1,y,\epsilon^2) \right] \nonumber\\ +4(1-\epsilon^2)\left[ (1-\epsilon^2)^2 +y(1+\epsilon^2)-4y^2 \right] \lnu_w \nonumber\\ +\left[ 3-\epsilon^2+11\epsilon^4-\epsilon^6+ y(6-12\epsilon^2+2\epsilon^4) - y^2(21+5\epsilon^2)+12y^3 \right]\lnu_q \nonumber\\ + 6\sqrt{\Alambd}(1-\epsilon^2)(1+\epsilon^2-y)\ln\epsilon \nonumber\\ +\sqrt{\Alambd}\left[ -5+22\epsilon^2 -5\epsilon^4- 9y(1+\epsilon^2)+6y^2\right] \nonumber\\ \label{eq:6} \end{eqnarray} where \begin{equation} u_q= {1+ \epsilon^2 -y -\sqrt{\Alambd}\over 1+ \epsilon^2 -y +\sqrt{\Alambd}} \label{eq:7} \end{equation} \begin{equation} u_w= {1- \epsilon^2 +y -\sqrt{\Alambd}\over 1- \epsilon^2 +y +\sqrt{\Alambd}} \label{eq:8} \end{equation} Above threshold for real W production the rate (1) can be approximated by: \begin{equation} \Gamma^{(1)}_{nw} = {{{\rm G}_F} {m_t}^3\over 16\sqrt{2}\pi} \left[ {\rm F}_0(\bar y,\epsilon) - {2\alpha_s\over 3\pi} {\rm F}_1(\bar y,\epsilon) \right]\quad, \label{eq:9} \end{equation} a result valid in the narrow width approximation. Neglecting $\epsilon$ one arrives at the following relatively compact expressions: \begin{equation} {\rm F}_0(y,0) = 2(1-y)^2 (1+2y) \label{eq:10} \end{equation} and\footnote{This form clearly exhibits limiting behavior $$ f(y) = {2\pi^2\over3} -{5\over2} -3y(1+y\ln y)+\dots $$ for small y, and $$ f(y) = 3\ln(1-y) +{4\pi^2\over 3}-{9\over 2}+\dots $$ for $y\to 1^-$~. 
Although stated in the text, these limits are not manifest in the original formula given in \cite{JK1}.}: \begin{eqnarray} f(y) = {\rm F}_1(y,0)/{\rm F}_0(y,0) = {2\pi^2\over3}- {5\over 2}+2\ln y\,\ln(1-y)+4{\rm Li_2}\, y -2y + \nonumber\\ {1\over1+2y} \left[ (5+4y)\ln(1-y) +{2y\ln y\over1-y} -{4y^3(1-y+\ln y)\over(1-y)^2 } \right] \nonumber\\ \label{eq:11} \end{eqnarray} The formula (1) has been derived in \cite{JK1} and tested in \cite{JK3,JK4}~. When applied to charm decays, i.e. in the four fermion limit, it reproduces the numerical results for the total rate \cite{CM}~. The formulae (3-6) including the $b$ quark mass corrections have been tested by a numerical calculation in \cite{JK3}. Although performed by the same authors, this calculation should be considered an independent one since it was based on a completely different technique and matrix elements equivalent to those derived in the classic papers on muon decays \cite{muon} in a form adopted in \cite{AP} for charm decays. Furthermore, we have observed that these formulae after an appropriate analytical continuation are equivalent to formulae in \cite{CGN} describing vacuum polarization effects from heavy quarks in the W boson propagator. \\ Independent calculations including non-zero $b$ quark mass have been performed in \cite{DS} and \cite{Gad}. The authors found a numerical agreement of their results with the formulae (3-6). The massless limit, eqs. (10-11), derived in \cite{JK1} was rederived and confirmed by a number of groups \cite{Czarnecki}-\cite{LY}. We proceed now to the discussion of the numerical predictions for the decay rate and the quality of different approximations.
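As a rough numerical cross-check of eqs. (3)-(5) and of the Born limit of eq. (9), the sketch below (ours, not the authors' code) evaluates $\Gamma^{(0)}_{nw}$ with the $b$ quark mass retained; the inputs $M_W = 80.10$ GeV and $m_b = 4.7$ GeV are those quoted in the next section, while ${\rm G}_F = 1.16637\times 10^{-5}\ {\rm GeV}^{-2}$ is our assumption, since the text does not quote a value.

```python
import math

MW = 80.10        # GeV, W boson mass (input quoted in the text)
MB = 4.7          # GeV, b quark mass (input quoted in the text)
GF = 1.16637e-5   # GeV^-2, Fermi constant -- assumed value, not quoted in the text

def lam(u, v, w):
    """Kaellen function of eq. (4)."""
    return u*u + v*v + w*w - 2.0*(u*v + v*w + w*u)

def F0(y, eps):
    """Eq. (3) with C0 of eq. (5)."""
    C0 = 4.0 * ((1.0 - eps**2)**2 + y*(1.0 + eps**2) - 2.0*y**2)
    return 0.5 * math.sqrt(lam(1.0, y, eps**2)) * C0

def gamma0_nw(mt):
    """Born top width (GeV) in the narrow-width approximation, eq. (9) at alpha_s = 0."""
    y = (MW / mt)**2
    eps = MB / mt
    return GF * mt**3 / (16.0 * math.sqrt(2.0) * math.pi) * F0(y, eps)
```

For $m_t = 150$ GeV this gives $\simeq 0.885$ GeV, in agreement with the $\Gamma^{(0)}_{nw}$ column of Table 1; setting $\epsilon = 0$ reproduces eq. (10), ${\rm F}_0(y,0) = 2(1-y)^2(1+2y)$.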
As our input we use:\\ $M_W = 80.10$ GeV \cite{Rolandi}, $m_b = 4.7$ GeV, $\alpha_s(M_{Z}) = .118 \pm .007$ \cite{Altarelli} and $M_Z = 91.187$ GeV \cite{Rolandi}.\\ Then $\alpha_s(m_{t})${} is derived from the formula \begin{eqnarray} &\alpha_s(Q) = {4\pi\over b_0 \ln Q^2/\Lambda^2} \left[ 1 - {b_1\over {b_0}^2} {\ln\ln Q^2/\Lambda^2\over \ln Q^2/\Lambda^2} \right] \\ \label{eq:12} & b_0 = 11 - {2\over 3}N_f , \qquad b_1 = 102 - {38\over 3}N_f \nonumber \end{eqnarray} for $N_f$=5 quark flavours. Uncertainties in the input value of $\alpha_s(M_{Z})${} as well as the second order corrections ${\cal O}({\alpha_s}^2)$~, which have not been calculated yet, lead to an error which we estimate to be of order 1\%. In Table 1 we give our results for the widths obtained from different approximations as well as from the formula (1). Since most other authors present their results in comparison with the zeroth-order result \gamud{(0)}{nw} obtained in the narrow width approximation, we define \begin{equation} \delta^{(i)} = \Gamma^{(i)}/\Gamma^{(0)}_{nw} - 1 \label{eq:13} \end{equation} where $i = 0,1$ corresponds to the Born and the QCD corrected rate respectively, and the widths in the numerators include the effects of the W propagator, cf. eq. (1). Analogously we define \delud{(1)}{nw} which is given by the ratio of the QCD corrected and the Born widths, both evaluated in the narrow width approximation, and \delud{(1)}{nw}$(0)$ for massless $b$ quark. 
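The running can be checked with a short numerical sketch (ours, not the authors' code): fix $\Lambda$ by requiring $\alpha_s(M_Z) = 0.118$ in the two-loop formula above with $N_f = 5$, then evaluate at $Q = m_t$; this reproduces the $\alpha_s(m_t)$ column of Table 1 to the three digits quoted.

```python
import math

NF = 5
B0 = 11.0 - 2.0/3.0*NF      # = 23/3
B1 = 102.0 - 38.0/3.0*NF    # = 116/3
MZ = 91.187                 # GeV

def alpha_s(Q, Lam):
    """Two-loop running coupling of the formula above; Q and Lam in GeV."""
    L = math.log(Q**2 / Lam**2)
    return 4.0*math.pi/(B0*L) * (1.0 - (B1/B0**2)*math.log(L)/L)

def solve_lambda(alpha_mz=0.118):
    """Bisect for Lambda (GeV); alpha_s grows with Lambda at fixed Q."""
    lo, hi = 0.05, 0.60
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if alpha_s(MZ, mid) > alpha_mz:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)
```

With $\alpha_s(M_Z) = 0.118$ one finds $\Lambda \simeq 0.23$ GeV and, for example, $\alpha_s(150\ {\rm GeV}) \simeq 0.110$.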
\begin{table}[h] \begin{tabular}{|r|c|c|c|c|c|c|c|c|c|} \hline $m_t\ $ & $\alpha_s(m_{t})$ & \gamud{(0)}{nw} &\delud{(0)}{}& \delud{(1)}{nw}$(0)$ & \delud{(1)}{nw} & \delud{(1)}{}& \gamud{(1)}{} & \delud{}{ew}& \gamud{}{t} \\ {\scriptsize(GeV)} & & {\scriptsize(GeV)} & {\scriptsize(\%)} & {\scriptsize(\%)} & {\scriptsize(\%)} & {\scriptsize(\%)} & {\scriptsize(GeV)} & {\scriptsize(\%)} & {\scriptsize(GeV)} \\ \hline 90.0& .118& .0234& 11.69 & 7.88 &-3.81& 6.56 &.0249& 0.81& .0251\\ 100.0& .116& .0931& 0.16 &-4.56 &-6.91& -6.89 &.0867& 1.04& .0876\\ 110.0& .115& .1955& -1.44 &-6.81 &-7.83& -9.22 &.1775& 1.20& .1796\\ 120.0& .113& .3265& -1.78 &-7.61 &-8.20& -9.89 &.2942& 1.33& .2982\\ 130.0& .112& .4849& -1.82 &-7.97 &-8.37&-10.08 &.4360& 1.43& .4423\\ 140.0& .111& .6708& -1.77 &-8.15 &-8.44&-10.10 &.6031& 1.51& .6122\\ 150.0& .110& .8852& -1.69 &-8.25 &-8.47&-10.05 &.7962& 1.57& .8087\\ 160.0& .109& 1.130& -1.60 &-8.31 &-8.49& -9.99 &1.017& 1.62& 1.033\\ 170.0& .108& 1.405& -1.52 &-8.34 &-8.49& -9.91 &1.266& 1.67& 1.287\\ 180.0& .107& 1.714& -1.45 &-8.35 &-8.48& -9.84 &1.546& 1.70& 1.572\\ 190.0& .106& 2.059& -1.39 &-8.36 &-8.47& -9.77 &1.857& 1.73& 1.890\\ 200.0& .106& 2.440& -1.33 &-8.36 &-8.46& -9.70 &2.203& 1.76& 2.242\\ \hline \end{tabular} \caption{Top width as a function of the top mass and comparison of the different approximations.} \end{table} \section{Electroweak corrections} The complete one loop electroweak corrections to the standard model top decay have been calculated in \cite{DS} and \cite{Gad}. If the lowest order width is parametrized by ${\rm G}_F$ and $M_W$, cf. eqs. (1) and (9), the electroweak corrections are less than 2\% for realistic top masses. In particular there are no sizable effects arising from Yukawa couplings \cite{IMT}~\footnote{We thank Andre Hoang for checking that this important result is in agreement with \cite{DS} when the latter calculation is restricted to the leading ${\cal O}\left({m_t}^2/{M_W}^2\right)$ contribution \cite{Hoang}.}~.
For $100\ GeV \le\ m_t\ \le\ 200\ GeV$ and Higgs mass $M_H\ \ge\ 100\ GeV$ the potentially large ${\cal O}\left({m_t}^2/{M_W}^2\right)$ contributions from the diagrams with Yukawa couplings are smaller than 0.2\%~, and hence much smaller than other terms which are subleading in $m_t$. The dependence of the correction on $M_H$ is weak; see \cite{DS} for details. In the following we assume $M_H = 100\ GeV$~. Strictly speaking, $m_t$, $M_Z$, $M_W$, and $M_H$ cannot be treated as independent parameters. The standard model and the existing data imply a relation between them. For our choice of the masses one can neglect this effect, provided $m_t$ is not too close to the present experimental lower limit. The corresponding change of the Born width is -2.6\%, -0.8\%, and less than 0.3\% for $m_t =$~90,~100, and~$\ge$~110~$GeV$, respectively. Therefore we ignore the above mentioned relation and treat all the masses as independent parameters. If the measured $M_W$ and $M_H$ turned out to be very different from the values assumed in this paper, it would be straightforward to evaluate the corresponding change of the Born width. The width of the top quark including the electroweak correction can be evaluated from the formula \begin{equation} \Gamma_t = \Gamma^{(1)} \left[ 1 + \delta_{ew} \right]\quad, \end{equation} and a simple parametrization \begin{equation} \delta_{ew} (\%) \approx 2 - 1.5\bar y \end{equation} has been obtained by us from Table 1 in \cite{DS}. The results for \gamud{}{t} calculated using (14) and (15) are given in our Table 1. It should be noted that the size of the electroweak corrections is comparable to the uncertainties from as yet uncalculated ${\cal O}({\alpha_s}^2)$ corrections and the present uncertainty in the value of $\alpha_s$. The electroweak corrections are furthermore sensitive to the details of the Higgs sector, as exemplified by the recent calculations in the context of the two Higgs doublet model \cite{GH}~.
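Eqs. (14) and (15) can be combined in a few lines (a sketch with inputs read off our Table 1, not an independent calculation):

```python
MW = 80.10  # GeV

def delta_ew_percent(mt):
    """Eq. (15): electroweak correction in percent, with ybar = (M_W/m_t)^2."""
    return 2.0 - 1.5 * (MW / mt)**2

def gamma_top(mt, gamma1_qcd):
    """Eq. (14): dress the QCD-corrected width gamma1_qcd (GeV) with delta_ew."""
    return gamma1_qcd * (1.0 + delta_ew_percent(mt) / 100.0)
```

For $m_t = 150$ GeV this gives $\delta_{ew} \simeq 1.57\%$ and, taking $\Gamma^{(1)} = 0.7962$ GeV from Table 1, $\Gamma_t \simeq 0.809$ GeV, reproducing the last two columns of the table.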
\vskip 1cm {\bf\large\noindent Acknowledgements} \\ \vskip0.1cm M.J. thanks Lalit Sehgal for a conversation which stimulated writing this report. He would also like to acknowledge a research fellowship from the Alexander-von-Humboldt Foundation which enabled his stay in the Institut f\"ur Theoretische Teilchenphysik - Univ. of Karlsruhe, where a part of this work was done, and to thank the members of the physics faculty there for warm hospitality and stimulating atmosphere. \newpage \def\PLB #1 #2 #3 {{\it Phys. Lett.} {\bf {#1}B} (#2) #3} \def\NPB #1 #2 #3 {{\it Nucl. Phys.} {\bf B#1} (#2) #3} \def\PRD #1 #2 #3 {{\it Phys. Rev.} {\bf D#1} (#2) #3} \def\PRB #1 #2 #3 {{\it Phys. Rev.} {\bf B#1} (#2) #3} \def\PR #1 #2 #3 {{\it Phys. Rev.} {\bf #1} (#2) #3} \def\PP #1 #2 #3 {{\it Phys. Rep.} {\bf#1} (#2) #3} \def\PRL #1 #2 #3 {{\it Phys. Rev. Lett.} {\bf#1} (#2) #3} \def\CPC #1 #2 #3 {{\it Comp. Phys. Commun.} {\bf#1} (#2) #3} \def\ANN #1 #2 #3 {{\it Annals of Phys.} {\bf#1} (#2) #3} \def\APPB #1 #2 #3 {{\it Acta Phys. Polonica} {\bf B#1}(#2) #3} \def\ZPC #1 #2 #3 {{\it Zeit. f. Phys.} {\bf C#1} (#2) #3} \def\SJNP #1 #2 #3 {{\it Sov. J. Nucl. Phys.} {\bf#1} (#2) #3} \def\YadF #1 #2 #3 {{\it Yad. Fiz.} {\bf#1} (#2) #3} \def\IJMPA #1 #2 #3 {{\it Int. J. Mod. Phys.} {\bf A#1} (#2) #3}
\section{Introduction} \setcounter{equation}{0} We begin by introducing the definitions and notation that will be used. Unless otherwise specified, $X$ is an infinite dimensional real Banach space with norm $\|\cdot\|$ and dual space $X^{\ast}$. A {\it bornology} ${\cal B}$ on $X$ is a family of bounded subsets of $X$ such that $\cup \left\{ B:B \in {\cal B} \right\} = X$. We will focus on the following bornologies: $G=$ \{singletons\}, $H=$ \{compact sets\}, $WH=$ \{weakly compact sets\} and $F=$ \{bounded sets\}. Observe that $G \subset H \subset WH \subset F$. A function $f : X \rightarrow \Bbb{R}$ is called {\it ${\cal B}$-differentiable} at $x \in X$ if there is $\Lambda \in X^{\ast}$ such that for each $B \in {\cal B}$, \[ \frac{1}{t} \Bigl[ f(x+th) - f(x) - \langle \Lambda, th\rangle \Bigr] \rightarrow 0~~ {\rm as}~~ t \downarrow 0 \] uniformly for $h \in B$. Let ${\cal F}$ denote a family of real-valued locally Lipschitz functions on $X$; we will usually consider locally Lipschitz (loclip), Lipschitz (lip), distance (dist), continuous convex (conv) and norms (norm); it is, of course, easy to check that continuous convex functions are locally Lipschitz (\cite{Ph}, Proposition 1.6). For two bornologies on a fixed Banach space $X$, say $F$ and $G$ and a family of functions ${\cal F}$, we will write $F_{{\cal F}} = G_{{\cal F}}$, if for every $f \in {\cal F}$ and every $x \in X, f$ is $F$-differentiable at $x$ if and only if $f$ is $G$-differentiable at $x$. Since $G_{\mbox{loclip}} = H_{\mbox{loclip}}$, we will write $G$ and $H$ interchangeably. In the paper \cite{BF}, it was shown that $H_{\mbox{conv}} = F_{\mbox{conv}}$ if and only if $X$ is infinite dimensional. From this, one might be tempted to believe that various notions of differentiability for convex functions coincide precisely when the bornologies on the space coincide. 
However, this is far from being the case; for example, according to (\cite{BF}, Theorem 2), $WH_{\mbox{conv}} = F_{\mbox{conv}}$ if and only if $X \not\supset \ell_1$. In contrast to this, we will show in Section 2 that differentiability notions coincide for Lipschitz functions precisely when the bornologies are the same (for the $H$, $WH$ and $F$ bornologies). In the third section we will study the relationship between $WH$-differentiability and $H$-differentiability for continuous convex functions. In particular, we show that if $B_{X^*}$ is $w^*$-sequentially compact, then $H_{\mbox{conv}} = WH_{\mbox{conv}}$ precisely when $H = WH$. However, there are spaces for which $H_{\mbox{conv}} = WH_{\mbox{conv}}$ and yet $H \not= WH$. This leads to examples showing that one cannot always extend a convex function from a space to a superspace while preserving $G$-differentiability at a prescribed point. Some characterizations of the Schur and Dunford-Pettis properties are also obtained in terms of differentiability of continuous $w^*$-lower semicontinuous convex functions on the dual space. \section{Lipschitz functions and bornologies} \setcounter{equation}{0} As mentioned in the introduction, there are spaces for which $WH \neq F$ but $WH_{\mbox{conv}} = F_{\mbox{conv}}$. However, this is not the case for Lipschitz functions. \begin{thm}\label{thm2.1} For a Banach space $X$, the following are equivalent. \begin{description} \item[(a)] $X$ is reflexive. \item[(b)] $WH_{\mbox{\em lip}} = F_{\mbox{\em lip}}$. \item[(c)] $WH_{\mbox{\em dist}} = F_{\mbox{\em dist}}$. \end{description} \end{thm} In order to prove this theorem, we will need a special type of sequence in nonreflexive Banach spaces. Namely, we will say $\left\{x_k \right\}^{\infty}_{k=1} \subset X$ is a {\it special sequence} if there is an $\epsilon > 0$ such that $\left\{ z_k \right\} \subset X$ has no weakly convergent subsequence whenever $\|x_k - z_k \| < \epsilon$.
\medskip \noindent {\bf Remark} (a) There are examples of sequences $\left\{ x_k \right\}$ such that $\left\{ x_k \right\}$ has no weakly convergent subsequence but $\left\{ x_k \right\}$ is not special. Indeed, let $X = \ell_1$ and consider $y_{n,m} = e_n + \frac{1}{n}e_m$ for $m$, $n \in \Bbb{N}, m > n$. Let $\{x_k\}$ be any sequential arrangement of $\left\{ y_{n,m} \right\}$. It is not hard to verify $\left\{ x_k \right\}$ has the desired properties. Another example is $X = c_0$ and $y_{n,m} = {\displaystyle \sum^{n}_{k=1}} e_k + {\displaystyle \sum^{n+m}_{k = n+1}} \frac{1}{n}e_k$. (b) If $f$ is Lipschitz and $WH$-differentiable at $0$ (with $f^{\prime}(0)=0)$ but $f$ is not Fr\'{e}chet differentiable at 0, it is not hard to construct a special sequence $\left\{ x_k \right\}$. Indeed, because $f$ is not Fr\'{e}chet differentiable, we can choose $\left\{ x_k \right\}$ in the unit sphere $S_{X}$ of $X$ and $t_k \downarrow 0$ which satisfy \[ \frac{|f(t_k x_k) - f(0)|}{t_k} \geq \epsilon \hspace{.5in} {\rm for~~some}~~ \epsilon > 0. \] Using the fact that $f$ is Lipschitz and $WH$-differentiable at 0, one can easily show that $\left\{ x_k \right\}$ is special. \hfill{$\square$}\vspace{\baselineskip} \noindent Part (b) of the above remark shows that in order to prove Theorem \ref{thm2.1}, it is necessary to show each nonreflexive Banach space has a special sequence. On the other hand, part (a) shows that such sequences must be chosen carefully. \begin{lem} \label{lem2.2} Suppose $\left\{ x_n \right\} \subset X$ has no weakly convergent subsequence. Then some subsequence of $\left\{x_n \right\}$ is a special sequence. \end{lem} \noindent {\bf Proof.\ \ } If some subsequence of $\left\{ x_n \right\}$ is special, then there is nothing more to do. So we will suppose this is not so and arrive at a contradiction by producing a weakly convergent subsequence of $\left\{ x_n \right\}$. 
Given $\epsilon = 1$, by our supposition, we choose $N_1 \subset \Bbb{N}$ and $\left\{ z_{1,i} \right\}_{i \in N_1}$ such that \[ \| x_i - z_{1,i} \| < 1 ~~ \enskip ~~ {\rm for} ~~ \enskip ~~ i \in N_1 ~~\enskip~~ {\rm and}~~\enskip~~ w\!{\rm-}\!\!\lim_{i \in N_1} z_{1,i} = z_1 . \] Supposing $N_{k-1}$ has been chosen, we choose $N_k \subset N_{k-1}$ and $\left\{ z_{k,i} \right\}_{i \in N_k} \subset X$ satisfying \begin{equation}\label{eqn2.1} \| x_i - z_{k,i} \| < \frac{1}{k} ~~\enskip~~ {\rm for}~~ i \in N_k ~~\enskip~~ {\rm and} ~~\enskip~~ w\!{\rm-}\!\!\lim_{i \in N_k} z_{k,i} = z_k . \end{equation} In this manner we construct $\left\{ z_{k,i} \right\}_{i \in N_k}$ and $N_k$ for all $k \in \Bbb{N}$. Notice that $z_n - z_m = {\displaystyle w\!{\rm-}\!\!\lim_{i \in N_n}} (z_{n,i} - z_{m,i})$ for $n > m$. Thus by the $w$-lower semicontinuity of $\|\cdot\|$ and (\ref{eqn2.1}) we obtain \[ \| z_n - z_m \| \leq \liminf_{i \in N_n} \|z_{n,i} - z_{m,i} \| \leq \liminf_{i \in N_n} (\| z_{n,i} - x_i \| + \| x_i - z_{m,i} \|) \leq \frac{1}{n} + \frac{1}{m} \leq \frac{2}{m}. \] Thus $z_n$ converges in norm to some $z_{\infty} \in X$. Now for each $n \in \Bbb{N}$ choose integers $i_n \in N_n$ with $i_n > n$. We will show $x_{i_{n}} \stackrel{w}{\rightarrow} z_{\infty}$. So let $\Lambda \in B_{X^{\ast}}$ and $\epsilon > 0$ be given. We select an $n_0 \in \Bbb{N}$ which satisfies \begin{equation}\label{eqn2.2} \frac{1}{n_0} < \frac{\epsilon}{3} ~~\enskip~~ {\rm and} ~~\enskip~~ \| z_m - z_{\infty} \| < \frac{\epsilon}{3} ~~\enskip~~ {\rm for} ~~\enskip~~ m \geq n_0. \end{equation} Because $z_{n_{0},i} \stackrel{w}{\rightarrow} z_{n_{0}}$, we can select $m_0$ so that \begin{equation}\label{eqn2.3} \bigl| \langle \Lambda, z_{n_{0},i} - z_{n_{0}} \rangle \bigr| < \frac{\epsilon}{3} ~~ \enskip~~ {\rm for~all} ~~\enskip~~ i \geq m_0. 
\end{equation} For $m \geq \max \left\{ n_0, m_0 \right\}$, we have \[ \begin{array}{ll} \bigl| \langle \Lambda, x_{i_{m}} - z_{\infty} \rangle \bigr| & \leq \bigl| \langle \Lambda, x_{i_{m}} - z_{n_{0},i_{m}} \rangle \bigr| + \bigl| \langle \Lambda, z_{n_{0},i_{m}} - z_{n_{0}} \rangle \bigr| + \bigl| \langle \Lambda, z_{n_{0}} - z_{\infty} \rangle \bigr| \\ & < \| x_{i_{m}} - z_{n_{0},i_{m}} \| + \frac{\epsilon}{3} + \| z_{n_{0}} - z_{\infty} \| \hspace{.25in} \left[ {\rm by~(\ref{eqn2.3})~since}~i_m > m \geq m_0 \right] \\ & < \frac{1}{n_0} + \frac{\epsilon}{3} + \frac{\epsilon}{3} < \epsilon. \hspace{2.1in} {\rm [by~ (\ref{eqn2.2})~ and ~(\ref{eqn2.1})]} \end{array} \] Therefore $x_{i_{n}} \stackrel{w}{\rightarrow} z_{\infty}.$ \hfill{$\square$}\vspace{\baselineskip} \noindent \noindent {\bf Proof of Theorem 2.1.} Notice that (a) $\Longrightarrow$ (b) $\Longrightarrow$ (c) is trivial. It remains to prove (c) $\Longrightarrow$ (a). Suppose $X$ is not reflexive, hence $B_X$ is not weakly compact and so there exists $\left\{ x_n \right\} \subset S_X$ with no weakly convergent subsequence. By Lemma 2.2 there is a subsequence, again denoted by $\left\{ x_n \right\}$, and a $\Delta \in (0,1)$ such that $ \left\{ z_n \right\} \subset X$ has no weakly convergent subsequence whenever $\| z_n - x_n \| < \Delta$. By passing to another subsequence if necessary we may assume $\| x_n - x_m \| > \delta$ for all $n \neq m$, with some $0<\delta<1$. For $n = 1,2, \ldots,$ let $B_n = \{x \in X : \|x - 4^{-n}x_n\| \le \delta \Delta 4^{-n-1} \}$ and put $C = X \backslash {\displaystyle \cup^{\infty}_{n=1}} B_n$. Because $4^{-m} + \delta \Delta 4^{-m-1} < 4^{-n} - \delta \Delta 4^{-n-1}$ for $m > n$, we have that $B_n \cap B_m = \emptyset$ whenever $n \neq m$. For $x \in X$, let $f(x)$ be the distance of $x$ from $C$. Thus $f$ is a Lipschitz function on $X$ with $f(0) = 0$. We will check that $f$ is $WH$-differentiable at 0 but not $F$-differentiable at 0. 
Let us first observe that $f$ is $G$-differentiable at 0. So fix any $h \in X$ with $\|h\|=1$. Then $[0,+\infty)h$ meets at most one ball $B_n$. In fact assume $t_m, t_n > 0$ are such that $\|t_{i}h - 4^{-i}x_i \| < \delta \Delta 4^{-i-1}$ for $i = n,m$. Then $|4^{i}t_i - 1 | < \frac{\delta \Delta}{4}$ for $i = n,m$ and \[ \begin{array}{ll} \| x_n - x_m \| & \leq \| x_n - 4^n t_n h\| + \| 4^n t_n h - 4^m t_m h \| + \| 4^m t_m h - x_m \| \\ & < \frac{\delta \Delta}{4} + \frac{2 \delta \Delta}{4} + \frac{\delta \Delta}{4} = \delta \Delta < \delta. \end{array} \] This means that $n=m$. It thus follows that for $t>0$ small enough, we have $f(th)=0$. Therefore $f$ is $G$-differentiable at 0, with $f^{\prime}(0) = 0$. Let us further check that $f$ is not $F$-differentiable at 0. Indeed, \[ \frac{f(4^{-n}x_n)}{\| 4^{-n}x_n\|} = \frac{\delta \Delta}{4} ~~\enskip~~ {\rm for~all}~~n, \] while $\|4^{-n}x_n \| \rightarrow 0$. Finally assume that $f$ is not $WH$-differentiable at 0. Then there are a weakly compact set $K \subset B_X, \epsilon > 0$, and sequences $\left\{ k_m \right\} \subset K, t_m \downarrow 0$ such that \[ \frac{f(t_m k_m)}{t_m} > \epsilon ~~\enskip~~ {\rm for~all}~~ m \in \Bbb{N}. \] Hence, as $f$ is $1$-Lipschitz, we have $\inf_m \| k_m \| \ge \epsilon > 0$. Further, because $f(t_m k_m) > 0$, there are $n_m \in \Bbb{N}$ such that \[ \| t_m k_m - 4^{-n_{m}} x_{n_{m}} \| < \Delta \delta 4^{-n_{m}-1}, ~~ m = 1,2, \ldots\ . \] Consequently, \begin{equation}\label{eqn2.4} \|4^{n_{m}}t_m k_m - x_{n_{m}} \|< \frac{\Delta \delta}{4} < \Delta ~~\mbox{and}~~ |4^{n_{m}} t_m \| k_m \| - 1 | < \frac{\Delta \delta}{4} . \end{equation} Because $\left\{ x_n \right\}$ is a special sequence with constant $\Delta$, the first inequality in (\ref{eqn2.4}) says that $\left\{ 4^{n_{m}} t_m k_m \right\}$ does not have a weakly convergent subsequence.
However the second inequality in (\ref{eqn2.4}) together with $\inf_m \| k_m \| > 0$ ensures that $4^{n_m}t_m$ is bounded and so, since $\{k_m\}$ lies in the weakly compact set $K$, $4^{n_m}t_m k_m$ has a weakly convergent subsequence, a contradiction. This proves $f$ is $WH$-differentiable at 0. \hfill{$\square$}\vspace{\baselineskip} \noindent Recall that a Banach space has the {\it Schur property} if $H = WH$, that is, weakly convergent sequences are norm convergent. \begin{thm}\label{thm2.3} For a Banach space $X$, the following are equivalent. \begin{description} \item[(a)] $X$ has the Schur property. \item[(b)] $H_{\mbox{\em lip}} = WH_{\mbox{\em lip}}.$ \item[(c)] $H_{\mbox{\em dist}} = WH_{\mbox{\em dist}}$. \end{description} \end{thm} \noindent {\bf Proof.\ \ } It is clear that (a) $\Longrightarrow$ (b) $\Longrightarrow$ (c), thus we prove (c) $\Longrightarrow$ (a). Suppose $X$ is not Schur and choose $\left\{ x_n \right\} \subset S_X$ such that $x_n \stackrel{w}{\rightarrow} 0$ but $\| x_n \| \not\rightarrow 0$. Since $\left\{ x_n \right\}$ is not relatively norm compact, we may assume by passing to a subsequence if necessary that $\| x_i - x_j \| > \delta$ for some $\delta \in (0,1)$ whenever $i \neq j$. As in the proof of Theorem 2.1, let $B_n = \{x \in X: \|x - 4^{-n}x_n\| \le \delta 4^{-n-1}\}$, $C = X \backslash {\displaystyle \cup^{\infty}_{n = 1}} B_n$ and let $f(x) = d(x,C)$. Now $f(0)=0$ and the argument of Theorem \ref{thm2.1} shows that $f$ is $G$-differentiable at 0 with $f^{\prime}(0)=0$. However, \[ \frac{f(4^{-n}x_n)}{4^{-n}} = \frac{\delta}{4} ~~\enskip~~ {\rm for~all} ~~ n \in \Bbb{N}. \] Since $\left\{ x_n \right\} \cup \left\{ 0 \right\}$ is weakly compact, it follows that $f$ is not $WH$-differentiable at 0. \hfill{$\square$}\vspace{\baselineskip} \noindent \noindent {\bf Remark.} Using the technique from the proof of Theorem 2.1, one can also prove the following statement.
If a nonreflexive Banach space $X$ admits a Lipschitzian $C^k$-smooth bump function, then it admits a Lipschitz function which is $C^k$-smooth on $X \backslash \left\{ 0 \right\}, WH$-differentiable at 0, but not $F$-differentiable at 0. A corresponding remark holds for non-Schur spaces. \section{Differentiability properties of convex functions} \setcounter{equation}{0} We begin by summarizing some known results. First recall that a Banach space $X$ has the {\it Dunford-Pettis property} if $\langle x^{\ast}_{n}, x_{n}\rangle \rightarrow 0$ whenever $x_{n} \stackrel{w}{\rightarrow} 0$ and $x^{\ast}_{n} \stackrel{w}{\rightarrow} 0$. For notational purposes we will say $X$ has the $DP^{\ast}$ if $\langle x^{\ast}_{n},x_n \rangle \rightarrow 0$ whenever $x^{\ast}_{n} \stackrel{w^{\ast}}{\rightarrow} 0$ and $x_n \stackrel{w}{\rightarrow} 0$; see (\cite{DU}, p. 177) for more on the Dunford-Pettis property. Note that a {\it completely continuous} operator takes weakly convergent sequences to norm convergent sequences. The proof of the next result is essentially in {\cite{BF}}. \begin{thm}(\cite{BF}) \label{thm3.1} \begin{enumerate} \item[(a)] $X$ does not contain a copy of $\ell_1$ if and only if $WH_{\mbox{\em conv}} = F_{\mbox{\em conv}}$ if and only if $WH_{\mbox{\em norm}} = F_{\mbox{\em norm}}$ if and only if each completely continuous linear $T : X \rightarrow c_0$ is compact. \item[(b)] $X$ has the $DP^{\ast}$ if and only if $H_{\mbox{\em conv}} = WH_{\mbox{\em conv}}$ if and only if $H_{\mbox{\em norm}} = WH_{\mbox{\em norm}}$ if and only if each continuous linear $T : X \rightarrow c_0$ is completely continuous. \item[(c)] $X$ is finite dimensional if and only if $G_{\mbox{\em conv}} = F_{\mbox{\em conv}}$ if and only if $G_{\mbox{\em norm}} = F_{\mbox{\em norm}}$ if and only if each continuous linear $T : X \rightarrow c_0$ is compact. 
\end{enumerate} \end{thm} \noindent {\bf Proof.\ \ } Let us mention that (a) is contained in (\cite{BF}, Theorem 2) and (c) is from (\cite{BF}, Theorem 1). Whereas (b) can be obtained by following the proofs of (\cite{BF}, Proposition 1 and Theorem 1). \hfill{$\square$}\vspace{\baselineskip} \noindent If $WH_{\mbox{conv}} \neq G_{\mbox{conv}}$, for example, we can be somewhat more precise. \begin{prop}\label{prop3.2} Suppose $WH_{\mbox{\em conv}} \neq G_{\mbox{\em conv}}$ on $X$. Then there is a norm $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ on $X$ such that $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ is not $WH$-differentiable at $x_0 \neq 0$ but $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |^{\ast}$ is strictly convex at $\Lambda_0 \in X^{\ast}\backslash\{0\}$ satisfying $\langle \Lambda_0,x_{0}\rangle =|\hskip-.13em | \hskip-.13em | x_0|\hskip-.13em | \hskip-.13em |\, |\hskip-.13em | \hskip-.13em | \Lambda_0|\hskip-.13em | \hskip-.13em | $. \end{prop} \noindent {\bf Proof.\ \ } Following the techniques of \cite{BF}, one obtains a norm $\| \cdot \|$ on $X$ such that $\| \cdot \|$ is $G$-differentiable at $x_0 \neq 0$ but $\| \cdot \|$ is not $WH$-differentiable at $x_{0}$. Now define $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ on $X$ by \[ |\hskip-.13em | \hskip-.13em | x |\hskip-.13em | \hskip-.13em | = (\|x\|^2 + d^2 (x,\Bbb{R} x_0) )^{\frac{1}{2}} \] Clearly $d^2 ( \cdot , \Bbb{R} x_0 )$ is $F$-differentiable at $x_0$ and so it follows that $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ is $G$-differentiable at $x_0$ but $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ is not $WH$-differentiable at $x_0$ because $\| \cdot \|$ is not. 
Suppose now that $\left\{ x_n \right\}$ satisfies \begin{equation}\label{eqn3.1} 2 |\hskip-.13em | \hskip-.13em | x_n |\hskip-.13em | \hskip-.13em |^2 + 2|\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |^2 - |\hskip-.13em | \hskip-.13em | x_n + x_0 |\hskip-.13em | \hskip-.13em |^2 \rightarrow 0. \end{equation} Then by convexity one obtains \[ \| x_n \| \rightarrow \| x_0 \| ,\ d^2 (x_n , \Bbb{R} x_0 ) \rightarrow d^2 (x_0 , \Bbb{R} x_0 ) = 0. \] From this one easily sees that $\| x_n - x_0 \| \rightarrow 0.$ Now take $\Lambda_0 \in X^*$ such that $|\hskip-.13em | \hskip-.13em | \Lambda_0|\hskip-.13em | \hskip-.13em | =1$ and $\langle \Lambda_0, x_0 \rangle = |\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |$. We show that $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ is strictly convex at $\Lambda_0$. Suppose that $|\hskip-.13em | \hskip-.13em | x^*|\hskip-.13em | \hskip-.13em | = 1$ and $|\hskip-.13em | \hskip-.13em | x^* + \Lambda_0|\hskip-.13em | \hskip-.13em | = 2$, then choose $\left\{ x_n \right\}$ with $|\hskip-.13em | \hskip-.13em | x_n |\hskip-.13em | \hskip-.13em | = |\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |$ so that \begin{equation}\label{eqn3.2} \langle x^* + \Lambda_0 , x_n \rangle \rightarrow 2 |\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |. \end{equation} Consequently $\langle \Lambda_0 , x_n \rangle \rightarrow |\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |$ and thus $\langle \Lambda_0 , x_n + x_0 \rangle \rightarrow 2|\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |$, whence $|\hskip-.13em | \hskip-.13em | x_n + x_0 |\hskip-.13em | \hskip-.13em | \to 2|\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |$. But then $\left\{ x_n \right\}$ satisfies (\ref{eqn3.1}) and so $\| x_n - x_0 \| \rightarrow 0$.
This with (\ref{eqn3.2}) shows that $\langle x^* , x_0 \rangle = |\hskip-.13em | \hskip-.13em | x_0 |\hskip-.13em | \hskip-.13em |$. Because $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ is $G$-differentiable at $x_0$, we conclude that $x^* = \Lambda_0$. This proves the strict convexity of $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ at $\Lambda_0$. \hfill{$\square$}\vspace{\baselineskip} \noindent One can also formulate similar statements (and proofs) for the cases $G_{\mbox{conv}} \neq F_{\mbox{conv}}$ and $WH_{\mbox{conv}} \neq F_{\mbox{conv}}$. We now turn our attention to spaces for which $WH_{\mbox{conv}} = H_{\mbox{conv}}$. Let us recall that a Banach space $X$ has the {\it Grothendieck property} if $w^{\ast}$-convergent sequences in $X^{\ast}$ are weakly convergent; see (\cite{DU}, p. 179). The following corollary is an immediate consequence of Theorem \ref{thm3.1} (b). \begin{cor} \label{cor3.3} If $X$ has the Dunford-Pettis property and the Grothendieck property, then $WH_{\mbox{\em conv}} = H_{\mbox{\em conv}}$. \end{cor} In particular, note that $\ell_{\infty}$ has the Grothendieck property (cf \cite{D}, p.103) and the Dunford-Pettis property (cf \cite{DU}, p. 177). Thus, unlike the case for Lipschitz functions, one can have $WH_{\mbox{conv}} = H_{\mbox{conv}}$ for non-Schur spaces. It will follow from the next result that these non-Schur spaces must be quite large, though. \begin{thm} \label{thm3.4} For a Banach space $X$, the following are equivalent. \begin{itemize} \item[(i)] $X$ has the $DP^{\ast}$ \item[(ii)] $H_{\mbox{\em conv}} = WH_{\mbox{\em conv}}$ \item[(iii)] If $B_{Y^{\ast}}$ is $w^{\ast}$-sequentially compact, then any continuous linear $T : X \rightarrow Y$ is completely continuous. \end{itemize} \end{thm} \noindent {\bf Proof.\ \ } By Theorem \ref{thm3.1}(b) we know that (i) and (ii) are equivalent and that (iii) implies (i). We will show (i) implies (iii) by contraposition. 
Suppose (iii) fails, that is, there is an operator $T : X \rightarrow Y$ which is not completely continuous for some $Y$ with $B_{Y^{\ast}}$ $w^{\ast}$-sequentially compact. Hence we choose $\left\{ x_n \right\} \subset X$ such that $x_n \stackrel{w}{\rightarrow} 0$ but $\| T x_n \| \not\rightarrow 0$. Because $T x_n \stackrel{w}{\rightarrow} 0$, we know that $\left\{ T x_n \right\}$ is not relatively norm compact. Hence letting $E_n = {\rm span} \left\{ y_k : k \leq n \right\}$ with $y_k = T x_k$ we know there is an $\epsilon > 0$ such that ${\displaystyle \sup^{}_{k}} \,d (y_k , E_n ) > \epsilon$ for each $n$. By passing to a subsequence, if necessary, we assume $d(y_n , E_{n-1}) > \epsilon$ for each $n$. Now choose $\Lambda_n \in B_{Y^{\ast}}$ such that $\langle \Lambda_n , x \rangle = 0$ for all $x \in E_{n-1}$ and $\langle \Lambda_n , y_n \rangle \geq \epsilon$. Because $B_{Y^{\ast}}$ is $w^{\ast}$-sequentially compact, there is a subsequence $\Lambda_{n_{k}}$ such that $\Lambda_{n_{k}} \stackrel{w^{\ast}}{\rightarrow} \Lambda \in B_{Y^{\ast}}$. Observe that $\langle \Lambda_{n}, y_{k}\rangle = 0$ for $n > k$ and consequently $\langle \Lambda, y_k\rangle = 0$ for all $k$. Now let $z^{\ast}_{k} = T^{\ast} (\Lambda_{n_{k}} - \Lambda)$ and $z_k = x_{n_{k}}$. Certainly $z^{\ast}_{k} \stackrel{w^{\ast}}{\rightarrow} 0$ and $z_k \stackrel{w}{\rightarrow} 0$ while $\langle z^{\ast}_{k},z_k \rangle = \langle \Lambda_{n_{k}} - \Lambda, Tx_{n_{k}}\rangle = \langle \Lambda_{n_{k}} - \Lambda, y_{n_{k}}\rangle \geq \epsilon$ for all $k$. This shows that $X$ fails the $DP^{\ast}$. \hfill{$\square$}\vspace{\baselineskip} \noindent \begin{cor} \label{cor3.5} If $X$ has a $w^{\ast}$-sequentially compact dual ball or, more generally, if every separable subspace of $X$ is a subspace of a complemented subspace with $w^{\ast}$-sequentially compact dual ball, then the following are equivalent. \begin{itemize} \item[(a)] $WH_{\mbox{\em conv}} = H_{\mbox{\em conv}}$.
\item[(b)] $X$ has the Schur property. \end{itemize} \end{cor} \noindent {\bf Proof.\ \ } Note that (b) $\Longrightarrow$ (a) is always true, so we show (a) $\Longrightarrow$ (b). If $B_{X^{\ast}}$ is $w^{\ast}$-sequentially compact and $X$ is not Schur then $I : X \rightarrow X$ is not completely continuous and Theorem 3.4 applies. More generally, suppose $x_n \stackrel{w}{\rightarrow} 0$ but $\| x_n \| \not\rightarrow 0$ and ${\overline {\rm span}} \left\{ x_n \right\} \subset Y$ with $B_{Y^{\ast}}$ $w^{\ast}$-sequentially compact. If there is a projection $P : X \rightarrow Y$, then $P$ is not completely continuous since $P|_{Y}$ is the identity on $Y$. \hfill{$\square$}\vspace{\baselineskip} \noindent We can say more in the case that $X$ is weakly countably determined (WCD); see \cite{M} and Chapter VI of \cite{DGZ} for the definition and further properties of WCD spaces. \begin{cor} \label{cor3.6} For a Banach space $X$, the following are equivalent. \begin{itemize} \item[(a)] $X$ is WCD and $WH_{\mbox{\em conv}} = H_{\mbox{\em conv}}$ \item[(b)] $X$ is separable and has the Schur property. \end{itemize} \end{cor} \noindent {\bf Proof.\ \ } It is obvious that (b) $\Longrightarrow$ (a), so we show (a) $\Longrightarrow$ (b). First, since $B_{X^{\ast}}$ is $w^{\ast}$-sequentially compact (see e.g. \cite{M}, Corollary 4.9 and \cite{LP}, Theorem 11), it follows from Corollary \ref{cor3.5} that $X$ has the Schur property. But $WCD$ Schur spaces are separable (see e.g. \cite{M}, Theorem 4.3). \hfill{$\square$}\vspace{\baselineskip} \noindent \noindent {\bf Remark} \begin{enumerate} \item[(a)] Corollary \ref{cor3.5} is satisfied, for instance, by $GDS$ spaces (see \cite{LP}, Theorem 11) and spaces with countably norming $M$-basis (see \cite{Pl}, Lemma 1). Notice that $\ell_1 (\Gamma)$ has a countably norming $M$-basis for any $\Gamma$, thus spaces with countably norming $M$-bases and the Schur property need not be separable. 
\item[(b)] If $X^{\ast}$ satisfies $WH_{\mbox{conv}} = F_{\mbox{conv}}$, then $X$ also does (because $L_1 \subset X^{\ast}$ if $\ell_1 \subset X$ (see \cite{Dul} Proposition 4.2)) but not conversely ($c_0$ and $\ell_1$); cf. Theorem \ref{thm3.1}(a). \item[(c)] Let $X$ be a space such that $X$ is Schur but $X^{\ast}$ does not have the Dunford-Pettis property (cf. \cite{DU}, p 178). Then $X$ satisfies $H_{\mbox{conv}} = WH_{\mbox{conv}}$ but $X^{\ast}$ does not satisfy $H_{\mbox{conv}} = WH_{\mbox{conv}}$. \item[(d)] There are spaces with the $DP^{\ast}$ that are neither Schur nor have the Grothendieck property; for example $\ell_1 \times \ell_{\infty}$. \item[(e)] It is well-known that $\ell_{\infty}$ has $\ell_2$ as a quotient (\cite{LT}, p. 111). Thus quotients of spaces with the $DP^{\ast}$ need not have the $DP^{\ast}$. It is clear that superspaces of spaces with the $DP^*$ need not have the $DP^*$; the example $c_0 \subset \ell_\infty$ shows that subspaces need not inherit the $DP^*$. \item[(f)] Haydon (\cite{H}) has constructed a nonreflexive Grothendieck $C(K)$ space which does not contain $\ell_\infty$. Using the continuum hypothesis, Talagrand (\cite{T}) constructed a nonreflexive Grothendieck $C(K)$ space $X$ such that $\ell_\infty$ is neither a subspace nor a quotient of $X$. Since $C(K)$ spaces have the Dunford-Pettis property (see \cite{D}, p. 113), both these spaces have the $DP^*$. \end{enumerate} As a byproduct of Corollaries \ref{cor3.3} and \ref{cor3.5} we obtain the following example which is related to results from (\cite{Z}). \smallskip \noindent {\bf Example.} Let $X$ be a space with the Grothendieck and Dunford-Pettis properties such that $X$ is not Schur (e.g. $\ell_{\infty}$). Then there is a separable subspace $Y$ (e.g. 
$c_0$) of $X$ and a continuous convex function $f$ on $Y$ such that $f$ is $G$-differentiable at 0 (as a function on $Y$), but no continuous convex extension of $f$ to $X$ is $G$-differentiable at 0 (as a function on $X$); there also exist $y_0 \in Y\backslash\{0\}$ and an equivalent norm $\|\cdot\|$ on $Y$ whose dual norm is strictly convex but no extension of $\|\cdot\|$ to $X$ is G-differentiable at $y_0$. \smallskip \noindent {\bf Proof.\ \ } Let $Y$ be a separable non-Schur subspace of $X$. By Corollary \ref{cor3.5}, there is a continuous convex function $f$ on $Y$ which is $G$-differentiable at 0, but is not $WH$-differentiable at 0. Since any extension $\tilde{f}$ of $f$ also fails to be $WH$-differentiable at 0, it follows that $\tilde{f}$ is not $G$-differentiable at 0 because $X$ has the $DP^{\ast}$. Because $Y$ fails the $DP^*$, there is a sequence $\{ \Lambda_n \} \subset Y^*$ such that $\Lambda_n$ converges $w^*$ but not Mackey to $0$. By the proof of (\cite{BF}, Theorem 3), there is a norm $\|\cdot\|$ on $Y$ whose dual norm is strictly convex and which fails to be $WH$-differentiable at some $y_0 \in Y\backslash \{0\}$; as above, no extension of $\|\cdot\|$ to $X$ can be $G$-differentiable at $y_0$. \hfill{$\square$}\vspace{\baselineskip} \noindent We close this note by relating the Schur and Dunford-Pettis properties to some notions of differentiability for dual functions. \begin{thm} \label{thm3.7} For a Banach space $X$, the following are equivalent. \begin{itemize} \item[(a)] $X$ has the Schur property. \item[(b)] $G$-differentiability and $F$-differentiability coincide for $w^{\ast}$-$\ell sc$ continuous convex functions on $X^{\ast}$. \item[(c)] $G$-differentiability and $F$-differentiability coincide for dual norms on $X^{\ast}$. \item[(d)] $H_{\mbox{\em lip}} = WH_{\mbox{\em lip}}$. \end{itemize} \end{thm} \noindent {\bf Proof.\ \ } Of course (a) and (d) are equivalent according to Theorem \ref{thm2.3}.
(a) $\Longrightarrow$ (b): Suppose (b) does not hold. Then for some continuous convex $w^{\ast}$-$\ell sc$ $f$ on $X^{\ast}$, there exists $\Lambda_{0} \in X^{\ast}$ such that $f$ is $G$-differentiable at $\Lambda_{0}$ but $f$ is not $F$-differentiable at $\Lambda_{0}$. Let $f^{\prime} (\Lambda_{0}) = x^{\ast\ast} \in X^{\ast\ast}$. We also choose $\delta > 0$ and $K > 0$ such that for $x_1^*, x_2^* \in B(\Lambda_{0},\delta)$ we have $|f(x_1^*) - f(x_2^*)| \leq K\|x_1^* - x_2^*\|$ (since $f$ is locally Lipschitz). Because $f$ is not $F$-differentiable at $\Lambda_{0}$, there exist $t_n \downarrow 0, t_n < \frac{\delta}{2}, \Lambda_{n} \in S_{X^{\ast}}$ and $\epsilon > 0$ such that \begin{equation} \label{eqn3.3} f(\Lambda_{0} + t_n \Lambda_{n}) - f(\Lambda_{0}) - \langle x^{\ast\ast},t_n \Lambda_{n} \rangle \geq \epsilon t_n. \end{equation} Because $f$ is convex and $w^{\ast}$-$\ell sc$, using the separation theorem we can choose $x_n \in X$ satisfying \begin{equation}\label{eqn3.4} \langle x_n,x^* \rangle \leq f(\Lambda_{0} + t_n \Lambda_{n} + x^*) - f(\Lambda_{0} + t_n \Lambda_{n}) + \frac{\epsilon t_n}{2} ~~ \mbox{for~all}~~ x^* \in X^{\ast}; \end{equation} Putting $x^* = -t_n \Lambda_n$ in (\ref{eqn3.4}) and using (\ref{eqn3.3}) one obtains \[ \begin{array}{lcl} \langle x_n , t_n \Lambda_{n} \rangle & \geq & f(\Lambda_{0} + t_n \Lambda_{n}) - f(\Lambda_{0}) - \frac{\epsilon t_n}{2} \\ & \geq & \langle x^{\ast\ast} , t_n \Lambda_{n} \rangle + \frac{\epsilon t_n}{2}. \end{array} \] And hence, $\|x_n - x^{\ast\ast} \| \geq \frac{\epsilon}{2}$ for all $n$. Let $\eta > 0$ and fix $x^* \in S_{X^*}$. Since $f$ is $G$-differentiable at $\Lambda_0$, there is a $0 < t_0 < \frac{\delta}{2}$ such that for $|t| \le t_0$ we have \begin{equation}\label{eqn3.5} \langle x^{**}, t x^* \rangle - f(\Lambda_0 + t x^*) + f(\Lambda_0) \ge - \frac{\eta}{2} t_0. 
\end{equation} Using (\ref{eqn3.4}) with the fact that $f$ has Lipschitz constant $K$ on $B(\Lambda_0, \delta)$, for $|t| \le t_0$ we obtain \[ \begin{array}{lcl} \langle x_n, t x^* \rangle & \le & f(\Lambda_0 + t_n \Lambda_n + tx^*) - f(\Lambda_0 + t_n \Lambda_n) + \frac{\epsilon t_n}{2} \\ & \le & f(\Lambda_0 + t x^*) - f(\Lambda_0) + \frac{\epsilon t_n}{2} + 2K t_n. \end{array} \] Choosing $n_0$ so large that $\frac{\epsilon t_n}{2} + 2K t_n < \frac{\eta}{2} t_0$ for $n \ge n_0$, the above inequality yields \begin{equation}\label{eqn3.6} f(\Lambda_0 + tx^*) - f(\Lambda_0) - \langle x_n, t x^* \rangle \ge -\frac{\eta}{2}t_0 ~~\mbox{for}~~ n\ge n_0,~|t|\le t_0. \end{equation} Adding (\ref{eqn3.5}) and (\ref{eqn3.6}) results in \[ \langle x^{**} - x_n, t x^* \rangle \ge -\eta t_0 ~~\mbox{for}~~ n\ge n_0, ~ |t| \le t_0. \] Hence $|\langle x^{**} - x_n, x^* \rangle| \le \eta$ for $n \ge n_0$. This shows that $x_n \stackrel{w^{\ast}}{\rightarrow} x^{\ast\ast}$. Combining this with the fact that $\|x_n - x^{\ast\ast} \| \not\rightarrow 0$ shown above, we conclude that for some $\rho > 0$ and some subsequence we have $\| x_{n_{i}} - x_{n_{i+1}} \| > \rho$ for all $i$. However $x_{n_{i}} - x_{n_{i+1}}\stackrel{w}{\rightarrow} 0$ (in $X$) because $x_{n_{i}} - x_{n_{i+1}} \stackrel{w^{\ast}}{\rightarrow} 0$ (in $X^{\ast\ast}$). This shows that $X$ is not Schur. Since (b) $\Longrightarrow$ (c) is obvious, we show that (c) $\Longrightarrow$ (a). Write $X = Y \times \Bbb{R}$ and suppose that $X$ is not Schur. Then we can choose $\left\{y_n \right\} \subset Y$ such that $y_n \stackrel{w}{\rightarrow} 0$ but $\|y_n \| =1$ for all $n$.
Let $\{\gamma_n\} \subset (\frac{1}{2}, 1)$ be such that $\gamma_n \uparrow 1$ and define $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ on $X^{\ast} = Y^{\ast} \times \Bbb{R}$ by \[ |\hskip-.13em | \hskip-.13em | (\Lambda , t) |\hskip-.13em | \hskip-.13em | = \sup \bigl\{ |\langle \Lambda , y_n \rangle + \gamma_n t| \bigr\} \lor \frac{1}{2}(\|\Lambda\| + | t|). \] This norm is dual since it is a supremum of $w^{\ast}$-$\ell sc$ functions and the proof of (\cite{BF}, Theorem 1) shows that $|\hskip-.13em | \hskip-.13em | \cdot |\hskip-.13em | \hskip-.13em |$ is Gateaux but not Fr\'echet differentiable at $(0,1)$. \hfill{$\square$}\vspace{\baselineskip} \noindent If $X$ is not Schur, then the previous theorem ensures the existence of a $w^*$-$\ell sc$ convex continuous function on $X^*$ which is G-differentiable but not F-differentiable at some point. The following remark shows that we can be more precise if $X \not\supset \ell_1$. \medskip \noindent {\bf Remark.} If $X \not\supset \ell_1$ and $X$ is not reflexive, then there is a $w^{\ast}$-$\ell sc$ convex $f$ on $X^{\ast}$ and $\Lambda \in X^{\ast}$ such that $f$ is $G$-differentiable at $\Lambda$ and $f^{\prime}(\Lambda) \in X^{\ast\ast} \backslash X$ (and, a fortiori, $f$ is not Fr\'echet differentiable at $\Lambda$). \smallskip \noindent {\bf Proof.\ \ } Let $Y$ be a separable nonreflexive subspace of $X$. Let $y^* \in Y^{\ast}$ be such that $y^*$ does not attain its norm on $B_{Y}$. Let $y^{\ast\ast} \in S_{Y^{\ast\ast}}$ be such that $\langle y^{\ast\ast} , y^* \rangle = 1$. Note that $y^{\ast\ast} \in S_{Y^{\ast\ast}} \backslash Y$ because $y^*$ does not attain its norm on $B_{Y}$. By the Odell-Rosenthal theorem (see \cite{D}, p.236), choose $\left\{y_n\right\} \subset B_Y$ such that $y_n \stackrel{w^{\ast}}{\rightarrow} y^{\ast\ast}$. 
Now $Y^{\ast\ast} = Y^{\perp\perp} \subset X^{\ast\ast}$ and some careful ``identification checks" show that $y_n \stackrel{w^{\ast}}{\rightarrow} y^{\ast\ast}$ as elements of $X^{\ast\ast}$ and $y^{\ast\ast} \in X^{\ast\ast} \backslash X$. Let $\Lambda$ be a norm preserving extension of $y^*$, then $\langle y^{\ast\ast} , \Lambda\rangle = 1$ and we define $f$ on $X^{\ast}$ by \[ f(x^*) = \sup \bigl\{ \langle x^*, y_n \bigr\rangle - 1 - a_n : n \in \Bbb{N} \bigr\} ~~\mbox{where}~~ a_n \downarrow 0. \] We now show that $y^{\ast\ast} \in \partial f(\Lambda)$. Indeed, \[ \begin{array}{lcl} f(\Lambda + x^*) - f(\Lambda)= f(\Lambda + x^*) & = & {\displaystyle \sup^{}_{n}} \bigl\{ \langle \Lambda + x^* , y_n \rangle - 1 - a_n \bigr\} \\ & \geq & {\displaystyle \lim^{}_{n \rightarrow \infty}} \bigl\{ \langle \Lambda , y_n \rangle - 1 - a_n + \langle x^* , y_n \rangle \bigr\} = \langle y^{**}, x^* \rangle. \end{array} \] To see that $f$ is $G$-differentiable, fix $x^* \in X^{\ast}$ and let $\epsilon > 0$. Choose $n_0$ so that $|\langle y^{\ast\ast} - y_n , x^* \rangle | \leq \epsilon \| x^* \|$ for $n \geq n_0$. Now if $2\|t\,x^*\| < \min \left\{ a_1 , \ldots , a_{n_{0}} \right\}$, we have \[ \begin{array}{lcl} 0 \leq f (\Lambda + t x^* ) - f(\Lambda) - \langle y^{\ast\ast} , t x^* \rangle & = & {\displaystyle \sup^{}_{n}} \bigl\{ \langle \Lambda + t x^* ,y_n \rangle - 1 - a_n \bigr\} - \langle y^{\ast\ast} , t x^* \rangle \\ & = & {\displaystyle \sup_n} \bigl\{ \langle \Lambda, y_n \rangle - 1 + \langle y_n - y^{**}, tx^* \rangle - a_n \bigr\} \\ & \le & \max\Bigl\{0,{\displaystyle \sup^{}_{n \geq n_0}} \bigl\{ \langle y_n - y^{**}, t x^* \rangle - a_n \bigr\}\Bigr\} \\ & \leq & {\displaystyle \sup^{}_{n \geq n_0}} \bigl\{ | \langle y^{\ast\ast} - y_n , t x^* \rangle | \bigr\} \leq \epsilon \|t x^*\|. \end{array} \] Thus $f$ is $G$-differentiable at $\Lambda$ with $G$-derivative $y^{\ast\ast} \in Y^{\ast\ast} \backslash Y$. 
\hfill{$\square$}\vspace{\baselineskip} \noindent Using the results of \cite{BF} and \v{S}mulyan's test-type arguments in a fashion similar to Theorem \ref{thm3.7}, one can also obtain the following result. We will not provide the details. \begin{thm} \label{thm3.8} For a Banach space $X$, the following are equivalent. \begin{itemize} \item[(a)] $X$ has the Dunford-Pettis property. \item[(b)] $G$-differentiability and $WH$-differentiability coincide for $w^{\ast}$-$\ell sc$, continuous convex functions on $X^{\ast}$. \item[(c)] $G$-differentiability and $WH$-differentiability coincide for dual norms on $X^{\ast}$. \end{itemize} \end{thm} We next consider what happens for ${\cal F}$ a family of norms alone. \medskip\noindent {\bf Remark.} \begin{itemize} \item[(a)] $G_{\mbox{norm}}=F_{\mbox{norm}}$ on $X$ implies $G_{\mbox{dualnorm}} = F_{\mbox{dualnorm}}$ on $X^{\ast}$, but not conversely. \item[(b)] $G_{\mbox{norm}} = WH_{\mbox{norm}}$ on $X$ implies $G_{\mbox{dualnorm}} = WH_{\mbox{dualnorm}}$, but not conversely. \item[(c)] $WH_{\mbox{norm}} = F_{\mbox{norm}}$ on $X$ does not imply $WH_{\mbox{dualnorm}} = F_{\mbox{dualnorm}}$ on $X^{\ast}$. \end{itemize} \noindent {\bf Proof.\ \ } (a) This is immediate from Theorem \ref{thm3.1}(c) and Theorem \ref{thm3.7} (since there are Schur spaces that are not finite dimensional). (b) Since the $DP^{\ast}$ property implies the Dunford-Pettis property, the first part follows from Theorem \ref{thm3.1}(b) and Theorem \ref{thm3.8}. However, if $X$ is separable, then by Corollary \ref{cor3.6}, $H_{\mbox{conv}} = WH_{\mbox{conv}}$ if and only if $X$ is Schur. Thus the separable space $C[0,1]$ does not satisfy $G_{\mbox{norm}} = WH_{\mbox{norm}}$ yet it has the Dunford-Pettis property, and thus by Theorem \ref{thm3.8} satisfies $G_{\mbox{dualnorm}} = WH_{\mbox{dualnorm}}$. (c) On $c_0$ one has $WH_{\mbox{norm}} = F_{\mbox{norm}}$ (see Theorem \ref{thm3.1}).
But $\ell_1 = c^{\ast}_{0}$ is a separable dual space and so it admits a dual $G$-norm (see \cite{DGZ}, Theorem II.6.7(ii) and Corollary II.6.9(ii)). This norm cannot be everywhere $F$-differentiable since $\ell_1$ is not reflexive (see \cite{DGZ}, Proposition II.3.4). However, this dual norm is everywhere $WH$-differentiable since $\ell_1$ is Schur. Thus we do not have $WH_{\mbox{dualnorm}} = F_{\mbox{dualnorm}}$ on $\ell_1$. \hfill{$\square$}\vspace{\baselineskip} \noindent In fact we can be more precise than we were in (c). Using Theorem \ref{thm3.7}, Theorem \ref{thm3.8} and Theorem \ref{thm3.1} along with results from \cite{BF} one can obtain the following chain of implications. \centerline{$X$ fails the Schur property but has the Dunford-Pettis property $\Longrightarrow$} \centerline{$WH_{\mbox{dualnorm}} \neq F_{\mbox{dualnorm}}$ on $X^{\ast}$ $\Longrightarrow$} \centerline{$X$ fails the Schur property and $X^{\ast} \supset \ell_1$.}
\section{Introduction} The remarkable success of hydrodynamical approaches in describing the vast amount of data from heavy-ion collision experiments \cite{Busza:2018rrf,Pasechnik:2016wkt}, including the nuclear suppression factor, radial flow and elliptic flow measurements, provides compelling evidence that the produced hot and dense matter thermalizes within about 0.6 fm/c after the initial impact~\cite{Heinz:2004pj,Huovinen:2001cy,Hirano:2002ds}. However, in the seminal work by Baier, Mueller, Schiff and Son in Ref.~\cite{Baier:2000sb}, the thermalization time via scattering processes in the weak-coupling limit was estimated theoretically to be 2.5 fm/c or above. Recent studies \cite{Berges:2020fwq,Epelbaum:2013ekf,Berges:2013fga} in the weak-coupling limit have improved our understanding of quark-gluon plasma (QGP) equilibration to a great extent. On the other hand, several attempts have also been made to study thermalization in the strong-coupling limit within the AdS/CFT formulation~\cite{Strickland:2013uga,Chesler:2008hg,Chesler:2009cy,Heller:2011ju,Casalderrey-Solana:2011dxg}. Modern formulations of relativistic fluid dynamics suggest that neither local near-equilibrium nor near-isotropy is required in order to have a successful hydrodynamical description of the experimental results \cite{Romatschke:2016hle}. Several efforts have been made over the years in the development of relativistic viscous hydrodynamics \cite{Romatschke:2017ejr} which systematically incorporates the dissipative effects~\cite{Florkowski:2017olj,Jaiswal:2016hex,Jeon:2015dfa,Kovtun:2012rj}. Viscous hydrodynamics concludes that at time $\sim$ 2 fm/c, the QGP created in ultrarelativistic heavy-ion collisions (URHIC) has different longitudinal and transverse pressures~\cite{Strickland:2013uga}.
This occurs due to the rapid expansion of the QCD matter along the longitudinal (beam) direction, which gives rise to a large local-rest-frame momentum-space anisotropy~\cite{Strickland:2013uga,Mandal:2013jkd,Romatschke:2003ms} in the $p_T-p_L$ plane. This anisotropic momentum distribution can cause plasma instabilities in the system which contribute to the thermalization and isotropization of the QCD plasma~\cite{Arnold:2003rq,Mrowczynski:1988dz,Mrowczynski:1993qm,Mrowczynski:2000ed}. It is found that the exponential growth of the unstable modes plays an important role in the dynamics of the system in the weak-coupling limit~\cite{Randrup:2003cw}. On the hydrodynamic side, the anisotropic hydrodynamics (aHydro) framework is formulated to efficiently take into account the large momentum-space anisotropy of the system \cite{Strickland:2014pga,Alqahtani:2017mhy}. On the other hand, hard-thermal-loop perturbation theory \cite{Ghiglieri:2020dpq,Su:2012iy} has been employed~\cite{Romatschke:2003ms,Nopoush:2017zbu} to systematically study the properties of anisotropic QCD plasma. Typically, one uses a specific distribution function of light quarks and gluons which is widely known as the `Romatschke-Strickland' (RS) form~\cite{Romatschke:2003ms,Romatschke:2004jh}. There has been a concerted effort to study the effect of the momentum-space anisotropies on the heavy-quark potential~\cite{Nopoush:2017zbu,Dumitru:2007hy,Burnier:2009yu}, bottomonia suppression~\cite{Strickland:2011mw,Strickland:2011aa,Krouppa:2015yoa}, photon and dilepton production rates~\cite{Schenke:2006yp,Bhattacharya:2015ada}, the wake potential~\cite{Mandal:2013jla} and so on. The generalized RS form of the distribution function \cite{Tinti:2013vba}, which takes into account the azimuthal momentum-space anisotropy, has recently been investigated in Refs.~\cite{Kasmaei:2018yrr,Ghosh:2020sng,Carrington:2021bnk}.
On the other hand, the production of strong magnetic fields~\cite{Kharzeev:2007jp,Skokov:2009qp} at the early stages of non-central heavy-ion collisions has triggered enormous research interest in the theoretical, phenomenological and experimental understanding of strongly interacting matter under extreme conditions \cite{Huang:2015oca,Miransky:2015ava}. The time dependence of the produced magnetic field has long remained a subject of debate in the heavy-ion collision community~\cite{McLerran:2013hla,Roy:2017yvg,Huang:2015oca}. At the early stages of the collision, the system is dominated mainly by gluons. Subsequently, a large number of quarks and anti-quarks are produced and the system evolves towards equilibrium. Therefore, the system is believed to be much less conducting at early times. Considering Pb+Pb collisions at $\sqrt{s}=2.76$ TeV, it is found in Ref.~\cite{Roy:2017yvg} that for an insulating medium, a magnetic field of strength $\sim$ 100 $m_\pi^2$ rapidly decays to a very low value within around 0.1 fm/c after the initial impact \cite{Huang:2015oca}. The rapid decrease in the field strength follows a $1/t^3$ behavior. However, the electrical conductivity of the medium significantly influences the time evolution of the electromagnetic fields in the late stage, when the system reaches near the equilibrium state. It is shown in Refs.~\cite{Tuchin:2013apa,Tuchin:2013ie} that the electrical conductivity of the medium can resist the decay of the magnetic field, at least to some extent.
Intense research has been carried out to study the properties of QCD matter in the presence of such a strong magnetic background, which has resulted in several interesting findings like the chiral magnetic effect~\cite{Fukushima:2008xe,Kharzeev:2007jp}, magnetic catalysis~\cite{Lee:1997zj}, inverse magnetic catalysis~\cite{Bali:2011qj,Ayala:2014iba}, non-trivial magnetic modifications of chiral symmetry broken/restored phases~\cite{Andersen:2012dz,Avancini:2016fgq}, photon and dilepton production rates~\cite{Wang:2020dsr,Tuchin:2013bda,Bandyopadhyay:2016fyd,Das:2021fma,Ghosh:2018xhh,Hattori:2020htm}, thermodynamic properties~\cite{Bali:2011qj,Rath:2017fdv,Karmakar:2019tdp,Bandyopadhyay:2017cle}, the heavy quark potential~\cite{Singh:2017nfa}, transport coefficients~\cite{Kurian:2018qwb,Kurian:2018dbn} and so on. The production of a strong magnetic field at the early stages of the collision naturally motivates one to investigate magnetic field effects on the anisotropic QGP. In the presence of an external magnetic field (with intensity $B$), one can define a hierarchy of energy scales as $\sqrt{|eB|}\gg T\gg g_sT$ which essentially determines the regime of validity of the strong magnetic field approximation. Here $e$ denotes the electric charge of the proton and $g_s$ is the strong coupling constant. In this regime, the quarks occupy only the lowest Landau level and the dynamics becomes 1+1 dimensional. In this article, we restrict ourselves to the lowest Landau level approximation and investigate the gluon collective modes in the presence of an anisotropic momentum distribution. For this purpose, the one-loop gluon self-energy is obtained in the HTL approximation using the real-time formalism of thermal field theory. We note here that the general structure of the polarization tensor plays an important role in the determination of the effective propagator and the collective modes. The thermo-magnetic collective modes have been studied recently in Refs.~\cite{Hattori:2017xoo,Karmakar:2018aig}.
The direction of the external magnetic field brings in an anisotropy in the system and naturally breaks the spherical symmetry. It also appears among the available four-vectors that have to be taken into account for the construction of the general structure. The situation is similar to the spheroidal momentum-space anisotropy. Thus, it is interesting to compare the two scenarios: one is the anisotropy due to the background field and the other is the anisotropy that arises due to the modeling of the non-equilibrium distribution function from the equilibrium distribution by suitable stretching or squeezing. In the present study we systematically address this issue. Throughout the article, we use the following convention: $\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}={\rm diag}(1,0,0,-1)$ and $\eta_{\perp}^{\mu\nu}={\rm diag}(0,-1,-1,0)$ with $\eta^{\mu\nu}=\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}+\eta_{\perp}^{\mu\nu}$ where the Lorentz indices $\{\mu, \nu\}\in\{0,1,2,3\}$. For a generic four-vector $a^\mu$, we define $a_{\stretchrel*{\parallel}{\perp}}^\mu=(a^0,0,0,a^3)=(a_0,0,0,a_z)$ and $a_\perp^\mu=(0,a^1,a^2,0)=(0,a_x,a_y,0)$. The corresponding scalar products are defined as $(a_{\stretchrel*{\parallel}{\perp}}\cdot b_{\stretchrel*{\parallel}{\perp}})=a^0b^0-a^3b^3$ and $(a_\perp\cdot b_\perp)=-a^1b^1-a^2b^2$. \section{Formalism} In this section we obtain the one-loop gluon self-energy in the presence of an anisotropic thermo-magnetic medium within the HTL approximation. For this purpose we follow the real-time Schwinger-Keldysh formalism \cite{Dumitru:2009fy,Carrington:1997sq,Carrington:1998jj,Mrowczynski:2000ed,Mrowczynski:2016etf} based on contour Green's functions, which is applicable to non-equilibrium field theories. The basic formalism to obtain the retarded, advanced and Feynman self-energies is reviewed in \cite{Mrowczynski:2016etf,Nopoush:2017zbu} in a self-contained manner.
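The metric decomposition fixed in the convention above can be sanity-checked numerically; the test four-vectors below are arbitrary illustrative values.

```python
import numpy as np

# Check eta = eta_par + eta_perp and the split of the Minkowski product,
# (a.b) = (a_par . b_par) + (a_perp . b_perp), on random test four-vectors.
eta_par = np.diag([1.0, 0.0, 0.0, -1.0])
eta_perp = np.diag([0.0, -1.0, -1.0, 0.0])
eta = eta_par + eta_perp
assert np.allclose(eta, np.diag([1.0, -1.0, -1.0, -1.0]))

rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)

par = a[0] * b[0] - a[3] * b[3]        # (a_par . b_par)
perp = -a[1] * b[1] - a[2] * b[2]      # (a_perp . b_perp)
assert np.isclose(a @ eta @ b, par + perp)
```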
Here we briefly recall the essential steps to obtain the retarded part of the gluon self-energy in an anisotropic background. In the real-time Keldysh formalism, the Green's functions for the quark field of a given flavour $\psi^i_\alpha$ and the gluon field $A^a_\mu$ can be expressed as \begin{align} i\[S(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\hat{T}\[\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\]\right\rangle~,\nonumber\\ i\[D(x,y)\]^{ab}_{\mu\nu}&=\left\langle\hat{T}\[A^a_\mu(x)A^b_\nu(y)\]\right\rangle~, \end{align} where the spinor indices are represented by the set $\{\alpha,\beta\}\in\{1,2,3,4\}$ and the color indices in the fundamental and adjoint representations of the $SU(N_c)$ group with $N_c=3$ are represented by the sets $\{i,j\}\in\{1,2,3\}$ and $\{a,b\}\in\{1,2\cdots 8\}$ respectively. Here the angular bracket notation $\left\langle\cdots\right\rangle$ denotes the expectation value and the time ordering $\hat{T}$ of two generic fields $\Phi_1$ and $\Phi_2$ has the usual meaning \begin{align} \hat{T}\[\Phi_1(x)\Phi_2(y)\]&=\Theta(x^0-y^0)\Phi_1(x)\Phi_2(y)\pm\Theta(y^0-x^0)\Phi_2(y)\Phi_1(x)~, \end{align} where $\Theta$ denotes the Heaviside step function and the upper (lower) sign corresponds to the bosonic (fermionic) nature of the $\Phi$ fields. At the one-loop level, the gluon polarization function receives three different contributions, arising from the gluon tadpole and loop diagrams, the ghost loop diagram and the quark loop diagram. In the presence of the external magnetic field, the gluon and ghost contributions remain unmodified, whereas corrections appear in the quark loop contribution. Moreover, in the HTL approximation, the expressions of the photon and gluon self-energy differ only in the definition of the Debye mass.
Thus, to find the net contribution of the gluon and ghost loops in the presence of an anisotropic momentum distribution, it is convenient to first obtain the photon polarization function without the external magnetic field, and then replace the QED Debye mass by the corresponding QCD expression for pure glue \cite{Nopoush:2017zbu}. The retarded self-energy so obtained is given by \cite{Kasmaei:2018yrr,Ghosh:2020sng}: \begin{align} \tilde{\Pi}_{ab}^{\mu\nu}(\omega,{\bm p},\xi)&=\delta_{ab}~\tilde{m}_D^2\int\frac{d\Omega_{\bm v}}{4\pi}v^\mu\frac{v^l+\xi_1({\bm v}\cdot {\bm a_1})a_1^l+\xi_2({\bm v}\cdot {\bm a_2})a_2^l}{(1+\xi_1({\bm v}\cdot {\bm a_1})^2+\xi_2({\bm v}\cdot {\bm a_2})^2)^2}\left.\Big[\eta^{\nu l}-\frac{v^\nu p^l}{\omega-{\bm p}\cdot {\bm v}+i0^+}\Big]\right\vert_{{\scriptscriptstyle l \in\{1,2,3\}}},\label{glughost_part} \end{align} where the strong coupling constant $g_s$ appears explicitly in the expression $\tilde{m}_D^2=\frac{g_s^2\Lambda_T^2}{3}N_c$, which corresponds to the QCD Debye mass with $N_f=0$; in the equilibrium limit, the scale $\Lambda_T$ corresponds to the temperature. Together with the scale $\Lambda_T$, the anisotropy tuple $\xi=(\xi_1,\xi_2)$ characterizes the ellipsoidal anisotropic distribution function constructed from the bosonic equilibrium distribution function as \cite{Nopoush:2017zbu,Ghosh:2020sng} \begin{align} f^{\rm B}_{\mbox{aniso}}({\bm k})\equiv f^{\rm B}_{\mbox{iso}}\Bigg(\frac{\sqrt{{\bm k}^2+\xi_1({\bm k}\cdot{\bm a_1})^2+\xi_2({\bm k}\cdot{\bm a_2})^2}}{\Lambda_T}\Bigg). \end{align} In this work, the spatial anisotropy vectors ${\bm a_1}$, ${\bm a_2}$ are chosen along the $\hat{x}=(1,0,0)$ and $\hat{z}=(0,0,1)$ directions respectively, whereas the spatial components $v^l$ of the parton four-velocity $v^\mu=(1,{\bm v})$ as well as the external momentum vector ${\bm p}$ are specified in spherical polar coordinates with angles $(\theta_k, \phi_k)$ and $(\theta_p, \phi_p)$ respectively.
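As a cross-check of Eq.~\eqref{glughost_part}: in the isotropic limit $\xi=(0,0)$ its $00$-component reduces to $\tilde{m}_D^2\int \frac{d\Omega_{\bm v}}{4\pi}\,\frac{{\bm p}\cdot{\bm v}}{{\bm p}\cdot{\bm v}-\omega}$, which reproduces the standard HTL longitudinal result $\tilde{m}_D^2\big[1-\frac{\omega}{2p}\ln\frac{\omega+p}{\omega-p}\big]$ for real $\omega>p$, with the static (Debye) limit $\tilde{m}_D^2$ as $\omega\to 0$. A numerical sketch (our quadrature; $\tilde{m}_D^2$ set to unity):

```python
import numpy as np

# Isotropic limit xi = (0,0) of the 00-component of the gluon/ghost self-energy:
# Pi^{00}(omega, p) = m_D^2 * Int dOmega/4pi  (p.v)/(p.v - omega),
# compared against the textbook HTL form (for real omega > p)
#   Pi^{00} = m_D^2 [1 - (omega/2p) ln((omega+p)/(omega-p))],
# and the static Debye limit Pi^{00}(0, p) = m_D^2.  Here m_D^2 = 1.

x, w = np.polynomial.legendre.leggauss(200)     # nodes/weights in cos(theta)

def pi00(omega, p):
    pv = p * x                                  # p.v with p chosen along z
    return np.sum(w * pv / (pv - omega)) / 2.0  # dOmega/4pi -> dx/2

for omega, p in [(2.0, 1.0), (1.5, 0.3), (5.0, 2.0)]:
    closed = 1.0 - (omega / (2 * p)) * np.log((omega + p) / (omega - p))
    assert np.isclose(pi00(omega, p), closed, atol=1e-10)

# Static (Debye screening) limit
assert np.isclose(pi00(1e-8, 1.0), 1.0, atol=1e-6)
```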
To obtain the quark loop contribution to the retarded self-energy in real time, here we recall the required definitions of the four Green's functions based on the propagation along the contour: \begin{align} i\[S^>(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\right\rangle~,\nonumber\\ i\[S^<(x,y)\]^{ij}_{\alpha\beta}&=-\left\langle\overline{\psi}^j_\beta(y)\psi^i_\alpha(x)\right\rangle~,\nonumber\\ i\[S^{\bar{c}}(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\hat{T}^{\bar{c}}\[\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\]\right\rangle~,\nonumber\\ i\[S^{\bar{a}}(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\hat{T}^{\bar{a}}\[\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\]\right\rangle~. \label{sdef} \end{align} Here $\hat{T}^{\bar{c}}$ is the same as the usual time-ordering operator $\hat{T}$ defined earlier, whereas the anti-time-ordering operator $\hat{T}^{\bar{a}}$ is defined as \begin{align} \hat{T}^{\bar{a}}\[\Phi_1(x)\Phi_2(y)\]&=\Theta(y^0-x^0)\Phi_1(x)\Phi_2(y)\pm\Theta(x^0-y^0)\Phi_2(y)\Phi_1(x)~, \end{align} where the upper (lower) sign corresponds to the bosonic (fermionic) nature of the generic $\Phi$ fields. The Green's function $S^{\bar{c}/\bar{a}}(x,y)$ is the same as the time-ordered propagator $S(x,y)$ with both $x^0$ and $y^0$ chosen on the upper/lower branch of the contour, where the contour runs along the forward/backward time direction. On the other hand, $S^{<}(x,y)$ and $S^{>}(x,y)$ are the same as $S(x,y)$ with $x^0$ on the upper and $y^0$ on the lower branch, and vice versa. To avoid clutter in the notation, let us first consider the photon self-energy in the presence of the magnetic background, which is given by \begin{align} i\Pi^{\mu\nu}(x,y)&=-e^2 {\rm Tr}\[\gamma^\mu S(x,y)\gamma^\nu S(y,x)\] \end{align} where $e$ is the magnitude of the electron charge and $S(x,y)$ represents the electron propagator in the presence of the magnetic field.
From definitions similar to those given in \eqref{sdef}, one can express the polarization tensor as a sum of $\Pi_{\mu\nu}^>$ and $\Pi_{\mu\nu}^<$ where \begin{align} i\Pi_{\mu\nu}^>(x,y)&=-e^2 {\rm Tr}\[\gamma_\mu S^>(x,y)\gamma_\nu S^<(y,x)\] ~,\nonumber\\ i\Pi_{\mu\nu}^<(x,y)&=-e^2 {\rm Tr}\[\gamma_\mu S^<(x,y)\gamma_\nu S^>(y,x)\] ~. \label{pi_grt_less} \end{align} Now, the retarded self-energy is defined as \begin{align} \Pi_{\mu\nu}^R(x,y)&=\theta(x^0-y^0)\big[\Pi_{\mu\nu}^>(x,y)-\Pi_{\mu\nu}^<(x,y)\big]~. \label{retarded} \end{align} It should be noted here that the fermion propagator in the presence of a background magnetic field possesses a multiplicative phase factor which spoils translational invariance \cite{Schwinger:1951nm}. However, in the one-loop photon polarization, the phase factors arising from the two fermion propagators cancel each other and only the translationally invariant parts of the propagators contribute. The same argument also applies to the quark loop in the gluon polarization tensor that we are interested in. Thus, from here on, it is useful to decompose the fermion propagator as \cite{Shovkovy:2012zn} $S(x,y)=e^{i\Phi(x_{\perp},y_{\perp})}\overline{S}(x-y)$ and consider only the invariant $\overline{S}(x-y)$ part in the self-energy. In that case, we are free to choose $y=0$ because of translational invariance and obtain the retarded self-energy as \begin{align} i\Pi^{\mu\nu}_R(x)&=-\frac{e^2}{2} {\rm Tr}\[\gamma^\mu \overline{S}_F(x) \gamma^\nu \overline{S}_A(-x)+\gamma^\mu \overline{S}_R(x) \gamma^\nu \overline{S}_F(-x)\]~.
\end{align} Note that, in the above expression, the $S^>$ and $S^<$ propagators that arise from Eq.\eqref{pi_grt_less} and Eq.\eqref{retarded}, have been expressed in terms of the Feynman, advanced and retarded propagators which are defined respectively as \begin{align} S_F(x,y)&=S^>(x,y)+S^<(x,y)~,\nonumber\\ S_A(x,y)&=-\theta(y^0-x^0)\[S^>(x,y)-S^<(x,y)\]~,\nonumber\\ S_{R}(x,y)&=\theta(x^0-y^0)\[S^>(x,y)-S^<(x,y)\]~. \end{align} In the momentum space one obtains \begin{align} i\Pi^{\mu\nu}_R(p)&=-\frac{e^2}{2}\int\frac{d^4k}{(2\pi)^4} {\rm Tr}\[\gamma^\mu \overline{S}_F(k) \gamma^\nu \overline{S}_A(q)+\gamma^\mu \overline{S}_R(k) \gamma^\nu \overline{S}_F(q)\]~,\label{momentum_space_pi} \end{align} where $q=k-p$. In the mass-less limit, the invariant part of the propagators with lowest Landau level approximation are given by \cite{Shovkovy:2012zn} \begin{align} \overline{S}_R(k)&=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right) \Delta_R(k)=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right)\frac{\exp\big(-\frac{k^2_\perp}{|e_f B|}\big)}{k_{\paral}^2+i\epsilon~ {\rm sgn}(k^0)}~, \nonumber\\ \overline{S}_A(k)&=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right) \Delta_A(k)=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right)\frac{\exp\big(-\frac{k^2_\perp}{|e_f B|}\big)}{k_{\paral}^2-i\epsilon~{\rm sgn}(k^0)} ~, \nonumber\\ \overline{S}_F(k)&=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right) \Delta_F(k)=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right)\Big[(2\pi i)\big[-1+2 f_{\rm F}(k_z)\big]\Big] \delta(k_{\paral}^2) \exp\Big(-\frac{k^2_\perp}{|e_f B|}\Big)~, \end{align} where $s_\perp={\rm sgn}(e_f B)$ with `${\rm sgn}$' representing sign function and the electric charge of the fermion is denoted as $e_f=q_f e$ which is equal to $-e$ for the electron. 
We also note that the expression of the fermion propagator used here is derived for a constant magnetic field of intensity $B$ along the $\hat{z}$ direction, which is the same as the direction of the anisotropy vector ${\bm a_2}$. It should be noted that in the presence of the background magnetic field, the energy eigenvalue of the free fermion depends only on the longitudinal momentum (say $k_z$) and the Landau level index (say $n$), as these are the conserved quantum numbers independent of the gauge choice. On the other hand, the transverse momentum, which appears in the expression of the propagators, should be considered only as a conjugate variable arising from the Fourier transform of the translationally invariant part and it does not appear in the energy eigenvalue. Hence, in the lowest Landau level approximation ($n=0$), we construct the nonequilibrium fermion distribution function from the equilibrium Fermi-Dirac distribution function $f_{\rm F}(k_z)$ as \begin{align} f^{\rm F}_{\rm aniso}(k_z)&\equiv f^{\rm F}_{\rm iso}\Big(\sqrt{k^2_z+\xi_2k^2_z}/\Lambda_T\Big)=f^{\rm F}_{\rm iso}\Big(|k_z|/\lambda_T\Big)~,\label{fermi_aniso} \end{align} where the newly defined momentum scale $\lambda_T$ is related to $\Lambda_T$ as $\lambda_T=\Lambda_T/\sqrt{1+\xi_2}$. Now, using the definition of the propagators in Eq.\eqref{momentum_space_pi}, the spinor trace can be performed and one obtains \begin{align} i\Pi^{\mu\nu}_R(p)&=-4e^2\int\frac{d^4k}{(2\pi)^4}\Big[k_{\paral}^\mu q_{\paral}^\nu+k_{\paral}^\nu q_{\paral}^\mu-\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot q_{\paral})\Big]\Big[\Delta_R(k)\Delta_F(q)+\Delta_F(k)\Delta_A(q)\Big]~,\nonumber\\ &=-8e^2\int\frac{d^4k}{(2\pi)^4}\Big[k_{\paral}^\mu q_{\paral}^\nu+k_{\paral}^\nu q_{\paral}^\mu-\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot q_{\paral})\Big]\Delta_F(k)\Delta_A(q)~, \end{align} where in the last step we have used $\Delta_F(-k)=\Delta_F(k)$ and $\Delta_R(-k)=\Delta_A(k)$.
As we are interested in the medium effects, here we only consider the medium-modified part of $\Delta_F(k)=4\pi i f_{\rm F}(k_z)e^{-\frac{k^2_\perp}{|e_f B|}}\delta(k_{\stretchrel*{\parallel}{\perp}}^2)$. Moreover, with this structure of the propagator, the longitudinal and transverse parts of the integral get separated and one can easily perform the integral over the transverse momentum as \begin{align} \int\frac{d^2k_\perp}{(2\pi)^2}\exp\Big(-\frac{k^2_\perp}{|e B|}\Big)\exp\Big(-\frac{q^2_\perp}{|e B|}\Big)&=\frac{|eB|}{8\pi}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)~. \end{align} The polarization function now becomes \begin{align} \Pi^{\mu\nu}_R(p)&=-4 e^2|eB|\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\int\frac{d^2k_{\stretchrel*{\parallel}{\perp}}}{(2\pi)^2}f_{\rm F}(k_z)\Bigg[\frac{2k_{\paral}^\mu k_{\paral}^\nu-(k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu)+\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot p_{\paral})}{p_{\paral}^2-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0-p^0)}\Bigg]\delta(k_{\paral}^2)~. \end{align} In the HTL approximation we consider the external momentum to be `soft', i.e.\ $p\sim e\Lambda_T$, and the internal momentum to be `hard', i.e.\ $k\sim\Lambda_T$.
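The transverse Gaussian integral above is elementary (complete the square in ${\bm k}_\perp$); a brute-force numerical check on a grid, with arbitrary illustrative values $|eB|=0.6$ and ${\bm p}_\perp=(0.4,-0.3)$:

```python
import numpy as np

# Midpoint-rule check of the transverse Gaussian integral,
#   Int d^2k_perp/(2pi)^2 exp(-k^2/|eB|) exp(-(k-p)^2/|eB|)
#     = (|eB|/8pi) exp(-p^2/(2|eB|)),
# for arbitrary test values of |eB| and p_perp (illustrative units).
eB = 0.6
p = np.array([0.4, -0.3])

L, N = 8.0, 800                              # integration box [-L, L]^2 and grid
u = (np.arange(N) + 0.5) * (2 * L / N) - L   # midpoint nodes
kx, ky = np.meshgrid(u, u)
k2 = kx**2 + ky**2
q2 = (kx - p[0])**2 + (ky - p[1])**2
numeric = (np.sum(np.exp(-k2 / eB) * np.exp(-q2 / eB))
           * (2 * L / N)**2 / (2 * np.pi)**2)

closed = eB / (8 * np.pi) * np.exp(-(p @ p) / (2 * eB))
assert np.isclose(numeric, closed, rtol=1e-6)
```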
With this hierarchy, a Taylor series expansion of the terms inside the square brackets can be performed, which up to second order is given as \begin{align} &\frac{2k_{\paral}^\mu k_{\paral}^\nu-(k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu)+\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot p_{\paral})}{p_{\paral}^2-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0-p^0)}=\frac{2k_{\paral}^\mu k_{\paral}^\nu-(k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu)+\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot p_{\paral})}{-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0)}\Bigg[1-\frac{p_{\stretchrel*{\parallel}{\perp}}^2}{2(k_{\paral}\cdot p_{\paral})+i\epsilon~ {\rm sgn}(k^0)}\Bigg]^{-1}~,\nonumber\\ &\approx\frac{2k_{\paral}^\mu k_{\paral}^\nu}{-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0)}-\frac{\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}}{2}+\frac{k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu}{2(k_{\paral}\cdot p_{\paral})+i\epsilon~ {\rm sgn}(k^0)}-p_{\paral}^2\frac{2 k_{\paral}^\mu k_{\paral}^\nu }{\big[2(k_{\paral}\cdot p_{\paral})+i\epsilon~ {\rm sgn}(k^0)\big]^2}~. \end{align} As in the thermal case, the first term in the expansion does not contribute. Integrating over the $k^0$ variable using the delta function property \begin{align} \delta(k^2_{\stretchrel*{\parallel}{\perp}})&=\frac{\delta(k^0-|k_z|)+\delta(k^0+|k_z|)}{2|k_z|}~, \end{align} one obtains \begin{align} \Pi^{\mu\nu}_R(p)&= e^2\frac{|eB|}{\pi}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\int\frac{dk_z}{2\pi}\frac{f_{\rm F}(k_z)}{|k_z|}\left.\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}-\frac{k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu}{(k_{\paral}\cdot p_{\paral})+i\epsilon}+\frac{p_{\paral}^2 k_{\paral}^\mu k_{\paral}^\nu }{\big[(k_{\paral}\cdot p_{\paral})+i\epsilon\big]^2}\Bigg]\right\vert_{k^0=|k_z|}~.
\end{align} The term in the square braces can be related to a total derivative term as \begin{align} \left.\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}-\frac{k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu}{(k_{\paral}\cdot p_{\paral})+i\epsilon}+\frac{p_{\paral}^2 k_{\paral}^\mu k_{\paral}^\nu }{\big[(k_{\paral}\cdot p_{\paral})+i\epsilon\big]^2}\Bigg]\right\vert_{k^0=|k_z|}&=-|k_z|\frac{\partial}{\partial k_z}\left.\Bigg[p_z\frac{k_{\paral}^\mu k_{\paral}^\nu}{|k_z|(k_{\paral}\cdot p_{\paral}+i \epsilon)}-\frac{k_{\paral}^\mu\eta_{\stretchrel*{\parallel}{\perp}}^{\nu 3}}{|k_z|}\Bigg]\right\vert_{k^0=|k_z|}~. \end{align} After performing an integration by parts with the assumption $\lim_{k_z\rightarrow\pm\infty}f_{\rm F}(k_z)=0$ one obtains \begin{align} \Pi^{\mu\nu}_R(p)&= e^2\frac{|eB|}{\pi}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\int\frac{dk_z}{2\pi}\frac{\partial f_{\rm F}(k_z)}{\partial k_z}\left.\Bigg[p_z\frac{k_{\paral}^\mu k_{\paral}^\nu}{|k_z|(k_{\paral}\cdot p_{\paral}+i \epsilon)}-\frac{k_{\paral}^\mu\eta_{\stretchrel*{\parallel}{\perp}}^{\nu 3}}{|k_z|}\Bigg]\right\vert_{k^0=|k_z|}~. \end{align} As in the thermal case, the above expression can further be simplified by expressing the magnitude and the angular integrals separately. Considering the anisotropic distribution function as given in Eq.~\eqref{fermi_aniso} one obtains \begin{align} \Pi^{\mu\nu}_R(p)&=\frac{m^2_{D,e}}{2}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\sum_{{\rm sgn}(k_z)=\pm1}\left.\frac{v_{\stretchrel*{\parallel}{\perp}}^\mu v_{\stretchrel*{\parallel}{\perp}}^l}{1+\xi_2}\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\nu l}-\frac{v_{\paral}^\nu p^l}{(v_{\paral}\cdot p_{\paral}+i \epsilon)}\Bigg]\right\vert_{l=3}~,\label{electron_loop} \end{align} where the Debye mass is defined as \begin{align} m^2_{D,e}&=-\frac{e^2}{\pi^2}|e B|\int d|k_z| \frac{\partial f^{\rm iso}_{\rm F}(|k_z|)}{\partial |k_z|}=e^2\frac{|eB|}{2\pi^2}~.
\end{align} One can observe that, as a consequence of the dimensional reduction in the strong field approximation, the solid angle integral with the $4\pi$ angular average in Eq.~\eqref{glughost_part} now reduces to a summation along with an average over two possible directions. It should be noticed that, unlike the thermal case, the self-energy is independent of the momentum scale $\Lambda_T$, and the anisotropy parameter appears only in a multiplicative factor without any directional dependence. However, an implicit dependence on the momentum scale is present due to the running of the coupling constant. Now, incorporating the flavor sum and the color factor, the quark loop contribution to the retarded gluon polarization tensor can be obtained from Eq.~\eqref{electron_loop} as \cite{Fukushima:2015wck} \begin{align} \bar{\Pi}_{ab}^{\mu\nu}(p)&=\delta_{ab}\sum_{f}g_s^2\frac{|e_f B|}{8\pi^2}\exp\Big(-\frac{p^2_\perp}{2|e_f B|}\Big)\sum_{{\rm sgn}(k_z)=\pm1}\left.\frac{v_{\stretchrel*{\parallel}{\perp}}^\mu v_{\stretchrel*{\parallel}{\perp}}^l}{1+\xi_2}\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\nu l}-\frac{v_{\paral}^\nu p^l}{(v_{\paral}\cdot p_{\paral}+i \epsilon)}\Bigg]\right\vert_{l=3}~.\label{quark_loop} \end{align} In the static limit ($\omega=0, {\bm p}\rightarrow 0 $), the temporal component $\bar{\Pi}^{00}$ with $\xi_2=0$ becomes \cite{Karmakar:2018aig} \begin{align} \bar{m}_D^2&=\sum_{f}g_s^2\frac{|e_f B|}{4\pi^2}~, \end{align} which, together with the magnetic field independent contribution $\tilde{m}_D^2$, defines the Debye screening mass $\hat{m}_D=\sqrt{\tilde{m}_D^2+\bar{m}_D^2}$.
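Since $-\partial f_{\rm F}/\partial|k_z| = f_{\rm F}(1-f_{\rm F})/T$ for the equilibrium Fermi-Dirac function at zero chemical potential, the $|k_z|$ integral in the Debye mass above equals $f_{\rm F}(0)=1/2$ for any temperature, which fixes $m^2_{D,e}=e^2|eB|/(2\pi^2)$. A numerical sketch (temperature and coupling values below are arbitrary test inputs):

```python
import numpy as np

# -Int_0^inf d|k_z| dfF/d|k_z| = fF(0) = 1/2 for fF(k) = 1/(exp(k/T)+1),
# so m_{D,e}^2 = (e^2/pi^2)|eB| * (1/2) = e^2 |eB| / (2 pi^2) at any T.
T = 0.2                                    # arbitrary scale
k = np.linspace(0.0, 40 * T, 200_000)
dk = k[1] - k[0]
fF = 1.0 / (np.exp(k / T) + 1.0)
minus_dfdk = fF * (1.0 - fF) / T           # = -dfF/dk analytically
integral = np.sum(minus_dfdk) * dk         # left Riemann sum
assert abs(integral - 0.5) < 1e-4

e2, eB = 0.1, 0.3                          # arbitrary test values of e^2, |eB|
mD2 = (e2 / np.pi**2) * eB * integral
assert np.isclose(mD2, e2 * eB / (2 * np.pi**2), rtol=1e-3)
```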
Finally, the retarded gluon polarization function is obtained from the individual contributions given in Eq.~\eqref{glughost_part} and Eq.~\eqref{quark_loop} as \begin{align} \Pi_{ab}^{\mu\nu}(p,eB,\xi,\Lambda_T)&=\tilde{\Pi}_{ab}^{\mu\nu}(p,\xi_1,\xi_2,\Lambda_T)+\bar{\Pi}_{ab}^{\mu\nu}(p, eB,\xi_2,\Lambda_T)~, \label{pi} \end{align} where the dependence on the external parameters $p$, $eB$, $\xi$ and $\Lambda_T$ has been shown explicitly in each case. The polarization function is symmetric in the Lorentz indices ($\Pi^{\mu\nu}=\Pi^{\nu\mu}$) and satisfies the transversality condition $p_\mu\Pi^{\mu\nu}=0$. Incorporating these constraint relations, the general structure of the polarization function can be constructed from the available basis tensors. A suitable choice in this regard is the basis set constructed for the ellipsoidal momentum anisotropy in Ref.~\cite{Ghosh:2020sng}. A list of the required basis tensors is provided in the Appendix \ref{list_tensor} for completeness. In that basis, the gluon polarization tensor can be expressed as \begin{align} \Pi^{\mu\nu}&=\alpha A^{\mu\nu}+\beta B^{\mu\nu}+\gamma C^{\mu\nu}+\delta D^{\mu\nu}+\sigma E^{\mu\nu}+\lambda F^{\mu\nu}~, \end{align} and the corresponding form factors can be extracted from Eq.~\eqref{pi} through suitable projections. Here we note that, all of the projection tensors are symmetric and transverse to the external momentum. Thus, the decomposition of the polarization function trivially satisfies the symmetry constraint as well as the transversality condition. Now, the effective gluon propagator can be obtained from the Dyson--Schwinger equation \begin{align} \mathcal{D}&=\mathcal{D}_0-\mathcal{D}_0\Pi \mathcal{D}~. \end{align} Here $\mathcal{D}_0$ is the bare propagator and its inverse, with the gauge fixing parameter $\zeta$, is given by \begin{align} (\mathcal{D}_0^{-1})^{\mu\nu}&=-p^2\eta^{\mu\nu}-\frac{1-\zeta}{\zeta}p^\mu p^\nu~. 
\end{align} From the pole of the effective propagator, one can obtain the gluon collective modes by solving \begin{eqnarray} p^2-\Omega_{0,\pm}(p)&=&0~.\label{disp} \end{eqnarray} Any deviation from the light-like dispersion is encoded in the mode functions $\Omega_{0,\pm}$ which are given by \cite{Ghosh:2020sng} \begin{eqnarray} \Omega_0&=&\frac{1}{3}(\alpha + \beta + \delta)- \frac{1}{3}\frac{\varpi}{\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}}}+\frac{1}{3}\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}},\label{om0}\\ \Omega_{\pm}&=&\frac{1}{3}(\alpha + \beta + \delta)+ \frac{1\pm i\sqrt{3}}{6}\frac{\varpi}{\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}}}-\frac{1\mp i\sqrt{3}}{6}\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}},\label{ompm} \end{eqnarray} where $\varpi$ and $\chi$ are defined in terms of the form factors as \begin{eqnarray} \varpi&=&\alpha(\beta-\alpha)+\beta(\delta-\beta)+\delta(\alpha-\delta)-3(\gamma^2+\lambda^2+\sigma^2)~,\\ \chi&=&(2\alpha-\beta-\delta)(2\beta-\delta-\alpha)(2\delta-\alpha-\beta)+54\gamma\lambda\sigma\nonumber\\ &-&9\big[\alpha(2\lambda^2-\sigma^2-\gamma^2)+\beta(2\sigma^2-\gamma^2-\lambda^2) +\delta(2\gamma^2-\lambda^2-\sigma^2)\big]~. \end{eqnarray} It should be noted here that among the six form factors, only $\alpha$, $\beta$ and $\gamma$ get modified in the presence of the external magnetic field. However, all of the form factors depend on the external anisotropy parameter $\xi$. Now, each of the mode functions being a nontrivial combination of all six form factors, it is expected that, in addition to the anisotropy-induced effects, all the gluon collective modes will possess magnetic modifications. We explore such anisotropic gluon collective modes in the following section.
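The mode functions \eqref{om0}--\eqref{ompm} are just Cardano's formulas: writing $s=\Omega-(\alpha+\beta+\delta)/3$, all three are roots of the depressed cubic $s^3+\frac{\varpi}{3}s-\frac{\chi}{27}=0$. A numerical consistency check with random test values for the form factors (our verification sketch, not from the text):

```python
import cmath
import random

# Random test values for the six form factors (illustrative only).
random.seed(1)
al, be, de, ga, si, la = (random.uniform(-1, 1) for _ in range(6))

varpi = al*(be - al) + be*(de - be) + de*(al - de) - 3*(ga**2 + la**2 + si**2)
chi = ((2*al - be - de)*(2*be - de - al)*(2*de - al - be) + 54*ga*la*si
       - 9*(al*(2*la**2 - si**2 - ga**2) + be*(2*si**2 - ga**2 - la**2)
            + de*(2*ga**2 - la**2 - si**2)))

# Cardano building block C = ((chi + sqrt(4 varpi^3 + chi^2))/2)^{1/3}
C = ((chi + cmath.sqrt(4*varpi**3 + chi**2)) / 2) ** (1 / 3)
m = (al + be + de) / 3
r3 = 3 ** 0.5
O0 = m - varpi/(3*C) + C/3
Op = m + (1 + 1j*r3)/6 * varpi/C - (1 - 1j*r3)/6 * C
Om = m + (1 - 1j*r3)/6 * varpi/C - (1 + 1j*r3)/6 * C

# The three mode functions sum to alpha + beta + delta ...
assert abs(O0 + Op + Om - (al + be + de)) < 1e-12
# ... and each solves s^3 + (varpi/3) s - chi/27 = 0 with s = Omega - m.
for O in (O0, Op, Om):
    s = O - m
    assert abs(s**3 + varpi*s/3 - chi/27) < 1e-8
```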
\section{Results} As mentioned earlier, for a fixed set of external parameters, the gluon collective modes can be obtained by solving Eq.~\eqref{disp}, which only requires the knowledge of the momentum dependence of the form factors. Choosing a particular orientation of the reference frame, this momentum dependence can be obtained from the components of the polarization tensor. Note that without the background field, $\Lambda_T$ is the only available energy scale for the anisotropic plasma, so the tensor components (and consequently the form factors and the mode functions) are proportional to the square of the thermal Debye screening mass given by \begin{align} m_D&=\sqrt{\frac{g_s^2 \Lambda_T^2}{3}\Bigg(N_c+\frac{N_f}{2}\Bigg)}~, \end{align} and one can do away with the $\Lambda_T$ dependence by simply expressing the dispersion in terms of the scaled variables $\omega/m_D$ and $|{\bm p}|/m_D$. When the external magnetic field is turned on, the gluon and the quark loop contributions become proportional to $\tilde{m}_D^2$ and $\bar{m}_D^2$, respectively. However, to compare with the thermal case, here we consider the same $m_D$ for scaling instead of the thermo-magnetic Debye mass $\hat{m}_D$. With $N_c=3$, the ratio of the squares of the Debye masses arising from the quark and the gluon contributions can be expressed as \begin{align} \frac{\bar{m}_D^2}{\tilde{m}_D^2}&=\mathcal{R}^2\sum_f|q_f|~, \end{align} where the ratio of the two energy scales is set by $2\pi\mathcal{R}=\sqrt{|eB|}/\Lambda_T$. In the present study, we consider $eB=30 m_\pi^2 $ and $\Lambda_T=0.2$ GeV which gives $2\pi \mathcal{R}\sim 3$. The value of the coupling $g_s$ at the fixed $\Lambda_T$ is determined from the one-loop running. For this purpose, the $\overline{\rm{MS}}$ renormalization scale is set at 0.176 GeV by fixing the QCD fine structure constant $\alpha_s(1.5~ {\rm GeV},N_f=3)=0.326$ \cite{Bazavov:2012ka,Haque:2014rua}.
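The quoted numbers can be cross-checked with a few lines of code. The snippet below is an illustrative consistency check (the charged-pion mass $m_\pi \approx 0.140$ GeV is an assumption, as its value is not stated in the text): it verifies that a one-loop $\Lambda_{\overline{\rm MS}} = 0.176$ GeV reproduces $\alpha_s(1.5~{\rm GeV}, N_f=3)=0.326$, and that $2\pi\mathcal{R}=\sqrt{eB}/\Lambda_T$ is indeed of order 3 for the chosen parameters:

```python
import math

def alpha_s_one_loop(Q, Lambda_ms=0.176, Nf=3):
    """One-loop running: alpha_s(Q) = 12*pi / ((33 - 2*Nf) * ln(Q^2/Lambda^2))."""
    return 12.0*math.pi/((33.0 - 2.0*Nf)*math.log(Q**2/Lambda_ms**2))

m_pi = 0.140        # GeV, charged-pion mass (assumed value)
Lambda_T = 0.2      # GeV, momentum scale used in the text
eB = 30.0*m_pi**2   # magnetic field strength considered in the text

two_pi_R = math.sqrt(eB)/Lambda_T       # 2*pi*R = sqrt(eB)/Lambda_T
print(round(alpha_s_one_loop(1.5), 3))  # -> 0.326
print(round(two_pi_R, 2))               # -> 3.83, i.e. of order 3
```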
With this fixed set of external parameters, we now obtain the three stable gluon collective modes characterized by the corresponding mode functions given in Eqs.~\eqref{om0} and \eqref{ompm}. At first we consider the case with one anisotropy direction. This can arise either from the presence of the external magnetic field or from the expansion of the medium resulting in an anisotropic momentum distribution of the partons. Now, irrespective of the origin of the anisotropy, the gluon polarization tensor can be expressed in terms of four basis tensors and the pole of the effective propagator gives rise to the same mode functions given by \cite{Karmakar:2018aig,Ghosh:2020sng} \begin{eqnarray} \Omega_0&=& \frac{1}{2}\bigg( \alpha + \beta+\sqrt{(\alpha - \beta)^2+4\gamma^2} \bigg), \label{RS1}\\ \Omega_+&=& \frac{1}{2}\bigg( \alpha + \beta-\sqrt{(\alpha - \beta)^2+4\gamma^2} \bigg), \label{RS2}\\ \Omega_-&=& \delta \label{RS3}. \end{eqnarray} However, it should be observed that the parameter dependences of the form factors in the two cases are completely different and it is interesting to compare the two scenarios. In Fig.~\ref{disp_plot_mo1}, we consider the dispersion corresponding to the mode function $\Omega_0$ for two different values of $\theta_p=\{\pi/2,\pi/4\}$, where $\theta_p$ represents the angle between the anisotropy vector and the external momentum. One can notice that the angular dependence is weak in both cases, although, in contrast to the magnetic field case, the mode corresponding to the spheroidal anisotropy shows a more prominent angular dependence in the low momentum regime. Also, it can be noticed that the plasma frequency in the presence of the external magnetic field is significantly larger compared to the spheroidal anisotropy scenario. The introduction of the magnetic field enhances the plasma frequency for this mode compared to the isotropic case (also shown in the figure) whereas a spheroidal anisotropy decreases it. \begin{figure}[tbh!]
\includegraphics[width=7 cm, scale=0.7]{Omega0com.pdf} \caption{The collective mode of gluon corresponding to the mode function $\Omega_0$ is shown at fixed momentum scale $\Lambda_T=0.2$ GeV and propagation angles $\theta_p=\pi/2$ and $\pi/4$ for two different cases: (i) with external magnetic field $eB=30 m_\pi^2$ (shown in red) and (ii) with spheroidal anisotropy (shown in blue). The light cone and the isotropic collective modes are also shown for comparison. } \label{disp_plot_mo1} \end{figure} The scenario is quite different in the case of the collective mode corresponding to $ \Omega_+$ as shown in Fig.~\ref{disp_plot_mo2}. In the presence of the background magnetic field, one can observe a prominent angular dependence in the dispersion shown in Fig.~\ref{disp_plot_mo2}(a). In the two limiting cases when the propagation angle $\theta_p$ is zero and $\pi/2$, the collective mode becomes identical respectively to the transverse ($\Pi_T$) and the longitudinal ($\Pi_L$) mode of the isotropic gluonic medium \cite{Karmakar:2018aig,Hattori:2017xoo}. It should be noted here that since $\gamma$ vanishes in the isotropic case and $\beta$ and $\delta=\Pi_T$ become degenerate, one obtains two distinct dispersive modes (also shown in the figure) corresponding to the mode functions \cite{Bellac:2011kqa} \begin{eqnarray} \Omega_0=\Pi_L=-\tilde{m}_D^2\frac{\omega^2-p^2}{ p^2}\bigg[ 1-\frac{\omega}{2p}\ln \frac{\omega+p}{\omega-p}\bigg]~, \label{disp_iso_1} \end{eqnarray} and \begin{eqnarray} \Omega_\pm=\Pi_T=\frac{\tilde{m}_D^2}{2}\frac{\omega^2}{p^2}\bigg[1-\frac{\omega^2-p^2}{2\omega p}\ln \frac{\omega+p}{\omega-p} \bigg]~. \label{disp_iso_2} \end{eqnarray} For the intermediate angles (shown for $\theta_p=\pi/12,\pi/6$), the mode lies within the isotropic dispersion curves of the pure gluonic medium.
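The isotropic limits in Eqs.~\eqref{disp_iso_1} and \eqref{disp_iso_2} can be solved numerically for the dispersion $\omega(p)$. The bisection sketch below (an illustration, not the authors' code; the Debye mass is set to unity) reproduces the standard hard-thermal-loop features: both branches start at the plasma frequency $m_D/\sqrt{3}$ at small momentum and lie above the light cone:

```python
import math

def Pi_L(w, p, m2=1.0):
    """Longitudinal HTL self-energy, Eq. (disp_iso_1) with Debye mass squared m2."""
    L = math.log((w + p)/(w - p))
    return -m2*(w*w - p*p)/(p*p)*(1.0 - w/(2.0*p)*L)

def Pi_T(w, p, m2=1.0):
    """Transverse HTL self-energy, Eq. (disp_iso_2) with Debye mass squared m2."""
    L = math.log((w + p)/(w - p))
    return 0.5*m2*w*w/(p*p)*(1.0 - (w*w - p*p)/(2.0*w*p)*L)

def solve_dispersion(Pi, p, m2=1.0):
    """Solve w^2 - p^2 - Pi(w, p) = 0 for w > p by bisection."""
    f = lambda w: w*w - p*p - Pi(w, p, m2)
    lo, hi = p*(1.0 + 1e-9), math.sqrt(p*p + m2) + math.sqrt(m2)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

w_pl = 1.0/math.sqrt(3.0)        # plasma frequency m_D/sqrt(3) for m_D = 1
wL = solve_dispersion(Pi_L, 0.1)
wT = solve_dispersion(Pi_T, 0.1)
print(wL, wT)  # both slightly above w_pl ~ 0.577 and above the light cone
```

At small momentum the two solutions approach $m_D/\sqrt{3}$ from above, with the transverse branch lying slightly higher, consistent with the known HTL expansions.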
On the other hand, in the case of spheroidal momentum space anisotropy, the angular dependence of the collective mode is quite different from the magnetic field case as shown in Fig.~\ref{disp_plot_mo2}(b). Here we consider the anisotropy tuple $\xi=(0,10)$. One can notice that the angular dependence is weaker. Moreover, the isotropic dispersions cannot be recovered by simply varying the propagation angle. It should be noted here that in this case the isotropic mode functions are the same as in Eq.~\eqref{disp_iso_1} and Eq.~\eqref{disp_iso_2}, however with $\tilde{m}_D^2$ replaced by $m_D^2$. Comparing the modes in Fig.~\ref{disp_plot_mo2}(a) and Fig.~\ref{disp_plot_mo2}(b), one can observe that in both cases the plasma frequency decreases compared to the isotropic value with $N_f=3$, and for the external magnetic field, it becomes equal to the plasma frequency of the isotropic pure gluonic medium ($ N_f=0$). Interestingly, due to the similar decomposition of the basis tensor, in both cases the mode characterized by $\Omega_-$ becomes identical to the corresponding isotropic transverse mode (see Eq.~\eqref{RS3} where $\delta$ is respectively proportional to $\tilde{m}_D^2$ and $m_D^2$ for the magnetic and spheroidal anisotropy case) and consequently becomes independent of the propagation angle. \begin{figure}[tbh!] \includegraphics[width=7 cm, scale=0.7]{Oplx0z0eB30.pdf} \includegraphics[width=7cm, scale=0.7]{Oplx0z10eB0.pdf} \caption{Angular variation of the collective mode of gluon corresponding to the mode function $\Omega_+$ is shown at fixed momentum scale $\Lambda_T=0.2$ GeV for two different cases: (a) with external magnetic field $eB=30 m_\pi^2$ and (b) with spheroidal anisotropy. The light cone and the isotropic collective modes (with (a) $N_f=0$ and (b) $N_f=3$) are also shown for comparison. } \label{disp_plot_mo2} \end{figure} \begin{figure}[tbh!]
\includegraphics[width=7 cm, scale=0.7]{x10z0th4ph6eB30.pdf} \includegraphics[width=7cm, scale=0.7]{x10z5th4ph6eB30.pdf} \caption{The collective modes of gluon with (a) spheroidal and (b) ellipsoidal anisotropy are shown for $\theta_p=\pi/4$ and $\phi_p=\pi/6$ at fixed momentum scale $\Lambda_T=0.2$ GeV and magnetic field strength $30m_\pi^2$. The light cone is also shown for comparison.} \label{disp_plot} \end{figure} Finally, the dispersion relation for the three stable modes in the presence of momentum space anisotropy as well as an external magnetic field is shown in Fig.~\ref{disp_plot}. In the left panel, we consider the momentum anisotropy along $\hat{x}$ which is orthogonal to the magnetic field direction (along $\hat{z}$) and fix the anisotropy tuple at $\xi=(10,0)$, whereas, in the right panel, the dispersion is shown for ellipsoidal momentum anisotropy with two anisotropy directions: one along the magnetic field ({\it{i.e.}} along $\hat{z}$) and the other orthogonal to it ({\it{i.e.}} along $\hat{x}$). In this case the anisotropy tuple is set at $\xi=(10,5)$. It should be noted that in the presence of either magnetic field or spheroidal momentum anisotropy (say along $\hat{z}$), the rotational symmetry of the system is broken and the dispersive modes depend on the direction of propagation of the gluons which is characterized by the polar angle $\theta_p$. However, when the two anisotropy directions are considered together, as long as they are not parallel to each other, the azimuthal symmetry of the system is also broken and consequently, the collective modes show azimuthal angular dependence. Here we consider a fixed propagation direction now characterized by $\theta_p=\pi/4$ and $\phi_p=\pi/6$. Unlike the magnetic field case discussed earlier (where the plasma frequencies of $\omega_+$ and $\omega_-$ were degenerate), one can observe from Fig.~\ref{disp_plot}(a) that all the collective modes possess different plasma frequencies.
Moreover, an overall decrease in the magnitude is observed compared to the thermo-magnetic modes (shown in Figs.~\ref{disp_plot_mo1} and \ref{disp_plot_mo2}(a)). Once the ellipsoidal anisotropy is considered, the plasma frequencies further decrease for all the modes as can be seen from Fig.~\ref{disp_plot}(b). This is in fact expected from Eq.~\eqref{quark_loop} as the anisotropy parameter $\xi_2$ essentially suppresses the quark loop contribution thereby decreasing the overall magnitude. \begin{center} \begin{figure}[tbh!] \begin{center} \includegraphics[scale=0.5]{mass_eB30_piby12_xi2.pdf} \caption{Variation of the squared mass with polar angle $\theta_p$ is shown for each mode function at fixed values of external parameters $\phi_p=\pi/12$, $\xi_1=10$, $\Lambda_T=0.2$ GeV and $eB=30m_\pi^2$. The continuous and the dashed curves represent $\xi_2=5$ and $\xi_2=0$ respectively.} \label{mass} \end{center} \end{figure} \end{center} Let us now consider the influence of the magnetic field on the unstable modes of the anisotropic medium. As in the case of spheroidal~\cite{Romatschke:2003ms} and ellipsoidal momentum anisotropy~\cite{Ghosh:2020sng}, in the limit $\omega\rightarrow0$, one can define three mass scales ($m_0$ and $m_\pm$) corresponding to the mode functions $\Omega_0$ and $\Omega_\pm$. A negative value of a given squared mass indicates the existence of an unstable mode. It should be mentioned here that instead of considering $N_f=3$, if one considers a two flavour plasma, all the qualitative features remain the same, and in the following we study the mass scales and the instability growth rate considering $N_f=2$. \begin{center} \begin{figure}[tbh!]
\begin{center} \includegraphics[width=5.5cm,scale=0.35]{mass_m1_eB_compare.pdf} \includegraphics[width=5.5cm,scale=0.35]{mass_m2_eB_compare.pdf} \includegraphics[width=5.5cm,scale=0.35]{mass_m3_eB_compare.pdf} \caption{Variation of the squared masses with the polar angle $\theta_p$ is shown at $\xi_1=10$ and $\xi_2=5$ with $\phi_p$ as a parameter. The continuous and the dashed curves represent the magnetic field strength 30$m_\pi^2$ and $0$ respectively.} \label{mass_compare} \end{center} \end{figure} \end{center} In Fig.~\ref{mass} we show the variation of the squared mass with the propagation angle of the gluon with respect to the magnetic field direction. For a fixed $\phi_p=\pi/12$, we consider two scenarios: one with $\xi=(10,0)$ and the other with $\xi=(10,5)$. In the former case, as we increase $\theta_p$, $m^2_+$ and $m^2_-$ gradually become negative. However, a positive value is observed for $m^2_0$ throughout the $\theta_p$ range. One should note that, at small $\phi_p$ (as considered here), the higher values of $\theta_p$ indicate proximity to the anisotropy axis and the observed angular dependence of the mass scales is similar to the spheroidal anisotropy case \cite{Romatschke:2003ms}. When the momentum anisotropy along the magnetic field direction is turned on, all the mass scales become nearly independent of $\theta_p$. In this case, a prominent negative value for $m^2_+$ is observed for the entire range of the polar angle. It is interesting to compare the scenario with the ellipsoidal anisotropy results as obtained in Ref.~\cite{Ghosh:2020sng}. For this purpose, in Fig.~\ref{mass_compare}, we show the directional dependence of the squared mass scales with and without the external magnetic field. Here we consider $\xi=(10,5)$.
One can notice that the angular dependence of the mass scales is similar to the ellipsoidal anisotropy scenario, showing a positive $m^2_0$ throughout the considered range of $\theta_p$ and $\phi_p$ along with instability windows for $m^2_\pm$. \begin{center} \begin{figure}[tbh!] \begin{center} \includegraphics[width=7cm,scale=0.7]{insta_1.pdf} \includegraphics[width=7cm,scale=0.7]{insta_2.pdf} \caption{The growth rate corresponding to $\Omega_+$ mode is plotted for (a) $\xi=(10,0)$ and (b) $\xi=(10,5)$ at fixed angles $\theta_p=\pi/3$, $\phi_p=\pi/12$. The continuous and the dashed curves correspond to the magnetic field strength $0$ and 30$m_\pi^2$ respectively.} \label{inst_mag} \end{center} \end{figure} \end{center} As already mentioned, the negative values in the squared mass indicate the presence of unstable modes whose amplitude grows exponentially with time. The growth rate of such instabilities (that is, the imaginary part of the mode frequency) can be obtained from the pole of the effective propagator. For this purpose, the mode frequency ($p^0=\omega$) in Eq.~\eqref{disp} is replaced by $ i \Gamma_{0,\pm}$ and one looks for the solution of $\Gamma$ corresponding to each mode function~\cite{Ghosh:2020sng,Kasmaei:2018yrr}. The numerical solution for $\Gamma_+$ is shown in Fig.~\ref{inst_mag} for a fixed propagation direction $(\theta_p,\phi_p)=(\pi/3,\pi/12)$. In the left panel, we consider the spheroidal momentum anisotropy with $\xi=(10,0)$ whereas in the right panel, we take $\xi=(10,5)$ characterizing an ellipsoidal momentum space anisotropy. It can be observed that in both cases the amplitude of the growth rate significantly decreases in the presence of the external magnetic field. For the spheroidal and ellipsoidal anisotropy without any magnetic background, there exists a critical value of the momentum beyond which the growth rate becomes negative and the instability ceases to exist.
When the external magnetic field is turned on, we observe a significant decrease in the critical momentum, providing a smaller momentum window for the positive growth rate. The situation may be compared to the instabilities in collisional plasma \cite{Schenke:2006xu,Jamal:2017dqs,Kumar:2017bja} where a critical collisional frequency exists beyond which the growth rate becomes negative for any value of external momentum. In a similar way, one may expect a critical magnetic field intensity beyond which no instabilities occur. Here we recall that in the present study we have considered the field intensity $\sqrt{eB}$ as high as three times the momentum scale $\Lambda_T$ to justify the lowest Landau level approximation. Now, for the anisotropic collisional plasma, a small change in the collisional frequency significantly reduces the growth rate \cite{Schenke:2006xu}. However, in the present study we find that, even if one increases the magnetic field to several times the considered value, the amplitude and the critical momentum corresponding to the growth rate hardly decrease. Thus, as far as heavy ion collisions are concerned, a critical magnetic field intensity is unlikely to be present in the realistic scenario. \iffalse It also largely reduces the momentum range within which the gluon modes are unstable. Following this behavior, one can conclude that the modes become stable at high enough magnetic field strength. Although, such high magnetic field may not be produced in the heavy-ion collisions. One can also conclude that the magnetic field and the anisotropy ($\xi_2$) behave in an opposite manner. The effect of the magnetic field gets reduced by the presence of an anisotropy along the same direction due to introduction of the factor $(1+\xi_2)$ in the momentum scale $\lambda_T=\Lambda_T/\sqrt{1+\xi_2}$ in the nonequilibrium fermion distribution function.
The presence of ellipsoidal anisotropy ({\it{i.e.,}} when the momentum space anisotropy in the transverse plane is also taken into account) enhances the growth rate of the unstable modes when compared to the shperoidal anisotropy as found in Ref.~\cite{Kasmaei:2018yrr}. On the contrary, the magnetic field (along $z$ direction in the transverse plane {\it{i.e.,}} $y-z$ plane) reduces the growth rate of the unstable modes leading to a competition between the magnetic field and the anisotropy $\xi_2$. \fi \section{Summary and Conclusion} In this article, the collective modes of gluon in the presence of momentum space anisotropy along with a constant background magnetic field have been studied using the hard-thermal loop perturbation theory. For this purpose, we have obtained the one-loop gluon self-energy in the real-time Schwinger--Keldysh formalism. The contributions from the gluon and ghost loops remain unaffected by the external magnetic field whereas the entire modification arises from the quark loop contribution which has been evaluated in the lowest Landau level approximation. To extract the Lorentz invariant form factors from the polarization tensor, we implement the basis decomposition obtained in Ref.~\cite{Ghosh:2020sng}, which was originally constructed to describe the ellipsoidal momentum anisotropy. From the pole of the effective gluon propagator, we obtain three stable dispersive modes of gluon. At first we compare the collective modes of spheroidal anisotropy with those of an isotropic thermal background along with an external magnetic field. In both cases, the dispersion is governed by four non-vanishing form factors. Though the mode functions in terms of the form factors are identical in the two cases, the form factors themselves are different. Consequently, significant differences are observed in the angular dependence of the collective modes.
When the external magnetic field is considered along with spheroidal or ellipsoidal momentum anisotropy, the azimuthal symmetry of the system is lost. As a result, the collective modes depend on the polar as well as on the azimuthal angles corresponding to the propagation direction. It is observed that due to the dimensional reduction in the LLL approximation, the parameter $\xi_2$ that characterizes the anisotropy along the magnetic field direction appears in the quark loop only through an overall suppression factor. Thus, the momentum anisotropy along the magnetic field direction essentially counterbalances the magnetic field effects. As the quark loop contribution is suppressed in this case, we observe smaller plasma frequencies for all the collective modes. To investigate the unstable modes, we have studied the angular dependence of the squared mass scales corresponding to each mode function. Depending upon the propagation direction, we have observed negative values in the squared masses corresponding to $\Omega_\pm$ indicating instability in the collective modes. \iffalse We note that the tensor structure used in Ref.~\cite{Karmakar:2018aig} is sufficient if one considers themo-magnetic system with spheroidal anisotropy (the anisotropy should be along the magnetic field direction). However, one needs to have the six tensors used in this article while considering ellipsoidal anisotropy of the medium in the presence of a magnetic field. The real time formalism has been used to compute the gluon self-energy form factors. Three out of the six form factors go to zero at zero magnetic field and isotropic limit and one gets back the thermal form factors. From the pole of the effective gluon propagator, we obtain three stable dispersive modes of gluon. In general, the dispersive modes of gluon in anisotropic medium strongly depend on the propagation angle.
However, comparing the collective modes of the spheroidal momentum anisotropy case with the anisotropy due to the presence of external magnetic field the angular dependence Comparing the spheroidal anisotropy case with the Also, we found that one out of the three modes $\omega_0$ depends largely on the magnetic field strength. \fi Here we note that no unstable gluon mode exists in an isotropic medium even in the presence of a background magnetic field. It is the momentum space anisotropy that gives rise to the instability. However, the external magnetic field has a significant influence on the growth rate of the unstable modes. In particular, the amplitude as well as the critical momentum corresponding to the growth rate of the unstable mode are significantly reduced in the presence of a strong magnetic background. This observation is similar to the instability growth rate in anisotropic collisional plasma \cite{Schenke:2006xu} where a larger collisional frequency suppresses the growth rate and eventually, no unstable mode exists beyond a critical frequency. However, it has been argued that the realistic collision frequencies usually lie within the critical value. Here also we find that for an anisotropic thermal medium with realistic magnetic field intensity (which is expected to be present in heavy ion collisions), unstable collective modes do exist in certain propagation directions. The present study has several interesting future directions. First of all, due to the lowest Landau level approximation, only a 1+1 dimensional quark dynamics is considered here. Consequently, the momentum anisotropy orthogonal to the magnetic field direction does not affect the quark loop contribution at all. However, a nontrivial influence of such momentum space anisotropy is expected in the weak field limit where the energy eigenvalues of the quarks have the usual three-momentum dependence. Thus it is interesting to contrast such a scenario with the strong field case as presented here.
Also, the fermionic collective modes have recently been studied in the presence of a magnetic field \cite{Das:2017vfh} and also in the case of ellipsoidal anisotropy \cite{Kasmaei:2016apv}. Thus, the combined effect of the magnetic field and momentum space anisotropy on the fermionic collective modes deserves further investigation. A similar scenario also exists in the studies of the heavy quark potential where the effects of the external magnetic field and the momentum anisotropy have been considered individually \cite{Dumitru:2007hy,Nopoush:2017zbu,Singh:2017nfa,Hasan:2020iwa,Ghosh:2022sxi} and their mutual influence remains to be explored. We intend to pursue such exploration in future. \begin{acknowledgments} A. M. would like to acknowledge fruitful discussions with Ashutosh Dash and Sunil Jaiswal. B. K. acknowledges HORIZON 2020 European research council (ERC) 2016 Consolidation grant, ERC-2016-COG: 725741: QGP TOMOGRAPHY (under contract with ERC). R. G. is funded by University Grants Commission (UGC). A. M. acknowledges Department of Science and Technology (DST), Government of India, for funding. \end{acknowledgments}
\section{Introduction} A major goal of controller synthesis is to achieve the best possible closed-loop performance. Besides choosing a suitable controller and possibly a state estimator, their correct parametrization can heavily improve closed-loop performance. Analytical design rules are only available for a limited number of controllers and scenarios. Manual tuning of controller parameters is tedious and often suboptimal. Mathematical optimization can lead to a more systematic approach and overall better performance. The problem of finding the optimal configuration of a controller can, in the simplest case, be written as: \begin{equation} \label{eq:optproblem} \boldsymbol{\theta}_c^{\text{opt}} = \argmin_{\boldsymbol{\theta}_c \in \mathcal{D}} C(\boldsymbol{\theta}_c) \end{equation} The value of the loss function to be minimized, $C(\boldsymbol{\theta}_c)$, is obtained for one set of controller parameters through simulation or experiment. This results in three challenges for the optimizer used to find the solution to Eq. \ref{eq:optproblem}: Firstly, we have a black-box optimization problem. Secondly, simulations and experiments might be time-consuming, requiring a sample-efficient optimization algorithm. Thirdly, the objective function might be corrupted by noise $\tilde{C}(\boldsymbol{\theta}_c) = C(\boldsymbol{\theta}_c) + \epsilon$. These challenges rule out a number of common optimization algorithms. Algorithms relying on gradients or relaxations of the problem are not suitable. Meta-heuristics such as Genetic Algorithms or Particle Swarm Optimization are considered to be not sample efficient since they discard some of the previously obtained simulation results. Additionally, they do not incorporate stochastic constraints and objective functions inherently. Bayesian optimization is a common method used for noisy optimization.
Surrogate models of black-box responses are learned by fitting fast-to-evaluate probabilistic regression models using all data obtained through previous sampling of the black-box during optimization. These surrogate models are used to find the next promising sample point. Bayesian optimization is a tool widely used in algorithmic tuning across different domains such as tuning of optimization algorithms [\cite{Hutter.2011}] and fault detection and isolation (FDI) [\cite{Marzat.2011}]. Recent examples from the control and robotics community include learning gaits under uncertainty [\cite{Calandra.2016}], throttle valve control [\cite{NeumannBrosig.2019}], local linear dynamics learning [\cite{Bansal.2017}] and tuning of a linear quadratic regulator for robots [\cite{Marco.2016}]. A related line of research is safe Bayesian optimization, which tries to prevent the algorithm from sampling in unsafe controller parameter regions and therefore allows experimental parameter tuning [\cite{Berkenkamp.2016}]. This method was recently applied by \cite{Khosravi.2019}. In this contribution we apply Bayesian optimization to an industrial CNC machining process with machinery sensitive to improper actuation. We cannot expect this process to be modeled accurately with low computational cost a priori\footnote{Accurate modeling is possible via dexel simulations. Computational costs prohibit their usage in controller synthesis.}. Therefore the focus of this contribution is to find a robust parameterization for the combination of a Kalman Filter (KF) and an MPC with respect to random model plant mismatches in the simulation. The goal is to maximize expected performance while ensuring safe worst case behavior. Parameter tuning for MPC with Bayesian optimization was recently examined by \cite{Piga.2019} and \cite{Lucchini.2020}. However, the authors do not consider safe worst case behavior or optimization of the state estimator.
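The surrogate-based search loop described above can be condensed into a few lines. The toy example below is a generic sketch (not the implementation used in this work): it fits a Gaussian process with a squared-exponential kernel to all samples and picks the next point by the expected-improvement acquisition, here on an artificial one-dimensional loss:

```python
import math
import numpy as np

def rbf(X1, X2, ls=0.2):
    """Squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 ls^2))."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5*(d/ls)**2)

def gp_posterior(Xs, ys, Xq, noise=1e-6):
    """GP posterior mean and standard deviation at query points Xq."""
    K = rbf(Xs, Xs) + noise*np.eye(len(Xs))
    Kq = rbf(Xs, Xq)
    mu = Kq.T @ np.linalg.solve(K, ys)
    v = np.linalg.solve(K, Kq)
    var = np.clip(1.0 - np.sum(Kq*v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] under the GP posterior."""
    z = (y_best - mu)/sd
    Phi = 0.5*(1.0 + np.vectorize(math.erf)(z/math.sqrt(2.0)))
    phi = np.exp(-0.5*z**2)/math.sqrt(2.0*math.pi)
    return (y_best - mu)*Phi + sd*phi

loss = lambda x: (x - 0.3)**2          # stand-in for the simulation-based cost
rng = np.random.default_rng(0)
Xs = rng.uniform(0.0, 1.0, 3)          # initial design
ys = loss(Xs)
Xq = np.linspace(0.0, 1.0, 201)        # candidate grid
for _ in range(10):                    # Bayesian optimization iterations
    mu, sd = gp_posterior(Xs, ys, Xq)
    x_next = Xq[np.argmax(expected_improvement(mu, sd, ys.min()))]
    Xs, ys = np.append(Xs, x_next), np.append(ys, loss(x_next))
print(Xs[np.argmin(ys)])               # best parameter found, close to 0.3
```

All previously obtained samples enter the surrogate fit, which is what makes the approach sample efficient compared to the meta-heuristics mentioned above.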
\cite{Andersson.2016} optimize tuning parameters of soft constraints for stochastic collision avoidance with MPC taking into account probabilistic safety constraints. Ensuring minimal worst case behavior has been previously addressed in the FDI community by \cite{Marzat.2011}. \cite{Ghosh.2018} used Bayesian optimization to search for adversarial examples for controllers. \cite{Krause.2011} considered finding optimal parameters for different application-contexts using Bayesian optimization. Our main contributions in the context of parameter tuning for MPC with Bayesian optimization are to \begin{itemize} \item ensure acceptable worst case behavior by explicitly considering model uncertainty while \item simultaneously optimizing the MPC and KF parametrization and \item expand previous work on outlier detection by adding a classification step to prohibit resampling in an area where outliers often occur. \end{itemize} The presented method can be used to find a safe parametrization to initiate experimental manual tuning, tuning with safe Bayesian optimization [\cite{Berkenkamp.2016}] or other online learning methods. The paper is organized as follows: In Section \ref{sec:PS}, the application system is described in detail and the optimization problem is stated. In Section \ref{sec:BayesOpt}, Bayesian optimization including the design choices made for the problem at hand is introduced. Section \ref{sec:MinMaxBayesOpt} shows how it is used in a two stage framework to solve the problem described in Section \ref{sec:PS}. In Section \ref{sec:Results}, the presented approach is evaluated. \section{Problem Statement} \label{sec:PS} Milling is a fast and flexible machining process, which is highly valued in production due to its productivity. During milling, a rotating tool is moved against a workpiece to cut material along a predefined trajectory. Thus, a desired geometry can be manufactured.
The presented optimization approach is examined for quality control during milling. The quality of the production is defined by the deviation of the manufactured geometry from the desired geometry. This deviation relates to the deflection of the working tool during the process. Hence, a reproducible quality for the milling process requires the control of the cutting force, which leads to tool deflection. The force in turn relates to the feed velocity of the tool. Therefore, a multi-stage approach is applied, where in the outer loop the trajectory for the feed velocity is optimized with respect to the resulting force and in the inner loop the feed velocity of the tool is controlled. The tool dynamics are described as \vspace{-0.2cm}\begin{equation} \ddot{v}\left(t\right) + 2 \, D \, \omega_0 \, \dot{v}\left(t\right) + \omega_0^2 \, v\left(t\right) = K \, \omega_0^2 \, u\left(t-t_d\right), \end{equation} where $v\left(t\right)$ stands for the tool velocity at time $t$ as a function of the control input $u$ after the delay time $t_d$, and the model parameters are $K$ for gain, $D$ for damping and $\omega_0$ for resonance frequency. In order to avoid high cutting forces before they occur, a model predictive control strategy is applied for the quality control during milling. In addition, a Kalman Filter (KF) is used for state estimation during the process. The reader may refer to previous work by \cite{Stemmler.2019} for further details about the structure of the control approach. The performance of the overall control strategy depends in part on the parametrization of the MPC and KF. Namely, for the MPC the prediction horizon $H_\text{p}$ and the control horizon $H_\text{u}$ are relevant parameters.
Additionally, the ratio $\lambda_{\text{MPC}} \in \mathbb{R}$ between the weighting matrices $\mathbf{Q} = \mathbf{I}$ and $\mathbf{R} = \mathbf{I} \, 10^{\lambda_{\text{MPC}}}$ in the MPC cost function, \vspace{-0.2cm}\begin{equation} J = ||\boldsymbol{e}||_{\mathbf{Q}} + ||\boldsymbol{\Delta u}||_{\mathbf{R}} \end{equation} is considered within the optimization. This way, the deviation from the desired trajectory $\boldsymbol{e}$ and the change of the control input $\boldsymbol{\Delta u}$ can be weighted differently during optimization. Regarding the KF, the covariance matrix for measurement noise is predefined as $\mathbf{R}_{KF} = \mathbf{I} \cdot 0.001$, which corresponds to an experimentally determined variance of the sensor. In order to weight between prediction and measurement in the correction step, the covariance matrix for process noise is set as $\mathbf{Q}_{\text{KF}} = \mathbf{I} \, 10^{\lambda_{\text{KF}}}$ via the ratio $\lambda_{\text{KF}} \in \mathbb{R}$. Satisfactory tracking performance for the underlying control of the feed velocity was achieved with the hand-tuned default parametrization ($H_\text{u} = 15$, $H_\text{p} = 15$, $\lambda_{\text{MPC}} = -3$, $\lambda_{\text{KF}} = -1$) on the nominal model. In order to increase the robustness of the parameterization, sources of uncertainty are introduced in the simulation model. Measurement signals are perturbed with zero mean Gaussian noise. Additionally, a model plant mismatch is introduced by modifying the plant model's stiffness $\tilde{\omega}_0 = \omega_0 \cdot \theta_{e,1}$ and damping $\tilde{D} = D \cdot \theta_{e,2}$. Note that while the simulated process model is disturbed, the internal MPC model is kept constant. From now on we refer to certain realizations of $\boldsymbol{\theta}_{e} =\{\theta_{e,1}, \theta_{e,2}\}$ as context. Note that in general other types of environmental conditions can be included in the contextual variables, such as different reference trajectories or operating modes.
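The perturbed second-order tool dynamics can be simulated with a simple fixed-step discretization. The sketch below is illustrative only: the numerical values for $K$, $D$, $\omega_0$ and $t_d$ are assumptions (the identified plant parameters are not given here), and the context multipliers $\boldsymbol{\theta}_e$ enter exactly as above. Independently of the context, the steady-state velocity equals the static gain $K u$, which serves as a sanity check:

```python
import numpy as np

def simulate_tool(u, dt=1e-4, T=1.0, K=1.0, D=0.7, w0=50.0, td=0.01,
                  theta_e=(1.0, 1.0)):
    """Explicit-Euler simulation of
        vdd + 2*D*w0*vd + w0^2*v = K*w0^2*u(t - td)
    with the context multipliers applied as w0 -> w0*theta_e[0],
    D -> D*theta_e[1] (model plant mismatch)."""
    w0t, Dt = w0*theta_e[0], D*theta_e[1]
    n, delay = int(T/dt), int(td/dt)
    v, vd = 0.0, 0.0
    u_buf = np.zeros(delay + 1)            # transport delay on the input
    out = np.empty(n)
    for k in range(n):
        u_buf = np.roll(u_buf, 1)
        u_buf[0] = u                       # constant input here; could be u(k)
        u_del = u_buf[-1]                  # input delayed by td
        vdd = K*w0t**2*u_del - 2.0*Dt*w0t*vd - w0t**2*v
        vd += dt*vdd
        v += dt*vd
        out[k] = v
    return out

v_nom = simulate_tool(u=1.0)                       # nominal context
v_pert = simulate_tool(u=1.0, theta_e=(1.2, 0.8))  # perturbed stiffness/damping
print(v_nom[-1], v_pert[-1])  # both settle near the static gain K = 1
```

The context changes the transient (resonance frequency and damping) but not the steady-state gain, so the tracking error and overshoot, not the final value, are affected by the mismatch.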
The goal now is to find a robust set of parameters $\boldsymbol{\theta}_c = \{H_\text{u} , H_\text{p}, \lambda_{\text{MPC}}, \lambda_{\text{KF}} \}$ with respect to a known probability distribution of $\boldsymbol{\theta}_{e}$. The inputs and outputs of the simulation model used during optimization are shown in Fig. \ref{fig: Overview}. Note that even in a fixed context, the simulation is probabilistic because the realization of the measurement noise is drawn at random. \begin{figure}[ht!] \includegraphics[width=\linewidth]{figures/Overview_v2.pdf} \caption{Overview of the simulation environment} \label{fig: Overview} \end{figure} The optimization problem is stated in Eq. \ref{eq:FullOptProblem}. The objective function (Eq. \ref{eq:2a}) is the expected value, with respect to the assumed probability distribution of $\boldsymbol{\theta}_{e}$, of the integral tracking error, $C_{\text{EITE}}(\boldsymbol{\theta}_{\text{c}})$. For one simulation it is calculated by comparing the actual velocity $v_k$ at timestep $k$ with the reference velocity $v_{\text{ref},k}$. Here it is assumed that $\boldsymbol{\theta}_{e}$ follows a truncated normal distribution with no correlation between the components of $\boldsymbol{\theta}_{\text{e}}$: \begin{equation} \boldsymbol{\theta}_e \in \mathbb{R}^2 \sim \psi(\boldsymbol{\mu}_e,\sigma_e^2\mathbf{I},\boldsymbol{\theta}_e^{\text{min}},\boldsymbol{\theta}_e^{\text{max}}) \label{eq:2d} \end{equation} where $\psi(\boldsymbol{\mu}_e,\sigma_e^2\mathbf{I},\boldsymbol{\theta}_e^{\text{min}},\boldsymbol{\theta}_e^{\text{max}})$ is a truncated normal distribution with $p(\boldsymbol{\theta}_e < \boldsymbol{\theta}_e^{\text{min}}) = p(\boldsymbol{\theta}_e > \boldsymbol{\theta}_e^{\text{max}}) = 0$. Note, however, that the presented approach is in principle applicable to all probability distributions with bounded support.
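Drawing contexts from such a truncated normal distribution can be sketched with simple rejection sampling. The moments and bounds used below are illustrative, not the values used in the paper:

```python
import numpy as np

def sample_context(mu, sigma, lo, hi, size, rng=None):
    """Rejection-sample theta_e ~ N(mu, sigma^2 I) truncated componentwise
    to [lo, hi], i.e. psi(mu, sigma^2 I, theta_min, theta_max)."""
    if rng is None:
        rng = np.random.default_rng(0)
    mu = np.atleast_1d(np.asarray(mu, float))
    lo = np.atleast_1d(np.asarray(lo, float))
    hi = np.atleast_1d(np.asarray(hi, float))
    out = np.empty((size, mu.size))
    n = 0
    while n < size:
        # propose a batch from the untruncated normal and keep in-bounds draws
        cand = rng.normal(mu, sigma, size=(size, mu.size))
        ok = cand[np.all((cand >= lo) & (cand <= hi), axis=1)]
        take = min(size - n, len(ok))
        out[n:n + take] = ok[:take]
        n += take
    return out
```

Rejection sampling is exact for any bounded support, which matches the remark above that the approach only requires the context distribution to have bounded support.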
In order to prevent damage to the milling head, the maximal overshoot in the worst-case context, $\Delta h^*(\boldsymbol{\theta}_c)$, is constrained (Eq. \ref{eq:2f}). This constraint is not explicitly considered in the original MPC formulation\footnote{This constraint can be integrated in the MPC formulation. This might increase calculation time, and its fulfillment cannot be guaranteed if the MPC model is inaccurate, as it is in the presented case.}. In addition, to be able to use cost-efficient hardware, the maximal computation time $T(\boldsymbol{\theta}_e|\boldsymbol{\theta}_c)$ is limited (Eq. \ref{eq:2e}): with a probability of $\Phi(z = 3) = 0.96$ it is not allowed to exceed a critical value of $T^{\text{max}}$. \begin{subequations}\label{eq:FullOptProblem} \begin{align} \min_{\boldsymbol{\theta}_c} \quad & C_{\text{EITE}}(\boldsymbol{\theta}_c) = \mathbb{E} \left[\sum_{k = 0}^{N} (v_{\text{ref},k}-v_k(\boldsymbol{\theta}_e|\boldsymbol{\theta}_c))^2 \right] \label{eq:2a}\\[7pt] s.\,t.\quad \, \, & \boldsymbol{\theta}_c \in \mathbb{Z}^2 \times \mathbb{R}^{2} \label{eq:2b} \\[7pt] & \boldsymbol{\theta}_c^{\text{min}} \leq \boldsymbol{\theta}_c \leq \boldsymbol{\theta}_c^{\text{max}} \label{eq:2c} \\[7pt] & p(T(\boldsymbol{\theta}_e|\boldsymbol{\theta}_c) < T^{\text{max}}) > \Phi(z = 3) = 0.96 \label{eq:2e} \\[7pt] & \max_{\boldsymbol{\theta}_e} (\Delta h(\boldsymbol{\theta}_e|\boldsymbol{\theta}_c)) = \Delta h^*(\boldsymbol{\theta}_c) < \Delta h^{\text{max}} \label{eq:2f} \end{align} \end{subequations} It should be noted that the presented optimization methodology is not limited to the constraints and objective function chosen here. Arbitrary constraints or performance metrics can be used, preferably those which cannot be explicitly considered in the controller itself.
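Since neither the expectation in Eq. \ref{eq:2a} nor the chance constraint in Eq. \ref{eq:2e} is available in closed form, both have to be estimated from repeated simulations. A Monte Carlo sketch, with `simulate` standing in for the closed-loop simulation, could look as follows:

```python
import numpy as np

def estimate_objective_and_constraint(simulate, contexts, t_max):
    """Monte Carlo estimates over sampled contexts theta_e:
    - C_EITE: mean integral tracking error (cf. Eq. 3a),
    - empirical probability that the computation time stays below T_max
      (cf. Eq. 3d).
    `simulate(theta_e)` is assumed to return (ite, comp_time)."""
    ite, t = zip(*(simulate(th) for th in contexts))
    c_eite = float(np.mean(ite))
    p_time_ok = float(np.mean(np.asarray(t) < t_max))
    return c_eite, p_time_ok
```

In the paper these quantities are not estimated by brute-force averaging but modeled by the probabilistic surrogates introduced in the next section; the sketch only makes explicit what the surrogates approximate.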
\section{Noisy constrained Bayesian optimization with outlier detection} \label{sec:BayesOpt} The aim of this Section is to explain the fundamentals of noisy Bayesian optimization with outlier detection as well as to highlight the design choices made for the problem at hand. Two instances of the algorithm described in this section are used in a hierarchical approach, as will be described in Section \ref{sec:MinMaxBayesOpt}. For notational simplicity, we now consider a simplified version of the optimization problem stated in Eq. \ref{eq:FullOptProblem}: \begin{subequations}\label{eq:SimpleOptProblem} \begin{align} \min_{x} \quad & \mathbb{E} \left[y(x)\right] \label{eq:4a}\\[3pt] s.\,t.\quad \, \, & \boldsymbol{x} \in \mathbb{R}^{m} \label{eq:4b} \\[3pt] & p(g(x) < g^{\text{max}}) > \Phi(z) \label{eq:4c} \end{align} \end{subequations} The objective function $y(x)$ as well as the constraint black-box response $g(x)$ are corrupted by noise. Each time the black box is evaluated with parameters $x_i$, corresponding noisy samples $y_i$ and $g_i$ are obtained. \begin{algorithm}[h] 1: Initial sampling of $X_{1}$, $Y_{1}$ and $G_{1}$ \\[3pt] 2: \textbf{for} k = 1; 2; . . . ; \textbf{do} \\[3pt] 3: \quad update probabilistic surrogate models using \\ \hspace*{6.5mm} $\tilde{X}_{k+1}$,$\tilde{Y}_{k+1}$ and $\tilde{G}_{k+1} $\\[3pt] 4: \quad select $x_{k+1}$ by optimizing an acquisition function:\\ \hspace*{6.5mm} $x_{k+1} = \argmax_x(\alpha(x))$\\[3pt] 5: \quad query objective function to obtain $y_{k+1}$ and $g_{k+1}$ \\[3pt] 6: \quad augment data $X_{k+1} = \{ X_{k}, x_{k+1}\}$, \\ \hspace*{6.5mm} $Y_{k+1} = \{ Y_{k},y_{k+1}\}, \, G_{k+1} = \{ G_{k},g_{k+1}\}$ \\[3pt] 7: \quad $\tilde{X}_{k+1},\tilde{Y}_{k+1},\tilde{G}_{k+1} \leftarrow \textrm{OutDetect}(X_{k+1},Y_{k+1},G_{k+1}$) \\[7pt] 8: \textbf{end for} \label{Algo:BayesOpt} \caption{Bayesian optimization with outlier detection} \end{algorithm} Algorithm 1 shows the procedure of Bayesian optimization.
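A runnable toy version of the loop in Algorithm 1 is sketched below. To keep it self-contained, a quadratic least-squares fit stands in for the GPR surrogate, random candidates replace the acquisition optimizer, and a robust (MAD-based) z-score replaces the model-based OutDetect step; none of these simplifications are part of the actual method:

```python
import numpy as np

def out_detect(X, Y, thresh=3.5):
    """Toy OutDetect: drop samples whose objective value deviates from the
    median by more than `thresh` robust (MAD-based) z-scores."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    mad = np.median(np.abs(Y - np.median(Y))) + 1e-12
    keep = np.abs(Y - np.median(Y)) / (1.4826 * mad) < thresh
    return X[keep], Y[keep]

def bayes_opt_loop(f, bounds, n_init=4, n_iter=12, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = list(rng.uniform(lo, hi, n_init))            # Line 1: initial sampling
    Y = [f(x) for x in X]
    for _ in range(n_iter):                          # Line 2
        Xc, Yc = out_detect(X, Y)                    # Line 7: cleaned data
        coef = np.polyfit(Xc, Yc, 2)                 # Line 3: toy surrogate
        cand = rng.uniform(lo, hi, 256)              # Line 4: greedy pick of
        x_next = cand[np.argmin(np.polyval(coef, cand))]  # the best candidate
        X.append(x_next)                             # Lines 5-6: evaluate and
        Y.append(f(x_next))                          # augment the data
    return X[int(np.argmin(Y))]
```

The structure of the loop (sample, refit, optimize acquisition, evaluate, clean) is the point here; the real implementation replaces the stand-ins with GPR, the RI acquisition function and the robust outlier model described below.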
A detailed introduction to Bayesian optimization is provided, for example, by \cite{Shahriari.2016}. The main idea is to use all sample points obtained so far ($X_{k} = [x_1,\dots ,x_k],Y_{k} = [y_1,\dots ,y_k],G_{k} = [g_1,\dots ,g_k]$) to construct fast-to-evaluate surrogate models of $y(x)$ and $g(x)$ at each iteration (Line 3) and to use these models to search for the next promising sample point. This way the surrogate models are iteratively refined in promising regions. In this work, Gaussian process regression (GPR) models are used as surrogate models. A separate GPR model is built for each of the responses. For a detailed introduction to GPR the reader is referred to \cite{Rasmussen.2006}. GPR is a nonparametric regression and interpolation model which provides a probabilistic prediction of each of the black-box responses for parametrizations which have not been evaluated yet. The model is here defined by a constant a-priori mean, a squared exponential kernel with automatic relevance determination and a homoscedastic Gaussian observation model (to account for the noisy samples). Hyperparameters are optimized at each iteration by maximizing the likelihood. Hyperpriors are placed on the hyperparameters to include expert knowledge in the optimization and to avoid potential overfitting. For example, the lower bounds on the kernel length scales in the directions of the prediction and control horizons are set to $l_{H_\text{u}}^\text{min} = l_{H_\text{p}}^\text{min} = 0.22$, which, roughly speaking, corresponds to a covariance of the objective function values of at least $0.1$ if the horizons are changed by one. Based on these models, an acquisition function $\alpha(x)$ is used to balance between exploitation (sampling close to the current optimum) and exploration (sampling where the model is uncertain) when searching for the next sample point (Line 4). Here the so-called reinterpolation procedure (RI) proposed by \cite{Forrester.2006} is used.
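The GPR prediction underlying these surrogate models can be sketched in a few lines of numpy, following the standard equations in Rasmussen \& Williams. For clarity, the hyperparameters are fixed here instead of being optimized by maximum likelihood:

```python
import numpy as np

def se_kernel(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel; passing a vector of per-dimension
    length scales `ell` would give automatic relevance determination."""
    d2 = (((A[:, None, :] - B[None, :, :]) / ell) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2)

def gp_predict(X_tr, y_tr, X_te, ell=1.0, sf=1.0, sn=0.1):
    """GP posterior mean and variance with a homoscedastic Gaussian
    observation model (noise standard deviation `sn`)."""
    K = se_kernel(X_tr, X_tr, ell, sf) + sn ** 2 * np.eye(len(X_tr))
    Ks = se_kernel(X_te, X_tr, ell, sf)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = sf ** 2 - (v ** 2).sum(axis=0) + sn ** 2
    return mean, var
```

Far away from the data the prediction reverts to the prior (zero mean, variance $\sigma_f^2 + \sigma_n^2$), which is exactly the uncertainty that the acquisition function exploits for exploration.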
Using RI prohibits multiple evaluations of the objective function for identical parameters by fitting an intermediate interpolating surrogate model. This is beneficial in the presented case, since repeating the second stage of the optimization (cf. Section 4.2) twice for identical parameters would be a waste of computational resources. In addition to maximizing the noisy expected improvement of the objective function $RI(x_q)$, we want the query point to be feasible (probability $p_{\text{feas}}(x_q)$), not to be an outlier with an off-the-charts objective function value (probability $p_{\text{out}}(x_q)$), and we want the simulation not to fail (probability $p_{\text{fail}}(x_q)$). Therefore, the following acquisition function is used: \begin{equation} \alpha(x_q) = RI(x_q)p_{\text{feas}}(x_q)(1-p_{\text{out}}(x_q))(1-p_{\text{fail}}(x_q)) \label{eq:InfillCriteria} \end{equation} The probability of feasibility is calculated using the probabilistic prediction of $g(x)$. At each iteration, the next sample point is chosen by maximizing $\alpha(x_q)$ using particle swarm optimization. After evaluation of the new sample point (Line 5), outlier detection is performed (Line 7). In Bayesian optimization with GPR, special care needs to be taken with outliers. For the problem at hand, some parametrizations may lead to very large objective function values. This is problematic because outliers can severely corrupt the probabilistic surrogate models, leading to unrealistically small length scales. In order to detect outliers we follow the approach presented by \cite{MartinezCantin.2017}, where outliers are detected by first fitting a robust GPR model with a Student-t observation model and then flagging the observations with a low likelihood. In the original paper, outliers are discarded from the set of observations.
We take this approach one step further by building a k-nearest-neighbor (knn) classifier\footnote{A squared exponential kernel as distance metric and a squared inverse distance weighting are used.} to estimate the probability $p_{\text{out}}(x_q)$ of observing an outlier at a given location $x_q$. The training data set of the knn classifier consists of all sample points within the controller design parameter space, labeled according to whether they were identified as outliers or not. For some parametrizations, the control algorithm to be configured may cause the simulation environment to crash due to numerical issues. To prevent repeated sampling in these areas, an additional knn classifier is built to estimate the probability $p_{\text{fail}}(x_q)$ that a parametrization $x_q$ leads to a crash. \section{Two-stage Bayesian optimization for MPC and KF tuning} \label{sec:MinMaxBayesOpt} The algorithm presented in Section \ref{sec:BayesOpt} is applied within a two-stage optimization approach to solve the problem stated in Eq. \ref{eq:FullOptProblem}. An overview is given in Fig. \ref{fig: TwoStageOpt}. Stage one has the task of finding the optimal controller configuration, characterized by a low expected integral tracking error, acceptable worst-case overshoot and real-time capable execution time. For each controller configuration $\boldsymbol{\theta}_{c}'$ queried by Stage one, the overshoot belonging to the worst possible combination of environmental variables, $\Delta h^*(\boldsymbol{\theta}'_c) = \max_{\boldsymbol{\theta}_e} (\Delta h(\boldsymbol{\theta}_e|\boldsymbol{\theta}'_c))$, is searched for in Stage two. This is achieved by solving a second-stage optimization problem and ensures that constraint \ref{eq:2f} is satisfied. This procedure is similar to the one presented by \cite{Marzat.2011} in the context of fault detection and isolation (FDI). \begin{figure}[htb!]
\includegraphics[width=\linewidth]{figures/TwoStageOpt_v2.pdf} \caption{Overview of the two stage optimization approach} \label{fig: TwoStageOpt} \end{figure} \subsection{Stage one: Optimize controller parameters} The algorithm presented in Section \ref{sec:BayesOpt} is used within Stage one. Although $p(\boldsymbol{\theta}_e)$ is assumed to be a truncated normal distribution, we cannot make any assumptions about the true structure of the probability density function of a response (i.e., integral tracking error, computation time, and maximum overshoot) for a given controller parametrization $\boldsymbol{\theta}_c$, because the simulation model is non-linear. By using GPR with a Gaussian likelihood (observation model), we approximate the true unknown probability distribution of the responses by a deterministic function corrupted by Gaussian noise. In this way, three GPR models are built in total, one for each of the relevant black-box responses: $C_{\text{EITE}}(\boldsymbol{\theta}_c)$, $T(\boldsymbol{\theta}_c)$ and $\Delta h^*(\boldsymbol{\theta}_c)$. Latin hypercube sampling is used as the initial sampling. To evaluate the objective function of Stage one (Line 5 of Algorithm 1), Stage two of the optimization procedure is called. \subsection{Stage two: Find worst possible context}\label{sec:StageTwo} The goal of Stage two is to find the maximum overshoot for a given controller parametrization $\boldsymbol{\theta}_{c}'$ queried by Stage one. Note that using RI in the first stage prohibits multiple evaluations of the second stage with identical controller parameters. Noisy Bayesian optimization is used in this stage as well, but in this case only box constraints need to be considered and outlier detection is not used. Therefore, only one GPR model, approximating $\Delta h(\boldsymbol{\theta}_e|\boldsymbol{\theta}'_c)$, is built, and standard RI is used as the acquisition function.
Initial sampling is performed by drawing 5 different contexts from the truncated normal distribution of the environmental variables. The corresponding 5 integral tracking errors and computation times of the initial sampling are returned to Stage one. The overshoot is maximized in the successive steps. Stage two is terminated either if an overshoot exceeding the allowed maximum was observed or if 10 evaluations were performed. Note that the GPR models built in the two stages are completely independent of one another. This follows the approach by \cite{Marzat.2011}. An alternative approach would be to use a joint model with the controller parameters as well as the environmental parameters as explanatory variables for the overshoot, as explained for example by \cite{Krause.2011}. \section{Results} \label{sec:Results} The algorithm presented in Sections \ref{sec:BayesOpt} and \ref{sec:MinMaxBayesOpt} is applied to the optimization problem stated in Section \ref{sec:PS} and was implemented in MATLAB 2017b using the GPML toolbox (\cite{Rasmussen.2010}) to create the GPR surrogate models. Additionally, a simplified benchmark test case (Section \ref{sec:benchmark}) is used to compare the developed approach with three benchmark approaches (Section \ref{sec:benchmarkAlgos}). Parameters are optimized with respect to a single reference velocity trajectory. This is reasonable because in a real-world milling application we can expect to know the desired trajectory in advance. However, the presented approach is not limited to using only one single reference trajectory during optimization. \subsection{Benchmark algorithms} \label{sec:benchmarkAlgos} The presented algorithm (Algo. IV) is compared with three benchmark algorithms (Algo.
I--III): \begin{itemize} \item[I] Bayesian optimization on nominal model \item[II] Random sampling in Stage one \item[III] Bayesian optimization without outlier detection \item[IV] Bayesian optimization with outlier detection \end{itemize} The goal is to evaluate the performance impact of the individual proposed optimization steps. Algo. I is used for parameter optimization on the nominal model, whereas Algos. II--IV are run on randomly varying realizations of the perturbed model, using the second stage presented in Section \ref{sec:StageTwo} to determine feasibility w.r.t. constraint \ref{eq:2f}. Algo. II uses random sampling in Stage one instead of finding the next sample point by maximizing the acquisition function presented in Eq. \ref{eq:InfillCriteria}. Algo. III is identical to the presented approach except that the outlier detection and classification scheme is not used. \subsection{Benchmark test case} \label{sec:benchmark} In order to compare the algorithms quantitatively, each algorithm is run 10 times for 3 hours (only one hour for Algo. I) on a simplified benchmark test case\footnote{All experiments were performed on a desktop PC with an AMD Ryzen 7 1700 eight-core processor @ 3\,GHz and 16\,GB RAM}. The benchmark test case considers only two controller parameters, $H_\text{u}$ and $\lambda_{\text{MPC}}$. The prediction horizon $H_\text{p}$ is set equal to the control horizon $H_\text{u}$, and $\lambda_{\text{KF}}$ is kept at its hand-tuned default value. In each of the 10 runs, initial sampling was kept identical for each algorithm. The validation performance of a given parametrization is estimated by running the simulation with 25 different random draws from the distributions of model uncertainties. Table \ref{tab:Results1D} shows the average quality of the final solution of each of the algorithms. Fig.
\ref{fig: Performance} depicts the average validation objective function of the best feasible solution after a given number of simulations during optimization. \begin{table}[h] \caption{Quality of the final solution averaged over 10 runs (benchmark test case)} \centering \begin{tabular}{ l r r r r } \toprule Algorithm: & I & II & III & IV \\ \midrule Feasibility & $50\%$ & $100\%$ & $90\%$ & $100\%$ \\ Obj. fun. validation & $0.147$ & $0.182$ & $0.148$ & $0.155$ \\ $\Delta$ Obj. fun. val. - train & $+0.042$ & $+0.001$ & $+0.003$ & $+0.007$ \\ \bottomrule \end{tabular} \label{tab:Results1D} \end{table} \begin{figure}[ht] \includegraphics[width=\linewidth]{figures/PerformanceFig.pdf} \caption{Validation performance after a given number of iterations considering only feasible solutions averaged over 10 runs (benchmark test case)} \label{fig: Performance} \end{figure} It becomes apparent that optimizing the MPC on the nominal model only (Algo. I) is not sufficient. Although the algorithm converges after only $\sim25$ simulations, half of the time the final solution is infeasible due to excessive overshoot (constraint \ref{eq:2f}) when model uncertainty is incorporated during validation. Additionally, the expected integral tracking error is underestimated substantially. In contrast, Algos. II--IV are able to consistently find solutions which are feasible during validation and only slightly underestimate the validation integral tracking error. Random sampling (Algo. II) is not able to find a competitive solution. Although the outlier detection and classification scheme used in Algo. IV improves the initial convergence (until around $200$ simulations), the average performance of the final solution is slightly better for Algo. III. Preliminary experiments have shown that the surrogate model of the objective function generated with outlier detection is more plausible than the one generated without outlier detection\footnote{Plots are not shown here due to space limitations.}.
Without outlier detection, negative integral tracking errors are predicted in some regions, the surrogate models are far less smooth, and in general the uncertainty is larger. Furthermore, it was observed that with outlier detection, the hyperparameter optimization favored larger length scales, which is consistent with the smoother predictions and smaller uncertainty. We hypothesize that the larger length scales hinder exploration in the later stages of optimization due to lower model uncertainty. Interestingly, therefore, the more realistic fit obtained by GPR with outlier detection does not automatically mean that optimization performance is increased. The experiments conducted by \cite{MartinezCantin.2017} have shown that outlier detection consistently improves the optimization procedure, which somewhat contradicts our findings. One reason may be that, in comparison to \cite{MartinezCantin.2017}, we have \textit{deterministic} outliers, which consistently occur in one area of the design parameter space, instead of \textit{random} outliers. \begin{figure} [htb!] \vspace{0.1cm} \includegraphics[width=0.9\linewidth]{figures/Final_eps_v2.eps} \centering \caption{Final surrogate model of the benchmark test case with estimated feasible area and objective function.} \label{fig: FinalContour} \end{figure} Fig. \ref{fig: FinalContour} depicts the final estimated behavior of the closed loop within the controller design parameter space. In general, the results are plausible. Calculation time is solely a function of $H_\text{u}$. Due to the presented outlier classification scheme, we can observe that deterministic outliers (very large integral tracking errors) appear in areas with small $H_\text{u}$ and large $\lambda_{\text{MPC}}$. In contrast, the worst-case overshoot constraint is violated for small values of $\lambda_{\text{MPC}}$ (i.e., a low penalty on the change of inputs). Interestingly, the optimum has a smaller $H_\text{u}$ than the calculation time constraint would allow.
At first glance this may seem counterintuitive, but given enough model uncertainty, a longer prediction horizon does increase the chance of misprediction and therefore of critical overshoot. The default parametrization is located in the infeasible region. It was therefore considerably improved by the optimization approach. \subsection{Full test case} In the full test case, all four parameters are considered for optimization, $\boldsymbol{\theta}_{c} = \{H_\text{u},H_\text{p},\lambda_{\text{MPC}},\lambda_{\text{KF}}\}$. Algos. III \& IV were run 4 times for $6$ hours each. The best and average validation performance as well as the average feasibility of the final solution are shown in Table \ref{tab:Results4Params}. The results indicate that the proposed algorithms can find a competitive solution even in a higher-dimensional controller parameter space, yet the performance of the algorithms is less consistent than in the benchmark test case. Additionally, we can observe that the default parametrization of $\lambda_{KF} = -1$ and $H_\text{p} = H_\text{u}$ could not be improved upon. \begin{table}[h] \caption{Validation obj. fun. of the final solution averaged over 4 runs (full test case)} \centering \begin{tabular}{ r r r r} \toprule & Best & Average & Feasibility \\ \midrule Algo. III & $0.143$ & $0.151$ & $75\%$ \\ Algo. IV & $0.141$ & $0.174$ & $75\%$ \\ \bottomrule \end{tabular} \label{tab:Results4Params} \end{table} During the course of optimization, an alternative parametrization was temporarily considered to be the best feasible solution. This parametrization is shown with some of its neighboring parametrizations in Table \ref{tab:ResultslocalMinimum}. Similar parametrizations were found in $50\%$ of the runs of Algos. III and IV. \begin{table}[h] \caption{Parametrization with the best validation performance and neighboring parameterizations.
} \centering \begin{tabular}{ r r r r | r c } \toprule $H_\text{u}$ & $H_\text{p}$ & $\lambda_{\text{MPC}}$ & $\lambda_{\text{KF}}$ & Obj. Fun. & Feasibility \\ \midrule $\boldsymbol{1}$ & $\boldsymbol{4}$ & $\boldsymbol{-3.9}$ & $\boldsymbol{2}$ & $\boldsymbol{0.12}$ & \ding{51} \\ $1$ & $3$ & $-3.9$ & $2$ & $0.19$ & \textbf{\ding{55}} \\ $1$ & $5$ & $-3.9$ & $2$ & $0.22$ & \ding{51} \\ $1$ & $4$ & $-3.9$ & $-1$ & $0.23$ & \ding{51} \\ $2$ & $4$ & $-3.9$ & $2$ & $0.14$ & \ding{55} \\ \bottomrule \end{tabular} \label{tab:ResultslocalMinimum} \end{table} Although this parametrization shows superior validation performance in comparison to the best solution of the simplified test case, its neighboring solutions are either infeasible or their expected integral tracking error is unacceptably high. Therefore, from a control engineering perspective, this solution cannot be considered robust and would be rejected in practice. Fortunately, during the later stages of the optimization, these parametrizations are discarded by the optimization algorithms because of their worse-performing or infeasible neighborhood. This can be explained by the fact that in Bayesian optimization with GPR, black-box responses are assumed to be smooth on the characteristic kernel length scales. By setting a minimal kernel length scale based on domain knowledge, as done in the present work, the optimizer implicitly considers the neighborhood of the optimal solution. This can be seen as an additional advantage of Bayesian optimization over other optimizers such as genetic algorithms, where only the best solution is considered regardless of its neighborhood. Additionally, this alternative parametrization highlights the non-convexity of the optimization problem and the relevance of all considered parameters. \section{Summary \& Conclusion} In this contribution, Bayesian optimization was used to simultaneously optimize hyperparameters of an MPC and a KF for an industrial CNC machining process.
In order to achieve a robust parametrization, the simulation model was perturbed with model-plant mismatches randomly drawn from a known distribution with bounded support, as well as with randomly drawn measurement noise. The goal was to minimize the expected integral tracking error, ensure worst-case safety by constraining the maximum overshoot, and enforce real-time capability by limiting the computation time. The optimization problem is solved using a two-stage Bayesian optimization procedure relying on GPR with a Gaussian observation model, the RI acquisition function, as well as outlier detection and classification. On a simplified benchmark test case, it was shown that optimization on the nominal model does not produce satisfactory parameter combinations. Both the default parametrization and random sampling were outperformed considerably. It was also observed that outlier detection did not consistently improve the convergence, although the surrogate models are more comprehensible. The empirical performance model obtained through Bayesian optimization made it possible to analyze and visualize the results. Optimization on the full test case revealed the relevance of all parameters and the non-convexity of the optimization problem. Furthermore, it was shown how the model assumptions encoded in the hyperprior help the optimization to avoid narrow and physically incomprehensible local minima. The presented approach can help control engineers to find an \textit{initial} robust and safe parametrization for controllers and state estimators, given a closed-loop simulation of the system and a probabilistic assumption over the model-plant mismatch. It can empirically enforce constraints or performance metrics which are not or cannot be explicitly considered within the controller. This parametrization can then be further improved online, for example by safe Bayesian optimization.
\section{Conclusion}\label{sec:Conclusion} In this paper, we propose a distance-aware Transformer, which can leverage the real distance between contexts to adjust the self-attention weights for better context modeling. We propose to first use different learnable parameters in different attention heads to weight the real relative distance between tokens. Then, we propose a learnable sigmoid function to map the weighted distances into re-scaled coefficients with proper ranges. They are further multiplied with the raw attention weights that are activated by the ReLU function to keep non-negativity and produce sharper attention. Extensive experiments on five benchmark datasets show that our approach can effectively improve the performance of Transformer by introducing real distance information to facilitate context modeling. \section{Experiments}\label{sec:Experiments} \subsection{Datasets and Experimental Settings} Our experiments are conducted on five benchmark datasets for different tasks. Four of them are benchmark NLP datasets. The first one is AG's News\footnote{https://www.di.unipi.it/en/} (denoted as \textit{AG}), which is a news topic classification dataset. The second one is Amazon Electronics~\cite{he2016ups} (denoted as \textit{Amazon}), which is a dataset for review rating prediction. The third one is Stanford Sentiment Treebank~\cite{socher2013recursive} (denoted as \textit{SST}). We use the binary classification version of this dataset. The fourth one is Stanford Natural Language Inference~\cite{bowman2015large} (\textit{SNLI}) dataset, which is a widely used natural language inference dataset. The detailed statistics of these datasets are summarized in Table~\ref{table.dataset}. In addition, we also conduct experiments on a benchmark news recommendation dataset named MIND~\cite{wu2020mind}, aiming to validate the effectiveness of our approach in both text and user modeling. 
It contains the news impression logs of 1 million users from Microsoft News\footnote{https://www.msn.com/en-us} from October 12 to November 22, 2019. The training set contains the logs in the first five weeks except those on the last day, which are used for validation. The remaining logs are used for testing. The key statistics of this dataset are summarized in Table~\ref{table.dataset2}. \begin{table}[!h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lrrrrr} \hline \textbf{Dataset} & \textbf{\# Train} & \textbf{\# Dev.} & \textbf{\# Test} & \textbf{\# Classes} &\textbf{Avg. len.}\\ \hline AG & 108k & 12k & 7.6k & 4 & 44 \\ Amazon & 40k & 5k & 5k & 5 & 133\\ SST & 8k & 1k & 2k & 2 & 19\\ SNLI & 55k & 10k & 10k & 2 & 22\\ \hline \end{tabular} } \caption{Statistics of \textit{AG}, \textit{Amazon}, \textit{SST} and \textit{SNLI} datasets.}\label{table.dataset} \end{table} \begin{table}[!h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lrlr} \hline \textbf{\# Users} & 1,000,000 & \textbf{Avg. title len.} & 11.52 \\ \textbf{\# News} & 161,013 & \textbf{\# Click samples} & 5,597,979 \\ \textbf{\# Impressions} & 500,000 & \textbf{\# Non-click samples} & 136,162,621 \\ \hline \end{tabular} } \caption{Statistics of the \textit{MIND} dataset.}\label{table.dataset2} \end{table} In our experiments, we use the 300-dimensional GloVe~\cite{pennington2014glove} embeddings for word embedding initialization.\footnote{We do not use contextualized embeddings generated by language models like BERT because we mainly focus on validating the effectiveness of our Transformer architecture.} The number of attention heads is 16, and the output dimension of each attention head is 16. We use one Transformer layer in all experiments. On the \textit{AG}, \textit{SST} and \textit{SNLI} datasets, we directly apply Transformer-based methods to the sentences.
On the \textit{Amazon} dataset, since reviews are usually long documents, we use Transformers in a hierarchical way by first learning sentence representations from words via a word-level Transformer and then learning document representations from sentences via a sentence-level Transformer. On the \textit{MIND} dataset, following~\cite{wu2019nrms,wu2020sentirec}, we also use a hierarchical model architecture that first learns representations of historical clicked news and candidate news from their titles with a word-level Transformer, then learns user representations from the representations of clicked news with a news-level Transformer, and finally matches user and candidate news representations to compute click scores.\footnote{Both the word-level and news-level Transformers contain one self-attention layer.} We use the same model training strategy with negative sampling techniques as NRMS~\cite{wu2019nrms}. On all datasets we use Adam~\cite{kingma2014adam} as the optimization algorithm, and the learning rate is 1e-3. On the \textit{AG}, \textit{Amazon}, \textit{SST} and \textit{SNLI} datasets, accuracy and macro-F score are used as the performance metrics. On the \textit{MIND} dataset, following~\cite{wu2019nrms}, we use the average AUC, MRR, nDCG@5 and nDCG@10 scores of all sessions as the metrics. Each experiment is repeated 5 times independently, and the average results with standard deviations are reported.
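For reference, the ranking metrics used on \textit{MIND} can be sketched as follows for a single impression, where labels are 1 for clicked and 0 for non-clicked candidate news:

```python
import numpy as np

def mrr(labels, scores):
    """Reciprocal rank of the first clicked item when candidates are
    ranked by predicted score (descending)."""
    order = np.argsort(-np.asarray(scores))
    ranked = np.asarray(labels)[order]
    hits = np.flatnonzero(ranked)
    return 1.0 / (hits[0] + 1) if len(hits) else 0.0

def ndcg_at_k(labels, scores, k):
    """nDCG@k with binary gains: DCG of the predicted ranking divided by
    the DCG of the ideal (label-sorted) ranking."""
    order = np.argsort(-np.asarray(scores))
    gains = np.asarray(labels, float)[order][:k]
    disc = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = (gains * disc).sum()
    ideal = np.sort(np.asarray(labels, float))[::-1][:k]
    idcg = (ideal * disc[: len(ideal)]).sum()
    return dcg / idcg if idcg > 0 else 0.0
```

The dataset-level numbers reported in the tables below are these per-session values averaged over all test sessions.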
\begin{table*}[!t] \centering \begin{tabular}{lcccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Methods}}} & \multicolumn{2}{c}{\textbf{AG}} & \multicolumn{2}{c}{\textbf{Amazon}} \\ \cline{2-5} \multicolumn{1}{c}{} & Accuracy & Macro-F & Accuracy & Macro-F \\ \hline Transformer & 93.01$\pm$0.13 & 93.00$\pm$0.13 & 65.15$\pm$0.40 & 42.14$\pm$0.41 \\ Transformer-RPR & 93.14$\pm$0.12 & 93.13$\pm$0.13 & 65.29$\pm$0.38 & 42.40$\pm$0.40 \\ Transformer-XL & \underline{93.35}$\pm$0.10 & \underline{93.34}$\pm$0.11 & \underline{65.50}$\pm$0.40 & \underline{42.88}$\pm$0.43 \\ Adapted Transformer & 93.28$\pm$0.13 & 93.27$\pm$0.14 & 65.47$\pm$0.39 & 42.69$\pm$0.42 \\ \hline *DA-Transformer & \textbf{93.72}$\pm$0.11 & \textbf{93.70}$\pm$0.12 & \textbf{66.38}$\pm$0.39 & \textbf{44.29}$\pm$0.40 \\ \hline \end{tabular} \caption{Results on \textit{AG} and \textit{Amazon}. *Improvement over the underlined second best results is significant at $p<0.05$.} \label{table.result} \end{table*} \begin{table*}[!t] \centering \begin{tabular}{lcccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Methods}}} & \multicolumn{2}{c}{\textbf{SST}} & \multicolumn{2}{c}{\textbf{SNLI}} \\ \cline{2-5} \multicolumn{1}{c}{} & Accuracy & Macro-F & Accuracy & Macro-F \\ \hline Transformer & 89.67$\pm$0.22 & 89.59$\pm$0.24 & 81.45$\pm$0.30 & 81.42$\pm$0.31 \\ Transformer-RPR & 89.94$\pm$0.19 & 89.90$\pm$0.20 & 82.20$\pm$0.31 & 82.18$\pm$0.31 \\ Transformer-XL & 90.06$\pm$0.20 & 90.02$\pm$0.21 & \underline{83.19}$\pm$0.29 & \underline{83.15}$\pm$0.30 \\ Adapted Transformer & \underline{90.15}$\pm$0.19 & \underline{90.10}$\pm$0.1 & 82.35$\pm$0.28 & 82.31$\pm$0.30 \\ \hline *DA-Transformer & \textbf{90.49}$\pm$0.17 & \textbf{90.43}$\pm$0.19 & \textbf{84.18}$\pm$0.27 & \textbf{84.16}$\pm$0.29 \\ \hline \end{tabular} \caption{Results on \textit{SST} and \textit{SNLI}. 
*Improvement over the underlined second best results is significant at $p<0.05$.} \label{table.result1} \end{table*} \begin{table*}[!t] \centering \begin{tabular}{lcccc} \hline \multicolumn{1}{c}{\textbf{Methods}} & AUC & MRR & \small{nDCG@5} & \small{nDCG@10} \\ \hline Transformer & 67.76$\pm$0.18 & 33.05$\pm$0.16 & 35.94$\pm$0.19 & 41.63$\pm$0.20 \\ Transformer-RPR & 67.81$\pm$0.16 & 33.10$\pm$0.17 & 35.98$\pm$0.20 & 41.65$\pm$0.21 \\ Transformer-XL & \underline{67.92}$\pm$0.16 & \underline{33.15}$\pm$0.16 & \underline{36.04}$\pm$0.20 & \underline{41.70}$\pm$0.19 \\ Adapted Transformer & 67.70$\pm$0.22 & 33.01$\pm$0.20 & 35.89$\pm$0.17 & 41.58$\pm$0.23 \\ \hline *DA-Transformer & \textbf{68.32}$\pm$0.15 & \textbf{33.36}$\pm$0.16 & \textbf{36.34}$\pm$0.14 & \textbf{42.07}$\pm$0.17 \\ \hline \end{tabular} \caption{Results on the \textit{MIND} dataset. *Improvement over the underlined second best results is significant at $p<0.05$.} \label{table.result2} \end{table*} \subsection{Performance Evaluation} We compare our proposed \textit{DA-Transformer} method with several baseline methods, including: (1) \textit{Transformer}~\cite{vaswani2017attention}, the vanilla Transformer architecture, where sinusoidal positional embeddings are used. (2) \textit{Transformer-RPR}~\cite{shaw2018self}, a variant of Transformer with relative position representations. (3) \textit{Transformer-XL}~\cite{dai2019transformer}, a variant of Transformer that consists of a segment-level recurrence mechanism and a sinusoidal relative position encoding scheme. (4) \textit{Adapted Transformer}~\cite{yan2019tener}, a variant of Transformer that uses direction- and distance-aware position encoding. The results of our approach and these methods on the five datasets are shown in Tables~\ref{table.result}, \ref{table.result1} and~\ref{table.result2}. From the results, we have several observations.
First, compared with the vanilla Transformer, the methods that consider distance information consistently achieve better performance. This shows that distance information is very important in context modeling. Second, among the methods with distance information, the performance of \textit{Transformer-RPR} is lower than the others. This may be because \textit{Transformer-RPR} does not keep the precise long-distance information. Third, by comparing \textit{Transformer-XL} and \textit{Adapted Transformer}, we find that the performance of \textit{Adapted Transformer} is better on the \textit{SST} dataset, while \textit{Transformer-XL} is better on the other datasets. This is probably because \textit{Adapted Transformer} is more suitable for modeling local contexts and the sentences in the \textit{SST} dataset are usually short, while \textit{Transformer-XL} may be more appropriate for modeling long sequences. Fourth, our method consistently achieves better performance on the five datasets, and its improvement over the second best method is statistically significant (t-test, $p<0.05$). This is because our method can explicitly encode real distance information rather than using positional encoding, making the modeling of distance more accurate. \begin{figure}[!t] \centering \includegraphics[width=0.4\textwidth]{fig/amazonreg.pdf} \caption{Performance comparison of rating regression on \textit{Amazon}. Lower scores indicate better performance.}\label{fig.reg} \end{figure} We further compare the performance of different methods in a rating regression task on the \textit{Amazon} dataset. The results are shown in Fig.~\ref{fig.reg}. From Fig.~\ref{fig.reg} we observe patterns similar to the results in the classification tasks, which validates the generality of our DA-Transformer across different genres of tasks.
\begin{figure*}[!t] \centering \subfigure[\textit{AG}.]{ \includegraphics[height=1.16in]{fig/positionfunc11.pdf} } \subfigure[\textit{Amazon}.]{ \includegraphics[height=1.16in]{fig/positionfunc12.pdf} } \subfigure[\textit{MIND}.]{ \includegraphics[height=1.16in]{fig/positionfunc13.pdf} } \caption{Influence of using different mapping functions.}\label{fig.positionfunc1} \end{figure*} \begin{figure*}[!t] \centering \subfigure[\textit{AG}.]{ \includegraphics[height=1.16in]{fig/positionfunc21.pdf} } \subfigure[\textit{Amazon}.]{ \includegraphics[height=1.16in]{fig/positionfunc22.pdf} } \subfigure[\textit{MIND}.]{ \includegraphics[height=1.16in]{fig/positionfunc32.pdf} } \caption{Influence of using different attention adjusting methods.}\label{fig.positionfunc2} \end{figure*} \begin{figure}[!t] \centering \subfigure[$w_i$.]{ \includegraphics[width=0.22\textwidth]{fig/aghead.pdf} } \subfigure[$v_i$.]{ \includegraphics[width=0.22\textwidth]{fig/agheadv.pdf} } \caption{The weights learned by different attention heads on the \textit{AG} dataset.}\label{fig.aghead} \end{figure} \begin{figure}[!t] \centering \subfigure[Word-level $w_i$.]{ \includegraphics[width=0.22\textwidth]{fig/newshead1.pdf} \label{fig.newshead1} } \subfigure[News-level $w_i$.]{ \includegraphics[width=0.22\textwidth]{fig/newshead2.pdf} \label{fig.newshead2} } \subfigure[Word-level $v_i$.]{ \includegraphics[width=0.22\textwidth]{fig/newsheadv.pdf} \label{fig.newshead3} } \subfigure[News-level $v_i$.]{ \includegraphics[width=0.22\textwidth]{fig/newsheadv2.pdf} \label{fig.newshead4} } \caption{The distance weights learned by different attention heads on the \textit{MIND} dataset.}\label{fig.newshead} \end{figure} \begin{figure*}[!t] \centering \subfigure[Vanilla Transformer.]{ \includegraphics[width=0.98\textwidth]{fig/vanillaatt.pdf} \label{fig.att1} } \subfigure[DA-Transformer. 
The first two heatmaps are produced by heads with $w_i<0$ and others are produced by heads with $w_i>0$.]{ \includegraphics[width=0.98\textwidth]{fig/leadatt.pdf} \label{fig.att2} } \caption{The self-attention weights learned by the vanilla Transformer and our proposed DA-Transformer method. }\label{fig.heatmap} \end{figure*} \subsection{Influence of Different Mapping Functions} Next, we study the influence of using different mapping functions $f(\cdot)$ for computing the re-scaled coefficients. We compare the performance of our method w.r.t. several different $f(\cdot)$, including: (1) $f(x)=\min(x,T)$ (clip), using a threshold $T$ to clip the weighted distance; (2) $f(x)=k_ix+b_i$ (linear), applying a linear transformation to the weighted distance; (3) $f(x)=\mathrm{exp}(x)$ (exponent), using an exponential function to map the weighted distance; (4) $f(x)=\frac{1}{1+\mathrm{exp}(-x)}$ (sigmoid), using the sigmoid function to activate the weighted distance; and (5) $f(x;v_i)=\frac{1+\mathrm{exp}(v_i)}{1+\mathrm{exp}(v_i-x)}$, our learnable sigmoid function. Due to space limitations, we only present the results on the \textit{AG}, \textit{Amazon} and \textit{MIND} datasets in Fig.~\ref{fig.positionfunc1}. From these results, we find that clip is not optimal for mapping the weighted distance. This is because it cannot keep the precise distance information beyond a certain range. In addition, simply using the linear transformation is also insufficient. This may be because our attention adjustment method requires $f(\cdot)$ to be positive, but a linear transformation cannot guarantee this. Besides, we find that the sigmoid function and our proposed function are better than the exponential function. This may be because long sequences will lead to the problem of exponential explosion, which is harmful to context modeling. Moreover, our proposed learnable sigmoid function is better than the standard sigmoid function.
It shows that adjusting the activation function in a learnable way can better map the raw distances into re-scaled coefficients. \subsection{Influence of Different Attention Adjusting Methods} Then, we explore the influence of different methods for adjusting the raw attention weights. We consider four different kinds of methods, including: (1) adding the re-scaled coefficients to the attention weights normalized by softmax (late add); (2) multiplying the re-scaled coefficients with the attention weights normalized by softmax (late multiply); (3) adding the re-scaled coefficients to the raw attention weights before normalization (early add), which is widely used in existing methods like \textit{Transformer-XL}; (4) multiplying the re-scaled coefficients with the raw attention weights activated by ReLU, which is the method used in our approach (early multiply). The results on the \textit{AG}, \textit{Amazon} and \textit{MIND} datasets are shown in Fig.~\ref{fig.positionfunc2}. According to these results, we find that early adjustment is better than late adjustment. This may be because the late adjustment methods will change the total amount of attention, which may not be optimal. In addition, we find that multiplying is better than adding for both early and late adjustment. This may be because adding large re-scaled coefficients may over-amplify some attention weights. For example, if a raw attention weight is relatively small, it is not suitable to add large re-scaled coefficients to it because the corresponding contexts may not have close relations. In contrast, multiplying the re-scaled coefficients will not over-amplify the low attention weights. Moreover, in our early multiply method we further propose to use the ReLU function to introduce sparsity to make the Transformer more ``focused''. Thus, our method is better than the existing early add method in adjusting the attention weights. 
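The four adjustment strategies compared above can be contrasted in a small NumPy sketch (our own illustration, not the authors' code; `raw` denotes the unnormalized query-key scores and `coeff` the re-scaled coefficients):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adjust(raw, coeff, mode):
    """Combine raw attention scores with re-scaled coefficients."""
    if mode == "late_add":        # add after softmax normalization
        return softmax(raw) + coeff
    if mode == "late_multiply":   # multiply after softmax normalization
        return softmax(raw) * coeff
    if mode == "early_add":       # add before softmax (Transformer-XL style)
        return softmax(raw + coeff)
    if mode == "early_multiply":  # ReLU then multiply before softmax (DA-Transformer)
        return softmax(np.maximum(raw, 0.0) * coeff)
    raise ValueError(mode)
```

Note that the two late variants change the total amount of attention (the rows of the output no longer sum to one), which matches the paper's explanation of why early adjustment works better.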
\subsection{Model Interpretation} Finally, we interpret our proposed method by visualizing its key parameters and the attention weights. We first visualize the parameters $w_i$ and $v_i$ in our method, which control the preferences of attention heads on long-term or short-term information and the shape of the learnable sigmoid function, respectively. The visualization results on the \textit{AG} and \textit{MIND} datasets are respectively shown in Figs.~\ref{fig.aghead} and \ref{fig.newshead}.\footnote{We show the average results of 5 runs. The values of $w_i$ and $v_i$ in these figures are sorted and do not correspond to the head order.} From Fig.~\ref{fig.aghead}, we find it is very interesting that half of the parameters $w_i$ are positive and the rest of them are negative. It indicates that half of the attention heads mainly aim to capture local contexts, while the remaining ones are responsible for modeling long-distance contexts. It may be because both short-term and long-term contexts are useful for understanding news topics. In addition, we find that most attention heads have negative $v_i$ while the rest are positive. It shows that on the \textit{AG} dataset the intensity of attention adjustment is mild in most attention heads. From Fig.~\ref{fig.newshead1}, we find long-term information is somewhat more important than local information in modeling news texts for news recommendation. However, from Fig.~\ref{fig.newshead2} we find an interesting phenomenon that only one head has a strong negative $w_i$ while the values of $w_i$ in all the remaining heads are positive. It means that only one attention head tends to capture short-term user interests while all the other heads prefer to capture long-term user interests. This is intuitive because users usually tend not to intensively click very similar news, and their long-term interests may have a more decisive influence on their news clicks.
In addition, we find it is interesting that on \textit{MIND} all values of $v_i$ are positive. It may indicate that distance information has a strong impact on the attention weights. These visualization results show that DA-Transformer can flexibly adjust its preference on short-term or long-term information and the intensity of attention adjustment by learning different values of $w_i$ and $v_i$ according to the task characteristics.\footnote{We do not observe significant correlations between the sequence length and the signs of $w_i$. This may indicate that the values of $w_i$ depend more on the task characteristics rather than text lengths.} We then visualize the attention weights produced by the vanilla Transformer and the distance-aware attention weights in our DA-Transformer method. The attention weights of a sentence in the \textit{AG} dataset computed by four different attention heads are respectively shown in Figs.~\ref{fig.att1} and \ref{fig.att2}. From Fig.~\ref{fig.att1}, we find it is difficult to interpret the self-attention weights because they are too ``soft''. In addition, it is difficult for us to understand the differences between the information captured by different attention heads. Different from the vanilla Transformer, from Fig.~\ref{fig.att2} we find that the attention weights obtained by our method are more sparse, indicating that the attention mechanism in our method is more focused. In addition, it is easier for us to interpret the results by observing the attention heatmap. For example, the first two heatmaps in Fig.~\ref{fig.att2} are produced by the two attention heads with preferences on short-term contexts. We can see that they mainly capture the relations among local contexts, such as the relations between ``biotech'' and ``sector''. 
In contrast, in the latter two heatmaps obtained by the two attention heads that prefer long-term contexts, we can observe that the model tends to capture the relations between a word (e.g., ``biotech'') and the global contexts. These results show that different attention heads in our method are responsible for capturing different kinds of information, and their differences can be directly observed from the self-attention weights. Thus, our method can be better interpreted than vanilla Transformers. \section{Introduction} Transformer~\cite{vaswani2017attention} has achieved huge success in the NLP field in recent years~\cite{kobayashi2020attention}. It serves as the basic architecture of various state-of-the-art models like BERT~\cite{devlin2019bert} and GPT~\cite{radford2019language}, and boosts the performance of many tasks like text generation~\cite{koncel2019text}, machine translation~\cite{vaswani2017attention}, and reading comprehension~\cite{xu2019bert}. Thus, improvements to the Transformer architecture would be beneficial for many NLP-related fields~\cite{wu2020improving}. A core component of Transformer is multi-head self-attention, which is responsible for modeling the relations between contexts~\cite{yang2019context,guo2019star}. However, self-attention is position-agnostic since it does not distinguish the order of its inputs. Thus, in the vanilla Transformer, position encoding is applied to the input to help Transformer capture position information. However, in contrast to recurrent and convolutional neural networks, it is difficult for vanilla Transformers to be aware of the token distances~\cite{shaw2018self}, which are usually important cues for context modeling. Thus, several works have explored incorporating token distance information into Transformer. For example, Shaw et al.~\shortcite{shaw2018self} proposed to combine the embeddings of relative positions with attention key and value in the self-attention network.
They restricted the maximum relative distance to only keep the precise relative position information within a certain distance. \citeauthor{yan2019tener}~\shortcite{yan2019tener} proposed a variant of the self-attention network for named entity recognition, which incorporates sinusoidal embeddings of relative position to compute attention weights in a direction- and distance-aware way. However, the distance or relative position embeddings used by these methods usually cannot keep the precise information of the real distance, which may not be beneficial for the Transformer to capture word orders and the context relations. In this paper, we propose a \underline{\textbf{d}}istance-\underline{\textbf{a}}ware Transformer (DA-Transformer), which can explicitly exploit real token distance information to enhance context modeling by leveraging the relative distances between different tokens to re-scale the raw attention weights before softmax normalization. More specifically, since global and local context modeling usually have different distance preferences, we propose to learn a different parameter in different attention heads to weight the token distances; these parameters control the preferences of attention heads for long or short distances. In addition, since the weighted distances may not lie in a proper range, we propose a learnable sigmoid function to map the weighted distances into re-scaled coefficients. They are further multiplied with the raw attention weights that are clipped by the ReLU function to keep non-negativity and introduce sparsity. We conduct extensive experiments on five benchmark datasets for different tasks, and the results demonstrate that our approach can effectively enhance the performance of Transformer and outperform several of its variants with distance modeling.
The main contributions of this paper include: \begin{itemize} \item We propose a distance-aware Transformer that uses the real token distances to keep precise distance information in adjusting attention weights for accurate context modeling. \item We propose to use different parameters to weight real distances in different attention heads to control their diverse preferences on short-term or long-term information. \item We propose a learnable sigmoid function to map the weighted distances into re-scaled coefficients with proper ranges for better adjusting the attention weights. \item We conduct extensive experiments on five benchmark datasets and the results validate the effectiveness of our proposed method. \end{itemize} \section{DA-Transformer}\label{sec:Model} In this section, we introduce our proposed \textbf{\underline{d}}istance-\textbf{\underline{a}}ware Transformer (DA-Transformer) approach, which can effectively exploit real token distance information to enhance context modeling. It uses a learnable parameter to weight the real distances between tokens in each attention head, and uses a learnable sigmoid function to map the weighted distances into re-scaled coefficients with proper ranges, which are further used to adjust the raw attention weights before softmax normalization. The details of DA-Transformer are introduced in the following sections. \subsection{Head-wise Distance Weighting} Similar to the standard Transformer, the input of our model is also a matrix that contains the representation of each token, which is denoted as $\mathbf{H}=[\mathbf{h}_1, \mathbf{h}_2, ..., \mathbf{h}_N]$, where $N$ is the length of the sequence. We denote the real relative distance between the $i$-th and $j$-th positions as $R_{i,j}$, which is computed by $R_{i,j}=|i-j|$. We can then obtain the relative distance matrix $\mathbf{R}\in \mathbb{R}^{N\times N}$ that describes the relative distance between each pair of positions.
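The relative distance matrix $\mathbf{R}$ defined above can be built in one vectorized NumPy step (a minimal sketch of the definition; the function name is ours):

```python
import numpy as np

def relative_distance_matrix(n):
    """Build R with R[i, j] = |i - j| for every pair of positions
    in a length-n sequence."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]).astype(float)
```

The matrix is symmetric with a zero diagonal, and it is shared across all attention heads before the head-specific weighting is applied.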
In each attention head, we use a learnable parameter $w_i$ to weight the relative distance by $\mathbf{R}^{(i)}=w_i\mathbf{R}$, which will be further used to adjust the self-attention weights. In our method, we stipulate that a more positive $\mathbf{R}^{(i)}$ will amplify the attention weights more strongly while a more negative $\mathbf{R}^{(i)}$ will diminish them more intensively. Thus, a positive $w_i$ means that this attention head prefers to capture long-distance information, while a negative $w_i$ means that it focuses more on local contexts. By learning different values of $w_i$, different attention heads may have different preferences on capturing either short-term or long-term contextual information with different intensity. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{fig/positionfunc.pdf} \caption{The curves of our learnable sigmoid function under different $v_i$.}\label{fig.func} \end{figure} \subsection{Weighted Distance Mapping} Since the raw weighted distances may not be in the proper range for adjusting the attention weights, we need to map them into the re-scaled coefficients via a function $\mathbf{\hat{R}}^{(i)}=f(\mathbf{R}^{(i)})$ that is suitable for adjusting the self-attention weights. However, it is not a trivial task to design the function $f(\cdot)$ because it needs to satisfy the following requirements: (1) $f(0)=1$. We stipulate that zero distances do not influence the self-attention weights. (2) The value of $f(\mathbf{R}^{(i)})$ should be zero when $\mathbf{R}^{(i)}\to-\infty$. 
This requirement is to guarantee that if an attention head prefers to capture local information ($w_i<0$), the long-distance information should be suppressed.\footnote{Although the raw negative attention weights may be raised to 0 by $f(\cdot)$, the model can still suppress these attention weights after softmax by increasing the scale of other attention weights.} (3) The value of $f(\mathbf{R}^{(i)})$ should be limited when $\mathbf{R}^{(i)}\to+\infty$. This requirement is to ensure that the model is able to process long sequences without over-emphasizing distant contexts. (4) The scale of $f(\cdot)$ needs to be tunable. This aims to help the model better adjust the intensity of distance information. (5) The function $f(\cdot)$ needs to be monotone. To satisfy the five requirements above, we propose a learnable sigmoid function to map the weighted relative distances $\mathbf{R}^{(i)}$, which is formulated as follows: \begin{equation}\label{eq5} f(\mathbf{R}^{(i)}; v_i)=\frac{1+\mathrm{exp}(v_i)}{1+\mathrm{exp}(v_i-\mathbf{R}^{(i)})}, \end{equation} where $v_i$ is a learnable parameter in this head that controls the upper bound and ascending steepness of this function. The curves of our learnable sigmoid function under several different values of $v_i$ are plotted in Fig.~\ref{fig.func}. We can see that the proposed function satisfies all the requirements above. In addition, from this figure we find that if $v_i$ is larger, the upper bound of the curve is higher, which means that distance information is more intensive. When $v_i=0$, it is in fact identical to the standard sigmoid function except for the scaling factor of 2. By mapping the weighted distances $\mathbf{R}^{(i)}$ via the function $f(\cdot)$, we can obtain the final re-scaled coefficients $\mathbf{\hat{R}}^{(i)}$ in a learnable way. Several illustrative examples of the re-scaled coefficients under $w_i=\pm 1$ and $v_i=\pm 1$ are respectively shown in Figs.~\ref{fig.mat1}-\ref{fig.mat4}.
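A minimal NumPy version of this mapping function makes the stated properties easy to check numerically (our own sketch of Eq.~(\ref{eq5})):

```python
import numpy as np

def learnable_sigmoid(r, v):
    """f(r; v) = (1 + exp(v)) / (1 + exp(v - r)).
    r: weighted distance(s), v: learnable shape parameter of one head."""
    return (1.0 + np.exp(v)) / (1.0 + np.exp(v - r))
```

One can verify that $f(0; v) = 1$ for any $v$, that the output vanishes as the weighted distance goes to $-\infty$, that it saturates at the upper bound $1 + \mathrm{exp}(v)$ as the weighted distance grows, and that the function is monotonically increasing, matching the five requirements listed above.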
We can see that if $w_i$ is positive, long-distance contexts are preferred while short-term contexts are suppressed. The situation is reversed if $w_i$ becomes negative. In addition, the coefficients in Fig.~\ref{fig.mat3} have larger dynamic ranges than the coefficients in Fig.~\ref{fig.mat1}, indicating that long-distance information is more dominant in Fig.~\ref{fig.mat3}. Moreover, the coefficients in Fig.~\ref{fig.mat4} are ``sharper'' than those in Fig.~\ref{fig.mat2}, which indicates that the model tends to capture shorter distances. \begin{figure}[!t] \centering \subfigure[$w_i=1, v_i=-1$.]{ \includegraphics[width=0.22\textwidth]{fig/mat1.pdf} \label{fig.mat1} } \subfigure[$w_i=-1, v_i=-1$.]{ \includegraphics[width=0.22\textwidth]{fig/mat2.pdf} \label{fig.mat2} } \subfigure[$w_i=1, v_i=1$.]{ \includegraphics[width=0.22\textwidth]{fig/mat3.pdf} \label{fig.mat3} } \subfigure[$w_i=-1, v_i=1$.]{ \includegraphics[width=0.22\textwidth]{fig/mat4.pdf} \label{fig.mat4} } \caption{The re-scaled coefficient matrices under different values of $w_i$ and $v_i$. Dark regions indicate that the corresponding attention weights are promoted.} \end{figure} \subsection{Attention Adjustment} Then, we use the re-scaled coefficients to adjust the raw attention weights that are computed by the dot-product between the query and key, i.e., $\frac{\mathbf{Q}^{(i)}\mathbf{K}^{(i)\top}}{\sqrt{d}}$. Different from existing methods that add position or distance representations to the query-key dot-product, in our approach we propose to multiply the re-scaled coefficients with the query-key dot-product. This is because for the tokens whose relations are very weak, if their re-scaled coefficients are large, their final attention weights will be over-amplified if we simply add the re-scaled coefficients to their raw attention weights. This is not optimal for modeling contextual information because the attention weights of irrelevant contexts cannot be fully suppressed.
However, there are also some problems if we directly multiply the re-scaled coefficients $\mathbf{\hat{R}}^{(i)}$ and the raw attention weights $\frac{\mathbf{Q}^{(i)}\mathbf{K}^{(i)\top}}{\sqrt{d}}$. This is because the sign of attention weights $\frac{\mathbf{Q}^{(i)}\mathbf{K}^{(i)\top}}{\sqrt{d}}$ is indefinite and the multiplied results cannot accurately reflect the influence of distance information. Thus, we propose to add a ReLU~\cite{glorot2011deep} activation function to the raw attention weights to keep non-negativity. In this way, the final output $\mathbf{O}^{(i)}$ of an attention head can be formulated as follows: \begin{equation}\small \label{eq6} \mathbf{O}^{(i)}=\mathrm{softmax}(\frac{\mathrm{ReLU}(\mathbf{Q}^{(i)}\mathbf{K}^{(i)\top})*\mathbf{\hat{R}}^{(i)}}{\sqrt{d}})\mathbf{V}^{(i)}, \end{equation} where $*$ represents element-wise product. The ReLU function can also introduce sparsity to the self-attention because only the positive attention weights can be amplified by the re-scaled coefficients, which makes the attention weights in our method sharper. We concatenate the output from the $h$ independent attention heads, and project it into a unified output. In addition, we keep the same layer normalization and residual connection strategy as the standard Transformer. \subsection{Computational Complexity Analysis} Compared with the standard Transformer, the major additional time cost is brought by computing the re-scaled coefficients $\mathbf{\hat{R}^{(i)}}$ and using them to adjust the attention weights. The theoretical time complexity of the two operations in each head is $O(N^2)$, which is much smaller than the time complexity of computing the attention weights, i.e., $O(N^2\times d)$. In addition, both Eq. (\ref{eq5}) and Eq. (\ref{eq6}) in our approach can be computed in a vectorized manner. Thus, the additional time consumption of our method is very light. 
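Combining the head-wise distance weighting, the learnable sigmoid mapping, and the ReLU-gated multiplication, a single attention head can be sketched as follows (our own unbatched, single-head illustration of Eqs.~(\ref{eq5}) and~(\ref{eq6}), not the authors' implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def da_attention_head(Q, K, V, w, v):
    """softmax(ReLU(Q K^T) * R_hat / sqrt(d)) V, where
    R_hat = f(w * R; v) are the re-scaled coefficients of this head."""
    n, d = Q.shape
    idx = np.arange(n)
    R = np.abs(idx[:, None] - idx[None, :]).astype(float)   # real token distances
    R_hat = (1.0 + np.exp(v)) / (1.0 + np.exp(v - w * R))   # re-scaled coefficients
    raw = np.maximum(Q @ K.T, 0.0)                          # ReLU keeps non-negativity
    return softmax(raw * R_hat / np.sqrt(d)) @ V
```

In a full model, $\mathbf{Q}$, $\mathbf{K}$ and $\mathbf{V}$ are the per-head projections of the input, and the outputs of the $h$ heads are concatenated and projected as in the standard Transformer.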
Besides, the increase of parameters is also minimal because we only introduce $2h$ additional parameters, which are usually negligible compared with the projection matrices like $\mathbf{W}_Q^{(i)}$. Thus, our approach inherits the efficiency of the Transformer architecture. \section{Related Work}\label{sec:RelatedWork} \subsection{Transformer} To make this paper self-contained, we first briefly introduce the architecture of Transformer, which was initially introduced for the machine translation task~\cite{vaswani2017attention}. It has become an important basic neural architecture of various state-of-the-art NLP models like BERT~\cite{devlin2019bert} and GPT~\cite{radford2019language}. The core component of Transformer is multi-head self-attention. It has $h$ attention heads, where the parameters in each head are independent. For the $i$-th attention head, it takes a matrix $\mathbf{H}$ as the input. It first uses three independent parameter matrices $\mathbf{W}_Q^{(i)}$, $\mathbf{W}_K^{(i)}$, and $\mathbf{W}_V^{(i)}$ to respectively transform the input matrix $\mathbf{H}$ into the query $\mathbf{Q}^{(i)}$, key $\mathbf{K}^{(i)}$ and value $\mathbf{V}^{(i)}$, which is formulated as follows: \begin{equation} \mathbf{Q}^{(i)}, \mathbf{K}^{(i)}, \mathbf{V}^{(i)}=\mathbf{H}\mathbf{W}_Q^{(i)}, \mathbf{H}\mathbf{W}_K^{(i)}, \mathbf{H}\mathbf{W}_V^{(i)}. \end{equation} Then, it uses a scaled dot-product attention head to process its query, key and value, which is formulated as follows: \begin{equation}\small \mathrm{Attention}(\mathbf{Q}^{(i)}, \mathbf{K}^{(i)}, \mathbf{V}^{(i)})=\mathrm{softmax}(\frac{\mathbf{Q}^{(i)}\mathbf{K}^{(i)\top}}{\sqrt{d}})\mathbf{V}^{(i)}, \end{equation} where $d$ is the dimension of the vectors in the query and key.
The outputs of the $h$ attention heads are concatenated together and the final output is a linear projection of the concatenated representations, which is formulated as follows: \begin{equation}\small \begin{aligned} \mathrm{Multihead}(\mathbf{Q}, \mathbf{K}, \mathbf{V})&= \mathrm{Concat(head_1, ..., head_h)}\mathbf{W}_O,\\ \mathrm{where~~head_i}&= \mathrm{Attention}(\mathbf{Q}^{(i)}, \mathbf{K}^{(i)}, \mathbf{V}^{(i)}), \end{aligned} \end{equation} where $\mathbf{W}_O$ is an output projection matrix. In the standard Transformer, a position-wise feed-forward neural network is further applied to the output of the multi-head self-attention network. Its function is formulated as follows: \begin{equation} FFN(\mathbf{x}) = \max(0, \mathbf{x}\mathbf{W}_1 + \mathbf{b}_1)\mathbf{W}_2 + \mathbf{b}_2, \end{equation} where $\mathbf{W}_1$, $\mathbf{W}_2$, $\mathbf{b}_1$, $\mathbf{b}_2$ are kernel and bias parameters. Transformer also employs layer normalization~\cite{ba2016layer} and residual connection~\cite{he2016deep} techniques after the multi-head self-attention and feed-forward neural networks, which are also kept in our method. Since the self-attention network does not distinguish the order and position of input tokens, Transformer adds the sinusoidal embeddings of positions to the input embeddings to capture position information. However, position embeddings may not be optimal for distance modeling in Transformer because distances cannot be precisely recovered from the dot-product between two position embeddings. \subsection{Distance-aware Transformer} Instead of directly using the sinusoidal position embedding~\cite{vaswani2017attention} or the absolute position embedding~\cite{devlin2019bert}, several variants of the Transformer have explored using relative positions to better model the distance between contexts~\cite{shaw2018self,wang2019self,dai2019transformer,yan2019tener}.
For example, Shaw et al.~\shortcite{shaw2018self} proposed to add the embeddings of relative positions to the attention key and value to capture the relative distance between two tokens. They only kept the precise distance within a certain range by using a threshold to clip the maximum distance to help generalize to long sequences. \citeauthor{dai2019transformer}~\shortcite{dai2019transformer} proposed Transformer-XL, which uses another form of relative positional encodings that integrate content-dependent positional scores and a global positional score into the attention weights. \citeauthor{yan2019tener}~\shortcite{yan2019tener} proposed direction-aware sinusoidal relative position embeddings and used them in a similar way to Transformer-XL. In addition, they proposed to use the un-scaled attention to better fit the NER task. However, relative position embeddings may not be optimal for modeling distance information because they usually cannot keep the precise information of real token distances. Different from these methods, we propose to directly re-scale the attention weights based on the mapped relative distances instead of using sinusoidal position embeddings, which can explicitly encode real distance information to achieve more accurate distance modeling. \section*{Supplementary Materials} \subsection*{Ablation Study} In this section, we present several ablation studies on using our DA-Transformer at different levels on the \textit{Amazon} and \textit{MIND} datasets. The results are respectively presented in Tables~\ref{ab.result} and \ref{ab.result2}, and confirm the effectiveness of DA-Transformer at each level.
\begin{table}[!h] \resizebox{1\linewidth}{!}{ \begin{tabular}{lcc} \hline \multicolumn{1}{c}{Methods} & Accuracy & Macro-F \\ \hline Transformer & 65.15 & 42.14 \\ +Word DA-Transformer & 65.92 & 43.76 \\ +Sentence DA-Transformer & 65.85 & 43.55 \\ DA-Transformer & \textbf{66.38} & \textbf{44.29} \\ \hline \end{tabular} } \caption{Results on the \textit{Amazon} dataset.} \label{ab.result} \end{table} \begin{table}[!h] \resizebox{1\linewidth}{!}{ \begin{tabular}{lcccc} \hline \multicolumn{1}{c}{\textbf{Methods}} & AUC & MRR & \small{nDCG@5} & \small{nDCG@10} \\ \hline Transformer & 67.81 & 33.10 & 35.98 & 41.65 \\ +Word DA-Transformer & 68.10 & 33.21 & 36.19 & 42.07 \\ +News DA-Transformer & 68.24 & 33.30 & 36.29 & 42.01 \\ \hline +Both & \textbf{68.32} & \textbf{33.36} & \textbf{36.34} & \textbf{42.07} \\ \hline \end{tabular} } \caption{Results on the \textit{MIND} dataset.} \label{ab.result2} \end{table} \subsection*{Additional Experimental Results} \begin{figure}[!h] \centering \subfigure[\textit{SST}.]{ \includegraphics[height=1.5in]{fig/positionfunc14.pdf} } \subfigure[\textit{SNLI}.]{ \includegraphics[height=1.5in]{fig/positionfunc15.pdf} } \caption{Influence of using different mapping functions on the \textit{SST} and \textit{SNLI} datasets.}\label{fig.positionfunc14} \end{figure} \begin{figure}[!t] \centering \subfigure[\textit{SST}.]{ \includegraphics[height=1.5in]{fig/positionfunc24.pdf} } \subfigure[\textit{SNLI}.]{ \includegraphics[height=1.5in]{fig/positionfunc25.pdf} } \caption{Influence of using different attention adjusting methods on the \textit{SST} and \textit{SNLI} datasets.}\label{fig.positionfunc24} \end{figure} \begin{figure}[!t] \centering \subfigure[\textit{SST}.]{ \includegraphics[width=0.22\textwidth]{fig/ssthead.pdf} \label{fig.ssthead} } \subfigure[\textit{SNLI}.]{ \includegraphics[width=0.22\textwidth]{fig/snlishead.pdf} \label{fig.snlihead} } \caption{The distance weights learned by different attention heads on the \textit{SST} and 
\textit{SNLI} datasets.}\label{fig.sstsnlihead} \end{figure} \begin{figure}[!t] \centering \subfigure[Word-level Transformer.]{ \includegraphics[width=0.22\textwidth]{fig/amazonhead1.pdf} \label{fig.amazonhead1} } \subfigure[Sentence-level Transformer.]{ \includegraphics[width=0.22\textwidth]{fig/amazonhead2.pdf} \label{fig.amazonhead2} } \caption{The distance weights learned by different attention heads on the \textit{Amazon} dataset.}\label{fig.amazonhead} \end{figure} \section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208 and U1936216.
\section{Introduction} \label{sec:intro}

Twitter is a valuable data source for many research topics due to the richness of data it provides and the developed and recorded social interactions. The RecSys Challenge 2020 addresses the prediction tasks of four types of user engagements on Twitter. For privacy reasons, the dataset provided in the challenge is an artificial one: it is collected over a one-week span and consists of public engagements along with pseudo negatives randomly sampled from the public follow graph~\cite{organizersrecsys}. The artificial estimation of pseudo negatives, together with the imbalance among the four engagement classes, makes the prediction task difficult for learning methods. We present the results of our solution, a method based on the Click-Through Rate (CTR), on the metrics provided by the challenge, and compare them with those obtained with a gradient boosting learning model. We observe that our solution outperforms the learning method by a clear margin on the dataset provided.

\subsection{Dataset insights}

\begin{figure}[H] \includegraphics[scale=0.4]{images/action_distribution.png} \caption{The chart shows the distribution of each action in the training set. Retweet with comment and Reply are very unbalanced, while pseudo negatives represent a large part of the dataset. This chart refers to the official and latest training set released.} \label{fig:action_distribution} \end{figure}

The dataset is a collection of public interactions on tweets along with information about their author and the user that generates the engagement. This dataset has an uneven class distribution. As illustrated in Figure~\ref{fig:action_distribution}, class imbalance in the training set makes the classification process difficult in both validation and test sets. This condition is further aggravated by the way pseudo-negatives are obtained, as described in~\cite{organizersrecsys}.
In that work, the authors explain the difficulties in including pseudo-negatives, samples that represent interactions with no engagement. However, the collection of this data hides the reason why a user did not interact with a tweet: a user may have willingly chosen not to interact, or may simply not have seen the tweet at all. This implies that a binary classifier is potentially misled when considering negative class candidates. Additionally, the absence of users' past history rules out user-based and personalized recommendation algorithms. The lack of user historical data is presented in the histogram in Figure~\ref{fig:tcpu}, which highlights that the majority of users interact with fewer than three tweets.

\begin{figure}[H] \centering \includegraphics[scale=0.3]{images/tcpu.pdf} \caption{The histogram represents the number of tweets for which a user appears in the challenge dataset as content consumer. The horizontal axis represents how many tweets are paired with a unique user, while the vertical axis shows how many users have that number of interactions (both positive and negative).} \label{fig:tcpu} \end{figure}

\subsection{Proposed metrics}

The organizers of the RecSys Challenge 2020 proposed two different metrics to evaluate the solutions: \begin{description} \item[PRAUC] (Precision Recall Area Under the Curve) \item[RCE] (Relative Cross Entropy) \end{description} The PRAUC is useful to deal with unbalanced classes like \textit{Retweet with comment} and \textit{Reply}. These classes have numerous rows with null values, meaning that no action was performed.
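As we understand the RCE metric, it measures the relative improvement of a model's cross entropy over the naive predictor that always outputs the class CTR. A minimal sketch, with function names of our own choosing:

```python
import math

# Sketch of the RCE metric as we understand it: the relative improvement
# of a model's cross entropy over the "naive" predictor that always
# outputs the class CTR.  Function names are ours.

def cross_entropy(y_true, y_pred):
    eps = 1e-15  # clamp probabilities to avoid log(0)
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(y_true, y_pred)) / len(y_true)

def rce(y_true, y_pred, ctr):
    ce_model = cross_entropy(y_true, y_pred)
    ce_naive = cross_entropy(y_true, [ctr] * len(y_true))
    return (1.0 - ce_model / ce_naive) * 100.0

# Predicting the CTR constant itself gives RCE = 0; any worse predictor
# gives a negative RCE.
labels = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # 2 positives out of 10 rows
print(round(rce(labels, [0.2] * 10, ctr=0.2), 6))  # → 0.0
```

Under this definition, the near-zero RCE obtained by the CTR constant in our experiments, and the strongly negative RCE of other constants, follow directly.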
The final ranking is computed in different steps: \begin{itemize} \item Averaging the PRAUC score across the four engagements \item Averaging the RCE score across the four engagements \item Computing the ranking for both metrics \item Summing the two obtained rankings \end{itemize} As we have observed in our experimental results, this way of computing the score favors solutions with a good score on the least competitive metric rankings.

\section{Our solution}

For the final submission, we propose two solutions that we assess in terms of performance score as observed on the evaluation set. In particular, we submit \textit{a CTR (Click-Through Rate)-based method} and \textit{a gradient boosting model}. This choice is supported by the following reasoning: the \textit{gradient boosting model} performs very well on our local test set, but worsens significantly on the public leaderboard, measured on the evaluation set. On the other hand, the CTR method behaves almost in the same way on both sets. This method uses an optimized constant, which is exactly the CTR value of each class. We report both solutions in detail in the two following sections.

\subsection{Click-Through Rate-based method}

We estimate which constant value has the best outcome on both PRAUC and RCE to get a better understanding of the evaluation metrics. This investigation yields two different outcomes: \begin{itemize} \item \textbf{PRAUC}: any constant produces the same effect in terms of score. \item \textbf{RCE}: the outcome depends on the engagement's type. \end{itemize} The best result, as pointed out by the way this metric is calculated, is given by a Click-Through Rate specific to each type of engagement. The CTR represents the ratio of positive engagements to the number of times a tweet has been viewed.
This value is calculated, on the training set, in different steps: \begin{itemize} \item Count the number of positive engagements for each class: an engagement is considered \textit{positive} if the timestamp of the interaction between a user and a tweet is not null. \item Count the total number of rows of the training set: this includes the four types of engagement along with pseudo-negatives. \item The CTR is calculated with the following equation depending on the class \textit{c}: \begin{equation} CTR = \frac{engagement_{c}}{N_{rows}} \label{eq:ctr} \end{equation} \end{itemize} In Equation \ref{eq:ctr}, \textit{c} represents one of the four engagements to predict: Like, Retweet, Reply, Retweet with comment. The CTR numerical values are reported in Table~\ref{tab:ctr-training}.

\begin{table}[H] \begin{tabular}{|c|c|c|c|c|} \hline & \textbf{Like} & \textbf{Reply} & \textbf{Retweet} & \textbf{Retweet with comment} \\ \hline \textbf{CTR} & 0.428 & 0.025 & 0.108 & 0.007 \\ \hline \end{tabular} \caption{CTR values calculated on the training set: the class with the highest ratio is Like, which is intuitively the most frequent.} \label{tab:ctr-training} \end{table}

The optimum value is found, for both RCE and PRAUC, by comparing the results of different constant values on the training set; the optimum turns out to be the CTR of each class. The CTR scores are computed as illustrated in Figure \ref{fig:constant_teaser}, where the number of existing engagements per class (timestamp exists) over the total number of interactions per class (timestamp exists or is null) gives the related CTR value. The score is an average of the performance obtained over different chunks of the dataset. RCE and PRAUC for the four different classes of engagement are listed in Table~\ref{tab:constant_tuning}. The second line reports the scores obtained with a random constant, used as a baseline.
As shown by the other rows in Table~\ref{tab:constant_tuning}, as a constant's absolute distance from the CTR value increases, the RCE decreases in all four classes.

\begin{table*}[!ht] \begin{tabular}{@{}ccccccccc@{}} \toprule & \multicolumn{4}{c}{RCE} & \multicolumn{4}{c}{PRAUC} \\ \midrule & Like & Reply & Retweet & \makecell{Retweet\\ with comment} & Like & Reply & Retweet & \makecell{Retweet\\ with comment} \\ \midrule \textbf{CTR} & -0.01 & -0.002 & -0.003 & -0.001 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{Random} & -46.09 & -739.49 & -189.84 & -2219.446 & 0.43 & 0.03 & 0.109 & 0.007 \\ \textbf{0} & -2091.49 & -642.44 & -994.003 & -483.57 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{0.1} & -54.81 & -35.68 & -0.135 & -181.54 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{0.3} & -5.87 & -217.64 & -30.219 & -741.73 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{0.5} & -1.26449 & -481.89 & -100.908 & -1507.97 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{1} & -2754.47 & -28153.38 & -8817.25 & -79441.8 & 0.72 & 0.51 & 0.554 & 0.503 \\ \bottomrule \end{tabular} \caption{Evaluation of different constants averaged across distinct portions of the training set: the CTR outperforms all the other values on RCE, while the PRAUC is identical for all the values provided.} \label{tab:constant_tuning} \end{table*}

This constant value was tested on different partitions of the full training set to assess the validity of the approach. Each partition contains 16 million rows, making them similar in size to the challenge's validation and test sets. This constant achieves virtually the same RCE and PRAUC throughout the different time spans, as highlighted in Figure~\ref{fig:constant_performance}.
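The per-class computation of Eq.~\ref{eq:ctr} reduces to counting non-null engagement timestamps. A minimal sketch, where the data layout (one timestamp column per class, null when no engagement occurred) is our assumption:

```python
# Sketch of the per-class CTR of Eq. (1): positives are rows whose
# engagement timestamp is not null.  The data layout is an assumption:
# one timestamp value per row, None when no such engagement occurred.

def class_ctr(timestamps):
    """CTR for one engagement class: positive rows over total rows."""
    positives = sum(1 for t in timestamps if t is not None)
    return positives / len(timestamps)

# Toy column: 3 positive engagements out of 7 rows.
like_timestamps = [1602720000, None, 1602720100, None, None, 1602720200, None]
print(round(class_ctr(like_timestamps), 3))  # → 0.429
```

Running this over the four timestamp columns of the full training set yields the four constants of Table~\ref{tab:ctr-training}.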
\begin{figure}[ht] \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{images/constant_performance_rce.png} \end{subfigure} \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{images/constant_performance_prauc.png} \end{subfigure} \caption{The two charts describe the score obtained by the CTR constant over different time spans: the PRAUC always has exactly the same value for each type of engagement, and the RCE is almost constant as well.} \label{fig:constant_performance} \end{figure}

\begin{figure} \includegraphics[scale=0.1]{images/constant_model.png} \caption{Overview of the model based on the CTR constant: the first step is to compute the number of actions of each type, then the constant is calculated, with a different value for each class.} \label{fig:constant_teaser} \end{figure}

\subsection{Gradient boosting}

The gradient boosting approach is implemented using the XGBoost~\cite{xgboost} library and includes four different models, one for each engagement to predict. The input of these models is enriched with 59 additional features, reported in the following section.

\subsubsection{Feature engineering}

The 59 additional features were derived from the dataset provided by the challenge organizers, in strict adherence to the terms and conditions of the challenge. These additional features are grouped into different categories to facilitate their understanding: \textbf{Dataset features (12 features)}: given directly by the dataset, they are exploited with little or no adjustment. Examples of these features are the number of hashtags, the language of the tweet and the number of followers. \textbf{Author features (18 features)}: this group profiles each \textit{tweet author} included in the training set. These are pre-computed features detailing the behaviour of each author during the history documented by the dataset.
The most relevant features belonging to this category are: \begin{itemize} \item \textit{Author engagement ratio}: $\frac{N_{eng_c}}{N_{tweet}}$, where $N_{eng_c}$ represents the number of actions of a particular type of engagement \textit{c}, with c one among \textit{Like, Retweet, Reply, Retweet with comment}, received by a specific tweet author. The denominator $N_{tweet}$ refers to the total number of tweets published by that author. In the end, there are four \textit{author engagement ratios}, one per engagement type. \item \textit{Number of received engagements}: expresses the total number of interactions received by the user for each type of engagement. \end{itemize} \textbf{User features (18 features)}: similar to those computed for the authors, but involving the person interacting with the tweet. In this group, we find statistical features such as the \textit{engagement ratio} and the \textit{number of actions for each type}, calculated from the user point of view. \textbf{Languages spoken (1 feature)}: the intuition behind this feature is that a user can only plausibly interact with a tweet written in a language they understand. This approach includes the pre-computation of a file containing, for each user id, the number of times that user has interacted with a tweet written in a specific language. The goal of this computation is to identify, for each user $U_{id}$, a list of languages \textit{spoken} by that specific user. In more formal terms: \begin{equation} f(U_{id}, Lang_{id}) = n_{LT} \end{equation} where $Lang_{id}$ is the id of a language and $n_{LT}$ is the number of tweets written in that language engaged by the user. \textbf{Previous actions (4 features)}: another pre-computation is performed to reconstruct the history of previous interactions.
This set of features is formalized with the following function: \begin{equation} f(U_{id}, A_{id}, c) = n_{PA} \end{equation} where: \begin{description} \item[$U_{id}$] is the user id \item[$A_{id}$] is the author id \item[$c$] is the class representing the engagement type \item[$n_{PA}$] is the number of previous actions for the triplet $(U_{id}, A_{id}, c)$ \end{description} \textbf{Word search (6 features)}: this class of features is the only one referring to the text of the tweet. We extract some meaningful words from the text tokens and generate a boolean variable to identify when a specific word is included in the text of the tweet. The words used are related to the \textit{call to action}, a situation in which the tweet author invites their followers to perform a specific action with respect to the tweet. The considered words are \textit{share}, \textit{retweet}, \textit{reply}, \textit{comment}.

\section{Submission}

\subsection{Click-Through Rate-based}

This submission was performed using the value of the CTR calculated on the whole training set. The intuition behind this approach was that, if the distribution of positive actions with respect to the negative ones does not change too much, we can achieve a score that outperforms several proposed models, including the \textit{gradient boosting model}.

\subsection{Gradient Boosting}

The final model includes almost sixty different features. The early stopping feature of the XGBoost library was used to avoid overfitting: after each training round, the model is evaluated on a validation set using the RCE metric, and if there is no improvement over the last N rounds, the training is stopped. The four models were trained on the final release of the dataset with the parameters in Table \ref{tab:xgboostparam}.
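The early-stopping criterion just described can be sketched in pure Python (a simplified stand-in for the library's built-in \texttt{early\_stopping\_rounds} mechanism; all names below are ours):

```python
# Pure-Python sketch of the early-stopping rule described above:
# stop when the validation RCE has not improved for `patience` rounds,
# and keep the best round seen so far.

def best_round_with_early_stopping(val_scores, patience):
    """val_scores: validation RCE after each boosting round (higher is better)."""
    best_score, best_round, rounds_without_improvement = float("-inf"), -1, 0
    for i, score in enumerate(val_scores):
        if score > best_score:
            best_score, best_round, rounds_without_improvement = score, i, 0
        else:
            rounds_without_improvement += 1
            if rounds_without_improvement >= patience:
                break  # no improvement for `patience` rounds: stop training
    return best_round

# Validation RCE peaks at round 2, then degrades: training stops early.
print(best_round_with_early_stopping([1.0, 2.5, 3.1, 3.0, 2.9, 2.8], patience=2))  # → 2
```

In the actual submission, this logic is delegated to XGBoost with \texttt{early\_stopping\_rounds} set to 10, as in Table~\ref{tab:xgboostparam}.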
\begin{table}[H] \begin{tabular}{|l|l|l|l|l|l|} \hline \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline eta & 0.09 & tree\_method & gpu\_hist & sampling\_method & gradient\_based \\ \hline subsample & 0.2 & objective & binary:logistic & max\_depth & 5 \\ \hline max\_delta\_step & 5 & epochs & 200 & early\_stopping\_rounds & 10 \\ \hline \end{tabular} \caption{XGBoost parameters used by the four different models during the training phase.} \label{tab:xgboostparam} \end{table}

\subsection{Results}

The optimized constant-based model achieves better results than those obtained with gradient boosting. The results are reported in Table \ref{tab:leaderboard}. The reason why a constant-based method performs better than a gradient-boosted one lies in the way score and ranking are computed in the RecSys Challenge 2020. As presented by the winning team of the challenge, a computationally heavy, more complex and highly parallel boosting model can exploit the characteristics of every row in the dataset, whereas our XGBoost model, fine-tuned on a subset of the dataset, cannot. In this way, the optimized constant generalizes the classification procedure better, avoiding the overfitting of the gradient-boosted model, as described both in Figure \ref{fig:constant_performance} and in Table \ref{tab:leaderboard}.
\begin{table*}[!ht] \begin{tabular}{@{}cccccccccc@{}} \toprule & & \multicolumn{2}{c}{Retweet} & \multicolumn{2}{c}{Reply} & \multicolumn{2}{c}{Like} & \multicolumn{2}{c}{RwC} \\ \midrule Model & Dataset & PRAUC & RCE & PRAUC & RCE & PRAUC & RCE & PRAUC & RCE \\ \midrule \multirow{3}{*}{CTR-based} & \makecell{Final Leaderboard\\ (Test set)} & 0.5516 & -0.03 & 0.5135 & -0.05 & 0.7131 & 0 & 0.5037 & 0\\ & \makecell{Public Leaderboard\\ (Validation set)} & 0.5516 & -0.0315 & 0.5135 & -0.0476 & 0.7133 & -0.0008 & 0.5037 & -0.0045 \\ & Local test set & 0.72 & -0.01 & 0.51 & -0.002 & 0.554 & -0.003 & 0.503 & -0.001 \\ \midrule \multirow{2}{*}{XGBoost} & \makecell{Public Leaderboard\\ (Validation set)} & 0.41 & 6.68 & 0.10 & -3.23 & 0.66 & -32.01 & 0.04 & -1.67\\ & Local test set & 0.710 & 46.85 & 0.626 & 35.24 & 0.830 & 32.56 & 0.586 & -8.916 \\ \bottomrule \end{tabular} \caption{Results of the two described models in different contexts: while the CTR constant maintains almost the same score, the XGBoost model loses efficiency as the time difference with respect to the training set period increases.} \label{tab:leaderboard} \end{table*}

\section{Final Considerations}

The CTR-based method addresses the issues related to the unbalanced classes and pseudo negatives described in Section \ref{sec:intro}. However, the literature on recommender systems \cite{quadrana,dacremadeep,recissues,netflixrec,reccase} highlights some issues that are intrinsically related to the problems addressed in this challenge and, therefore, observable in the dataset provided. \textit{Short-term trends}: trends that tend to change or disappear quickly due to the rapid evolution of individual and community preferences. \textit{Cold start problem}: when new users enter the system, their preferences cannot be predicted.
\textit{Gray sheep}: problematic users that cannot be traced or predictably aligned to any trend, so suggestions related to current trends are not an effective solution. \textit{Real-time analysis}: real-time data must be collected to analyze unexpected events (e.g., earthquakes, pandemics), and the model must be continuously updated with real-time information. \textit{Context-Awareness, Privacy, and Sparsity}: users' short- and long-term history and unnoticeable context-related information may not be retrievable~\cite{recissues}. Despite the huge size of data typically collected, most users are occasional or not inclined to interact, both for privacy issues and to avoid unwanted exposure of information. These aspects lead to a sparse characterization matrix, resulting in less accurate recommendations. \textit{Baseline metrics}: the available evaluation metrics are for general-purpose recommender systems and are not always applicable in different domains, especially for evaluating context-aware ones. Metrics used in common machine learning approaches do not always lead to well-suited recommendations~\cite{dacremadeep}. The constant preserves its performance across training, validation and test sets, while the XGBoost model degrades. A model can be ineffective if it is not able to capture time- and event-independent features. This, along with the above issues, is a probable cause of the considerable variation of the challenge leaderboard from the validation to the final test phase. In fact, as reported\footnote{https://recsys-twitter.com}, the entire dataset was produced in two different weeks. Based on our observations of the validation-set results, we concluded that an optimized constant performs better. This intuition proved successful in the test phase; indeed, the POLINKS solution ranked sixth at the end of the RecSys Challenge 2020.
\begin{acks} This research was supported by FITEC S.r.l., LINKS Foundation, Politecnico di Torino and Istituto Italiano di Tecnologia. Computational resources were provided by HPC@POLITO\footnote{http://www.hpc.polito.it}, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino. \end{acks}
\section{Introduction}

Due to the Pauli exclusion principle, the low-energy electronic excitations in a metal are restricted to a very narrow window around its Fermi surface (FS). Consequently, the transport properties of metals at finite temperatures and/or external electromagnetic fields are governed by the specific shape and topology of its FS, and by the details of the matrix elements defining the scattering processes on it. One of the most important scattering sources for electrons in metals is their interaction with the lattice vibrations or phonons \cite{Grimvall1981}. The electron-phonon interaction yields a renormalization of the electronic quasiparticles near the FS, modifying their effective mass and their lifetime, which ultimately gives rise to observable macroscopic phenomena such as the temperature-dependent resistivity \cite{Ziman1960}, or even conventional superconductivity \cite{Schrieffer1964, AllenMitrovic1983}. The ubiquitous presence of the electron-phonon interaction has led to a persistent effort to model this physical process ever since the early days of the quantum theory of solids. However, practical \textit{ab initio} calculations with the ability to accurately predict complex materials properties related to the electron-phonon interaction have become possible only recently \cite{GiustinoRMP2016}. The main obstacle to overcome has been the ability to compute, at a reasonable cost, the electron-phonon matrix elements on dense meshes sampling the Brillouin Zone (BZ), which is necessary to capture the fine details of the FS anisotropy.
Several efficient numerical techniques have been developed in recent years for this purpose \cite{BaroniRMP2001,GiustinoPRB2007,EigurenPRB2008}, which have tremendously boosted the accuracy of theoretical studies on electron-phonon driven phenomena, such as temperature-dependent charge transport \cite{MustafaPRB2016,PoncePRB2018}, non-adiabatic corrections to phonon dispersions \cite{CalandraPRB2010,NovkoPRB2018,GoiricelayaPRB2020}, quasiparticle renormalization signatures in angle-resolved photoemission spectra \cite{EigurenPRL2003,VerdiNCM2017,GoiricelayaCMP2019}, or gap anisotropy in phonon-mediated superconductors \cite{MarginePRB2013}. Nevertheless, these studies have also shown explicitly that extremely fine samplings of the BZ are necessary to obtain converged results, requiring more than $10^{5}$ points in the reciprocal space in typical cases \cite{GiustinoPRB2007}. Apart from the obvious issues related to computer memory demands, the amount of information to be handled on the Fermi surface makes the data analysis of anisotropic quantities difficult, and relegates the comparison between calculations performed with different meshes to a qualitative level. More importantly, for electron-phonon problems such as the Eliashberg equations of superconductivity \cite{AllenMitrovic1983, MarginePRB2013}, in which integral equations have to be solved self-consistently on the FS, the computational workload becomes exceedingly high. This has made high-throughput calculations of superconducting properties a challenging task to date. Thus, developing a method to effectively treat the anisotropy of the electron-phonon interaction while keeping full accuracy seems very appealing.
Almost half a century ago, Allen proposed a procedure by which scalar quantities defined on the FS could be transformed into a new basis set composed of polynomials of electron velocities orthogonalized on the FS, which he called Fermi-surface harmonics (FSH) \cite{AllenPRB1975}. He further anticipated that, if the expansion of anisotropic quantities on FSHs was rapidly convergent, integral problems like the Eliashberg equations could be solved in a particularly simple and efficient way. Despite the interest that the potential of the method sparked in the community, it has only been applied after imposing further approximations on the anisotropy of the electron-phonon interaction \cite{Butler1976}, or, only very recently, to relatively simple systems on rare occasions \cite{HeidPRL2008,XuPRL2014}. Among the possible reasons behind the lack of systematic applicability of the method are the difficulty in the construction of the basis set, which involves several semi-analytic steps and requires a different procedure for each crystal structure, and the fact that the completeness of the basis set cannot be guaranteed for general surfaces. An alternative definition of the FSH basis set was put forward by some of the authors in Ref.~\cite{EigurenNJP2014}, which overcomes the limitations of Allen's proposal. In this novel approach, the orthonormal basis functions, called Helmholtz Fermi-surface harmonics (HFSH), are obtained by a purely numerical procedure as eigenfunctions of the Laplace-Beltrami operator on a triangularly tessellated Fermi surface, allowing for a systematic construction of the basis set on any FS topology. However, the crystal symmetries were incorporated only approximately in the triangulated FS ---and as a result in the properties of the basis set---, limiting the potential of the method in the compression of physical anisotropic quantities.
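As a toy illustration of the compression such an expansion affords (not the HFSH construction itself), consider the 1D analogue of a closed Fermi surface: a ring, where the Laplacian eigenfunctions are Fourier modes and a smooth anisotropic quantity is captured by a handful of coefficients:

```python
import math

# Toy 1D analogue of the expansion idea: on a ring of N points, the
# Laplacian eigenfunctions are Fourier modes, and a smooth "anisotropic
# quantity" is compressed into a few expansion coefficients.
N = 360
theta = [2 * math.pi * i / N for i in range(N)]

def basis(l, t):
    """Orthonormal discrete Laplacian eigenfunctions on the ring."""
    if l == 0:
        return 1.0 / math.sqrt(N)
    k = (l + 1) // 2                      # mode frequency
    f = math.cos(k * t) if l % 2 else math.sin(k * t)
    return math.sqrt(2.0 / N) * f

# A smooth quantity on the "surface" and its expansion coefficients.
f = [3.0 + 0.5 * math.cos(2 * t) for t in theta]
coeffs = [sum(fi * basis(l, t) for fi, t in zip(f, theta)) for l in range(6)]

# Only two coefficients are non-zero, and they reconstruct f exactly.
nonzero = sum(1 for c in coeffs if abs(c) > 1e-8)
recon = [sum(c * basis(l, t) for l, c in enumerate(coeffs)) for t in theta]
error = max(abs(a - b) for a, b in zip(f, recon))
print(nonzero, error < 1e-9)  # → 2 True
```

The HFSH basis plays the same role on a real triangulated FS, with the discrete Laplace-Beltrami eigenfunctions replacing the Fourier modes.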
In this paper, we improve on Ref.~\cite{EigurenNJP2014} by incorporating the symmetries of the crystal in the HFSH basis set. We propose a numerical procedure to obtain a fully-symmetric triangulated FS, which is general and applicable to any crystal structure. The main outcome of the upgrade is the ability to detect functions within the HFSH set that are invariant under all the symmetry operations of the crystal. As any scalar physical quantity defined on the FS must also follow this symmetry restriction, its expansion on the HFSH basis set will have finite coefficients only in the fully-symmetric subset. This implies an extra reduction of about an order of magnitude in the compression of anisotropic FS quantities with respect to Ref.~\cite{EigurenNJP2014}. We describe the procedure using FCC-Cu as an example, and demonstrate its potential in the electron-phonon problem by showing that the mass-enhancement parameter of the anisotropic superconductors HEX-MgB$_2$ and BCC-YH$_6$ can be represented to high accuracy by a handful of coefficients. The rest of the paper is structured as follows. In Sec.~II, we describe the details of the implementation to obtain a fully symmetric triangulated Fermi surface. In Sec.~III, we analyze the effects of the symmetries of the triangulated mesh on the HFSH basis set. Using FCC-Cu as an example, we detect the fully symmetric HFSH subset, and confirm that only coefficients of this subset contribute to the expansion of any symmetric quantity defined on the FS. In Sec.~IV, we apply the method to two distinct phonon-mediated superconductors with different crystal symmetries and FS topologies, namely HEX-MgB$_2$ and BCC-YH$_6$, demonstrating the general validity and the potential of the method.

\section{Fully symmetric triangulated Fermi surface} \label{sec:trisurf}

\begin{figure*}[ht] \includegraphics[width=2.0\columnwidth]{./figure_1.pdf} \caption{Numerical procedure to obtain a fully symmetric triangulated Fermi surface.
(a) Star tetrahedral tessellation of the full BZ, from which the irreducible BZ volume can be detected (b). Irreducible faces at the IBZ boundary are highlighted in blue. (c),(d),(e) Fine tetrahedral tessellation of the IBZ. The IBZ boundary faces are first triangulated (c), extra Steiner points are added within the IBZ volume (d), and a Delaunay tetrahedralization is performed constrained by the triangular facets at the boundary (e). The linear tetrahedron method is applied in the fine tetrahedra (f), from which a triangulated irreducible Fermi surface is obtained (g). (h),(i),(j) Mesh-refinement techniques are applied to the IFS, resulting in a high-quality triangulated mesh. (k),(l) In a last step, all the symmetry operations are applied to the IFS (k), obtaining a fully symmetric triangulated FS (l). \label{fig:Fig1}} \end{figure*} In this section we describe a numerical procedure to obtain from first principles a triangular tessellation of the Fermi surface of any metal, which fulfills all the point group symmetries of its crystal structure. In principle, a robust method for accomplishing such a task is the linear tetrahedron method \cite{BlochlPRB1994}. In its original formulation, the tetrahedral tessellation of the BZ is performed in crystal coordinates, where a $\mathbf{k}$-point grid translates into cubes which can be trivially decomposed in six tetrahedra. This approach was used in Ref.~\cite{EigurenNJP2014}. However, when analyzing in detail the resulting triangulated isosurface, one finds that the symmetries of the crystal are not incorporated in the FS. In other words, the triangulated FS obtained is not invariant under all the symmetry operations of the crystal ---as it should---, due to the broken symmetries introduced by the initial tetrahedral tessellation of the BZ. This led us to modify the original method, and to find an irreducible isosurface in a previously detected irreducible BZ in Cartesian coordinates. 
As we will see in the next sections, this variation provides an elegant and effective way of obtaining a fully symmetric triangulated Fermi surface, but also poses some technical difficulties, which are nevertheless overcome by our procedure. We describe our approach following several steps. First, the irreducible volume of the BZ (IBZ) is identified. Then a tetrahedral tessellation of the IBZ is generated, from which a triangulated irreducible Fermi surface (IFS) is obtained using the linear tetrahedron method. As an optional intermediate step, different mesh improvement techniques are proposed and implemented in order to increase the quality of the triangular mesh. Finally, the IFS is rotated using all the symmetries of the crystal, resulting in a high-quality and fully symmetric triangulated Fermi surface. \subsection{Detection of the irreducible wedge of the Brillouin zone} In any crystal system, an irreducible wedge of the Brillouin zone exists from which the full BZ can be recovered by applying all the symmetry operations that the crystal possesses. The first task in our procedure will be to identify such an irreducible volume of the Brillouin zone, for any system crystallizing in a given space group. Geometrically speaking, the BZ is a polyhedron composed of polygonal faces joined by edges. We first make the observation that, apart from the $\Gamma$ point that lies in the center of the BZ, the high symmetry points of the 14 types of Bravais lattices always lie either in the center of a face, or in the corner or the middle-point of an edge \cite{AroyoAC2014}. As an illustrative example, we show in Fig.~\ref{fig:Fig1}(a) the BZ of the FCC lattice (space group $\mathrm{Fm}\overline{3}\mathrm{m}$), in which all the corners, the centers of the faces and the middle-points of the edges have been highlighted with blue dots. 
Joining each of the points in the edges with the points at their nearest corners, and these two in turn with the points at the center of their corresponding face, we can create a triangular tessellation of the polygonal faces of the BZ. Moreover, joining all of these points with the $\Gamma$ point in the center of the BZ, we can obtain a star tetrahedral tessellation of the whole BZ volume. Given that, by definition, all the non-equivalent high symmetry points have to be included in the irreducible wedge of the BZ, we can represent the IBZ as the sum of several of these tetrahedra. We show in Fig.~\ref{fig:Fig1}(a) the coarse tetrahedral tessellation of the FCC BZ volume obtained in this way, in which some tetrahedra have been removed for ease of visualization. In order to determine which are the irreducible tetrahedra that we have to include in order to form the IBZ of a given system, we need to know the particular symmetry operations belonging to its space group. The procedure is similar to the one used to detect the irreducible number of ${\bf k}$-points within a regular grid that can form a full mesh in the BZ by applying all the symmetry operations of a particular system. In a first step, one selects an arbitrary tetrahedron and applies all the symmetry operations allowed by the point group. In this way, we detect the volume of the BZ connected by symmetry to the initially selected tetrahedron. We repeat this operation for all the tetrahedra in the initial tessellation of the BZ volume, constructing in this way the irreducible volume of the BZ. As an example, the resulting IBZ volume for the FCC lattice, which is composed by three tetrahedra, is shown in Fig.~\ref{fig:Fig1}(b). As a result of the translational invariance of crystals, at the BZ boundary the Fermi surface possesses extra symmetries beyond the point group. In this respect, we have to check for further reduction of the irreducible wedge at the BZ boundary. 
For this purpose, we repeat a procedure similar to the one described above, but only for the triangular facets on the boundary of the IBZ volume. We now apply $\mathcal{S}+\mathbf{G}$ operations, where $\mathcal{S}$ is a symmetry rotation and $\mathbf{G}$ is a reciprocal lattice vector, and check if any of the facets can be recovered from an irreducible subgroup. Continuing with the FCC example, we find that one of the three triangular facets ($\mathrm{F}_{3}$) can be recovered in this way from its neighbor facet ($\mathrm{F}_{2}$). The irreducible facets of the IBZ ($\mathrm{F}_{1}$ and $\mathrm{F}_{2}$) are highlighted in blue in Fig.~\ref{fig:Fig1}(b). \subsection{Tetrahedral tessellation of the irreducible wedge of the Brillouin zone} The next step in our procedure will be to obtain a fine tetrahedral tessellation of the irreducible wedge of the Brillouin zone identified in the previous section. The tetrahedral tessellation of a general polyhedron defining the IBZ in Cartesian coordinates is not trivial, and the $\mathcal{S}+\mathbf{G}$ symmetries mentioned in the previous section force us to proceed with care. We describe the scheme we have implemented for this purpose in the following. The first task will be to triangulate the faces of the IBZ volume in such a way that all the $\mathcal{S}+\mathbf{G}$ symmetries are fulfilled. To this end, in a first step we triangulate the irreducible facets that are related to the non-irreducible ones by symmetry ($\mathrm{F}_{2}$ in Fig.~\ref{fig:Fig1}(c)). In a second step, we obtain the triangulation of the non-irreducible facets by applying the corresponding symmetry operations ($\mathrm{F}_{3}$ in Fig.~\ref{fig:Fig1}(c)). In a third step, we triangulate all the rest of the irreducible facets, considering the constraints imposed by the nodes already present in the facet-joining edges. For the triangulation of each facet, we first distribute nodes throughout the facet-plane.
This distribution is done in such a way that the projection of a given mesh of points in the reciprocal-lattice vectors $\left\lbrace n_{k_{1}},n_{k_{2}},n_{k_{3}}\right\rbrace$ onto the facet-plane is approximately matched, setting the condition that the nodes on each edge are regularly spaced. Then a constrained Delaunay triangulation is constructed from these nodes using the {\sc Triangle} code \cite{ShewchukSpringer1996}, in which the edges of the facet are maintained. As an example, the triangulation obtained in such a way for the boundary of the FCC IBZ volume is shown in Fig.\ref{fig:Fig1}(c), where the facet obtained by symmetry is highlighted in red. Next, as shown in Fig.~\ref{fig:Fig1}(d), we populate the IBZ volume with a set of regularly spaced points, selected from the points of the $\left\lbrace n_{k_{1}},n_{k_{2}},n_{k_{3}}\right\rbrace$ mesh that fall within this volume. Finally, a constrained Delaunay tetrahedralization is constructed using the {\sc TetGen} code \cite{SiACM2015}, in which the boundary triangulation is maintained and the volume-nodes are added as Steiner points. The resulting tetrahedral tessellation of the FCC IBZ example is shown in Fig.~\ref{fig:Fig1}(e), in which some of the tetrahedra on the upper part have been removed for ease of visualization. \subsection{Linear tetrahedron method and triangle mesh refinement} Filling the IBZ volume with tetrahedra, as described above, allows us to apply the linear tetrahedron method \cite{BlochlPRB1994} in order to obtain a numerical representation of the irreducible Fermi surface in terms of a triangle mesh. Each tetrahedron marks four points in the reciprocal space in which the energies have to be computed, as represented schematically in Fig.~\ref{fig:Fig1}(f) by $\left( \varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4}\right)$. Then we check if the energy corresponding to the isosurface lies within the values at the corners of the tetrahedron. 
In the affirmative case, a linear interpolation among the values at the corners gives an approximation to the points in which the isosurface crosses the tetrahedral edges. Depending on the number of edges that the isosurface crosses, one or two triangles can be formed inside the tetrahedron, as discussed in detail, for example, in Ref.~\cite{EigurenNJP2014}. The simplest case is shown in Fig.~\ref{fig:Fig1}(f), in which the isosurface (denoted as $\varepsilon_{\mathrm{F}}$) crosses three of the tetrahedral edges, directly forming a triangle inside. All the triangles constructed in this way form a two-dimensional triangle mesh, representing numerically the Fermi surface within the IBZ. As an illustrative example, we show in Fig.~\ref{fig:Fig1}(g) the isosurface obtained for FCC-Cu from the tetrahedral tessellation of the IBZ shown in Fig.~\ref{fig:Fig1}(e)~\footnote{The ground state calculations for FCC-Cu are performed within the generalized gradient approximation of density functional theory \cite{PerdewPRL1996} on a coarse $12\times12\times12$ ${\bf k}$-point grid using the {\sc Quantum ESPRESSO} package \cite{QE2017}, and the energies at the tetrahedral vertices are obtained by means of Wannier interpolation \cite{MarzariPRB1997,SouzaPRB2001,PizziJPCM2020}.}. A clearer view of the triangle mesh formed in this example is shown in Fig.~\ref{fig:Fig1}(h). As can be noted from this figure, even if a good initial tetrahedral tessellation is provided, the resulting triangular mesh may be of a low quality, meaning that the isosurface may present an inhomogeneous density of vertices which will most likely form a set of triangles with a poor aspect ratio. Although not strictly necessary, it is highly desirable to incorporate procedures to improve the quality of the mesh, possibly eliminating redundant and poor-quality triangles.
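The triangle-extraction step of the linear tetrahedron method described above can be sketched as follows. This is an illustrative Python fragment, not the actual implementation; in the 2-vs-2 case it assumes the crossing points are generated in the order $i_1j_1$, $i_1j_2$, $i_2j_1$, $i_2j_2$, so that the cyclic order of the planar quad is $0$-$1$-$3$-$2$.

```python
import numpy as np

def tet_isosurface(verts, energies, e_iso):
    """Triangles formed by a linear isosurface inside one tetrahedron.
    Returns a list of (3, 3) arrays, one row per crossing point."""
    below = [i for i in range(4) if energies[i] < e_iso]
    above = [i for i in range(4) if energies[i] >= e_iso]
    if not below or not above:
        return []                      # the isosurface misses this tetrahedron
    # one linearly interpolated crossing point per below-above edge
    cuts = []
    for i in below:
        for j in above:
            t = (e_iso - energies[i]) / (energies[j] - energies[i])
            cuts.append(verts[i] + t * (verts[j] - verts[i]))
    cuts = np.array(cuts)
    if len(cuts) == 3:                 # 1-vs-3 split: a single triangle
        return [cuts]
    # 2-vs-2 split: four coplanar points in cyclic order 0-1-3-2,
    # split into two triangles
    return [cuts[[0, 1, 3]], cuts[[0, 3, 2]]]

tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
print(len(tet_isosurface(tet, [0.0, 1.0, 1.0, 1.0], 0.5)))  # -> 1
print(len(tet_isosurface(tet, [0.0, 0.0, 1.0, 1.0], 0.5)))  # -> 2
```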
We have implemented two different mesh refinement techniques, namely the mesh-simplification and the vertex-relaxation procedures \cite{botschCRC2010}. Special care has been taken with the vertices at the BZ boundary, so that the borders of the irreducible Fermi surface are preserved, and the $\mathcal{S}+\mathbf{G}$ symmetries are maintained after the refinement process. In the mesh-simplification procedure, triangles with a poor shape-quality are detected first, following the criterion that one of their edges is much shorter than the perimeter of the triangle, according to a given threshold value. This short edge is collapsed, so that one vertex and one triangle are removed from the mesh, though maintaining the original topology. This procedure is repeated iteratively until all poor shape-quality triangles are eliminated. The simplified mesh obtained after this procedure in the FCC-Cu example is shown in Fig.~\ref{fig:Fig1}(i). The so-called vertex-relaxation procedure consists of two steps. First, a tangential relaxation of the vertices is performed. Each vertex is moved from its position seeking a homogeneous distance with respect to all of its neighbor vertices. However, this movement is constrained to the tangential plane of the vertex, defined by its velocity vector, $\mathbf{v}_{n\mathbf{k}}=\nabla \varepsilon_{n\mathbf{k}} / \hbar$. Note that this vector for a $\mathbf{k}$-point at the Fermi surface is, by definition, the normal vector of the Fermi surface at this point. The Fermi velocities $\mathbf{v}_{n\mathbf{k}}$ at the triangular vertices are computed efficiently by means of the Wannier interpolation method \cite{MarzariPRB1997,SouzaPRB2001,PizziJPCM2020}. This procedure is repeated iteratively for all the vertices in the mesh, resulting in a homogeneous distribution of triangles with similar areas. Finally, the vertices are relaxed along their normal vector.
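The tangential-relaxation step can be sketched as follows. This is an illustrative Python fragment only (the edge-collapse simplification and the final relaxation along the normal are omitted, and the supplied normals stand in for the Fermi velocities $\mathbf{v}_{n\mathbf{k}}$):

```python
import numpy as np

def tangential_relaxation(verts, neighbors, normals, n_iter=10):
    """Move each vertex toward the centroid of its neighbors, constrained
    to the tangent plane defined by its (Fermi-velocity) normal vector."""
    v = verts.copy()
    for _ in range(n_iter):
        for i, nbrs in enumerate(neighbors):
            if not nbrs:               # boundary vertices are kept fixed
                continue
            n = normals[i] / np.linalg.norm(normals[i])
            step = v[nbrs].mean(axis=0) - v[i]
            step -= n * (step @ n)     # project out the normal component
            v[i] = v[i] + step
    return v

# Flat patch: four fixed corners and one off-center interior vertex, all
# with normal +z; the interior vertex relaxes to the centroid (0, 0, 0).
verts = np.array([[0.5, 0.2, 0.0],
                  [1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]], float)
neighbors = [[1, 2, 3, 4], [], [], [], []]
normals = np.tile([0.0, 0.0, 1.0], (5, 1))
relaxed = tangential_relaxation(verts, neighbors, normals)
print(relaxed[0])  # -> [0. 0. 0.]
```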
This final relaxation along the normal direction compensates the error introduced by the linear interpolation in the regular linear tetrahedron method, so that the final relaxed vertices are located at $\varepsilon_{F}$ to great accuracy \cite{EigurenNJP2014}. The final refined mesh for the FCC-Cu example is shown in Fig.~\ref{fig:Fig1}(j), where the improvement in the quality of the mesh is clearly appreciated. These mesh refinement techniques translate into a considerable accuracy and efficiency gain in the computation of Fermi surface integrals. \subsection{Rotation to a fully symmetric Fermi surface} The very last step in our procedure consists of applying all the symmetry operations to the irreducible Fermi surface described in the previous sections, in order to obtain a fully symmetric Fermi surface mesh, which is invariant under all the symmetry operations of the crystal up to numerical precision. The irreducible part of the Fermi surface of the FCC-Cu example is shown within the full BZ in Fig.~\ref{fig:Fig1}(k). The complete Fermi surface mesh obtained by rotation of the irreducible part is shown in Fig.~\ref{fig:Fig1}(l). As can be appreciated in the figure, our procedure provides a high-quality triangulated Fermi surface, which fulfills all the symmetries of the crystal. We note that working on the IBZ as a prior step, and making use of efficient computational geometry packages \cite{ShewchukSpringer1996,SiACM2015} and the Wannier interpolation method \cite{MarzariPRB1997,SouzaPRB2001,PizziJPCM2020}, renders the computational cost of constructing the fully symmetric triangular mesh minimal compared, for example, to that of a typical ground state calculation.
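The rotation of the irreducible patch into the full Fermi surface can be sketched as follows. This is an illustrative Python fragment; vertex merging is done here by rounding coordinates, whereas the actual implementation must also handle the $\mathcal{S}+\mathbf{G}$ translations at the BZ boundary.

```python
import numpy as np

def rotate_surface(verts, tris, rotations, digits=6):
    """Replicate an irreducible triangulated patch with all point-group
    rotations, merging duplicated vertices on the patch borders."""
    all_v, index, out_tris = [], {}, []
    for R in rotations:
        for tri in tris:
            new_tri = []
            for iv in tri:
                p = tuple(np.round(R @ verts[iv], digits))
                if p not in index:
                    index[p] = len(all_v)
                    all_v.append(p)
                new_tri.append(index[p])
            out_tris.append(new_tri)
    return np.array(all_v), out_tris

def rotz(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# One triangle replicated by a 4-fold rotation about z: 4 triangles,
# sharing the apex vertex on the rotation axis.
rots = [rotz(k * np.pi / 2) for k in range(4)]
verts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
v, t = rotate_surface(verts, [[0, 1, 2]], rots)
print(len(t), len(v))  # -> 4 5
```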
As a last remark, we mention that even though the linear tetrahedron method has already been used in other works to obtain a triangulated Fermi surface \cite{KawamuraCPC2019,MustafaPRB2016,RittwegerJPCM2017}, to the best of our knowledge these works did not incorporate the symmetries of the FS exactly in the triangulated mesh, as is done in our procedure. In the next sections we will show the importance of this aspect in the construction of the Helmholtz Fermi-surface harmonics basis set, and in its application to compress anisotropic quantities on the Fermi surface into a few coefficients. \section{Symmetries on the Helmholtz Fermi-surface harmonics basis set} \label{sec:symfsh} In this section, we show how the fully symmetric triangulated Fermi surface obtained by the procedure described in Sec.~\ref{sec:trisurf} provides a direct way to incorporate the symmetries of the crystal in the HFSH basis set. For completeness, we first review the main aspects of the method proposed in Ref.~\cite{EigurenNJP2014} to obtain the basis set. The HFSHs are defined as the eigenmodes of a velocity-weighted Laplace-Beltrami operator on the curved Fermi surface, \begin{equation} \label{eq:lapl-belt} v({\bf k}) \nabla^{2}_{\bf k} \Phi_{L}({\bf k}) + \omega_{L} \Phi_{L}({\bf k}) = 0 , \end{equation} where $\omega_{L}$ are the eigenvalues associated with the HFSH set functions $\left\lbrace \Phi_{L}({\bf k}) \right\rbrace$, which obey the following orthogonality condition, \begin{equation} \label{eq:orth_cond} \int \frac{d^{2}s_{\bf k}}{v({\bf k})} \Phi_{L'}({\bf k}) \Phi_{L}({\bf k}) = \delta_{L',L} \int \frac{d^{2}s_{\bf k}}{v({\bf k})} ~.
\end{equation} The triangular tessellation of the Fermi surface allows for a numerical solution of a discretized version of Eq.~\ref{eq:lapl-belt}, which is transformed into a generalized sparse eigenvalue problem: \begin{equation} \label{eq:discr-lapl-belt} \frac{v({\bf k}_{i})}{S_{i}} \sum_{j} \Omega_{i,j} \Phi_{L}({\bf k}_{j}) = \omega_{L} \Phi_{L}({\bf k}_{i})~. \end{equation} In this expression, $i$ and $j$ are indices for vertices on the triangulated mesh, ${\bf k}_{i}$ represents the coordinates of a vertex $i$ in the reciprocal space, and $S_{i}$ its control area, defined as the sum of $\frac{1}{3}$ of its neighboring triangle areas. The discretized Laplace-Beltrami operator $\Omega_{i,j}$ takes the form \begin{equation} \label{eq:discr-lapl-omegaij} \Omega_{i,j} = \begin{cases} -\frac{1}{2} \left[ \cot(\alpha_{i,j}) + \cot(\beta_{i,j}) \right] & i\ne j\\ -\sum_{j'\ne i} \Omega_{i,j'} & i=j \end{cases} ~, \end{equation} where $\alpha_{i,j}$ and $\beta_{i,j}$ are the two opposite angles of the triangles sharing the edge joining the vertices $i$ and $j$. Note that with this definition each row of $\Omega_{i,j}$ sums to zero, so that the constant function is an exact eigenvector with $\omega_{L}=0$. By virtue of the completeness of the basis set, we can efficiently represent any anisotropic function $F({\bf k}_i)$ defined on the triangulated Fermi surface by performing an expansion in the HFSH basis set, \begin{equation} \label{eq:fsh_expansion} F({\bf k}_i) = \sum_{L} c_{L}(F) \Phi_{L}({\bf k}_{i}) ~, \end{equation} where the expansion coefficients are defined by the following FS integrals: \begin{equation} \label{eq:fsh_coef} c_{L}(F) \equiv \frac{\int_{S_{F}} \frac{d^{2}s_{\bf k}}{v({\bf k})} \Phi_{L}({\bf k}) F({\bf k}) } { \int_{S_{F}} \frac{d^{2}s_{\bf k}}{v({\bf k})} } \approx \frac{ \sum_{i} \frac{S_{i}}{v({\bf k}_i)} \Phi_{L}({\bf k}_i) F({\bf k}_i) }{ \sum_{i} \frac{S_{i}}{v({\bf k}_i)} } ~. \end{equation} We refer the reader to Ref.~\cite{EigurenNJP2014} for further details and properties of the HFSH basis set.
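A minimal, self-contained illustration of Eqs.~(\ref{eq:discr-lapl-belt})-(\ref{eq:discr-lapl-omegaij}) is given below: the cotangent-weighted operator is assembled on a toy octahedral "Fermi surface" with constant velocity and solved as a generalized eigenvalue problem. This is a dense Python sketch, not the actual sparse implementation; the resulting spectrum $\{0, 2, 2, 2, 3, 3\}$ already displays the kind of threefold and twofold degeneracies discussed below.

```python
import numpy as np
from scipy.linalg import eigh

def hfsh_eigenmodes(verts, tris, vel):
    """Cotangent discretization of the velocity-weighted Laplace-Beltrami
    operator, solved as a generalized eigenvalue problem."""
    n = len(verts)
    omega = np.zeros((n, n))
    area = np.zeros(n)
    for tri in tris:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            u, w = verts[i] - verts[o], verts[j] - verts[o]
            cot = (u @ w) / np.linalg.norm(np.cross(u, w))
            omega[i, j] -= 0.5 * cot   # cotangent of the angle at vertex o
            omega[j, i] -= 0.5 * cot
        a, b, c = verts[tri[0]], verts[tri[1]], verts[tri[2]]
        s = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        area[tri] += s / 3.0           # control area: 1/3 of each triangle
    # zero row sums: constants are exact eigenmodes with omega_L = 0
    omega[np.diag_indices(n)] = -omega.sum(axis=1)
    evals, evecs = eigh(omega, np.diag(area / vel))
    return evals, evecs

# Octahedron as a toy "Fermi surface" with constant velocity:
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
tris = [[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
        [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]]
w, _ = hfsh_eigenmodes(verts, tris, np.ones(6))
print(np.round(w, 6))  # eigenvalues 0, 2 (threefold), 3 (twofold)
```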
\subsection{Degenerate subspaces} Clearly, the symmetries of the surface on which Eq.~\ref{eq:lapl-belt} is defined translate into symmetry properties of the HFSH basis set. Given that the symmetries of the surface are exactly maintained in the triangulated mesh, these properties will be preserved in the discretized form of Eq.~\ref{eq:discr-lapl-belt}. The fulfillment of this requirement is guaranteed if the mesh is constructed following the procedure described in Sec.~\ref{sec:trisurf}. We note that any symmetric mesh obtained by an alternative procedure is also valid for the conclusions drawn in the rest of the paper. A direct consequence of retaining the symmetries of the surface appears in the degeneracies of the energy levels $\omega_{L}$. For instance, for a perfect sphere, the full rotational symmetry enforces the threefold and fivefold degeneracies in the $p$ and $d$ spherical harmonics, respectively. Even though the full rotational symmetry of the sphere is broken in a realistic Fermi surface due to the crystal field, the possible discrete rotational symmetries of the crystal may enforce subspaces within the HFSH basis set which are exactly degenerate. Continuing with the FCC-Cu example, we show in Fig.~\ref{fig:Fig2}(a) the first nine HFSH basis functions $\Phi_{L}({\bf k})$, obtained as solutions of Eq.~\ref{eq:discr-lapl-belt} on the symmetric mesh produced in Sec.~\ref{sec:trisurf}. The corresponding eigenvalues $\omega_{L}$ are shown in Fig.~\ref{fig:Fig2}(b), compared with the eigenvalues obtained on a mesh in which the crystal symmetries are not explicitly enforced, as in Ref.~\cite{EigurenNJP2014}. As discussed in Ref.~\cite{EigurenNJP2014}, the threefold degeneracy in the $p$-like harmonics is maintained, but the energies of the $d$-like states are split into two subspaces of threefold and twofold degeneracies.
However, a closer look reveals that these degeneracies are fulfilled only approximately on the non-symmetric mesh, as shown in Fig.~\ref{fig:Fig2}(c) for the energies of the $p$-like harmonics. As we can see in this figure, not incorporating the symmetries exactly on the mesh can introduce errors of $\sim 0.15\%$ in the energies. In contrast, when using the exactly symmetric mesh we obtain equal energies up to numerical accuracy, with relative differences of the order of $\sim 10^{-10}$ in this particular example. Similar results are obtained for all the degenerate subspaces of the full HFSH basis set. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{./figure_2.pdf} \caption{Degeneracies in the HFSH eigenmodes. (a) First nine HFSH basis functions for FCC-Cu, and (b) their corresponding eigenvalues. The eigenvalues obtained using the fully symmetric mesh are shown in blue, and the eigenvalues obtained in Ref.~\cite{EigurenNJP2014} are shown in orange for comparison. (c) Zoom on the first non-zero eigenvalues, highlighting the numerical accuracy of the degeneracy in the symmetric mesh. \label{fig:Fig2}} \end{figure} \subsection{Fully symmetric HFSHs} \label{sec:fullsymhfsh} As the triangular mesh in which Eq.~\ref{eq:discr-lapl-belt} is solved and in which the $\Phi_{L}({\bf k}_{i})$ functions are defined is exactly symmetric, we can identify numerically those functions within the HFSH basis set that are invariant under all the symmetry operations of the crystal. We will name this subset the \emph{fully symmetric} HFSHs, and label them with the symbol $\tilde{L}$. Formally, they are identified as the functions within the HFSH basis set that satisfy the following condition: \begin{equation} \label{eq:fulsym_hfsh} \Phi_{\tilde{L}}(\mathcal{S}_{n}{\bf k}_{i}) = \Phi_{\tilde{L}}({\bf k}_{i}) ~, \end{equation} for all $n$, where $\mathcal{S}_{n}$ is a symmetry operation of the crystal. 
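Given an exactly symmetric mesh, the test of Eq.~(\ref{eq:fulsym_hfsh}) reduces to comparing each eigenmode with itself under the vertex permutation induced by every symmetry operation. The following Python fragment is an illustrative sketch on a toy four-vertex example with a $C_{4}$ rotation (the names and tolerances are our own choices, not those of the actual implementation):

```python
import numpy as np

def fully_symmetric_subset(verts, modes, rotations, tol=1e-6):
    """Indices L of the eigenmodes (columns of `modes`) that satisfy
    Phi_L(S k_i) = Phi_L(k_i) for every symmetry operation S."""
    lookup = {tuple(np.round(p, 6)): i for i, p in enumerate(verts)}
    perms = [[lookup[tuple(np.round(R @ p, 6))] for p in verts]
             for R in rotations]
    return [L for L in range(modes.shape[1])
            if all(np.allclose(modes[perm, L], modes[:, L], atol=tol)
                   for perm in perms)]

# Four vertices on a C4-symmetric "surface": the constant mode is fully
# symmetric, while the alternating mode changes under a 90-degree rotation.
verts = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
c4 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
modes = np.array([[1, 1], [1, -1], [1, 1], [1, -1]], float)
print(fully_symmetric_subset(verts, modes, [np.eye(3), c4]))  # -> [0]
```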
As an example, in the FCC-Cu case, out of the first 400 HFSH functions only 12 satisfy Eq.~(\ref{eq:fulsym_hfsh}) and are shown in Fig.~\ref{fig:Fig3}(a). The fact that most of the physical properties defined on the Fermi surface are invariant under all the symmetry operations of the crystal imposes severe restrictions on their expansion in the HFSH basis set. As can be directly deduced from Eq.~\ref{eq:fsh_expansion}, if a given function $F({\bf k}_i)$ is fully symmetric on the FS, only those coefficients corresponding to the fully symmetric HFSHs can give a finite contribution in the expansion. We now demonstrate that this restriction is satisfied in our implementation up to numerical precision. As an illustrative example, we consider the square of the Fermi velocity, $F({\bf k})=v^{2}({\bf k})$, clearly a fully symmetric function [see inset of Fig.~\ref{fig:Fig3}(b)]. We show in Fig.~\ref{fig:Fig3}(b) the first 400 coefficients of the expansion of this function in the HFSH basis set [see Eq.~\ref{eq:fsh_coef}], relative to the value of the first coefficient, i.e., the FS average $v^{2}_{0}\equiv\langle v^{2}({\bf k}) \rangle_{\mathrm{FS}}$. As can be appreciated in this figure, the values of the expansion coefficients decrease rapidly for larger HFSH indices. This trend is in line with the fact that HFSHs with higher energies oscillate more intensely, and therefore only add finer details to the anisotropy of the expanded function. The more isotropic the quantity to be transformed, the fewer coefficients are needed for a faithful representation of its anisotropy. In the extreme case of a constant function, only the first coefficient will be finite. Most importantly, we see that only those coefficients corresponding to the fully symmetric HFSHs shown in Fig.~\ref{fig:Fig3}(a) have a finite value, the rest being all strictly zero up to numerical precision.
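The discretized coefficients of Eq.~(\ref{eq:fsh_coef}) are simple weighted averages over the mesh vertices. A minimal Python sketch on a toy two-vertex "surface" with an orthogonal two-mode basis (illustrative only):

```python
import numpy as np

def hfsh_coefficients(F, modes, area, vel):
    """Discretized expansion coefficients: weighted averages of F against
    each basis function, with Fermi-surface weights S_i / v(k_i)."""
    w = area / vel
    return np.array([(w * modes[:, L] * F).sum() / w.sum()
                     for L in range(modes.shape[1])])

# A constant function only excites the constant (L = 0) mode.
modes = np.array([[1.0, 1.0], [1.0, -1.0]])
c = hfsh_coefficients(np.array([3.0, 3.0]), modes, np.ones(2), np.ones(2))
print(c)  # -> [3. 0.]
```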
We show this more clearly in Fig.~\ref{fig:Fig3}(c), which zooms into the last two finite coefficients of Fig.~\ref{fig:Fig3}(b). The magnitude of these coefficients is only $\sim 0.5\%$ of the average value, showing that we can achieve this accuracy in the representation of the anisotropic function $v^{2}({\bf k})$ of this example by using only 12 coefficients. All in all, the use of a fully symmetric Fermi surface for the construction of the HFSH basis set, and the identification of the fully symmetric HFSH subset, allows us to obtain an extra reduction of at least one order of magnitude in the computational workload to describe anisotropic quantities on the FS with respect to Ref.~\cite{EigurenNJP2014}. Note that this method already introduced a saving factor of approximately two orders of magnitude with respect to the conventional ${\bf k}$-space representation. In the next section we apply this methodology to the electron-phonon problem, where the ${\bf k}$-space representation of the related quantities generates a major bottleneck in the computation of prominent properties of metals, such as superconductivity. \begin{figure}[b] \includegraphics[width=1.0\columnwidth]{./figure_3.pdf} \caption{Fully symmetric HFSHs. (a) First 12 fully symmetric HFSH basis functions for FCC-Cu, together with their index in the full HFSH basis set. (b) First 400 expansion coefficients in the HFSH basis set for the squared modulus of the electron velocity, relative to the $L=0$ coefficient. The magnitude and anisotropy of $v^{2}({\bf k})$ over the FS are shown in the inset in atomic units. Only the fully symmetric HFSH functions shown in (a) give finite contributions, as highlighted in (c) where a zoom on the last two finite coefficients is shown.
\label{fig:Fig3}} \end{figure} \section{Electron-phonon anisotropy in the HFSH representation} \label{sec:elphfsh} \begin{figure*}[ht] \includegraphics[width=2.0\columnwidth]{./figure_4.pdf} \caption{(a) Anisotropic electron-phonon mass-enhancement parameter $\lambda_{{\bf k}}$ on the Fermi surface of MgB$_2$, separated into the four different FS sheets. (b) First four fully symmetric HFSH basis functions for each FS sheet. The hexagonal BZ and the corresponding IBZ are shown in the top left corner. (c) First ten expansion coefficients of $\lambda_{{\bf k}}$ for each FS sheet in the fully symmetric HFSH subset. The inset shows the same result with a logarithmic scale on the $y$-axis. \label{fig:Fig4}} \end{figure*} Given that the energy scale of phonons ($\sim$meV) is roughly three orders of magnitude smaller than the energy scale of the electrons ($\sim$eV), the electron-phonon scattering events are, to a good approximation for most metals, limited to the Fermi surface. Nonetheless, the variation of the electronic wave functions and velocities on the FS is usually sizable, and so is the variation of the phonon frequencies and change of potential for the different momentum vectors joining the electron states on the FS. This implies that, for an accurate description of electron-phonon problems, the anisotropy of the matrix elements and the related quantities on the FS has to be accurately taken into account, which poses a major computational bottleneck in practical calculations. In this section we will show that the HFSH representation presented in Sec.~\ref{sec:symfsh} provides an elegant and extremely efficient solution to this difficulty.
As a representative anisotropic quantity related to the electron-phonon problem, we consider the momentum-dependent mass enhancement parameter for electron states at the FS, \begin{equation}\label{eq:lambda_k} \lambda_{n{\bf k}} = \frac{2}{\Omega_{\textrm{BZ}}} \sum_{m\nu} \int_{S_{F}} \frac{d^{2}s_{{\bf k}'}}{v({\bf k}')} ~ \frac{|g^{\nu}_{mn}({\bf k},{\bf k}')|^{2}}{\omega^{\nu}_{{\bf k}'-{\bf k}}} ~, \end{equation} where $\Omega_{\textrm{BZ}}$ is the BZ volume, $n$ and $m$ represent electron band indices, $\omega^{\nu}_{{\bf k}'-{\bf k}}$ is the frequency of the phonon mode $\nu$ at momentum ${\bf q}\equiv{\bf k}'-{\bf k}$, and $g^{\nu}_{mn}({\bf k},{\bf k}')$ is the electron-phonon matrix element for the scattering of an electron $n{\bf k}$ to $m{\bf k}'$ via emission/absorption of a phonon $\nu{\bf q}$. The $\lambda_{n{\bf k}}$ parameter is the most meaningful measure of the quasiparticle renormalization driven by electron-phonon interactions, directly affecting several measurable properties such as the electronic heat capacity or the amplitude of the de Haas-van Alphen oscillations \cite{Grimvall1981}. Its average over the FS is a central parameter in simplified expressions for the critical temperature of superconductors \cite{McMillanPR1968,AllenDynesPRB1975}, and its two-index and frequency-dependent generalization is crucial in the full Eliashberg theory of superconductivity \cite{AllenMitrovic1983}. We note, however, that the compression described below is equally applicable to any anisotropic quantity defined on the FS, such as transport scattering rates $1/\tau_{{\bf k}}$, or the superconducting energy gap $\Delta_{{\bf k}}$. \subsection{HEX-MgB$_2$} As a first example we consider the prototypical anisotropic phonon-mediated superconductor MgB$_2$. The calculation parameters have been chosen with the aim of making the comparison with previous works as direct as possible.
The ground state calculations have been performed with the {\sc Quantum ESPRESSO} package \cite{QE2017} within the local density approximation of density functional theory \cite{PerdewPRB1981} on a $24^{3}$ \textbf{k}-point grid, using norm-conserving pseudopotentials and a kinetic energy cutoff of 60~Ry in the plane-wave expansion of valence electronic wave functions. The lattice parameters have been set to the experimental values of $a=5.832~\mathrm{bohr}$ and $c/a=1.142$ \cite{NagamatsuNAT2001}. Phonon properties have been computed within density functional perturbation theory \cite{BaroniRMP2001} on an $8^3$ \textbf{q}-point grid. Electron-phonon matrix elements have been computed on a coarse $(8^{3},8^{3})$ \textbf{k} and \textbf{q}-point grid, and the Wannier interpolation method \cite{MarzariPRB1997,SouzaPRB2001,PizziJPCM2020,GiustinoPRB2007,EigurenPRB2008} has been used to interpolate the matrix elements to the triangular vertices \cite{EigurenPRL2008,EigurenPRB2008,EigurenPRB2009,GoiricelayaPRB2018,GoiricelayaCMP2019,GoiricelayaPRB2020}. As a last remark, we note that the high-quality triangulated Fermi surface as obtained by the method presented in Sec.~\ref{sec:trisurf} allows for an efficient numerical integration of Eq.~(\ref{eq:lambda_k}), \begin{equation}\label{eq:num_lambda_k} \lambda_{n{\bf k}_{i}} \approx \frac{2}{\Omega_{\textrm{BZ}}} \sum_{m\nu} \sum_{j} \frac{S_{j}}{v({\bf k}_{j})} ~ \frac{|g^{\nu}_{mn}({\bf k}_{i},{\bf k}_{j})|^{2}}{\omega^{\nu}_{{\bf k}_{j}-{\bf k}_{i}}} ~, \end{equation} where only the matrix elements at the ${\bf k}$-points lying on the FS vertices are needed. We show in Fig.~\ref{fig:Fig4}(a) our results for the anisotropic mass-enhancement parameter of MgB$_2$, in which the four different FS sheets have been separated for clarity.
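The discretized sum of Eq.~(\ref{eq:num_lambda_k}) can be sketched as follows, restricted to a single electron band for brevity. This is an illustrative Python fragment; the array layout `g2[nu, i, j]` is our own choice for the sketch, not the layout of the actual implementation.

```python
import numpy as np

def lambda_at_vertex(i, g2, omega_ph, area, vel, bz_volume):
    """Discretized mass-enhancement parameter at FS vertex i, single band:
    g2[nu, i, j] holds |g|^2 and omega_ph[nu, i, j] the phonon frequency
    connecting vertices i and j; area[j] / vel[j] is the FS weight."""
    lam = 0.0
    for nu in range(g2.shape[0]):
        lam += np.sum(area / vel * g2[nu, i] / omega_ph[nu, i])
    return 2.0 * lam / bz_volume

# Toy model: one phonon mode, constant coupling and frequency, and two
# vertices whose control areas sum to the full FS area.
g2 = np.ones((1, 2, 2))
omega_ph = np.ones((1, 2, 2))
lam = lambda_at_vertex(0, g2, omega_ph, np.array([0.5, 0.5]), np.ones(2), 1.0)
print(lam)  # -> 2.0
```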
In agreement with previous works \cite{EigurenPRB2008,ChoiPRB2002,MarginePRB2013}, we find that $\lambda_{n{\bf k}}$ takes considerably large values in the range of $1.00$--$1.37$ on the cylinder-like FS sheets corresponding to the $\sigma$ bands. In contrast, the FS sheets formed by the $\pi$ bands couple much less efficiently to phonons, resulting in smaller $\lambda_{n{\bf k}}$ values in the range of $0.35$--$0.47$. Apart from the arrangement of the absolute values of the $\lambda_{n{\bf k}}$ parameter in two main groups, this figure also shows that its anisotropy within each FS sheet is sizable. For the Fermi surface averaged mass-enhancement parameter we obtain $\lambda=0.73$, also in very good agreement with previous calculations \cite{EigurenPRB2008,ChoiPRB2002,MarginePRB2013}. In particular, we find that our results agree very well with the values presented in Ref.~\cite{MarginePRB2013}, where a systematic convergence test of $\lambda$ with respect to the \textbf{k}-point sampling was performed. Remarkably, while they showed that $\sim 10^{5}$ points are needed in the three-dimensional BZ to obtain converged results when approximating the FS with a smearing function, we already obtain convergence in the average value and the distribution of $\lambda_{{\bf k}}$ with $\sim 8\times 10^{3}$ points in the triangulated Fermi surface. Now we move on to the HFSH representation. Being a scalar quantity, the $\lambda_{n{\bf k}}$ parameter is invariant under all the symmetry operations of the crystal, as can be appreciated in Fig.~\ref{fig:Fig4}(a). Therefore, as discussed in Sec.~\ref{sec:fullsymhfsh}, its expansion will only have finite coefficients in the fully symmetric HFSH subset: \begin{equation} \label{eq:lambda_L} \lambda_{n,\tilde{L}} = \frac{\int_{S_{F_{n}}} \frac{d^{2}s_{\bf k}}{v({\bf k})} ~ \Phi_{n,\tilde{L}}({\bf k}) ~ \lambda_{n{\bf k}} } { \int_{S_{F_{n}}} \frac{d^{2}s_{\bf k}}{v({\bf k})} } ~ . 
\end{equation} Note that the HFSH functions for each FS sheet are independent by construction \cite{EigurenNJP2014}, and that the integrals are performed over the corresponding sheet $S_{F_{n}}$. We show in Fig.~\ref{fig:Fig4}(b) the first four fully symmetric HFSH functions for the different FS sheets of MgB$_2$. The first HFSH function is always the trivial constant solution with eigenvalue $\omega_{\tilde{L}}=0$, and the following ones oscillate more and more rapidly in direct analogy with the normal modes of a vibrating membrane. The first ten $\lambda_{n,\tilde{L}}$ coefficients given by Eq.~(\ref{eq:lambda_L}) are shown in Fig.~\ref{fig:Fig4}(c). As can be anticipated from Eq.~(\ref{eq:fsh_expansion}) and by looking at the HFSH functions of Fig.~\ref{fig:Fig4}(b), the $\tilde{L}=0$ coefficient gives the average value of $\lambda_{n{\bf k}}$ in each FS sheet, and the subsequent coefficients add finer and finer anisotropic details. It is therefore reassuring to see that the magnitude of the $\lambda_{n,\tilde{L}}$ coefficients decays very quickly for larger $\tilde{L}$. What is more remarkable is the rate at which the coefficients decay. In order to analyze this point further, we plot in the insets of Fig.~\ref{fig:Fig4}(c) the same result but using a logarithmic scale in the $y$ axis, revealing that the values of the coefficients decay very rapidly. Indeed, this result demonstrates that the transformation from \textbf{k}-space to the HFSH representation turns out to be strikingly beneficial, as all the details of the $\lambda_{n{\bf k}}$ parameter can be compressed with an accuracy of at least $10^{-3}$ in as few as ten $\lambda_{\tilde{L}}$ coefficients per FS sheet. In comparison with the $\sim 8000$ triangular vertices needed in \textbf{k}-space, this simplification implies a saving factor of $\sim 2\times 10^{2}$ with no loss of accuracy. \begin{figure}[hb!]
\includegraphics[width=1.0\columnwidth]{./figure_5.pdf} \caption{(a) Anisotropic electron-phonon mass-enhancement parameter on the six different FS sheets of YH$_6$ at 300~GPa. The BCC Brillouin zone and the corresponding IBZ are shown in the top left corner. First four fully symmetric HFSH basis functions for the (b) second and (c) third FS sheets of BCC-YH$_6$. First ten expansion coefficients of $\lambda_{{\bf k}}$ for the (d) second and (e) third FS sheets in the fully symmetric HFSH subset. The insets show the same result with a logarithmic scale on the $y$-axis. \label{fig:Fig5}} \end{figure} \subsection{BCC-YH$_6$} In order to validate that our method is robust and applicable to different systems, we apply the very same procedure presented in the previous sections to BCC-YH$_6$. Compressed hydrides have been attracting enormous interest in recent years, especially since the prediction and discovery of conventional high-temperature superconductivity in hydrogen sulfide under high pressures \cite{LiJCP2014,DuanSCR2014,DrozdovNAT2015}. Developing methods to alleviate the computational cost of calculating superconducting properties appears particularly interesting in this research field, in which theoretical predictions of new candidates guide the experimental efforts to find materials with increasingly favorable properties \cite{FloresLivasPHR2020}. YH$_6$ represents an interesting case within this class of materials, as the very recent experimental confirmation of superconductivity in this system \cite{TroyanARXIV2019,KongARXIV2019} has revealed a sizable deviation in the measured critical temperature with respect to the theoretical predictions \cite{LiSCR2015,PengPRL2017,HeilPRB2019}. We have considered YH$_6$ at 300~GPa in the BCC structure, with the lattice parameter reported in Ref.~\cite{HeilPRB2019}.
Ground state calculations have been performed on a $12^{3}$ ${\bf k}$-point grid within the generalized gradient approximation of density functional theory \cite{PerdewPRL1996}, using norm-conserving pseudopotentials of the Goedecker-Hartwigsen-Hutter-Teter table \cite{HartwigsenPRB1998,*GoedeckerPRB1996}. Phonon properties and electron-phonon matrix elements have been computed on a coarse $4^{3}$ ${\bf q}$-point grid, and later interpolated to the triangular vertices forming the FS by the Wannier interpolation method. A tessellation consisting of $\sim 7.5 \times 10^{3}$ vertices has been needed in this case to obtain converged results. Figure~\ref{fig:Fig5}(a) shows our results for the anisotropic mass-enhancement parameter on the six FS sheets of YH$_6$ at 300~GPa. As can be appreciated in this figure, $\lambda_{{\bf k}}$ varies considerably among the different FS sheets, ranging from $\sim 1.0$ on the small electron pockets to $\sim 1.9$ on some regions of the largest sheet. Most importantly, $\lambda_{{\bf k}}$ varies substantially within the second and third sheets, which in turn show strongly anisotropic and intricate topologies, serving as a challenging test for our method. We obtain a FS averaged mass-enhancement parameter of $\lambda=1.5$, somewhat smaller than the $\lambda=1.9$ reported in Ref.~\cite{HeilPRB2019}. We ascribe the discrepancy to the different FS integration method, and note that our high-quality triangulated mesh gives a superconducting transition temperature which is in better agreement with the experimental results \cite{TroyanARXIV2019,KongARXIV2019}, as discussed in Ref.~\cite{AccompanyingPaper}. We present in Figs.~\ref{fig:Fig5}(b) and (c) the first four fully symmetric HFSH functions of the second and third FS sheets, respectively. Their corresponding $\lambda_{\tilde{L}}$ parameters are shown in Figs.~\ref{fig:Fig5}(d) and (e), respectively.
Similar to HEX-MgB$_2$, we observe that the values of the coefficients decay very rapidly in this case as well. This means that a relative accuracy of $\sim10^{-2}$ can be obtained in the description of the anisotropy of $\lambda_{{\bf k}}$ with fewer than ten coefficients in both sheets. This result demonstrates that our methodology is equally valid for systems with any kind of symmetry or FS topology, and that it appears remarkably beneficial even in extremely anisotropic scenarios. \section{Conclusions} \label{sec:concl} In summary, we have presented a method to describe anisotropic Fermi surface quantities very efficiently. This work constitutes an improvement over the HFSH basis set presented in Ref.~\cite{EigurenNJP2014}. The major advance is the incorporation of crystal symmetries through the construction of a fully symmetric triangulated Fermi surface. We have shown the general applicability of the method in systems with different symmetries. As an application, we have demonstrated that the method is extremely efficient for compressing quantities related to the electron-phonon interaction in prototypical anisotropic superconductors. The full potential of the method will be further illustrated in an accompanying paper \cite{AccompanyingPaper}, in which it is shown that the fully anisotropic Eliashberg equations of superconductivity can be solved in a very efficient and physically meaningful way in the HFSH representation. In the case of conventional $s$-wave superconductors, only the fully symmetric HFSH subset introduced in this paper is needed. Besides computational time and memory saving advantages, we believe that this work opens a path towards a quantitative comparison between different calculations ---and ultimately with experiments--- able to capture anisotropic effects. Further ahead, we anticipate that this method can provide a tabulation of coefficients describing anisotropic physical quantities that could be included in material databases. 
The need for anisotropy descriptors in the prediction of the superconducting critical temperature through machine-learning algorithms has been recently pointed out, for instance, in Ref.~\cite{XiePRB2019}. We believe that the coefficients obtained through the procedure presented in this work are perfect candidates to be used as such descriptors. \begin{acknowledgments} The authors acknowledge the Department of Education, Universities and Research of the Basque Government and the University of the Basque Country UPV/EHU (Grant No. IT756-13), the Spanish Ministry of Economy and Competitiveness MINECO (Grants No. FIS2016-75862-P and No. PID2019-103910GB-I00) and the University of the Basque Country UPV/EHU (Grant No. GIU18/138) for financial support. J.L.-B. acknowledges the University of the Basque Country UPV/EHU (Grant No. PIF/UPV/16/240) and the Donostia International Physics Center (DIPC) for financial support. Computer facilities were provided by the DIPC. \end{acknowledgments}
\section{Introduction} \subsection{} The study of \emph{real \'etale cohomology} originates from attempts to understand the link between \'etale cohomology with $2$-torsion coefficients and real algebraic geometry. To any scheme $X$, one can associate a \emph{real scheme} $X_r$ which is a topological space obtained by gluing the real spectra of rings (\cite[0.4.2]{Sch3}). The points of $X_r$ are pairs $(x,\sigma)$, where $x$ is a point of $X$ and $\sigma$ is an ordering on the residue field of $x$. The \emph{real \'etale site} of $X$ is the category of \'etale $X$-schemes endowed with the Grothendieck topology where covering families are the ones that induce surjections on real schemes (\cite[Definition 1.2.1]{Sch3}). A fundamental theorem of Scheiderer (\cite[Theorem 1.3]{Sch3}) states that there is a canonical equivalence of sites between the real \'etale site on $X$ and the site defined by the topological space $X_r$, and in particular the real \'etale site is spatial. This result makes it much more convenient to work with real \'etale cohomology: not only can one draw inspiration from the powerful machinery of \'etale cohomology (see \cite{SGA4.5}), but one also benefits from the fact that it is possible to work with an actual topological space instead of an abstract Grothendieck topology. \subsection{} The following deep result shows an intrinsic relation between real \'etale sheaves and motivic homotopy theory: for any noetherian scheme of finite dimension $X$, there is a canonical equivalence \begin{align} \label{eq:SHdec} \mathbf{SH}(X,\mathbb{Q}) \simeq \mathbf{DM}(X,\mathbb{Q}) \times D(X_r,\mathbb{Q}). 
\end{align} where $\mathbf{SH}(X,\mathbb{Q})$ is the rational stable motivic homotopy category over $X$, $\mathbf{DM}(X,\mathbb{Q})$ is the category of Beilinson motives over $X$ (\cite[Definition 14.2.1]{CD2}), and $D(X_r,\mathbb{Q})$ is the derived category of sheaves over $X_r$.\footnote{More generally, it is proved in \cite{Bac} that real \'etale motivic stable homotopy category is equivalent to $\rho$-inverted motivic stable homotopy category. As the results of this paper only concern the rational case, we only state this version.} Indeed, by a result of Morel, the ``switching factors'' automorphism of $\mathbb{P}^1\wedge\mathbb{P}^1$ induces a decomposition of $\mathbf{SH}(X,\mathbb{Q})$ into the $+$-part and the $-$-part; the $+$-part is identified with $\mathbf{DM}(X,\mathbb{Q})$ by \cite[Theorem 16.2.13]{CD2}, and the $-$-part is identified with the derived category $D(X_r,\mathbb{Q})$ by \cite[Theorem 35]{Bac} combined with the equivalence between $\mathbf{SH}$ and $D^{\mathbb{A}^1}$ with rational coefficients. Furthermore, it is proved in \cite{Bac} that the association $X\mapsto D(X_r,\Lambda)$ for a ring $\Lambda$ satisfies the axioms of a \emph{motivic $\infty$-category} (\cite{Kha}),\footnote{One may readily replace this notion with the one of motivic triangulated categories in \cite[Definition 2.4.45]{CD2}.} and therefore one can define and study the associated six functors.\footnote{The results in \cite{Bac} are stated for noetherian schemes of finite dimension, which can be extended to quasi-compact quasi-separated schemes using continuity arguments, at least in the case of the derived category, see \cite[Appendix A]{DFKJ} and \cite[Lemma B.10]{ES}.} Another treatment of the real \'etale motivic homotopy theory from the point of view of $C_2$-equivariant spectra can be found in \cite{ES}. \subsection{} \label{num:motcons} The goal of this paper is to investigate some finiteness conditions on $D(X_r,\Lambda)$ using these structures. 
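Before outlining the contents of the paper, let us spell out the idempotents behind the decomposition~\eqref{eq:SHdec}; this is a standard computation, and which idempotent corresponds to which factor depends on sign conventions. Morel's element $\epsilon\in\pi_{0,0}(\mathbbold{1}_X)$, induced by the switch on $\mathbb{P}^1\wedge\mathbb{P}^1$, satisfies $\epsilon^2=1$, so that after inverting $2$ one obtains two complementary idempotents
\begin{align*}
e_{\pm}=\frac{1\mp\epsilon}{2},\qquad e_{\pm}^2=e_{\pm},\qquad e_{+}e_{-}=\frac{1-\epsilon^2}{4}=0,\qquad e_{+}+e_{-}=1,
\end{align*}
which split the rational stable homotopy category as $\mathbf{SH}(X,\mathbb{Q})\simeq e_{+}\mathbf{SH}(X,\mathbb{Q})\times e_{-}\mathbf{SH}(X,\mathbb{Q})$, the two factors being the $+$-part and the $-$-part above.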
In Section~\ref{sec:cons} we study constructibility conditions. First there is a natural constructibility condition on motivic categories introduced by Ayoub (\cite[D\'efinition 2.2.3]{Ayo}): if $\mathcal{T}$ is a motivic $\infty$-category and $X$ is a scheme, the subcategory of \textbf{constructible objects} over $X$, denoted as $\mathcal{T}_c(X)$, is the thick subcategory generated by elements of the form $Rf_!f^!\mathbbold{1}_X(d)$, where $f:Y\to X$ is a smooth morphism and $(d)$ stands for the Tate twist. This notion can be readily recognized categorically: if $\mathcal{T}$ satisfies some compact generation properties, then the constructible objects are exactly the compact objects (see Lemma~\ref{lm:mconscomp}). In particular, the equivalence~\eqref{eq:SHdec} induces an equivalence of the subcategories of constructible objects. On the other hand, the real scheme associated to a quasi-compact quasi-separated scheme is a \emph{spectral space} (see~\ref{num:spectral}), and there is a natural notion of constructible sheaves on spectral spaces (\cite[Definition A.3]{Sch3}). We promote this notion to the derived category in Definition~\ref{def:conscom} (see also \cite[\S20]{Sch}). The first main result is that under some assumptions these two notions of constructibility agree: \begin{theorem}[See Theorem~\ref{th:ctf=c}] Let $X$ be a quasi-compact quasi-separated scheme of finite dimension and let $\Lambda$ be a ring. Then the objects of $D_c(X_r,\Lambda)$ are exactly the complexes $C$ such that there exists a finite stratification of $X_r$ into locally closed constructible subsets such that the restriction of $C$ to each stratum is the constant sheaf associated to a perfect complex of $\Lambda$-modules. 
\end{theorem} The proof reduces to showing the equivalence between the topological constructibility and compactness, which holds more generally for spectral spaces of finite dimension (see Proposition~\ref{prop:conscomp}): the key result is a theorem of Scheiderer (\cite[Corollary 4.6]{Sch2}) which states that the cohomological dimension of a spectral space is bounded by the Krull dimension. Using this result we are able to give a partial answer to a question raised by Scheiderer (\cite[Remark 17.7.1]{Sch3}), by proving that under some assumptions the derived direct image functor preserves constructible objects, see Corollary~\ref{cor:prescons} and Remark~\ref{rm:schfin}. Another application is the following description of the Grothendieck group of the constructible rational stable motivic homotopy category: \begin{theorem}[See Corollary~\ref{cor:K0SHQ}] For $X$ an excellent separated scheme of finite dimension, there is a canonical isomorphism of abelian groups \begin{align} K_0(\mathbf{SH}_c(X,\mathbb{Q}))\simeq K_0^{\oplus}(\mathbf{CHM}(X,\mathbb{Q}))\oplus Cons(X_r,\mathbb{Z}). \end{align} \end{theorem} Here $K_0^{\oplus}(\mathbf{CHM}(X,\mathbb{Q}))$ is the direct sum Grothendieck group of the category of Chow motives with rational coefficients over $X$ (see~\ref{num:CHM}), and $Cons(X_r,\mathbb{Z})$ is the group of $\mathbb{Z}$-valued constructible functions on the spectral space $X_r$ (see~\ref{num:consfct}), which is a free abelian group generated by the closed constructible subsets of $X_r$. The proof uses Bondarko's theory of weight structures (see~\ref{num:CHM}), and a general comparison result between the Grothendieck group of the constructible complexes and the group of constructible functions over a spectral space (Proposition~\ref{prop:KS9.7.1}). Note that there are also analogous results for the category $\mathbf{SH}_c(X_r)$, see Remark~\ref{rm:SHret}. 
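The group $Cons(X_r,\mathbb{Z})$ appearing above is elementary to manipulate in examples. As a toy illustration, consider a hypothetical four-point finite spectral space, that is, a finite $T_0$ space encoded by its specialization order, on which every subset is constructible. The following sketch (illustrative only, with made-up points and values) verifies that indicator functions of point-closures give unique integer coefficients for any function, via a triangular solve:

```python
# Toy model: a finite T0 space is a finite spectral space; we encode it by
# the closure of each point. This hypothetical 4-point space has a generic
# point 'g' specializing to 'a' and 'b', with 'a' specializing further to 'c'.
closure = {
    'g': {'g', 'a', 'b', 'c'},
    'a': {'a', 'c'},
    'b': {'b'},
    'c': {'c'},
}
# Process points with larger closures first: if x is in closure(y) with
# x != y, then closure(x) is strictly smaller, so this order is triangular.
points = sorted(closure, key=lambda p: len(closure[p]), reverse=True)

def coefficients(phi):
    """Write phi = sum_y c[y] * indicator(closure(y)) by a triangular solve."""
    c = {}
    for y in points:
        c[y] = phi[y] - sum(c[z] for z in c if y in closure[z])
    return c

phi = {'g': 3, 'a': -1, 'b': 0, 'c': 7}  # an arbitrary Z-valued function
c = coefficients(phi)
# Reconstruct phi from the coefficients and check the decomposition.
recon = {x: sum(c[y] for y in points if x in closure[y]) for x in points}
assert recon == phi
print(c)
```

The triangularity used here (if $x\in\overline{\{y\}}$ and $x\neq y$ then $\overline{\{x\}}\subsetneq\overline{\{y\}}$) is what makes the greedy solve produce well-defined, unique coefficients.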
\subsection{} In Section~\ref{sec:genbc} we prove the generic base change property (see~\ref{num:gbc}) for constructible real \'etale sheaves. The generic base change theorem for \'etale sheaves is due to Deligne (\cite[Th. finitude 1.9]{SGA4.5}), and is generalized to $h$-motives in \cite[2.4.2]{Cis}. Our treatment here uses a very similar strategy, together with some inputs from the topology of real schemes. In Theorem~\ref{th:ret_genbc} we prove the generic base change property for constructible complexes of real \'etale sheaves. From this we deduce the generic base change property for constructible rational motivic spectra (Corollary~\ref{cor:genbcsh}) and for constructible complexes of $b$-sheaves (Corollary~\ref{cor:genbcb}). \subsection{Conventions and notations} All smooth morphisms of schemes are assumed separated of finite presentation, and the dimension of a topological space or a scheme stands for the Krull dimension. We use the following notations: if $\mathcal{T}$ is a fibered category, $f:X\to Y$ is a morphism and $K\in\mathcal{T}(Y)$, we denote $K_{|X}=f^*K\in\mathcal{T}(X)$. If $f:X\to Y$ is a morphism of schemes, we denote $f^*:D(Y_r,\Lambda)\to D(X_r,\Lambda)$ instead of $f_r^*$, and similarly for the other functors, to simplify the notations. \subsection{} \label{num:char0} For any scheme $X$ we have $X_r=(X\times_\mathbb{Z}\mathbb{Q})_r$, that is, the underlying real scheme only depends on the characteristic $0$ fiber. Therefore in the study of real schemes it is harmless to assume that all schemes have characteristic $0$. 
\subsection{} The following result is a special case of the recollement of topoi (\cite[IV Proposition 14.6]{SGA4}): let $X$ be a topological space, let $i:Z\to X$ be the inclusion of a closed subspace with open complement $j:U\to X$ and let $\Lambda$ be a ring. Then there is a canonical distinguished triangle \begin{align} \label{eq:locseq} Rj_!j^*\to id\to Ri_*i^*\to Rj_!j^*[1] \end{align} of endofunctors of the derived category $D(X,\Lambda)$. \subsection*{Acknowledgments} The author would like to thank Denis-Charles Cisinski, Fr\'ed\'eric D\'eglise, Jean Fasel, Adeel Khan, Marc Levine and Heng Xie for helpful discussions. He is supported by Marc Levine's ERC Advanced Grant QUADAG. \section{Constructibility} \label{sec:cons} In this section we discuss constructibility conditions in $D(X_r,\Lambda)$, and deduce some consequences on Grothendieck groups. Throughout the section, we assume that $\Lambda$ is a commutative ring. \subsection{} We first look at the motivic constructibility condition as recalled in~\ref{num:motcons}. The following result is standard: \begin{lemma} \label{lm:mconscomp} Let $X$ be a quasi-compact quasi-separated scheme. Then the objects of $D_c(X_r,\Lambda)$ are exactly the compact objects of $D(X_r,\Lambda)$. \end{lemma} \proof Since $D(X_r,\Lambda)$ is compactly generated by the Tate twists by \cite[Lemma 15 and Theorem 35]{Bac}, the result follows from \cite[Proposition 1.4.11]{CD2}. \endproof \subsection{} \label{num:spectral} We now consider a topological constructibility condition. Recall that a \textbf{spectral space} is a quasi-compact topological space in which \begin{enumerate} \item the intersection of two quasi-compact opens is quasi-compact (i.e. the space is quasi-separated); \item every irreducible closed subset is the closure of a unique point; \item the collection of quasi-compact opens forms a basis for the topology. 
\end{enumerate} For any quasi-compact quasi-separated scheme $X$, the associated real scheme $X_r$ is a spectral space. The class of \textbf{constructible subsets} of a spectral space is the smallest class of subsets stable under finite unions, finite intersections and complements which contains all quasi-compact open subsets. \begin{definition} \label{def:conscom} If $M$ is a spectral space and $\Lambda$ is a ring, we define $D^b_{ctf}(M,\Lambda)$ to be the full subcategory of $D(M,\Lambda)$ consisting of complexes $A$ such that there is a finite stratification of $M$ into locally closed constructible subsets $M_i$ such that $A_{|M_i}$ is the constant sheaf associated to a perfect complex of $\Lambda$-modules. \end{definition} \subsection{} Recall that if $M$ is a spectral space and $\Lambda$ is a noetherian ring, a sheaf of $\Lambda$-modules $\mathcal{F}$ over $M$ is \textbf{constructible} if there is a finite stratification of $M$ into locally closed constructible subsets $M_i$ such that $\mathcal{F}_{|M_i}$ is the constant sheaf associated to a finitely generated $\Lambda$-module (\cite[Definition A.3]{Sch3}). Then in the case where $\Lambda$ is noetherian, we can replace Definition~\ref{def:conscom} by the following equivalent characterizations: \begin{proposition} \label{prop:ctfequi} Let $M$ be a spectral space and let $\Lambda$ be a noetherian ring. Then for $A\in D(M,\Lambda)$, the following are equivalent: \begin{enumerate} \item \label{num:ctf} $A\in D^b_{ctf}(M,\Lambda)$; \item \label{num:boundedflat} $A$ can be represented by a bounded complex of constructible flat sheaves of $\Lambda$-modules; \item \label{num:fintor} $A$ has finite $\operatorname{Tor}$-dimension and each $\mathcal{H}^i(A)$ is constructible.\footnote{Recall that a complex $A$ has $\operatorname{Tor}$-amplitude in $[a,b]$ if $H^n(A\overset{L}{\otimes}N)=0$ for any $n\notin[a,b]$ and any $\Lambda$-module $N$. 
A complex $A$ has finite $\operatorname{Tor}$-dimension if there exists a pair $(a,b)$ such that $A$ has $\operatorname{Tor}$-amplitude in $[a,b]$.} \end{enumerate} \end{proposition} \proof It is clear that~\eqref{num:boundedflat} implies~\eqref{num:ctf}. To show that~\eqref{num:ctf} implies~\eqref{num:fintor}, by localization~\eqref{eq:locseq} we are reduced to the case where $A=Rj_!C$, where $j$ is the inclusion of a locally closed constructible subset and $C$ is a perfect complex of $\Lambda$-modules, in which case~\eqref{num:fintor} is clearly satisfied. The equivalence between~\eqref{num:boundedflat} and~\eqref{num:fintor} is classical; its proof can be easily adapted from the \'etale case, see \cite[Rapport 4.6]{SGA4.5}, \cite[Proposition 6.4.6]{Fu} and \cite[Tag 03TT]{Stack}. \endproof \subsection{} The following result is an analogue of \cite[Theorem 6.3.10]{CD} for spectral spaces: \begin{proposition} \label{prop:conscomp} Let $M$ be a spectral space of finite dimension and let $\Lambda$ be a ring. Then the objects of $D^b_{ctf}(M,\Lambda)$ agree with the compact objects in $D(M,\Lambda)$. \end{proposition} \proof The objects $Rj_!\Lambda$, where $j:U\to M$ is the inclusion of a quasi-compact open subset, form a generating family of $D(M,\Lambda)$. By \cite[Tag 0902]{Stack}, any such $U$ is itself a spectral space of finite dimension. By \cite[Corollary 4.6]{Sch2}, the cohomological dimension of $U$ is bounded by the dimension of $M$. By \cite[Proposition 1.1.9]{CD}, the family $Rj_!\Lambda$ is a generating family of compact objects. It follows that the subcategory of compact objects in $D(M,\Lambda)$ agrees with the thick subcategory generated by those $Rj_!\Lambda$, and is therefore contained in $D^b_{ctf}(M,\Lambda)$. Conversely, we need to show that every object in $D^b_{ctf}(M,\Lambda)$ is compact. 
The localization sequence~\eqref{eq:locseq} implies that the functors $Rj_!$ and $j^*$ for $j$ the inclusion of a locally closed constructible subset preserve compact objects. The result then follows from the fact that perfect complexes of $\Lambda$-modules are compact objects in $D(\Lambda)$. \endproof \subsection{} By \cite[Proposition 4.3.9]{BCR}, for any scheme $X$, the dimension of $X_r$ is bounded by the dimension of $X$. Therefore by Lemma~\ref{lm:mconscomp} and Proposition~\ref{prop:conscomp} we obtain the following result: \begin{theorem} \label{th:ctf=c} Let $X$ be a quasi-compact quasi-separated scheme of finite dimension and let $\Lambda$ be a ring. Then $D^b_{ctf}(X_r,\Lambda)$ agrees with $D_c(X_r,\Lambda)$. \end{theorem} \begin{corollary} \label{cor:prescons} Let $\Lambda$ be a ring. Given a morphism $f:X\to S$ between quasi-compact quasi-separated schemes, the subcategory $D^b_{ctf}(X_r,\Lambda)$ of $D(X_r,\Lambda)$ is preserved by the following operations: \begin{itemize} \item $f^*$, $\overset{L}{\otimes}_S$, and $Rf_!$ for $f$ separated of finite type; \item $Rf_*$ and $R\underline{Hom}_S$ for $f$ of finite type and $X$, $S$ quasi-excellent of finite dimension; \item $f^!$ for $f$ separated of finite type and $X$, $S$ quasi-excellent of finite dimension. \end{itemize} \end{corollary} \proof By Theorem~\ref{th:ctf=c} we are reduced to showing that these functors preserve $D_c(X_r,\Lambda)$. By~\ref{num:char0} we may assume that all schemes have characteristic $0$. In this case it is known that the six operations in the given situations preserve constructible objects in any motivic $\infty$-category, see \cite[Theorem 2.4.9]{BD} and \cite[Proposition 3.3]{DFKJ}. \endproof \begin{remark} \label{rm:schfin} In \cite[Remark 17.7.1]{Sch3}, Scheiderer raised the question whether the (higher) direct image functors of a morphism of finite type between excellent schemes preserve constructible sheaves on real schemes. 
Corollary~\ref{cor:prescons} gives a result in this direction by showing that the functor $Rf_*$ preserves $D^b_{ctf}$ for $f$ a morphism of finite type between quasi-excellent schemes of finite dimension. \end{remark} \subsection{} \label{num:consfct} In the rest of this section we deal with Grothendieck groups. For any stable $\infty$-category $\mathcal{C}$, its \textbf{Grothendieck group} $K_0(\mathcal{C})$ is the quotient of the free abelian group generated by its objects by the relations $[B]=[A]+[C]$ for all distinguished triangles $A\to B\to C\to A[1]$. For any ring $\Lambda$, let $K_0(\Lambda)$ be the Grothendieck group of the $\infty$-category of perfect complexes of $\Lambda$-modules.\footnote{It is well-known that $K_0(\Lambda)$ agrees with the Grothendieck group of finitely generated projective $\Lambda$-modules (\cite[3.10]{TT}).} If $M$ is a spectral space and $B$ is a ring, a $B$-valued \textbf{constructible function} is a function $\phi:M\to B$ such that there is a finite stratification of $M$ into locally closed constructible subsets $M_i$ such that $\phi_{|M_i}$ is constant for each $i$. This is equivalent to saying that for each $m\in B$, the fiber $\phi^{-1}(\{m\})$ is constructible, and is non-empty for only finitely many $m$. We denote by $Cons(M, B)$ the $B$-algebra of constructible functions on $M$. As a $B$-module, $Cons(M,B)$ agrees with the free $B$-module generated by the characteristic functions of closed constructible subsets of $M$, and we have $Cons(M, B)\simeq Cons(M,\mathbb{Z})\otimes_{\mathbb{Z}}B$. Let $M$ be a spectral space and let $\Lambda$ be a ring. 
For $A\in D^b_{ctf}(M,\Lambda)$ and $x\in M$, define the \textbf{local Euler-Poincar\'e index} $\chi(A)(x)\in K_0(\Lambda)$ as the class of the stalk $[A_x]\in K_0(\Lambda)$.\footnote{Note that when $\Lambda$ is a field, the class $[A_x]\in K_0(\Lambda)\simeq\mathbb{Z}$ agrees with $\sum_i(-1)^i\operatorname{dim}H^i(A_x)$.} We then have a canonical map \begin{align} \label{eq:locEP} \begin{split} D^b_{ctf}(M,\Lambda)&\xrightarrow{}Cons(M,K_0(\Lambda))\\ A&\mapsto (x\mapsto\chi(A)(x)). \end{split} \end{align} The map~\eqref{eq:locEP} factors through the Grothendieck group and induces a map \begin{align} \label{eq:locEPK0} \chi:K_0(D^b_{ctf}(M,\Lambda))\xrightarrow{}Cons(M,K_0(\Lambda)) \end{align} (see also \cite[Note 87$_1$]{ReS}). The following result is a variant of \cite[Theorem 9.7.1]{KS}: \begin{proposition} \label{prop:KS9.7.1} The map~\eqref{eq:locEPK0} is an isomorphism. \end{proposition} \proof For $a\in K_0(\Lambda)$, let $A$ be a perfect complex of $\Lambda$-modules whose class is $a$. If $j:Z\to M$ is the inclusion of a closed constructible subset, then the map~\eqref{eq:locEPK0} sends $[Rj_!A]$ to the constructible function $a_Z$ with constant value $a$ on $Z$ and $0$ outside. This shows that the map~\eqref{eq:locEPK0} is surjective. We now prove injectivity. Note that every element of $K_0(D^b_{ctf}(M,\Lambda))$ can be represented by a single complex in $D^b_{ctf}(M,\Lambda)$. Consider a complex $A\in D^b_{ctf}(M,\Lambda)$. Choose a finite stratification of $M$ into locally closed constructible subsets $M_i$ such that $A_{|M_i}$ is the constant sheaf associated to a perfect complex $C_i$ of $\Lambda$-modules. The localization sequence~\eqref{eq:locseq} shows that we have \begin{align} [A]=\sum_i[Rj_{i!}C_i]\in K_0(D^b_{ctf}(M,\Lambda)) \end{align} where $j_i:M_i\to M$ is the inclusion. But to say that $\chi(A)=0$ means that for each $i$, $[C_i]=0\in K_0(\Lambda)$. This shows that $[A]=0$. 
\endproof From Proposition~\ref{prop:KS9.7.1} and Theorem~\ref{th:ctf=c} we deduce the following result: \begin{corollary} \label{cor:K0cons} Let $X$ be a quasi-compact quasi-separated scheme of finite dimension and let $\Lambda$ be a ring. Then there is a canonical isomorphism \begin{align} K_0(D_c(X_r,\Lambda))\simeq Cons(X_r,K_0(\Lambda)). \end{align} \end{corollary} \begin{remark} \label{rm:SHret} \begin{enumerate} \item Similar arguments can be applied to the homotopy category of sheaves of spectra $\mathbf{SH}(X_r)$: by virtue of \cite[Corollary 3]{Bac}, the analogue of Theorem~\ref{th:ctf=c} states that the constructible objects of $\mathbf{SH}(X_r)$ are exactly the sheaves of spectra $C$ such that there exists a finite stratification of $X_r$ into locally closed constructible subsets such that the restriction of $C$ to each stratum is the constant sheaf associated to a compact spectrum. The analogue of Corollary~\ref{cor:K0cons} provides a canonical isomorphism \begin{align} K_0(\mathbf{SH}_c(X_r))\simeq Cons(X_r,\mathbb{Z}), \end{align} where we use the fact that the Grothendieck group of compact spectra agrees with that of finitely generated abelian groups, which follows from the canonical $t$-structure on spectra (\cite[Lemma 2]{Bac}). \item Corollary~\ref{cor:K0cons} implies that any additive invariant on $D_c(X_r,\Lambda)$, such as the Euler characteristic or the characteristic class (\cite[Definition 5.1.3]{JY}), only depends on the constructible function defined by the local Euler-Poincar\'e index. This reflects the fact that the real \'etale site only lives on the characteristic $0$ fiber as in~\ref{num:char0}, see \cite{Ill}. \end{enumerate} \end{remark} \subsection{} \label{num:CHM} We now give an application in determining the Grothendieck group of the constructible rational motivic spectra. 
If $X$ is an excellent separated scheme of finite dimension, then by \cite[Proposition 2.10]{Bon2}, there is a bounded weight structure on $\mathbf{DM}_c(X,\mathbb{Q})$, called the \textbf{Chow weight structure}. We define the category of \textbf{Chow motives} with rational coefficients over $X$, denoted as $\mathbf{CHM}(X,\mathbb{Q})$, as the heart of the Chow weight structure. \begin{remark} As Bondarko's construction of the Chow weight structure is obtained by an abstract gluing procedure, the category $\mathbf{CHM}(X,\mathbb{Q})$ is quite mysterious in general. In some cases we have more explicit descriptions of this category: \begin{enumerate} \item By \cite[Corollary 2.12]{Bon2}, the category $\mathbf{CHM}(X,\mathbb{Q})$ contains the idempotent completion of the additive subcategory generated by elements of the form $p_*\mathbbold{1}_Y(d)[2d]$, where $d\in\mathbb{Z}$ and $p:Y\to X$ is a proper morphism with $Y$ regular. If $X$ is a scheme of finite type over an excellent separated scheme of dimension at most $2$, then the two categories coincide (\cite[Th\'eor\`eme 3.3]{Heb} and \cite[Theorem 2.1]{Bon2}). \item By \cite[Theorem 3.17]{Jin}, if $X$ is quasi-projective over a perfect field, then $\mathbf{CHM}(X,\mathbb{Q})$ agrees with the category of Chow motives over $X$ defined in \cite[Definition 2.8]{CH}. \end{enumerate} \end{remark} It is a general fact that weight structures have strong consequences on Grothendieck groups related to the heart. For any additive category $\mathcal{A}$, denote by $K_0^{\oplus}(\mathcal{A})$ the quotient of the free abelian group generated by its objects by the relations $[B]=[A]+[C]$ if $B\simeq A\oplus C$. By \cite[Theorem 5.3.1]{Bon}, the inclusion $\mathbf{CHM}(X,\mathbb{Q})\to\mathbf{DM}_c(X,\mathbb{Q})$ induces an isomorphism \begin{align} \label{eq:K0DM} K_0(\mathbf{DM}_c(X,\mathbb{Q}))\simeq K_0^{\oplus}(\mathbf{CHM}(X,\mathbb{Q})). 
\end{align} Combining~\eqref{eq:K0DM}, Corollary~\ref{cor:K0cons} and the decomposition~\eqref{eq:SHdec}, we deduce the following result: \begin{corollary} \label{cor:K0SHQ} Let $X$ be an excellent separated scheme of finite dimension. Then there is a canonical isomorphism of abelian groups \begin{align} K_0(\mathbf{SH}_c(X,\mathbb{Q}))\simeq K_0^{\oplus}(\mathbf{CHM}(X,\mathbb{Q}))\oplus Cons(X_r,\mathbb{Z}). \end{align} \end{corollary} \section{Generic base change} \label{sec:genbc} The main goal of this section is to prove the generic base change property for constructible complexes of real \'etale sheaves. The style of the proof is quite close to \cite[Th. finitude \S 2]{SGA4.5} and \cite[\S 2.4]{Cis}. See also \cite[Theorem 9.3.1]{Fu} for a detailed exposition. Throughout this section, we fix a ring $\Lambda$. \subsection{} \label{num:gbc} Let $\mathcal{T}$ be a motivic $\infty$-category, or more generally any $\infty$-category fibered over schemes such that for any morphism of schemes $f:X\to Y$, the functor $f^*:\mathcal{T}(Y)\to\mathcal{T}(X)$ has a right adjoint $Rf_*:\mathcal{T}(X)\to\mathcal{T}(Y)$. Let $S$ be a scheme and let $f:X\to Y$ be an $S$-morphism of schemes. For an open subscheme $U$ of $S$ and an object $K\in\mathcal{T}(X)$, we say that $K$ \textbf{satisfies base change along $f$ over $U$} if the formation of $Rf_*K$ is compatible with any base change over $S$ which factors through $U$. In other words, for any morphism $p:V\to U$ with the Cartesian square \begin{align} \begin{split} \xymatrix@=10pt{ X\times_SV \ar[r]^-{p_X} \ar[d]_-{f_V} & X\times_SU \ar[d]^-{f_U}\\ Y\times_SV \ar[r]^-{p_Y} & Y\times_SU } \end{split} \end{align} the canonical map $p_Y^*Rf_{U*}K_{|X\times_SU}\to Rf_{V*}p_X^*K_{|X\times_SU}$ is an isomorphism. We say that an object $K\in\mathcal{T}(X)$ \textbf{satisfies generic base change along $f$ relatively to $S$} if there is an open dense subscheme $U$ of $S$ such that $K$ satisfies base change along $f$ over $U$. 
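The comparison map in~\ref{num:gbc} is the standard exchange transformation: writing $K_U=K_{|X\times_SU}$, and denoting by $\eta$ and $\varepsilon$ the unit and counit of the adjunctions $(p_X^*,Rp_{X*})$ and $(p_Y^*,Rp_{Y*})$, it is the composite
\begin{align*}
p_Y^*Rf_{U*}K_U\xrightarrow{\ \eta\ }p_Y^*Rf_{U*}Rp_{X*}p_X^*K_U\simeq p_Y^*Rp_{Y*}Rf_{V*}p_X^*K_U\xrightarrow{\ \varepsilon\ }Rf_{V*}p_X^*K_U,
\end{align*}
where the middle identification uses the equality $f_U\circ p_X=p_Y\circ f_V$.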
\subsection{} We start with a special case of the generic base change. If $\mathcal{C}$ is a closed symmetric monoidal $\infty$-category and $M\in\mathcal{C}$, we denote $M^\vee=\underline{Hom}(M,\mathbbold{1})$ where $\mathbbold{1}$ is the monoidal unit. An object $M\in\mathcal{C}$ is called \textbf{dualizable} if the canonical map $M\otimes M^\vee\to \underline{Hom}(M,M)$ is an isomorphism. \begin{lemma} \label{lm:cis2.4.4} Let $\mathcal{T}$ be a motivic $\infty$-category, let $f:X\to S$ be a smooth morphism and let $K\in\mathcal{T}(X)$. Assume that $K$ is dualizable in $\mathcal{T}(X)$ and $Rf_!(K^\vee)$ is dualizable in $\mathcal{T}(S)$. Then for any Cartesian square of the form \begin{align} \begin{split} \xymatrix@=10pt{ W \ar[r]^-{q} \ar[d]_-{g} & X \ar[d]^-{f}\\ T \ar[r]^-{p} & S } \end{split} \end{align} the canonical map $p^*Rf_*K\to Rg_*q^*K$ is an isomorphism. \end{lemma} \proof The proof of \cite[Proposition 2.4.4]{Cis} works for any motivic $\infty$-category, using proper base change and relative purity. See Remark 2.4.5 of loc. cit. \endproof \begin{lemma} \label{lm:cis2.4.9} Let $S$ be a scheme and let $f:X\to Y$ be a morphism between $S$-schemes. Let $K\in\mathcal{T}(X)$ and let $j:Y\to Z$ be an open immersion of $S$-schemes. If $U$ is an open subscheme of $S$ such that $K$ satisfies base change along $jf$ over $U$, then $K$ satisfies base change along $f$ over $U$. \end{lemma} \proof This follows from the canonical identification $j^*Rj_*=id$. \endproof \subsection{} If a motivic $\infty$-category $\mathcal{T}$ satisfies some conditions on resolution of singularities (see \cite[2.4.1]{BD}, \cite[2.1.12]{JY}), then for any field $k$, every object in $\mathcal{T}_c(k)$ is dualizable (\cite[Remark 2.1.16]{JY}). Such conditions are automatically satisfied for $\mathcal{T}(X)=D(X_r,\Lambda)$ since by~\ref{num:char0} we may assume that all schemes have characteristic $0$. 
By the continuity property of $D(X_r,\Lambda)$ (\cite[Appendix A]{DFKJ}), we deduce the following result: \begin{lemma} \label{lm:gendual} For every noetherian scheme $X$ and every constructible object $K$ of $D(X_r,\Lambda)$, there is a dense open subscheme $U$ of $X$ such that $K_{|U}$ is dualizable in $D(U_r,\Lambda)$. \end{lemma} From now on we focus on the case $\mathcal{T}(X)=D(X_r,\Lambda)$. \begin{lemma} \label{lm:fincons} Let $f:X\to S$ be a finite morphism of schemes. Then the functor $Rf_*:D(X_r,\Lambda)\to D(S_r,\Lambda)$ is conservative. \end{lemma} \proof Since $f$ is a finite morphism, for every point $s\in S$ the fiber $f^{-1}(s)$ is finite, and the residue field of any point in the fiber is a finite extension of the residue field of $s$. It follows that the map $f_r:X_r\to S_r$ has finite discrete fibers, because for a finite field extension $k\to k'$ and an ordering $\sigma$ of $k$, there are only finitely many orderings on $k'$ extending the ordering $\sigma$ on $k$ (\cite[VIII 2.20]{Lam}). Therefore the functor $Rf_*:D(X_r,\Lambda)\to D(S_r,\Lambda)$ is conservative. \endproof \subsection{} Denote by $P(n)$ the following statement: for any integral noetherian scheme $S$, and any open immersion with dense image $f:X\to Y$ between $S$-schemes of finite type such that the generic fiber of $X$ over $S$ has dimension at most $n$, any object $K\in D_c(X_r,\Lambda)$ satisfies generic base change along $f$ relatively to $S$. \begin{proposition} \label{prop:cis2.4.10} Let $n\geqslant0$ be an integer, and assume that $P(n-1)$ holds. Let $S$ be an integral noetherian scheme and let $f:X\to Y$ be a morphism between $S$-schemes of finite type such that $X$ is smooth over $S$ and the generic fiber of $X$ over $S$ has dimension $n$. Then any $K\in D(X_r,\Lambda)$ which is dualizable satisfies generic base change along $f$ relatively to $S$. \end{proposition} \proof Let $K\in D(X_r,\Lambda)$ be dualizable. 
The problem is local on $Y$, and therefore we may assume that $Y$ is affine over $S$. Choose a closed embedding $Y\to \mathbb{A}^d_S$ defined by $d$ functions $(g_k:Y\to\mathbb{A}^1_S)_{1\leqslant k\leqslant d}$. Then the generic fiber of $g_k\circ f$ has dimension at most $n-1$ (see \cite[7.5.3]{Fu}). For each $1\leqslant k\leqslant d$, applying the statement $P(n-1)$ to the morphism $f$ as a morphism of schemes over $\mathbb{A}^1_S$ via the structure morphism $g_k$, we know that there exists a dense open subscheme $U_k\subset\mathbb{A}^1_S$ such that $K$ satisfies base change along $f$ over $U_k$ via the structure morphism $g_k$. Let $V=\bigcup_{1\leqslant k\leqslant d}(g_k)^{-1}(U_k)$ and denote by $j:V\to Y$ the canonical open embedding. Then $K_{|f^{-1}(V)}$ satisfies base change along $f_V:f^{-1}(V)\to V$ over $S$. Let $T=Y-V$ be the complement of $V$ (with any scheme structure). Then $T$ satisfies \begin{align} T\subset Y\cap\left((\mathbb{A}^1_S-U_1)\times_S\cdots\times_S (\mathbb{A}^1_S-U_d)\right). \end{align} Since the generic fiber of $(\mathbb{A}^1_S-U_1)\times_S\cdots\times_S (\mathbb{A}^1_S-U_d)\to S$ is finite, by shrinking $S$, we may assume that $T$ is finite over $S$. Let $\bar{Y}$ be the closure of $Y$ in $\mathbb{P}^d_S$. By shrinking $S$, we may assume that $\bar{Y}-V$ is also finite over $S$. By Lemma~\ref{lm:cis2.4.9}, we may replace $Y$ by $\bar{Y}$, which amounts to saying that we may assume that the structure morphism $p:Y\to S$ is proper. Therefore we are reduced to the following situation: $Y$ is proper over $S$, and there exists an open immersion $j:V\to Y$ with dense image whose complement $Y-V$ is finite over $S$ such that $K_{|f^{-1}(V)}$ satisfies base change along $f_V:f^{-1}(V)\to V$ over $S$, that is, the formation of $Rj_!j^*Rf_*K$ is compatible with any base change over $S$. Denote by $i:Y-V\to Y$ the canonical closed immersion.
We have the localization sequence \begin{align} \label{eq:locflower*} Rj_!j^*Rf_*K\to Rf_*K\to Ri_*i^*Rf_*K\to Rj_!j^*Rf_*K[1]. \end{align} Therefore it remains to show that after shrinking $S$, the formation of $Ri_*i^*Rf_*K$ is compatible with any base change over $S$. Since $Ri_*=Ri_!$ commutes with any base change, it suffices to prove that after shrinking $S$, the formation of $i^*Rf_*K$ is compatible with any base change over $S$. Since the composition $pi$ is finite, by Lemma~\ref{lm:fincons}, it suffices to prove that after shrinking $S$, the formation of $Rp_*Ri_*i^*Rf_*K$ is compatible with any base change over $S$. Applying the functor $Rp_*$ to~\eqref{eq:locflower*}, we obtain the following distinguished triangle: \begin{align} \label{eq:locflower*p} Rp_*Rj_!j^*Rf_*K\to Rp_*Rf_*K\to Rp_*Ri_*i^*Rf_*K\to Rp_*Rj_!j^*Rf_*K[1]. \end{align} Since the composition $pf:X\to S$ is smooth over $S$, we may apply Lemma~\ref{lm:cis2.4.4} and Lemma~\ref{lm:gendual} to obtain that, after shrinking $S$, the formation of $Rp_*Rf_*K=R(pf)_*K$ is compatible with any base change over $S$. On the other hand, since $p$ is proper and the formation of $Rj_!j^*Rf_*K$ is compatible with any base change over $S$, we deduce from the proper base change that the formation of $Rp_*Rj_!j^*Rf_*K$ is compatible with any base change over $S$. We conclude using the distinguished triangle~\eqref{eq:locflower*p}. \endproof \begin{theorem} \label{th:ret_genbc} Let $S$ be a noetherian scheme and let $f:X\to Y$ be a morphism between $S$-schemes of finite type. Then every object of $D_{c}(X_r,\Lambda)$ (respectively $\mathbf{SH}_c(X_r)$) satisfies generic base change along $f$ relatively to $S$. \end{theorem} \proof We prove the case of $D_{c}(X_r,\Lambda)$, the case of $\mathbf{SH}_c(X_r)$ being very similar. The problem is local on $Y$, so we may assume that $Y$ is affine. 
By virtue of the hypercohomology spectral sequence for a finite open affine cover of $X$, we reduce the problem to the case where $X$ is also affine. So we may assume that there exists a compactification $f=pj$ with $p$ proper and $j$ an open immersion. By proper base change, we may assume that $f$ is an open immersion with dense image. By working with each irreducible component of $S$, we may also assume that $S$ is integral. Therefore we are reduced to proving $P(n)$. We use induction on $n$. The case $n=-1$ is clear, and we may assume that $n\geqslant0$ and $P(n-1)$ holds. By~\ref{num:char0}, we may assume that all schemes have characteristic $0$. By generic smoothness, after shrinking $S$, we may assume that there is an open immersion $j:U\to X$ with dense image such that $U$ is smooth over $S$. Let $K\in D_{c}(X_r,\Lambda)$. By Lemma~\ref{lm:gendual}, by shrinking $U$, we may also assume that $K_{|U}$ is dualizable in $D(U_r,\Lambda)$. Let $i:Z\to X$ be the closed complement of $U$ (with any scheme structure). Consider the following localization sequence: \begin{align} \label{eq:locfi} R(fi)_*i^!K\to Rf_*K\to R(fj)_*j^*K\to R(fi)_*i^!K[1]. \end{align} Applying Proposition~\ref{prop:cis2.4.10} to $j^*K$ along the morphism $fj$ and the statement $P(n-1)$ to $i^!K$ along the morphism $fi$, we know that there exists a dense open subscheme $W$ of $S$ such that the formations of $R(fi)_*i^!K$ and $R(fj)_*j^*K$ are compatible with any base change over $S$ which factors through $W$. Therefore the same property holds for $Rf_*K$, which finishes the proof. \endproof \begin{corollary} \label{cor:genbcsh} Let $S$ be a noetherian scheme of finite dimension and let $f:X\to Y$ be a morphism between $S$-schemes of finite type. Then every object of $\mathbf{SH}_c(X,\mathbb{Q})$ satisfies generic base change along $f$ relatively to $S$.
\end{corollary} \proof By the decomposition~\eqref{eq:SHdec}, we only need to prove the generic base change for $D_c(X_r,\mathbb{Q})$ and $\mathbf{DM}_c(X,\mathbb{Q})$. The first case is Theorem~\ref{th:ret_genbc}. For the second case, by \cite[Theorem 5.2.2]{CD} we know that $\mathbf{DM}_c(X,\mathbb{Q})$ agrees with constructible $h$-motives with rational coefficients, for which generic base change is proved in \cite[Theorem 2.4.2]{Cis}. \endproof \subsection{} Recall that the $b$-topology is obtained by gluing the \'etale topology and the real \'etale topology (\cite[Definition 2.3]{Sch3}). Denote by $j$ (respectively $i$) the canonical inclusion of the \'etale topos (respectively the real \'etale topos) into the $b$-topos. Then we have the following generic base change result for complexes of $b$-sheaves: \begin{corollary} \label{cor:genbcb} Let $S$ be a noetherian scheme of finite dimension and let $f:X\to Y$ be a morphism between $S$-schemes of finite type. Let $\Lambda$ be a noetherian ring, and let $K\in D(X_b,\Lambda)$ be such that \begin{enumerate} \item $i^*K\in D_{c}(X_r,\Lambda)$; \item $j^*K\in D^b_{ctf}(X_{\textrm{\'et}},\Lambda)$; \item There exists an integer $n$ invertible in $\mathcal{O}(X)$ such that $n\cdot j^*K=0$. \end{enumerate} Then $K$ satisfies generic base change along $f$ relatively to $S$. \end{corollary} \proof By virtue of \cite[Remarks 16.1]{Sch3}, the result follows from the real \'etale case and the \'etale case, which follow from Theorem~\ref{th:ret_genbc} and \cite[Theorem 2.4.2]{Cis} combined with \cite[Theorem 6.3.11]{CD} respectively. \endproof
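The reduction in the proof of Corollary \ref{cor:genbcb} can be spelled out slightly: by the gluing description of the $b$-topology (\cite{Sch3}), the \'etale topos sits as an open subtopos of the $b$-topos with closed complement the real \'etale topos, so every $K\in D(X_b,\Lambda)$ fits into a gluing distinguished triangle
\begin{align}
j_!j^*K\to K\to i_*i^*K\to j_!j^*K[1],
\end{align}
and generic base change for the two outer terms, that is, the \'etale and the real \'etale cases, implies it for $K$.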
\section{Applications} \label{sec.5.app} The infinitely many fillings arising from the aperiodic DT transformation imply that the self-concordance monoid and the fundamental group of the space of Legendrian embeddings are infinite. \begin{cor} For any braid word $\beta$, if the \emph{DT} transformation for $Q_\beta$ is aperiodic, then \begin{enumerate}[label*=\emph{(\arabic*)}] \item the Lagrangian self-concordance monoid $\text{Con}(\Lambda_\beta)$ has a subgroup $\mathbb{Z}$; \item the fundamental group of Legendrian embeddings $\pi_1\mathcal{L} eg(\Lambda_\beta)$ has a subgroup $\mathbb{Z}$. \end{enumerate} \end{cor} \begin{proof} (1) Following the convention in the proof of Theorem \ref{5.11}, if the graph of $\mathsf{R}^m$ were Hamiltonian isotopic to the trivial cylinder, then $L_m$ and $L_0$ would induce the same cluster seed, but they do not by Theorem \ref{5.11}. (2) Consider the monoid morphism $\pi_1\mathcal{L} eg(\Lambda_\beta) \rightarrow \text{Con}(\Lambda_\beta).$ Two loops are distinct whenever the concordances given by their graphs are distinct. \end{proof} \begin{rmk} The proof of Corollary \ref{rainbowinfinite} implies that the Lagrangian self-concordance monoid of $\Lambda_\beta$ contains a $\mathbb{Z}$-subgroup. Meanwhile, the results in \cite{CasalsGao,CZ} give more in the cases they treat. In \cite{CasalsGao}, the Lagrangian self-concordance monoid of $\Lambda_{(3,6)}$ has a factor of $\text{PSL}(2,\mathbb{Z})$, and that of $\Lambda_{(4,4)}$ has a factor of the mapping class group $M_{0,4}$ (the spherical braid group on $4$ strands modulo its center), so the monoid of each of these links has a subgroup of exponential growth. The construction in \cite{CZ} uses Legendrian weaves instead of Legendrian loops, which can be extended beyond positive braid Legendrian links. Also, the Lagrangian disks in the Polterovich surgery can be visualized explicitly.
\end{rmk} The infinitely many fillings can be used to construct Weinstein $4$-manifolds or Stein surfaces that admit infinitely many closed exact Lagrangian surfaces. Let $\Lambda\subset \mathbb{R}^3\subset S^3$ be a Legendrian knot with infinitely many exact Lagrangian fillings. Attaching a Weinstein $2$-handle along $\Lambda$, we obtain a Weinstein manifold \cite{weinstein1991}, which is a Stein surface with the homotopy type of $S^2$ \cite{Eliashberg90,Gompf}. Using this construction, it was first shown in \cite{CasalsGao} that there exists a Stein surface which is \emph{homotopic to the $2$-sphere} and has infinitely many closed exact Lagrangian surfaces of \emph{higher genus}. These Lagrangian surfaces are not related by symplectic Dehn twists \cite{Arnoldmilnor,Seidelknotted}. Prior to the emergence of the infinite filling approach, closed exact Lagrangian surfaces were constructed in some cases. \begin{enumerate}[leftmargin=*] \item Seidel constructed infinitely many Lagrangian $2$-spheres in $A_k$-Milnor fibers, $k\geq 3$ \cite{Seidelgraded}. \item Keating constructed an exact Lagrangian torus in the $A_{p,q,r}$-Milnor fiber that cannot be expressed by $2$-spheres in the Fukaya category \cite{Keating}. \item Vianna constructed infinitely many exact Lagrangian tori in $\mathbb{C}\mathbb{P}^2\setminus \{\textrm{smooth cubic}\}$ \cite{vianna2014}. (The original result states that there are infinitely many monotone Lagrangian tori in $\mathbb{C}\mathbb{P}^2$, but one can delete the smoothing of the toric divisor to adapt to the exact setting.) Later the result was generalized to del Pezzo surfaces \cite{Viannadelpezzo}. \end{enumerate} The Weinstein manifold in (1) is homotopic to a bouquet of spheres. The Weinstein manifolds in (2) and (3) are not homotopic to a bouquet of spheres; for instance, $\pi_1(\mathbb{C}\mathbb{P}^2\setminus \{\textrm{smooth cubic}\}) =\mathbb{Z}_3$.
By \cite{CasalsGao}, there are Weinstein manifolds with the homotopy type of $S^2$ and infinitely many Lagrangian fillings of genus $g\geq 7$. Using the examples constructed in this section, we can lower the genus bound to $g\geq 4$. \begin{cor}\label{application Weinstein} For any $g\geq 4$, there is a Stein surface that is homotopic to $S^2$ and contains infinitely many exact Lagrangian surfaces of genus $g$ that are smoothly isotopic but non-Hamiltonian isotopic. \end{cor} \begin{proof} By Lemma \ref{basicbraidsinf} (1), $\beta_0 = s_1^2s_2^2s_1^2s_2^2\in \mathsf{Br}_3^+$ has infinitely many fillings. Then $\beta = s_1s_2\beta_0$ gives rise to a Legendrian knot with tb $= 10-3 =7$, so its fillings have genus $g=4$ \cite{chantraine2010}. Attach a Weinstein $2$-handle along $\Lambda_{\beta}$. The concatenation of a Lagrangian filling with the core of the Weinstein handle produces a closed exact Lagrangian surface. These Lagrangian surfaces are distinct, following the same argument as in \cite[Corollary 1.10]{CasalsGao}. For higher $g$, consider $s_1^{2g-8}\beta$. \end{proof} \begin{rmk} We do not know if there is a Weinstein manifold homotopic to $S^2$ with infinitely many exact Lagrangian surfaces of genus $g= 0,1,2$ or $3$. \end{rmk} We can lower the genus bound of the closed Lagrangian if we are willing to trade off the homotopy type of the Weinstein manifold. Consider Legendrian links with infinitely many Lagrangian fillings. After attaching a Weinstein $2$-handle at each component, we obtain a Weinstein manifold which is homotopic to a bouquet of spheres. The Weinstein structure does not depend on the order of the handle attachment. Previous methods have achieved $g\geq 4$ \cite{CasalsGao,CZ}. Now the statement holds for every genus. \begin{cor} For any $g\in \mathbb{N}$, there is a Stein surface that is homotopic to a bouquet of spheres, and contains infinitely many exact Lagrangian surfaces of genus $g$ that are smoothly isotopic but non-Hamiltonian isotopic.
\end{cor} \begin{proof} The case $g=0$ follows from \cite{Seidelgraded}. For $g\geq 1$, the braid $s_1^{2g-2}s_1^2s_2^2s_1^2s_2^2\in \mathsf{Br}_3^+$ gives a $4$-component Legendrian link with genus $g$. \end{proof} \section{DT Transformation and Infinitely Many Fillings} \label{sec 2} Let $\Lambda_\beta$ be the rainbow closure Legendrian link associated with an $n$-strand braid word $\beta$, with a marked point near each right cusp. Let $\mathbb{F}$ denote an algebraically closed field of characteristic $2$ and let $\mathrm{Aug}(\Lambda_\beta)$ denote the $\mathbb{F}$-augmentation variety. In \cite{GSW}, we construct cluster $\mathrm{K}_2$ structures on augmentation varieties $\mathrm{Aug}\left(\Lambda_\beta\right)$ and prove that admissible fillings of $\Lambda_\beta$ induce cluster seeds on $\mathrm{Aug}\left(\Lambda_\beta\right)$. Theorem 1.4 in \textit{loc.\ cit.} states that if two admissible fillings induce distinct cluster seeds on $\mathrm{Aug}\left(\Lambda_\beta\right)$, then they are not Hamiltonian isotopic. In this section we will utilize this result to prove the existence of Legendrian links with infinitely many fillings. \subsection{Full Cyclic Rotation} Let $\beta=s_i\beta'$ be a braid word starting with the letter $s_i$.
The cyclic rotation $\rho$ is a Legendrian isotopy from $\Lambda_{s_i\beta'}$ to $\Lambda_{\beta's_i}$, illustrated by the following moves on the front projection of Legendrian links \[ \begin{tikzpicture}[baseline=25,scale =0.5] \draw (1,0) rectangle node [] {$\beta'$} (3,1.5); \draw (3,0.25) to [out=0,in=180] (5.5,2) to [out=180,in=0] (3, 3.75) -- (0,3.75) to [out=180,in=0] (-2.5,2) to [out=0,in=180] (0,0.25) to [out=0,in=180] (1, 0.75); \draw (3,0.75) to [out=0,in=180] (4.75,2) to [out=180,in=0] (3, 3.25) -- (0,3.25) to [out=180,in=0] (-1.75,2) to [out=0,in=180] (0,0.75) to [out=0,in=180] (1, 0.25); \draw (3,1.25) to [out=0,in=180] (4,2) to [out=180,in=0] (3, 2.75) -- (0,2.75) to [out=180,in=0] (-1,2) to [out=0,in=180] (0,1.25) -- (1,1.25); \node at (0.5, 0) [] {$s_i$}; \end{tikzpicture} \quad \rightsquigarrow \quad \begin{tikzpicture}[baseline=25,scale =0.5] \draw (1,0) rectangle node [] {$\beta'$} (3,1.5); \draw (3,0.25) to [out=0,in=180] (5.5,2) to [out=180,in=0] (3, 3.75) -- (1,3.75) to [out=180,in=0] (0, 3.25) to [out=200,in=0] (-2.3,1.7) to [out=0,in=180] (0,0.25) to [out=0,in=180] (1, 0.75); \draw (3,0.75) to [out=0,in=180] (4.75,2) to [out=180,in=0] (3, 3.25) -- (1,3.25) to [out=180,in=0] (0, 3.75) to [out=180,in=0] (-2.3,2.3) to [out=0,in=160] (0,0.75) to [out=0,in=180] (1, 0.25); \draw (3,1.25) to [out=0,in=180] (4,2) to [out=180,in=0] (3, 2.75) -- (0,2.75) to [out=180,in=0] (-1,2) to [out=0,in=180] (0,1.25) -- (1,1.25); \end{tikzpicture} \quad \rightsquigarrow \quad \begin{tikzpicture}[baseline=25,scale =0.5] \draw (1,0) rectangle node [] {$\beta'$} (3,1.5); \draw (3,0.25) to [out=0,in=180] (5.5,2) to [out=180,in=0] (3, 3.75) -- (2.5, 3.75) to [out=180,in=0] (1.5, 3.25) -- (1,3.25) to [out=180,in=0] (-0.75,2) to [out=0,in=180] (1,0.75); \draw (3,0.75) to [out=0,in=180] (4.75,2) to [out=180,in=0] (3, 3.25) -- (2.5, 3.25) to [out=180,in=0] (1.5, 3.75) -- (1,3.75) to [out=180,in=0] (-1.5,2) to [out=0,in=180] (1,0.25); \draw (3,1.25) to [out=0,in=180] 
(4,2) to [out=180,in=0] (3, 2.75) -- (1,2.75) to [out=180,in=0] (0,2) to [out=0,in=180] (1,1.25); \end{tikzpicture} \] \[ \qquad\qquad\qquad \rightsquigarrow \quad \begin{tikzpicture}[baseline=25,scale =0.5] \draw (-1,0) rectangle node [] {$\beta'$} (-3,1.5); \draw (-3,0.25) to [out=180,in=0] (-5.5,2) to [out=0,in=180] (-3, 3.75) -- (-1,3.75) to [out=0,in=180] (0, 3.25) to [out=-20,in=180] (2.3,1.7) to [out=180,in=0] (0,0.25) to [out=180,in=0] (-1, 0.75); \draw (-3,0.75) to [out=180,in=0] (-4.75,2) to [out=0,in=180] (-3, 3.25) -- (-1,3.25) to [out=0,in=180] (0, 3.75) to [out=0,in=180] (2.3,2.3) to [out=180,in=20] (0,0.75) to [out=180,in=0] (-1, 0.25); \draw (-3,1.25) to [out=180,in=0] (-4,2) to [out=0,in=180] (-3, 2.75) -- (0,2.75) to [out=0,in=180] (1,2) to [out=180,in=0] (0,1.25) -- (-1,1.25); \end{tikzpicture} \quad \rightsquigarrow \quad \begin{tikzpicture}[baseline=25,scale =0.5] \draw (-1,0) rectangle node [] {$\beta'$} (-3,1.5); \draw (-3,0.25) to [out=180,in=0] (-5.5,2) to [out=0,in=180] (-3, 3.75) -- (0,3.75) to [out=0,in=180] (2.5,2) to [out=180,in=0] (0,0.25) to [out=180,in=0] (-1, 0.75); \draw (-3,0.75) to [out=180,in=0] (-4.75,2) to [out=0,in=180] (-3, 3.25) -- (0,3.25) to [out=0,in=180] (1.75,2) to [out=180,in=0] (0,0.75) to [out=180,in=0] (-1, 0.25); \draw (-3,1.25) to [out=180,in=0] (-4,2) to [out=0,in=180] (-3, 2.75) -- (0,2.75) to [out=0,in=180] (1,2) to [out=180,in=0] (0,1.25) -- (-1,1.25); \node at (-0.5, 0) [] {$s_i$}; \end{tikzpicture} \] The \emph{full cyclic rotation} $\mathsf{R}$ of $\Lambda_\beta$ is the composition of cyclic rotations that rotate each letter in $\beta$ one-by-one from left to right. The resulting Legendrian link is $\Lambda_\beta$ again. Therefore $\mathsf{R}$ is a Legendrian loop, which induces an automorphism $\Phi_\mathsf{R}$ on $\mathrm{Aug}(\Lambda_\beta)$.
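For bookkeeping purposes, the effect of $\rho$ and $\mathsf{R}$ on braid words can be modeled directly (a toy illustration, not taken from \cite{GSW}; braid words are encoded as lists of generator indices, so that $s_1s_2s_1s_2$ becomes \texttt{[1, 2, 1, 2]}):

```python
def cyclic_rotation(word):
    # rho: Lambda_{s_i beta'} -> Lambda_{beta' s_i}, i.e. move the first letter to the end
    return word[1:] + word[:1]

def full_cyclic_rotation(word):
    # R: rotate every letter once, left to right; after len(word) rotations the
    # underlying word is unchanged, so R is a loop of Legendrian links based at Lambda_beta
    w = list(word)
    for _ in range(len(word)):
        w = cyclic_rotation(w)
    return w

beta = [1, 1, 2, 2, 1, 1, 2, 2]  # s_1^2 s_2^2 s_1^2 s_2^2 in Br_3^+
assert full_cyclic_rotation(beta) == beta  # R returns to the same word
```

Of course, the content of the section is not this tautology on words but the induced automorphism $\Phi_\mathsf{R}$, which is nontrivial on $\mathrm{Aug}(\Lambda_\beta)$.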
\begin{rmk} Below on the left is an example of the annular Legendrian weave (developed in \cite{CZ}) for the Lagrangian self-concordance induced by the full cyclic rotation of a $3$-strand braid. The dashed teal region is the full twist $w_0^2$ coming from satelliting the positive braid along the unknot. This Legendrian $3$-graph consists mostly of spirals, except that they enter and leave the full twist region horizontally. The boxes on the right show what happens inside that region. \[ \begin{tikzpicture}[baseline=0,scale=0.8] \draw (45:2) arc (45:-225:2); \draw (45:0.5) arc (45:-225:0.5); \draw [teal, dashed] (45:2) -- (45:0.5) arc (45:135:0.5); \draw [teal, dashed] (135:0.5) -- (135:2) arc (135:45:2); \draw [blue,domain=0:45,variable=\t,smooth,samples=100] plot ({180-\t}: {0.5+0.0055*\t}); \draw [red,domain=0:90,variable=\t,smooth,samples=100] plot ({225-\t}: {0.5+0.0055*\t}); \draw [blue,domain=0:135,variable=\t,smooth,samples=100] plot ({270-\t}: {0.5+0.0055*\t}); \draw [blue,domain=0:225,variable=\t,smooth,samples=100] plot ({360-\t}: {0.5+0.0055*\t}); \draw [blue,domain=0:225,variable=\t,smooth,samples=100] plot ({180+\t}: {2-0.0055*\t}); \draw [red,domain=0:180,variable=\t,smooth,samples=100] plot ({225+\t}: {2-0.0055*\t}); \draw [blue,domain=0:135,variable=\t,smooth,samples=100] plot ({270+\t}: {2-0.0055*\t}); \draw [blue,domain=0:45,variable=\t,smooth,samples=100] plot ({0+\t}: {2-0.0055*\t}); \node at (180:1.25) [] {$\cdots$}; \node at (0:1.75) [] {$\cdots$}; \end{tikzpicture} \quad \quad \quad \begin{tikzpicture}[baseline=0,scale=0.8] \draw [teal, dashed] (0,0.25) rectangle (3.5,1.75); \draw [teal, dashed] (0,-0.25) rectangle (3.5,-1.75); \draw [red] (-0.25,1) -- (0.75,1); \draw [blue] (0.75,1) to [out=0,in=-90] (1.5,1.75); \draw [blue] (0.5,1.75) to [out=-90,in=120] (0.75,1); \draw [blue] (-0.25,-1) -- (0,-1) to [out=0,in=-90] (0.5,-0.25); \draw [blue] (0.5,0.25) to [out=90,in=-120] (0.75,1); \draw [red] (0.75,1) to [out=60,in=-90] (1,1.75); \draw [red]
(0.75,1) to [out=-60,in=90] (1,0.25); \draw [red] (1,-0.25) to [out=-90,in=120] (1.25,-1) to [out=-120,in=90] (1,-1.75); \draw [blue] (0.5,-1.75) to [out=90,in=180] (1.25,-1) to [out=60,in=-90] (1.5,-0.25); \draw [blue] (1.5,0.25) to [out=90,in=180] (2.25,1) to [out=60,in=-90] (2.5,1.75); \draw [blue] (1.25,-1) to [out=-60,in=90] (1.5,-1.75); \draw [red] (1.25,-1) to [out=0,in=-90] (2,-0.25); \draw [red] (2,0.25) to [out=90,in=-120] (2.25,1)to [out=120,in=-90] (2,1.75); \draw [red] (2,-1.75) to [out=90,in=180] (2.75,-1) to [out=60,in=-90] (3,-0.25); \draw [red] (3,0.25) to [out=90,in=180] (3.5,1) -- (3.75,1); \draw [blue] (2.5,-1.75) to [out=90,in=-120] (2.75,-1) to [out=120,in=-90] (2.5,-0.25); \draw [blue] (2.5,0.25) to [out=90,in=-60] (2.25,1); \draw [red] (2.25,1) to [out=0,in=-90] (3,1.75); \draw [blue] (2.75,-1) -- (3.75,-1); \draw [red] (2.75,-1) to [out=-60,in=90] (3,-1.75); \end{tikzpicture} \] \end{rmk} \subsection{Donaldson-Thomas Transformation} The cluster DT transformation is a central element of the cluster modular group acting on the associated cluster varieties. Combinatorially, a cluster DT transformation can be manifested as a maximal green sequence, or more generally, a reddening sequence of quiver mutations \cite{KelDT}. \begin{lem}\label{6.7} For any braid word $\beta$, we have $\Phi_\mathsf{R}=\mathrm{DT}^{-2}$ on $\mathrm{Aug}\left(\Lambda_\beta\right)$. \end{lem} \begin{proof} Each positive braid $[\beta]$ defines a double Bott-Samelson cell $\mathrm{Conf}^e_\beta(\mathcal{C})$ associated with the group $\mathsf{G}=\mathrm{SL}_n$ (see \cite{SWflag} for definition). Theorem 4.10 of \cite{GSW} constructs an algebraic variety isomorphism $\gamma$ between $\mathrm{Aug}\left(\Lambda_\beta\right)$ and $\mathrm{Conf}^e_\beta(\mathcal{C})$. The cluster $\mathrm{K}_2$ structure on $\mathrm{Aug}\left(\Lambda_\beta\right)$ is inherited from $\mathrm{Conf}^e_\beta(\mathcal{C})$ via the isomorphism $\gamma$. 
The {left reflection} $_ir$ from $\mathrm{Conf}_{s_i\gamma}^\delta(\mathcal{C})$ to $\mathrm{Conf}_\gamma^{s_i\delta}(\mathcal{C})$ and the {right reflection} $r^i$ from $\mathrm{Conf}_\gamma^{\delta s_i}(\mathcal{C})$ to $\mathrm{Conf}_{\gamma s_i}^\delta(\mathcal{C})$ are biregular isomorphisms between double Bott-Samelson cells. By \cite[Corollary 5.4]{GSW}, the isomorphism $\gamma$ intertwines the cyclic rotation $\Phi_\rho$ on augmentation varieties and the isomorphism $r^i\circ {_ir}$ on double Bott-Samelson cells. That is, the following diagram commutes. \begin{equation}\label{comm diag} \vcenter{\vbox{\xymatrix@R=1.5em{\mathrm{Aug}\left(\Lambda_{s_i\beta'}\right) \ar@{->}[r]^{\gamma}_\cong \ar[d]_{\Phi_{\rho}}^\cong & \mathrm{Conf}^e_{s_i\beta'}(\mathcal{C}) \ar[d]^{r^i\circ {_ir}}_{\cong}\\ \mathrm{Aug}\left(\Lambda_{\beta' s_i}\right) \ar@{->}[r]_{\gamma}^\cong & \mathrm{Conf}^e_{\beta' s_i}(\mathcal{C})}}} \end{equation} Let $\beta=s_{i_1}\ldots s_{i_l}$. By \cite{SWflag}, the inverse of the DT transformation on $\mathrm{Conf}^e_\beta(\mathcal{C})$ is given by \[ \mathrm{DT}^{-1}=\left(r^{i_l}\circ r^{i_{l-1}} \circ \dots \circ r^{i_1}\right)\circ t, \] where $t$ is a biregular isomorphism induced by the transposition action on $\mathsf{G}=\mathrm{SL}_n$. Then \begin{align*} \mathrm{DT}^{-2} =&\left(r^{i_l}\circ r^{i_{l-1}} \circ \dots \circ r^{i_1}\right)\circ t\circ\left(r^{i_l}\circ r^{i_{l-1}} \circ \dots \circ r^{i_1}\right)\circ t\\ =&\left(r^{i_l}\circ r^{i_{l-1}}\circ \cdots \circ r^{i_1}\right)\circ \left({_{i_l}r}\circ {_{i_{l-1}}r} \circ \cdots \circ {_{i_1}r}\right)\circ t\circ t\\ =&\left(r^{i_l}\circ r^{i_{l-1}}\circ \cdots \circ r^{i_1}\right)\circ \left({_{i_l}r}\circ {_{i_{l-1}}r} \circ \cdots \circ {_{i_1}r}\right) \\ =&\left(r^{i_l}\circ {_{i_l}r}\right)\circ \left(r^{i_{l-1}}\circ {_{i_{l-1}}r}\right)\circ \cdots \circ \left(r^{i_1} \circ {_{i_1}r}\right). 
\end{align*} By translating the identity $\mathrm{DT}^{-2}=\left(r^{i_l}\circ {_{i_l}r}\right)\circ \cdots \circ \left(r^{i_1} \circ {_{i_1}r}\right)$ to the augmentation variety side according to the commutative diagram \eqref{comm diag}, we get that $\Phi_\mathsf{R}=\mathrm{DT}^{-2}$ on $\mathrm{Aug}\left(\Lambda_\beta\right)$. \end{proof} \subsection{Aperiodic DT Yields Infinitely Many Lagrangian Fillings} \begin{thm}\label{5.11} For any braid word $\beta$, if the $\mathrm{DT}$ transformation on $\mathrm{Aug}\left(\Lambda_\beta\right)$ is aperiodic, then $\Lambda_\beta$ admits infinitely many admissible fillings. \end{thm} \begin{proof} Let $L_0$ be the admissible filling that pinches the crossings in $\beta$ from left to right and then fills the resulting unlinks with minimal cobordisms. Let $L_m = \mathsf{R}^{m}\circ L_0$. We claim that $L_m$ is not Hamiltonian isotopic to $L_k$ for $m \neq k$. To see this, note that by Lemma \ref{6.7}, the cluster seeds of $L_m$ can be computed by mutating the initial seed according to $\mathrm{DT}^{-2m}$; the aperiodicity of $\mathrm{DT}$ implies that the cluster seeds of $L_m$ and $L_k$ are distinct for $m\neq k$. The statement follows from \cite[Theorem 1.3]{GSW}. \end{proof} \begin{rmk} The torus $(n,m)$-link $\Lambda_{(n,m)}$ is the rainbow closure of the $n$-strand braid word $\beta = (s_1s_2 \dotsb s_{n-1})^m$. K\'alm\'an \cite{Kalman} defined a Legendrian loop $\mathsf{K}=\rho^{n-1}$ for $\Lambda_{(n,m)}$. By definition, the full cyclic rotation $\mathsf{R} = \mathsf{K}^{m}.$ The induced actions $\Phi_\mathsf{K}$ and $\Phi_\mathsf{R}$ on the augmentation variety are of finite order. The quivers associated to $\mathrm{Aug}\left(\Lambda_{(n,m)}\right)$ and those associated to the Grassmannian $\mathrm{Gr}_{n,n+m}$ share the same unfrozen parts. Hence, their DT transformations have the same order.
The $\mathrm{DT}$ on $\mathrm{Gr}_{n,n+m}$ has finite order because it is related to the periodic Zamolodchikov operator by $\mathrm{DT}^2=\mathrm{Za}^{m}$ \cite{Kelperiod,weng, SWflag}. In fact, K\'alm\'an's loop induces the Zamolodchikov operator. Summarizing, $$\Phi_\mathsf{R} = \Phi_\mathsf{K}^m = \mathrm{DT}^{-2} = \mathrm{Za}^{-m}.$$ \end{rmk} \begin{thm}\label{6.13} Let $Q$ be an acyclic quiver. Its associated $\mathrm{DT}$ transformation is of finite order if and only if $Q$ is of finite type. \end{thm} \begin{proof} Combinatorially, the DT transformation arises from a maximal green sequence of quiver mutations \cite{KelDT}. When $Q$ is acyclic, one may label the vertices of $Q$ by $1,\ldots, l$ such that $i<j$ if there is an arrow from $i$ to $j$. The mutation sequence $\mu_l\circ\cdots \circ \mu_1$ is maximal green and therefore gives rise to the DT transformation associated with $Q$. The DT transformation acts on the cluster variety $\mathscr{A}_Q$ associated with the quiver $Q$. Following \cite{lee2018frieze}, the {\it frieze variety} $X(Q)$ is defined to be the Zariski closure of the DT-orbit containing the point $P=(1, \ldots, 1)\in \mathscr{A}_Q$. Theorem 1.1 of {\it loc.\ cit.} states that \begin{enumerate} \item If $Q$ is representation finite, then the frieze variety $X(Q)$ is of dimension $0$. \item If $Q$ is tame, then the frieze variety $X(Q)$ is of dimension $1$. \item If $Q$ is wild, then the frieze variety $X(Q)$ is of dimension at least $2$. \end{enumerate} As a direct consequence, if $Q$ is not of finite type, then the DT-orbit of $P$ contains infinitely many points, and therefore DT is not periodic. If $Q$ is of finite type, then its corresponding cluster variety has only finitely many cluster seeds. Hence, its DT transformation is periodic.
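As an aside, not needed for the argument, this dichotomy can be observed numerically. The following sketch (an illustration under the standard exchange-matrix encoding of quiver mutation; it is not taken from \cite{lee2018frieze}) iterates $\mathrm{DT}=\mu_l\circ\cdots\circ\mu_1$ on the point $P=(1,\ldots,1)$: for the type $\mathrm{A}_2$ quiver the orbit is periodic of period $5$, while for the Kronecker quiver, which is of infinite (tame) type, the orbit grows without bound:

```python
from fractions import Fraction

def mutate(B, x, k):
    """Cluster mutation at vertex k: B is the skew-symmetric exchange matrix
    (B[i][j] = number of arrows i -> j) and x the tuple of cluster variables."""
    n = len(B)
    pos, neg = Fraction(1), Fraction(1)
    for i in range(n):
        if B[i][k] > 0:
            pos *= x[i] ** B[i][k]
        elif B[i][k] < 0:
            neg *= x[i] ** (-B[i][k])
    x2 = list(x)
    x2[k] = (pos + neg) / x[k]  # exchange relation x_k x_k' = prod^+ + prod^-
    B2 = [[-B[i][j] if k in (i, j)
           else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
           for j in range(n)] for i in range(n)]
    return B2, x2

def dt_orbit(B, steps):
    """Iterate DT = mu_l o ... o mu_1 (sources first, as in the maximal green
    sequence above) starting from the point P = (1, ..., 1)."""
    x = [Fraction(1)] * len(B)
    orbit = [tuple(x)]
    for _ in range(steps):
        for k in range(len(B)):
            B, x = mutate(B, x, k)
        orbit.append(tuple(x))
    return orbit

A2 = [[0, 1], [-1, 0]]         # type A_2: finite type
kronecker = [[0, 2], [-2, 0]]  # Kronecker quiver: infinite type

assert dt_orbit(A2, 5)[-1] == (1, 1)  # periodic: the orbit returns to P
assert dt_orbit(kronecker, 3)[1:] == [(2, 5), (13, 34), (89, 233)]  # unbounded
```

The $\mathrm{A}_2$ orbit $(1,1)\to(2,3)\to(2,1)\to(1,2)\to(3,2)\to(1,1)$ is a frieze pattern, while the Kronecker orbit consists of pairs of odd-indexed Fibonacci numbers lying on the curve $x^2+y^2+1=3xy$, in line with the dimension-$1$ case above.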
\end{proof} \begin{rmk} Keller pointed out to us that the aperiodicity of $\mathrm{DT}$ for an acyclic quiver $Q$ of infinite type follows from the aperiodicity of the Auslander-Reiten translation functor on the derived category of representations of $Q$. \end{rmk} \begin{cor}\label{rainbowinfinite} For any braid word $\beta$, if $Q_\beta$ is acyclic and of infinite type, then $\Lambda_\beta$ admits infinitely many admissible fillings. \end{cor} \begin{proof} It follows from Theorem \ref{5.11} and Theorem \ref{6.13}. \end{proof} \section{Finite Type Classification} \label{sec 4} In this section, we focus on positive braid Legendrian links of finite type. \begin{thm}\label{Dynkin quiver} Let $\beta$ be a braid word such that $Q_\beta$ is mutation equivalent to a Dynkin quiver and $\Lambda_\beta$ does not contain a split union of knots. Then $\Lambda_\beta$ is Legendrian isotopic to a standard link in Definition \ref{stad.links}. \end{thm} \begin{proof} By Proposition \ref{Mainquiver} (1), it suffices to assume that $Q_\beta$ is a Dynkin quiver. If $Q_\beta$ is of type $\mathrm{A}$, we repeatedly apply Lemma \ref{3.13} to reduce the number of strands of $\Lambda_\beta$ until it becomes a $2$-strand link, which is a standard link of type $\mathrm{A}$. If $Q_\beta$ is of type $\mathrm{D}$ or $\mathrm{E}$, then it contains a unique trivalent vertex. If $n\geq 4$, we can apply Lemma \ref{3.13} to $\beta(1,2)$ or $\beta(n-2,n-1)$, whichever does not contain the trivalent vertex, to reduce $n$ until $n=3$. Note that $\beta$ can be written as \eqref{beta.exp}. Since $[\beta]$ is of finite type, following the discussion in Section \ref{sec.3.2}, we may assume $m=2$ in \eqref{beta.exp}. After necessary rotation, we get \[ \beta= s_1^{a_1}s_2^{b_1}s_1^{a_2}s_2^{b_2}, \quad \quad \mbox{where } a_1\geq 2, ~a_2 \geq 2, ~ \min\{b_1, b_2\}=1. \] The trivalent vertex in a Dynkin $\mathrm{DE}$ quiver has three legs, at least one of which is of length $1$.
For $Q_\beta$, two legs lie in level $1$ and one leg stretches to level $2$. We show that $b_1=b_2= 1$ after suitable Legendrian isotopy. Otherwise, one of the level $1$ legs is of length $1$. Then up to cyclic rotations, we get $a_2=2$. Depending on $b_1=1$ or $b_2=1$, we have the following Legendrian isotopies: \begin{align*} \beta &= s_1^{a_1}s_2s_1^2s_2^{b_2} = s_1^{{a_1}-1}{\color{blue}{s_1s_2s_1}}s_1s_2^{b_2} \stackrel{\textrm{R3}}{=} s_1^{a_1-1}s_2s_1{\color{blue}{s_2s_1s_2^{b_2}}} \stackrel{\textrm{R3}}{=} s_1^{a_1-1}s_2s_1^{b_2+1}s_2{\color{teal}{s_1}} \stackrel{\rho}{=} s_1^{a_1}s_2s_1^{b_2+1}s_2, \\ \beta &= s_1^{a_1}s_2^{b_1}s_1^2s_2 = {\color{teal}{s_1}}s_1^{{a_1}-1}s_2^{b_1}s_1^2s_2 \stackrel{\rho}{=} s_1^{a_1-1}s_2^{b_1}s_1{\color{blue}{s_1s_2s_1}} \stackrel{\textrm{R3}}{=} s_1^{a_1-1}{\color{blue}{s_2^{b_1}s_1s_2}}s_1s_2 \stackrel{\textrm{R3}}{=} s_1^{a_1}s_2s_1^{b_1+1}s_2. \end{align*} Eventually, after necessary cyclic rotations, we get the standard links. \end{proof} \begin{defn} Let $\beta$ be an $n$-strand braid word and let $\gamma$ be an $m$-strand braid word. Denote by $\gamma^{\#_j}$ the word obtained from $\gamma$ via $s_i\mapsto s_{i+j}$. The \emph{connect sum} of $\beta$ and $\gamma$ is the braid word $\beta \# \gamma:=\beta\gamma^{\#_{n-1}}$. The \emph{split union} of $\beta$ and $\gamma$ is the braid word $\beta \sqcup \gamma:=\beta\gamma^{\#_{n}}$. Note that $\left[\beta \# \gamma\right] \in \mathsf{Br}_{n+m-1}^+$ and $[\beta\sqcup\gamma] \in \mathsf{Br}_{n+m}^+$. \end{defn} The connect sum of two positive braid links is again a positive braid link. By \cite{EV}, positive braid links admit a unique maximal tb Legendrian representative. The connect sum of two links is well-defined once one specifies the components along which the $1$-handle is attached. Once specified, the connect sum is associative and commutative. \begin{rmk} Below is a list of the numbers of components for the standard $\mathrm{ADE}$ links.
\begin{center} \begin{tabular}{|c|c|c|} \hline \rule{0pt}{2.3ex} knots & 2-component links & 3-component links \\[0.045cm] \hline \rule{0pt}{2.3ex} $\mathrm{A}_\text{even}$, $\mathrm{E}_6$, $\mathrm{E}_8$ & $\mathrm{A}_\text{odd}$, $\mathrm{D}_\text{odd}$, $\mathrm{E}_7$ & $\mathrm{D}_\text{even}$\\[0.045cm] \hline \end{tabular} \end{center} \end{rmk} \begin{thm}\label{4.6} If $\beta$ is of finite type, then $\Lambda_\beta$ is Legendrian isotopic to a split union of unknots and connect sum of standard $\mathrm{ADE}$ links. \end{thm} \begin{proof} The vertices on each level of $Q_\beta$ form a type $\mathrm{A}$ quiver. If $Q_\beta$ is disconnected, then we have \begin{enumerate} \item two adjacent levels of $Q_\beta$ have vertices but no arrows in between; and/or \item a level of $Q_\beta$ has no vertex. \end{enumerate} For (1), after necessary rotation, we get $\beta(i, i+1) = s_i^a s_{i+1}^b$ for some $i$. We may further commute $s_1,\dotsb, s_{i-1}$ with $s_{i+2},\dotsb, s_{n-1}$, obtaining \[\beta = \beta(1,i) \beta(i+1,n-1).\] Hence, $\beta$ is a connect sum of two braid words. For (2), we get $\beta(i,i) = s_i$ or empty for some $i$. If it is empty, then \[\beta = \beta(1,i-1) \beta(i+1,n),\] which is a split union of two braid words. 
If $\beta(i,i) = s_i$, then the braid is a connect sum via the following Legendrian isotopy: \[ \begin{tikzpicture}[baseline=20,scale=0.9] \draw [teal](0,-0.1) rectangle (0.5,0.35); \draw (0,0.4) rectangle (0.5,0.85); \draw (1,-0.1) rectangle (1.5,0.35); \draw [teal](1,0.4) rectangle (1.5,0.85); \draw (0.5,0.25) -- (1,0.5); \draw (0.5,0.5) -- (1,0.25); \draw (0.5,0.75) -- (1,0.75); \draw (0.5,0) -- (1,0); \draw (1.5,0.75) to [out=0,in=180] (1.75,0.875) to [out=180,in=0] (1.5,1)--(0,1) to [out=180,in=0] (-0.25,0.875) to [out=0,in=180] (0,0.75); \draw (1.5,0.5) to [out=0,in=180] (2,0.875) to [out=180,in=0] (1.5,1.25) -- (0,1.25) to [out=180,in=0] (-0.5,0.875) to [out=0,in=180] (0,0.5); \draw (1.5,0.25) to [out=0,in=180] (2.25,0.875) to [out=180,in=0] (1.5,1.5) -- (0,1.5) to [out=180,in=0] (-0.75,0.875) to [out=0,in=180] (0,0.25); \draw (1.5,0) to [out=0,in=180] (2.5,0.875) to [out=180,in=0] (1.5,1.75) -- (0,1.75) to [out=180,in=0] (-1,0.875) to [out=0,in=180] (0,0); \end{tikzpicture} \quad \overset{\rho}{\rightsquigarrow} \quad \begin{tikzpicture}[baseline=20,scale=0.9] \draw [teal] (2,-0.1) -- (2.5,-0.1) -- (2.5,0.35) -- (2,0.35); \draw (2,-0.1) -- (1.5,-0.1) -- (1.5,0.35) -- (2,0.35); \draw (0.5,0.4) -- (1,0.4) -- (1,0.85) -- (0.5,0.85); \draw [teal] (0.5,0.4) -- (0,0.4) -- (0,0.85) -- (0.5,0.85); \draw (1,0.75) -- (2.5,0.75) to [out=0,in=180] (2.75,0.875) to [out=180,in=0] (2.5,1)--(0,1) to [out=180,in=0] (-0.25,0.875) to [out=0,in=180] (0,0.75); \draw (0,0.25) -- (1,0.25) to [out=0,in=180] (1.5,0.5) -- (2.5,0.5) to [out=0,in=180] (3,0.875) to [out=180,in=0] (2.5,1.25) -- (0,1.25) to [out=180,in=0] (-0.5,0.875) to [out=0,in=180] (0,0.5); \draw (1,0.5) to [out=0,in=180] (1.5,0.25); \draw (2.5,0.25) to [out=0,in=180] (3.25,0.875) to [out=180,in=0] (2.5,1.5) -- (0,1.5) to [out=180,in=0] (-0.75,0.875) to [out=0,in=180] (0,0.25); \draw (2.5,0) to [out=0,in=180] (3.5,0.875) to [out=180,in=0] (2.5,1.75) -- (0,1.75) to [out=180,in=0] (-1,0.875) to [out=0,in=180] (0,0) -- 
(1.5,0); \end{tikzpicture} \quad \rightsquigarrow\quad \begin{tikzpicture}[baseline=20,scale=0.9] \draw (0,0.4) rectangle (0.5,0.85); \draw (1,-0.1) rectangle (1.5,0.35); \draw (0.5,0.75) to [out=0,in=180] (0.75,0.875) to [out=180,in=0] (0.5,1) -- (0,1) to [out=180,in=0] (-0.25,0.875) to [out=0,in=180] (0,0.75); \draw (1.5,0) to [out=0,in=180] (2.5,0.875) to [out=180,in=0] (1.5,1.75) -- (-1,1.75) to [out=180,in=0] (-2,0.875) to [out=0,in=180] (-1,0) -- (1,0); \draw [red] (0.5,0.25) to [out=0,in=180] (1,0.5) -- (1.5,0.5) to [out=0,in=180] (2,0.875) to [out=180,in=0] (1.5,1.25) -- (-1,1.25) to [out=180,in=0] (-1.5,0.875) to [out=0,in=180] (-1,0.5) -- (0,0.5); \draw (0.5,0.5) to [out=0,in=180] (1,0.25); \draw (1.5,0.25) to [out=0,in=180] (2.25,0.875) to [out=180,in=0] (1.5,1.5) -- (-1,1.5) to [out=180,in=0] (-1.75,0.875) to [out=0,in=180] (-1,0.25) -- (0.5,0.25); \end{tikzpicture} \] \[ \overset{\text{R2,R3}}{\rightsquigarrow}\quad \begin{tikzpicture}[baseline=20,scale=0.9] \draw (0,0.4) rectangle (0.5,0.85); \draw (1,-0.1) rectangle (1.5,0.35); \draw (0.5,0.75) to [out=0,in=180] (0.75,0.875) to [out=180,in=0] (0.5,1) -- (0,1) to [out=180,in=0] (-0.25,0.875) to [out=0,in=180] (0,0.75); \draw (1.5,0) to [out=0,in=180] (2.5,0.875) to [out=180,in=0] (1.5,1.75) -- (-1,1.75) to [out=180,in=0] (-2,0.875) to [out=0,in=180] (-1,0) -- (1,0); \draw [red] (-1,0.25) to [out=0,in=180] (-0.25,0.625) to [out=180,in=0] (-0.625,0.75) to [out=180,in=0] (-1,0.625) to[out=0,in=180] (-0.5,0.5) -- (0,0.5); \draw (0.5,0.5) to [out=0,in=180] (1,0.25); \draw (1.5,0.25) to [out=0,in=180] (2.25,0.875) to [out=180,in=0] (1.5,1.5) -- (-1,1.5) to [out=180,in=0] (-1.75,0.875) to [out=0,in=180] (-1,0.25); \end{tikzpicture}\quad \overset{\text{R1}}{\rightsquigarrow}\quad \begin{tikzpicture}[baseline=20,scale=0.9] \draw (0,0.4) rectangle (0.5,0.85); \draw (1,-0.1) rectangle (1.5,0.35); \draw (0.5,0.5) to [out=0,in=180] (1,0.25); \draw (0.5,0.75) to [out=0,in=180] (0.75,0.875) to [out=180,in=0] (0.5,1) 
-- (0,1) to [out=180,in=0] (-0.25,0.875) to [out=0,in=180] (0,0.75); \draw (1.5,0.25) to [out=0,in=180] (2,0.875) to [out=180,in=0] (1.5,1.25) -- (0,1.25) to [out=180,in=0] (-0.5,0.875) to [out=0,in=180] (0,0.5); \draw (1.5,0) to [out=0,in=180] (2.25,0.875) to [out=180,in=0] (1.5,1.5) -- (0,1.5) to [out=180,in=0] (-0.75,0.875) to [out=0,in=180] (0,0) -- (1,0); \end{tikzpicture} \quad \rightsquigarrow\quad \begin{tikzpicture}[baseline=20,scale=0.9] \draw (0,0.4) rectangle (0.5,0.85); \draw (1,0.15) rectangle (1.5,0.6); \draw (0.5,0.5) -- (1,0.5); \draw (0.5,0.75) -- (1.5,0.75) to [out=0,in=180] (1.75,0.875) to [out=180,in=0] (1.5,1) -- (0,1) to [out=180,in=0] (-0.25,0.875) to [out=0,in=180] (0,0.75); \draw (1.5,0.5) to [out=0,in=180] (2,0.875) to [out=180,in=0] (1.5,1.25) -- (0,1.25) to [out=180,in=0] (-0.5,0.875) to [out=0,in=180] (0,0.5); \draw (1.5,0.25) to [out=0,in=180] (2.25,0.875) to [out=180,in=0] (1.5,1.5) -- (0,1.5) to [out=180,in=0] (-0.75,0.875) to [out=0,in=180] (0,0.25)--(1,0.25); \end{tikzpicture} \] The link of each quiver component is Legendrian isotopic to a standard $\mathrm{ADE}$ link. In addition, there is a split unknot component for every pair of consecutive levels $\beta(i,i)$ and $\beta(i+1,i+1)$ that are both empty. This completes the proof. \end{proof} \section{Infinitely Many Fillings for Infinite Type} \label{sec 3} This section is devoted to the proof of the following result. \begin{thm}\label{MainStep1} If $[\beta]$ is a positive braid of infinite type, then the positive braid Legendrian link $\Lambda_\beta$ admits infinitely many exact Lagrangian fillings, no two of which are Hamiltonian isotopic. \end{thm} \begin{defn} Given two braid words $\beta$ and $\gamma$, we say $\beta$ \emph{dominates} $\gamma$ if there exists an admissible cobordism from $\Lambda_\gamma$ to $\Lambda_\beta$. Dominance is a partial order on braid words. \end{defn} Recall that a quiver is \emph{connected} if its underlying graph is connected.
Connectedness of quivers is invariant under mutations. Under the connectedness assumption, Theorem \ref{MainStep1} is a consequence of Corollary \ref{rainbowinfinite} and the following propositions. \begin{prop}\label{3.2} Suppose $\beta$ dominates $\gamma$. If $\Lambda_\gamma$ admits infinitely many admissible fillings, then so does $\Lambda_\beta$. \end{prop} \begin{proof} Recall from \cite[Theorem 1.4]{GSW} that the induced cluster charts on the augmentation variety can be used to distinguish admissible fillings, and that the functorial morphism between augmentation varieties induced by any admissible cobordism maps distinct cluster charts to distinct cluster charts. \end{proof} \begin{prop}\label{Mainquiver} For any braid word $\beta$ with connected $Q_\beta$, at least one of the following two scenarios happens: \begin{enumerate}[label*=\emph{(\arabic*)},leftmargin = *] \item there is an admissible concordance from $\Lambda_\gamma$ to $\Lambda_\beta$ and $Q_\gamma$ is a quiver of finite type. \item $\beta$ dominates a braid word $\gamma$ and $Q_\gamma$ is acyclic and of infinite type. \end{enumerate} \end{prop} \begin{prop}\label{3.4} If Proposition \ref{Mainquiver} (1) happens, then $[\beta]$ is of finite type. If Proposition \ref{Mainquiver} (2) happens, then $[\beta]$ is of infinite type. \end{prop} \begin{proof} Admissible concordances give rise to sequences of mutations (\cite[\S 5]{GSW}). If Proposition \ref{Mainquiver} (1) happens, then $Q_\beta$ is mutation equivalent to $Q_\gamma$. The latter is of finite type. Therefore $[\beta]$ is of finite type. If Proposition \ref{Mainquiver} (2) happens, then by \cite[Proposition 5.25(2)]{GSW}, $Q_\beta$ is mutation equivalent to a quiver which contains $Q_\gamma$ as a full subquiver. Suppose that $[\beta]$ were of finite type. Then $Q_\gamma$ would be mutation equivalent to a quiver of finite type, which contradicts the assumption that $Q_\gamma$ is acyclic and of infinite type. Therefore $[\beta]$ is of infinite type.
\end{proof} Proposition \ref{3.4} implies that the two scenarios of Proposition \ref{Mainquiver} are mutually exclusive. To conclude the proof of the latter, it remains to prove that the two scenarios in Proposition \ref{Mainquiver} cover all braid words with connected quivers. The strategy of our proof is as follows. \begin{itemize} \item Suppose there is an admissible concordance $\Lambda_\gamma\rightarrow \Lambda_\beta$ such that $Q_\gamma$ is acyclic. If $Q_\gamma$ is of finite type, then $\beta$ satisfies (1); otherwise, $\beta$ satisfies (2). \item Otherwise, we prove that $\beta$ satisfies (2). \end{itemize} \subsection{Preparation} We adopt the following notations for operations on braid words. \begin{enumerate} \item[1.] $\overset{\text{R1}}{=}$ denotes the positive Markov destabilization, which deletes the letter $s_1$ (resp. $s_{n-1}$) if it occurs exactly once in $\beta$. \item[2.] $\overset{\text{R3}}{=}$ denotes the braid move R3, which switches $s_is_{i+1}s_i$ and $s_{i+1}s_is_{i+1}$. \item[3.] $\overset{\rho}{=}$ denotes the cyclic rotation, which turns $\beta s_i$ into $s_i\beta$ or vice versa. \item[4.] $\overset{c}{=}$ denotes the commutation which turns $s_is_j$ into $s_js_i$ whenever $|i-j|>1$. \item[5.] $\succ$ denotes deleting letters; $\beta\succ \gamma$ means that $\gamma$ can be obtained by deleting letters in $\beta$. In particular, when $\beta\succeq \gamma$, we say that $\gamma$ is a \emph{subword} of $\beta$. \item[6.] $\overset{\textrm{oppo}}{\rightsquigarrow}$ denotes taking the opposite word ${\beta}^{\mathrm{op}}$. The quiver $Q_{\beta^{\mathrm{op}}}$ is obtained from $Q_\beta$ by reversing the orientation of every arrow. \end{enumerate} Operations 1 -- 4 induce Legendrian isotopies between the corresponding positive braid Legendrian links, which are the building blocks for admissible concordances. Operation 5 induces pinch cobordisms between Legendrian links. Operation 6 is a symmetry that can be used to reduce the number of cases considered in the proof.
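As an aside for readers who wish to experiment with the case analysis below by computer, the word operations above are straightforward to implement. The following Python sketch is our own illustration (not part of the formal argument; all names are ours), encoding a braid word as a list of generator indices:

```python
# A minimal sketch (our own illustration, not from the paper) of the word
# operations above, encoding a braid word as a list of generator indices,
# e.g. [1, 2, 1] for s_1 s_2 s_1.

def rotate(word):
    """Operation 3 (rho): cyclic rotation moving the last letter to the front."""
    return word[-1:] + word[:-1]

def r3(word, k):
    """Operation 2 (R3) at position k: s_i s_{i+1} s_i <-> s_{i+1} s_i s_{i+1}."""
    i, j, l = word[k:k + 3]
    assert i == l and abs(i - j) == 1, "no R3 move at this position"
    return word[:k] + [j, i, j] + word[k + 3:]

def opposite(word):
    """Operation 6: the opposite word beta^op."""
    return word[::-1]

def is_subword(gamma, beta):
    """Operation 5: gamma is obtained from beta by deleting letters."""
    it = iter(beta)
    return all(letter in it for letter in gamma)

# Example from the 3-strand case below: s_1^3 s_2 s_1^3 s_2 s_1^2 s_2
beta = [1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2]
beta = r3(beta, 2)   # = s_1^2 s_2 s_1 s_2 s_1^2 s_2 s_1^2 s_2
print(is_subword([1, 1, 2, 2, 1, 1, 2, 2], beta))   # prints True
```

The final line checks, by letter deletion, that the rewritten word dominates $s_1^2s_2^2s_1^2s_2^2$, matching the computation in Section \ref{sec.3.2}.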
\begin{lem}\label{basicbraidsinf} The quivers for the following braids are acyclic and of infinite type: \begin{enumerate}[label*=\emph{(\arabic*)}] \item $s_1^2s_2^2s_1^2s_2^2$, or more generally, $s_i^2s_{i+1}^2s_i^2s_{i+1}^2$; \item $s_1s_3s_2^2s_1s_3s_2^2$. \end{enumerate} \end{lem} \begin{proof} The quivers for (1) and (2) are $\tilde{\mathrm{D}}_5$ and $\tilde{\mathrm{D}}_4$ respectively. \[ \begin{tikzpicture} \draw [thick,->](0.15,0) -- (0.85, 0); \filldraw (0,0) circle (2pt); \filldraw (1,0) circle (2pt); \filldraw (-1,0) circle (2pt); \filldraw (0,-1) circle (2pt); \filldraw (1,-1) circle (2pt); \filldraw (-1,-1) circle (2pt); \draw [thick,->] (0.15,-1) -- (0.85, -1); \draw [thick,<-](0,-0.15) -- (0, -0.85); \draw [thick,<-](-0.15,0) -- (-0.85, 0); \draw [thick,->](-0.85,-1) -- (-0.15,-1); \end{tikzpicture} \qquad\qquad\qquad \begin{tikzpicture}[baseline = -15] \draw [thick,->](0.15,0) -- (0.85, 0); \filldraw (0,0) circle (2pt); \filldraw (1,0) circle (2pt); \filldraw (-1,0) circle (2pt); \filldraw (0,1) circle (2pt); \filldraw (0,-1) circle (2pt); \draw [thick,<-](0,0.15) -- (0, 0.85); \draw [thick,<-](0,-0.15) -- (0, -0.85); \draw [thick,<-](-0.15,0) -- (-0.85, 0); \end{tikzpicture} \qedhere \] \end{proof} \begin{lem} \label{ws4lemma} Suppose $w_1,w_2,w_3,w_4\in \left\{ s_1s_3, s_1^2, s_3^2\right\}$. Then $w_1s_2w_2s_2w_3s_2w_4s_2$ dominates a braid with an acyclic quiver of infinite type. \end{lem} \begin{proof} Note that $\beta = w_1 s_2 {\color{red}{w_2}} s_2 w_3 s_2 {\color{red}{w_4}} s_2 \succ w_1s_2^2 w_3s_2^2.$ If $w_1=w_3$, then the Lemma follows from Lemma \ref{basicbraidsinf}. The same argument applies to $w_2=w_4$. In the rest of the proof, we assume that $w_1 \neq w_3$ and $w_2 \neq w_4$. Let $k$ be the size of the set $\{i \mid w_i = s_1s_3\}$. Here $k\leq 2$; otherwise, $w_1 = w_3$ or $w_2 = w_4$. Using the symmetry between $s_1$ and $s_3$, we further assume that there are at least as many $s_1^2$ as $s^2_3$ in $\{w_1, w_2, w_3, w_4\}$.
We shall exhaust all the possibilities of $k$. \vskip 2mm \paragraph{{\it Case 1: $k=2$}} After taking necessary cyclic rotations and/or the opposite word, we have $w_1 = w_2 = s_1s_3$, and the values of $w_3, w_4$ split into two subcases. If $w_3 =s_1^2$ and $w_4 = s_3^2$, then $\Lambda_\beta$ admits an admissible concordance to the standard $\mathrm{E}_9$ link: \begin{align*} \beta &= s_1s_3s_2s_3s_1s_2s_1s_1 {\color{teal}{s_2s_3s_3s_2}} \stackrel{\rho}{=} s_2s_3s_3s_2s_1 {\color{blue}{s_3s_2s_3}} s_1s_2s_1s_1 \\ &\stackrel{\textrm{R3}}{=} s_2s_3s_3 {\color{blue}{s_2s_1s_2}} s_3s_2 s_1s_2s_1s_1 \stackrel{\textrm{R3}}{=} s_2{\color{blue}{s_3s_3s_1}}s_2 {\color{blue}{s_1s_3}}s_2 s_1s_2s_1s_1 \\ & \stackrel{c}{=} s_2s_1{\color{blue}{s_3s_3s_2s_3}}s_1s_2 s_1s_2s_1s_1 \stackrel{\textrm{R3}}{=} s_2s_1s_2{\color{teal}{s_3}}s_2s_2s_1s_2 s_1s_2s_1s_1 \\ &\stackrel{\textrm{R1}}{=} {\color{blue}{s_2s_1s_2}}s_2s_2s_1{\color{blue}{s_2 s_1s_2}}s_1s_1 \stackrel{\textrm{R3}}{=} s_1{\color{blue}{s_2s_1s_2s_2}}s_1s_1 s_2s_1s_1s_1 \\ &\stackrel{\textrm{R3}}{=} s_1s_1s_1s_2s_1s_1s_1 s_2s_1s_1s_1 = s_1^3 s_2s_1^3 s_2 {\color{teal}{s_1^3}} \stackrel{\rho}{=} s_1^6 s_2s_1^3 s_2. \end{align*} If ${w_3 =w_4 =s_1^2}$, then \begin{align*} \beta &= s_1 {\color{blue}{s_3 s_2 s_3}} s_1 s_2 s_1^2 s_2 s_1^2 s_2 \stackrel{\textrm{R3}}{=} s_1s_2 {\color{teal}{s_3}} s_2 s_1s_2 s_1^2 s_2 s_1^2 s_2 \stackrel{\textrm{R1}}{=} s_1s_2^2 s_1s_2 s_1^2 s_2 s_1^2 s_2 \\ &= s_1s_2^2 s_1s_2 s_1 s_1 s_2 s_1 {\color{teal}{s_1 s_2}} \stackrel{\rho}{=} s_1 s_2 s_1s_2^2 s_1 s_2 s_1 {\color{blue}{s_1 s_2 s_1}} \stackrel{\textrm{R3}}{=} s_1 {\color{red}{s_2}} s_1s_2^2 s_1 {\color{red}{s_2}} s_1 s_2 {\color{red}{s_1}} s_2 {\succ} s_1^2s_2^2s_1^2s_2^2. \end{align*} \vskip 2mm \paragraph{{\it Case 2: $k=1$}} We assume that $w_1 =s_1s_3$ after a necessary cyclic rotation. Then $w_2,w_3,w_4$ are either $s_1^2$ or $s_3^2$. Note that $w_2\neq w_4$.
By the symmetry between $s_1$ and $s_3$, and taking rotations and the opposite word if necessary, it suffices to consider $w_2=w_3=s_1^2$ and $w_4=s_3^2$. Then $\Lambda_\beta$ admits an admissible concordance to the standard $\mathrm{E}_9$ link: \begin{align*} \beta &= s_3s_1s_2s_1^2s_2s_1^2{\color{teal}{s_2s_3^2s_2}} \stackrel{\rho}{=} s_2{\color{blue}{s_3^2s_2s_3}}s_1s_2s_1^2s_2s_1^2 \stackrel{\textrm{R3}}{=} s_2^2{\color{teal}{s_3}}s_2^2s_1s_2s_1^2s_2s_1^2 \\ &\stackrel{\textrm{R1}}{=} s_2^4s_1s_2s_1^2s_2s_1^2 = {\color{blue}{s_2^4s_1s_2}}s_1^2s_2{\color{teal}{s_1^2}} \stackrel{\textrm{R3}}{=} s_1s_2s_1^6s_2{\color{teal}{s_1^2}} \stackrel{\rho}{=} s_1^6s_2s_1^3s_2. \end{align*} \vskip 2mm \paragraph{{\it Case 3: $k=0$}} Assume that $w_1=w_2 =s_1^2$ and $w_3=w_4 = s_3^2$. Then $Q_\beta$ is of type $\tilde{\mathrm{D}}_8$: \[ \begin{tikzpicture} \draw [thick,->](0.15,0) -- (0.85, 0); \filldraw (0,0) circle (2pt); \filldraw (1,0) circle (2pt); \filldraw (-1,0) circle (2pt); \filldraw (0,-1) circle (2pt); \filldraw (1,-1) circle (2pt); \filldraw (2,-1) circle (2pt); \filldraw (3,-2) circle (2pt); \filldraw (2,-2) circle (2pt); \filldraw (1,-2) circle (2pt); \draw [thick,->] (0.15,-1) -- (0.85, -1); \draw [thick,<-](0,-0.15) -- (0, -0.85); \draw [thick,<-](-0.15,0) -- (-0.85, 0); \draw [thick,->](1.15,-1) -- (1.85,-1); \draw [thick,->](1.15,-2) -- (1.85,-2); \draw [thick,->](2.15,-2) -- (2.85,-2); \draw [thick,<-](2,-1.15) -- (2, -1.85); \end{tikzpicture} \qedhere \] \end{proof} \begin{defn}\label{defn:subword} Let $\beta$ be a braid word of $n$ strands. For $1\leq i \leq j\leq n-1$, we define $$\beta(i,j) := \textrm{the sub-word of }{\beta} \textrm{ that contains } s_i, s_{i+1},\dotsb, s_j.$$ For example, if $\beta = s_1 s_2s_3 s_1^2 s_2 s_5s_2s_3s_4$, then $\beta(2,3) = s_2s_3s_2^2s_3$. \end{defn} \begin{lem}\label{quiverdeg} Let $\beta$ be a braid word of $n$ strands.
\begin{enumerate}[label=\emph{(\arabic*)}] \item If $s_i^2$ is not a subword of $\beta$, then $Q_\beta=Q_{\beta(1,i-1)}\sqcup Q_{\beta(i+1,n-1)}$. \item If $\beta(i,i+1)$ does not contain an intertwining pair as a subword, namely neither $s_{i}s_{i+1}s_{i}s_{i+1}$ nor $s_{i+1}s_{i}s_{i+1}s_{i}$, then $Q_\beta=Q_{\beta(1,i)}\sqcup Q_{\beta(i+1,n-1)}$. \end{enumerate} \end{lem} \begin{proof} The brick diagram has an empty level $i$ in case (1), and it has no arrows between level $i$ and level $i+1$ in case (2). \end{proof} \begin{lem}\label{3.10} Let $n\geq 3$ and let $\beta$ be an $n$-strand braid word such that $Q_\beta$ is connected. If $\beta\succ s_1^2$ and $\beta \succ s_{n-1}^2$, then $Q_\beta$ is acyclic if and only if for $1\leq i\leq n-2$, we have \[\beta(i,i+1)=s_i^{a_1}s_{i+1}^{b_1}s_i^{a_2}s_{i+1}^{b_2} ~\mbox{or}~ s_{i+1}^{b_1}s_i^{a_1}s_{i+1}^{b_2}s_i^{a_2}, \quad \quad \mbox{where } a_1, a_2, b_1, b_2\geq 1.\] \end{lem} \begin{proof} The \emph{if} direction is obvious. To see the \emph{only if} direction, let us assume without loss of generality that $\beta(i,i+1)$ begins with $s_i$. If $\beta(i,i+1)$ does not end after $s_i^{a_1}s_{i+1}^{b_1}s_i^{a_2}s_{i+1}^{b_2}$, then there is at least one $s_i$ after $s_{i+1}^{b_2}$, giving $Q_\beta$ an oriented cycle between levels $i$ and $i+1$. \end{proof} \begin{assumption}\label{mainassumption} Note that $2$-strand braids correspond to type $\mathrm{A}$ quivers. It suffices to consider braid words $\beta$ of at least $3$ strands. Let us single out the generator $s_2$. After necessary rotations, we assume that $\beta$ does not start with $s_2$ but ends with $s_2$, that is, $$\beta = w_1s_2^{b_1} w_2 s_2^{b_2}\dotsb w_m s_2^{b_m},$$ where each $w_i$ is a word in $s_1, s_3, s_4, \dotsb, s_{n-1}$. We assume that every $w_i$ contains at least one $s_1$ or $s_3$; otherwise, we can move the whole $w_i$ across the $s_2$'s at either end and merge it with $w_{i-1}$ or $w_{i+1}$. We further assume that $\sum b_i$ achieves its minimum.
Under this assumption, the length of every $w_i(1,3)$ is at least $2$. Otherwise, with the letters $s_4,\dotsb, s_{n-1}$ migrated away, we have $s_2w_is_2=s_2s_1s_2$ or $s_2s_3s_2$, and we can use R3 to reduce $\sum b_i$. We assume that $m\geq 2$; otherwise, $Q_\beta$ is disconnected by Lemma \ref{quiverdeg}. Meanwhile, if $m\geq 4$, then after deleting letters if necessary, we land in the case of Lemma \ref{ws4lemma}, and the braid $\beta$ dominates a braid with an acyclic quiver of infinite type. In the rest of this section, without loss of generality, we assume that \begin{equation} \label{beta.exp} \beta = w_1s_2^{b_1}w_2s_2^{b_2}\dotsb w_ms_2^{b_m}, \end{equation}where $b_i \geq 1$, $m=2$ or $3$, and each $w_i\succeq s_1^2$, $s_3^2$, or $s_1s_3$. \end{assumption} We prove Proposition \ref{Mainquiver} by induction on the number of strands of $\beta$. \subsection{Proof of Proposition \ref{Mainquiver} for 3-strand braids} \label{sec.3.2} If $m=2$ in \eqref{beta.exp}, then $Q_\beta$ is acyclic and therefore the proposition follows. It remains to consider $m=3$. Suppose that at least one of the $b_i$'s, say $b_3$ after necessary cyclic rotations, is greater than $1$. The proposition follows since \[\beta \succeq w_1s_2{\color{red}{w_2}}s_2w_3s_2^2 {\succ} w_1s_2^2w_3s_2^2\succeq s_1^2s_2^2s_1^2s_2^2.\] It remains to consider $b_1=b_2=b_3=1$, i.e., \[\beta = s_1^{a_1}s_2s_1^{a_2}s_2s_1^{a_3}s_2.\] If two of the $a_i$'s, say $a_1$ and $a_2$ after necessary rotations, are equal to $2$, then \begin{align*} \beta &= s_1{\color{blue}{s_1s_2s_1}}s_1s_2s_1^{a_3}s_2 \stackrel{\text{R3}}{=} s_1s_2s_1{\color{blue}{s_2s_1s_2}}s_1^{a_3}s_2 \stackrel{\text{R3}}{=} s_1s_2s_1s_1s_2s_1s_1^{a_3}{\color{teal}{s_2}} \\ &\stackrel{\rho}{=} {\color{blue}{s_2s_1s_2}}s_1s_1s_2s_1s_1^{a_3} \stackrel{{\text{R3}}}{=} {\color{teal}{s_1}}s_2s_1s_1s_1s_2s_1s_1^{a_3} \stackrel{{\rho}}{=} s_2s_1s_1s_1s_2s_1s_1^{a_3}s_1 = s_2s_1^3s_2s_1^{a_3+1}. \end{align*} The quiver for the last word is acyclic.
The proposition is proved. Otherwise, at least two of the $a_i$'s, say $a_1$ and $a_2$ after necessary rotations, are greater than $2$. The proposition follows since \begin{align*} \beta &\succ s_1^{3}s_2s_1^{3}s_2s_1^{2}s_2 = s_1^{2}{\color{blue}{s_1s_2s_1}}s_1^{2}s_2s_1^{2}s_2 \stackrel{\text{R3}}{=} s_1^{2}s_2{\color{red}{s_1}}s_2s_1^{2}s_2{\color{red}{s_1^{2}}}s_2 {\succ} s_1^2s_2^2s_1^2s_2^2. \end{align*} \subsection{Proof of Proposition \ref{Mainquiver} for braids of at least 4 strands} Assume that $\beta$ is expressed as in \eqref{beta.exp}. Note that $s_1$ commutes with all other generators in $w_i$. Therefore we further assume that \begin{itemize} \item $w_i = s_1^{a_i}v_i = v_i s_1^{a_i}$, where $v_i$ is a word in $s_3, \dotsb, s_{n-1}$. \end{itemize} We start with the proof of the following two lemmas. \begin{lem}\label{3.13} Suppose $\beta \succ s_1$ and $\beta \succ s_2$. If $Q_{\beta(1,2)}$ is of Dynkin type $\mathrm{A}$, then there exists an admissible concordance $\Lambda_\gamma\rightarrow \Lambda_\beta$ such that $\gamma$ has fewer strands than $\beta$. \end{lem} \begin{proof} Since $Q_{\beta(1,2)}$ is of Dynkin type $\mathrm{A}$, $\beta(1,2)$ must be of the form $s_1^{a_1}s_2^{b_1}s_1^{a_2}s_2^{b_2}$ with $\min\{a_1, a_2\}=\min\{b_1,b_2\}=1$. After necessary cyclic rotations and/or taking the opposite word, we assume $a_1=b_1=1$. Then \[ \beta = v_1 {\color{blue}s_1 s_2 s_1^{a_2}} v_2 s_2^{b_2} \stackrel{\textrm{R3}}{=} v_1 s_2^{a_2} {\color{teal}s_1} s_2 v_2 s_2^{b_2} \stackrel{\textrm{R1}}{=}v_1 s_2^{a_2+1} v_2 s_2^{b_2}. \] The resulting braid word has one fewer strand. \end{proof} \begin{lem} \label{A sub-tree} Suppose $Q_\beta$ is connected. If $Q_{\beta(1,3)}$ is acyclic and $Q_{\beta(1,2)}$ is not of type $\mathrm{A}$, then Proposition \ref{Mainquiver} is true for $[\beta]$. \end{lem} \begin{proof} Define \[k: = \max\{i ~|~ Q_{\beta(1,i)} \mbox{ is acyclic}\}. \] If $k=n-1$, then $Q_\beta$ is acyclic and the Lemma is proved.
If $Q_{\beta(1,k)}$ is of infinite type, then the Lemma follows since $\beta \succ \beta(1,k)$. Now we assume $k<n-1$ and that $Q_{\beta(1,k)}$ is of finite type. Note that $Q_{\beta(1,3)}$ is a subquiver of $Q_{\beta(1,k)}$. By assumption, $Q_{\beta(1,2)}$ is not of type $\mathrm{A}$. Therefore $Q_{\beta(1,k)}$ must be of type $\mathrm{D}$ or $\mathrm{E}$. Hence, $Q_{\beta(i,j)}$ is of type $\mathrm{A}$ for $1<i<j\leq k$. In particular, $Q_{\beta(k-1,k)}$ is of type $\mathrm{A}$ and $Q_\beta$ is connected. Therefore we have $$\beta(k-1,k) = s_{k-1}^{e_1} s_{k}^{f_1} s_{k-1}^{e_2} s_{k}^{f_2},\quad \text{or} \quad \beta(k-1,k) = s_{k}^{f_1} s_{k-1}^{e_1} s_{k}^{f_2}s_{k-1}^{e_2},$$ where $$\min\{e_1,e_2\} = \min\{f_1,f_2\}=1.$$ Below we consider the first case $\beta(k-1,k) = s_{k-1}^{e_1} s_{k}^{f_1} s_{k-1}^{e_2} s_{k}^{f_2}$. The second case follows by taking the opposite word of $\beta$. The letters $s_1,\dotsb, s_{k-2}$ commute with $s_{k+1},\dotsb, s_{n-1}$. After the necessary commutations of the letters in $\beta$, we can write $$\beta = \gamma_1\delta_1\gamma_2\delta_2,$$ where $\gamma_i~ (i=1,2)$ is a word in $s_1,\dotsb, s_{k-1}$ with $e_i$ copies of $s_{k-1}$, and $\delta_i$ is a word in $s_k,\dotsb, s_{n-1}$ with $f_i$ copies of $s_{k}$. We remark that we have \emph{not} performed cyclic rotations yet and will only do so carefully, so that the quiver for $\beta(1,k-1)=\gamma_1\gamma_2$ is not distorted. Recall that $\min\{f_1,f_2\} = 1$. We consider the case $f_1=1$; the argument for $f_2=1$ is similar. Let us write $\delta_1 = xs_k y,$ where $x,y$ are words in $s_{k+1},\dotsb, s_{n-1}$ and they commute with $\gamma_1,\gamma_2$. We pass $y$ through $\gamma_2$, and we pass $x$ through $\gamma_1$ followed by a cyclic rotation, obtaining $\gamma_1(xs_ky)\gamma_2\delta_2 \rightsquigarrow \gamma_1 s_k \gamma_2 (y\delta_2x)$. This move does not change the quiver for $\beta (1,k)$, and it is a Legendrian isotopy.
Consequently, we can assume $\delta_1 = s_k$ and write $\beta=\gamma_1s_k\gamma_2\delta_2$. Now we consider \[\delta_2(k,k+1)=s_k^{g_1}s_{k+1}^{h_1}s_k^{g_2}s_{k+1}^{h_2}\cdots s_k^{g_l},\] where $g_1, g_l\geq 0$ and all other powers $\geq 1$. The Lemma holds for the following two cases. \begin{enumerate} \item If $\delta_2\succ s_{k+1}s_k^2s_{k+1}$, then $\beta \succ \gamma_1 s_k \gamma_2s_{k+1}s_k^2s_{k+1}:=\beta_1$. \item If $\delta_2 \succ s_{k+1}^2s_ks_{k+1}^2$, then $\beta \succ \gamma_1 s_k \gamma_2s_{k+1}^2s_ks_{k+1}^2:=\beta_2$. \end{enumerate} The quivers for $\beta_1$ and $\beta_2$ are acyclic and of infinite type, as depicted below: $$ \begin{tikzpicture} \filldraw (0,-1) circle (2pt); \filldraw (-1,0) circle (2pt); \node at (-1.5,0) [] {$\dotsb$}; \filldraw (0,0) circle (2pt); \filldraw (1,0) circle (2pt); \node at (1.5,0) [] {$\dotsb$}; \draw [thick,->](0.15,0) -- (0.85, 0); \draw [thick,<-](-0.15,0) -- (-0.85, 0); \draw [thick,<-](0,-0.15) -- (0, -0.85); \node at (0, -1.3) [] {\vdots}; \filldraw (0,-1.8) circle (2pt); \filldraw (1,-1.8) circle (2pt); \filldraw (0,-2.8) circle (2pt); \draw [thick,->](0.15,-1.8) -- (0.85, -1.8); \draw [thick,->](0,-2.65) -- (0, -1.95); \node at (-2, -2.3) [] {$(1)$}; \end{tikzpicture} \qquad\quad \begin{tikzpicture} \filldraw (0,-1) circle (2pt); \filldraw (-1,0) circle (2pt); \node at (-1.5,0) [] {$\dotsb$}; \filldraw (0,0) circle (2pt); \filldraw (1,0) circle (2pt); \node at (1.5,0) [] {$\dotsb$}; \draw [thick,->](0.15,0) -- (0.85, 0); \draw [thick,<-](-0.15,0) -- (-0.85, 0); \draw [thick,<-](0,-0.15) -- (0, -0.85); \node at (0, -1.3) [] {\vdots}; \filldraw (0,-1.8) circle (2pt); \filldraw (1,-2.8) circle (2pt); \filldraw (-1,-2.8) circle (2pt); \filldraw (0,-2.8) circle (2pt); \draw [thick,->](0.15,-2.8) -- (0.85, -2.8); \draw [thick,->](-0.85,-2.8) -- (-0.15, -2.8); \draw [thick,->](0,-2.65) -- (0, -1.95); \node at (-2, -2.3) [] {$(2)$}; \end{tikzpicture} $$ By the definition of $k$, the quiver for $\beta(k,k+1)$ 
is not acyclic. Therefore we have $\delta_2\succeq s_{k+1}s_ks_{k+1}s_k$. We assume that $\delta_2$ does not satisfy the above $(1)$ or $(2)$. Then \[\delta_2(k,k+1)=s_k^{g_1}s_{k+1}^{h_1}s_ks_{k+1}^{h_2}s_k^{g_3},\] where $g_1\geq 0$, $g_3\geq 1$, and $\min\{h_1,h_2\} =1$. Depending on whether $h_1=1$ or $h_2=1$, we have the following two cases: \[ \beta(1,k+1) = \gamma_1s_k\gamma_2{\color{blue}s_k^{g_1}s_{k+1}s_k}s_{k+1}^{h_2}s_k^{g_3} \overset{\text{R3}}{=} \gamma_1s_k\gamma_2s_{k+1}{\color{teal}s_ks_{k+1}^{g_1+h_2}s_k^{g_3}} \overset{\rho}{=} s_ks_{k+1}^{g_1+h_2}s_k^{g_3}\gamma_1s_k\gamma_2s_{k+1}, \] \[ \beta(1,k+1) = \gamma_1s_k\gamma_2s_k^{g_1}s_{k+1}^{h_1}{\color{blue}s_ks_{k+1}s_k^{g_3}} \overset{\text{R3}}{=} \gamma_1s_k\gamma_2s_k^{g_1}s_{k+1}^{h_1+g_3}s_ks_{k+1}. \] In both cases, the only R3 move is $s_ks_{k+1}s_k=s_{k+1}s_ks_{k+1}$ performed in $\delta_2$, hence the move can be extended from $\beta(1,k+1)$ to $\beta$. The cyclic rotations can also be extended to $\beta$ without changing the quiver for $\beta(1,k)$. In the end, we have performed a Legendrian isotopy and obtained a new braid word $\beta'$ with $Q_{\beta'(1,k+1)}$ acyclic. We repeat the above argument for $\beta'$ and $k+1$. This completes the case $f_1=1$. \end{proof} Now we prove the proposition. If $m=2$ in \eqref{beta.exp}, then $Q_{\beta(1,3)}$ is acyclic. If $Q_{\beta(1,2)}$ is not of type $\mathrm{A}$, then the proposition follows directly from Lemma \ref{A sub-tree}. Otherwise, we apply Lemma \ref{3.13}. It remains to consider the case $m=3$, in which we have \begin{equation}\beta =w_1s_2^{b_1}w_2s_2^{b_2}w_3s_2^{b_3}= v_1s_1^{a_1}s_2^{b_1}v_2s_1^{a_2}s_2^{b_2}v_3s_1^{a_3}s_2^{b_3}. \end{equation} Let us set \[ p= \# \{i~|~a_i\neq 0 \}, \hskip 14mm q=\#\{i~|~ v_i \succeq s_3\}. \] Here $p,q \in \{2, 3\}$. We consider cases according to $(p,q)$. \medskip \paragraph{\it Case 1: $(p,q) = (2,2)$} After suitable cyclic rotation, we assume $a_3=0$. Then $Q_{\beta(1,3)}$ is acyclic.
The rest goes along the same lines as the proof above for the case $m=2$. \medskip \paragraph{\it Case 2: $(p,q) = (2,3)$} After suitable cyclic rotation, we assume $a_3=0$. If $b_1 \geq 2$, then \[\beta= v_1s_1^{a_1}s_2^{b_1}v_2s_1^{a_2}s_2^{b_2}v_3s_2^{b_3} \succ v_1s_1s_2^{b_1}v_2s_1s_2^{b_2+b_3} \succeq s_3s_1s_2^2 s_3s_1s_2^2. \]The proposition follows. So we assume $b_1=1$. Now if $a_2 =1$, then using $s_1^{a_1}s_2s_1 = s_2s_1s_2^{a_1}$, we can reduce the number of strands. The same argument works for $a_1=1$. It remains to consider $a_1 \geq 2$ and $a_2\geq 2$. Then $$\beta \succeq s_1^2 {\color{red}{v_1}} s_2 {\color{red}{v_2}} s_1^2 s_2 {\color{red}{v_3}} s_2 {\succ} s_1^2 {\color{blue}s_3 s_2 s_3} s_1^2s_2 s_2 \stackrel{\textrm{R3}}{=} s_1^2 s_2 {\color{red}{s_3}} s_2 s_1^2s_2 s_2 {\succ} s_1^2 s_2^2 s_1^2s_2^2. $$ \medskip \paragraph{\it Case 3: $(p,q) = (3,3)$} We have \[w_i=v_is_1^{a_i}\succeq s_3s_1, \quad \quad \forall i=1,2,3.\] If there is a $b_i$, say $b_1$ after necessary cyclic rotation, greater than $1$, then \[\beta \succ s_3s_1 s_2^{b_1}s_3s_1s_2^{b_2+b_3}\succeq s_1s_3s_2^2s_1s_3s_2^2. \] The proposition follows. It remains to consider $b_1= b_2=b_3 =1$. If there is some $w_i$ with $w_i(1,3) = s_1s_3$, then after a suitable cyclic rotation we can assume $w_2(1,3) = s_1s_3$. Note that the remaining letters of $w_2$ are among $s_4, \ldots, s_{n-1}$. They commute with the $s_2$ at either end and can be merged into $w_1$ or $w_3$. Therefore, we may assume $w_2=s_1s_3$ and use the identity \begin{equation}\label{3strandstocancel1} s_1^{a_1}s_2s_1s_3s_2s_1^{a_3} = s_3^{a_3}s_2s_1s_3s_2s_3^{a_1} \end{equation} to reduce the number of strands. If none of the $w_i$'s has $w_i(1,3) = s_1s_3$, then $w_i(1,3) \succeq s_1^2s_3$ or $w_i(1,3) \succeq s_1s_3^2$ for $i=1,2,3$. Two of them must be of the same kind, and they have adjacent indices after cyclic rotation.
For example, if $w_1,w_2$ are of the same type $s_1s_1s_3$, then $$\beta \succeq s_1s_3s_1 s_2 s_1s_3s_1 s_2 {\color{red}{w_3}} s_2 {\succ} s_1s_3 {\color{blue}{s_1s_2s_1}} s_3s_1s_2s_2 \stackrel{\textrm{R3}}{=} s_1s_3 s_2 {\color{red}{s_1}} s_2 s_3s_1s_2s_2 {\succ} s_1s_3 s_2^2 s_3s_1s_2^2. $$ Other combinations of the $w_i$'s are similar. \medskip \paragraph{\it Case 4: $(p,q) = (3,2)$} If $\beta$ is a $4$-strand word, then by the symmetry between $s_1$ and $s_3$, it reduces to the case $(p,q)=(2,3)$. Below we assume that $\beta$ has at least $5$ strands. After cyclic rotations, we assume that $v_3$ does not contain $s_3$. Then $v_3$ commutes with $s_2$ and can be merged into $w_2$. Hence we assume that \[w_1\succeq s_1s_3, \quad \quad w_2\succeq s_1s_3, \quad \quad w_3 = s_1^{a_3} \mbox{ with } a_3\geq 2.\] If $b_1\geq 2$, then the proposition follows since \[\beta\succeq s_1s_3s_2^2s_1s_3s_2 {\color{red}{w_3}} s_2 {\succ} s_1s_3s_2^2s_1s_3s_2^2.\] Below we consider $b_1=1$. If $w_1\succeq s_1s_3^2$ and $w_2\succeq s_1s_3^2$, then the proposition follows since \[\beta\succeq s_1s_3 {\color{blue}{s_3s_2s_3}} s_3s_1s_2^2 \stackrel{\textrm{R3}}{=} s_1s_3s_2{\color{red}{s_3}}s_2s_3s_1s_2^2 {\succ} s_1s_3s_2^2s_1s_3s_2^2.\] Hence, we assume that one of $w_1, w_2$ contains a single $s_3$. After suitable cyclic rotations and taking the opposite word if necessary, we assume that $w_2$ contains a single $s_3$. Moreover, all the letters $s_4,\dotsb, s_{n-1}$ in $w_2$ can be merged into $w_1$ by moving them in either direction and taking the necessary cyclic rotations. To summarize, it remains to consider \[\beta = v_1 s_1^{a_1} s_2 s_1^{a_2} s_3 s_2^{b_2} s_1^{a_3} s_2^{b_3},\quad \mbox{where }v_1\succeq s_3,~a_3\geq 2,~ \mbox{and }a_1, a_2, b_2, b_3\geq 1.\] We split our proof into two cases based on the value of $a_2$. \medskip A. If $a_2 =1$, then $b_2\geq 2$.
Otherwise, $\beta =v_1 {\color{purple}{s_1^{a_1} s_2 s_1 s_3 s_2 s_1^{a_3}}} s_2^{b_3}$, and we can apply identity \eqref{3strandstocancel1} to the purple part to reduce the number of strands. We further assume $b_3 =1$; otherwise, $b_3\geq 2$, and together with $a_3,b_2\geq 2$, we have $$\beta \succeq s_1^{a_1}{\color{red}{s_2}}s_1^{a_2}s_2^{b_2}s_1^{a_3}s_2^{b_3} {\succ} s_1^{a_1+a_2}s_2^{b_2}s_1^{a_3}s_2^{b_3} {\succeq} s_1^{2}s_2^{2}s_1^{2}s_2^{2}. $$ To recap, we have \[\beta = v_1 s_1^{a_1} s_2 s_1^{a_2} s_3 s_2^{b_2\geq 2} s_1^{a_3\geq 2} s_2.\] If $w_1(1,3)=s_1s_3$, then $w_1 =x s_1s_3 y$. After rotating $s_1^{a_3} s_2$ to the front and moving $x,y$, we have $$ \beta = x s_1s_3 y s_2 s_1^{a_2} s_3 s_2^{b_2} {\color{teal}{s_1^{a_3} s_2}} \stackrel{\rho}{=} s_1^{a_3} s_2 {\color{teal}{x}} s_1s_3 {\color{teal}{y}} s_2 s_1^{a_2} s_3 s_2^{b_2} \stackrel{\textrm{c},\rho}{=} {\color{purple}{s_1^{a_3} s_2 s_1s_3 s_2 s_1^{a_2}}} ys_3 s_2^{b_2}x. $$ We apply identity \eqref{3strandstocancel1} to the purple part to reduce the number of strands. Therefore we can assume $w_1(1,3)\succeq s_1s_3^2$ or $s_1^2s_3$. Now we focus on $w_1(1,4)$. The connectedness of $Q_\beta$ implies that $w_1(1,4)$ has at least two copies of $s_4$, with at least one $s_3$ sandwiched in between. Hence there are four possibilities: $$ w_1(1,4)\succeq \; (a)\; s_1^2s_4s_3s_4, \; (b)\; s_1s_4s_3s_3s_4, \; (c)\; s_1s_4s_3s_4s_3, \; (d)\; s_1s_3s_4s_3s_4. $$ The proposition follows via direct calculations: \begin{enumerate} \item[($a$)] $w_1(1,4)\succeq s_1^2s_4s_3s_4 = s_1^2s_3s_4s_3\succ s_1^2s_3^2$. Then $$ \quad \quad \quad \beta \succ s_1^2{\color{blue}{s_3^2s_2s_3}}s_1s_2^2s_1^2s_2 \stackrel{\textrm{R3}^2}{=} s_1^2s_2s_3s_2{\color{blue}{s_2s_1s_2}}s_2s_1^2s_2 \stackrel{\textrm{R3}}{=} s_1^2s_2{\color{red}{s_3}}s_2s_1{\color{red}{s_2}}s_1s_2{\color{red}{s_1^2}}s_2 {\succ} s_1^2s_2^2s_1^2s_2^2. $$ \item[($b$)] $w_1(1,4)\succeq s_1s_4s_3s_3s_4$.
Then \begin{align*} \beta & \succeq s_1{\color{teal}{s_4}}s_3^2{\color{teal}{s_4}}s_2s_1s_3s_2^2s_1^2s_2 \stackrel{\textrm{c},\rho}{=} s_1s_3^2s_2s_1{\color{blue}{s_4s_3s_4}}s_2^2s_1^2s_2 \stackrel{\textrm{R3}}{=} s_1s_3^2s_2s_1s_3{\color{red}{s_4}}s_3s_2^2s_1^2s_2 \\ &{\succ} s_1s_3^2s_2s_1{\color{teal}{s_3}}s_3s_2^2s_1^2s_2 \stackrel{\textrm{c}}{=} s_1s_3{\color{blue}{s_3s_2s_3}}s_1s_3s_2^2s_1^2s_2 \stackrel{\textrm{R3}}{=} s_1s_3s_2{\color{red}{s_3}}s_2s_1s_3s_2^2{\color{red}{s_1^2s_2}}\\ &{\succ} s_1s_3s_2^2s_1s_3s_2^2. \end{align*} \item[($c$)] $w_1(1,4)\succeq s_1s_4s_3s_4s_3 = s_1s_3s_4s_3s_3 \succ s_1s_3^3$. Then \begin{align*} \beta & \succeq s_1{\color{blue}{s_3^3s_2s_3}}s_1s_2^2s_1^2s_2 \stackrel{\textrm{R3}}{=} s_1s_2{\color{teal}{s_3}}s_2^3s_1s_2^2s_1^2s_2 \stackrel{\textrm{R1}}{\succ} s_1s_2^4s_1s_2^2{\color{teal}{s_1^2s_2}} \\ &\stackrel{\rho}{=} {\color{blue}{s_1^2}}s_2s_1s_2^4s_1s_2^2 \stackrel{\textrm{R3}}{=} {\color{teal}{s_2}}s_1s_2^2s_2^4s_1s_2^2 \stackrel{\rho}{=} s_1s_2^6s_1s_2^3. \end{align*} We end up with an $\mathrm{E}_9$ quiver, which is acyclic and of infinite type. \item[($d$)] $w_1(1,4)\succeq s_1s_3s_4s_3s_4 = s_1s_3s_3s_4s_3 \succ s_1s_3^3$. The rest follows from the same calculation as in $(c)$. \end{enumerate} \medskip B. If $a_2\geq 2$, then we look at $w_1(1,3)$. If $w_1(1,3) = s_1s_3$, then $v_1=xs_3y$, where $x,y$ are words of $s_4, \ldots, s_{n-1}$. Let $\tilde{x}$ and $\tilde{y}$ be the opposite word of $x$ and $y$ respectively. Then \[ \beta = {\color{teal}{x}}s_3y s_1 s_2 s_1^{a_2} s_3 {\color{teal}s_2^{b_2} s_1^{a_3} s_2^{b_3}} \stackrel{\rho, c}{=} s_2^{b_2} s_1^{a_3} s_2^{b_3} s_3 s_1 s_2 s_1^{a_2} ys_3x \stackrel{\textrm{oppo}}{\rightsquigarrow} \tilde{x}s_3\tilde{y}s_1^{a_2}s_2s_1s_3s_2^{b_3}s_1^{a_3}s_2^{b_2}. \] It goes back to Case A. 
If $w_1(1,3) \succeq s_1^2s_3$, then $$ \beta \succeq s_3s_1{\color{blue}{s_1s_2s_1}}s_1s_3s_2^{b_2}s_1^{a_3}s_2^{b_3} \stackrel{\textrm{R3}}{=} s_3s_1s_2{\color{red}{s_1}}s_2s_1s_3{\color{red}{s_2^{b_2}s_1^{a_3}s_2^{b_3}}} {\succ} s_1s_3s_2^2s_1s_3s_2^2. $$ It remains to consider $w_1(1,3)\succeq s_1s_3^2$. There are three possibilities for $w_1(1,4)$: $$ w_1(1,4)\succeq \; (e)\; s_1s_4s_3s_4s_3, \; (f)\; s_1s_3s_4s_3s_4, \; (g)\; s_1s_4s_3s_3s_4. $$ For both ($e$) and ($f$), we have $w_1(1,4)\succ s_1s_3^3$. Then \begin{align*} \beta & \succ s_1{\color{blue}{s_3^3s_2s_3}}s_1^2s_2s_1^2s_2 \stackrel{\textrm{R3}}{=} s_1s_2{\color{teal}{s_3}}s_2^3s_1^2s_2s_1^2s_2 \stackrel{\textrm{R1}}{=} s_1s_2^4s_1^2{\color{teal}{s_2s_1^2s_2}} \\ & \stackrel{\rho}{=} s_2s_1^2s_2s_1s_2^4s_1^2 \stackrel{\textrm{R3}}{=} s_2s_1^2s_1^4s_2s_1s_1^2 = s_2s_1^6s_2s_1^3. \end{align*} This is again the $\mathrm{E}_9$ quiver. For ($g$), we have $w_1(1,4)\succeq s_1s_4s_3s_3s_4$. Then \begin{align*} \beta &\succeq s_1{\color{teal}{s_4}}s_3^2{\color{teal}{s_4}}s_2s_1^2s_3s_2s_1^2s_2 \stackrel{\textrm{c},\rho}{=} s_1s_3^2s_2s_1^2{\color{blue}{s_4s_3s_4}}s_2s_1^2s_2 \stackrel{\textrm{R3}}{=} s_1s_3^2s_2s_1^2s_3{\color{red}{s_4}}s_3s_2{\color{red}{s_1^2}}s_2 \\ &{\succ} s_1s_3^2s_2s_1^2{\color{teal}{s_3}}s_3s_2s_2 \stackrel{\textrm{c}}{=} s_1s_3{\color{blue}{s_3s_2s_3}}s_1^2s_3s_2s_2 \stackrel{\textrm{R3}}{=} s_1s_3s_2{\color{red}{s_3}}s_2{\color{red}{s_1}}s_1s_3s_2s_2 \\ &{\succ} s_1s_3s_2^2s_1s_3s_2^2. \end{align*} We complete the proof of Proposition \ref{Mainquiver}. \begin{cor} For positive braids $[\beta]$ with connected $Q_\beta$, the two cases in Proposition \ref{Mainquiver} coincide with the dichotomy between finite and infinite types for positive braids. \end{cor} \begin{proof} It follows from Proposition \ref{Mainquiver} and Proposition \ref{3.4}. \end{proof} \noindent \textit{Proof of Theorem \ref{MainStep1} for disconnected $Q_\beta$.} Suppose $Q_\beta$ has two components.
Because vertices on the same level are connected, there exists a unique $1\leq i<n$ such that no arrow appears between levels $i$ and $i+1$. We consider $\beta(1,i)$ and $\beta(i+1,n-1)$. Since we can pinch some crossings of $\beta$ to obtain $\beta(1,i)$ and $\beta(i+1,n-1)$, if one of them has infinitely many admissible fillings, so does $\beta$ by Proposition \ref{3.2}. Otherwise by Propositions \ref{Mainquiver} (1) and \ref{3.4}, both $Q_{\beta(1,i)}$ and $Q_{\beta(i+1,n-1)}$ are mutation equivalent to finite type quivers, and hence $[\beta]$ is of finite type. In general, we can induct on the number of components in the quiver of the braid. \qed \section{Introduction} To this day, the only complete classification of exact Lagrangian fillings of a given Legendrian knot is that of the unknot with maximal Thurston-Bennequin number, which admits a unique filling \cite{EP}. Most subsequent work focuses on giving a lower bound on the number of distinct fillings, which is typically achieved by constructing fillings explicitly and distinguishing them using an invariant. Known constructions of fillings include (1) decomposable Lagrangian fillings \cite{EHK12}, (2) conjugate Lagrangians for alternating Legendrians \cite{STWZ}, and (3) free Legendrian weaves \cite{TZ,CZ}. The invariants used to distinguish these fillings include augmentations \cite{EHK12, Pan_2017} and microlocal sheaves \cite{STWZ}. The question of whether there exists a Legendrian link admitting infinitely many exact Lagrangian fillings remained open until the year 2020. Several methods emerged concurrently and each successfully solved the problem for a class of Legendrian links. \begin{itemize} \item \cite{CasalsGao} Any torus $(n,m)$-link except $(2,m), (3,3), (3,4), (3,5)$ admits infinitely many Lagrangian fillings. The proof uses Legendrian loops, microlocal sheaves, and cluster algebras.
\item \cite{CZ} The closure of the braid $\left[\beta_{p,q}\right]\in \mathsf{Br}_3^+$ admits infinitely many Lagrangian fillings, where $$\beta_{p,q}=(s_1^3s_2)(s_1^3s_2^2)^ps_1^3s_2(s_2^2s_1^3)^q(s_2s_1^3)(s_2^{q+1}s_1^2s_2^{p+2}) , \quad p,q\in \mathbb{N}_{\geq 1}.$$ The proof uses Legendrian weaves, sheaves, and cluster algebras. \item The upcoming work \cite{CN} shows that certain Legendrian links (not necessarily positive braid links) have Legendrian loops of infinite order, using Legendrian contact dga. \end{itemize} In this paper, we solve the infinitely many fillings problem for positive braid links using the cluster structure on augmentation varieties \cite{GSW} and Donaldson-Thomas transformations of cluster varieties \cite{GS2, SWflag}. \begin{Mthm}\label{Main} If a positive braid Legendrian link is not Legendrian isotopic to a split union of unknots and connect sums of standard $\mathrm{ADE}$ links, then it admits infinitely many non-Hamiltonian-isotopic exact Lagrangian fillings. \end{Mthm} An $n$-strand braid word $\beta$ is a finite sequence of the letters $s_1, \ldots, s_{n-1}$. Every braid word $\beta$ determines a quiver $Q_\beta$ by the following equivalent constructions: wiring diagrams \cite{FZ, BFZ}, amalgamation \cite{FGamalgamation}, and brick diagrams \cite{Rudolph,BLL}. In detail, $Q_\beta$ can be constructed by the following steps. \begin{enumerate} \item Plot $\beta$ and replace each crossing by a vertical bar $\mathsf{I}$. We get a ``wall of bricks''. \item Draw a vertex at each compact brick. \item For any two adjacent bricks on the same level, draw a rightward horizontal arrow connecting them. For any two adjacent bricks forming a Z or S pattern, draw a leftward arrow connecting them. \end{enumerate} A quiver is of \emph{infinite type} if it has infinitely many isomorphism classes of indecomposable representations; otherwise it is of \emph{finite type}.
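The three-step construction above is readily mechanized. The sketch below is our illustration (the function name and the interleaving test encoding the ``Z or S pattern'' are ours, and arrow directions are dropped, which suffices for identifying the underlying Dynkin diagram): bricks on level $i$ are the gaps between consecutive occurrences of $s_i$, and two bricks are connected when they share a wall on the same level or when their walls interleave on adjacent levels.

```python
# Sketch of the brick-diagram construction of Q_beta described above.
# A braid word is given as a list of letter indices i (standing for s_i).
# Bricks on level i are the gaps between consecutive occurrences of s_i;
# the regions before the first and after the last occurrence are
# non-compact and carry no vertex. We record undirected adjacencies only:
# same-level neighbours sharing a wall, and pairs on adjacent levels
# whose walls interleave (the "Z or S pattern").

def brick_quiver(word):
    """Return (vertices, edges) of the brick quiver of a braid word."""
    positions = {}
    for pos, letter in enumerate(word):
        positions.setdefault(letter, []).append(pos)
    # one vertex per compact brick, labelled (level, left wall, right wall)
    vertices = [(level, left, right)
                for level, occ in positions.items()
                for left, right in zip(occ, occ[1:])]
    edges = set()
    for (i, p, q) in vertices:
        for (j, r, s) in vertices:
            share_wall = (i == j and q == r)                  # same level
            interleave = (j == i + 1 and (p < r < q < s or r < p < s < q))
            if share_wall or interleave:
                edges.add(((i, p, q), (j, r, s)))
    return vertices, edges
```

For instance, `brick_quiver([1, 2, 1, 2])` returns two vertices joined by one edge (an $\mathrm{A}_2$ quiver), while the word $s_1^3s_2s_1^3s_2$ yields six vertices whose underlying graph is the $\mathrm{E}_6$ diagram.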
A \emph{Dynkin quiver} is a quiver whose underlying undirected graph is one of the Dynkin diagrams: $\mathrm{A}_r,\mathrm{D}_r, \mathrm{E}_6, \mathrm{E}_7, \mathrm{E}_8$. By a theorem of Gabriel \cite{Gab}, a quiver is of finite type if and only if it is a disjoint union of Dynkin quivers. \begin{exmp} The braid word $s_1^{r}s_2s_1^3s_2$ gives rise to an $\mathrm{E}_{r+3}$ quiver. For example, an $\mathrm{E}_9$ quiver is obtained as follows: \[ \begin{tikzpicture} \draw [gray] (0,0) -- (12,0); \draw [gray] (0,0.8) -- (12,0.8); \draw [gray] (0,1.6) -- (12,1.6); \draw [gray] (1,0.8) -- (1, 1.6); \draw [gray] (2,0.8) -- (2, 1.6); \draw [gray] (3,0.8) -- (3, 1.6); \draw [gray] (4,0.8) -- (4, 1.6); \draw [gray] (5,0.8) -- (5, 1.6); \draw [gray] (6,0.8) -- (6, 1.6); \draw [gray] (7,0) -- (7, 0.8); \draw [gray] (8,0.8) -- (8, 1.6); \draw [gray] (9,0.8) -- (9, 1.6); \draw [gray] (10,0.8) -- (10, 1.6); \draw [gray] (11,0) -- (11, 0.8); \node (1) at (1.5,1.2) [] {$\bullet$}; \node (2) at (2.5,1.2) [] {$\bullet$}; \node (3) at (3.5,1.2) [] {$\bullet$}; \node (4) at (4.5,1.2) [] {$\bullet$}; \node (5) at (5.5,1.2) [] {$\bullet$}; \node (6) at (8,0.4) [] {$\bullet$}; \node (7) at (7,1.2) [] {$\bullet$}; \node (8) at (8.5,1.2) [] {$\bullet$}; \node (9) at (9.5,1.2) [] {$\bullet$}; \draw [->, thick] (1) -- (2); \draw [->, thick] (2) -- (3); \draw [->, thick] (3) -- (4); \draw [->, thick] (4) -- (5); \draw [->, thick] (5) -- (7); \draw [->, thick] (7) -- (8); \draw [->, thick] (8) -- (9); \draw [->, thick] (6) -- (7); \end{tikzpicture} \] \end{exmp} The equivalence class $[\beta]$ of braid words $\beta$ modulo the relations $s_is_js_i=s_js_is_j$ for $|i-j|=1$ and $s_is_j=s_js_i$ for $|i-j|\geq 2$ is a positive braid. The collection $\mathsf{Br}_n^+$ of $n$-strand positive braids under juxtaposition forms a semigroup. It is known that different braid words of the same braid yield mutation equivalent quivers (see \cite{FZI} for the definition of quiver mutations). 
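Quiver mutation in the sense of \cite{FZI} acts on the skew-symmetric exchange matrix $b_{ij}$ (the number of arrows $i\to j$ minus the number of arrows $j\to i$). As a minimal sketch of the standard Fomin-Zelevinsky rule (our illustration, not code from the references):

```python
# Fomin-Zelevinsky mutation of a skew-symmetric exchange matrix B at
# vertex k:  b'_{ij} = -b_{ij} if i = k or j = k, and otherwise
# b'_{ij} = b_{ij} + (|b_{ik}| b_{kj} + b_{ik} |b_{kj}|) / 2.
# The correction term is always even, so integer division is exact.

def mutate(B, k):
    """Return the exchange matrix obtained by mutating B at vertex k."""
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j]
                             + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]
```

For the linear $\mathrm{A}_3$ quiver $1\to 2\to 3$, mutating at the middle vertex produces an oriented $3$-cycle, and mutating there again recovers the original quiver, illustrating that mutation is an involution and hence generates an equivalence relation.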
We say a positive braid $[\beta]$ is of \emph{finite type} if its associated quiver $Q_\beta$ is mutation equivalent to a quiver of finite type; otherwise it is of \emph{infinite type}. The link closure of a positive braid $[\beta]$ admits a unique Legendrian representative $\Lambda_\beta$ with maximal Thurston-Bennequin number by the rainbow closure construction \cite{EV}. \begin{defn} \label{stad.links} For each of the Dynkin diagrams, we define its \emph{standard link} to be the rainbow closure of the corresponding positive braid word below. \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \rule{0pt}{2.3ex} $\mathsf{Br}_2^+$ & \multicolumn{4}{c|}{$\mathsf{Br}_3^+$} \\[0.045cm] \hline \rule{0pt}{2.3ex} $\mathrm{A}_r$ & $\mathrm{D}_r$ & $\mathrm{E}_6$ & $\mathrm{E}_7$ & $\mathrm{E}_8$ \\[0.045cm] \hline \rule{0pt}{2.3ex} $s_1^{r+1}$ & $ s_1^{r-2}s_2s_1^2s_2$ & $s_1^3s_2s_1^3s_2$ & $s_1^4s_2s_1^3s_2$ & $s_1^5s_2s_1^3s_2$ \\[0.045cm] \hline \end{tabular} \end{center} \end{defn} \medskip The infinitely many fillings problem refers to whether there exists a Legendrian link admitting infinitely many Hamiltonian isotopy classes of exact Lagrangian fillings. We solve it by explicitly constructing infinitely many \emph{admissible fillings}. An {admissible} filling/cobordism/concordance is by definition a composition of saddle cobordisms, cyclic rotations, braid moves (R3), and minimum cobordisms (\cite[Definition 1.3]{GSW}). \smallskip Here is a summary of several key results needed for the proof of Main Theorem \ref{Main}. In Section \ref{sec 2}, we prove that if $Q_\beta$ is acyclic and of infinite type, then $\Lambda_\beta$ admits infinitely many admissible fillings (Corollary \ref{rainbowinfinite}). Building upon this result, we prove in Section \ref{sec 3} that if a positive braid $[\beta]$ is of infinite type, then $\Lambda_\beta$ admits infinitely many admissible fillings (Theorem \ref{MainStep1}).
In Section \ref{sec 4}, we prove that if a positive braid $[\beta]$ is of finite type, then $\Lambda_\beta$ is Legendrian isotopic to a split union of unknots and connect sums of standard $\mathrm{ADE}$ links (Theorem \ref{4.6}). Main Theorem \ref{Main} follows from the dichotomy between finite and infinite types for positive braids. We discuss further applications in Section \ref{sec.5.app}. \medskip \noindent \textbf{Acknowledgement.} We thank everyone acknowledged in \cite{GSW} for their support on the overarching project. In particular, we would like to thank Roger Casals, Bernhard Keller, and Eric Zaslow for useful discussions and advice on references.
\section{Introduction} The $sp\left(2M\right)$ invariance of the higher-spin (HS) field multiplet was first proposed in \cite{Fronsdal:1985pd}. The idea that HS theories should admit a description in a larger manifestly $sp(2M)$ invariant space-time is as natural as the idea to describe supersymmetric theories in superspace. Formulations of HS theories in $sp\left(2M\right)$ invariant (super)spaces have been widely elaborated (see \cite{Bandos:1999qf,Bandos:1999pq,Vasiliev:2001zy,Vasiliev:2001dc,Vasiliev:2002fs,Bandos:2002te,Didenko:2003aa,PluSorTsu,spspace,BanPasSorTon,BanBekAzSorTsu,IvLuk,Iv,theta,sph,twistors,Sorokin} and references therein). In this setup, free massless HS bosonic and fermionic fields are described \cite{Vasiliev:2001zy,Vasiliev:2001dc} by a scalar field $C\left(X\right)$ and a spinor field $C_A\left(X\right)$, respectively, in the generalized space-time $\mathcal{M}_M$ with local coordinates $X^{AB}=X^{BA}$, $A,B = 1\ldots M$. Conserved charges corresponding to conformal and higher symmetries were constructed in \cite{Vasiliev:2002fs} (see also \cite{theta,rank,GSV,ads}). The unfolded dynamics approach to the $sp(2M)$ invariant equations was first considered in \cite{Vasiliev:2001zy} and later extended in \cite{theta,sph,twistors} to conserved currents and charges. As usual in unfolded dynamics, to this end the generalized space-time $\mathcal{M}_M$ is extended by auxiliary twistor-like spinor variables $Y^A$ to $\mathcal{M}_M\times\mathbb{R}^M$.
The variables $Y^A$ together with derivatives $\frac{\partial}{\partial Y^A}$ form the Heisenberg algebra $H_M$ \cite{sph} \begin{equation} \left[\dfrac{\partial}{\partial Y^A},Y^B\right] = \delta_A^B\, \end{equation} while the bilinears of oscillators \begin{equation} P_{AB} = \dfrac{\partial}{\partial Y^A}\dfrac{\partial}{\partial Y^B},\qquad K^{AB} = Y^A Y^B, \qquad L_A{}^B = \dfrac{1}{2}\left(Y^A\dfrac{\partial}{\partial Y^B} + \dfrac{\partial}{\partial Y^B} Y^A\right) \end{equation} form $sp\left(2M\right)$. Here $P_{AB}$ and $K^{AB}$ represent generalized translations and special conformal transformations. The $gl(M)$ subalgebra spanned by $L_A{}^B$ decomposes into the generalized dilatation generator $D = L_A{}^A$ and $sl(M)$ representing generalized Lorentz transformations generated by $l_A{}^B = L_A{}^B - \frac{1}{M}\delta_A^B\, D$ \cite{Vasiliev:2001zy,Vasiliev:2002fs}. In this paper we construct a complete set of conserved charges in the case of periodic coordinates $Y^A$. Thus, the full space where fields live is $\mathcal{M}_M\times T^M$. The analogous problem for the non-compact twistor space was considered in \cite{twistors}. A complete set of non-trivial conserved charges is constructed, together with the residual global symmetries they correspond to. Conserved charges are represented as integrals of closed forms independent of local variations of the integration cycle. Despite considerable similarity with the non-compact case, periodicity in the twistor-like variables causes a number of peculiarities. One is that, since toric geometry allows inequivalent cycles that are not contractible to each other, the corresponding integrations give different sets of conserved charges. An interesting output of this paper is that, nevertheless, the latter are related to each other by some HS transformation. The complete set of charges can be obtained starting from some elementary cycle in the spinor space.
Charges associated with other cycles result from those associated with the elementary cycle by virtue of higher symmetries. In fact, this implies that HS symmetries can affect the topology of cycles. The global symmetry compatible with the periodicity in $Y$ is represented by an infinite-dimensional Lie algebra generated by basis elements $\mathrm{T}^r_{\left(\xi,n\right)}$ with $\xi^A\in\left[0,2\pi\right)$, $n_B\in\mathbb{Z}$ and $r=0,1$ obeying the following commutation relations \begin{equation}\label{eq:residual_Lie} \left[\mathrm{T}^q_{\left(m,\xi\right)},\mathrm{T}^r_{\left(n,\zeta\right)}\right] = \mathrm{T}^{\left|q+r\right|_2}_{\left(\left(-\right)^r m+n,\left(-\right)^r \xi+\zeta\right)}e^{i\left(-\right)^r \left(m_C\zeta^C - n_C\xi^C\right)} - \mathrm{T}^{\left|q+r\right|_2}_{\left(m+\left(-\right)^q n,\xi+\left(-\right)^q\zeta\right)}e^{-i\left(-\right)^q \left(m_C\zeta^C - n_C\xi^C\right)}, \end{equation} where $\left|q+r\right|_2:=\left(q + r\right)\mod 2$. The subalgebra of \eqref{eq:residual_Lie} with $q=r=0$ obeys the commutation relations \begin{equation}\label{eq:residual_Lie_subalgebra} \left[\mathrm{T}^0_{\left(m,\xi\right)},\mathrm{T}^0_{\left(n,\zeta\right)}\right] = 2i\,\sin\left(m_C\zeta^C - n_C\xi^C\right)\, \mathrm{T}^0_{\left(m+n,\xi+\zeta\right)} \end{equation} and is somewhat analogous to the sine algebra introduced in \cite{Fairley}, which is reproduced at $\xi,\zeta\in\mathbb{Z}^M$.
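As a consistency check, the bracket \eqref{eq:residual_Lie_subalgebra} indeed defines a Lie algebra: its structure constants $2i\sin\left(m_C\zeta^C - n_C\xi^C\right)$ are built from an antisymmetric pairing of the labels, and the Jacobi identity reduces to a trigonometric identity. The sketch below (our illustration, not part of the original derivation) verifies this symbolically for $M=1$; the coefficient of $\mathrm{T}^0$ in the cyclic sum of double brackets is proportional to the expression computed.

```python
# Symbolic check (M = 1) that the bracket
#   [T_u, T_v] = 2i sin(w(u, v)) T_{u+v},  w((m, xi), (n, zeta)) = m*zeta - n*xi,
# satisfies the Jacobi identity: up to an overall factor (2i)^2, the
# coefficient of T_{u+v+t} in the cyclic sum
#   [[T_u, T_v], T_t] + [[T_v, T_t], T_u] + [[T_t, T_u], T_v]
# is the expression `jacobi` below, which must vanish identically.
import sympy as sp

m, n, p, xi, zeta, chi = sp.symbols('m n p xi zeta chi')

def w(u, v):
    # antisymmetric pairing of labels (m, xi); expanded so that
    # expand_trig can fully decompose the sine arguments
    return sp.expand(u[0] * v[1] - v[0] * u[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, v, t = (m, xi), (n, zeta), (p, chi)

jacobi = (sp.sin(w(u, v)) * sp.sin(w(add(u, v), t))
          + sp.sin(w(v, t)) * sp.sin(w(add(v, t), u))
          + sp.sin(w(t, u)) * sp.sin(w(add(t, u), v)))

assert sp.expand(sp.expand_trig(jacobi)) == 0
```

The cancellation uses only the bilinearity and antisymmetry of the pairing, so the same check goes through verbatim for any $M$ once the products $m_C\zeta^C$ are treated as the atomic variables.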
Analogously to \cite{Fairley}, relations \eqref{eq:residual_Lie} admit the oscillator representation \begin{equation} \mathrm{T}^r_{\left(n,\xi\right)}\left(k;v\right) = K^r\star e^{i\xi\, k + in\,v}, \end{equation} $k\in\mathbb{Z}^M$ and $v^C\in\left[0,2\pi\right)$, with respect to the Moyal-like star product \begin{multline} \left(f \star g\right)\left(k;v\right) = \dfrac{1}{\left(2\pi\right)^{2M}}\sum_{m,n\in\mathbb{Z}^M}\int_0^{2\pi}\mathrm{d}^Mu\,\mathrm{d}^Mw \cdot\\\cdot f\left(k+m;v+u\right)g\left(k+n;v+w\right)\exp\left[i\left(m_C w^C - n_C u^C \right)\right], \end{multline} acting on functions $f\left(k;v\right) = \sum_N f_N\left(k\right)\,e^{iN_C v^C}$ with half of the arguments discrete and the other half periodic. The Klein operator $K$ (see e.g. \cite{NonLinHSmanual}) is defined to fulfill the following properties \begin{equation} K\star K = 1,\quad K\star f\left(k;v\right) = f\left(-k;-v\right)\star K. \end{equation} Note that $\left[k_C,e^{iN_B v^B}\right]_\star = -2N_C\,e^{iN_B v^B}$, where $\left[f,g\right]_\star \equiv f\star g - g\star f$. The Riemann theta-function \cite{Mumford} \begin{equation} \Theta\left(Y\middle| X\right) = \sum_{n\in\mathbb{Z}^M} \,\exp\left[i\pi\,n_A X^{AB} n_B +2\pi i\,n_A Y^A\right] \end{equation} can be interpreted as an evolution operator (or $\mathcal{D}$-function) for fields propagating in the $X,Y$ space with periodic $Y$-variables \cite{theta}. The connection with field theory may lead to an alternative interpretation of some of the important theta-function identities \cite{Mumford,KZ} (see also \cite{theta}) and, the other way around, to applications of the apparatus of toric geometry to field theory. Consideration of the periodic twistor-like space can open a way toward a uniform description of black holes in various dimensions. Black-hole solutions such as the Schwarzschild, Kerr-Newman and Reissner-Nordström solutions in asymptotically flat or $AdS$ space-time, or the BTZ black hole in $3d$, are exact solutions of theories of gravity.
Moreover, the BTZ black hole is locally $AdS_3$, resulting from quotienting the $AdS_3$ group $\mathrm{O}\left(2,2\right)$ by some discrete subgroup \cite{BTZ}. Hence it is described by an $AdS_3$ flat connection obeying certain periodic boundary conditions. Such a construction gives a hint that black holes in various dimensions may be constructed with the aid of BTZ-like solutions in $\mathcal{M}_M\times T^M$ for properly chosen flat connections in the unfolded equations. Periodicity of solutions in the generalized space-time $\mathcal{M}_M$ is induced by periodicity in the spinor variables via unfolded dynamics, and black holes themselves could perhaps be obtained as projections of the aforementioned ``flat'' solutions onto surfaces in $\mathcal{M}_M$ representing the usual space-time. This conjecture provides the main motivation for the problem addressed in this paper, opening a vast area for further research. The rest of the paper is organized as follows. In Section $2$ the main ingredients, such as fields and currents, their unfolded equations of motion and symmetry transformations, are introduced following the non-compact case \cite{twistors}. In Section $3$ the periodicity conditions on the twistor-like variables are imposed. In Section $4$ the construction of conserved charges is described, and the on-shell current cohomology along with the full set of non-zero conserved charges are presented. In Section $5$ it is shown that charges resulting from integration over non-homotopic cycles turn out to be related by the action of HS symmetry. In Section $6$ conserved charges are represented as symmetry generators acting on quantized fields, and the full symmetry of dynamics in the periodic spinor space is formulated in terms of an infinite-dimensional Lie algebra. In the Conclusion some peculiar features of symmetries in the periodic spinor space are discussed.
\section{Fields and currents} \subsection{Fields} As shown in \cite{Vasiliev:2001dc}, (infinite towers of) conformal fields in various dimensions ($d\geq 4$) can be conveniently described in terms of the generalized space-time $\mathcal{M}_M$ with symmetric real matrix coordinates $X^{AB} = X^{BA}$ ($A,B = 1...M$). For the unfolded formulation $\mathcal{M}_M$ is extended by auxiliary twistor-like variables $Y^A$ spanning $\mathbb{R}^M$ \cite{Vasiliev:2001zy}. Conformal fields are described by scalar functions $C^\pm\left(Y\middle| X\right)$ obeying the rank-one unfolded equations \cite{twistors,rank} \begin{equation}\label{eq:unfolded_r1} \left(\dfrac{\partial}{\partial X^{AB}} \pm i\dfrac{\partial ^2}{\partial Y^A \partial Y^B}\right)C^\pm\left(Y\middle| X\right) = 0. \end{equation} Equation \eqref{eq:unfolded_r1} expresses the covariant constancy condition with the flat connection \begin{equation} W^\pm\left(Y,\partial_Y\middle| X\right) = \pm i\,\mathrm{d} X^{AB}\dfrac{\partial^2}{\partial Y^A \partial Y^B}. \end{equation} Bosons are described by even functions ($\C{\pm}{-Y}{X} = \C{\pm}{Y}{X}$) while fermions are described by odd ones ($\C{\pm}{-Y}{X} = -\C{\pm}{Y}{X}$) \cite{twistors}. The unfolded formulation is useful in many respects. In particular, equation \eqref{eq:unfolded_r1} reconstructs the $X$-dependence from a given function $\C{\pm}{Y}{0}$, \begin{equation} \C{\pm}{Y}{X} = \exp\left[\mp i X^{AB}\dfrac{\partial^2}{\partial Y^A \partial Y^B}\right] \C{\pm}{Y}{0}. \end{equation} Fourier decomposition of $\C{\pm}{Y}{0}$ gives the following representation for the general solution of \eqref{eq:unfolded_r1} \begin{equation}\label{eq:solution_r1} \C{\pm}{Y}{X} = \int \mathrm{d}^M\xi\,c^{\pm}\left(\xi\right)\,\exp\left[\pm i\left(\xi_A \xi_B X^{AB} + \xi_B Y^B\right)\right] \end{equation} with the elementary solutions \begin{equation}\label{eq:basis_r1} \basis{\pm}{\xi}{Y}{X} = \exp\left[\pm i\left(\xi_A \xi_B X^{AB} + \xi_B Y^B\right)\right].
\end{equation} \noindent As explained in \cite{theta,twistors} (see also \cite{Vasiliev:2001dc}), the superscript $\pm$ in \eqref{eq:solution_r1} distinguishes between positive- and negative-frequency modes corresponding to particles and antiparticles upon quantization. The two modes are complex conjugate to each other \begin{equation} \overline{\C{+}{Y}{X}} = \C{-}{Y}{X}\quad \Longleftrightarrow \quad \overline{c^+\left(\xi\right)} = c^-\left(\xi\right). \end{equation} Another useful feature of the unfolded formulation is that it allows one to describe symmetries of a system in a regular way. Namely, consider a transformation \begin{equation} \C{\pm}{Y}{X}\rightarrow \eta\left(Y,\partial_Y\middle| X\right) \C{\pm}{Y}{X}. \end{equation} To be a symmetry, $\eta\left(Y,\partial_Y\middle| X\right)$ should commute with the differential operator on the \textit{lhs} of \eqref{eq:unfolded_r1}, \begin{equation}\label{eq:unfolded_symmetry_r1} \left[\dfrac{\partial}{\partial X^{AB}} \pm i\dfrac{\partial^2}{\partial Y^A \partial Y^B}, \eta\left(Y,\partial_Y\middle| X\right)\right] = 0. \end{equation} Condition \eqref{eq:unfolded_symmetry_r1} is formally consistent since the connection in \eqref{eq:unfolded_r1} is flat. The first-order differential operators \begin{equation}\label{eq:covar_oscillators} \mathcal{A}_{\pm}{}^C\left(Y\middle|X\right) = Y^C \mp 2i X^{CB}\dfrac{\partial}{\partial Y^B}, \quad \mathcal{B}^{\pm}{}_{C}\left(Y\middle| X\right) = \dfrac{\partial}{\partial Y^C} \end{equation} verify \eqref{eq:unfolded_symmetry_r1}. Each pair $\mathcal{A}_+{}^C,\mathcal{B}^+{}_{C}$ and $\mathcal{A}_-{}^C,\mathcal{B}^-{}_{C}$ obeys the Heisenberg algebra $H_M$ \cite{sph} \begin{equation}\label{eq:Heisenberg} \def\arraystretch{1.5} \begin{array}{c} \left[\mathcal{B}^\pm{}_{A},\mathcal{A}_{\pm}{}^B\right] = \delta^B_A\,,\qquad \left[\mathcal{A}_\pm{}^B,\mathcal{A}_\pm{}^C\right]=0,\quad \left[\mathcal{B}^\pm{}_{B},\mathcal{B}^\pm{}_{C}\right] = 0\,.
\end{array} \end{equation} Since operators \eqref{eq:covar_oscillators} are covariantly constant, they will be referred to as \textit{covariant oscillators}\footnote{Here and below the notation for covariant oscillators follows \cite{twistors}.}. Any function of covariant oscillators $\eta\left(\mathcal{A}_\pm;\mathcal{B}^\pm\right)$ is a solution of \eqref{eq:unfolded_symmetry_r1} and hence is a symmetry of \eqref{eq:unfolded_r1}. Covariant oscillators act on \eqref{eq:basis_r1} as follows \begin{equation}\label{eq:covar_oscillators_action} \def\arraystretch{1.7} \begin{array}{l} \mathcal{B}^\pm{}_{C} \,\theta^\pm_\xi = \pm i\xi_C \,\theta^\pm_\xi,\qquad \mathcal{A}_{\pm}^C \,\theta^\pm_\xi = \mp i\dfrac{\partial}{\partial \xi_C} \,\theta^\pm_\xi. \end{array} \end{equation} Exponentiation of \eqref{eq:covar_oscillators_action} gives \begin{equation}\label{eq:covar_oscillators_action_exp} \def\arraystretch{1.7} \begin{array}{l} \exp\left[\pm i\zeta_C\,\mathcal{A}_{\pm}{}^C\right] \,\theta^\pm_\xi = \theta^\pm_{\xi + \zeta}, \end{array} \end{equation} allowing one to generate the whole basis \eqref{eq:basis_r1} from a single vacuum vector $\theta_0:=\basis{\pm}{0}{Y}{X} = 1$, \begin{equation} \def\arraystretch{1.5} \begin{array}{l} \mathcal{B}^\pm{}_{C} \,\theta_0 = 0,\qquad \exp\left[\pm i\xi_C\,\mathcal{A}_{\pm}^C\right] \,\theta_0 = \theta^\pm_{\xi}. \end{array} \end{equation} Any solution to \eqref{eq:unfolded_r1} can thus be written as \begin{equation} \C{\pm}{Y}{X} = \int \mathrm{d}^M\xi\,c^{\pm}\left(\xi\right)\,\exp\left[\pm i\xi_C\,\mathcal{A}_{\pm}^C\right] \,\theta_0. \end{equation} The result of the action of a symmetry transformation can be represented as \begin{equation} \eta\left(\mathcal{A}_\pm;\mathcal{B}^\pm\right)\,\C{\pm}{Y}{X} = \int \mathrm{d}^M\xi\,c^\pm\left(\xi\right)\, \eta\left(\mp i\partial_{\xi};\pm i\xi\right)\,\basis{\pm}{\xi}{Y}{X}\,.
\end{equation} Evolution of a particular field configuration in $X$-variables from $\C{\pm}{Y}{X^\prime}$ to $\C{\pm}{Y}{X}$ is given by a $\mathcal{D}$-function via the following transformation \cite{twistors} \begin{equation} \C{\pm}{Y}{X} = \int \mathrm{d}^M Y^\prime\,\mathcal{D}^\pm\left(Y-Y^\prime\middle| X - X^\prime\right)\,\C{\pm}{Y^\prime}{X^\prime}\,. \end{equation} The $\mathcal{D}$-function is a solution to \eqref{eq:unfolded_r1} with the $\delta$-functional initial data \begin{equation} \left(\dfrac{\partial}{\partial X^{AB}} \pm i\dfrac{\partial^2}{\partial Y^A \partial Y^B}\right)\mathcal{D}^\pm\left(Y-Y^\prime\middle| X - X^\prime\right) = 0,\quad \mathcal{D}^\pm\left(Y-Y^\prime\middle| 0\right) = \delta\left(Y - Y^\prime\right). \end{equation} Hence, \begin{equation} \mathcal{D}^\pm\left(Y\middle| X\right) = \dfrac{1}{\left(2\pi\right)^M}\int \mathrm{d}^M\xi\,\basis{\pm}{\xi}{Y}{X}. \end{equation} \subsection{Currents} Doubling of spinor variables leads to a \textit{rank-two unfolded equation} \cite{rank}. \begin{equation}\label{eq:unfolded_r2} \left(\dfrac{\partial}{\partial X^{AB}} + i \dfrac{\partial^2}{\partial Y_1^A \partial Y_1^B} - i\dfrac{\partial^2}{\partial Y_2^A \partial Y_2^B}\right) \J{Y_1,Y_2}{X} = 0. \end{equation} Its solutions $\J{Y_1,Y_2}{X}$ are called \textit{current fields} or simply \textit{currents}. They describe conserved currents in the unfolded formulation. The flat connection \begin{multline} W^{(2)}\left(Y_{1,2},\partial_{Y_{1,2}}\middle| X \right) = i\,\mathrm{d} X^{AB}\left( \dfrac{\partial^2}{\partial Y_1^A \partial Y_1^B} - \dfrac{\partial^2}{\partial Y_2^A \partial Y_2^B}\right) = W^+\left(Y_{1},\partial_{Y_{1}}\middle| X \right) + W^-\left(Y_{2},\partial_{Y_{2}}\middle| X \right) \end{multline} is the sum of flat connections for positive- and negative-frequency modes of \eqref{eq:unfolded_r1}. 
Hence bilinears of rank-one fields \begin{equation}\label{eq:bilinear} \J{Y_{1,2}}{X} = \C{+}{Y_1}{X}\C{-}{Y_2}{X}\,, \end{equation} called \textit{bilinear currents}, verify \eqref{eq:unfolded_r2}. The straightforward generalization of currents \eqref{eq:bilinear} via extension of the set of rank-one fields \eqref{eq:solution_r1} by a color index $\mathrm{i}=1...\mathcal{N}$ \begin{equation}\label{eq:bilinear_color} \J{Y_{1,2}}{X} = \sum_{\mathrm{i}=1}^{\mathcal{N}}\Col{i}{+}{Y_1}{X}\Col{i}{-}{Y_2}{X}, \end{equation} plays the central role in the $AdS\slash CFT$ correspondence (see \textit{e.g.} \cite{KlebPol,Giombi:2012ms}) and was considered for the non-compact twistor-like space \textit{e.g.} in \cite{twistors}. Since it does not play any role in this paper and can be easily reinserted at any moment, we will not consider it in the sequel. The bilinear current \eqref{eq:bilinear} is a particular case of a more general current field \begin{equation} \label{eq:bilinear_general} \J[\eta]{Y_{1,2}}{X} = \eta\left(Y_{1,2},\partial_{Y_{1,2}}\middle| X\right)\C{+}{Y_1}{X}\C{-}{Y_2}{X}, \end{equation} where $\eta\left(Y_{1,2},\partial_{Y_{1,2}}\middle| X\right)$ is a symmetry of \eqref{eq:unfolded_r2}. Analogously to the rank-one case, $\eta$ commutes with the covariant differential of \eqref{eq:unfolded_r2} \begin{equation}\label{eq:unfolded_symmetry_r2} \left[\dfrac{\partial}{\partial X^{AB}} + i\dfrac{\partial^2}{\partial Y_1^A \partial Y_1^B} -i\dfrac{\partial^2}{\partial Y_2^A \partial Y_2^B}, \eta\left(Y_{1,2},\partial_{Y_{1,2}}\middle| X\right)\right] = 0.
\end{equation} Covariant oscillators \eqref{eq:covar_oscillators} \begin{equation}\label{eq:covar_oscillators_r2} \mathcal{A}_+{}^C\left(Y_1\middle| X\right),\mathcal{B}^+{}_{C}\left(Y_1\middle| X\right)\quad \text{and}\quad \mathcal{A}_-{}^C\left(Y_2\middle| X\right),\mathcal{B}^-{}_{C}\left(Y_2\middle| X\right) \end{equation} verify \eqref{eq:unfolded_symmetry_r2}, hence any function $\eta\left(\mathcal{A}_{+},\mathcal{A}_{-};\mathcal{B}^{+},\mathcal{B}^{-}\right)$ is a symmetry of \eqref{eq:unfolded_r2} and the most general form of a bilinear current is \cite{twistors} \begin{equation} \label{eq:bilinear_general_oscillators} \J[\eta]{Y_{1,2}}{X} = \eta\left(\mathcal{A}_{+},\mathcal{A}_{-};\mathcal{B}^{+},\mathcal{B}^{-}\right)\C{+}{Y_1}{X}\C{-}{Y_2}{X}. \end{equation} The action of covariant oscillators on the rank-two basis vectors $\basis{+}{\xi}{Y_1}{X}\basis{-}{\zeta}{Y_2}{X}$ \begin{equation} \def\arraystretch{1.5} \begin{array}{ll} \mathcal{B}^+{}_{C}\,\theta^+_\xi\theta^-_\zeta = i\xi_C\,\theta^+_\xi\theta^-_\zeta, & \mathcal{B}^-{}_{C}\,\theta^+_\xi\theta^-_\zeta = -i\zeta_C\,\theta^+_\xi\theta^-_\zeta,\\ \mathcal{A}_+{}^C\,\theta^+_\xi\theta^-_\zeta = -i\dfrac{\partial}{\partial \xi_C} \theta^+_\xi\theta^-_\zeta, & \mathcal{A}_-{}^C\,\theta^+_\xi\theta^-_\zeta = i\dfrac{\partial}{\partial \zeta_C} \theta^+_\xi\theta^-_\zeta \end{array} \end{equation} generates the complete basis from a single vacuum vector $\theta^{(2)}_0 := \theta^+_0 \theta^-_0 = 1$, \begin{equation}\label{eq:basis_covar_r2} \def\arraystretch{1.5} \begin{array}{l} \mathcal{B}^\pm{}_{C}\,\theta^{(2)}_0 = 0\,,\qquad \theta^+_\xi \theta^-_\zeta=\exp\left[i\xi_C\,\mathcal{A}_+{}^C - i\zeta_C\,\mathcal{A}_-{}^C\right]\,\theta^{(2)}_0 .
\end{array} \end{equation} The action of a symmetry parameter in \eqref{eq:bilinear_general_oscillators} is \begin{equation} \label{eq:bilinear_general_oscillators_explicit} \J[\eta]{Y_{1,2}}{X} = \int \mathrm{d}^M\xi\mathrm{d}^M\zeta\,c^+\left(\xi\right)c^-\left(\zeta\right)\,\eta\left(-i\partial_{\xi},i\partial_{\zeta};i\xi,-i\zeta\right)\basis{+}{\xi}{Y_1}{X}\basis{-}{\zeta}{Y_2}{X}. \end{equation} It is convenient to introduce the following linear combinations of the covariant oscillators $\mathcal{A},\mathcal{B}$ \begin{equation}\label{eq:covar_essential} \begin{array}{ll} \mathfrak{B}_C = \mathcal{B}^-{}_{C} -\mathcal{B}^+{}_{C}, & \widetilde{\mathfrak{B}}_C = \mathcal{B}^-{}_{C} +\mathcal{B}^+{}_{C},\\ \widetilde{\mathfrak{B}}^C = \dfrac{1}{2}\left(\mathcal{A}_-{}^C - \mathcal{A}_+{}^C\right), & \mathfrak{B}^C = \dfrac{1}{2}\left(\mathcal{A}_-{}^C + \mathcal{A}_+{}^C\right)\\ \end{array} \end{equation} with the non-zero commutation relations \begin{equation} \big[ \mathfrak{B}_A,\widetilde{\mathfrak{B}}^B\big] = \delta_A^B,\quad\big[\widetilde{\mathfrak{B}}_A,\mathfrak{B}^B\big] = \delta_A^B. \end{equation} These oscillators are most conveniently represented as differential operators \begin{equation} \def\arraystretch{1.5} \begin{array}{ll} \mathfrak{B}_C = \dfrac{\partial}{\partial U^C}, & \widetilde{\mathfrak{B}}_C = \dfrac{\partial}{\partial V^C},\\ \widetilde{\mathfrak{B}}^C = U^C + iX^{CB}\dfrac{\partial}{\partial V^B}, & \mathfrak{B}^C = V^C + iX^{CB}\dfrac{\partial}{\partial U^B} \end{array} \end{equation} in terms of the variables \cite{twistors} \begin{equation}\label{eq:uv} Y_1 = V - U,\quad Y_2 = V + U. \end{equation} Let us introduce an involutive antiautomorphism $\rho$ of the algebra of covariant oscillators that acts as follows \begin{equation}\label{eq:antiautomorphism} \rho\left(\mathcal{A}_\pm{}^B\right) =\mathcal{A}_{\mp}{}^B,\quad \rho\left(\mathcal{B}^\pm{}_C\right) = -\mathcal{B}^\mp{}_C.
\end{equation} The oscillators $\mathfrak{B},\widetilde{\mathfrak{B}}$ are $\rho$-even and $\rho$-odd, respectively, \begin{equation} \rho\left(\mathfrak{B}\right) =\mathfrak{B},\quad \rho (\widetilde{\mathfrak{B}}) = -\widetilde{\mathfrak{B}}. \end{equation} For practical computations it is convenient to choose a specific ordering prescription for functions of covariant oscillators. We will use the totally symmetric Weyl ordering described by the Weyl star product. In these terms, any two symbols ({\it i.e.,} functions of commuting variables) $f\left(\mathcal{A}_+,\mathcal{A}_-;\mathcal{B}^+,\mathcal{B}^-\right)$ and $g\left(\mathcal{A}_+,\mathcal{A}_-;\mathcal{B}^+,\mathcal{B}^-\right)$ are star-multiplied as follows \begin{equation}\label{eq:starproduct} \left(f * g\right)\left(\mathcal{A};\mathcal{B}\right) = f\left(\mathcal{A};\mathcal{B}\right)\,\exp{\dfrac{1}{2}\sum_{a=+,-} \left(\dfrac{\overleftarrow{\partial}}{\partial \mathcal{B}^a{}_{C}}\dfrac{\overrightarrow{\partial}}{\partial \mathcal{A}_a{}^C} - \dfrac{\overleftarrow{\partial}}{\partial \mathcal{A}_a{}^{C}}\dfrac{\overrightarrow{\partial}}{\partial \mathcal{B}^a{}_{C}}\right)}\; g\left(\mathcal{A};\mathcal{B}\right). \end{equation} In terms of the star product (\ref{eq:starproduct}) the vacuum vector $\theta^{(2)}_0$ obeying $\mathcal{B}^\pm{}_{C}*\theta_0^{(2)} = 0$ is realized as \begin{equation} \theta^{(2)}_0 = \exp\left[-2\sum_{a=+,-}\mathcal{A}_a{}^C \mathcal{B}^a{}_{C} \right]. \end{equation} Symbols of the basis star-product elements $\theta^+_\xi\theta^-_\zeta$ can be generated from the vacuum $\theta^{(2)}_0$ by the left star-multiplication \eqref{eq:starproduct} via \eqref{eq:basis_covar_r2} \begin{multline} \theta^+_\xi \theta^-_\zeta = \exp\left[i\xi_C\,\mathcal{A}_+{}^C - i\zeta_C\,\mathcal{A}_-{}^C\right]*\theta_0^{(2)} = \exp\left[2i\xi_C\,\mathcal{A}_+{}^C - 2i\zeta_C\,\mathcal{A}_-{}^C\right]\cdot\theta_0^{(2)}.
\end{multline} In terms of the star product, symmetry parameters are represented by their symbols $\eta\left(\mathcal{A};\mathcal{B}\right)$. Since the Weyl ordering is totally symmetric, the antiautomorphism $\rho$ \eqref{eq:antiautomorphism} acts on a symbol $f\left(\mathcal{A}_+,\mathcal{A}_-;\mathcal{B}^+,\mathcal{B}^-\right)$ simply as \begin{equation} \rho\left(f\left(\mathcal{A}_+,\mathcal{A}_-;\mathcal{B}^+,\mathcal{B}^-\right)\right) = f\left(\mathcal{A}_-,\mathcal{A}_+;-\mathcal{B}^-,-\mathcal{B}^+\right). \end{equation} Indeed, it is straightforward to check that $\rho$ is an antiautomorphism of the star-product algebra \eqref{eq:starproduct}, {\it i.e.} $\rho\left(f*g\right) = \rho\left(g\right) * \rho\left(f\right)$. \section{Periodic spinor space} To construct periodic solutions it suffices to put \eqref{eq:solution_r1} on a lattice by setting $\xi_A = \dfrac{2\pi}{\ell^{(A)}}n_A$ with $n_A\in\mathbb{Z}$ (or in condensed notation $\xi = \dfrac{2\pi}{\ell}n$ for $n\in\mathbb{Z}^M$) \begin{equation}\label{eq:solution_r1_periodic} \C{\pm}{Y}{X} = \dfrac{\left(2\pi\right)^M}{\ell^{(1)}...\ell^{(M)}}\;\sum_n\, c\left(\dfrac{2\pi}{\ell}n\right)\,\basis{\pm}{2\pi\,n\slash\ell}{Y}{X}\,, \end{equation} \begin{equation}\label{eq:basis_r1_periodic_raw} \basis{\pm}{2\pi\,n\slash\ell}{Y}{X} = \exp\left[\pm i\left(4\pi ^2\,n_A n_B\,\dfrac{X^{AB}}{\ell^{(A)}\ell^{(B)}} + 2\pi\,n_A\,\dfrac{Y^A}{\ell^{(A)}} \right)\right]. \end{equation} Such solutions are $\ell^{(A)}$-periodic in the $Y^A$-variables. The non-compact limit corresponds to $\ell^{(A)}\to\infty$. It is convenient to use rescaled variables and change notations as follows \begin{equation}\label{eq:rescaled} Y^{\prime A} := \dfrac{2\pi}{\ell^{(A)}}\,Y^A,\; X^{\prime AB} := \dfrac{4\pi^2}{\ell^{(A)}\ell^{(B)}}\,X^{AB}\,. \end{equation} Since this is equivalent to setting \begin{equation}\label{lpi} \ell^{(A)}=2\pi, \end{equation} in the sequel we will not distinguish between primed and unprimed variables.
The dependence on $\ell^{(A)}$ can be easily reconstructed in the very end if necessary. In terms of the rescaled variables, the basis vectors \eqref{eq:basis_r1_periodic_raw} are \begin{equation}\label{eq:basis_r1_periodic} \basis{\pm}{n}{Y}{X} := \exp\left[\pm i\left(n_A n_B\, X^{AB} + n_B\, Y^B\right)\right]\,, \end{equation} \noindent while any periodic solution \eqref{eq:solution_r1_periodic} can be written as follows \begin{equation}\label{eq:solution_r1_periodic_norm} \C{\pm}{Y}{X} = \sum_{n\in\mathbb{Z}^M} c^\pm_n\,\basis{\pm}{n}{Y}{X}. \end{equation} Basis functions \eqref{eq:basis_r1_periodic} (and hence functions \eqref{eq:solution_r1_periodic_norm}) are $2\pi$-periodic in $Y$-variables, $2\pi$-periodic in $X^{AA}$ and $\pi$-periodic in $X^{AB}$ with $A\neq B$. Hence, unfolded dynamics induces periodicity in $\mathcal{M}_M$ from that in the spinor variables. It also implies that, reintroducing arbitrary radii, periods of the $X$-variables factorize into products of periods of $Y$-variables. Namely, periods of $Y$- and $X$-variables are $\ell^{(A)}$ for $Y^A$, $\frac{\ell^{(A)}\ell^{(A)}}{2\pi}$ for $X^{AA}$ and $\frac{\ell^{(A)}\ell^{(B)}}{4\pi}$ for $X^{AB}$ ($A\neq B$). So, possible periods of the $\frac{M\left(M+1\right)}{2}$-dimensional space $\mathcal{M}_M$, which can be respected by solutions of the rank-one equations \eqref{eq:unfolded_r1}, are parametrized by $M$ numbers. Due to the second relation in \eqref{eq:covar_oscillators_action}, which does not respect periodicity, polynomials of the covariant oscillators $\mathcal{A}_\pm{}^C$ (\ref{eq:covar_oscillators}) do not act properly on \eqref{eq:basis_r1_periodic}. The generators respecting periodicity are \begin{equation}\label{eq:symmetries_Fock_r1_periodic} \def\arraystretch{1.5} \begin{array}{l} \mathcal{B}^\pm{}_{C} \theta^\pm_n = \pm i\, n_C \,\theta^\pm_n,\qquad \exp\left[\pm i\, m_C\,\mathcal{A}_\pm{}^C\right] \theta^\pm_n = \theta^\pm_{n+m} \end{array} \end{equation} for any $m,n\in \mathbb{Z}^M$.
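The stated periodicities of the basis functions \eqref{eq:basis_r1_periodic} are elementary to verify directly. As an illustration only (not part of the derivation), a short numerical sketch for $M=2$ with arbitrarily chosen sample values of $n$, $Y$ and $X$ (the function name is ours):

```python
import cmath
from math import pi

# Numerical sanity check (illustration, M = 2): the basis functions
# theta^{+}_n(Y|X) = exp[i(n_A n_B X^{AB} + n_B Y^B)] are 2*pi-periodic
# in each Y^A and in X^{AA}, and pi-periodic in X^{12} = X^{21}.
def basis_plus(n, Y, X):
    phase = sum(n[a] * n[b] * X[a][b] for a in range(2) for b in range(2))
    phase += sum(n[b] * Y[b] for b in range(2))
    return cmath.exp(1j * phase)

n, Y = (2, -3), (0.3, -1.1)
X = [[0.7, 0.4], [0.4, -0.2]]
base = basis_plus(n, Y, X)

# shift Y^1 by 2*pi
assert abs(basis_plus(n, (Y[0] + 2 * pi, Y[1]), X) - base) < 1e-12
# shift X^{11} by 2*pi
assert abs(basis_plus(n, Y, [[X[0][0] + 2 * pi, X[0][1]], X[1]]) - base) < 1e-12
# shift X^{12} = X^{21} by pi, keeping X symmetric
X_off = [[X[0][0], X[0][1] + pi], [X[1][0] + pi, X[1][1]]]
assert abs(basis_plus(n, Y, X_off) - base) < 1e-12
print("ok")
```

The off-diagonal period is $\pi$ rather than $2\pi$ precisely because the symmetric shift enters the phase twice, as $n_1 n_2\left(X^{12}+X^{21}\right)$.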
As in the non-compact case, basis vectors are generated from a single vacuum vector $\theta_0$ \begin{equation} \def\arraystretch{1.5} \begin{array}{l} \mathcal{B}^\pm{}_{C}\, \theta_0 = 0\,,\qquad \exp\left[\pm i\, n_C\,\mathcal{A}_\pm{}^C\right] \theta_0 = \theta^\pm_{n}. \end{array} \end{equation} Periodicity demands any symmetry transformation to be $2\pi$-periodic in the oscillators $\mathcal{A}$ \begin{equation} \eta = \eta\left(e^{i\,\mathcal{A}_\pm};\mathcal{B}^\pm\right). \end{equation} The parameter $\eta$ can be viewed as a polynomial in $\mathcal{B}^\pm$ and a Laurent polynomial in $e^{i\,\mathcal{A}_\pm}$. The action of a symmetry transformation can be written as \begin{equation} \eta\left(e^{i\,\mathcal{A}_\pm};\mathcal{B}^\pm\right)\,\C{\pm}{Y}{X} = \sum_n\, c^\pm_n\,\eta\left(e^{\pm\frac{\partial}{\partial n}};in\right)\,\basis{\pm}{n}{Y}{X}. \end{equation} For the rank-two equation \eqref{eq:unfolded_r2} the periodic Ansatz is introduced in the same manner. Bilinear currents \eqref{eq:bilinear} are built from positive- and negative-frequency rank-one fields \eqref{eq:solution_r1_periodic_norm} \begin{equation} \J{Y_{1,2}}{X} = \sum_{m,n} c^+_m c^-_n\, \basis{+}{m}{Y_1}{X}\basis{-}{n}{Y_2}{X}. \end{equation} Basis elements $\basis{+}{m}{Y_1}{X}\basis{-}{n}{Y_2}{X}$ are constructed from the vacuum vector $\theta^{(2)}_0$ analogously to \eqref{eq:basis_covar_r2} \begin{equation}\label{eq:basis_covar_r2_periodic} \def\arraystretch{1.5} \begin{array}{l} \mathcal{B}^\pm{}_{C}\,\theta^{(2)}_0 = 0\,,\qquad \exp\left[i m_C\,\mathcal{A}_+{}^C - i n_C\,\mathcal{A}_-{}^C\right]\,\theta^{(2)}_0 = \theta^+_m \theta^-_n.
\end{array} \end{equation} In terms of $Y_1,Y_2$ and $U,V$ \eqref{eq:uv} they have the form \begin{multline}\label{eq:basis_r2_periodic} \basis{+}{m}{Y_1}{X}\basis{-}{n}{Y_2}{X} = \exp\left[i\left(\left(m+n\right)_B\left(m-n\right)_C \,X^{BC} + m_C\, Y_1^C - n_C\, Y_2^C \right)\right] =\\= \exp\left[i\left(\left(m+n\right)_B\left(m-n\right)_C \,X^{BC} - \left(m+n\right)_C\, U^C + \left(m-n\right)_C\, V^C \right)\right]. \end{multline} Periodicity properties of the vector \eqref{eq:basis_r2_periodic} in $X^{AB}$ are the same as those of \eqref{eq:basis_r1_periodic}. Global symmetry transformations respect periodicity in the $Y_1$ and $Y_2$ variables iff they are generated by $e^{i\,\mathcal{A}_a{}^C}$ and $\mathcal{B}^a{}_{C}$, \begin{equation}\label{eq:symmetry_r2_periodic} \eta = \eta\left(e^{\pm i\,\mathcal{A}_{a}};\mathcal{B}^b\right)\,,\qquad a,b = +,-\,. \end{equation} Within the Weyl ordering the periodic star-product symbols of parameters \eqref{eq:symmetry_r2_periodic} admit a Fourier decomposition, \begin{equation}\label{eq:Fourier_decompose} \eta\left(\mathcal{A};\mathcal{B}\right) = \sum_{k,l\in\mathbb{Z}^M}\eta_{kl}\left(\mathcal{B}^+,\mathcal{B}^-\right) \,e^{ik_B\,\mathcal{A}_+{}^B} e^{il_C\,\mathcal{A}_-{}^C}. \end{equation} This gives the following explicit formula for the symmetry action \begin{multline}\label{eq:symmetry_action_periodic} \J[\eta]{Y}{X} := \eta\left(\mathcal{A};\mathcal{B}\right)\,*\,\J{Y}{X} = \sum_{m,n,k,l} c_m^+ c_n^-\,\eta_{kl}\left(im + \dfrac{ik}{2},-in+\dfrac{il}{2}\right)\cdot\theta_{m+k}^+ \theta_{n-l}^-. \end{multline} In terms of the oscillators \eqref{eq:covar_essential}, decomposition \eqref{eq:Fourier_decompose} is \begin{equation}\label{eq:Fourier_decompose_essential_raw} \eta\big(\mathfrak{B},\widetilde{\mathfrak{B}}\big) = \sum_{k,l} \eta_{kl}\big(\mathfrak{B}_C,\widetilde{\mathfrak{B}}_D\big)\, e^{i\left(k+l\right)_A\,\mathfrak{B}^A} e^{-i\left(k-l\right)_B\,\widetilde{\mathfrak{B}}^B}.
\end{equation} For $N_A\in\mathbb{Z}$, let $|N_A|_2 := N_A \bmod 2$; for $N\in\mathbb{Z}^M$, $\left|N\right|_2\in\mathbb{Z}_2{}^M$ is understood component-wise. Then $|k+l|_2 = |k-l|_2$ and hence decomposition \eqref{eq:Fourier_decompose_essential_raw} can be rewritten as follows \begin{equation}\label{eq:Fourier_decompose_essential} \eta\big(\mathfrak{B},\widetilde{\mathfrak{B}}\big) = \sum_{|N|_2 = |\widetilde{N}|_2} \eta_{N,\widetilde{N}}\big(\mathfrak{B}_C,\widetilde{\mathfrak{B}}_D\big)\, e^{iN_A\,\mathfrak{B}^A} e^{i\widetilde{N}_B\,\widetilde{\mathfrak{B}}^B}. \end{equation} $\mathcal{D}$-functions for periodic solutions of \eqref{eq:unfolded_r1} can be introduced analogously to the non-compact case \cite{theta}. In the positive-frequency sector, the $\mathcal{D}$-function \begin{equation}\label{eq:theta_raw} \theta\left(Y\vert X\right) = \dfrac{1}{\left(2\pi\right)^M}\sum_n\,\exp\left[i\left(n_A n_B\,X^{AB} + n_A\,Y^A\right)\right] \end{equation} is a solution to \eqref{eq:unfolded_r1} with the $\delta$-functional initial data on a torus, \begin{equation} \theta\left(Y\vert 0\right) = \delta\left(Y\right),\quad Y^A\in \left[-\pi,\pi\right)\;\text{for $A = 1...M$}. \end{equation} \noindent Along with $Y$-periodicity (and the aforementioned $X$-periodicity) it is quasi-periodic with $X$ being the matrix of quasi-periods \cite{Mumford} \begin{equation} \theta\left(Y^C + 2m_B \,X^{BC}\middle|X\right) = e^{-i\left(m_B m_C\, X^{BC} + m_B\,Y^B\right)}\,\theta\left(Y\middle|X\right). \end{equation} Up to simple redefinitions of the arguments, expression \eqref{eq:theta_raw} represents the Riemann theta-function \cite{Mumford} \begin{equation} \Theta\left(Y\middle| X\right) = \sum_{n\in\mathbb{Z}^M}\exp\left[i\pi\, n_A n_B\,X^{AB} + 2\pi i\,n_A Y^A\right].
\end{equation} The action of the covariant oscillators, $e^{ib\, \mathcal{B}} e^{ia\,\mathcal{A}}\, \theta\left(Y\middle|X\right)$, gives rise to theta-functions with rational characteristics $a_C\in\mathbb{Q}$ and $b^C\in\mathbb{Q}$ ($C=1...M$) (\cite{Mumford}, see also \cite{KZ}) \begin{equation} \Theta_{a,b}\left(Y\middle|X\right) := \sum_n\,\exp\left[i\pi \left(n + a\right)_B\left(n + a\right)_C X^{BC} + 2\pi i\,\left(n + a\right)_B\, \left(Y + b\right)^B\right]. \end{equation} \section{Charges} \subsection{Charge components and integration surfaces} Conserved charges can be represented as integrals of on-shell-closed current differential forms. Current forms are constructed from an arbitrary rank-two field \cite{theta,twistors,ads}, and, in particular, from the bilinear currents (\ref{eq:bilinear_general_oscillators}). In the non-compact case the closed on-shell current $M$-form is \cite{twistors} (see also \cite{theta}) \begin{equation}\label{eq:charge_form} \Omega\left(J_\eta\right) = W^1\wedge ... \wedge W^M\,\left.\J[\eta]{Y_{1,2}\left(U,V\right)}{X}\right|_{U=0}, \end{equation} with $U,V$ \eqref{eq:uv}. Here $W^A$ is the operator-valued 1-form \begin{equation}\label{eq:w} W^A = \mathrm{d} V^A + i\,\mathrm{d} X^{AB}\,\dfrac{\partial}{\partial U^B}. \end{equation} Conserved charges result from integration over an $M$-dimensional surface $\Sigma \subset \mathcal{M}_M\times \mathbb{R}^M$ which is space-like in the $X$-variables \cite{twistors} ($\mathbb{R}^M$ is parametrized by the spinor variables $V^A$), \begin{equation}\label{eq:charge} \mathrm{Q}_\eta = \int_\Sigma \Omega\left(J_\eta\right). \end{equation} The charge \eqref{eq:charge} is independent of local variations of $\Sigma$ since $\mathrm{d}\Omega\left(J_\eta\right) = 0$ by virtue of the current equation. Non-trivial charges correspond to the on-shell de Rham cohomology of the set of forms \eqref{eq:charge_form}.
As presented in \cite{twistors}, in the non-compact case non-zero charges are completely represented by the $\widetilde{\mathfrak{B}}$-independent symmetry parameters $\eta\left(\mathfrak{B}\right)$. In other words, given $\eta\big(\mathfrak{B},\widetilde{\mathfrak{B}}\big)$ there exists $\eta^\prime\left(\mathfrak{B}\right)$ such that $\Omega\left(J_{\eta^\prime}\right) - \Omega\left(J_\eta\right) = \mathrm{d}\omega$. Another set of dual closed current forms $\widetilde{\Omega}\left(J_{\widetilde{\eta}}\right)$ is constructed via the exchange $U\leftrightarrow V$ \cite{twistors}. Nontrivial conserved charges for such current forms are represented by $\widetilde{\eta}\big(\widetilde{\mathfrak{B}}\big)$. Hence the complete set of charges is doubled, giving rise to the $\mathcal{N} = 2$ supersymmetric HS algebra \cite{twistors}. For definiteness, in this paper we mostly focus on the current forms \eqref{eq:charge_form}. The dual set of charges can be considered analogously. For the $Y$-periodic case the situation is somewhat different. Now functions \eqref{eq:solution_r1_periodic_norm} live on a torus $\mathcal{T}_M\times T^M := \left(\mathcal{M}_M\times \mathbb{R}^M\right)\slash L$ where $L\subset \mathcal{M}_M\times \mathbb{R}^M$ is the lattice corresponding to the periods of the rescaled coordinates \eqref{eq:rescaled} \begin{equation}\label{eq:lattice} L = \left\{Y^A = 2\pi\,p,\, X^{AA} = 2\pi\,q,\, \left. X^{AB}\right|_{A\neq B} = \pi\,r\;\text{for}\; p,q,r\in \mathbb{Z}\right\}\,. \end{equation} Integration surfaces $\Sigma$, being $M$-dimensional cycles in $\mathcal{T}_M\times T^M$, may belong to different homotopy classes. These are anticipated to generate different charges. In the periodic case, the question of whether it is possible to eliminate the $\widetilde{\mathfrak{B}}$-dependence from the symmetry parameters \eqref{eq:Fourier_decompose_essential_raw} has to be reconsidered.
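That the rank-two basis elements \eqref{eq:basis_r2_periodic} indeed descend to the quotient by the lattice \eqref{eq:lattice} can be confirmed numerically. A minimal sketch for $M=2$ (illustration only; sample values and the function name are ours):

```python
import cmath
from math import pi

# Illustrative check (M = 2): the rank-two basis elements
# exp[i((m+n)_B (m-n)_C X^{BC} + m_C Y1^C - n_C Y2^C)] are invariant under
# the lattice shifts Y -> Y + 2*pi, X^{AA} -> X^{AA} + 2*pi and
# X^{12} = X^{21} -> X^{12} + pi.
def basis2(m, n, Y1, Y2, X):
    phase = sum((m[a] + n[a]) * (m[b] - n[b]) * X[a][b]
                for a in range(2) for b in range(2))
    phase += sum(m[c] * Y1[c] - n[c] * Y2[c] for c in range(2))
    return cmath.exp(1j * phase)

m, n = (1, -2), (3, 1)
Y1, Y2 = (0.2, 1.3), (-0.5, 0.9)
X = [[0.6, -0.3], [-0.3, 0.1]]
base = basis2(m, n, Y1, Y2, X)

assert abs(basis2(m, n, (Y1[0] + 2 * pi, Y1[1]), Y2, X) - base) < 1e-12
assert abs(basis2(m, n, Y1, Y2, [[X[0][0] + 2 * pi, X[0][1]], X[1]]) - base) < 1e-12
X_off = [[X[0][0], X[0][1] + pi], [X[1][0] + pi, X[1][1]]]
assert abs(basis2(m, n, Y1, Y2, X_off) - base) < 1e-12
print("ok")
```

The off-diagonal shift contributes $2\left(m_1 m_2 - n_1 n_2\right)\pi$ to the phase, always an even multiple of $\pi$, which is why a half-period in $X^{A\neq B}$ suffices.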
The goal is to find the essential part of $\mathfrak{B}$- and $\widetilde{\mathfrak{B}}$-dependence of the symmetry parameters, associated with the current cohomology in the periodic case. For symmetry parameters \eqref{eq:Fourier_decompose_essential_raw} and periodic solutions \eqref{eq:solution_r1_periodic_norm} the current forms \eqref{eq:charge_form} are \begin{multline}\label{eq:charge_form_periodic} \Omega\left( J_\eta\right) = \sum_{m,n,k,l}\left(\mathrm{d}\,V^A + \left(m+n+k-l\right)_C \mathrm{d}\,X^{CA}\right)^{\wedge M} \\ c_m^+ c_n^-\,\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right), i\left(m-n+\frac{k+l}{2}\right)\right)\cdot\\ \cdot \exp\left[i\left(m-n+k+l\right)_B\left(V^B + \left(m+n+k-l\right)_D \,X^{DB}\right)\right]\,, \end{multline} where $\left( W^A\right)^{\wedge M} := W^{1}\wedge ...\wedge W^{M}$. Current form \eqref{eq:charge_form_periodic} is defined on $\mathcal{T}_M\times T^M$ where the twistor-like sector $T^M$ is parametrized by variables $V^A\in\left[0,2\pi\right)$. Integration of \eqref{eq:charge_form_periodic} over a compact surface $\Sigma\subset\mathcal{T}_M\times T^M$ gives \begin{equation}\label{eq:form_integration} \int_\Sigma \Omega\left( J_\eta\right) = \sum_{m,n,k,l} c_m^+ c_n^-\,\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right), i\left(m-n+\frac{k+l}{2}\right)\right)\cdot \cq{\Sigma}{m-n+k+l}{m+n+k-l}, \end{equation} where \begin{equation}\label{eq:charge_components} \cq{\Sigma}{\nu}{\widetilde{\nu}} := \int_{\Sigma} \mathrm{d}w^1\wedge ... \wedge \mathrm{d}w^M \,e^{i\nu_C\,w^C}\,,\qquad w^A := V^A + \widetilde{\nu}_C \,X^{CA} \end{equation} will be referred to as \textit{charge components} and \begin{equation}\label{eq:nunu} \nu_C = \left(m-n+k+l\right)_C,\quad \widetilde{\nu}_C = \left(m+n+k-l\right)_C. \end{equation} Charge components are independent of local variations of $\Sigma$ since the differential form in \eqref{eq:charge_components} is closed. 
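On the simplest compact surface, where the $w$-variables themselves run over an $M$-torus, the integral \eqref{eq:charge_components} reduces to the orthogonality of Fourier modes. A minimal numerical illustration for $M=2$ (assuming $w^A\in\left[0,2\pi\right)$; the function name is ours):

```python
import cmath
from math import pi

# Illustrative evaluation of the charge components on an M = 2 torus in the
# w-variables, w^A in [0, 2*pi): the integral of exp(i nu_A w^A) equals
# (2*pi)^2 for nu = 0 and vanishes for any other integer nu.  The uniform
# Riemann sum below is exact for |nu_A| < N by discrete Fourier orthogonality.
def q_component(nu, N=40):
    h = 2 * pi / N
    total = sum(cmath.exp(1j * (nu[0] * a + nu[1] * b) * h)
                for a in range(N) for b in range(N))
    return total * h * h

assert abs(q_component((0, 0)) - 4 * pi ** 2) < 1e-9
assert abs(q_component((3, -2))) < 1e-9
print("ok")
```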
Integration in \eqref{eq:charge} and \eqref{eq:charge_components} should be performed over \textit{space-like $M$-cycles}. Space-like directions in $\mathcal{M}_M$ are associated with the traceless parts of $X^{AB}$ \cite{Vasiliev:2001dc} \begin{equation}\label{eq:space_time} \sum_{A=1}^M X^{AA} = 0. \end{equation} Consider the following parametrization of $\mathcal{M}_M$ by the variables $t,y_{i,j+1},z_i$ ($i\le j$ and $i,j=1...M-1$): \begin{equation}\label{eq:parametrization_M} \begin{array}{c} X^{11} = z_1,\,X^{22} = -z_1 + z_2,...,X^{M-1,M-1} = - z_{M-2}+z_{M-1},X^{MM} = t - z_{M-1},\\ \quad y_{i,j+1} = X^{i,j+1} = X^{j+1,i}. \end{array} \end{equation} Here $t = \sum_{A=1}^M X^{AA}$ parametrizes time while $y$ and $z$, parametrizing the traceless part of $X^{AB}$, are space coordinates. Note that the transformation \eqref{eq:parametrization_M} is from $\mathrm{SL}\left(\frac{M(M+1)}{2}\middle|\mathbb{Z}\right)$. Therefore it preserves the lattice \eqref{eq:lattice}, acting properly on the torus $\mathcal{T}_M\subset \mathcal{M}_M$. The freedom in the choice of parametrization of space-like directions in \eqref{eq:space_time}, \textit{i.e.,} of the traceless part of $X^{AB}$, is not essential. Different parametrizations resulting from $\mathrm{SL}\left(\frac{M(M+1)}{2}\middle|\mathbb{Z}\right)$ transformations of $X^{AB}$ preserve the lattice and give equivalent sets of conserved charges. Indeed, in this case fundamental cycles corresponding to one parametrization are expressed as integer combinations of those for the other. The same is true for conserved charges being integrals over $M$-dimensional space-like cycles in $\mathcal{T}_M\times T^M$. More generally, parametrizations of $\mathcal{T}_M\times T^M$ resulting from $\mathrm{SL}\left(\frac{M\left(M+1\right)}{2} + M\middle| \mathbb{Z}\right)$ coordinate transformations of $X^{AB},Y^A$ give equivalent charges. For instance, consider the parametrization \eqref{eq:parametrization_M}.
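The lattice-preservation property is easy to confirm in the diagonal sector: the change of variables $\left(z_1,...,z_{M-1},t\right)\to\left(X^{11},...,X^{MM}\right)$ in \eqref{eq:parametrization_M} is realized by an integer matrix of determinant $1$, whose inverse is again integer, while the off-diagonal sector is untouched. A short exact-arithmetic sketch (helper names are ours):

```python
from fractions import Fraction

# Illustrative check: the diagonal sector of the parametrization,
# X^{11} = z_1, X^{AA} = -z_{A-1} + z_A (1 < A < M), X^{MM} = t - z_{M-1},
# is an integer matrix with determinant 1, hence lattice-preserving.
def parametrization_matrix(M):
    """Rows: X^{AA}; columns: (z_1, ..., z_{M-1}, t)."""
    A = [[0] * M for _ in range(M)]
    A[0][0] = 1                               # X^{11} = z_1
    for i in range(1, M - 1):
        A[i][i - 1], A[i][i] = -1, 1          # X^{i+1,i+1} = -z_i + z_{i+1}
    A[M - 1][M - 2], A[M - 1][M - 1] = -1, 1  # X^{MM} = t - z_{M-1}
    return A

def det(A):
    """Exact determinant via Gaussian elimination over Fractions."""
    A = [[Fraction(x) for x in row] for row in A]
    n, d = len(A), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    return d

for M in (2, 3, 4, 5):
    assert det(parametrization_matrix(M)) == 1
print("ok")
```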
Consider fundamental space-like $M$-cycles $\left\{\sigma_\mathsf{a}\right\}$ of $\mathcal{T}_M\times T^M$ with a single winding parametrized by all sets of $M$ pairwise different $y$, $z$ and $V$. A single-winding cycle $\sigma_0$ in the spinor space parametrized by the variables $V$ will be referred to as the \textit{lower cycle}. Other space-like cycles will be called \textit{higher}. Any space-like cycle $\Sigma$ is homotopic to a sum of fundamental cycles \begin{equation}\label{eq:fundamental_cycles} \Sigma = \sum_\mathsf{a} b_\mathsf{a} \sigma_\mathsf{a} \end{equation} with the coefficients $b_\mathsf{a}\in\mathbb{Z}$ representing the number of windings over the respective fundamental cycle. Charge components \eqref{eq:charge_components}, being linear functions on the space of cycles, are determined by their values $\cq{\sigma_\mathsf{a}}{\nu}{\widetilde{\nu}}$ on the fundamental cycles. Using \eqref{eq:parametrization_M} for \eqref{eq:charge_components} and setting $t=0$ one arrives at a sum of degree-$M$ monomials in $\mathrm{d} V,\mathrm{d} y,\mathrm{d} z$, which correspond to integration over space-like fundamental cycles. One can see that for any $\sigma_\mathsf{a}$ \begin{equation}\label{eq:charge_components_calculated} \cq{\sigma_\mathsf{a}}{\nu}{\widetilde{\nu}} \propto p_{\sigma_\mathsf{a}}\left(\widetilde{\nu}\right)\,\delta_{\nu,0} \,,\qquad \delta_{\nu,0} := \delta_{\nu_1,0}\,...\,\delta_{\nu_M,0}\,, \end{equation} where $p_{\sigma_\mathsf{a}}\left(\widetilde{\nu}\right)$ is a homogeneous polynomial in $\widetilde{\nu}_A$. Indeed, formula \eqref{eq:charge_components_calculated} results from the change of variables $w^A$ in \eqref{eq:charge_components} to those among $V,y,z$ \eqref{eq:parametrization_M} that parametrize $\sigma_\mathsf{a}$, with $p_{\sigma_\mathsf{a}}\left(\widetilde{\nu}\right)$ being the Jacobian.
As a result, for any cycle $\sigma_\mathsf{a}$ there is a linear transformation $F_\mathsf{a}\left[\widetilde{\nu}\right]$ of the variables $\nu$ leading to \eqref{eq:charge_components_calculated} in the form \begin{equation}\label{eq:cycles_det} \cq{\sigma_\mathsf{a}}{\nu}{\widetilde{\nu}} \propto \det F_\mathsf{a}\left[\widetilde{\nu}\right] \,\delta_{F_\mathsf{a}\left[\widetilde{\nu}\right]\nu,0}. \end{equation} Generally, different cycles may give the same polynomials. Because charge components \eqref{eq:charge_components} are linear functions of cycles, for any cycle $\Sigma$ (\ref{eq:fundamental_cycles}) \begin{equation}\label{eq:charge_components_cycles} \cq{\Sigma}{\nu}{\widetilde{\nu}} \propto p_{\Sigma}\left(\widetilde{\nu}\right)\,\delta_{\nu,0} \,,\qquad p_\Sigma = \sum_{\mathsf{a}} b_\mathsf{a} p_{\sigma_\mathsf{a}}\,. \end{equation} A useful consequence of \eqref{eq:charge_components_cycles} is the expression for charge components of any cycle in terms of those for the lower fundamental one \begin{equation}\label{eq:to_V} \cq{\Sigma}{\nu}{\widetilde{\nu}} \propto p_{\Sigma}\left(\widetilde{\nu}\right)\,\cq{\sigma_0}{\nu}{\widetilde{\nu}}\,. \end{equation} Note that $p_{\sigma_0}\left(\widetilde{\nu}\right) \propto 1$. \subsection{$M=2$ example} \noindent As an example, consider the $M=2$ case in some detail. An equivalent form of the parametrization \eqref{eq:parametrization_M} for $X^{AB}$ is \begin{equation}\label{eq:parametrization_M2} X = t\begin{pmatrix} 1 & 0\\ 0 & 0\\ \end{pmatrix} + y\begin{pmatrix} 0 & 1\\ 1 & 0\\ \end{pmatrix} + z\begin{pmatrix} 1 & 0\\ 0 & -1\\ \end{pmatrix}.
\end{equation} The volume form in \eqref{eq:charge_components} for $t=0$ is \begin{multline}\label{eq:volume_M2} \mathrm{d} w^1\wedge \mathrm{d} w^2 = \left(\mathrm{d}\, V^1 + \widetilde{\nu}_2\,\mathrm{d}\, y + \widetilde{\nu}_1\,\mathrm{d} \, z\right)\wedge \left(\mathrm{d}\, V^2 + \widetilde{\nu}_1\,\mathrm{d}\,y - \widetilde{\nu}_2\,\mathrm{d}\,z\right) =\\= \mathrm{d}\,V^1\wedge \mathrm{d} \,V^2 + \widetilde{\nu}_1\,\mathrm{d}\,V^1\wedge \mathrm{d}\,y - \widetilde{\nu}_2\,\mathrm{d}\,V^2\wedge \mathrm{d}\,y -\\- \widetilde{\nu}_2\,\mathrm{d}\, V^1\wedge \mathrm{d}\,z - \widetilde{\nu}_1\, \mathrm{d}\,V^2\wedge \mathrm{d}\,z - \left(\widetilde{\nu}_1{}^2 + \widetilde{\nu}_2{}^2\right)\,\mathrm{d}\, y\wedge \mathrm{d} \, z. \end{multline} With the particular parametrization \eqref{eq:parametrization_M2}, fundamental $2$-cycles of $\mathcal{T}_M\times T^M$ are associated with the following pairs of variables: $V^1V^2$ (the lower cycle), $V^1y$, $V^2z$, $V^2y$, $V^1z$ and $yz$. Charge components for $\sigma_0 = V^1V^2$ are \begin{equation} \cq{\sigma_0}{\nu}{\widetilde{\nu}} = \iint_{0}^{2\pi} \mathrm{d} V^1\mathrm{d} V^2\;e^{i\nu_1\,V^1 + i\nu_2\,V^2} = 4\pi^2\,\delta_{\nu_1,0}\delta_{\nu_2,0}\propto \delta_{\nu,0}. \end{equation} Analogous computation for $V^1y$ gives \begin{equation} \cq{V^1y}{\nu}{\widetilde{\nu}} = \widetilde{\nu}_1 \int_0^\pi\mathrm{d} y \int_{0}^{2\pi} \mathrm{d} V^1\;e^{i\nu_1\,V^1 + i\big(\nu_1\widetilde{\nu}_2 + \widetilde{\nu}_1\nu_2\big)\,y} = 2\pi^2\,\widetilde{\nu}_1\,\delta_{\nu_1,0}\delta_{\nu_1\widetilde{\nu}_2 + \nu_2\widetilde{\nu}_1,0}. \end{equation} In agreement with \eqref{eq:charge_components_calculated}, this is equivalent to \begin{equation} \mathrm{q}_{V^1y}^{\left(\nu,\widetilde{\nu}\right)} \propto\widetilde{\nu}_1\,\delta_{\nu,0}\,.
\end{equation} An analogous computation of the full set of charge components for the fundamental cycles gives \begin{equation} \begin{array}{c} \mathrm{q}_{V^1V^2}^{\left(\nu,\widetilde{\nu}\right)} \propto \delta_{\nu,0},\\ \mathrm{q}_{V^1y}^{\left(\nu,\widetilde{\nu}\right)} \propto\widetilde{\nu}_1\,\delta_{\nu,0},\quad \mathrm{q}_{V^2z}^{\left(\nu,\widetilde{\nu}\right)} \propto \widetilde{\nu}_1\,\delta_{\nu,0},\\ \mathrm{q}_{V^2y}^{\left(\nu,\widetilde{\nu}\right)} \propto \widetilde{\nu}_2\,\delta_{\nu,0},\quad \mathrm{q}_{V^1z}^{\left(\nu,\widetilde{\nu}\right)} \propto \widetilde{\nu}_2\,\delta_{\nu,0},\\ \mathrm{q}_{yz}^{\left(\nu,\widetilde{\nu}\right)} \propto\left(\widetilde{\nu}_1{}^2 +\widetilde{\nu}_2{}^2\right)\delta_{\nu,0}. \end{array} \end{equation} The respective polynomials in \eqref{eq:charge_components_cycles} are integer combinations of $1$, $\widetilde{\nu}_1$, $\widetilde{\nu}_2$ and $\widetilde{\nu}_1{}^2 + \widetilde{\nu}_2{}^2$. \subsection{On-shell current cohomology and non-zero charges} \subsubsection{$\widetilde{\mathfrak{B}}_C$-dependence} Here we show that, analogously to the non-compact case \cite{twistors}, the dependence on $\widetilde{\mathfrak{B}}_C$ can be eliminated from the parametrization of non-zero conserved charges. Namely, for a given symmetry parameter $\eta_{kl}\big(\mathfrak{B}_A,\widetilde{\mathfrak{B}}_B\big)$ we introduce another parameter \begin{equation}\label{eq:equivalent_parameter} \eta^\prime_{kl}\big(\mathfrak{B}_A\big) = \eta_{kl}\big(\mathfrak{B}_A,-i\frac{k+l}{2}\big) \end{equation} depending solely on $\mathfrak{B}_A$ such that current forms \eqref{eq:charge_form_periodic} with $\eta_{kl}\big(\mathfrak{B}_A,\widetilde{\mathfrak{B}}_B\big)$ and $\eta^\prime_{kl}\big(\mathfrak{B}_A\big)$ differ by an exact form.
Indeed, using \eqref{eq:charge_form_periodic}, \eqref{eq:charge_components}, \eqref{eq:nunu} and \eqref{eq:equivalent_parameter}, \begin{multline}\label{eq:forms_difference} \Omega\left(J_\eta\right) - \Omega\left(J_{\eta^\prime}\right) = \sum_{m,n,k,l}\left(\mathrm{d} w^A\right)^{\wedge M} c_m^+ c_n^- \exp\left[i\nu_B w^B\right]\\ \Delta\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right), i\left(m-n+\frac{k+l}{2}\right)\right) \,, \end{multline} where \begin{multline} \Delta\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right), i\left(m-n+\frac{k+l}{2}\right)\right)=\\=\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right),i\left(m-n+\frac{k+l}{2}\right)\right) - \eta^\prime_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right)\right) =\\= \int_0^1 \mathrm{d} t\;\dfrac{\mathrm{d}}{\mathrm{d} t} \eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right),-i\frac{k+l}{2} + it\,\nu\right)=\\= i\nu_C\int_0^1 \mathrm{d} t\,\frac{\partial \eta_{kl}}{\partial \widetilde{\mathfrak{B}}_C}\left(-i\left(m+n+\frac{k-l}{2}\right),-i\frac{k+l}{2} + it\,\nu\right). \end{multline} Hence \begin{equation} \Omega\left(J_\eta\right) - \Omega\left(J_{\eta^\prime}\right) = \mathrm{d}\beta, \end{equation} where \begin{multline} \beta \propto \sum_{m,n,k,l} c_m^+ c_n^-\,\varepsilon_{A_1...A_M}\,\mathrm{d} w^{A_1}\wedge ... \wedge \mathrm{d} w^{A_{M-1}}\,e^{i\nu_C\, w^C}\cdot\\ \cdot \int_0^1 \mathrm{d} t\,\frac{\partial \eta_{kl}}{\partial \widetilde{\mathfrak{B}}_{A_M}}\left(-i\left(m+n+\frac{k-l}{2}\right),-i\frac{k+l}{2} + it\,\nu\right). \end{multline} \subsubsection{$\mathfrak{B}^C$- and $\widetilde{\mathfrak{B}}^C$-dependence} Symmetry parameters \eqref{eq:Fourier_decompose_essential} with reduced $\widetilde{\mathfrak{B}}_A$-dependence are \begin{equation}\label{eq:parameters} \eta\big(\mathfrak{B}_C;\mathfrak{B}^A,\widetilde{\mathfrak{B}}^B\big) = \sum_{|N|_2 = |\widetilde{N}|_2} \eta_{N,\widetilde{N}}\left(\mathfrak{B}_C\right)\, e^{iN_A\,\mathfrak{B}^A} e^{i\widetilde{N}_B\,\widetilde{\mathfrak{B}}^B}. 
\end{equation} The non-compact case \cite{twistors} suggests that the oscillators $\mathfrak{B}^A$ play the central role in the parametrization of non-trivial charges. On the other hand, since for a symmetry parameter \eqref{eq:Fourier_decompose_essential} periodicity implies that $|N|_2 = |\widetilde{N}|_2$, the $\widetilde{\mathfrak{B}}^A$-dependence cannot be fully eliminated in the periodic case. Indeed, parameters \eqref{eq:parameters} depending solely on the oscillators $\mathfrak{B}$, \begin{equation}\label{eq:parameters_B} \eta\big(\mathfrak{B}\big) = \sum_{N} \eta_{N}\left(\mathfrak{B}_C\right)\, e^{2iN_A\,\mathfrak{B}^A}, \end{equation} give rise to charges for the lower cycle $\sigma_0$ (\textit{cf.} \eqref{eq:form_integration}, \eqref{eq:Fourier_decompose_essential_raw}-\eqref{eq:Fourier_decompose_essential} and \eqref{eq:to_V}) \begin{equation}\label{eq:charge_B} \mathrm{Q}_{\eta} \propto \sum_{m,n,N} c^+_m c^-_n\,\eta_N\left(-i\left(m+n\right)\right)\,\delta_{m-n+2N,0}, \end{equation} where $c_m^+$ and $c_n^-$ enter $\mathrm{Q}_\eta$ with $\left|m\right|_2 = \left|n\right|_2$. The latter condition implies that parameters of the form \eqref{eq:parameters_B} do not represent the full set of conserved charges. Since the form of charge components \eqref{eq:charge_components_cycles} gives $\delta_{m-n+2N,0}$ in \eqref{eq:charge_B} for any cycle $\Sigma$, this is true in the general case. In the sequel we focus on integration over the lower cycle $\sigma_0$, showing in the next section that this allows us to obtain the full set of non-trivial conserved charges. The $\widetilde{\mathfrak{B}}^A$-dependence can, however, be minimized as follows. For a parameter \eqref{eq:parameters} the conserved charge for $\sigma_0$ is \begin{equation} \mathrm{Q}_\eta \propto\sum_{m,n,|N|_2=|\widetilde{N}|_2} c_m^+ c_n^-\, \eta_{N,\widetilde{N}}\left(-i\left(m+n-\frac{\widetilde{N}}{2}\right)\right)\,\delta_{m-n+N,0}.
\end{equation} \noindent Let $\eta_{N,\widetilde{N}}$ have a definite grading $\Gamma = |N|_2\in\mathbb{Z}_2{}^M$. Since the charge depends only on the following combinations of parameters \begin{equation} \eta_N\left(-ik\right) = \sum_{\widetilde{N}:|\widetilde{N}|_2 = \Gamma} \eta_{N,\widetilde{N}}\left(-ik + i\,\dfrac{\widetilde{N}}{2}\right), \end{equation} it can be represented by any term with $|\widetilde{N}|_2=\Gamma$. The simplest options are \begin{equation}\label{eq:parameter_representative} \eta^{\left(\pm\Gamma\right)} = \sum_{N:|N|_2 = \Gamma} \eta_N\left(\mathfrak{B}_C\right)\,e^{iN_A\,\mathfrak{B}^A}e^{\pm i\Gamma_B\,\widetilde{\mathfrak{B}}^B}\,. \end{equation} The antiautomorphism $\rho$ acts on \eqref{eq:parameter_representative} as follows \begin{equation}\label{eq:rho_action} \rho\left(\eta^{\left(\pm\Gamma\right)}\right) = \eta^{\left(\mp\Gamma\right)}. \end{equation} Since $\rho$ is involutive, parameters $\eta$ can be decomposed into $\rho$-even and $\rho$-odd parts \begin{equation}\label{eq:even_odd} \eta^\pm = \dfrac{1\pm\rho}{2}\,\eta. \end{equation} For the case of $\Gamma = 0$, $\eta^- = 0$. Note that the star product of two parameters \eqref{eq:parameter_representative} is not necessarily of the form \eqref{eq:parameter_representative} because, generally, $\Gamma + \Gamma^\prime\notin\mathbb{Z}_2{}^M$. However, parameters of the form \eqref{eq:parameter_representative} are not required to form an algebra and will only be used for the calculation of charges, which are \begin{equation}\label{eq:charge_gamma} \mathrm{Q}_\eta = \sum_{m,n:|m-n|_2 = \Gamma} c_m^+ c_n^-\, \eta^{\left(\pm\Gamma\right)}_{n-m}\left(-i\left(m+n\right)\right)\,,\qquad \eta^{\left(\pm\Gamma\right)}_N\left(k\right) = \eta_N\left(k \pm \frac{i\Gamma}{2}\right)\,. \end{equation} The grading $\Gamma\in\mathbb{Z}_2{}^M$ can be interpreted as distinguishing between bosonic and fermionic degrees of freedom.
This suggests an extension of the initial periodic spinor Ansatz by allowing anti-periodic (Neveu-Schwarz) conditions. Detailed consideration of this issue is, however, beyond the scope of this paper. Non-trivial dual charges resulting from the substitution $V\leftrightarrow U$ in \eqref{eq:charge_form} are parametrized by \begin{equation}\label{eq:parameter_representative_cong} \widetilde{\eta}^{\left(\widetilde{\Gamma}\right)} = \sum_{\widetilde{N}:|\widetilde{N}|_2 = \widetilde{\Gamma}} \widetilde{\eta}_{\widetilde{N}}\big(\widetilde{\mathfrak{B}}_C\big)\,e^{i\widetilde{N}_A\,\widetilde{\mathfrak{B}}^A}e^{i\widetilde{\Gamma}_B\,\mathfrak{B}^B} \end{equation} having the form \begin{equation}\label{eq:charges_dual} \widetilde{\mathrm{Q}}_{\widetilde{\eta}} = \sum_{m,n:|m-n|_2 = \widetilde{\Gamma}} c_m^+ c_n^-\, \widetilde{\eta}^{\big(\widetilde{\Gamma}\big)}_{m+n}\left(i\left(m-n\right)\right)\,,\qquad \widetilde{\eta}^{\big(\widetilde{\Gamma}\big)}_N\left(k\right) = \widetilde{\eta}_N\left(k + \frac{i\widetilde{\Gamma}}{2}\right). \end{equation} Note that at $\widetilde{\Gamma} = 0$ parameters \eqref{eq:parameter_representative_cong} are $\mathfrak{B}$-independent, giving $\eta^\pm = 0$ whenever $\eta\big(-\widetilde{\mathfrak{B}}\big) = \mp\ \eta\big(\widetilde{\mathfrak{B}}\big)$. \subsection{Non-compact limit} The $\mathbb{Z}_2{}^M$-grading $\Gamma$, accounting for even and odd components $N_A$ in \eqref{eq:parameter_representative}, degenerates in the non-compact limit $\ell\to\infty$. Indeed, in terms of the oscillators \eqref{eq:covar_essential}, the rescaled oscillator $\widetilde{\mathfrak{B}}^C$ is $\dfrac{2\pi}{\ell^{(C)}}\widetilde{\mathfrak{B}}^C$. Hence \begin{equation} e^{i \,\dfrac{2\pi}{\ell^{(A)}}\widetilde{\mathfrak{B}}^A} \xrightarrow{\ell\to\infty} 1. \end{equation} This reproduces the result of \cite{twistors} for the non-compact case that non-trivial charges are parametrized solely by the oscillators $\mathfrak{B}$.
The Fourier components $c_n$ in \eqref{eq:solution_r1_periodic_norm} reproduce their non-compact analogs $c\left(\xi\right)$ in \eqref{eq:solution_r1} with $\xi = \dfrac{2\pi}{\ell}n$ \begin{equation} \dfrac{\left(2\pi\right)^M}{\ell^{(1)} ... \ell^{(M)}}\,\sum_n\,... \xrightarrow{\ell\to\infty} \int \mathrm{d}^M\xi\,...\;. \end{equation} The independence of the non-compact charges of the integration surface \cite{twistors} is also reproduced in the limit $\ell\to\infty$. Indeed, the charge components \eqref{eq:charge_components} for fundamental cycles were shown to be of the form \eqref{eq:cycles_det}. They are different for different integration cycles because $F_{\mathsf{a}}\left[\widetilde{\nu}\right]$ corresponds to a particular fundamental cycle $\sigma_\mathsf{a}$. In the non-compact limit the dependence of the charge components on the integration cycle disappears, since \begin{equation}\label{eq:limit} \dfrac{\left(2\pi\right)^M}{\ell^{(1)}...\ell^{(M)}} \det F\left[\widetilde{\nu}\right]\,\delta_{F\left[\widetilde{\nu}\right]\nu,0}\xrightarrow{\ell\to\infty}\det F\left[\widetilde{\nu}\right]\cdot\delta\left(F\left[\widetilde{\nu}\right]\nu\right) = \delta\left(\nu\right), \end{equation} where $\delta\left(\nu\right)$ on the \textit{r.h.s.} of \eqref{eq:limit} is the Dirac delta-function. \section{Higher-spin symmetry mappings between different cycles} An interesting outcome of the developed techniques is that the charges resulting from any cycle are equivalent to those evaluated on the lower cycle with appropriately modified charge parameters. Indeed, dropping the $\widetilde{\mathfrak{B}}_C$-dependence in \eqref{eq:charge_form_periodic} gives \begin{multline}\label{eq:B_independent} \Omega\left(J_\eta\right) = \sum_{m,n,k,l} \left(\mathrm{d}\,V^A + \left(m+n+k-l\right)_C \mathrm{d}\,X^{CA}\right)^{\wedge M} c_m^+ c_n^-\,\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right)\right)\\ \cdot \exp\left[i\left(m-n+k+l\right)_B\left(V^B + \left(m+n+k-l\right)_D \,X^{DB}\right)\right]. 
\end{multline} The charge resulting from \eqref{eq:B_independent} by integration over any cycle $\Sigma$ equals a charge associated with the lower fundamental cycle $\sigma_0 = V^1...V^M$ with an appropriately modified symmetry parameter. In more detail, let \begin{equation}\label{eq:parameter_P} \eta^\prime_{kl}\left(-i\left(m+n+\dfrac{k-l}{2}\right)\right) = p_\Sigma\left(m+n+k-l\right)\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right)\right). \end{equation} Integration of \eqref{eq:charge_form} with $\eta^\prime$ \eqref{eq:parameter_P} over $\sigma_0$ gives the same charge as for $\eta_{kl}\left(-i\left(m+n+\frac{k-l}{2}\right)\right)$ integrated over $\Sigma$. Indeed, using \eqref{eq:to_V}, integration over $\Sigma$ with parameter $\eta$ gives a factor of $p_\Sigma\left(m+n+k-l\right)$, while on the \textit{r.h.s.} of \eqref{eq:parameter_P} it is included into $\eta^\prime$. In terms of the star product \eqref{eq:starproduct}, relation \eqref{eq:parameter_P} takes the form \begin{equation}\label{eq:switch} \eta^\prime\left(\mathfrak{B}_C;\mathfrak{B}^A,\widetilde{\mathfrak{B}}^B\right) = p_\Sigma\left(i\,\mathfrak{B}_C\right) * \eta\left(\mathfrak{B}_C;\mathfrak{B}^A,\widetilde{\mathfrak{B}}^B\right). \end{equation} This implies that a charge for a higher cycle $\Sigma$ corresponding to some symmetry $\eta$ equals a charge for the lower cycle corresponding to the higher symmetry $p_{\Sigma} * \eta$. Let conserved charges be considered as pairings between cycles and symmetry parameters \begin{equation} \left<\sigma_\mathsf{a},\eta\right> = \int_{\sigma_\mathsf{a}} \Omega\left(J_\eta\right). \end{equation} Consider a transformation $\Xi_\mathsf{a}$ mapping the lower cycle $\sigma_0$ to a higher one $\sigma_\mathsf{a}=\Xi_\mathsf{a}\left(\sigma_0\right) $. By \eqref{eq:switch} \begin{equation}\label{eq:conjugate} \left<\Xi_\mathsf{a}\left(\sigma_0\right),\eta\right> = \left<\sigma_0,p_{\sigma_\mathsf{a}} * \eta\right>\,. 
\end{equation} As a result, the transformation of symmetry parameters \eqref{eq:switch} is conjugate to the transition from the lower cycle to $\Sigma$, represented by an integer combination $\Sigma = \sum_{\mathsf{a}}b_{\mathsf{a}}\,\Xi_\mathsf{a}\left(\sigma_0\right)$ (\textit{cf.} \eqref{eq:fundamental_cycles}). Note that only the specific polynomials $p_{\sigma_{\mathsf{a}}}$ described in \eqref{eq:charge_components_calculated} and their integer combinations generate transformations \eqref{eq:switch} conjugate to mappings of the lower cycle to higher ones. An important outcome is that any conserved charge can be obtained by integration over the spinor space, \textit{i.e.} over the lower cycle $\sigma_0$, for some symmetry parameter $\eta\big(\mathfrak{B}_C;\mathfrak{B}^A,\widetilde{\mathfrak{B}}^B\big)$. This is somewhat analogous to the situation in the non-compact case, where, for a given parameter $\eta$, the conserved charge is independent of the integration cycle. In the periodic case the transition to integration over the spinor space is always possible, with the symmetry parameters transformed according to \eqref{eq:switch}, hence bringing the HS algebra into play. Transformation \eqref{eq:conjugate} relates different geometric structures to algebraic properties of the symmetry transformations, \textit{i.e.} higher integration cycles correspond to higher symmetries which are naturally included into the whole framework. On the other hand, in the customary lower-symmetry framework there is no room for algebraic relations between different integration cycles. An interesting remaining question is to describe the inverse transformation, \textit{i.e.} conditions on the symmetry parameters $\eta$ that allow one to obtain the same charge from a higher cycle $\Sigma$ with some symmetry parameter $\eta^\prime$ such that $ \left<\sigma_0,\eta\right> = \left<\Sigma,\eta^\prime\right>. $ According to \eqref{eq:conjugate} this is possible provided that $\eta = p_{\Sigma} * \eta^\prime$. 
Analysis of this issue is less trivial, demanding a definition of a proper class of (possibly non-polynomial) functions $\eta^\prime$. Its detailed consideration is beyond the scope of this paper. \section{Algebra of charges and symmetries} \subsection{Charges as symmetry operators} Conserved charges correspond to symmetries of the rank-one system \eqref{eq:unfolded_r1} via Noether's theorem. Constructed in terms of rank-two fields, they can be realized as symmetry generators acting on rank-one fields. Upon quantization of the rank-one fields, the Fourier amplitudes $c_n^\pm$ become operators $\hat{\mathfrak{c}}_n^\pm$ with non-trivial commutation relations (see \cite{Vasiliev:2001dc} for details of the quantization procedure and \cite{twistors} for the algebra of charges). Analogously to \cite{theta}, the non-zero commutation relations in the periodic case are \begin{equation} \left[\hat{\mathfrak{c}}_m^-,\hat{\mathfrak{c}}_n^+\right] = \,\delta_{mn}. \end{equation} With the symmetry parameters $\eta^{\left(\Gamma\right)}$ \eqref{eq:parameter_representative}, the quantized conserved charges \eqref{eq:charge_gamma} become operators \begin{equation}\label{eq:charge_operator} \hat{\mathrm{Q}}_{\eta} = \sum_{m,n} \eta^{\left(\Gamma\right)}_{n-m} \left(-i\left(m+n\right)\right)\,\hat{\mathfrak{c}}^+_m\hat{\mathfrak{c}}^-_n\,. 
\end{equation} As in the non-compact case \cite{twistors}, they form a closed algebra with respect to the commutator \begin{equation}\label{eq:charge_algebra} \left[\hat{\mathrm{Q}}_{\eta^\prime},\hat{\mathrm{Q}}_{\eta}\right] = \hat{\mathrm{Q}}_{\left[\eta^\prime,\eta\right]_\star}\,, \end{equation} where $\eta\left(k;v\right) = \sum_N\,\eta_N\left(k\right) e^{iNv}$ ($k_B\in\mathbb{Z}, v^C\in \left[0,2\pi\right)$) are Weyl symbols for the Moyal-like star product \begin{multline}\label{eq:Moyal} \left(f \star g\right)\left(k;v\right) = \sum_{m,n\in\mathbb{Z}^M}\int_0^{2\pi} \dfrac{\mathrm{d}^Mu\,\mathrm{d}^Mw }{\left(2\pi\right)^{2M}} f\left(k+m;v+u\right)g\left(k+n;v+w\right)\exp\left[i\left(m_C w^C - n_C u^C \right)\right], \end{multline} and the corresponding star commutator is $\left[f,g\right]_\star = f \star g - g \star f$, $\left[k_C,e^{iN_Bv^B}\right]_\star = -2N_C\, e^{iN_Bv^B}$. The one-to-one correspondence between symbols of the form \begin{equation}\label{eq:symbol} \eta\left(k;v\right) = \sum_N \eta_N\left(-ik\right)e^{iNv} \end{equation} and charges \eqref{eq:charge_operator} results from the substitution $k\leftrightarrow m+n$ and $N\leftrightarrow n-m$ for the Fourier components $\eta_N\left(-ik\right)$. The charge $\hat{\mathrm{Q}}_{\left[\eta^\prime,\eta\right]_\star}$ in \eqref{eq:charge_algebra} is associated with the symbol $\left[\eta^\prime,\eta\right]_\star\left(k;v\right)$. The dual set of charges for parameters $\eta$ is constructed from the symbols $K\star \eta$, where $K$ is the Klein operator (see e.g. \cite{NonLinHSmanual}) obeying \begin{equation} K\star K = 1,\quad K\star f\left(k;v\right) = f\left(-k;-v\right)\star K. \end{equation} In terms of the star product \eqref{eq:Moyal}, it is represented by the delta-function \begin{equation} K = \left(2\pi\right)^M\delta_{k,0}\,\delta\left(v\right). 
\end{equation} The whole algebra of symmetries is thus $\mathbb{Z}_2$-graded by $K$ and parametrized by symbols of the form $\varepsilon = \eta + K\star\eta^\prime$ with symmetry operators obeying the commutation relations \begin{equation}\label{eq:charge_algebra_full} \left[\hat{\mathrm{Q}}_{\varepsilon^\prime},\hat{\mathrm{Q}}_{\varepsilon}\right] = \hat{\mathrm{Q}}_{\left[\varepsilon^\prime,\varepsilon\right]_\star}. \end{equation} This is a straightforward generalization of the non-compact construction of \cite{twistors}. One can readily see that for parameters $\eta^{\prime\left(\Lambda\right)}$ and $\eta^{\left(\Gamma\right)}$ with gradings $\Lambda$ and $\Gamma$, their product $\eta^{\prime\left(\Lambda\right)}\star\eta^{\left(\Gamma\right)}$ has grading $\Lambda + \Gamma \bmod 2$. The charges act on quantized rank-one fields $\qC{\pm}{Y}{X} = \sum_n\,\hat{\mathfrak{c}}^\pm_n\basis{\pm}{n}{Y}{X}$ via the commutator. For instance, for the symmetry parameters $\mathfrak{B}_C$ and $\eta_N = e^{iN_B\,\mathfrak{B}^B}e^{i|N_C|_2\widetilde{\mathfrak{B}}^C}$, the charges are \begin{equation} \hat{\mathrm{Q}}_{\mathfrak{B}_C} = -2i\sum_n n_C\,\hat{\mathfrak{c}}^+_n\hat{\mathfrak{c}}^-_n,\qquad \hat{\mathrm{Q}}_{\eta_N} = \sum_n \hat{\mathfrak{c}}^+_n\hat{\mathfrak{c}}^-_{n+N} \end{equation} acting on $\qC{+}{Y}{X}$ as follows \begin{equation} \left[\hat{\mathrm{Q}}_{\mathfrak{B}_C},\hat{\mathfrak{C}}^+\right] = -2\sum_n in_C\,\hat{\mathfrak{c}}^+_n\theta^+_n,\quad \left[\hat{\mathrm{Q}}_{\eta_N},\hat{\mathfrak{C}}^+\right] = \sum_n \,\hat{\mathfrak{c}}^+_n\theta^+_{n+N}. \end{equation} This is equivalent to the action \eqref{eq:symmetries_Fock_r1_periodic} of the operators $-2\,\mathcal{B}^+{}_{C}$ and $e^{iN_C\,\mathcal{A}_+{}^C}$ on $\C{+}{Y}{X}$. The computation for $\qC{-}{Y}{X}$ and the operators $-2\,\mathcal{B}^-{}_{C}$ and $-e^{iN_C\,\mathcal{A}_-{}^C}$ is analogous. 
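The closure \eqref{eq:charge_algebra_full} can also be probed in a truncated mode basis. Representing $\hat{\mathrm{Q}}_\eta$ by the matrix $M_{mn} = \eta_{n-m}\left(-i\left(m+n\right)\right)$, the commutator of the oscillator bilinears is the bilinear of the matrix commutator, which must reproduce the star commutator of symbols. The following sketch, for $M=1$ and hypothetical symbol components $h_N(k) = \eta_N\left(-ik\right)$, uses the Fourier-space form of the star product obtained by performing the $u,w$ integrals in \eqref{eq:Moyal}, $(f\star g)_N(k) = \sum_{P+Q=N} f_P(k-Q)\,g_Q(k+P)$:

```python
import numpy as np

# symbols are stored as {N: h_N} with h_N(k) = eta_N(-ik); here M = 1
f = {1: lambda k: k**2, -3: lambda k: 1j * k}    # hypothetical components
g = {2: lambda k: k, 0: lambda k: 3.0}

def star(a, b):
    # Fourier-space star product: (a*b)_N(k) = sum_{P+Q=N} a_P(k-Q) b_Q(k+P)
    out = {}
    for P, aP in a.items():
        for Q, bQ in b.items():
            prev = out.get(P + Q, lambda k: 0.0)
            out[P + Q] = (lambda k, prev=prev, aP=aP, bQ=bQ, P=P, Q=Q:
                          prev(k) + aP(k - Q) * bQ(k + P))
    return out

def charge_matrix(sym, L):
    # M[m, n] = sym_{n-m}(m + n) on the truncated mode range -L..L
    zero = lambda k: 0.0
    idx = range(-L, L + 1)
    return np.array([[sym.get(n - m, zero)(m + n) for n in idx] for m in idx],
                    dtype=complex)

L = 10
fg, gf = star(f, g), star(g, f)
comm = {N: (lambda k, N=N: fg.get(N, lambda q: 0.0)(k)
            - gf.get(N, lambda q: 0.0)(k))
        for N in set(fg) | set(gf)}

Mf, Mg, Mc = charge_matrix(f, L), charge_matrix(g, L), charge_matrix(comm, L)
# away from the truncation boundary the matrix commutator reproduces
# the star commutator of symbols, as in the charge algebra relation
inner = slice(3, -3)
assert np.allclose((Mf @ Mg - Mg @ Mf)[inner, inner], Mc[inner, inner])
```

In this representation the basic relation $\left[k, e^{iNv}\right]_\star = -2N\,e^{iNv}$ follows directly from the Fourier-space rule, and the agreement of the two commutators away from the truncation boundary is exact, not merely approximate.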
This makes the correspondence between conserved charges and symmetries of the rank-one system \eqref{eq:unfolded_r1} manifest. \subsection{Symmetry algebra } Periodicity of the $Y$-variables changes the symmetries of the rank-one system compared to the non-compact case. The residual symmetry algebra, which respects periodicity, is represented by the conserved charge operators \eqref{eq:charge_algebra_full}, as functionals of the symmetry parameters \eqref{eq:symbol}, acting on quantized rank-one fields via the commutator. Symmetries of the rank-one system are thus generated by symbols of symmetry transformations constituted by monomials of the type $K^{r}\star k_{C_1}...k_{C_m}\;e^{ in_B\,v^B}$ ($r=0,1$) which can be packed into generating functions \begin{equation}\label{eq:sin_elements} \mathrm{T}^r_{\left(n,\xi\right)}\left(k;v\right) = K^r\star e^{i\xi\,k + in\,v},\quad \xi^C\in \left[0,2\pi\right), \;n\in\mathbb{Z}^M,\,r=0,1 \end{equation} with the star product \eqref{eq:Moyal}. Polynomials in the $k$'s can be obtained via differentiation with respect to $\xi^B$ at $\xi = 0$. The set of generating functions \eqref{eq:sin_elements} is closed with respect to the star commutator \begin{equation}\label{eq:sin_algebra} \left[\mathrm{T}^q_{\left(m,\xi\right)},\mathrm{T}^r_{\left(n,\zeta\right)}\right]_\star = \mathrm{T}^{\left|q+r\right|_2}_{\left(\left(-\right)^r m+n,\left(-\right)^r \xi+\zeta\right)}e^{i\left(-\right)^r \left(m_C\zeta^C - n_C\xi^C\right)} - \mathrm{T}^{\left|q+r\right|_2}_{\left(m+\left(-\right)^q n,\xi+\left(-\right)^q\zeta\right)}e^{-i\left(-\right)^q \left(m_C\zeta^C - n_C\xi^C\right)}. 
\end{equation} The generators with $q=r=0$ form a subalgebra obeying \begin{equation}\label{eq:sin_algebra_even} \left[\mathrm{T}^0_{\left(m,\xi\right)},\mathrm{T}^0_{\left(n,\zeta\right)}\right]_\star = 2i\,\sin\left(m_C\zeta^C - n_C\xi^C\right)\, \mathrm{T}^0_{\left(m+n,\xi+\zeta\right)}, \end{equation} which is analogous to the sine algebra introduced in \cite{Fairley}, where its oscillator representation was also presented. The difference is that half of the indices in \eqref{eq:sin_elements} are continuous, while for the sine algebra all of them are discrete. Relations \eqref{eq:sin_algebra} obey the Jacobi identity and hence the elements \eqref{eq:sin_elements} form a Lie algebra with respect to the star commutator \eqref{eq:sin_algebra}. This infinite-dimensional Lie algebra represents the symmetry of the rank-one system \eqref{eq:unfolded_r1} with periodic variables $Y$. \section{Conclusion} The analysis of conserved charges of the HS equations with periodic twistor-like coordinates $Y^A$ performed in this paper exhibits several interesting features. The charges are represented as integrals of closed current forms in the extended $X^{AB}$, $Y^A$ space. Since periodicity in the $Y$-variables implies periodicity in the $X$-variables, one can consider charges associated with different cycles in the $X^{AB}$, $Y^A$ space. Closed current forms may depend on symmetry parameters $\eta$ parametrizing different charges such as momentum, electric charge, and conformal weight, as well as their HS generalizations. In the non-compact case the most general symmetry parameters $\eta$ depend on the four types of oscillators $\mathcal{A}_{\pm}$ and $\mathcal{B}^{\pm}$ \eqref{eq:covar_oscillators}. In the periodic case, the symmetry parameters depend on $\mathcal{B}^\pm$ and $e^{i\mathcal{A}_\pm}$. Nontrivial charges are represented by the current cohomology, \textit{i.e.} those closed current forms that are not exact. 
In the non-compact case the current cohomology was shown in \cite{twistors} to be represented by the symmetry parameters that depend solely on the oscillators $\mathfrak{B}$ or $\widetilde{\mathfrak{B}}$ \eqref{eq:covar_essential}. In the periodic case the situation is slightly different, with the current cohomology represented by various $\mathbb{Z}_2{}^M$-graded parameters of the form \begin{equation} \eta^{\left(\Gamma\right)} = \sum_{N:|N|_2 = \Gamma} \eta_N\big(\mathfrak{B}_C\big)\,e^{iN_A\,\mathfrak{B}^A}e^{ i\Gamma_B\,\widetilde{\mathfrak{B}}^B},\quad \Gamma\in\mathbb{Z}_2{}^M \end{equation} and \begin{equation} \widetilde{\eta}^{\left(\widetilde{\Gamma}\right)} = \sum_{\widetilde{N}:|\widetilde{N}|_2 = \widetilde{\Gamma}} \widetilde{\eta}_{\widetilde{N}}\big(\widetilde{\mathfrak{B}}_C\big)\,e^{i\widetilde{N}_A\,\widetilde{\mathfrak{B}}^A}e^{i\widetilde{\Gamma}_B\,\mathfrak{B}^B},\quad \widetilde{\Gamma}\in\mathbb{Z}_2{}^M \end{equation} for the dual set of charges. Another peculiarity of the periodic case is that the naive expectation that charges evaluated as integrals over non-equivalent cycles are all different is not quite true. Namely, the complete set of charges can be obtained by integration over the lower fundamental cycle $\sigma_0 = V^1...V^M$ constituted solely by the spinor variables. Other cycles $\Sigma$ for a given symmetry parameter $\eta$ give charges which can also be obtained from the lower cycle with an appropriate higher symmetry $\eta^\prime$ \begin{equation}\label{eq:hs_shift} \eta^\prime = p_\Sigma\left(i\,\mathfrak{B}_C\right)*\eta . \end{equation} This means that HS symmetries act on mutually non-contractible cycles and hence connect them algebraically. Let us stress that there is no room for such a connection unless higher symmetries are present. On the other hand, from this perspective (some of the) HS symmetries acquire a nontrivial geometric meaning as relating nonequivalent cycles. 
An interesting remaining question is whether, for a given parameter $\eta$ of the charge $\langle\sigma_0,\eta\rangle$, it is possible to find $\eta^\prime$ such that $\langle\sigma_0,\eta\rangle = \langle\Sigma,\eta^\prime\rangle$ for a higher cycle $\Sigma$. Expression \eqref{eq:conjugate} gives only a sufficient condition for this to be true. In accordance with Noether's theorem, the quantized conserved charges resulting from the lower fundamental cycle generate the symmetry transformations \eqref{eq:symmetries_Fock_r1_periodic} of quantized rank-one fields via the commutator. The charges are parametrized by elements of the star-product algebra \eqref{eq:Moyal}, which are conveniently packed into the generating functions \eqref{eq:sin_elements} closed under the star commutator as in \eqref{eq:sin_algebra}, whose subalgebra \eqref{eq:sin_algebra_even} resembles the sine algebra introduced in \cite{Fairley}. The Lie algebra \eqref{eq:sin_algebra} represents the full residual global symmetry of the unfolded system \eqref{eq:unfolded_r1} after imposing periodicity conditions on the spinor variables $Y^A$. The results of this paper may have several applications mentioned in the Introduction. The one related to black-hole solutions in HS theory seems the most interesting. We hope to consider this issue in the future. \section*{Acknowledgments} We are grateful to Olga Gelfond for stimulating discussions.